Parallelization of PANDA discrete ordinates code using spatial decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.
2006-07-01
We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems, a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining directions and octants. The implementation of the algorithm is straightforward using blocking MPI point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA.
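As a rough illustration of the diagonal-plane ordered sweep, the sketch below (our own minimal Python, not PANDA code; the rank-grid dimensions and octant encoding are invented) groups the subdomains of a Cartesian topology into wavefronts: all subdomains on the same diagonal plane, counted from the inflow corner of the current octant, can sweep concurrently, and pipelining amounts to feeding the next octant or direction into the schedule as soon as the first plane frees up.

```python
# Wavefront schedule for one sweep octant on a px x py x pz rank grid.
import itertools

def sweep_schedule(px, py, pz, octant=(1, 1, 1)):
    """Group subdomain coordinates into concurrently sweepable planes."""
    sx, sy, sz = octant
    planes = {}
    for i, j, k in itertools.product(range(px), range(py), range(pz)):
        # Diagonal-plane index: distance from the inflow corner.
        d = ((i if sx > 0 else px - 1 - i)
             + (j if sy > 0 else py - 1 - j)
             + (k if sz > 0 else pz - 1 - k))
        planes.setdefault(d, []).append((i, j, k))
    return [planes[d] for d in sorted(planes)]

for step, group in enumerate(sweep_schedule(2, 2, 2, octant=(1, -1, 1))):
    print(step, group)   # each group runs in parallel; groups run in order
```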
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. An enhanced power spectral density (PSD) is then proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. A simulation case and an application case are used to validate the method.
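A compact numpy sketch of the CMIF ingredient underlying FSDD follows, under toy assumptions (synthetic PSD matrices with two planted modes; all shapes and frequencies are invented): an SVD is taken at each spectral line, and peaks of the leading singular value indicate modes.

```python
# CMIF sketch: SVD of the (here synthetic) output PSD matrix at each
# spectral line; peaks of the first singular value locate modes.
import numpy as np

rng = np.random.default_rng(0)
n_out, n_freq = 4, 512
freqs = np.linspace(0, 50, n_freq)
modes = np.array([[1.0, 0.8, 0.3, -0.5],      # planted mode shapes (toy)
                  [0.2, -0.6, 1.0, 0.4]])

psd = np.empty((n_freq, n_out, n_out))
for i, f in enumerate(freqs):
    amp = (np.outer(modes[0], modes[0]) / (1 + (f - 12.0) ** 2)
           + np.outer(modes[1], modes[1]) / (1 + (f - 31.0) ** 2))
    noise = 0.01 * rng.standard_normal((n_out, n_out))
    psd[i] = amp + noise @ noise.T            # keep it symmetric

cmif = np.array([np.linalg.svd(s, compute_uv=False) for s in psd])
print("dominant CMIF peak near",
      round(float(freqs[np.argmax(cmif[:, 0])]), 1), "Hz")
```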
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated to compare these schemes with the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
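The a priori rearrangement of subdomain walls can be illustrated with a one-dimensional toy model (the work-density values are ours, not the paper's): walls are placed at equal quantiles of the cumulative estimated work, so they crowd into the expensive high-resolution region.

```python
# A priori wall rearrangement in 1-D: put subdomain walls at equal
# quantiles of cumulative estimated work. The 5x cost of the
# high-resolution region is an invented work density.
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
density = np.where(x < 0.3, 5.0, 1.0)     # atomistic region costs 5x
work = np.cumsum(density)
work /= work[-1]                          # normalized cumulative work

n_sub = 4
walls = np.interp(np.linspace(0, 1, n_sub + 1), work, x)
print(np.round(walls, 3))                 # walls crowd into the costly region
```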
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest-descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440×10⁶ particles on 65,536 MPI tasks.
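A toy two-dimensional version of the gradient idea follows (all parameters are invented and the paper's cost function and descent details differ): per-site particle counts are smoothed so that a finite-difference steepest-descent displacement of the Voronoi sites is well defined.

```python
# Toy gradient-based Voronoi load balancing: displace sites along the
# descent direction of a smooth imbalance cost. All values are invented.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((2000, 2)) ** 2           # clustered particle cloud
sites = rng.random((4, 2))                 # one Voronoi site per rank

def soft_counts(s, beta=200.0):
    # Smooth stand-in for per-site particle counts so a gradient exists.
    d2 = ((pts[:, None, :] - s[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * (d2 - d2.min(1, keepdims=True)))
    return (w / w.sum(1, keepdims=True)).sum(0)

def cost(s):
    c = soft_counts(s)
    return ((c - c.mean()) ** 2).sum()

step, eps = 0.02, 1e-4
for _ in range(40):                        # displace sites along descent
    g = np.zeros_like(sites)
    for idx in np.ndindex(*sites.shape):   # finite-difference gradient
        trial = sites.copy(); trial[idx] += eps
        g[idx] = (cost(trial) - cost(sites)) / eps
    sites -= step * g / (np.linalg.norm(g) + 1e-12)
print(np.round(soft_counts(sites)))        # per-rank loads even out
```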
An MPI + X implementation of contact global search using Kokkos
Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...
2015-10-05
This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find "nearest neighbors," which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees, and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M-face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
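As a small illustration of the bounding-box step of the global search (tree construction and entity distribution are omitted), the following numpy sketch inflates per-face axis-aligned boxes by a capture tolerance and filters candidates against a probe box; the data and tolerance are invented, and vectorized numpy merely stands in for Kokkos-style data parallelism.

```python
# Bounding-box stage of a global search: inflate per-face AABBs by a
# capture tolerance and keep faces overlapping a probe box.
import numpy as np

rng = np.random.default_rng(2)
tris = rng.random((1000, 3, 3))            # 1000 triangles: 3 verts, xyz
tol = 0.01
lo = tris.min(axis=1) - tol                # inflated box lower corners
hi = tris.max(axis=1) + tol                # inflated box upper corners

probe_lo, probe_hi = np.array([.4, .4, .4]), np.array([.6, .6, .6])
overlap = (lo <= probe_hi).all(axis=1) & (hi >= probe_lo).all(axis=1)
print(int(overlap.sum()), "candidate faces for the probe box")
```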
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel, and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Domain decomposition by the advancing-partition method for parallel unstructured grid generation
NASA Technical Reports Server (NTRS)
Banihashemi, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)
2012-01-01
In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which the front comprises only faces of new cells that intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local/global mappings.
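Two of the bookkeeping steps in the claim, identifying plane-crossing faces and building per-subdomain local/global mappings, can be sketched as follows (toy random surface data; all names are ours, not the patent's).

```python
# Classify surface triangles against a partition plane x = c, then build
# per-side subsets with local numbering and a local/global vertex map.
import numpy as np

rng = np.random.default_rng(3)
verts = rng.random((100, 3))                  # global vertex coordinates
tris = rng.integers(0, 100, (60, 3))          # faces as global vertex ids
c = 0.5

side = np.sign(verts[tris][:, :, 0] - c)      # which side each corner is on
crossing = (side.max(1) > 0) & (side.min(1) < 0)
left = tris[(side <= 0).all(1)]               # wholly on or left of plane
right = tris[(side >= 0).all(1)]

def renumber(sub):
    """Return connectivity in local numbering plus a local->global map."""
    gids = np.unique(sub)                     # local index -> global id
    return np.searchsorted(gids, sub), gids

left_local, left_map = renumber(left)
print(int(crossing.sum()), "plane-crossing faces;",
      left_map.size, "vertices referenced by the left subset")
```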
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with a similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with a high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
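The basis-adaptation ingredient can be sketched for the common Gaussian case (the linear coefficients below are invented): the input Gaussians are rotated so the first new variable carries the dominant linear trend of the local solution, after which a low-dimensional expansion in that variable suffices locally.

```python
# Gaussian basis adaptation sketch: rotate inputs so the first variable
# captures the dominant linear trend. The coefficient vector g is toy data.
import numpy as np

g = np.array([0.9, 0.1, 0.05, 0.02, 0.01])   # linear PC coefficients (toy)
a1 = g / np.linalg.norm(g)

# Complete a1 to an orthonormal basis via QR (first column spans a1).
M = np.column_stack([a1, np.eye(g.size)[:, 1:]])
Q, _ = np.linalg.qr(M)
Q[:, 0] *= np.sign(Q[:, 0] @ a1)              # fix the orientation

xi = np.random.default_rng(4).standard_normal((10000, g.size))
eta = xi @ Q                                  # rotated Gaussian inputs
# A low-order expansion in eta[:, 0] now carries the linear variance.
print("corr(g.xi, eta_1) =",
      round(float(np.corrcoef(xi @ g, eta[:, 0])[0, 1]), 4))
```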
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
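A minimal numpy sketch of the FDTSD idea follows, with a stand-in single-frequency kernel in place of the fast nearfield method (waveform, threshold, and kernel are all toy choices): transform the surface velocity, clip negligible spectral terms, process each retained frequency separately, and superpose.

```python
# FDTSD sketch: FFT the surface velocity, clip negligible terms, solve
# one (stand-in) single-frequency problem per retained term, superpose.
import numpy as np

fs, n = 100e6, 256                       # sample rate, record length (toy)
t = np.arange(n) / fs
v = (np.exp(-((t - 1e-6) ** 2) / (2 * (0.2e-6) ** 2))
     * np.sin(2 * np.pi * 2e6 * t))      # arbitrary surface velocity

V = np.fft.rfft(v)
keep = np.abs(V) > 0.01 * np.abs(V).max()     # spectral clipping
print(int(keep.sum()), "of", V.size, "frequency terms retained")

def field_at(r, f):
    """Stand-in for a per-frequency nearfield solution at range r (m)."""
    k = 2 * np.pi * f / 1500.0                # wavenumber in water
    return np.exp(-1j * k * r) / max(r, 1e-6)

freqs = np.fft.rfftfreq(n, 1 / fs)
kernel = np.array([field_at(0.03, f) for f in freqs])
p = np.fft.irfft(np.where(keep, V * kernel, 0), n)   # transient pressure
print("peak |p|:", round(float(np.abs(p).max()), 4))
```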
Mechanical and Assembly Units of Viral Capsids Identified via Quasi-Rigid Domain Decomposition
Polles, Guido; Indelicato, Giuliana; Potestio, Raffaello; Cermelli, Paolo; Twarock, Reidun; Micheletti, Cristian
2013-01-01
Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available. PMID:24244139
Planning paths through a spatial hierarchy - Eliminating stair-stepping effects
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1989-01-01
Stair-stepping effects result from the loss of spatial continuity caused by decomposing space into a grid. This paper presents a path planning algorithm which eliminates the stair-stepping effects induced by a grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally, thereby accounting for unexpected events in the operating space.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
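The time-splitting ingredient can be illustrated with a 1-D toy (first-order Lie splitting with implicit Euler sub-steps, rather than the paper's multiterm fractional-step DIRK schemes): the operator is split into two terms that are advanced by uncoupled implicit solves.

```python
# 1-D toy of operator splitting: advance u' = (A1 + A2) u with uncoupled
# implicit Euler sub-steps (Lie splitting). Sizes and dt are invented.
import numpy as np

n, dt, steps = 50, 1e-3, 200
dx = 1.0 / n
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2   # 1-D Laplacian stencil

# Row-wise partition of the operator into two "subdomain" terms.
mask = np.zeros(n); mask[: n // 2] = 1.0
A1, A2 = A * mask[:, None], A * (1 - mask)[:, None]

u = np.sin(np.pi * np.linspace(0, 1, n))
I = np.eye(n)
for _ in range(steps):
    u = np.linalg.solve(I - dt * A1, u)   # implicit sub-step for term 1
    u = np.linalg.solve(I - dt * A2, u)   # implicit sub-step for term 2
print("peak amplitude after diffusing:", round(float(u.max()), 4))
```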
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth. For FDTD simulation, the smallest time steps must be finer than that required to represent the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that the use of a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, and the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity. This goal would not be obtainable if one were to adopt a pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps must be finer than that required to represent the highest frequency, while the number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition, which reduces the size of the sensitivity matrix substantially through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10⁵ cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
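Cascade decimation itself is easy to sketch (the filter and factor choices below are toy values): each level low-pass filters and downsamples by two, so lower frequency bands are represented with far fewer samples, which is what shrinks the time-domain systems.

```python
# Cascade decimation sketch: low-pass filter and downsample by 2 per
# level. The 4-point average filter and 6 levels are toy choices.
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(2 ** 14)          # stand-in time series

levels, cur = [], x
for _ in range(6):
    smooth = np.convolve(cur, np.ones(4) / 4, mode="same")  # anti-alias
    cur = smooth[::2]                                       # decimate
    levels.append(cur)

print("samples per level:", [len(v) for v in levels])
```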
Spatially coupled catalytic ignition of CO oxidation on Pt: mesoscopic versus nano-scale
Spiel, C.; Vogel, D.; Schlögl, R.; Rupprechter, G.; Suchorski, Y.
2015-01-01
Spatial coupling during catalytic ignition of CO oxidation on μm-sized Pt(hkl) domains of a polycrystalline Pt foil has been studied in situ by PEEM (photoemission electron microscopy) in the 10⁻⁵ mbar pressure range. The same reaction has been examined under similar conditions by FIM (field ion microscopy) on nm-sized Pt(hkl) facets of a Pt nanotip. Proper orthogonal decomposition (POD) of the digitized FIM images has been employed to analyze the spatiotemporal dynamics of catalytic ignition. The results show the essential role of the sample size and of the morphology of the domain (facet) boundary in the spatial coupling in CO oxidation. PMID:26021411
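POD of an image sequence reduces to an SVD of the mean-subtracted snapshot matrix; the sketch below applies it to synthetic frames (the FIM data themselves are not reproduced here).

```python
# POD of an image stack = SVD of the mean-subtracted snapshot matrix.
# Frames are synthetic; with FIM video each frame is a digitized image.
import numpy as np

rng = np.random.default_rng(6)
ny, nx, nt = 32, 32, 200
base = np.outer(np.hanning(ny), np.hanning(nx))
frames = np.array([base * (1 + 0.3 * np.sin(0.2 * t))
                   + 0.05 * rng.standard_normal((ny, nx))
                   for t in range(nt)])

X = frames.reshape(nt, -1)
X = X - X.mean(axis=0)                        # subtract the mean image
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / (s**2).sum()
print("variance captured by mode 1:", round(float(energy[0]), 3))
# Vt[0].reshape(ny, nx) is the dominant spatial mode; U[:, 0] its dynamics.
```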
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical, and differential attacks. Meanwhile, the proposed algorithm has the desirable encryption efficiency to satisfy requirements in practice.
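A rough sketch of the hybrid pipeline is given below, assuming the PyWavelets package (pywt) is available for the DWT/IDWT steps; the PWLCM parameters, keys, and logistic-map diffusion are toy stand-ins, not the paper's exact construction.

```python
# Hybrid-domain encryption sketch: permute the LL subband with a
# PWLCM-driven ordering, reconstruct, then XOR-diffuse in the spatial
# domain with a logistic-map keystream. Keys and maps are toy choices.
import numpy as np
import pywt  # PyWavelets

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")

def pwlcm(x, p, n):
    """Piecewise linear chaotic map iterates (toy key x, parameter p)."""
    seq = np.empty(n)
    for i in range(n):
        y = x if x < 0.5 else 1.0 - x
        x = y / p if y < p else (y - p) / (0.5 - p)
        seq[i] = x
    return seq

perm = np.argsort(pwlcm(0.37, 0.29, cA.size))      # chaotic "sorting"
cA_s = cA.flatten()[perm].reshape(cA.shape)        # permuted LL band
mixed = pywt.idwt2((cA_s, (cH, cV, cD)), "haar")
mixed = np.clip(np.rint(mixed), 0, 255).astype(np.uint8)

x, ks = 0.61, np.empty(img.size, dtype=np.uint8)   # logistic keystream
for i in range(img.size):
    x = 3.99 * x * (1 - x)
    ks[i] = int(x * 256) & 0xFF
cipher = mixed.ravel() ^ ks                        # spatial-domain diffusion
print(cipher[:8])
```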
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with a similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error than direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
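A tiny 1-D analogue of the formulation can be written down directly (sizes, basis matrix, noise levels, and the regularization weight are all toy values; the edge-predetection step is omitted): minimize a least-square term weighted by the inverse variance-covariance of the decomposed values plus a smoothness penalty.

```python
# 1-D toy of the weighted least-square decomposition with smoothness
# regularization; all values are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])                    # toy 2-energy material basis
n = 64
m_true = np.stack([np.linspace(0, 1, n), 0.5 * np.ones(n)], axis=1)
noise_cov = np.array([[4e-4, 1e-4],
                      [1e-4, 4e-4]])          # measurement noise covariance
y = m_true @ A.T + rng.multivariate_normal([0, 0], noise_cov, size=n)

Ainv = np.linalg.inv(A)
x_dir = y @ Ainv.T                            # direct inversion: noisy
W = np.linalg.inv(Ainv @ noise_cov @ Ainv.T)  # inverse covariance of x_dir

D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)    # neighbour differences
lam = 2e3
H = np.kron(np.eye(n), W) + lam * np.kron(D.T @ D, np.eye(2))
x = np.linalg.solve(H, (x_dir @ W).ravel()).reshape(n, 2)

print("mean abs error, direct:     ", round(float(np.abs(x_dir - m_true).mean()), 4))
print("mean abs error, regularized:", round(float(np.abs(x - m_true).mean()), 4))
```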
Vincenti, H.; Vay, J. -L.
2015-11-22
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications occur, for instance, when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with a domain decomposition technique that may, under some conditions, involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. They also confirm that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
NASA Astrophysics Data System (ADS)
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity, and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical methods.
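The decomposition-plus-buffer idea can be sketched with a small octree over (x, y, t) points (capacity and bandwidths are toy values): leaves are split until they hold few enough points, and each leaf then also receives the neighboring points within the kernel bandwidths so edge sums stay exact.

```python
# Octree decomposition of (x, y, t) points with bandwidth buffers.
import numpy as np

rng = np.random.default_rng(8)
pts = rng.random((5000, 3))                   # columns: x, y, t
hs, ht, cap = 0.05, 0.05, 700                 # bandwidths, leaf capacity

def split(lo, hi, idx, leaves):
    if len(idx) <= cap:
        leaves.append((lo, hi, idx)); return
    mid = (lo + hi) / 2
    for corner in range(8):                   # recurse into 8 octants
        bits = [(corner >> b) & 1 for b in range(3)]
        nlo = np.where(bits, mid, lo); nhi = np.where(bits, hi, mid)
        sub = idx[np.all((pts[idx] >= nlo) & (pts[idx] < nhi), axis=1)]
        if len(sub):
            split(nlo, nhi, sub, leaves)

leaves = []
split(np.zeros(3), np.ones(3), np.arange(len(pts)), leaves)

band = np.array([hs, hs, ht])                 # buffer by kernel bandwidths
for lo, hi, idx in leaves[:3]:
    buf = np.all((pts >= lo - band) & (pts < hi + band), axis=1)
    print(len(idx), "core points,", int(buf.sum()) - len(idx), "buffer points")
```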
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations, and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
Pseudospectral reverse time migration based on wavefield decomposition
NASA Astrophysics Data System (ADS)
Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang
2017-05-01
The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using an imaging condition based on the wavefield decomposition technique. The computational complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transform along the z-axis may be obtained directly as one of the intermediate results of the spatial derivative calculation, the computational load of the wavefield decomposition can be reduced, improving the computational efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while avoiding spatial numerical dispersion when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
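The accuracy gap that motivates the pseudospectral choice is easy to demonstrate on a toy 1-D periodic model: compare the FFT-based second derivative against a second-order finite difference.

```python
# Pseudospectral (FFT) vs. 2nd-order finite-difference second derivative.
import numpy as np

n, L = 256, 1000.0
dx = L / n
x = np.arange(n) * dx
k = 2 * np.pi * np.fft.fftfreq(n, dx)

f = np.sin(2 * np.pi * 12 * x / L)                # 12 cycles across the model
exact = -(2 * np.pi * 12 / L) ** 2 * f

spec = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(f)))   # pseudospectral
fd = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx ** 2  # finite difference

print("pseudospectral max error:   ", np.abs(spec - exact).max())
print("finite-difference max error:", np.abs(fd - exact).max())
```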
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via domain decomposition. The method is derived and justified via singular perturbation techniques.
Dynamic mode decomposition for plasma diagnostics and validation.
Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
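For reference, the standard exact-DMD recipe the paper builds on can be stated in a few lines of numpy (synthetic snapshots with a stationary mode plus an oscillating pair, loosely mirroring the HIT-SI description; the rank and time step are invented).

```python
# Exact DMD on synthetic snapshots: a stationary mode plus an
# oscillating pair.
import numpy as np

rng = np.random.default_rng(9)
n_space, n_time, r, dt = 128, 60, 3, 0.1
xg = np.linspace(0, 1, n_space)
t = np.arange(n_time) * dt
X = (np.outer(np.sin(np.pi * xg), np.ones(n_time))          # stationary
     + np.outer(np.sin(3 * np.pi * xg), np.cos(4 * t))      # oscillating
     + 0.01 * rng.standard_normal((n_space, n_time)))       # noise

X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
Ur, Sr_inv, Vr = U[:, :r], np.diag(1 / s[:r]), Vt[:r].T
Atil = Ur.T @ X2 @ Vr @ Sr_inv            # reduced linear propagator
eigval, W = np.linalg.eig(Atil)
Phi = X2 @ Vr @ Sr_inv @ W                # exact DMD modes
print("continuous-time eigenvalues:", np.round(np.log(eigval) / dt, 2))
```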
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors is the most annoying but can be concealed by exploiting the signal's spatial correlation, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
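The core comparison can be sketched as follows (the signal model, lags, and FFT band masking are toy stand-ins for the paper's signal-decomposition step): refit the predictor after cancelling one band of the driver and watch the target's prediction error rise.

```python
# Frequency-selective Granger sketch: compare one-step prediction errors
# of y using the full driver x versus a band-cancelled copy of x.
import numpy as np

rng = np.random.default_rng(10)
n = 4096
x = rng.standard_normal(n)
y = np.zeros(n)
for tt in range(2, n):
    # y is driven by lagged x (a high-frequency-weighted filter) + noise.
    y[tt] = (0.5 * y[tt - 1] + 0.8 * x[tt - 1] - 0.7 * x[tt - 2]
             + 0.3 * rng.standard_normal())

def var_error(xs):
    """Least-squares one-step prediction error of y from (y, xs) pasts."""
    Z = np.column_stack([y[1:-1], xs[1:-1], xs[:-2]])
    coef, *_ = np.linalg.lstsq(Z, y[2:], rcond=None)
    return np.mean((y[2:] - Z @ coef) ** 2)

X = np.fft.rfft(x)
mask = np.ones_like(X); mask[len(X) // 2:] = 0   # cancel the upper band
x_cut = np.fft.irfft(X * mask, n)

print("full-band error:       ", round(float(var_error(x)), 3))
print("upper band cancelled:  ", round(float(var_error(x_cut)), 3))  # rises
```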
Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coon, Ethan; Porter, Mark L.; Kang, Qinjun
2012-06-18
In spatially and temporally localized instances, capturing sub-reservoir-scale information is necessary; capturing it everywhere is neither necessary nor computationally possible. The lattice Boltzmann method (LBM) is used for solving pore-scale systems: at the pore scale, LBM provides an extremely scalable, efficient way of solving the Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which the Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.
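A 1-D toy of the overlapping Schwarz handoff follows (both subdomain solvers are plain Jacobi on Laplace's equation, not LBM or Darcy solvers; sizes and iteration counts are invented): each pass solves one subdomain with Dirichlet data taken from the other's latest overlap values.

```python
# Overlapping Schwarz alternation between two 1-D subdomain solvers.
import numpy as np

n = 101
x = np.linspace(0, 1, n)
u = np.zeros(n); u[0], u[-1] = 1.0, 0.0      # boundary pressures

left, right = np.arange(0, 60), np.arange(40, n)   # overlapping pieces

def solve_poisson(sub, u):
    """Jacobi sweeps on one subdomain with frozen end values."""
    v = u[sub].copy()
    for _ in range(2000):
        v[1:-1] = 0.5 * (v[:-2] + v[2:])
    u[sub[1:-1]] = v[1:-1]

for _ in range(30):                          # Schwarz alternation
    solve_poisson(left, u)
    solve_poisson(right, u)
print("max deviation from linear profile:", np.abs(u - (1 - x)).max())
```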
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Complex mode indication function and its applications to spatial domain parameter estimation
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The CMIF is developed by performing a singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix [H(jω)]^H [H(jω)] formed from the FRF matrix at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are used in the CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes, and modal participation vectors. Since the CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method, alongside the discrete ordinates method, is used for the spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different methods of parallelization, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, Hariswaran; Grout, Ray
2015-10-30
The load balancing strategies for hybrid solvers that couple a grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) based parallelization of grid-based solvers uses a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. One technique is to distribute the particles to the MPI ranks that own the grid cells to which they belong, while the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot assure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study on more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies is studied to reduce data communication overheads.
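A serial toy comparison of the two particle-distribution strategies described above makes the locality/balance trade-off explicit; the 1D domain, four ranks, and a particle cloud clustered near one end are all illustrative assumptions:

```python
import numpy as np

# Sketch of the two particle-distribution strategies, emulated serially:
# 'owner' assigns each particle to the rank owning its grid cell (spatial
# locality); 'shared' deals particles out evenly regardless of position.
nranks, domain = 4, 1.0
rng = np.random.default_rng(42)
x = rng.beta(2.0, 8.0, size=100_000) * domain   # particles clustered near x=0

# Strategy 1: owner-rank decomposition (locality, possible imbalance)
owner = np.minimum((x / (domain / nranks)).astype(int), nranks - 1)
owner_counts = np.bincount(owner, minlength=nranks)

# Strategy 2: equal sharing irrespective of position (balance, no locality)
shared_counts = np.bincount(np.arange(x.size) % nranks, minlength=nranks)

print("owner-based counts :", owner_counts)   # imbalanced for clustered cloud
print("equal-share counts :", shared_counts)  # balanced by construction
```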
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
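The penalized weighted least-squares idea at the heart of the framework can be sketched in one dimension; the following is a simplified stand-in, not the authors' algorithm, with synthetic data, a diagonal inverse-variance weight, and a quadratic smoothness penalty in place of the full variance–covariance machinery:

```python
import numpy as np

# Simplified 1D sketch of penalized weighted least-squares noise suppression:
# minimize (x - d)^T W (x - d) + lam * ||D x||^2, where d is a noisy
# decomposed signal, W approximates the inverse noise variance, and D is a
# finite-difference smoothness operator. All data and weights are synthetic.
n = 200
truth = np.where(np.arange(n) < n // 2, 1.0, 3.0)        # piecewise signal
sigma = np.linspace(0.2, 0.6, n)                          # nonuniform noise
d = truth + np.random.default_rng(0).normal(0, sigma)

W = np.diag(1.0 / sigma**2)                               # inverse variance
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)              # first differences
lam = 5.0
x = np.linalg.solve(W + lam * D.T @ D, W @ d)             # normal equations

print("noise std before:", np.std(d - truth))
print("noise std after :", np.std(x - truth))
```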
Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas
NASA Astrophysics Data System (ADS)
Glasser, Alan H.
2008-11-01
Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.
NASA Astrophysics Data System (ADS)
Havu, Vile; Blum, Volker; Scheffler, Matthias
2007-03-01
Numeric atom-centered local orbitals (NAOs) are efficient basis sets for all-electron electronic structure theory. The locality of NAOs can be exploited to render (in principle) all operations of the self-consistency cycle O(N). This is straightforward for 3D integrals using domain decomposition into spatially close subsets of integration points, enabling critical computational savings that are effective from ~tens of atoms (no significant overhead for smaller systems) and make large systems (100s of atoms) computationally feasible. Using a new all-electron NAO-based code,^1 we investigate the quantitative impact of exploiting this locality on two distinct classes of systems: large light-element molecules [alanine-based polypeptide chains (Ala)_n], and compact transition metal clusters. Strict NAO locality is achieved by imposing a cutoff potential with an onset radius r_c, and exploited by appropriately shaped integration domains (subsets of integration points). Conventional tight cutoffs r_c <= 3 Å have no measurable accuracy impact in (Ala)_n, but introduce inaccuracies of 20-30 meV/atom in Cu_n. The domain shape impacts the computational effort by only 10-20% for reasonable r_c. ^1 V. Blum, R. Gehrke, P. Havu, V. Havu, M. Scheffler, The FHI Ab Initio Molecular Simulations (aims) Project, Fritz-Haber-Institut, Berlin (2006).
NASA Astrophysics Data System (ADS)
Wang, Jun-Wei; Liu, Ya-Qiang; Hu, Yan-Yan; Sun, Chang-Yin
2017-12-01
This paper discusses the design problem of distributed H∞ Luenberger-type partial differential equation (PDE) observer for state estimation of a linear unstable parabolic distributed parameter system (DPS) with external disturbance and measurement disturbance. Both pointwise measurement in space and local piecewise uniform measurement in space are considered; that is, sensors are only active at some specified points or applied at part thereof of the spatial domain. The spatial domain is decomposed into multiple subdomains according to the location of the sensors such that only one sensor is located at each subdomain. By using Lyapunov technique, Wirtinger's inequality at each subdomain, and integration by parts, a Lyapunov-based design of Luenberger-type PDE observer is developed such that the resulting estimation error system is exponentially stable with an H∞ performance constraint, and presented in terms of standard linear matrix inequalities (LMIs). For the case of local piecewise uniform measurement in space, the first mean value theorem for integrals is utilised in the observer design development. Moreover, the problem of optimal H∞ observer design is also addressed in the sense of minimising the attenuation level. Numerical simulation results are presented to show the satisfactory performance of the proposed design method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
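The two-level spatial/particle parallelism can be sketched with mpi4py by splitting the world communicator; the two-subdomain layout and round-robin rank assignment below are illustrative assumptions, not MONACO's actual scheme:

```python
from mpi4py import MPI

# Two-level parallelism sketch: ranks are split into spatial subdomains,
# and within each subdomain into particle groups (a toy stand-in for the
# spatial + particle parallelism described above; the layout is assumed).
world = MPI.COMM_WORLD
n_domains = 2                                   # spatial subdomains (assumed)
domain_id = world.rank % n_domains              # subdomain owned by this rank
spatial_comm = world.Split(color=domain_id, key=world.rank)

# Ranks sharing a subdomain split its particles among themselves
particle_id = spatial_comm.rank
print(f"world rank {world.rank}: subdomain {domain_id}, "
      f"particle group {particle_id} of {spatial_comm.size}")

# Particles leaving a subdomain would be exchanged between subdomain owners
# with point-to-point messages; a reduction inside spatial_comm would merge
# tallies from the particle groups of one subdomain.
```

Run with, e.g., mpiexec -n 4 python two_level.py; each rank reports its subdomain and particle group.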
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies ICA decomposition only once to the combined fMRI data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find the independent BOLD effects specific to that subject. The task-related independent BOLD component is then selected from the resulting independent components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, the voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, whereby the task-related BOLD effects can only be accounted for by a single unified BOLD model across multiple subjects. As a result, the newly proposed ICAga method is able to better fit the task-related BOLD effects at the individual level and thus groups more appropriate multisubject BOLD effects in the group analysis.
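The per-subject-then-group workflow can be illustrated with scikit-learn's FastICA on synthetic data; the task regressor, data sizes, and component-selection rule below are assumptions made for the sketch, not the paper's pipeline:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Sketch of the per-subject-ICA-then-group idea: run ICA separately on each
# subject's (time x voxel) data, pick the component whose time course best
# matches the task, then collect those maps across subjects. All data here
# are synthetic; dimensions and the task regressor are assumptions.
rng = np.random.default_rng(0)
task = np.sin(np.linspace(0, 6 * np.pi, 120))             # shared task design
group_maps = []
for subj in range(5):
    spatial = rng.standard_normal(300)                     # subject-specific map
    data = np.outer(task, spatial) + 0.5 * rng.standard_normal((120, 300))
    ica = FastICA(n_components=5, random_state=0)
    sources = ica.fit_transform(data)                      # time courses
    maps = ica.components_                                 # spatial maps
    best = np.argmax(np.abs(np.corrcoef(sources.T, task)[-1, :-1]))
    group_maps.append(maps[best])

# Group inference would then operate on the stacked per-subject maps
print(np.vstack(group_maps).shape)                         # (5, 300)
```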
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G_0-based parallelization will be investigated.
Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers
Wang, Bei; Ethier, Stephane; Tang, William; ...
2017-06-29
The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization, have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
Rank-based decompositions of morphological templates.
Sussner, P; Ritter, G X
2000-01-01
Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
A novel spatial-temporal detection method of dim infrared moving small target
NASA Astrophysics Data System (ADS)
Chen, Zhong; Deng, Tao; Gao, Lei; Zhou, Heng; Luo, Song
2014-09-01
Moving small target detection under complex backgrounds in infrared image sequences is one of the major challenges for modern military Early Warning Systems (EWS) and Long-Range Strike (LRS). However, because of the low SNR and undulating background, infrared moving small target detection has long been a difficult problem. To solve this problem, a novel spatial-temporal detection method based on bi-dimensional empirical mode decomposition (EMD) and time-domain differencing is proposed in this paper. This method is entirely data-driven and does not rely on any transition kernel function, so it has a strong adaptive capacity. Firstly, we generalize the 1D EMD algorithm to the 2D case. In this process, we solve a series of issues in 2D EMD, such as the large amount of data operations, defining and identifying extrema in the 2D case, and boundary erosion of the two-dimensional signal. The EMD algorithm studied in this project is well adapted to the automatic detection of small targets under low SNR and complex backgrounds. Secondly, considering the characteristics of moving targets, we propose an improved filtering method based on the three-frame difference, building on the original time-domain difference filtering, which greatly improves the anti-jamming capability of the algorithm. Finally, we propose a new time-space fusion method that combines 2D EMD with the improved time-domain difference filtering. Experimental results show that this method works well for infrared small moving target detection under low SNR and complex backgrounds.
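The temporal stage, the improved three-frame difference, is easy to sketch in isolation (the 2D EMD spatial stage is omitted); the frame size, target amplitude, and motion below are synthetic assumptions:

```python
import numpy as np

# Three-frame-difference sketch for the temporal stage described above.
# A dim point target drifts across a noisy background; taking the minimum
# of the two absolute frame differences suppresses the "ghost" responses
# a simple two-frame difference would leave at the old and new positions.
rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, size=(3, 64, 64))           # background noise
for t, col in enumerate((20, 22, 24)):                    # moving point target
    frames[t, 32, col] += 6.0

d1 = np.abs(frames[1] - frames[0])                        # |f(k) - f(k-1)|
d2 = np.abs(frames[2] - frames[1])                        # |f(k+1) - f(k)|
detection = np.minimum(d1, d2)                            # suppress ghosting

peak = np.unravel_index(np.argmax(detection), detection.shape)
print("detected target at", peak)                         # expect (32, 22)
```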
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
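The truncated-SVD deconvolution step can be sketched generically; the random ill-conditioned propagation matrix and the truncation threshold below are stand-ins for the actual acoustic transfer matrix and its tuning:

```python
import numpy as np

# Truncated-SVD sketch for the deconvolution/pressure-matching step: solve
# G q = p for source signals q, discarding small singular values that would
# amplify measurement noise. G is a synthetic ill-conditioned stand-in.
rng = np.random.default_rng(0)
U0, _, V0 = np.linalg.svd(rng.standard_normal((16, 8)), full_matrices=False)
G = (U0 * np.logspace(0, -6, 8)) @ V0           # singular values 1 .. 1e-6
p = G @ rng.standard_normal(8) + 1e-4 * rng.standard_normal(16)  # noisy field

U, s, Vt = np.linalg.svd(G, full_matrices=False)
keep = s > 1e-3 * s[0]                          # truncation threshold (assumed)
q = Vt[keep].T @ ((U[:, keep].T @ p) / s[keep]) # regularized pseudo-inverse

print("kept", keep.sum(), "of", s.size, "singular values")
print("residual:", np.linalg.norm(G @ q - p))
```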
A New Domain Decomposition Approach for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach provides robust and grid-independent solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Dai, Xiaoxiao; Gao, David Wenzhong
An approach of big data characterization for smart grids (SGs) and its applications in fault detection, identification, and causal impact analysis is proposed in this paper, which aims to provide substantial data volume reduction while keeping comprehensive information from synchrophasor measurements in spatial and temporal domains. Especially, based on secondary voltage control (SVC) and a local SG observation algorithm, a two-layer dynamic optimal synchrophasor measurement devices selection algorithm (OSMDSA) is proposed to determine SVC zones, their corresponding pilot buses, and the optimal synchrophasor measurement devices. Combining the two-layer dynamic OSMDSA and matching pursuit decomposition, the synchrophasor data is completely characterized in the spatial-temporal domain. To demonstrate the effectiveness of the proposed characterization approach, SG situational awareness is investigated based on hidden Markov model based fault detection and identification using the spatial-temporal characteristics generated from the reduced data. To identify the major impact buses, the weighted Granger causality for SGs is proposed to investigate the causal relationship of buses during system disturbance. The IEEE 39-bus system and IEEE 118-bus system are employed to validate and evaluate the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan®600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," S"IAM Journal of S; cientific and Statistical (’omputing, Vol. 13, 1992, pp...AD-A256 575 NASA Contractor Report 189709 ICASE Report No. 92-44 ICASE DOMAIN DECOMPOSITION: A BRIDGE BETWEEN NATURE AND PARALLEL COMPUTERS DTIC dE...effectively implemented on dis- tributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algo- rithm), a 103,201-unknown 2D elliptic
Optimal domain decomposition strategies
NASA Technical Reports Server (NTRS)
Yoon, Yonghyun; Soni, Bharat K.
1995-01-01
The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.
An analysis of scatter decomposition
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a powerful mapping technique known as scatter decomposition is presented. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces, and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why, and when, scatter decomposition works. The first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, it is shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
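A small numerical experiment illustrates the analysis: under spatially correlated workload, dealing finer pieces out modularly flattens the per-processor loads; the workload model and sizes are illustrative assumptions:

```python
import numpy as np

# Scatter-decomposition sketch: chop a 1D domain with spatially correlated
# workload into many pieces and deal them out modularly. Finer granularity
# should lower the spread of per-processor workloads.
rng = np.random.default_rng(3)
work = np.convolve(rng.standard_normal(4096), np.ones(256) / 256, "same") + 1.0

def processor_loads(work, n_pieces, n_procs=8):
    pieces = np.array_split(work, n_pieces)
    loads = np.zeros(n_procs)
    for i, piece in enumerate(pieces):
        loads[i % n_procs] += piece.sum()      # modular (scatter) assignment
    return loads

for n_pieces in (8, 64, 512):                  # coarse to fine decomposition
    loads = processor_loads(work, n_pieces)
    print(f"{n_pieces:4d} pieces: max/mean load = {loads.max()/loads.mean():.3f}")
```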
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of the heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
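The Fourier-decomposition step can be sketched for a tiny semi-discrete system; the matrices, period, and forcing below are illustrative assumptions rather than the paper's truss model:

```python
import numpy as np

# Sketch of the Fourier-decomposition step: after spatial discretization,
# the semi-discrete thermal system C dT/dt + K T = f(t) with periodic
# forcing is solved one harmonic at a time: (K + i*w_n*C) T_n = f_n.
C = np.diag([2.0, 2.0, 2.0])                          # heat capacity matrix
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])                    # conductance matrix
nt, period = 128, 60.0
t = np.linspace(0.0, period, nt, endpoint=False)
f = np.zeros((3, nt))
f[0] = np.maximum(np.cos(2 * np.pi * t / period), 0.0)  # heating on node 1

f_hat = np.fft.fft(f, axis=1)                         # harmonic amplitudes
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])     # signed harmonic freqs
T_hat = np.zeros_like(f_hat, dtype=complex)
for n in range(nt):                                   # one solve per harmonic
    T_hat[:, n] = np.linalg.solve(K + 1j * w[n] * C, f_hat[:, n])
T = np.fft.ifft(T_hat, axis=1).real                   # periodic steady state

print("node-1 temperature range:", T[0].min(), T[0].max())
```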
Uncertainty propagation in orbital mechanics via tensor decomposition
NASA Astrophysics Data System (ADS)
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
Regularization of nonlinear decomposition of spectral x-ray projection images.
Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise
2017-09-01
Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even for the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10^5 and when the marker concentration was equal to or larger than 0.03 g·cm^-3. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework, in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
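A generic regularized Gauss-Newton iteration of the kind described can be sketched for a single pixel; the two-bin, three-energy polychromatic model, the attenuation values, and the regularization weight below are all illustrative assumptions (the authors' actual Matlab implementation is available online):

```python
import numpy as np

# Sketch of a regularized Gauss-Newton solver for nonlinear material
# decomposition of one pixel: minimize ||y - h(a)||^2 + lam*||a||^2, where
# h models polychromatic attenuation of 2 materials in 2 energy bins.
rng = np.random.default_rng(0)
mu = np.array([[0.30, 0.22, 0.18],                # material 1 at 3 energies
               [0.90, 0.45, 0.25]])               # material 2 at 3 energies
S = np.array([[0.6, 0.4, 0.0],                    # bin 1 spectrum weights
              [0.0, 0.3, 0.7]])                   # bin 2 spectrum weights

def h(a):
    trans = np.exp(-mu.T @ a)                     # transmission per energy
    return -np.log(S @ trans)                     # per-bin log measurement

def jac(a):
    trans = np.exp(-mu.T @ a)
    return (S @ (trans[:, None] * mu.T)) / (S @ trans)[:, None]

a_true = np.array([2.0, 0.5])                     # thicknesses (assumed)
y = h(a_true) + 0.002 * rng.standard_normal(2)

a, lam = np.zeros(2), 1e-4
for _ in range(30):                               # Gauss-Newton iterations
    J, r = jac(a), y - h(a)
    a += np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r - lam * a)

print("estimated thicknesses:", a, "truth:", a_true)
```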
Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques
DOT National Transportation Integrated Search
1995-01-01
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...
Geometrical and topological issues in octree based automatic meshing
NASA Technical Reports Server (NTRS)
Saxena, Mukul; Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
Octree based automatic meshing from CSG models
NASA Technical Reports Server (NTRS)
Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is emphasized. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractors. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
NASA Astrophysics Data System (ADS)
Ikeda, Hayato; Nagaoka, Ryo; Lafond, Maxime; Yoshizawa, Shin; Iwasaki, Ryosuke; Maeda, Moe; Umemura, Shin-ichiro; Saijo, Yoshifumi
2018-07-01
High-intensity focused ultrasound is a noninvasive treatment applied by externally irradiating ultrasound to the body to coagulate the target tissue thermally. Recently, it has been proposed as a noninvasive treatment for vascular occlusion to replace conventional invasive treatments. Cavitation bubbles generated by the focused ultrasound can accelerate the effect of thermal coagulation. However, the tissues surrounding the target may be damaged by cavitation bubbles generated outside the treatment area. Conventional methods based on Doppler analysis only in the time domain are not suitable for monitoring blood flow in the presence of cavitation. In this study, we proposed a novel filtering method based on the differences in spatiotemporal characteristics, to separate tissue, blood flow, and cavitation by employing singular value decomposition. Signals from cavitation and blood flow were extracted automatically using spatial and temporal covariance matrices.
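The singular-value filtering idea can be sketched on a synthetic Casorati matrix (pixels x time); the data, the rank-1 "tissue" model, and the fixed rank cutoff are assumptions, whereas the paper derives the separation from spatiotemporal covariance characteristics:

```python
import numpy as np

# SVD-filtering sketch in the spirit of the method above: stack frames into
# a Casorati matrix (pixels x time), then separate slow bright tissue from
# faster weak flow by zeroing the dominant singular component(s).
rng = np.random.default_rng(0)
npix, nt = 400, 100
t = np.arange(nt)
tissue = 10.0 * np.outer(rng.standard_normal(npix), 1 + 0.01 * np.sin(0.1 * t))
flow = 0.5 * np.outer(rng.standard_normal(npix), np.sin(1.5 * t))
data = tissue + flow + 0.05 * rng.standard_normal((npix, nt))

U, s, Vt = np.linalg.svd(data, full_matrices=False)
keep = np.arange(s.size) >= 1          # drop the dominant (tissue) component
flow_est = (U[:, keep] * s[keep]) @ Vt[keep]

corr = np.corrcoef(flow_est.ravel(), flow.ravel())[0, 1]
print("correlation of recovered flow with truth:", round(corr, 3))
```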
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters, characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
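Counting significant singular values of the weight matrix is straightforward to sketch for a toy geometry; the tiny grid, square-pulse basis, and axis-aligned strip integrals below are illustrative assumptions:

```python
import numpy as np

# Sketch of the diagnostic above: assemble the weight matrix of a toy
# tomographic geometry (square-pulse basis, axis-aligned views only) and
# count significant singular values as a measure of conditioning.
n = 8  # reconstruction grid is n x n, one unknown per pixel

def weight_matrix(angles_deg):
    rows = []
    for ang in angles_deg:
        for r in range(n):                   # one ray per pixel row/column
            img = np.zeros((n, n))
            if ang == 0:
                img[r, :] = 1.0              # horizontal strip integral
            else:
                img[:, r] = 1.0              # vertical strip integral
            rows.append(img.ravel())
    return np.array(rows)

for angles in ([0, 90], [0, 90, 0, 90]):     # adding redundant views
    W = weight_matrix(angles)
    s = np.linalg.svd(W, compute_uv=False)
    signif = int((s > 1e-10 * s[0]).sum())
    print(f"{len(angles)} views: {W.shape[0]} equations, "
          f"{signif} significant singular values")
```

With two orthogonal views the rank saturates at 2n - 1 = 15 for 64 unknowns, and duplicating the views adds equations without adding information, which is exactly the kind of conditioning statement the singular value spectrum exposes.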
Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Jin; Liu, Zilong
2017-12-01
Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. This method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost. The proposed method has low complexity with only a slight decrease in coding performance compared to conventional NTD. We confirm this experimentally: the method requires less processing time and achieves better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.
Stable Scalp EEG Spatiospectral Patterns Across Paradigms Estimated by Group ICA.
Labounek, René; Bridwell, David A; Mareček, Radek; Lamoš, Martin; Mikl, Michal; Slavíček, Tomáš; Bednařík, Petr; Baštinec, Jaromír; Hluštík, Petr; Brázdil, Milan; Jan, Jiří
2018-01-01
Electroencephalography (EEG) oscillations reflect the superposition of different cortical sources with potentially different frequencies. Various blind source separation (BSS) approaches have been developed and implemented in order to decompose these oscillations, and a subset of approaches have been developed for decomposition of multi-subject data. Group independent component analysis (Group ICA) is one such approach, revealing spatiospectral maps at the group level with distinct frequency and spatial characteristics. The reproducibility of these distinct maps across subjects and paradigms is a relatively unexplored domain and is the topic of the present study. To address this, we conducted separate group ICA decompositions of EEG spatiospectral patterns on data collected during three different paradigms/tasks (resting-state, a semantic decision task, and a visual oddball task). K-means clustering analysis of back-reconstructed individual subject maps demonstrates that fourteen different independent spatiospectral maps are present across the different paradigms/tasks, i.e., they are generally stable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W; Niu, T; Xing, L
2015-06-15
Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images. For DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data, by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR, and iterative image-domain material decomposition (Iter-DECT). Results: The results show the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show edge effects, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR-NLM provides an effective way to reduce the generic magnified image noise of dual-energy material decomposition while preserving resolution. This work is supported in part by NIH grants 7R01HL111141 and 1R01-EB016777. This work is also supported by the Natural Science Foundation of China (NSFC Grant No. 81201091), Fundamental Research Funds for the Central Universities in China, and Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain. • Other essential algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is completed, and spatial redecomposition. These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
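A serial emulation of the global particle find can be sketched in a few lines; the 1D decomposition, rank count, and random particle placement are illustrative assumptions:

```python
import numpy as np

# Sketch of a "global particle find": given particle coordinates and a
# 1D Cartesian domain decomposition, compute the rank that owns each
# particle and bucket the misplaced ones for shipment (serial emulation).
nranks, xmax = 8, 100.0
bounds = np.linspace(0.0, xmax, nranks + 1)       # subdomain boundaries

rng = np.random.default_rng(7)
x = rng.uniform(0.0, xmax, size=20)               # particle coordinates
current = rng.integers(0, nranks, size=20)        # ranks holding them now

owner = np.clip(np.searchsorted(bounds, x, side="right") - 1, 0, nranks - 1)
misplaced = current != owner
print(f"{misplaced.sum()} of {x.size} particles must move")
for rank in range(nranks):                        # build send lists per rank
    sel = misplaced & (current == rank)
    if sel.any():
        dests = sorted(set(owner[sel].tolist()))
        print(f"rank {rank} sends {sel.sum()} particle(s) to ranks {dests}")
```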
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. A high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve multigrid problems on the VPP500.
Kannan, R; Ievlev, A V; Laanait, N; Ziatdinov, M A; Vasudevan, R K; Jesse, S; Kalinin, S V
2018-01-01
Many spectral responses in materials science, physics, and chemistry experiments can be characterized as resulting from the superposition of a number of more basic individual spectra. In this context, unmixing is defined as the problem of determining the individual spectra, given measurements of multiple spectra that are spatially resolved across samples, as well as the determination of the corresponding abundance maps indicating the local weighting of each individual spectrum. Matrix factorization is a popular linear unmixing technique that assumes the mixture model between the individual spectra and the spatial maps is linear. Here, we present a tutorial paper targeted at domain scientists to introduce linear unmixing techniques, to facilitate greater understanding of spectroscopic imaging data. We detail a matrix factorization framework that can incorporate different domain information through various parameters of the matrix factorization method. We demonstrate many domain-specific examples to explain the expressivity of the matrix factorization framework and show how the appropriate use of domain-specific constraints such as non-negativity and sum-to-one abundance results in physically meaningful spectral decompositions that are more readily interpretable. Our aim is not only to explain the off-the-shelf available tools, but to add additional constraints when ready-made algorithms are unavailable for the task. All examples use the scalable open source implementation from https://github.com/ramkikannan/nmflibrary that can run from small laptops to supercomputers, creating a user-wide platform for rapid dissemination and adoption across scientific disciplines.
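A toy version of the linear unmixing problem can be run with any off-the-shelf NMF solver; the snippet below uses scikit-learn as a stand-in (the paper's own implementation is the nmflibrary linked above), with random placeholder data and a sum-to-one normalization applied post hoc to illustrate the constraints discussed.

```python
# Linear unmixing X ~ W @ H with non-negativity; X is channels x pixels.
import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(200, 1024)               # placeholder spectra (200 channels)
model = NMF(n_components=3, init='nndsvd', max_iter=500)
W = model.fit_transform(X)                  # (200, 3) endmember spectra
H = model.components_                       # (3, 1024) abundance maps
H = H / H.sum(axis=0, keepdims=True)        # sum-to-one abundances, post hoc
```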
A multilevel preconditioner for domain decomposition boundary systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.
1991-12-11
In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which can be bounded in the case of multilevel spaces on the whole domain, and which grow at most proportionally to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.
NASA Astrophysics Data System (ADS)
Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.
2018-01-01
In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is either to impose an overburden pressure directly on the reservoir, thus treating it as the coupled problem domain, or to model flow on a huge domain with zero-permeability cells to mimic the no-flow boundary condition on the interface between the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress-sensitive reservoirs, whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method, and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersection algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the solution of the classical Mandel problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Local dynamics and spatiotemporal chaos. The Kuramoto- Sivashinsky equation: A case study
NASA Astrophysics Data System (ADS)
Wittenberg, Ralf Werner
The nature of spatiotemporal chaos in extended continuous systems is not yet well understood. In this thesis, a model partial differential equation, the Kuramoto-Sivashinsky (KS) equation u_t + u_{xxxx} + u_{xx} + u u_x = 0 on a large one-dimensional periodic domain, is studied analytically, numerically, and through modeling to obtain a more detailed understanding of the observed spatiotemporally complex dynamics. In particular, with the aid of a wavelet decomposition, the relevant dynamical interactions are shown to be localized in space and scale. Motivated by these results, and by the idea that the attractor on a large domain may be understood via attractors on smaller domains, a spatially localized low-dimensional model for a minimal chaotic box is proposed. A (de)stabilized extension of the KS equation has recently attracted increased interest; for this situation, dissipativity and analyticity are proven, and an explicit shock-like solution is constructed which sheds light on the difficulties in obtaining optimal bounds for the KS equation. For the usual KS equation, the spatiotemporally chaotic state is carefully characterized in real, Fourier and wavelet space. The wavelet decomposition provides good scale separation which isolates the three characteristic regions of the dynamics: large scales of slow Gaussian fluctuations, active scales containing localized interactions of coherent structures, and small scales. Space localization is shown through a comparison of various correlation lengths and a numerical experiment in which different modes are uncoupled to estimate a dynamic interaction length. A detailed picture of the contributions of different scales to the spatiotemporally complex dynamics is obtained via a Galerkin projection of the KS equation onto the wavelet basis, and an extensive series of numerical experiments in which different combinations of wavelet levels are eliminated or forced. These results, and a formalism to derive an effective equation for periodized subsystems externally forced from a larger system, motivate various models for spatially localized forced systems. There is convincing evidence that short periodized systems, internally forced at the largest scales, form a minimal model for the observed extensively chaotic dynamics in larger domains.
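The spatiotemporally chaotic regime described here is easy to reproduce; the following minimal pseudo-spectral integrator (a semi-implicit Euler scheme with illustrative parameters, not the discretization used in the thesis) evolves the KS equation on a periodic domain.

```python
# Pseudo-spectral KS solver: u_t + u_xxxx + u_xx + u u_x = 0, periodic in x.
# Linear terms are treated implicitly in Fourier space, the nonlinear term
# explicitly; L, N and dt are illustrative choices.
import numpy as np

L, N, dt = 100.0, 512, 0.01
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)          # wavenumbers 2*pi*m/L
u = 0.1 * np.cos(2 * np.pi * x / L) * (1 + np.sin(2 * np.pi * x / L))
u_hat = np.fft.rfft(u)
lin = k**2 - k**4                                    # symbol of -(u_xx + u_xxxx)

for _ in range(5000):
    nonlin = -0.5j * k * np.fft.rfft(np.fft.irfft(u_hat, N)**2)  # -(u u_x)^
    u_hat = (u_hat + dt * nonlin) / (1 - dt * lin)

u = np.fft.irfft(u_hat, N)      # spatiotemporally chaotic for large domains L
```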
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
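The serial kernel underlying such a solver is a standard FFT-based fast Poisson solve; a minimal periodic 2D version (unit-square geometry and right-hand side assumed for illustration) reads:

```python
# Solve -lap(u) = f on a periodic unit square by diagonalizing the
# Laplacian in Fourier space.
import numpy as np

N = 256
f = np.random.rand(N, N); f -= f.mean()            # zero-mean right-hand side
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)       # wavenumbers on [0, 1)
k2 = k[:, None]**2 + k[None, :]**2
k2[0, 0] = 1.0                                     # placeholder, mode fixed below
u_hat = np.fft.fft2(f) / k2
u_hat[0, 0] = 0.0                                  # pin the undetermined mean
u = np.real(np.fft.ifft2(u_hat))
```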
ERIC Educational Resources Information Center
Xu, Chang; LeFevre, Jo-Anne
2016-01-01
Are there differential benefits of training sequential number knowledge versus spatial skills for children's numerical and spatial performance? Three- to five-year-old children (N = 84) participated in 1 session of either sequential training (e.g., what comes before and after the number 5?) or non-numerical spatial training (i.e., decomposition of…
Reactor Dosimetry Applications Using RAPTOR-M3G: A New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.
A Parallel Algorithm for Contact in a Finite Element Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Timothy G.
A parallel algorithm is developed for contact/impact of multiple three dimensional bodies undergoing large deformation. As time progresses the relative positions of contact between the multiple bodies changes as collision and sliding occurs. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests are provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed. This algorithm has been implemented as the contact capability of the ALE3D multiphysics code, and is currently in production use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121
2012-09-15
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the inter-view sampling rate varies.
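As a toy illustration of the PWLS idea, consider a single-scale 1D version with a quadratic smoothness penalty solved by gradient descent; the multiscale decomposition, the MRF/soft-thresholding objectives, and the sampling-rate weighting of the actual method are omitted, and all parameters below are assumptions.

```python
# Toy PWLS smoother: minimize sum_i w_i (x_i - y_i)^2 + beta * sum (x_i - x_{i+1})^2.
import numpy as np

def pwls_denoise(y, w, beta=1.0, n_iter=200, step=0.2):
    x = y.copy()
    for _ in range(n_iter):
        grad = 2 * w * (x - y)                          # data-fidelity term
        grad[:-1] += 2 * beta * (x[:-1] - x[1:])        # smoothness penalty
        grad[1:] += 2 * beta * (x[1:] - x[:-1])
        x -= step * grad / (2 * (w.max() + 2 * beta))   # conservative step size
    return x

y = np.sin(np.linspace(0, 6, 500)) + 0.3 * np.random.randn(500)
w = np.ones_like(y)          # statistical weights ~ inverse noise variance
x = pwls_denoise(y, w, beta=5.0)
```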
Multi-Group Maximum Entropy Model for Translational Non-Equilibrium
NASA Technical Reports Server (NTRS)
Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco
2017-01-01
The aim of the current work is to describe a new model for flows in translational non- equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed with a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook1(BGK) model collision operator and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.
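A toy version of the homogeneous BGK relaxation test is easy to write down; the sketch below uses a single velocity group (none of the multi-group maximum entropy machinery) with illustrative grid and relaxation-time choices.

```python
# Relax a 1D velocity distribution toward its local Maxwellian under the
# BGK operator df/dt = (f_M - f) / tau; grid and tau are assumptions.
import numpy as np

v = np.linspace(-5, 5, 200)
f = np.where(np.abs(v - 1.5) < 0.5, 1.0, 0.0)        # non-equilibrium initial f
f /= np.trapz(f, v)
tau, dt = 1.0, 0.01

def maxwellian(v, n, u, T):
    return n / np.sqrt(2 * np.pi * T) * np.exp(-(v - u)**2 / (2 * T))

for _ in range(500):
    n = np.trapz(f, v)                                # conserved moments
    u = np.trapz(v * f, v) / n
    T = np.trapz((v - u)**2 * f, v) / n
    f += dt * (maxwellian(v, n, u, T) - f) / tau      # BGK relaxation step
```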
Detection of small surface defects using DCT based enhancement approach in machine vision systems
NASA Astrophysics Data System (ADS)
He, Fuqiang; Wang, Wen; Chen, Zichen
2005-12-01
Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after a digital image has been transformed to the DCT domain. The reverse cumulative sum algorithm was then proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed in terms of the transition points, and the high-pass filtering operation was implemented. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and defect features were calculated. Experimental results show that the proposed method achieves a small-defect detection rate of 94%.
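The core DCT high-pass step can be sketched as follows; here the cutting-sector radius is hand-picked rather than derived from the reverse cumulative sum transition points, and the image is a random placeholder.

```python
# DCT-domain high-pass filtering to enhance small surface defects.
import numpy as np
from scipy.fft import dctn, idctn

def dct_highpass(img, radius):
    C = dctn(img, norm='ortho')
    u, v = np.ogrid[:img.shape[0], :img.shape[1]]
    C[u**2 + v**2 < radius**2] = 0.0          # zero a low-frequency sector
    return idctn(C, norm='ortho')

img = np.random.rand(256, 256)                # placeholder for a leather image
restored = dct_highpass(img, radius=30)       # defects stand out against texture
# Follow with entropy-based thresholding to segment defect candidates.
```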
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
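The heart of the iterative scheme, a gradient step followed by a rank-r SVD projection, can be sketched generically; the data-consistency gradient is left as a placeholder since the MRF acquisition model is not reproduced here.

```python
# Gradient step + truncated-SVD projection onto rank-r matrices.
# X holds the temporal image series as columns; grad() is a placeholder
# for the data-consistency gradient of the acquisition model.
import numpy as np

def low_rank_project(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lr_iteration(X, grad, step, r, n_iter=50):
    for _ in range(n_iter):
        X = low_rank_project(X - step * grad(X), r)
    return X
```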
NASA Astrophysics Data System (ADS)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier spectrum based fractal descriptors are estimated at specific scales and directions to characterize the image. The support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those directly estimated from the original image.
A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains
NASA Technical Reports Server (NTRS)
Kandil, Osama A.
1998-01-01
Over the past few years, the development of interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometries of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
NASA Astrophysics Data System (ADS)
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models, including BVM, were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms, each containing 17 soft and bony tissues (for a total of 34) of known composition, were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS error under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance, with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation sixfold compared to the other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and the sinogram-domain decomposition methods are insensitive to the size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to the intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
NASA Astrophysics Data System (ADS)
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in soil/structure interaction problems. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
Mapping to Irregular Torus Topologies and Other Techniques for Petascale Biomolecular Simulation
Phillips, James C.; Sun, Yanhua; Jain, Nikhil; Bohm, Eric J.; Kalé, Laxmikant V.
2014-01-01
Currently deployed petascale supercomputers typically use toroidal network topologies in three or more dimensions. While these networks perform well for topology-agnostic codes on a few thousand nodes, leadership machines with 20,000 nodes require topology awareness to avoid network contention for communication-intensive codes. Topology adaptation is complicated by irregular node allocation shapes and holes due to dedicated input/output nodes or hardware failure. In the context of the popular molecular dynamics program NAMD, we present methods for mapping a periodic 3-D grid of fixed-size spatial decomposition domains to 3-D Cray Gemini and 5-D IBM Blue Gene/Q toroidal networks to enable hundred-million atom full machine simulations, and to similarly partition node allocations into compact domains for smaller simulations using multiple-copy algorithms. Additional enabling techniques are discussed and performance is reported for NCSA Blue Waters, ORNL Titan, ANL Mira, TACC Stampede, and NERSC Edison. PMID:25594075
Fei-Hai Yu; Martin Schutz; Deborah S. Page-Dumroese; Bertil O. Krusi; Jakob Schneller; Otto Wildi; Anita C. Risch
2011-01-01
Tussocks of graminoids can induce spatial heterogeneity in soil properties in dry areas with discontinuous vegetation cover, but little is known about the situation in areas with continuous vegetation and no study has tested whether tussocks can induce spatial heterogeneity in litter decomposition. In a subalpine grassland in the Central Alps where vegetation cover is...
Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*
Bank, R.; Falgout, R. D.; Jones, T.; ...
2015-10-29
In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite “grids” based only on the matrix, not the geometry. (As is the usual AMG convention, “grids” here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.
Burger, Stefan; Fraunholz, Thomas; Leirer, Christian; Hoppe, Ronald H W; Wixforth, Achim; Peter, Malte A; Franke, Thomas
2013-06-25
Phase decomposition in lipid membranes has been the subject of numerous investigations by both experiment and theoretical simulation, yet quantitative comparisons of the simulated data to the experimental results are rare. In this work, we present a novel way of comparing the temporal development of liquid-ordered domains obtained from numerically solving the Cahn-Hilliard equation and by inducing a phase transition in giant unilamellar vesicles (GUVs). Quantitative comparison is done by calculating the structure factor of the domain pattern. It turns out that the decomposition takes place in three distinct regimes in both experiment and simulation. These regimes are characterized by different rates of growth of the mean domain diameter, and there is quantitative agreement between experiment and simulation as to the duration of each regime and the absolute rate of growth in each regime.
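A minimal semi-implicit spectral Cahn-Hilliard solver of the kind used for such comparisons fits in a few lines; the parameters and the random near-critical initial condition below are illustrative assumptions.

```python
# Semi-implicit spectral step for c_t = lap(c^3 - c - gamma * lap c)
# on a periodic square; coarsening can then be tracked via the structure
# factor |fft2(c)|^2, as done for the domain patterns in the paper.
import numpy as np

N, dt, gamma = 128, 0.01, 0.5
c = 0.05 * np.random.randn(N, N)                   # near-critical quench
k = 2 * np.pi * np.fft.fftfreq(N)
k2 = k[:, None]**2 + k[None, :]**2

for _ in range(5000):
    mu_hat = np.fft.fft2(c**3 - c)                 # local chemical potential
    c_hat = (np.fft.fft2(c) - dt * k2 * mu_hat) / (1 + dt * gamma * k2**2)
    c = np.real(np.fft.ifft2(c_hat))
```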
Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.
Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino
2017-01-10
In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mburu, Sarah; Kolli, R. Prakash; Perea, Daniel E.
The microstructure and mechanical properties in unaged and thermally aged (at 280 °C, 320 °C, 360 °C, and 400 °C to 4300 h) CF–3 and CF–8 cast duplex stainless steels (CDSS) are investigated. The unaged CF–8 steel has Cr-rich M 23C 6 carbides located at the δ–ferrite/γ–austenite heterophase interfaces that were not observed in the CF–3 steel and this corresponds to a difference in mechanical properties. Both unaged steels exhibit incipient spinodal decomposition into Fe-rich α–domains and Cr-rich α’–domains. During aging, spinodal decomposition progresses and the mean wavelength (MW) and mean amplitude (MA) of the compositional fluctuations increase as a function of aging temperature. Additionally, G–phase precipitates form between the spinodal decomposition domains in CF–3 at 360 °C and 400 °C and in CF–8 at 400 °C. Finally, the microstructural evolution is correlated to changes in mechanical properties.
NASA Astrophysics Data System (ADS)
Godoy, William F.; DesJardin, Paul E.
2010-05-01
The application of flux limiters to the discrete ordinates method (DOM), SN, for radiative transfer calculations is discussed and analyzed for 3D enclosures, for cases in which the intensities are strongly coupled to each other, such as radiative equilibrium and scattering media. A Newton-Krylov iterative method (GMRES) solves the final systems of linear equations, along with a domain decomposition strategy for parallel computation using message passing libraries on a distributed memory system. Ray effects due to angular discretization and errors due to domain decomposition are minimized until only small variations are introduced by these effects, in order to focus on the influence of flux limiters on errors due to spatial discretization, known as numerical diffusion, smearing or false scattering. Results are presented for the DOM-integrated quantities such as heat flux, irradiation and emission. A variety of flux limiters are compared to "exact" solutions available in the literature, such as the integral solution of the RTE for pure absorbing-emitting media and isotropic scattering cases, and a Monte Carlo solution for a forward scattering case. Additionally, a non-homogeneous 3D enclosure is included to extend the use of flux limiters to more practical cases. The overall balance of convergence, accuracy, speed and stability using flux limiters is shown to be superior compared to step schemes for any test case.
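For reference, two standard members of the limiter family examined in such studies are minmod and van Leer, written as functions of the consecutive-slope ratio r:

```python
# Classic flux limiter functions phi(r) used in limited upwind schemes.
import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

# In a limited scheme the face value is reconstructed as
#   I_{i+1/2} = I_i + 0.5 * phi(r_i) * (I_{i+1} - I_i),
#   r_i = (I_i - I_{i-1}) / (I_{i+1} - I_i).
# phi = 0 recovers the diffusive first-order step scheme; phi = 1 the
# second-order scheme, between which the limiter blends.
```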
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially-low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
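The mean-subtraction method can be sketched with PyWavelets standing in for the baseline coder (an assumption; the article does not specify an implementation):

```python
# Mean subtraction on the spatially-low-pass subband of a 3D wavelet
# decomposition; the per-plane means become cheap side information.
import numpy as np
import pywt

cube = np.random.rand(64, 128, 128)             # (spectral, y, x) placeholder
coeffs = pywt.wavedecn(cube, 'db2', level=2)    # multiresolution decomposition
approx = coeffs[0]                              # low-pass approximation subband

plane_means = approx.mean(axis=(1, 2))          # one mean per spectral plane
approx -= plane_means[:, None, None]            # zero-mean planes compress better
# plane_means (a few bits per band) is encoded in the bit stream and added
# back after decompression via pywt.waverecn(coeffs, 'db2').
```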
Tucker-Drob, Elliot M.; Reynolds, Chandra A.; Finkel, Deborah; Pedersen, Nancy L.
2013-01-01
Aging-related declines occur in many different domains of cognitive function during later adulthood. However, whether a global dimension underlies individual differences in changes in different domains of cognition, and whether global genetic influences on cognitive changes exist, is less clear. We addressed these issues by applying multivariate growth curve models to longitudinal data from 857 individuals from the Swedish Adoption/Twin Study of Aging, who had been measured on 11 cognitive variables representative of verbal, spatial, memory, and processing speed abilities up to 5 times over up to 16 years between ages 50 and 96 years. Between ages 50 and 65 years scores on different tests changed relatively independently of one another, and there was little evidence for strong underlying dimensions of change. In contrast, over the period between 65 and 96 years of age, there were strong interrelations among rates of change both within and across domains. During this age period, variability in rates of change were, on average, 52% domain-general, 8% domain-specific, and 39% test specific. Quantitative genetic decomposition indicated that 29% of individual differences in a global domain-general dimension of cognitive changes from 65 to 96 years were attributable to genetic influences, but some domain-specific genetic influences were also evident, even after accounting for domain-general contributions. These findings are consistent with a balanced global and domain-specific account of the genetics of cognitive aging. PMID:23586942
NASA Astrophysics Data System (ADS)
Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou
2017-01-01
In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle between the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent, we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained by our new FD-FWD approach with those of the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in the FWD for TEM diffusion problems works well even in complex high-contrast conductivity models.
An asymptotic induced numerical method for the convection-diffusion-reaction equation
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.; Sorensen, Danny C.
1988-01-01
A parallel algorithm for the efficient solution of a time dependent reaction convection diffusion equation with small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run time results demonstrate the viability of the method.
Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Cheatwood, F. McNeil
1997-01-01
The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
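The boundary-update pattern described above is the classic ghost-cell (halo) exchange; a minimal 1D slab version with mpi4py (an illustrative stand-in, not LAURA's actual communication code) looks like this:

```python
# 1D slab decomposition with ghost-cell exchange between neighboring ranks.
# Run under mpiexec with mpi4py installed.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_local = 100
u = np.zeros(n_local + 2)                      # interior cells plus two ghosts
u[1:-1] = rank                                 # placeholder subdomain data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost layers with both neighbors; PROC_NULL makes ends no-ops.
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
```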
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
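SciPy ships a Jacobian-free Newton-Krylov solver that illustrates the inner-outer structure (without the Schwarz preconditioner; SciPy applies an unpreconditioned Krylov method by default). The 2D Bratu problem below is a standard nonlinear elliptic test case, with grid size and parameter chosen for illustration.

```python
# Jacobian-free Newton-Krylov via directional differencing, as in the text.
import numpy as np
from scipy.optimize import newton_krylov

n, lam = 50, 1.0
h = 1.0 / (n + 1)

def residual(u):
    """Discrete 2D Bratu problem: -lap(u) = lam * exp(u), u = 0 on the boundary."""
    U = np.pad(u, 1)                           # embed zero Dirichlet boundary
    lap = (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:]
           - 4 * U[1:-1, 1:-1]) / h**2
    return -lap - lam * np.exp(u)

sol = newton_krylov(residual, np.zeros((n, n)), method='lgmres')
```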
NASA Astrophysics Data System (ADS)
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposed method is finally verified using numerical examples and a laboratory test.
The Multiscale Robin Coupled Method for flows in porous media
NASA Astrophysics Data System (ADS)
Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.
2018-02-01
A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established, and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially, we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea
NASA Astrophysics Data System (ADS)
Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju
2014-08-01
A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface, and the Rankine source method is applied to the inner domain while the transient Green function method is used in the outer domain. Two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions in head sea for a Series 60 ship and the S175 containership are presented and verified. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain during the numerical calculation, and it exhibits good numerical stability, avoiding the divergence problems encountered for ships with flare.
Statistical image-domain multimaterial decomposition for dual-energy CT.
Xue, Yi; Ruan, Ruoshui; Hu, Xiuhua; Kuang, Yu; Wang, Jing; Long, Yong; Niu, Tianye
2017-03-01
Dual-energy CT (DECT) enhances tissue characterization because of its basis material decomposition capability. In addition to conventional two-material decomposition from DECT measurements, multimaterial decomposition (MMD) is required in many clinical applications. To solve the ill-posed problem of reconstructing multi-material images from dual-energy measurements, additional constraints are incorporated into the formulation, including volume and mass conservation and the assumptions that there are at most three materials in each pixel and various material types among pixels. The recently proposed flexible image-domain MMD method decomposes pixels sequentially into multiple basis materials using a direct inversion scheme, which leads to magnified noise in the material images. In this paper, we propose a statistical image-domain MMD method for DECT to suppress the noise. The proposed method applies penalized weighted least-squares (PWLS) reconstruction with a negative log-likelihood term and edge-preserving regularization for each material. The statistical weight is determined by a data-based method accounting for the noise variance of high- and low-energy CT images. We apply optimization transfer principles to design a series of pixel-wise separable quadratic surrogate (PWSQS) functions which monotonically decrease the cost function. The separability in each pixel enables the simultaneous update of all pixels. The proposed method is evaluated on a digital phantom, the Catphan©600 phantom, and three patient datasets (pelvis, head, and thigh). We also implement the direct inversion and low-pass filtration methods for comparison. Compared with the direct inversion method, the proposed method reduces the noise standard deviation (STD) in soft tissue by 95.35% in the digital phantom study, by 88.01% in the Catphan©600 phantom study, by 92.45% in the pelvis patient study, by 60.21% in the head patient study, and by 81.22% in the thigh patient study, respectively. The overall volume fraction accuracy is improved by around 6.85%. Compared with the low-pass filtration method, the root-mean-square percentage error (RMSE(%)) of electron densities in the Catphan©600 phantom is decreased by 20.89%. At the point where the modulation transfer function (MTF) magnitude decreases to 50%, the proposed method increases the spatial resolution by an overall factor of 1.64 on the digital phantom and 2.16 on the Catphan©600 phantom. The overall volume fraction accuracy is increased by 6.15%. In summary, we proposed a statistical image-domain MMD method using DECT measurements. The method successfully suppresses the magnified noise while faithfully retaining the quantification accuracy and anatomical structure in the decomposed material images. The proposed method is practical and promising for advanced clinical applications using DECT imaging. © 2017 American Association of Physicists in Medicine.
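The direct-inversion baseline that the statistical method improves upon reduces, for a pixel containing three known materials, to a small linear solve that appends the volume-conservation constraint to the two energy measurements; the attenuation values below are made up for illustration.

```python
# Per-pixel three-material decomposition by direct inversion:
# two energy measurements plus the sum-to-one volume constraint.
import numpy as np

mu = np.array([[0.40, 0.25, 0.20],     # attenuation at high energy (assumed)
               [0.55, 0.30, 0.22]])    # attenuation at low energy (assumed)

def mmd_pixel(y_high, y_low, mu):
    A = np.vstack([mu, np.ones(3)])    # append the sum-to-one constraint row
    b = np.array([y_high, y_low, 1.0])
    return np.linalg.solve(A, b)       # volume fractions of the 3 materials

fracs = mmd_pixel(0.265, 0.326, mu)    # ~ [0.2, 0.5, 0.3] for this mixture
# A is nearly singular (det ~ -5e-4 here): exactly the noise magnification
# that motivates the statistical PWLS formulation above.
```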
Domain Decomposition with Local Mesh Refinement.
1989-08-01
... smooth coefficients, or non-smooth solutions. We employ from 1 to 1024 tiles on problems containing up to 161K degrees of freedom. Though ... the methodology survives such compromises and is even sequentially advantageous in many problems. The domain decomposition algorithms we employ (section 3) ... on the unit square ... is the outward normal. The second example, from [1, 27], has a smooth solution, but rapidly ...
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, thanks to advances in multi-objective optimisation, algorithms from that field can also be used to solve single-objective problems. In this vein, a number of papers have proposed multi-objectivising single-objective problems in order to apply multi-objective algorithms to their optimisation. In this article, we follow up on this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is treated as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments, conducted on a large benchmark of instances of the four problems, show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion of the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from the Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image, as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth, as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
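The recovery step (3-mode multiplication with the pseudoinverse of the spectral profile matrix) takes a few lines of numpy; this sketch synthesizes its own tensor and omits the clustering-based estimation of the number of materials:

    import numpy as np

    rng = np.random.default_rng(1)
    I, J, K, R = 32, 32, 8, 3            # image height/width, spectral bands, materials
    A = rng.random((K, R))               # columns: spectral profiles of the materials
    S_true = rng.random((I, J, R))       # spatial distributions of the materials
    T = np.einsum('ijr,kr->ijk', S_true, A)   # multispectral image tensor

    # 3-mode multiplication of the image tensor with pinv(A) recovers the
    # spatial distributions (exact here because A has full column rank):
    S = np.einsum('rk,ijk->ijr', np.linalg.pinv(A), T)
    print(np.allclose(S, S_true))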
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pautz, Shawn D.; Bailey, Teresa S.
2016-11-29
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
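A toy sketch of the overloading idea (purely illustrative; real partitioners also account for sweep dependencies and load balance):

    # With overload factor f = n_subdomains / n_ranks, each rank owns several
    # subdomains and can progress on one while another waits on upstream data.
    def overloaded_partition(n_subdomains, n_ranks):
        return {s: s % n_ranks for s in range(n_subdomains)}

    print(overloaded_partition(8, 4))    # overload factor 2: two subdomains per rank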
Assessment of swirl spray interaction in lab scale combustor using time-resolved measurements
NASA Astrophysics Data System (ADS)
Rajamanickam, Kuppuraj; Jain, Manish; Basu, Saptarshi
2017-11-01
Liquid fuel injection into highly turbulent swirling flows is common practice in gas turbine combustors to improve flame stabilization. It is well known that the vortex bubble breakdown (VBB) phenomenon in strongly swirling jets exhibits complicated flow structures in the spatial domain. In this study, the interaction of a hollow-cone liquid sheet with such a coaxial swirling flow field has been studied experimentally using time-resolved measurements. In particular, much attention is focused on the near-field breakup mechanism (i.e. primary atomization) of the liquid sheet. The detailed characterization of the swirling gas flow field is carried out using time-resolved PIV (3.5 kHz). Furthermore, the complicated breakup mechanisms and interaction of the liquid sheet are imaged with the help of a high-speed shadow imaging system. Subsequently, proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are applied to the instantaneous data sets to retrieve the modal information associated with the interaction dynamics. This helps to delineate the quantitative nature of the interaction process between the liquid sheet and the swirling gas-phase flow field.
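For reference, snapshot POD reduces to an SVD of the mean-subtracted data matrix; a generic sketch with random stand-in data (not the PIV measurements of the study) follows:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((1000, 200))       # rows: spatial points; columns: snapshots
    X -= X.mean(axis=1, keepdims=True)         # subtract the temporal mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U                                  # spatial POD modes
    coeffs = np.diag(s) @ Vt                   # temporal coefficients
    energy = s**2 / np.sum(s**2)               # fluctuation energy captured per mode
    print(energy[:5])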
NASA Astrophysics Data System (ADS)
Gatto, Paolo; Lipparini, Filippo; Stamm, Benjamin
2017-12-01
The domain-decomposition (dd) paradigm, originally introduced for the conductor-like screening model, has been recently extended to the dielectric Polarizable Continuum Model (PCM), resulting in the ddPCM method. We present here a complete derivation of the analytical derivatives of the ddPCM energy with respect to the positions of the solute's atoms and discuss their efficient implementation. As is the case for the energy, we observe a quadratic scaling, which is discussed and demonstrated with numerical tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincenti, H.; Vay, J. -L.
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solvers with domain decomposition techniques that may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
Components for Atomistic-to-Continuum Multiscale Modeling of Flow in Micro- and Nanofluidic Systems
Adalsteinsson, Helgi; Debusschere, Bert J.; Long, Kevin R.; ...
2008-01-01
Micro- and nanofluidics pose a series of significant challenges for science-based modeling. Key among those are the wide separation of length- and timescales between interface phenomena and bulk flow and the spatially heterogeneous solution properties near solid-liquid interfaces. It is not uncommon for characteristic scales in these systems to span nine orders of magnitude from the atomic motions in particle dynamics up to evolution of mass transport at the macroscale level, making explicit particle models intractable for all but the simplest systems. Recently, atomistic-to-continuum (A2C) multiscale simulations have gained a lot of interest as an approach to rigorously handle particle-level dynamics while also tracking evolution of large-scale macroscale behavior. While these methods are clearly not applicable to all classes of simulations, they are finding traction in systems in which tight-binding, and physically important, dynamics at system interfaces have complex effects on the slower-evolving large-scale evolution of the surrounding medium. These conditions allow decomposition of the simulation into discrete domains, either spatially or temporally. In this paper, we describe how features of domain decomposed simulation systems can be harnessed to yield flexible and efficient software for multiscale simulations of electric field-driven micro- and nanofluidics.
Ficken, Cari D; Wright, Justin P
2017-01-01
Litter quality and soil environmental conditions are well-studied drivers influencing decomposition rates, but the role played by disturbance legacy, such as fire history, in mediating these drivers is not well understood. Fire history may impact decomposition directly, through changes in soil conditions that impact microbial function, or indirectly, through shifts in plant community composition and litter chemistry. Here, we compared early-stage decomposition rates across longleaf pine forest blocks managed with varying fire frequencies (annual burns, triennial burns, fire-suppression). Using a reciprocal transplant design, we examined how litter chemistry and soil characteristics independently and jointly influenced litter decomposition. We found that both litter chemistry and soil environmental conditions influenced decomposition rates, but only the former was affected by historical fire frequency. Litter from annually burned sites had higher nitrogen content than litter from triennially burned and fire suppression sites, but this was correlated with only a modest increase in decomposition rates. Soil environmental conditions had a larger impact on decomposition than litter chemistry. Across the landscape, decomposition differed more along soil moisture gradients than across fire management regimes. These findings suggest that fire frequency has a limited effect on litter decomposition in this ecosystem, and encourage extending current decomposition frameworks into disturbed systems. However, litter from different species lost different masses due to fire, suggesting that fire may impact decomposition through the preferential combustion of some litter types. Overall, our findings also emphasize the important role of spatial variability in soil environmental conditions, which may be tied to fire frequency across large spatial scales, in driving decomposition rates in this system.
Performance evaluation of canny edge detection on a tiled multicore architecture
NASA Astrophysics Data System (ADS)
Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald
2011-01-01
In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. These strategies include different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, which can parallelize across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, the programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
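The domain decomposition strategy can be sketched with Python's multiprocessing; here a simple gradient-magnitude operator stands in for the full Canny pipeline, and the strip count and one-pixel halo are assumptions:

    import numpy as np
    from multiprocessing import Pool

    def grad_mag(tile):
        gy, gx = np.gradient(tile.astype(float))   # stand-in for Canny
        return np.hypot(gx, gy)

    def process_rows(args):
        img, r0, r1 = args                          # whole image is copied per worker
        h0, h1 = max(r0 - 1, 0), min(r1 + 1, img.shape[0])   # 1-pixel halo rows
        out = grad_mag(img[h0:h1])
        return out[r0 - h0 : out.shape[0] - (h1 - r1)]       # drop the halo again

    if __name__ == '__main__':
        img = np.random.default_rng(3).random((512, 512))
        n = 4                                       # number of subimages/processes
        bounds = np.linspace(0, img.shape[0], n + 1, dtype=int)
        with Pool(n) as pool:
            strips = pool.map(process_rows,
                              [(img, a, b) for a, b in zip(bounds, bounds[1:])])
        result = np.vstack(strips)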
Graph Frequency Analysis of Brain Signals
Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro
2016-01-01
This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
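A generic sketch of the graph frequency decomposition, with a random symmetric matrix standing in for a measured functional connectivity network:

    import numpy as np

    rng = np.random.default_rng(4)
    W = rng.random((20, 20)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
    L = np.diag(W.sum(axis=1)) - W       # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)           # lam: graph frequencies, U: Fourier basis
    x = rng.standard_normal(20)          # a brain signal on the 20 regions
    x_hat = U.T @ x                      # graph Fourier transform
    x_smooth = U[:, :5] @ x_hat[:5]      # low-frequency (spatially smooth) part
    x_rapid = x - x_smooth               # high-frequency (rapidly varying) part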
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.
Wang, Fei; Qin, Zhihao; Li, Wenjuan; Song, Caiying; Karnieli, Arnon; Zhao, Shuhe
2014-12-25
Land surface temperature (LST) images retrieved from the thermal infrared (TIR) band data of the Moderate Resolution Imaging Spectroradiometer (MODIS) have much lower spatial resolution than the MODIS visible and near-infrared (VNIR) band data. The coarse pixel scale of MODIS LST images (1000 m at nadir) has limited their applicability in many studies requiring high spatial resolution, compared with the MODIS VNIR band data at a pixel scale of 250-500 m. In this paper we develop an efficient pixel decomposition approach to increase the spatial resolution of MODIS LST images using the VNIR band data as assistance. The unique feature of this approach is that the thermal radiance of the parent pixels in the MODIS LST image remains unchanged after they are decomposed into the sub-pixels of the resulting image. There are two important steps in the decomposition, initial temperature estimation and final temperature determination, so the approach can be termed double-step pixel decomposition (DSPD). Both steps involve a series of procedures to achieve the decomposed LST image, including classification of the surface patterns, establishment of the LST dependence on the normalized difference vegetation index (NDVI) and the normalized difference building index (NDBI), conversion of LST into thermal radiance through the Planck equation, and computation of weights for the sub-pixels of the resulting image. Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), with much higher spatial resolution than MODIS, was on board the same platform (Terra) as MODIS, an experiment was conducted to validate the accuracy and efficiency of our approach. The ASTER LST image was used as the reference against which the decomposed LST image was compared. The results show that the spatial distribution of the decomposed LST image is very similar to that of the ASTER LST image, with a root mean square error (RMSE) of 2.7 K for the entire image. Comparison with the evaluation DisTrad (E-DisTrad) and re-sampling methods for pixel decomposition also indicates that our DSPD has the lowest RMSE in all cases, including urban regions, water bodies, and natural terrain. The obvious increase in spatial resolution remarkably improves the capability of coarse MODIS LST images to highlight the details of LST variation. It can therefore be concluded that, in spite of its complicated procedures, the proposed DSPD approach provides an alternative way to improve the spatial resolution of MODIS LST images and hence expand their applicability to the real world.
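The radiance-conserving core of the approach can be sketched as follows (toy sub-pixel temperatures; the Planck-law constants are standard SI values and the 11 μm band wavelength is an assumption):

    import numpy as np

    c1, c2, lam = 1.19104e-16, 1.43877e-2, 11e-6   # 2hc^2, hc/k_B, band wavelength (m)

    def planck(T):                                  # spectral radiance B(lam, T)
        return c1 / (lam**5 * (np.exp(c2 / (lam * T)) - 1.0))

    def inv_planck(L):                              # brightness temperature from radiance
        return c2 / (lam * np.log(1.0 + c1 / (lam**5 * L)))

    T_parent = 300.0                                # parent MODIS LST pixel (K)
    T_init = np.array([[297.0, 299.0],              # initial sub-pixel estimates,
                       [301.0, 304.0]])             # e.g. from NDVI/NDBI regressions
    L_sub = planck(T_init)
    L_sub *= planck(T_parent) / L_sub.mean()        # keep the parent radiance unchanged
    T_final = inv_planck(L_sub)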
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and temporal meshes. The classical Sod 1D shock tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes and hybrid solvers are compared. The hybrid solver is 3-4 times faster than the Boltzmann solver in CPU time.
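A minimal sketch of a gradient-based continuum breakdown criterion of the kind used for such automatic domain decomposition (the mean free path and the 0.05 threshold are assumptions):

    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    rho = np.where(x < 0.5, 1.0, 0.125)                    # Sod-like density jump
    rho = np.convolve(np.pad(rho, 2, mode='edge'),
                      np.ones(5) / 5, mode='valid')        # mildly smoothed profile
    mean_free_path = 1e-3
    Kn_local = mean_free_path * np.abs(np.gradient(rho, x)) / rho
    boltzmann_cells = Kn_local > 0.05    # steep-gradient cells go to the DSMC solver
    continuum_cells = ~boltzmann_cells   # the rest stay with the Navier-Stokes solver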
Coherence and dimensionality of intense spatiospectral twin beams
NASA Astrophysics Data System (ADS)
Peřina, Jan
2015-07-01
Spatiospectral properties of twin beams at their transition from low to high intensities are analyzed in the parametric and paraxial approximations using a decomposition into paired spatial and spectral modes. Intensity auto- and cross-correlation functions are determined and compared in the spectral and temporal domains as well as in the transverse wave-vector and crystal output planes. Whereas the spectral, temporal, and transverse wave-vector coherence increases with increasing pump intensity, coherence in the crystal output plane is almost independent of the pump intensity owing to the mode structure in this plane. The corresponding auto- and cross-correlation functions approach each other for larger pump intensities. The entanglement dimensionality of a twin beam is determined by comparing several approaches.
NASA Astrophysics Data System (ADS)
Forsén, R.; Ghafoor, N.; Odén, M.
2013-12-01
A concept to improve the hardness and thermal stability of unstable multilayer alloys is presented, based on control of the coherency strain such that the driving force for decomposition is favorably altered. Cathodic arc evaporated cubic TiCrAlN/Ti1-xCrxN multilayer coatings are used as demonstrators. Upon annealing, the coatings undergo spinodal decomposition into nanometer-sized coherent Ti- and Al-rich cubic domains, a process affected by the coherency strain. In addition, the growth of the domains is restricted by the surrounding TiCrN layer compared to a non-layered TiCrAlN coating, which together results in an improved thermal stability of the cubic structure. A significant hardness increase is seen during decomposition in the case of high coherency strain, while a low coherency strain results in a hardness decrease at high annealing temperatures. The metal diffusion paths during domain coarsening are affected by the strain, which in turn is controlled by the Cr content (x) in the Ti1-xCrxN layers. For x = 0 the diffusion occurs both parallel and perpendicular to the growth direction, but for x ≥ 0.9 the diffusion occurs predominantly parallel to the growth direction. Altogether this study demonstrates a structural tool to alter and fine-tune the high-temperature properties of multicomponent materials.
Numeric Modified Adomian Decomposition Method for Power System Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth
This paper investigates the applicability of the numeric Wazwaz-El Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical approximation method, based on a modified Adomian decomposition (ADM) technique, for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
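The plain Adomian decomposition underlying such schemes can be sketched with sympy on a scalar test equation (the WES modification and the power-system models themselves are not reproduced here):

    import sympy as sp

    # Solve y' = y**2, y(0) = 1, whose exact solution is 1/(1 - t).
    t, lam = sp.symbols('t lambda')
    y = [sp.Integer(1)]                       # y_0 from the initial condition
    for n in range(5):
        phi = sum(lam**k * y[k] for k in range(n + 1))
        A_n = sp.diff(phi**2, lam, n).subs(lam, 0) / sp.factorial(n)  # Adomian polynomial
        y.append(sp.integrate(A_n, (t, 0, t)))
    print(sp.expand(sum(y)))                  # 1 + t + t**2 + ..., the series of 1/(1-t)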
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
Domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. The method is particularly suited to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations is solved in the Boussinesq and hydrostatic approximations. The difficulty in obtaining the solution in the whole domain is the need to combine the solutions in the subdomains. For this purpose an iterative algorithm is constructed, and numerical experiments are conducted to investigate the effectiveness of the developed DDM algorithm. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamics problems they are not suitable. In this case the adjoint equation method [2] and inverse problem theory are used. In addition, DDM makes it possible to create algorithms for parallel computation on multiprocessor systems. DDM for a model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609: the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in Mathematical Physics Problems // Numerical processes and systems, No. 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in Mathematical Physics Problems, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
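The combination of subdomain solutions can be illustrated by the classical overlapping alternating Schwarz iteration on a 1D model problem (a stand-in for the ocean model's coupling; grid size and overlap are arbitrary):

    import numpy as np

    # -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact solution u = x(1 - x)/2.
    n, h = 101, 1.0 / 100
    u = np.zeros(n)
    a, b = 40, 60                    # subdomain 1 = [0, b], subdomain 2 = [a, n-1]

    def solve_dirichlet(f, left, right, m):
        # solve -u'' = f on m interior points with given boundary values
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        rhs = np.full(m, f)
        rhs[0] += left / h**2
        rhs[-1] += right / h**2
        return np.linalg.solve(A, rhs)

    for _ in range(30):
        u[1:b] = solve_dirichlet(1.0, u[0], u[b], b - 1)                  # subdomain 1
        u[a + 1:n - 1] = solve_dirichlet(1.0, u[a], u[n - 1], n - a - 2)  # subdomain 2

    xg = np.linspace(0.0, 1.0, n)
    print(np.max(np.abs(u - 0.5 * xg * (1.0 - xg))))   # error shrinks with the overlap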
Štursová, Martina; Bárta, Jiří; Šantrůčková, Hana; Baldrian, Petr
2016-12-01
Forests are recognised as spatially heterogeneous ecosystems. However, knowledge of the small-scale spatial variation in microbial abundance, community composition and activity is limited. Here, we aimed to describe the heterogeneity of environmental properties, namely vegetation, soil chemical composition, fungal and bacterial abundance and community composition, and enzymatic activity, in the topsoil in a small area (36 m²) of a highly heterogeneous regenerating temperate natural forest, and to explore the relationships among these variables. The results demonstrated a high level of spatial heterogeneity in all properties and revealed differences between litter and soil. Fungal communities had substantially higher beta-diversity than bacterial communities, which were more uniform and less spatially autocorrelated. In litter, fungal communities were affected by vegetation and appeared to be more involved in decomposition. In the soil, chemical composition affected both microbial abundance and the rates of decomposition, whereas the effect of vegetation was small. Importantly, decomposition appeared to be concentrated in hotspots with increased activity of multiple enzymes. Overall, forest topsoil should be considered a spatially heterogeneous environment in which the mean estimates of ecosystem-level processes and microbial community composition may confound the existence of highly specific microenvironments. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Using Multiple Grids To Compute Flows
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
1991-01-01
This paper discusses the decomposition of global grids into multiple patched and/or overlaid local grids in computations of fluid flow. Such "domain decomposition" is particularly useful in computing flows about complicated bodies moving relative to each other; for example, flows associated with rotors and stators in turbomachinery, and with rotors and fuselages in helicopters.
Empirical projection-based basis-component decomposition method
NASA Astrophysics Data System (ADS)
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
The Spatial Variability of Organic Matter and Decomposition Processes at the Marsh Scale
NASA Astrophysics Data System (ADS)
Yousefi Lalimi, Fateme; Silvestri, Sonia; D'Alpaos, Andrea; Roner, Marcella; Marani, Marco
2017-04-01
Coastal salt marshes sequester carbon as they respond to the local Rate of Relative Sea Level Rise (RRSLR), and their accretion rate is governed by inorganic soil deposition, organic soil production, and soil organic matter (SOM) decomposition. It is generally recognized that SOM plays a central role in marsh vertical dynamics, but while the limited existing observations and modelling results suggest that SOM varies widely at the marsh scale, we lack systematic observations aimed at understanding how SOM production is modulated spatially as a result of biomass productivity and decomposition rate. Marsh topography and distance to the creek can affect biomass and SOM production, while a higher topographic elevation increases drainage, evapotranspiration, and aeration, thereby likely inducing higher SOM decomposition rates. Data collected in salt marshes in the northern Venice Lagoon (Italy) show that, even though plant productivity decreases in the lower areas of a marsh located farther away from channel edges, the relative contribution of organic soil production to the overall vertical soil accretion tends to remain constant as the distance from the channel increases. These observations suggest that the competing effects of biomass production and aeration/decomposition determine a contribution of organic soil to total accretion that remains approximately constant with distance from the creek, in spite of the declining plant productivity. Here we test this hypothesis using new observations of SOM and decomposition rates from marshes in North Carolina. The objective is to fill the gap in our understanding of the spatial distribution, at the marsh scale, of the organic and inorganic contributions to marsh accretion in response to RRSLR.
Signal processing method and system for noise removal and signal extraction
Fu, Chi Yung; Petrich, Loren
2009-04-14
A signal processing method and system combining smooth level wavelet pre-processing together with artificial neural networks all in the wavelet domain for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then inputted into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and inputted into corresponding neural networks pre-trained to filter out noise in those components also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
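The decompose/filter/reconstruct pipeline can be sketched with PyWavelets, a soft threshold standing in for the patent's pre-trained neural networks (wavelet, level, and threshold value are assumptions):

    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.3 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, 'db4', level=4)   # smooth + rough components
    coeffs[1:] = [pywt.threshold(c, value=0.3, mode='soft')   # shrink rough components
                  for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, 'db4')         # inverse DWT back to the time domain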
Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and avoids modifying the external sound field, by approximating the control sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity, in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array, while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.
Yang, Shuyuan; Zhang, Kai; Wang, Min
2017-08-25
Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from the new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and imposed on the offsets, yielding a spatially and spectrally constrained stable low-rank decomposition algorithm solved via the augmented Lagrange multiplier method. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, our method can efficiently deal with noise and outliers in the source images by exploiting the low-rank and sparse characteristics of the data. Extensive experiments are conducted on several image data sets, and the results demonstrate the efficiency of the proposed LRP.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as few as twice the clutter rank to attain a near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.
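A compact numpy sketch of the low-rank-plus-diagonal covariance model, with eigenvalue truncation as a simple surrogate for the paper's trace-minimization RMP solver (all dimensions are made up):

    import numpy as np

    rng = np.random.default_rng(6)
    N, r, K = 32, 4, 64                  # DoFs, clutter rank, secondary snapshots
    V = rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))
    Z = V @ (rng.standard_normal((r, K)) + 1j * rng.standard_normal((r, K)))
    Z += 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
    R_hat = Z @ Z.conj().T / K           # sample covariance matrix

    w, U = np.linalg.eigh(R_hat)         # ascending eigenvalues
    clutter = (U[:, -r:] * w[-r:]) @ U[:, -r:].conj().T   # low-rank clutter part
    noise_power = w[:-r].mean()          # remaining eigenvalues -> diagonal part
    R_model = clutter + noise_power * np.eye(N)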
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computation of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computation. Potential ways to increase the speed-up ratio and to exploit the resources of future massively parallel supercomputers are also discussed.
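A minimal mpi4py sketch of a 2D Cartesian decomposition with one halo exchange per axis (illustrative only; the IAP model's actual grids and communication pattern are not reproduced):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), 2)           # e.g. 4 ranks -> [2, 2]
    cart = comm.Create_cart(dims, periods=[False, True])  # periodic in longitude only
    field = np.zeros((66, 66))                            # 64x64 interior + 1-cell halo

    for axis in (0, 1):                 # one of the two shifts per axis shown
        src, dst = cart.Shift(axis, 1)
        send = (field[-2, :] if axis == 0 else field[:, -2]).copy()
        recv = np.empty_like(send)
        cart.Sendrecv(send, dest=dst, recvbuf=recv, source=src)
        if src != MPI.PROC_NULL:        # fill my low-side halo if a neighbor exists
            if axis == 0:
                field[0, :] = recv
            else:
                field[:, 0] = recv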
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, W.D.; Keyes, D.E.
1988-03-01
The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
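The threshold-decomposition trick is easy to state in numpy (a software sketch of the signal processing only, not of the optical correlator hardware):

    import numpy as np

    def binary_majority3(b):
        # 3x3 neighborhood sum of a binary slice via padded shifts, then a
        # point threshold: rank 5 of 9 is the median of the binary neighborhood
        p = np.pad(b, 1)
        s = sum(p[i:i + b.shape[0], j:j + b.shape[1]]
                for i in range(3) for j in range(3))
        return s >= 5

    img = (np.random.default_rng(7).random((64, 64)) * 256).astype(np.uint8)
    median = sum(binary_majority3(img >= t).astype(np.uint16) for t in range(1, 256))
    # 'median' equals a conventional 3x3 median filter (stack-filter property);
    # replacing '>= 5' with '>= 1' or '>= 9' yields maximum or minimum filtering.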
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on the Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used, representing the number of persons injured in traffic accidents in Santiago, Chile. The data were collected continuously by the Chilean Police and sampled weekly from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
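One level of the Hankel-SVD split can be sketched as follows (two Hankel columns assumed, as in the basic construction; the recursive multilevel application and the MIMO forecasters are omitted):

    import numpy as np

    def diag_average(H, n):
        # average the anti-diagonals of H back into a length-n series
        out, cnt = np.zeros(n), np.zeros(n)
        K, L = H.shape
        for i in range(K):
            for j in range(L):
                out[i + j] += H[i, j]
                cnt[i + j] += 1
        return out / cnt

    def hankel_split(x, L=2):
        K = len(x) - L + 1
        H = np.column_stack([x[j:j + K] for j in range(L)])   # Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        low = diag_average(s[0] * np.outer(U[:, 0], Vt[0]), len(x))  # rank-1 part
        return low, x - low            # low- and high-frequency components

    x = np.sin(np.linspace(0, 8 * np.pi, 300))
    x += 0.2 * np.random.default_rng(8).standard_normal(300)
    low, high = hankel_split(x)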
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scale single-element gas turbine and rocket combustors (DOI: 10.2514/1.J054557). In addition, we evaluate the capabilities of the methods to deal with data sets of different spatial extents and temporal resolutions.
Origin of the Chemical and Kinetic Stability of Graphene Oxide
Zhou, Si; Bongiorno, Angelo
2013-01-01
At moderate temperatures (≤ 70°C), thermal reduction of graphene oxide is inefficient and after its synthesis the material enters in a metastable state. Here, first-principles and statistical calculations are used to investigate both the low-temperature processes leading to decomposition of graphene oxide and the role of ageing on the structure and stability of this material. Our study shows that the key factor underlying the stability of graphene oxide is the tendency of the oxygen functionalities to agglomerate and form highly oxidized domains surrounded by areas of pristine graphene. Within the agglomerates of functional groups, the primary decomposition reactions are hindered by both geometrical and energetic factors. The number of reacting sites is reduced by the occurrence of local order in the oxidized domains, and due to the close packing of the oxygen functionalities, the decomposition reactions become – on average – endothermic by more than 0.6 eV. PMID:23963517
Liu, Zhengyan; Mao, Xianqiang; Song, Peng
2017-01-01
Temporal index decomposition analysis and spatial index decomposition analysis were applied to understand the driving forces of the emissions embodied in China's exports and net exports, respectively, during 2002-2011. The accumulated emissions embodied in exports accounted for approximately 30% of the total emissions in China; although the contribution of the sectoral total emissions intensity (technique effect) declined, the scale effect was largely responsible for the mounting emissions associated with exports, and the composition effect played a largely insignificant role. Calculations of the emissions embodied in net exports suggest that China is generally in an environmentally inferior position compared with its major trade partners. The differences in the economy-wide emission intensities between China and its major trade partners were the biggest contributor to this situation, and the trade balance effect played a less important role. However, a lower degree of specialization in pollution-intensive products in exports than in imports helped to slightly reduce the emissions embodied in net exports. The temporal index decomposition analysis results suggest that China should take effective measures to optimize its export and supply-side structure and reduce the total emissions intensity. The spatial index decomposition analysis suggests that a more aggressive import policy would be useful for curbing domestic and global emissions, and that the transfer of advanced production and emission control technologies from developed to developing countries should be a compulsory global environmental policy option to mitigate the possible leakage of pollution emissions caused by international trade.
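The arithmetic of a two-factor index decomposition can be sketched with the LMDI formula (a generic illustration with made-up numbers, not the paper's trade data):

    from math import log

    def logmean(a, b):
        return a if a == b else (a - b) / (log(a) - log(b))

    Q0, I0 = 100.0, 0.80     # base-year export scale and emission intensity (made up)
    QT, IT = 180.0, 0.55     # end-year values
    C0, CT = Q0 * I0, QT * IT
    w = logmean(CT, C0)
    scale_effect = w * log(QT / Q0)      # export growth pushes emissions up
    technique_effect = w * log(IT / I0)  # falling intensity pulls them down
    print(scale_effect + technique_effect, CT - C0)   # equal by construction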
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra of cancerous, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. Discrete wavelets enabled the isolation of high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues that is not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for the spectral variations of normal and benign tissues. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain in the fluorescence spectra of cancerous tissues. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
A non-invasive implementation of a mixed domain decomposition method for frictional contact problems
NASA Astrophysics Data System (ADS)
Oumaziz, Paul; Gosselet, Pierre; Boucard, Pierre-Alain; Guinard, Stéphane
2017-11-01
A non-invasive implementation of the Latin domain decomposition method for frictional contact problems is described. The formulation requires dealing with mixed (Robin) conditions on the faces of the subdomains, which is not a classical feature of commercial software. Therefore we propose a new implementation of the linear stage of the Latin method with a non-local search direction built as the stiffness of a layer of elements on the interfaces. This choice enables us to implement the method within the open source software Code_Aster, and to derive 2D and 3D examples with performance similar to that of the standard Latin method.
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and it shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate along with the rotation, and ultimately lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable in domain decomposition parallel computations.
Characteristic-eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1989-01-01
Lumley's proper orthogonal decomposition technique is applied to the turbulent flow in a channel. Coherent structures are extracted by decomposing the velocity field into characteristic eddies with random coefficients. A generalization of the shot-noise expansion is used to determine the characteristic eddies in homogeneous spatial directions. Three different techniques are used to determine the phases of the Fourier coefficients in the expansion: (1) one based on the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Similar results are found from each of these techniques.
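As a concrete illustration of the snapshot form of such a decomposition, the following NumPy sketch (ours, not the authors' code) extracts orthonormal spatial modes and their random coefficients from a matrix of velocity snapshots:

```python
import numpy as np

def pod(snapshots):
    """snapshots: (n_points, n_snapshots) array of velocity samples."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # fluctuating part
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U                              # orthonormal spatial modes
    coeffs = np.diag(s) @ Vt               # random (temporal) coefficients
    energy = s**2 / np.sum(s**2)           # fraction of energy per mode
    return modes, coeffs, energy
```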
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
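The bookkeeping behind such a variance decomposition can be sketched in a few lines. This toy version (ours; two scales, ordinary least squares, whereas the study partitioned deviance from more elaborate models) shows how unique and shared (cross-scale) components fall out of the fits of nested models:

```python
import numpy as np

def r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1.0 - (y - X1 @ beta).var() / y.var()

def partition(X_local, X_landscape, y):
    r2_a = r2(X_local, y)                       # local-scale model
    r2_b = r2(X_landscape, y)                   # landscape-scale model
    r2_ab = r2(np.column_stack([X_local, X_landscape]), y)
    return (r2_ab - r2_b,                       # unique local component
            r2_ab - r2_a,                       # unique landscape component
            r2_a + r2_b - r2_ab)                # shared: cross-scale correlation
```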
Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian
2017-01-01
Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
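Tensor decomposition comes in several flavors; as a hedged, NumPy-only illustration of the general idea, the sketch below computes a Tucker-style higher-order SVD (HOSVD) of a site × year × species abundance array. The study's actual choice of decomposition and software may differ.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """One factor matrix per mode (e.g. sites, years, species) plus a core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):             # project onto each factor basis
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors
```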
Nonuniform fluids in the grand canonical ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, J.K.
1982-01-01
Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is particularly suitable, conceptually, in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure is reviewed by which this approach is systematically corrected for slowly varying density profiles, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate-bounded fluids, as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherent very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble. This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition, with petit canonical corrections, to the two-phase regime.
Evaluating the performance of the particle finite element method in parallel architectures
NASA Astrophysics Data System (ADS)
Gimenez, Juan M.; Nigro, Norberto M.; Idelsohn, Sergio R.
2014-05-01
This paper presents a high performance implementation of the particle-mesh based method called the particle finite element method two (PFEM-2). It consists of a material-derivative based formulation of the equations with a hybrid spatial discretization which uses an Eulerian mesh and Lagrangian particles. The main aim of PFEM-2 is to solve transport equations as fast as possible while keeping some level of accuracy. The method was found to be competitive with classical Eulerian alternatives for these targets, even in their range of optimal application. To evaluate the goodness of the method with large simulations, it is imperative to use parallel environments. Parallel strategies for the Finite Element Method have been widely studied and many libraries can be used to solve the Eulerian stages of PFEM-2. However, Lagrangian stages, such as streamline integration, must be developed considering the parallel strategy selected. The main drawback of PFEM-2 is the large amount of memory needed, which limits its application to large problems with only one computer. Therefore, a distributed-memory implementation is urgently needed. Unlike a shared-memory approach, using domain decomposition the memory is automatically isolated, thus avoiding race conditions; however, new issues appear due to data distribution over the processes. Thus, a domain decomposition strategy for both particles and mesh is adopted, which minimizes the communication between processes. Finally, performance analyses running over multicore and multinode architectures are presented. The Courant-Friedrichs-Lewy number used influences the efficiency of the parallelization and, in some cases, a weighted partitioning can be used to improve the speed-up. However, the total CPU time for the cases presented is lower than that obtained when using classical Eulerian strategies.
Modal decomposition of turbulent supersonic cavity
NASA Astrophysics Data System (ADS)
Soni, R. K.; Arya, N.; De, A.
2018-06-01
Self-sustained oscillations in a Mach 3 supersonic cavity with a length-to-depth ratio of three are investigated using wall-modeled large eddy simulation methodology for Re_D = 3.39 × 10^5. The unsteady data obtained through computation are utilized to investigate the spatial and temporal evolution of the flow field, especially the second invariant of the velocity tensor, while the phase-averaged data are analyzed over a feedback cycle to study the spatial structures. This analysis is accompanied by the proper orthogonal decomposition (POD) data, which reveals the presence of discrete vortices along the shear layer. The POD analysis is performed in both the spanwise and streamwise planes to extract the coherence in flow structures. Finally, dynamic mode decomposition is performed on the data sequence to obtain the dynamic information and deeper insight into the self-sustained mechanism.
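For the dynamic-mode step, a minimal exact-DMD sketch (ours; the paper's implementation details may differ) shows how modes and their eigenvalues are obtained from a sequence of snapshots:

```python
import numpy as np

def dmd(X, r):
    """X: (n_points, n_snapshots) snapshot matrix; r: truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]                 # time-shifted snapshot pairs
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].conj().T
    Atilde = Ur.conj().T @ X2 @ Vr / sr          # low-rank evolution operator
    lam, W = np.linalg.eig(Atilde)               # eigenvalues: growth/frequency
    Phi = X2 @ Vr / sr @ W                       # exact DMD modes
    return lam, Phi
```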
Three dimensional empirical mode decomposition analysis apparatus, method and article of manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
Unified path integral approach to theories of diffusion-influenced reactions
NASA Astrophysics Data System (ADS)
Prüstel, Thorsten; Meier-Schellersheim, Martin
2017-08-01
Building on mathematical similarities between quantum mechanics and theories of diffusion-influenced reactions, we develop a general approach for computational modeling of diffusion-influenced reactions that is capable of capturing not only the classical Smoluchowski picture but also alternative theories, as is here exemplified by a volume reactivity model. In particular, we prove the path decomposition expansion of various Green's functions describing the irreversible and reversible reaction of an isolated pair of molecules. To this end, we exploit a connection between boundary value and interaction potential problems with δ- and δ'-function perturbations. We employ a known path-integral-based summation of a perturbation series to derive a number of exact identities relating propagators and survival probabilities satisfying different boundary conditions in a unified and systematic manner. Furthermore, we show how the path decomposition expansion represents the propagator as a product of three factors in the Laplace domain that correspond to quantities figuring prominently in stochastic spatially resolved simulation algorithms. This analysis will thus be useful for the interpretation of current and the design of future algorithms. Finally, we discuss the relation between the general approach and the theory of Brownian functionals and calculate the mean residence time for the case of irreversible and reversible reactions.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
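The reduced-rank construction itself is a standard truncated SVD. A small NumPy sketch (ours, with a random matrix standing in for the sampled scattering operator):

```python
import numpy as np

def reduced_rank(S, k):
    """Keep the k strongest singular components of the scattering matrix S."""
    U, s, Vh = np.linalg.svd(S, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vh[:k]

# Random stand-in for a sampled far-field scattering operator.
S = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
S10 = reduced_rank(S, 10)   # rank-10 representation suppresses weak components
```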
Improved scaling of temperature-accelerated dynamics using localization
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques G.
2016-07-01
While temperature-accelerated dynamics (TAD) is a powerful method for carrying out non-equilibrium simulations of systems over extended time scales, the computational cost of serial TAD increases approximately as N^3, where N is the number of atoms. In addition, although a parallel TAD method based on domain decomposition [Y. Shim et al., Phys. Rev. B 76, 205439 (2007)] has been shown to provide significantly improved scaling, the dynamics in such an approach is only approximate while the size of activated events is limited by the spatial decomposition size. Accordingly, it is of interest to develop methods to improve the scaling of serial TAD. As a first step in understanding the factors which determine the scaling behavior, we first present results for the overall scaling of serial TAD and its components, which were obtained from simulations of Ag/Ag(100) growth and Ag/Ag(100) annealing, and compare with theoretical predictions. We then discuss two methods based on localization which may be used to address two of the primary "bottlenecks" to the scaling of serial TAD with system size. By implementing both of these methods, we find that for intermediate system sizes, the scaling is improved by almost a factor of N^{1/2}. Some additional possible methods to improve the scaling of TAD are also discussed.
NASA Astrophysics Data System (ADS)
Bertrand, G.; Comperat, M.; Lallemant, M.
1980-09-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with (110) crystallographic orientation. Temperature and pressure conditions were selected so as to obtain elliptical trihydrate domains. The study deals with the evolution, versus time, of the elliptical domain dimensions and the evolution, versus water vapor pressure, of the D/d ratio of the ellipse axes on the one hand, and of the interface displacement rate along a given direction on the other. The phenomena observed are not basically different from those yielded by the overall kinetic study of the solid sample. Their magnitude, however, is modulated depending on displacement direction. The results are analyzed within the scope of our study of the endothermic decomposition of solids.
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high resolution calculations for initial sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. Thus, we display features of the conformal factor for different masses, spins and linear momenta.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software, for solving large-scale acoustic problems arising from unified finite element frameworks. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures is evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercial" and/or "public domain" software are also included whenever possible.
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results are presented for a compressor and a turbine cascade. These simulations are intended to show the ability of the present method to generate grid-independent solutions. Comparisons with data are also presented; they further demonstrate the usefulness of the present work, as they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
We studied the spatial and temporal patterns of decomposition of roots of a desert sub-shrub, a herbaceous annual, and four species of perennial grasses at several locations on nitrogen fertilized and unfertilized transects on a Chihuahuan Desert watershed for 3.5 years. There we...
Structure and information in spatial segregation.
Chodrow, Philip S
2017-10-31
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
Repeated decompositions reveal the stability of infomax decomposition of fMRI data
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2010-01-01
In this study, we decomposed 12 fMRI data sets from six subjects each 101 times using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
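The matching step generalizes readily. A sketch (ours) using SciPy's Hungarian solver to pair the components of a repeated decomposition with those of a reference decomposition by maximal absolute spatial correlation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(ref_maps, new_maps):
    """ref_maps, new_maps: (n_components, n_voxels) spatial maps."""
    n = len(ref_maps)
    # Absolute correlation, since ICA component signs are arbitrary.
    R = np.abs(np.corrcoef(ref_maps, new_maps)[:n, n:])
    rows, cols = linear_sum_assignment(-R)   # maximize total correlation
    return cols, R[rows, cols]               # permutation and match quality
```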
Testing domain general learning in an Australian lizard.
Qi, Yin; Noble, Daniel W A; Fu, Jinzhong; Whiting, Martin J
2018-06-02
A key question in cognition is whether animals that are proficient in a specific cognitive domain (domain specific hypothesis), such as spatial learning, are also proficient in other domains (domain general hypothesis) or whether there is a trade-off. Studies testing among these hypotheses are biased towards mammals and birds. To understand constraints on the evolution of cognition more generally, we need broader taxonomic and phylogenetic coverage. We tested Australian eastern water skinks (Eulamprus quoyii) of known spatial learning ability in three additional tasks: an instrumental task and two discrimination tasks. Under domain specific learning we predicted that lizards that were good at spatial learning would perform less well in the discrimination tasks. Conversely, we predicted that lizards that did not meet our criterion for spatial learning would perform better in the discrimination tasks. Lizards with domain general learning should perform approximately equally well (or poorly) in these tasks. Lizards classified as spatial learners performed no differently from non-spatial learners in both the instrumental and discrimination learning tasks. Nevertheless, lizards were proficient in all tasks. Our results reveal two patterns: domain general learning in spatial learners and domain specific learning in non-spatial learners. We suggest that delineating learning into domain general and domain specific may be overly simplistic, and we need instead to focus on individual variation in learning ability, which, ultimately, is likely to play a key role in fitness. These results, in combination with previously published work on this species, suggest that this species has behavioral flexibility, being competent across multiple cognitive domains and capable of reversal learning.
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
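In serial form, the preferred combination (GMRES with an ILU-type preconditioner) can be sketched with SciPy; here a simple tridiagonal system stands in for a reacting-flow Jacobian:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner as an operator
x, info = spla.gmres(A, b, M=M)
assert info == 0                              # 0 signals convergence
```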
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
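To make the random-walk picture concrete, here is a minimal forward (direct) Monte Carlo solver for x = Hx + b (ours; the paper analyzes the adjoint variant). The expected walk length, central to the analysis above, is set by the spectral radius of H and the absorption probability:

```python
import numpy as np

def mc_component(H, b, i, n_walks=20000, absorb=0.1, seed=0):
    """Monte Carlo estimate of x_i for x = Hx + b (assumes rho(|H|) < 1)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)  # transition probabilities
    total = 0.0
    for _ in range(n_walks):
        state, weight = i, 1.0
        total += weight * b[state]            # k = 0 term of the Neumann series
        while rng.random() >= absorb:         # survive with probability 1 - absorb
            nxt = rng.choice(n, p=P[state])
            weight *= H[state, nxt] / ((1.0 - absorb) * P[state, nxt])
            state = nxt
            total += weight * b[state]        # contributes an (H^k b)_i term
    return total / n_walks
```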
Sari C. Saunders; Jiquan Chen; Thomas D. Drummer; Thomas R. Crow; Kimberley D. Brosofske; Eric J. Gustafson
2002-01-01
Understanding landscape organization across scales is vital for determining the impacts of management and retaining structurally and functionally diverse ecosystems. We studied the relationships of a functional variable, decomposition, to microclimatic, vegetative and structural features at multiple scales in two distinct landscapes of northern Wisconsin, USA. We hoped...
Fine root dynamics across a chronosequence of upland temperate deciduous forests
Travis W. Idol; Phillip E. Pope; Felix Jr. Ponder
2000-01-01
Following a major disturbance event in forests that removes most of the standing vegetation, patterns of fine root growth, mortality, and decomposition may be altered from the pre-disturbance conditions. The objective of this study was to describe the changes in the seasonal and spatial dynamics of fine root growth, mortality, and decomposition that occur following...
Immobilization and mineralization of N and P by heterotrophic microbes during leaf decomposition
Beth Cheever; Erika Kratzer; Jackson Webster
2012-01-01
According to theory, the rate and stoichiometry of microbial mineralization depend, in part, on nutrient availability. For microbes associated with leaves in streams, nutrients are available from both the water column and the leaf. Therefore, microbial nutrient cycling may change with nutrient availability and during leaf decomposition. We explored spatial and temporal...
García-Palacios, Pablo; Maestre, Fernando T; Kattge, Jens; Wall, Diana H
2013-08-01
Climate and litter quality have been identified as major drivers of litter decomposition at large spatial scales. However, the role played by soil fauna remains largely unknown, despite its importance for litter fragmentation and microbial activity. We synthesised litterbag studies to quantify the effect sizes of soil fauna on litter decomposition rates at the global and biome scales, and to assess how climate, litter quality and soil fauna interact to determine such rates. Soil fauna consistently enhanced litter decomposition at both global and biome scales (average increment ~37% [corrected]). However, climate and litter quality differently modulated the effects of soil fauna on decomposition rates between biomes, from climate-driven biomes to those where climate effects were mediated by changes in litter quality. Our results advocate for the inclusion of biome-specific soil fauna effects on litter decomposition as a means to reduce the unexplained variation in large-scale decomposition models. © 2013 John Wiley & Sons Ltd/CNRS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miehe, Gerhard; Lauterbach, Stefan; Kleebe, Hans-Joachim
High-resolution transmission electron microscopy (HR-TEM) is used to study, in situ, spatially resolved decomposition in individual nanocrystals of metal hydroxides and oxyhydroxides. This case study reports on the decomposition of indium hydroxide (c-In(OH)3) to bixbyite-type indium oxide (c-In2O3). The electron beam is focused onto a single cube-shaped In(OH)3 crystal of {100} morphology with ca. 35 nm edge length, and a sequence of HR-TEM images was recorded during electron beam irradiation. The frame-by-frame analysis of video sequences allows for the in situ, time-resolved observation of the shape and orientation of the transformed crystals, which in turn enables the evaluation of the kinetics of c-In2O3 crystallization. Supplementary material (a video of the transformation) related to this article can be found online (10.1016/j.jssc.2012.09.022). After irradiation the shape of the parent cube-shaped crystal is preserved; however, its linear dimension (edge) is reduced by a factor of 1.20. The corresponding spotted selected area electron diffraction (SAED) pattern representing zone [001] of c-In(OH)3 is transformed to a diffuse, strongly textured ring-like pattern of c-In2O3, which indicates the transformed cube is no longer a single crystal but has disintegrated into individual c-In2O3 domains with a size of about 5-10 nm. An induction time of approximately 15 s is estimated from the time-resolved Fourier transforms. The volume fraction of the transformed phase (c-In2O3), calculated from the shrinkage of the parent c-In(OH)3 crystal in the recorded HR-TEM images, is used as a measure of the kinetics of c-In2O3 crystallization within the framework of the Avrami-Erofeev formalism. The Avrami exponent of ~3 is characteristic of a reaction mechanism with fast nucleation at the beginning of the reaction and subsequent three-dimensional growth of nuclei with a constant growth rate. The structural transformation path in the reconstructive decomposition of c-In(OH)3 to c-In2O3 is discussed in terms of (i) the displacement of hydrogen atoms, which leads to breaking of the hydrogen bond between OH groups of [In(OH)6] octahedra and finally to their destabilization, and (ii) the transformation of the vertex-shared indium-oxygen octahedra in c-In(OH)3 to vertex- and edge-shared octahedra in c-In2O3. Graphical abstract: frame-by-frame analysis of video sequences of HR-TEM images reveals that a single cube-shaped In(OH)3 nanocrystal with {100} morphology decomposes into bixbyite-type In2O3 domains while being imaged; the mechanism is evaluated through the structural relationship between the initial and transformed phases and through the kinetics followed via the time-resolved shrinkage of the initial crystal. Highlights: in situ time-resolved high-resolution transmission electron microscopy; crystallographic transformation path; kinetics of the decomposition in one nanocrystal.
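The kinetic analysis in the final step reduces to a linear fit in Avrami coordinates. A short sketch (ours) with synthetic data:

```python
# Avrami-Erofeev: alpha(t) = 1 - exp(-(k t)^n), so
# ln(-ln(1 - alpha)) = n ln t + n ln k, and the slope gives n.
import numpy as np

def avrami_exponent(t, alpha):
    n, _ = np.polyfit(np.log(t), np.log(-np.log(1.0 - alpha)), 1)
    return n

t = np.linspace(20.0, 200.0, 50)              # synthetic check: n = 3
alpha = 1.0 - np.exp(-(0.01 * t)**3)
print(avrami_exponent(t, alpha))              # ~3.0
```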
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target not only multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy (in space-time) simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g., coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g., in continuum-atomistic simulations.
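A minimal stand-in for the statistical-learning ingredient (ours; plain Gaussian-process regression from scikit-learn rather than full coKriging) estimates patch-boundary values, with uncertainty, from sparse coarse samples:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x_coarse = np.linspace(0.0, 1.0, 8)[:, None]   # sparse low-resolution samples
y_coarse = np.sin(2 * np.pi * x_coarse).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(x_coarse, y_coarse)
x_bc = np.array([[0.37], [0.91]])              # patch-boundary locations
mean, std = gp.predict(x_bc, return_std=True)  # estimate plus uncertainty
```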
NASA Astrophysics Data System (ADS)
Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang
2017-01-01
Extracting informative statistical features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is therefore an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of wavelet decomposition, some experiments show that the result is the opposite, and a rigorous explanation has been lacking. To solve this problem, a comparison result based on a rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This result opens a theoretical discussion on steganalysis and on the question of finding suitable statistical features.
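The two feature families can be computed side by side from any subband's coefficients. A sketch (ours; normalizations in the literature vary):

```python
import numpy as np

def pdf_cf_moments(coeffs, n_bins=256, order=2):
    hist, edges = np.histogram(coeffs, bins=n_bins, density=True)
    hist = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    # PDF moments: computed directly on the (spatial-domain) histogram.
    pdf_m = [np.sum(hist * np.abs(centers)**k) for k in range(1, order + 1)]
    # CF moments: computed on the magnitude of the Fourier-transformed histogram.
    cf = np.abs(np.fft.rfft(hist))
    freqs = np.arange(len(cf))
    cf_m = [np.sum(cf * freqs**k) / np.sum(cf) for k in range(1, order + 1)]
    return pdf_m, cf_m
```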
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor from SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast. Thus, the method can be used on extremely large telescopes, e.g., for eXtreme adaptive optics systems.
Object detection with a multistatic array using singular value decomposition
Hallquist, Aaron T.; Chambers, David H.
2014-07-01
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
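A sketch of that processing chain (ours; thresholding the leading singular value against a no-object baseline is one simple detection rule):

```python
import numpy as np

def singular_values_by_frequency(returns):
    """returns: (n_tx, n_rx, n_time) array of time-domain return signals."""
    spectra = np.fft.rfft(returns, axis=-1)       # time -> frequency domain
    return np.stack([np.linalg.svd(spectra[:, :, f], compute_uv=False)
                     for f in range(spectra.shape[-1])])

def detect(returns, baseline_sv, threshold=3.0):
    sv = singular_values_by_frequency(returns)
    # Flag an object if any frequency's leading singular value is anomalous.
    return bool(np.any(sv[:, 0] > threshold * baseline_sv[:, 0]))
```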
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e., for a wide range of material properties, grid spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e., a single implicit solve of the temperature equation in each domain). In extreme cases (e.g., very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
The stratospheric QBO signal in the NCEP reanalysis, 1958-2001
NASA Astrophysics Data System (ADS)
Ribera, Pedro; Gallego, David; Peña-Ortiz, Cristina; Gimeno, Luis; Garcia-Herrera, Ricardo; Hernandez, Emiliano; Calvo, Natalia
2003-07-01
The spatiotemporal evolution of the zonal wind in the stratosphere is analyzed based on the use of the NCEP reanalysis (1958-2001). MultiTaper Method-Singular Value Decomposition (MTM-SVD), a frequency-domain analysis method, is applied to isolate significant spatially-coherent variability with narrowband oscillatory character. A quasibiennial oscillation is detected as the most intense coherent signal in the stratosphere, the signal being less intense in the lower levels. There is a clear downward propagation of the signal with time at low latitudes, not evident at mid and high latitudes. There are differences in the behavior of the signal over both hemispheres, being much weaker over the SH. In the NH an anomaly in the zonal wind field, in phase with the equatorial signal, is detected at approximately 60°N. Two different areas at subtropical latitudes are detected to be characterized by wind anomalies opposed to that of the equator.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
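The factor-once, solve-many pattern, together with RCM bandwidth reduction, can be sketched with SciPy (ours; a random symmetric matrix stands in for the Newmark-Beta system matrix):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

n = 2000
A = sp.random(n, n, density=5e-3, format='csr')
A = A + A.T + 10.0 * sp.eye(n)                 # stand-in system matrix
perm = reverse_cuthill_mckee(A.tocsr(), symmetric_mode=True)
inv = np.argsort(perm)
A_p = A[perm][:, perm].tocsc()                 # bandwidth-reduced matrix

lu = splu(A_p)                                 # LU factorization, done once
for step in range(100):                        # reused at every time step
    rhs = np.random.rand(n)
    x = lu.solve(rhs[perm])[inv]               # solve in permuted ordering
```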
ERIC Educational Resources Information Center
Casey, Beth M.; Lombardi, Caitlin McPherran; Pollock, Amanda; Fineman, Bonnie; Pezaris, Elizabeth
2017-01-01
This study investigated longitudinal pathways leading from early spatial skills in first-grade girls to their fifth-grade analytical math reasoning abilities (N = 138). First-grade assessments included spatial skills, verbal skills, addition/subtraction skills, and frequency of choice of a decomposition or retrieval strategy on the…
Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations
NASA Astrophysics Data System (ADS)
Barton, C.; Cai, M.
2015-12-01
Equatorial waves have been studied extensively due to their importance to tropical climate and weather systems. Historically, their activity has been diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to obtain time-dependent spatial wave structures. In this study, we propose a methodology that identifies individual wave modes in instantaneous observed fields by determining their projections onto PCF modes according to equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for the spatial structures of all waves, including inertia-gravity (IG) waves, for a given instantaneous observed field. We have applied the method to the ERA-Interim reanalysis dataset in the tropical stratosphere, where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well understood. From the observations we confirm the continuous evolution of the wave selection mechanism in the stratosphere predicted by QBO theory. This also validates the proposed method for decomposing observed tropical wave fields into non-orthogonal equatorial wave modes.
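A minimal sketch of the basic building block of such analyses: projecting a meridional profile onto orthonormal parabolic cylinder (equivalently, Hermite) functions. The nondimensional meridional coordinate, trapping scale, and test profile are hypothetical, and the paper's closed-system construction for non-orthogonal modes is not reproduced here.

```python
# Least-squares projection of a meridional profile onto Hermite functions,
# the meridional basis of equatorial wave theory.
import numpy as np
from scipy.special import eval_hermite
from math import factorial, pi, sqrt

def hermite_function(n, y):
    # orthonormal Hermite function: H_n(y) exp(-y^2/2) / sqrt(2^n n! sqrt(pi))
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return eval_hermite(n, y) * np.exp(-0.5 * y**2) / norm

y = np.linspace(-5, 5, 201)                    # meridional coordinate y / y0
profile = np.exp(-0.5 * y**2) * (1 + 0.5 * y)  # toy observed field at one longitude

nmodes = 6
Phi = np.column_stack([hermite_function(n, y) for n in range(nmodes)])
coeffs, *_ = np.linalg.lstsq(Phi, profile, rcond=None)  # projection coefficients
recon = Phi @ coeffs
print("mode amplitudes:", np.round(coeffs, 3))
```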
Climate fails to predict wood decomposition at regional scales
NASA Astrophysics Data System (ADS)
Bradford, Mark A.; Warren, Robert J., II; Baldrian, Petr; Crowther, Thomas W.; Maynard, Daniel S.; Oldfield, Emily E.; Wieder, William R.; Wood, Stephen A.; King, Joshua R.
2014-07-01
Decomposition of organic matter strongly influences ecosystem carbon storage. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on mean responses can be irrelevant and misleading. We test whether climate controls on the decomposition rate of dead wood--a carbon stock estimated to represent 73 +/- 6 Pg carbon globally--are sensitive to the spatial scale from which they are inferred. We show that the common assumption that climate is a predominant control on decomposition is supported only when local-scale variation is aggregated into mean values. Disaggregated data instead reveal that local-scale factors explain 73% of the variation in wood decomposition, and climate only 28%. Further, the temperature sensitivity of decomposition estimated from local versus mean analyses is 1.3 times greater. Fundamental issues with mean correlations were highlighted decades ago, yet mean climate-decomposition relationships are used to generate simulations that inform management and adaptation under environmental change. Our results suggest that to predict accurately how decomposition will respond to climate change, models must account for local-scale factors that control regional dynamics.
Information criteria for quantifying loss of reversibility in parallelized KMC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
Information criteria for quantifying loss of reversibility in parallelized KMC
NASA Astrophysics Data System (ADS)
Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc
2017-01-01
Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.
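The following is a generic pairwise-flux estimator of entropy production rate from an observed trajectory, in the spirit of the criterion above. It is a textbook construction on a toy state sequence, not the SPPARKS-specific a posteriori estimators developed in the paper.

```python
# Empirical entropy production rate of a Markov jump process: it vanishes
# exactly when the observed transition fluxes satisfy detailed balance.
import numpy as np
from collections import Counter

def entropy_production_rate(trajectory, total_time):
    # count observed transitions i -> j
    counts = Counter(zip(trajectory[:-1], trajectory[1:]))
    epr = 0.0
    for (i, j), nij in counts.items():
        if i < j:                       # visit each unordered pair once
            nji = counts.get((j, i), 0)
            if nij > 0 and nji > 0:     # both directions observed
                epr += (nij - nji) * np.log(nij / nji)
    return epr / total_time

rng = np.random.default_rng(0)
traj = rng.integers(0, 4, size=10000)   # placeholder trajectory over 4 states
print(entropy_production_rate(traj, total_time=10000.0))
```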
Phase segregation in multiphase turbulent channel flow
NASA Astrophysics Data System (ADS)
Bianco, Federico; Soldati, Alfredo
2014-11-01
The phase segregation of a rapidly quenched mixture (spinodal decomposition) is investigated numerically using a phase-field approach. Direct numerical simulation of the coupled Navier-Stokes and Cahn-Hilliard equations is performed with spectral accuracy, with the focus on domain-growth scaling laws over a wide range of regimes. The numerical method is first validated against well-known results from the literature; spinodal decomposition in a turbulent bounded flow (channel flow) is then considered. As in the homogeneous isotropic case, turbulent fluctuations suppress the segregation process when the surface tension at the interfaces is relatively low (low Weber number regimes). In these regimes, the size of the segregated domains reaches a statistically steady state due to mixing and break-up phenomena. In contrast with homogeneous isotropic turbulence, the presence of mean shear leads to a typical domain size that depends on wall distance. Finally, preliminary results on the effect of phase segregation on the drag forces at the wall are discussed. Regione FVG, program PAR-FSC.
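For readers wanting a concrete picture of the Cahn-Hilliard part, here is a minimal pseudo-spectral time stepper for spinodal decomposition in a doubly periodic box. The Navier-Stokes coupling, wall-bounded geometry, and all parameter values are outside the scope of this toy sketch.

```python
# Semi-implicit spectral step for c_t = Lap(c^3 - c - kappa Lap c):
# explicit nonlinearity, implicit fourth-order term.
import numpy as np

N, L, dt, kappa = 128, 2 * np.pi, 0.01, 1e-3
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

rng = np.random.default_rng(1)
c = 0.01 * rng.standard_normal((N, N))       # quenched mixture, small noise

for step in range(500):
    mu_hat = np.fft.fft2(c**3 - c)           # nonlinear part of the potential
    c_hat = np.fft.fft2(c)
    c_hat = (c_hat - dt * K2 * mu_hat) / (1.0 + dt * kappa * K2**2)
    c = np.real(np.fft.ifft2(c_hat))
# domain growth can be tracked, e.g., via moments of the structure factor
```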
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
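A schematic of the three-part heuristic in code form, with non-negative least squares standing in for the per-sub-problem solver and a smallest-residual rule standing in for the rejection criteria (the actual MARS-MD variants use segmentation- or angle-based rejection). The attenuation matrix and voxel measurement are synthetic.

```python
# (1) split into sub-problems, (2) solve each, (3) reject all but one;
# the surviving solution is sparse in the material domain.
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bins, n_materials = 4, 6
A = rng.random((n_bins, n_materials))        # energy-bin attenuation signatures
true_x = np.zeros(n_materials); true_x[[1, 4]] = [0.7, 0.3]
y = A @ true_x                               # noise-free toy voxel measurement

best = None
for subset in combinations(range(n_materials), 2):   # (1) sub-problems
    x_sub, resid = nnls(A[:, list(subset)], y)       # (2) constrained solve
    if best is None or resid < best[1]:              # (3) rejection criterion
        best = (subset, resid, x_sub)

subset, resid, x_sub = best
x = np.zeros(n_materials); x[list(subset)] = x_sub
print(subset, np.round(x, 3))
```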
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, and the weight maps for each scale are then obtained using saliency detection and filtering, with three different fusion rules applied at different scales. The three fusion rules address the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene fully displayed. Experimental comparisons with state-of-the-art fusion methods show that HMSD-GDGF has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
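A stripped-down two-scale analogue of such a fusion pipeline, with a box filter standing in for the (gradient domain) guided image filter, averaging for the base level, and a max-absolute rule for the detail level. The inputs, filter size, and fusion rules here are illustrative assumptions, not the HMSD-GDGF algorithm itself.

```python
# Two-scale decomposition + per-level fusion rules for co-registered
# IR/visible float images in [0, 1].
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_two_scale(ir, vis, size=15):
    base_ir, base_vis = uniform_filter(ir, size), uniform_filter(vis, size)
    det_ir, det_vis = ir - base_ir, vis - base_vis           # detail levels
    fused_base = 0.5 * (base_ir + base_vis)                  # base-level rule
    fused_det = np.where(np.abs(det_ir) >= np.abs(det_vis),  # max-abs detail rule
                         det_ir, det_vis)
    return np.clip(fused_base + fused_det, 0.0, 1.0)

rng = np.random.default_rng(3)
ir, vis = rng.random((64, 64)), rng.random((64, 64))         # placeholder images
fused = fuse_two_scale(ir, vis)
```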
Spatial-frequency composite watermarking for digital image copyright protection
NASA Astrophysics Data System (ADS)
Su, Po-Chyi; Kuo, C.-C. Jay
2000-05-01
Digital watermarks can be classified into two categories according to the embedding and retrieval domain, i.e. spatial- and frequency-domain watermarks. Because the two watermarks have different characteristics and limitations, combinations of them can have various interesting properties when applied to different applications. In this research, we examine two spatial-frequency composite watermarking schemes. In both cases, a frequency-domain watermarking technique is applied as a baseline structure in the system. The embedded frequency-domain watermark is robust against filtering and compression. A spatial-domain watermarking scheme is then built to compensate for some deficiencies of the frequency-domain scheme. The first composite scheme embeds a robust watermark in images to convey copyright or author information. The frequency-domain watermark contains the owner's identification number while the spatial-domain watermark is embedded for image registration to resist cropping attacks. The second composite scheme embeds a fragile watermark for image authentication. The spatial-domain watermark helps in locating the tampered part of the image while the frequency-domain watermark indicates the source of the image and prevents double watermarking attacks. Experimental results show that the two watermarks do not interfere with each other and different functionalities can be achieved. Watermarks in both domains are detected without resorting to the original image. Furthermore, the resulting watermarked image can still preserve high fidelity without serious visual degradation.
Teachers' Spatial Anxiety Relates to 1st- and 2nd-Graders' Spatial Learning
ERIC Educational Resources Information Center
Gunderson, Elizabeth A.; Ramirez, Gerardo; Beilock, Sian L.; Levine, Susan C.
2013-01-01
Teachers' anxiety about an academic domain, such as math, can impact students' learning in that domain. We asked whether this relation held in the domain of spatial skill, given the importance of spatial skill for success in math and science and its malleability at a young age. We measured 1st- and 2nd-grade teachers' spatial anxiety…
F4, E6 and G2 exceptional gauge groups in the vacuum domain structure model
NASA Astrophysics Data System (ADS)
Shahlaei, Amir; Rafibakhsh, Shahnoosh
2018-03-01
Using a vacuum domain structure model, we calculate trivial static potentials in various representations of the F4, E6, and G2 exceptional groups by means of the unit center element. Due to the absence of nontrivial center elements, the potential of every representation is screened at far distances. However, the linear part is observed at intermediate quark separations and is investigated by the decomposition of the exceptional group to its maximal subgroups. Comparing the group factor of the supergroup with the corresponding one obtained from the nontrivial center elements of the SU(3) subgroup shows that SU(3) is not the direct cause of temporary confinement in any of the exceptional groups. However, the trivial potential obtained from the group decomposition into the SU(3) subgroup is the same as the potential of the supergroup itself. In addition, any regular or singular decomposition into the SU(2) subgroup that produces the Cartan generator with the same elements as h1, in any exceptional group, leads to the linear intermediate potential of the exceptional gauge groups. The other SU(2) decompositions with the Cartan generator different from h1 are still able to describe the linear potential if the number of SU(2) nontrivial center elements that emerge in the decompositions is the same. As a result, it is the center vortices quantized in terms of nontrivial center elements of the SU(2) subgroup that give rise to the intermediate confinement in the static potentials.
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear reactor core is expensive in terms of memory storage and computational time. To address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. The method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from both the continuous and the discrete points of view, and we give some numerical results for a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
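The following toy reproduces the structure of such a non-overlapping Schwarz iteration with Robin interface conditions on a 1D Poisson stand-in; the mixed diffusion discretization and the MINOS solver are not modeled, the Robin weight p is hand-picked, and interface fluxes use one-sided differences.

```python
# Lions-type Robin-Robin Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0,
# split at x = 0.5; exact interface value is u(0.5) = 0.125.
import numpy as np

n, p, f = 50, 4.0, 1.0          # points per subdomain, Robin weight, source
h = 0.5 / n

def solve_subdomain(interface_left, g):
    # Dirichlet 0 on the outer end; Robin du/dn + p*u = g on the interface end.
    A = np.zeros((n + 1, n + 1)); b = np.full(n + 1, f)
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1/h**2, 2/h**2, -1/h**2
    if interface_left:                 # right subdomain: interface at index 0
        A[0, 0], A[0, 1] = 1/h + p, -1/h; b[0] = g
        A[n, n] = 1.0; b[n] = 0.0
    else:                              # left subdomain: interface at index n
        A[0, 0] = 1.0; b[0] = 0.0
        A[n, n], A[n, n - 1] = 1/h + p, -1/h; b[n] = g
    return np.linalg.solve(A, b)

g1 = g2 = 0.0
for it in range(100):
    u1 = solve_subdomain(False, g1)    # [0, 0.5]
    u2 = solve_subdomain(True,  g2)    # [0.5, 1]
    # exchange Robin data built from the neighbor's trace and flux
    g1 = (u2[1] - u2[0]) / h + p * u2[0]
    g2 = -(u1[n] - u1[n - 1]) / h + p * u1[n]
    if abs(u1[n] - u2[0]) < 1e-10:     # interface traces agree -> converged
        break
print(it, u1[n], "exact u(0.5) =", 0.125)
```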
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
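A bare-bones sketch of the Sherman--Morrison--Woodbury mechanism underlying the low-rank correction: given a cached solver for A0 and a rank-k correction A = A0 + U V^T, solves with A reuse only solves with A0 plus a small k-by-k system. Random dense stand-ins replace the distributed matrices, and the Lanczos construction of the low-rank term is not shown.

```python
# Woodbury: A^{-1} b = A0^{-1} b - A0^{-1} U (I + V^T A0^{-1} U)^{-1} V^T A0^{-1} b
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 5
A0 = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1   # well-conditioned part
U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
A = A0 + U @ V.T
b = rng.standard_normal(n)

solve_A0 = lambda rhs: np.linalg.solve(A0, rhs)  # stands in for a cached factorization

y = solve_A0(b)
Z = solve_A0(U)                                  # n-by-k, one block solve
small = np.eye(k) + V.T @ Z                      # k-by-k capacitance matrix
x = y - Z @ np.linalg.solve(small, V.T @ y)

print(np.linalg.norm(A @ x - b))                 # ~ machine precision
```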
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ruipeng; Saad, Yousef
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Characteristic eddy decomposition of turbulence in a channel
NASA Technical Reports Server (NTRS)
Moin, Parviz; Moser, Robert D.
1991-01-01
The proper orthogonal decomposition technique (Lumley's decomposition) is applied to the turbulent flow in a channel to extract coherent structures by decomposing the velocity field into characteristic eddies with random coefficients. In the homogeneous spatial directions, a generalization of the shot-noise expansion is used to determine the characteristic eddies. In this expansion, the Fourier coefficients of the characteristic eddy cannot be obtained from the second-order statistics. Three different techniques are used to determine the phases of these coefficients. They are based on: (1) the bispectrum, (2) a spatial compactness requirement, and (3) a functional continuity argument. Results from these three techniques are found to be similar in most respects. The implications of these techniques and the shot-noise expansion are discussed. The dominant eddy is found to contribute as much as 76 percent to the turbulent kinetic energy. In both 2D and 3D, the characteristic eddies consist of an ejection region straddled by streamwise vortices that leave the wall in the very short streamwise distance of about 100 wall units.
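The linear-algebra core of such a decomposition can be sketched as an SVD of a snapshot matrix. The synthetic data, the Fourier treatment of the homogeneous directions, and the shot-noise phase reconstruction are all outside this toy example.

```python
# POD of mean-removed velocity snapshots: left singular vectors are the
# spatial modes, squared singular values the modal kinetic energies.
import numpy as np

rng = np.random.default_rng(5)
n_points, n_snapshots = 500, 100
# synthetic fluctuating snapshots (columns) with a low-rank coherent part
X = rng.standard_normal((n_points, 3)) @ rng.standard_normal((3, n_snapshots)) \
    + 0.05 * rng.standard_normal((n_points, n_snapshots))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)                 # modal energy fractions
print("leading-mode energy fraction:", round(float(energy[0]), 3))

# rank-r reconstruction retaining the dominant characteristic eddies
r = 3
X_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```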
A test of the hierarchical model of litter decomposition.
Bradford, Mark A; Veen, G F Ciska; Bonis, Anne; Bradford, Ella M; Classen, Aimee T; Cornelissen, J Hans C; Crowther, Thomas W; De Long, Jonathan R; Freschet, Gregoire T; Kardol, Paul; Manrubia-Freixa, Marta; Maynard, Daniel S; Newman, Gregory S; Logtestijn, Richard S P; Viketoft, Maria; Wardle, David A; Wieder, William R; Wood, Stephen A; van der Putten, Wim H
2017-12-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature and moisture), with the controlling effects of decomposers negligible at such broad spatial scales. Using a regional-scale litter decomposition experiment at six sites spanning from northern Sweden to southern France--and capturing both within- and among-site variation in putative controls--we find that contrary to predictions from the hierarchical model, decomposer (microbial) biomass strongly regulates decomposition at regional scales. Furthermore, the size of the microbial biomass dictates the absolute change in decomposition rates with changing climate variables. Our findings suggest the need for revision of the hierarchical model, with decomposers acting as both local- and broad-scale controls on litter decomposition rates, necessitating their explicit consideration in global biogeochemical models.
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.
Wavelet bases on the L-shaped domain
NASA Astrophysics Data System (ADS)
Jouini, Abdellatif; Lemarié-Rieusset, Pierre Gilles
2013-07-01
We present in this paper two elementary constructions of multiresolution analyses on the L-shaped domain D. In the first one, we shall describe a direct method to define an orthonormal multiresolution analysis. In the second one, we use the decomposition method for constructing a biorthogonal multiresolution analysis. These analyses are adapted for the study of the Sobolev spaces Hs(D) (s ∈ N).
Spatial Thinking in Atmospheric Science Education
NASA Astrophysics Data System (ADS)
McNeal, P. M.; Petcovic, H. L.; Ellis, T. D.
2016-12-01
Atmospheric science is a STEM discipline that involves the visualization of three-dimensional processes from two-dimensional maps, interpretation of computer-generated graphics, and hand plotting of isopleths. Thus, atmospheric science draws heavily upon spatial thinking. Research has shown that spatial thinking ability can be a predictor of early success in STEM disciplines, and substantial evidence demonstrates that spatial thinking ability is improved through various interventions. Therefore, identification of the spatial thinking skills and cognitive processes used in atmospheric science is the first step toward development of instructional strategies that target these skills and scaffold the learning of students in atmospheric science courses. A pilot study of expert and novice meteorologists identified mental animation and disembedding as key spatial skills used in the interpretation of multiple weather charts and images. Using this as a starting point, we investigated how these spatial skills, together with expertise, domain-specific knowledge, and working memory capacity, affect the ability to produce an accurate forecast. Participants completed a meteorology concept inventory, an experience questionnaire, and psychometric tests of spatial thinking ability and working memory capacity prior to completing a forecasting task. A quantitative analysis of the collected data investigated the effect of the predictor variables on the outcome task. A think-aloud protocol with individual participants provided a qualitative look at processes such as task decomposition, rule-based reasoning, and the formation of mental models, in an attempt to understand how individuals process this complex data and describe outcomes of particular meteorological scenarios. With our preliminary results we aim to inform atmospheric science education from a cognitive science perspective. The results point to a need to collaborate broadly with the atmospheric science community, so that multiple educational pipelines are affected, including university meteorology courses for majors and non-majors, military weather forecaster preparation, and professional training for operational meteorologists, thereby improving student learning and the continued development of the current and future workforce.
A unifying model of concurrent spatial and temporal modularity in muscle activity.
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2014-02-01
Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.
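For illustration, here is a plain multiplicative-update nonnegative tri-factorization M ≈ W S H (spatial modules, combination weights, temporal modules). This generic Frobenius-norm scheme is only a stand-in for the paper's sample-based sNM3F algorithm, and the EMG matrix is a random placeholder.

```python
# Alternating multiplicative updates for min ||M - W S H||_F with W, S, H >= 0.
import numpy as np

rng = np.random.default_rng(6)
n_muscles, n_time, ks, kt = 12, 50, 3, 4
M = rng.random((n_muscles, n_time))            # placeholder muscle-by-time matrix

W = rng.random((n_muscles, ks)) + 0.1          # spatial modules
S = rng.random((ks, kt)) + 0.1                 # module combination weights
H = rng.random((kt, n_time)) + 0.1             # temporal modules
eps = 1e-9                                     # guards against division by zero

for _ in range(500):
    SH = S @ H
    W *= (M @ SH.T) / (W @ SH @ SH.T + eps)
    WS = W @ S
    H *= (WS.T @ M) / (WS.T @ WS @ H + eps)
    S *= (W.T @ M @ H.T) / (W.T @ W @ S @ H @ H.T + eps)

print("relative error:", np.linalg.norm(M - W @ S @ H) / np.linalg.norm(M))
```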
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both in its native form and when the PT is combined with a denoising process.
Yang, L. H.; Brooks III, E. D.; Belak, J.
1992-01-01
A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells and the storage for that domain is private to the processor. Within this scheme, the interdomain (i.e., interprocessor) communication is minimized.
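A serial sketch of the linked-cell neighbor search described here: atoms are binned into subcells of edge at least the cutoff, so near neighbors are found by scanning only the surrounding 3x3x3 block of cells. The PCP parallel layer and per-processor private storage are not reproduced.

```python
# Linked-cell neighbor search in a periodic cubic box.
import numpy as np

L_box, rc, n_atoms = 10.0, 1.0, 500
rng = np.random.default_rng(7)
pos = rng.random((n_atoms, 3)) * L_box

ncell = int(L_box // rc)                       # cells per edge (edge >= rc)
cell_of = (pos / (L_box / ncell)).astype(int) % ncell
cells = {}                                     # cell index -> list of atoms
for a, c in enumerate(map(tuple, cell_of)):
    cells.setdefault(c, []).append(a)

def neighbors(a):
    # scan the 27 cells around atom a, with periodic images
    out = []
    cx, cy, cz = cell_of[a]
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                for b in cells.get(key, []):
                    if b == a:
                        continue
                    d = pos[b] - pos[a]
                    d -= L_box * np.round(d / L_box)   # minimum image
                    if d @ d < rc * rc:
                        out.append(b)
    return out

print(len(neighbors(0)), "neighbors within cutoff of atom 0")
```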
Terascale Optimal PDE Simulations (TOPS) Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Professor Olof B. Widlund
2007-07-09
Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of SANDIA, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by the SANDIA National Laboratories for very large scale computations, and as already noted, BDDC was first developed by a SANDIA scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges. The success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as the access to a variety of parallel and distributed computer systems. The success in finding scalable and robust domain decomposition algorithms for very large numbers of processors and very large finite element problems is, e.g., illustrated in [24, 25, 26]. This work is based on [29, 31]. Our work over these five and a half years has, in our opinion, helped advance the knowledge of domain decomposition methods significantly. We see these methods as providing valuable alternatives to other iterative methods, in particular, those based on multi-grid. In our opinion, our accomplishments also match the goals of the TOPS project quite closely.
2010-01-01
Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. Datasets and source code are freely available on the web at http://liao.cis.udel.edu/pub/svdsvm. Implemented in Matlab and supported on Linux and MS Windows. PMID:21034480
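Schematically, the pipeline above can be mimicked with scikit-learn stand-ins: high-dimensional per-pair feature vectors (the paper's ipHMM Fisher scores) are compressed by a truncated SVD and classified by an SVM. The random features, labels, and five-fold validation below are placeholders; the paper uses leave-one-out validation on 3DID-derived pairs.

```python
# Feature compression (SVD) + supervised classification (SVM) for
# domain-pair interaction prediction, on synthetic stand-in data.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_pairs, n_features = 300, 2000
X = rng.standard_normal((n_pairs, n_features))   # stand-in Fisher-score vectors
y = rng.integers(0, 2, n_pairs)                  # interacting vs non-interacting

model = make_pipeline(TruncatedSVD(n_components=50, random_state=0),
                      SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy:", scores.mean().round(3))
```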
NASA Astrophysics Data System (ADS)
Mao, J.; Chen, N.; Harmon, M. E.; Li, Y.; Cao, X.; Chappell, M.
2012-12-01
Advanced 13C solid-state NMR techniques were employed to study the chemical structural changes of litter decomposition across broad spatial and long time scales. Fresh and decomposed litter samples of four species (Acer saccharum (ACSA), Drypetes glauca (DRGL), Pinus resinosa (PIRE), and Thuja plicata (THPL)), incubated for up to 10 years at four sites under different climatic conditions (from Arctic to tropical forest), were examined. Decomposition generally led to an enrichment of cutin and surface wax materials and a depletion of carbohydrates, causing the overall compositions to become more similar than those of the original litters. However, the changes in the main constituents were inconsistent among the four litters, which followed different decomposition pathways at the same site. As decomposition proceeded, waxy materials decreased at the early stage and then gradually increased in PIRE; DRGL showed a significant depletion of lignin and tannin, while the changes in lignin and tannin were relatively small and inconsistent for ACSA and THPL. In addition, the NCH groups, which could be associated with either fungal cell wall chitin or bacterial wall peptidoglycan, were enriched in all litters except THPL. Contrary to the classic lignin-enrichment hypothesis, DRGL, with its low-quality C substrate, had the highest degree of compositional change. Furthermore, some samples showed more "advanced" compositional changes in the intermediate stage of decomposition than in the highly decomposed stage. This pattern might be attributed to the formation of new cross-linking structures that rendered substrates more complex and difficult for enzymes to attack. Finally, litter quality overrode climate and time as controls on the long-term changes in chemical composition.
Urban Thermodynamic Island in a Coastal City Analysed from an Optimized Surface Network
NASA Astrophysics Data System (ADS)
Pigeon, Grégoire; Lemonsu, Aude; Long, Nathalie; Barrié, Joël; Masson, Valéry; Durand, Pierre
2006-08-01
Within the framework of ESCOMPTE, a French experiment performed in June and July 2001 in the south-east of France to study photo-oxidant pollution at the regional scale, the urban boundary layer (UBL) program focused on the study of the urban atmosphere over the coastal city of Marseille. A methodology developed to optimize a network of 20 stations measuring air temperature and moisture over the city is presented. It is based on the analysis of a numerical simulation, performed with the non-hydrostatic, mesoscale Meso-NH model, run with four nested grids down to a horizontal resolution of 250 m over the city and including a specific parametrization for the urban surface energy balance. A three-day period was modelled and evaluated against data collected during the preparatory phase for the project in summer 2000. The simulated thermodynamic surface fields were analysed using an empirical orthogonal function (EOF) decomposition in order to determine the optimal network configuration designed to capture the dominant characteristics of the fields. This is the first application of this kind of methodology in urban meteorology. The network, of 20 temperature and moisture sensors, was implemented during the UBL-ESCOMPTE experiment and continuously recorded data from 12 June to 14 July 2001. The measurements were analysed in order to assess the spatio-temporal structure of the urban thermodynamic island, also using EOF decomposition. During nighttime, the influence of urbanization on temperature is clear: the field is characterized by concentric thermo-pleths around the old core of the city, which is the warmest area of the domain. The moisture field is more influenced by proximity to the sea and airflow patterns. During the day, the sea breeze often moves from west or south-west, and consequently the spatial pattern for both parameters is characterized by a gradient perpendicular to the shoreline. Finally, in order to assess the methodology adopted, the spatial structures extracted from the simulation of the 2000 preparatory campaign and observations gathered in 2001 have been compared. They are highly correlated, which is a relevant validation of the methodology proposed. The relations between these spatial structures and geographical characteristics of the site have also been studied. High correlations between the nighttime temperature spatial structure and urban cover fraction or street aspect ratio are observed and simulated. For temperature during daytime, or moisture during both daytime and nighttime, these geographical factors are not correlated with the spatial structures of the thermodynamic fields.
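One common way to turn an EOF analysis into a sensor-placement rule, loosely in the spirit of the network optimization above, is pivoted QR on the leading modes. The data matrix, the heuristic itself, and the station count below are assumptions for illustration, not the paper's exact procedure.

```python
# EOFs by SVD of centered space-time data, then station sites chosen by
# pivoted QR: grid points that best sample the leading-EOF subspace.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(9)
n_grid, n_times, n_stations = 1000, 200, 20
T = rng.standard_normal((n_grid, 5)) @ rng.standard_normal((5, n_times))  # toy fields

anom = T - T.mean(axis=1, keepdims=True)          # anomalies about the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = U[:, :n_stations]                          # leading spatial patterns

_, _, piv = qr(eofs.T, pivoting=True)             # rank grid points by leverage
station_sites = piv[:n_stations]
print("selected grid indices:", station_sites)
```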
Vergauwe, Evie; Barrouillet, Pierre; Camos, Valérie
2009-07-01
Examinations of interference between visual and spatial materials in working memory have suggested domain- and process-based fractionations of visuo-spatial working memory. The present study examined the role of central time-based resource sharing in visuo-spatial working memory and assessed its role in obtained interference patterns. Visual and spatial storage were combined with both visual and spatial on-line processing components in computer-paced working memory span tasks (Experiment 1) and in a selective interference paradigm (Experiment 2). The cognitive load of the processing components was manipulated to investigate its impact on concurrent maintenance for both within-domain and between-domain combinations of processing and storage components. In contrast to both domain- and process-based fractionations of visuo-spatial working memory, the results revealed that recall performance was determined by the cognitive load induced by the processing of items, rather than by the domain to which those items pertained. These findings are interpreted as evidence for a time-based resource-sharing mechanism in visuo-spatial working memory.
Calibrating electromagnetic induction conductivities with time-domain reflectometry measurements
NASA Astrophysics Data System (ADS)
Dragonetti, Giovanna; Comegna, Alessandro; Ajeel, Ali; Piero Deidda, Gian; Lamaddalena, Nicola; Rodriguez, Giuseppe; Vignoli, Giulio; Coppola, Antonio
2018-02-01
This paper deals with the issue of monitoring the spatial distribution of bulk electrical conductivity, σb, in the soil root zone by using electromagnetic induction (EMI) sensors under different water and salinity conditions. To deduce the actual distribution of depth-specific σb from EMI apparent electrical conductivity (ECa) measurements, we inverted the data by using a regularized 1-D inversion procedure designed to manage nonlinear multiple EMI-depth responses. The inversion technique is based on the coupling of the damped Gauss-Newton method with truncated generalized singular value decomposition (TGSVD). The ill-posedness of the EMI data inversion is addressed by using a sharp stabilizer term in the objective function. This specific stabilizer promotes the reconstruction of blocky targets, thereby contributing to enhance the spatial resolution of the EMI results in the presence of sharp boundaries (otherwise smeared out after the application of more standard Occam-like regularization strategies searching for smooth solutions). Time-domain reflectometry (TDR) data are used as ground-truth data for calibration of the inversion results. An experimental field was divided into four transects 30 m long and 2.8 m wide, cultivated with green bean, and irrigated with water at two different salinity levels and using two different irrigation volumes. Clearly, this induces different salinity and water contents within the soil profiles. For each transect, 26 regularly spaced monitoring soundings (1 m apart) were selected for the collection of (i) Geonics EM-38 and (ii) Tektronix reflectometer data. Despite the original discrepancies in the EMI and TDR data, we found a significant correlation of the means and standard deviations of the two data series; in particular, after a low-pass spatial filtering of the TDR data. Based on these findings, this paper introduces a novel methodology to calibrate EMI-based electrical conductivities via TDR direct measurements. This calibration strategy consists of a linear mapping of the original inversion results into a new conductivity spatial distribution with the coefficients of the transformation uniquely based on the statistics of the two original measurement datasets (EMI and TDR conductivities).
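The calibration strategy described in the final sentences reduces to a moment-matching linear map: a gain from the ratio of spreads and an offset matching the means. A sketch follows, with synthetic stand-ins for the EMI-inverted and TDR conductivities.

```python
# Linear EMI-to-TDR calibration whose coefficients depend only on the
# means and standard deviations of the two datasets.
import numpy as np

def calibrate_emi(sigma_emi, sigma_tdr):
    mu_e, sd_e = sigma_emi.mean(), sigma_emi.std()
    mu_t, sd_t = sigma_tdr.mean(), sigma_tdr.std()
    a = sd_t / sd_e                   # gain: ratio of spreads
    b = mu_t - a * mu_e               # offset: match the means
    return a * sigma_emi + b

rng = np.random.default_rng(10)
sigma_tdr = 50 + 10 * rng.standard_normal(26)                # 26 soundings, mS/m
sigma_emi = 0.6 * sigma_tdr + 12 + rng.standard_normal(26)   # biased EMI estimate
sigma_cal = calibrate_emi(sigma_emi, sigma_tdr)
print(sigma_cal.mean().round(2), sigma_tdr.mean().round(2))  # means now matched
```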
Proceedings for the ICASE Workshop on Heterogeneous Boundary Conditions
NASA Technical Reports Server (NTRS)
Perkins, A. Louise; Scroggs, Jeffrey S.
1991-01-01
Domain decomposition is a complex problem with many interesting aspects. The choice of decomposition can be made based on many different criteria, and the choices of interface and internal boundary conditions are numerous. The various regions under study may have different dynamical balances, indicating that different physical processes dominate the flow in these regions. This conference was called in recognition of the need to define the nature of these complex problems more clearly. These proceedings are a collection of the presentations and the discussion groups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sidler, Rolf, E-mail: rsidler@gmail.com; Carcione, José M.; Holliger, Klaus
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
Malinina, E S
2014-01-01
The spatial specificity of the auditory aftereffect was studied after a short-term adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses of constant amplitude; test stimuli had either constant or changing amplitude: an increase in the amplitude of impulses in a sequence was perceived by listeners as approach of the sound source, while a decrease was perceived as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic peculiarities that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of the psychophysical curves was observed: the listeners estimated the test stimuli more often as approaching. The overestimation of test stimuli as approaching was more pronounced when they were presented from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only when the adapting and test stimuli coincided spatially and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: the listeners estimated the test stimuli more often as withdrawing compared with control. The result of this aftereffect was a restoration of the symmetry of the psychometric curves and of the equiprobable estimation of the direction of movement of the test signals.
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, motivated in part by its potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods have only recently begun to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface problem, are described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widlund, Olof B.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electro-magnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Adaptive multigrid domain decomposition solutions for viscous interacting flows
NASA Technical Reports Server (NTRS)
Rubin, Stanley G.; Srinivasan, Kumar
1992-01-01
Several viscous incompressible flows with strong pressure interaction and/or axial flow reversal are considered with an adaptive multigrid domain decomposition procedure. Specific examples include the triple deck structure surrounding the trailing edge of a flat plate, the flow recirculation in a trough geometry, and the flow in a rearward facing step channel. For the latter case, there are multiple recirculation zones, of different character, for laminar and turbulent flow conditions. A pressure-based form of flux-vector splitting is applied to the Navier-Stokes equations, which are represented by an implicit lowest-order reduced Navier-Stokes (RNS) system and a purely diffusive, higher-order, deferred-corrector. A trapezoidal or box-like form of discretization ensures that all mass conservation properties are satisfied at interfacial and outflow boundaries, even for this primitive-variable, non-staggered grid computation.
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
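A schematic of the partitioned structure described above, for two 1D rods: each domain is advanced with its own implicit backward-Euler solve, and the solves are coupled only through an explicit generalized Robin exchange at the shared interface. The Robin weight p here is a hand-picked constant, not the optimized value the paper derives from its stability analysis, and one-sided differences stand in for the full interface treatment.

```python
# Two heat-conducting rods on [0,0.5] and [0.5,1], T(0)=1, T(1)=0;
# implicit interior solves, explicit Robin coupling at x = 0.5.
import numpy as np

n, h, dt, p = 40, 0.5 / 40, 1e-3, 1.0
k1, k2 = 1.0, 0.2                      # conductivities of the two rods

def implicit_step(T_old, k, g, interface_right):
    # backward Euler for T_t = k T_xx; Dirichlet on the outer end,
    # generalized Robin  k dT/dn + p T = g  on the interface end
    A = np.zeros((n + 1, n + 1)); b = T_old / dt
    for i in range(1, n):
        A[i, i-1], A[i, i], A[i, i+1] = -k/h**2, 1/dt + 2*k/h**2, -k/h**2
    if interface_right:                # left rod: T(0)=1, interface at index n
        A[0, 0] = 1.0; b[0] = 1.0
        A[n, n], A[n, n-1] = k/h + p, -k/h; b[n] = g
    else:                              # right rod: interface at index 0, T(1)=0
        A[0, 0], A[0, 1] = k/h + p, -k/h; b[0] = g
        A[n, n] = 1.0; b[n] = 0.0
    return np.linalg.solve(A, b)

T1, T2 = np.ones(n + 1), np.zeros(n + 1)
for step in range(5000):               # march toward steady state
    # explicit coupling: Robin data from the neighbor's previous step
    g1 = k2 * (T2[1] - T2[0]) / h + p * T2[0]       # flux + trace from rod 2
    g2 = -k1 * (T1[n] - T1[n-1]) / h + p * T1[n]    # flux + trace from rod 1
    T1 = implicit_step(T1, k1, g1, True)
    T2 = implicit_step(T2, k2, g2, False)
print(round(float(T1[n]), 4), "steady interface value, expected", k1 / (k1 + k2))
```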
A stable and accurate partitioned algorithm for conjugate heat transfer
Meng, F.; Banks, J. W.; Henshaw, W. D.; ...
2017-04-25
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
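As a small worked example of the Schur-complement-based strategy, here is a block upper-triangular preconditioner for a dense toy saddle point system, with a scaled identity standing in for the viscosity-weighted pressure mass matrix that usually approximates the Schur complement. Real geodynamic solvers replace the dense sub-solves with multigrid.

```python
# GMRES on a toy Stokes-like system K = [[A, B^T], [B, 0]] with a block
# triangular preconditioner P = [[A, B^T], [0, -S_hat]].
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(11)
nu_dofs, np_dofs, mu = 120, 40, 10.0
G = rng.standard_normal((nu_dofs, nu_dofs))
A = G @ G.T + nu_dofs * np.eye(nu_dofs)          # SPD "viscous" block
B = rng.standard_normal((np_dofs, nu_dofs))      # divergence operator stand-in
K = np.block([[A, B.T], [B, np.zeros((np_dofs, np_dofs))]])
rhs = rng.standard_normal(nu_dofs + np_dofs)

S_hat = (1.0 / mu) * np.eye(np_dofs)             # pressure mass-matrix surrogate

def apply_prec(r):
    ru, rp = r[:nu_dofs], r[nu_dofs:]
    p = -np.linalg.solve(S_hat, rp)              # pressure sub-solve first
    u = np.linalg.solve(A, ru - B.T @ p)         # then velocity sub-solve
    return np.concatenate([u, p])

M = spla.LinearOperator(K.shape, matvec=apply_prec)
x, info = spla.gmres(K, rhs, M=M, restart=160, maxiter=500)
print("gmres info:", info, "residual:", np.linalg.norm(K @ x - rhs))
```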
Unsupervised Spatial Event Detection in Targeted Domains with Applications to Civil Unrest Modeling
Zhao, Liang; Chen, Feng; Dai, Jing; Hua, Ting; Lu, Chang-Tien; Ramakrishnan, Naren
2014-01-01
Twitter has become a popular data source as a surrogate for monitoring and detecting events. Targeted domains such as crime, election, and social unrest require the creation of algorithms capable of detecting events pertinent to these domains. Due to the unstructured language, short-length messages, dynamics, and heterogeneity typical of Twitter data streams, it is technically difficult and labor-intensive to develop and maintain supervised learning systems. We present a novel unsupervised approach for detecting spatial events in targeted domains and illustrate this approach using one specific domain, viz. civil unrest modeling. Given a targeted domain, we propose a dynamic query expansion algorithm to iteratively expand domain-related terms, and generate a tweet homogeneous graph. An anomaly identification method is utilized to detect spatial events over this graph by jointly maximizing local modularity and spatial scan statistics. Extensive experiments conducted in 10 Latin American countries demonstrate the effectiveness of the proposed approach. PMID:25350136
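A toy sketch of the dynamic query expansion step described above: starting from a seed term, iteratively add the terms that co-occur most often with the current query set. The corpus, seed word, and simple count-based score are hypothetical simplifications of the paper's ranking.

```python
# Toy dynamic query expansion: grow a domain vocabulary from seed terms
# by repeatedly promoting the strongest co-occurring words.
from collections import Counter

tweets = [
    "protest march downtown tonight",
    "students join protest against fares",
    "march blocked the main avenue",
    "concert tickets on sale tomorrow",
]
seeds = {"protest"}

def expand(seeds, tweets, rounds=2, per_round=1):
    query = set(seeds)
    for _ in range(rounds):
        score = Counter()
        for t in tweets:
            words = set(t.split())
            if query & words:              # tweet matches the current query
                for w in words - query:
                    score[w] += 1          # hypothetical co-occurrence score
        query.update(w for w, _ in score.most_common(per_round))
    return query

print(expand(seeds, tweets))   # e.g. adds 'march', then a related term
```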
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
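A minimal sketch of the LULHS combination described above: stratified (Latin hypercube) standard-normal scores are given spatial correlation through a Cholesky (LU-type) factor of the covariance matrix. The grid, correlation length, and exponential covariance model are assumptions for illustration.

```python
# LULHS sketch: Latin hypercube normal scores + Cholesky factor of the
# spatial covariance -> one unconditional random-field realization.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 100                                   # grid nodes along a 1-D transect
x = np.linspace(0.0, 10.0, n)
corr_len = 2.0
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance

# Latin hypercube sample of standard-normal scores: one draw per stratum,
# shuffled so strata are assigned to grid nodes at random.
u = (np.arange(n) + rng.uniform(1e-9, 1.0, size=n)) / n
rng.shuffle(u)
z = norm.ppf(u)

L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # the "LU" factor of C
field = L @ z                                  # spatially correlated field
print(field[:5])
```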
Altun, Ahmet; Neese, Frank; Bistoni, Giovanni
2018-01-01
The local energy decomposition (LED) analysis allows for a decomposition of the accurate domain-based local pair natural orbital CCSD(T) [DLPNO-CCSD(T)] energy into physically meaningful contributions including geometric and electronic preparation, electrostatic interaction, interfragment exchange, dynamic charge polarization, and London dispersion terms. Herein, this technique is employed in the study of hydrogen-bonding interactions in a series of conformers of water and hydrogen fluoride dimers. Initially, DLPNO-CCSD(T) dissociation energies for the most stable conformers are computed and compared with available experimental data. Afterwards, the decay of the LED terms with the intermolecular distance (r) is discussed and results are compared with the ones obtained from the popular symmetry adapted perturbation theory (SAPT). It is found that, as expected, electrostatic contributions slowly decay for increasing r and dominate the interaction energies in the long range. London dispersion contributions decay, as expected, as r^-6. They significantly affect the depths of the potential wells. The interfragment exchange provides a further stabilizing contribution that decays exponentially with the intermolecular distance. This information is used to rationalize the trend of stability of various conformers of the water and hydrogen fluoride dimers.
A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, C. Kristopher; Hauck, Cory D.
In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
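A generic illustration of the interface reduction used here: eliminating subdomain interiors leaves a smaller Schur-complement system on the interface unknowns, which GMRES can solve matrix-free. The block matrix below is a random stand-in, not a Vlasov discretization; in the actual method the inner solve is where the transport sweep would sit.

```python
# Schur-complement reduction sketch: solve only for interface unknowns,
# then back-substitute the subdomain interiors.
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n_i, n_b = 300, 40                        # interior / interface unknowns
A = rng.standard_normal((n_i + n_b, n_i + n_b)) + 25 * np.eye(n_i + n_b)
b = rng.standard_normal(n_i + n_b)

Aii, Aib = A[:n_i, :n_i], A[:n_i, n_i:]
Abi, Abb = A[n_i:, :n_i], A[n_i:, n_i:]

def schur_mv(xb):
    # S xb = (Abb - Abi Aii^-1 Aib) xb, applied matrix-free; the inner
    # solve is where the sweep would be performed in the actual method.
    return Abb @ xb - Abi @ np.linalg.solve(Aii, Aib @ xb)

S = spla.LinearOperator((n_b, n_b), matvec=schur_mv)
rhs_b = b[n_i:] - Abi @ np.linalg.solve(Aii, b[:n_i])
xb, info = spla.gmres(S, rhs_b, restart=n_b)
xi = np.linalg.solve(Aii, b[:n_i] - Aib @ xb)   # back-substitute interiors
print("interface solve converged:", info == 0)
```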
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing equipment and also provide results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
Kinetics of the cellular decomposition of supersaturated solid solutions
NASA Astrophysics Data System (ADS)
Ivanov, M. A.; Naumuk, A. Yu.
2014-09-01
A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions that develops a spatially periodic lamellar (platelike) structure consisting of alternating lamellae of a precipitate phase based on the impurity component and of the depleted initial solid solution. One of the equations, which relates the parameters describing the decomposition process, is obtained by comparing two approaches to determining the rate of change in the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of the different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions have been derived that describe the decomposition rate, the interlamellar distance, and the concentration of impurities in the phase that remains after the decomposition. This concentration proves to be equal to the half-sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
NASA Astrophysics Data System (ADS)
He, Yujie; Yang, Jinyan; Zhuang, Qianlai; Harden, Jennifer W.; McGuire, Anthony D.; Liu, Yaling; Wang, Gangsheng; Gu, Lianhong
2015-12-01
Soil carbon dynamics of terrestrial ecosystems play a significant role in the global carbon cycle. Microbial-based decomposition models have seen much growth recently for quantifying this role, yet dormancy as a common strategy used by microorganisms has not usually been represented and tested in these models against field observations. Here we developed an explicit microbial-enzyme decomposition model and examined model performance with and without representation of microbial dormancy at six temperate forest sites of different forest types. We then extrapolated the model to global temperate forest ecosystems to investigate biogeochemical controls on soil heterotrophic respiration and microbial dormancy dynamics at different temporal-spatial scales. The dormancy model consistently produced better match with field-observed heterotrophic soil CO2 efflux (RH) than the no dormancy model. Our regional modeling results further indicated that models with dormancy were able to produce more realistic magnitude of microbial biomass (<2% of soil organic carbon) and soil RH (7.5 ± 2.4 Pg C yr-1). Spatial correlation analysis showed that soil organic carbon content was the dominating factor (correlation coefficient = 0.4-0.6) in the simulated spatial pattern of soil RH with both models. In contrast to strong temporal and local controls of soil temperature and moisture on microbial dormancy, our modeling results showed that soil carbon-to-nitrogen ratio (C:N) was a major regulating factor at regional scales (correlation coefficient = -0.43 to -0.58), indicating scale-dependent biogeochemical controls on microbial dynamics. Our findings suggest that incorporating microbial dormancy could improve the realism of microbial-based decomposition models and enhance the integration of soil experiments and mechanistically based modeling.
NASA Astrophysics Data System (ADS)
Mensah, Serge; Franceschini, Emilie; Pauzin, Marie-Christine
2007-02-01
We introduce a near-field formulation of the acoustic field scattered by a soft tissue organ. This derivation is based on the Huygens-Fresnel principle, which describes the scattered field as the result of the interference of all the secondary spherical waves. This leads us to define a new Fourier transform which yields a spectrum whose harmonic components have an elliptical spatial support. Based on these projections, we define an elliptical Radon transform that enables us to reconstruct either the impedance or the celerity maps of an acoustical model characterized in terms of impedance and celerity fluctuations. The formulation is very similar to that developed in the far-field domain, where the Radon transform pair is derived from a harmonic plane wave decomposition. This formulation allows us to introduce ductal tomography which, following the example of ductal echography, provides a systematic inspection of each mammary lobe in order to reveal breast lesions at an early stage. To assess the performance obtainable with current echographs in specific numerical experiments, we developed a more realistic computer phantom. This 2-D anatomical phantom is an axial cut of the ductolobular structure corresponding to a daisy-like internal arrangement with petals (lobes) radiating around the nipple, for healthy and pathological situations. The different constitutive tissues and ducts are characterized in terms of density and celerity parameters whose spatial distributions are defined with specific random density laws. The use of a velocity-pressure formulation permits us to model time-domain acoustic wave propagation. Broadband US pulses are transmitted and measured in diffraction around the breast with a ring antenna, and the images are reconstructed using the elliptical back-projection procedure mentioned above.
Wu, Lei; Eichele, Tom; Calhoun, Vince D
2010-10-01
Concurrent EEG-fMRI studies have provided increasing details of the dynamics of intrinsic brain activity during the resting state. Here, we investigate a prominent effect in EEG during relaxed resting, i.e. the increase of the alpha power when the eyes are closed compared to when the eyes are open. This phenomenon is related to changes in thalamo-cortical and cortico-cortical synchronization. In order to investigate possible changes to EEG-fMRI coupling and fMRI functional connectivity during the two states we adopted a data-driven approach that fuses the multimodal data on the basis of parallel ICA decompositions of the fMRI data in the spatial domain and of the EEG data in the spectral domain. The power variation of a posterior alpha component was used as a reference function to deconvolve the hemodynamic responses from occipital, frontal, temporal, and subcortical fMRI components. Additionally, we computed the functional connectivity between these components. The results showed widespread alpha hemodynamic responses and high functional connectivity during eyes-closed (EC) rest, while eyes open (EO) resting abolished many of the hemodynamic responses and markedly decreased functional connectivity. These data suggest that generation of local hemodynamic responses is highly sensitive to state changes that do not involve changes of mental effort or awareness. They also indicate the localized power differences in posterior alpha between EO and EC in resting state data are accompanied by spatially widespread amplitude changes in hemodynamic responses and inter-regional functional connectivity, i.e. low frequency hemodynamic signals display an equivalent of alpha reactivity.
Teachers' Spatial Literacy as Visualization, Reasoning, and Communication
ERIC Educational Resources Information Center
Moore-Russo, Deborah; Viglietti, Janine M.; Chiu, Ming Ming; Bateman, Susan M.
2013-01-01
This paper conceptualizes spatial literacy as consisting of three overlapping domains: visualization, reasoning, and communication. By considering these domains, this study explores different aspects of spatial literacy to better understand how a group of mathematics teachers reasoned about spatial tasks. Seventy-five preservice and inservice…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor; Wardell, Barry
2011-10-15
This is the second in a series of papers aimed at developing a practical time-domain method for self-force calculations in Kerr spacetime. The key elements of the method are (i) removal of a singular part of the perturbation field with a suitable analytic 'puncture' based on the Detweiler-Whiting decomposition, (ii) decomposition of the perturbation equations in azimuthal (m-)modes, taking advantage of the axial symmetry of the Kerr background, (iii) numerical evolution of the individual m-modes in 2+1 dimensions with a finite-difference scheme, and (iv) reconstruction of the physical self-force from the mode sum. Here we report an implementation of the method to compute the scalar-field self-force along circular equatorial geodesic orbits around a Kerr black hole. This constitutes a first time-domain computation of the self-force in Kerr geometry. Our time-domain code reproduces the results of a recent frequency-domain calculation by Warburton and Barack, but has the added advantage of being readily adaptable to include the backreaction from the self-force in a self-consistent manner. In a forthcoming paper--the third in the series--we apply our method to the gravitational self-force (in the Lorenz gauge).
NASA Astrophysics Data System (ADS)
Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.
2017-12-01
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that is not explained by the emulator and decompose it into non-physically based structures through the use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation, so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
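The following sketch illustrates the 3-step residual-emulation recipe on synthetic data: EOF-decompose the residual variability, Fourier-transform the projection coefficients, randomize phases while keeping the amplitude spectrum (and hence the temporal autocorrelation), and rebuild a new realization. The residual field here is a toy stand-in.

```python
# EOF + Fourier phase randomization: generate a new realization of
# internal variability with the same spatial modes and spectra.
import numpy as np

rng = np.random.default_rng(0)
nt, nspace = 240, 50
resid = rng.standard_normal((nt, nspace)).cumsum(axis=0) * 0.05  # toy residual
resid -= resid.mean(axis=0)

U, s, Vt = np.linalg.svd(resid, full_matrices=False)  # EOFs = rows of Vt
pcs = U * s                                           # projection coefficients

new_pcs = np.empty_like(pcs)
for k in range(pcs.shape[1]):
    spec = np.fft.rfft(pcs[:, k])
    phases = rng.uniform(0, 2 * np.pi, size=spec.size)
    phases[0] = 0.0                                   # keep the mean real
    spec = np.abs(spec) * np.exp(1j * phases)         # amplitudes preserved
    new_pcs[:, k] = np.fft.irfft(spec, n=nt)

new_resid = new_pcs @ Vt       # emulated realization of internal variability
print(new_resid.shape)
```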
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs
NASA Astrophysics Data System (ADS)
Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar
2017-07-01
A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three-dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. Poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, numerical modeling in 3D poroelastic media demands significantly more computational capacity, in both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using the message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves simulation efficiency and that 3D domain decomposition is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a realistic description of wave propagation.
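As a sketch of the communication pattern, the following uses mpi4py for a 1-D slab decomposition with ghost-layer (halo) exchange between neighbouring ranks; the paper's scheme uses a full 3-D decomposition, and one dimension is used here only to keep the example short.

```python
# 1-D slab decomposition with halo exchange; run with e.g. `mpiexec -n 4`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
nz_local = 16                                     # grid planes per rank
field = np.full((nz_local + 2, 8, 8), rank, dtype=np.float64)  # +2 ghosts

up = rank + 1 if rank + 1 < size else MPI.PROC_NULL
down = rank - 1 if rank - 1 >= 0 else MPI.PROC_NULL

# Send top interior plane up, receive bottom ghost plane from below.
comm.Sendrecv(np.ascontiguousarray(field[-2]), dest=up,
              recvbuf=field[0], source=down)
# Send bottom interior plane down, receive top ghost plane from above.
comm.Sendrecv(np.ascontiguousarray(field[1]), dest=down,
              recvbuf=field[-1], source=up)

# Ghost planes now hold neighbour data; the stencil update would follow.
if rank == 0 and size > 1:
    print("ghost plane received from rank 1:", field[-1, 0, 0])
```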
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
NASA Astrophysics Data System (ADS)
de Coligny, M.
Optimized control strategies are developed for industrial installations where many variables of energy supply and storage are involved, with a particular focus on the characteristics of a solar central tower power plant. It is shown that optimal regulation resides in controlling all disturbances which occur in a limited domain of the entire system, using robust control schemes. Choosing a command is then dependent on defining precise operational limits as constraints on the machines' performances. Attention is given to the development of variational principles used for the elements of the command logic. Particular consideration is given to limited storage supply in spatial and temporal terms. Commands for alterations in functions are then available on-line, and discontinuities are not a feature of the control system. The strategy is applied to the case of a field of heliostats and a central tower thermal receiver, showing that management is possible on the basis of a sliding horizon.
Predicting detection performance with model observers: Fourier domain or spatial domain?
Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia
2016-02-27
The use of the Fourier domain model observer is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have a noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated against human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated: a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images.
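For reference, a minimal channelized Hotelling observer of the kind used as the spatial domain model observer can be sketched as follows; the synthetic images, Gaussian channel profiles, and all sizes are illustrative choices, not the study's settings.

```python
# Channelized Hotelling observer (CHO) on synthetic signal-present /
# signal-absent images; d' summarizes detectability.
import numpy as np

rng = np.random.default_rng(3)
npix, ntrain = 32, 200
yy, xx = np.mgrid[:npix, :npix] - npix // 2
r2 = xx**2 + yy**2

# Radially symmetric Gaussian channels of increasing width.
widths = [1.5, 3.0, 6.0, 12.0]
U = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths], axis=1)

signal = 0.3 * np.exp(-r2 / (2 * 2.0**2)).ravel()     # low-contrast disc
noise = lambda: rng.standard_normal(npix * npix)
g_absent = np.stack([noise() for _ in range(ntrain)])
g_present = np.stack([signal + noise() for _ in range(ntrain)])

v_a, v_p = g_absent @ U, g_present @ U                # channel outputs
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))             # channel covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))     # Hotelling template

t_a, t_p = v_a @ w, v_p @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```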
Fast divide-and-conquer algorithm for evaluating polarization in classical force fields
NASA Astrophysics Data System (ADS)
Nocito, Dominique; Beran, Gregory J. O.
2017-03-01
Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
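A stripped-down sketch of the block-Jacobi core of DC-JI: the diagonal blocks of the (SPD) polarization matrix are Cholesky-factored once and reused every iteration. Contiguous equal-size blocks stand in for the K-means atom clusters, and the DIIS extrapolation is omitted.

```python
# Block-Jacobi solve of A x = b with per-block Cholesky factors.
import numpy as np

rng = np.random.default_rng(7)
n, nblk = 120, 4
M = rng.standard_normal((n, n))
A = M @ M.T + 10 * n * np.eye(n)   # SPD stand-in for the dipole matrix
b = rng.standard_normal(n)

bounds = np.linspace(0, n, nblk + 1).astype(int)
blocks = [slice(bounds[i], bounds[i + 1]) for i in range(nblk)]
chol = [np.linalg.cholesky(A[s, s]) for s in blocks]   # factor once

x = np.zeros(n)
for it in range(100):
    x_new = np.empty_like(x)
    for s, L in zip(blocks, chol):
        # Right-hand side with the off-block contributions removed.
        r = b[s] - A[s] @ x + A[s, s] @ x[s]
        y = np.linalg.solve(L, r)              # forward substitution
        x_new[s] = np.linalg.solve(L.T, y)     # back substitution
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print("residual norm:", np.linalg.norm(A @ x - b))
```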
Defect inspection using a time-domain mode decomposition technique
NASA Astrophysics Data System (ADS)
Zhu, Jinlong; Goddard, Lynford L.
2018-03-01
In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges of killer defect inspection. The proposed technique enables the dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and the classification of defect types by comparing frequency differences. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.
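The frequency-hopping check can be illustrated as below: given one intrinsic mode function (here a synthetic tone with a mid-stream frequency jump), the instantaneous frequency follows from the Hilbert analytic signal, and abrupt changes flag a candidate defect. The mode-decomposition step itself (BEMD) is assumed to have been done already.

```python
# Instantaneous-frequency hop detection on a single mode function.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
freq = np.where(t < 1.0, 50.0, 80.0)        # 50 Hz tone hops to 80 Hz
imf = np.sin(2 * np.pi * freq * t)          # stands in for one BEMD mode

phase = np.unwrap(np.angle(hilbert(imf)))
inst_freq = np.gradient(phase) * fs / (2 * np.pi)

# Largest interior change in instantaneous frequency marks the hop
# (edge samples are skipped to avoid Hilbert end effects).
interior = np.abs(np.diff(inst_freq[50:-50]))
jump_idx = int(np.argmax(interior)) + 50
print(f"frequency hop detected near t = {t[jump_idx]:.3f} s")
```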
Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan
2004-06-05
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial domain and in Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
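A minimal sketch of the spatial-domain interleaving: already-encrypted patient bytes are hidden in the least significant bits of the image pixels, changing each 8-bit intensity by at most 1 part in 256, as noted above. The cover image and the record below are placeholders.

```python
# LSB interleaving: embed and extract a byte payload in image pixels.
import numpy as np

def embed_lsb(image, payload):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.ravel().copy()
    assert bits.size <= flat.size, "payload too large for cover image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(image.shape)

def extract_lsb(image, nbytes):
    bits = image.ravel()[:nbytes * 8] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
record = b"ID:12345;HR:72"               # stand-in for encrypted patient data
stego = embed_lsb(cover, record)
print(extract_lsb(stego, len(record)))   # b'ID:12345;HR:72'
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # <= 1
```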
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented for three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
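A compact sketch of the HOSVD used as the sparsifying transform: unfold the dynamic-image tensor along each mode, take the left singular vectors, and form a core tensor that concentrates the spatio-temporal energy. The tensor here is random and untruncated; in CS recovery the small core coefficients are what the sparsity penalty acts on.

```python
# HOSVD of a 3-D (x, y, time) tensor via mode-n unfoldings.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
               for m in range(T.ndim)]
    core = T
    for m, U in enumerate(factors):
        # core <- core x_m U^T (mode-m product with the transposed factor)
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1),
                           0, m)
    return core, factors

X = np.random.default_rng(0).standard_normal((32, 32, 10))  # x, y, time
core, factors = hosvd(X)

# Reconstruction check: multiply the core back by all factor matrices.
rec = core
for m, U in enumerate(factors):
    rec = np.moveaxis(np.tensordot(U, np.moveaxis(rec, m, 0), axes=1), 0, m)
print(np.allclose(rec, X))   # True: HOSVD is exact before truncation
```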
Kinematics of reflections in subsurface offset and angle-domain image gathers
NASA Astrophysics Data System (ADS)
Dafni, Raanan; Symes, William W.
2018-05-01
Seismic migration in the angle-domain generates multiple images of the earth's interior in which reflection takes place at different scattering-angles. Mechanically, the angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface media and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (regarded as subsurface offset) extended migration that artificially splits the reflection geometry, meaning that incident and scattered waves interact at some offset distance. The geometric differences between the two approaches amount to contradictory angle-domain behaviour and a different kinematic description. We present a phase space depiction of migration methods extended by the peculiar subsurface offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the general physics, the subsurface offset reveals a link to some valuable angle-domain quantities, via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinguished geometry configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common reflection point geometry. Traveltime inversion methods that involve the subsurface offset extended migration must accommodate the split geometry in the inversion scheme for robust and successful convergence to the optimal velocity model.
Iterative methods for elliptic finite element equations on general meshes
NASA Technical Reports Server (NTRS)
Nicolaides, R. A.; Choudhury, Shenaz
1986-01-01
Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
RIO: a new computational framework for accurate initial data of binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-06-01
We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment, the Bowen-York and the puncture methods. For the numerical solution of the Hamiltonian constraint we use the domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.
NASA Astrophysics Data System (ADS)
Spencer, Todd J.; Chen, Yu-Chun; Saha, Rajarshi; Kohl, Paul A.
2011-06-01
Incorporation of copper ions into poly(propylene carbonate) (PPC) films cast from γ-butyrolactone (GBL), trichloroethylene (TCE) or methylene chloride (MeCl) solutions containing a photo-acid generator is shown to stabilize the PPC against thermal decomposition. Copper ions were introduced into the PPC mixtures by bringing the polymer mixture into contact with copper metal. The metal was oxidized and dissolved into the PPC mixture. The dissolved copper interferes with the decomposition mechanism of PPC, raising its decomposition temperature. Thermogravimetric analysis shows that copper ions increase the thermal stability of PPC by up to 50°C. Spectroscopic analysis indicates that copper ions may stabilize terminal carboxylic acid groups, inhibiting PPC decomposition. The change in thermal stability based on PPC exposure to patterned copper substrates was used to provide a self-aligned patterning method for PPC on copper traces without the need for an additional photopatterning registration step. Thermal decomposition of PPC is then used to create air isolation regions around the copper traces. The spatial resolution of the self-patterning PPC process is limited by the lateral diffusion of the copper ions within the PPC. The concentration profiles of copper within the PPC, the patterning resolution, and temperature effects on the PPC decomposition have been studied.
NASA Astrophysics Data System (ADS)
Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.
2017-03-01
To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
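The spatial-harmonic construction can be sketched with a plain graph Laplacian standing in for the cotangent/FEM discretizations compared in the paper: its eigenvectors form a SPHARA-style basis used for analysis, low-pass filtering, and dimensionality reduction. The sensor layout below is a toy 2-D patch.

```python
# Graph-Laplacian spatial harmonics on a toy triangulated sensor layout.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pos = rng.uniform(size=(30, 2))              # toy 2-D sensor positions
tri = Delaunay(pos)

n = pos.shape[0]
W = np.zeros((n, n))
for simplex in tri.simplices:                # mesh edges -> adjacency
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        W[a, b] = W[b, a] = 1.0

L = np.diag(W.sum(axis=1)) - W               # graph Laplacian
eigval, eigvec = np.linalg.eigh(L)           # spatial harmonic basis

data = rng.standard_normal(n)                # one multichannel sample
coeff = eigvec.T @ data                      # analysis (projection)
smooth = eigvec[:, :10] @ coeff[:10]         # low-pass: keep 10 harmonics
print(eigval[:4])                            # first eigenvalue ~ 0
```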
NASA Astrophysics Data System (ADS)
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. An advantage of this method is that it retrieves not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
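In outline, FDD reduces to assembling the cross-power spectral density matrix per frequency and reading resonances and mode shapes off its leading singular pairs; the sketch below does this for a synthetic 4-channel array with an assumed mode shape.

```python
# FDD sketch: CSD matrix per frequency, then SVD; the first singular
# value peaks at resonance and its vector approximates the mode shape.
import numpy as np
from scipy.signal import csd

fs, n_ch, nseg = 200.0, 4, 1024
t = np.arange(0, 120, 1 / fs)
shape = np.array([0.3, 0.8, 1.0, 0.6])       # assumed 5 Hz mode shape
rng = np.random.default_rng(0)
x = (shape[:, None] * np.sin(2 * np.pi * 5.0 * t)
     + 0.5 * rng.standard_normal((n_ch, t.size)))

f, _ = csd(x[0], x[0], fs=fs, nperseg=nseg)
G = np.zeros((f.size, n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        G[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=nseg)[1]

U, s, _ = np.linalg.svd(G)                   # batched SVD over frequency
peak = np.argmax(s[:, 0])
print(f"resonance ~ {f[peak]:.2f} Hz")       # ~5 Hz
print(np.abs(U[peak, :, 0]) / np.abs(U[peak, :, 0]).max())  # mode shape
```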
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters. Therefore, the matching criterion is defined according to those compound structures. Furthermore, the text images are gray tone or color images for Ottoman scripts, for reasons that are described in the paper. In our method the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low resolution subband image.
Microbial ecological succession during municipal solid waste decomposition.
Staley, Bryan F; de Los Reyes, Francis L; Wang, Ling; Barlaz, Morton A
2018-04-28
The decomposition of landfilled refuse proceeds through distinct phases, each defined by varying environmental factors such as volatile fatty acid concentration, pH, and substrate quality. The succession of microbial communities in response to these changing conditions was monitored in a laboratory-scale simulated landfill to minimize the measurement difficulties experienced at field scale. 16S rRNA gene sequences retrieved at separate stages of decomposition showed significant succession in both Bacteria and methanogenic Archaea. A majority of Bacteria sequences in landfilled refuse belong to members of the phylum Firmicutes, while Proteobacteria levels fluctuated and Bacteroidetes levels increased as decomposition proceeded. Roughly 44% of archaeal sequences retrieved under conditions of low pH and high acetate were strictly hydrogenotrophic (Methanomicrobiales, Methanobacteriales). Methanosarcina was present at all stages of decomposition. Correspondence analysis showed that bacterial population shifts were attributed to carboxylic acid concentration and solids hydrolysis, while archaeal populations were affected to a higher degree by pH. T-RFLP analysis showed that specific taxonomic groups exhibited distinct responses during decomposition, suggesting that species composition and abundance within Bacteria and Archaea are highly dynamic. This study shows that landfill microbial demographics are highly variable across both spatial and temporal transects.
Cai, Andong; Liang, Guopeng; Zhang, Xubo; Zhang, Wenju; Li, Ling; Rui, Yichao; Xu, Minggang; Luo, Yiqi
2018-05-01
Understanding the drivers of straw decomposition is essential for adopting appropriate management practices to improve soil fertility and promote carbon (C) sequestration in agricultural systems. However, predicting straw decomposition and its characteristics is difficult because of the interactions between many factors related to straw properties, soil properties, and climate, especially under future climate change conditions. This study investigated the driving factors of straw decomposition for six types of crop straw, including wheat, maize, rice, soybean, rape, and other straw, by synthesizing 1642 paired data from 98 published papers at spatial and temporal scales across China. All the data derived from field experiments using litter bags over twelve years. Overall, despite large differences in climatic and soil properties, the remaining straw carbon (C, %) could be accurately represented by a three-exponent equation of thermal time (accumulative temperature). The lignin/nitrogen and lignin/phosphorus ratios of straw can be used to define the sizes of the labile, intermediate, and recalcitrant C pools. The remaining C for an individual type of straw in the mild-temperature zone was higher than that in the warm-temperature and subtropical zones within one calendar year. The remaining straw C after one thermal year was 40.28%, 37.97%, 37.77%, 34.71%, 30.87%, and 27.99% for rice, soybean, rape, wheat, maize, and other straw, respectively. Soil available nitrogen and phosphorus influenced the remaining straw C at different decomposition stages. For one calendar year, the total amount of remaining straw C was estimated to be 29.41 Tg, and a future temperature increase of 2 °C could reduce the remaining straw C by 1.78 Tg. These findings confirm that long-term straw decomposition is mainly driven by temperature and straw quality and can be quantitatively predicted by thermal time with the three-exponent equation for a wide array of straw types at spatial and temporal scales in the agro-ecosystems of China.
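A small worked sketch of the three-pool, thermal-time decay model described above; the pool fractions and rate constants below are hypothetical placeholders, not the study's fitted values.

```python
# Three-exponent decay of remaining straw C as a function of thermal time.
import numpy as np

def remaining_c(thermal_time, pools=(0.35, 0.40, 0.25),
                rates=(5e-3, 5e-4, 2e-5)):
    """Remaining C (%) after a given thermal time (degree-days).

    pools: labile, intermediate, recalcitrant fractions (sum to 1).
    rates: decay constants per degree-day (illustrative values only).
    """
    tt = np.asarray(thermal_time, dtype=float)
    return 100.0 * sum(a * np.exp(-k * tt) for a, k in zip(pools, rates))

# e.g. one "thermal year" of 5000 degree-days in a warm-temperate zone
print(f"{remaining_c(5000.0):.1f}% of straw C remaining")
```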
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Wei; Wang, Jin, E-mail: jin.wang.1@stonybrook.edu; State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, 130022 Changchun, China and College of Physics, Jilin University, 130021 Changchun
We have established a general non-equilibrium thermodynamic formalism consistently applicable to both spatially homogeneous and, more importantly, spatially inhomogeneous systems, governed by the Langevin and Fokker-Planck stochastic dynamics with multiple state transition mechanisms, using the potential-flux landscape framework as a bridge connecting stochastic dynamics with non-equilibrium thermodynamics. A set of non-equilibrium thermodynamic equations, quantifying the relations of the non-equilibrium entropy, entropy flow, entropy production, and other thermodynamic quantities, together with their specific expressions, is constructed from a set of dynamical decomposition equations associated with the potential-flux landscape framework. The flux velocity plays a pivotal role on both the dynamic and thermodynamic levels. On the dynamic level, it represents a dynamic force breaking detailed balance, entailing the dynamical decomposition equations. On the thermodynamic level, it represents a thermodynamic force generating entropy production, manifested in the non-equilibrium thermodynamic equations. The Ornstein-Uhlenbeck process and more specific examples, the spatial stochastic neuronal model, in particular, are studied to test and illustrate the general theory. This theoretical framework is particularly suitable to study the non-equilibrium (thermo)dynamics of spatially inhomogeneous systems abundant in nature. This paper is the second of a series.
NASA Astrophysics Data System (ADS)
Smith, Nathan; Provatas, Nikolas
Recent experimental work has shown that gold nanoparticles can precipitate from an aqueous solution through a non-classical, multi-step nucleation process. This multi-step process begins with spinodal decomposition into solute-rich and solute-poor liquid domains, followed by nucleation from within the solute-rich domains. We present a binary phase-field crystal theory that shows the same phenomenology and examine various cross-over regimes in the growth and coarsening of liquid and solid domains. We thank the Canada Research Chairs (CRC) program for funding this work.
Insight into the novel inhibition mechanism of apigenin to Pneumolysin by molecular modeling
NASA Astrophysics Data System (ADS)
Niu, Xiaodi; Yang, Yanan; Song, Meng; Wang, Guizhen; Sun, Lin; Gao, Yawen; Wang, Hongsu
2017-11-01
In this study, the mechanism of apigenin inhibition was explored using molecular modeling, binding energy calculations, and mutagenesis assays. Energy decomposition analysis indicated that apigenin binds in the gap between domains 3 and 4 of pneumolysin (PLY). Using principal component analysis, we found that binding of apigenin to PLY weakens the motion of domains 3 and 4. Consequently, these domains cannot complete the transition from monomer to oligomer, thereby blocking oligomerisation of PLY and counteracting its haemolytic activity. This inhibitory mechanism was confirmed by haemolysis assays, and these findings will promote the future development of an antimicrobial agent.
DOT National Transportation Integrated Search
2006-05-08
This paper describes the integration of wavelet analysis and time-domain beamforming of microphone array output signals for analyzing the acoustic emissions from airplane-generated wake vortices. This integrated process provides visual and quanti...
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over large spatial extents. Such data are important for Earth and ecological sciences as well as natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to both data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
Simultaneous storage of medical images in the spatial and frequency domain: A comparative study
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan
2004-01-01
Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results The results show that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. Conclusion For spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899
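The spatial-domain interleaving step can be illustrated with a short sketch that hides a bit stream in pixel LSBs; the encryption and DPCM stages described above are omitted, and the array sizes are arbitrary.

```python
import numpy as np

def interleave_lsb(image, bits):
    """Embed a bit stream into pixel LSBs; flipping an LSB changes
    brightness by only 1 part in 256 for 8-bit pixels."""
    flat = image.ravel().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(stego, n_bits):
    """Recover the interleaved bit stream from pixel LSBs."""
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = rng.integers(0, 2, size=128, dtype=np.uint8)
stego = interleave_lsb(img, bits)
assert np.array_equal(extract_lsb(stego, 128), bits)
```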
Zhang, Zeng-yan; Ji, Te; Zhu, Zhi-yong; Zhao, Hong-wei; Chen, Min; Xiao, Ti-qiao; Guo, Zhi
2015-01-01
Terahertz radiation is electromagnetic radiation in the range between millimeter waves and the far infrared. Due to its low photon energy and non-ionizing character, THz pulse imaging is emerging as a novel tool in many fields, such as materials science, chemistry, biomedicine, and food safety. Limited spatial resolution is a significant restricting factor of terahertz imaging technology. Near-field imaging methods have been proposed to improve the spatial resolution of terahertz systems: submillimeter-scale spatial resolution can be achieved if the source size is smaller than the wavelength of the incoming radiation and the source is very close to the sample. But many changes to the traditional terahertz time-domain spectroscopy system are needed, and it is complex to extract the sample's physical parameters from the terahertz signal. This article first proposes a method of inserting a pinhole upstream of the sample to improve the spatial resolution of a traditional terahertz time-domain spectroscopy system. The spatial resolution was measured by the knife-edge method, with the moving-stage distance between 10% and 90% of the maximum signal defined as the spatial resolution of the system. Imaging spatial resolution improved dramatically after a pinhole of 0.5 mm or 2 mm diameter was inserted upstream of the sample. Experimental results show that the spatial resolution improved from 1.276 mm to 0.774 mm, an improvement of about 39%. Through this simple method, the spatial resolution of the traditional terahertz time-domain spectroscopy system was increased from the millimeter to the submillimeter scale. A pinhole of 1 mm diameter in a polyethylene plate was used as a sample for the terahertz imaging study, imaged with both the traditional and the pinhole-inserted system. Relative THz-power-loss images of the samples were used, as this method generally delivers the best signal-to-noise ratio in loss images and cancels dispersion effects. The terahertz imaging results show that the sample's boundary is more distinct after inserting the pinhole in front of the sample, confirming that the pinhole effectively improves imaging spatial resolution. A theoretical analysis of the method is given, indicating that the smaller the pinhole, the longer the spatial coherence length of the system and the better its spatial resolution, although the terahertz signal is reduced accordingly. All experimental results and theoretical analyses indicate that inserting a pinhole in front of the sample effectively improves the spatial resolution of a traditional terahertz time-domain spectroscopy system, further expanding the applications of terahertz imaging technology.
On LSB Spatial Domain Steganography and Channel Capacity
2008-03-21
...reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB-type spatial domain steganography ... the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial-domain LSB-type stego. We agree that JPEGing ... modeling of 2-bit LSB steganography shows that theoretically a non-zero stego payload is possible even though the image has been JPEGed. We wish to ...
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method to capture the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in stock markets and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem can make reliable profits in a choppy market with no prominent trend, while it fails to beat the benchmark moving-average strategy in a bull market. After considering the spatial information from the spatial-temporal coherent structure of DMD modes, we improved the trading strategy remarkably. The DMD strategies' profitability is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
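For reference, a minimal exact-DMD sketch is given below on an invented panel of price series (rows: assets, columns: days); this is the generic algorithm, not the authors' trading strategy.

```python
import numpy as np

def exact_dmd(X, r):
    """Exact DMD of a snapshot matrix X (features x time), truncated to rank r.
    Returns eigenvalues (temporal dynamics) and modes (spatial structure)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s     # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (X2 @ Vh.conj().T / s) @ W / eigvals    # exact DMD modes
    return eigvals, modes

# Toy demo: 5 assets, 100 days of synthetic random-walk prices
X = np.cumsum(np.random.default_rng(1).normal(size=(5, 100)), axis=1) + 100.0
eigvals, modes = exact_dmd(X, r=3)
```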
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). Using test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points, we show that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
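A toy sketch of the spatial-ICA step, using scikit-learn's FastICA on synthetic multichannel data standing in for reconstructed CSD; the 4 × 5 × 7 grid is flattened to 140 channels and the two sources are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
# Two toy "functional components" mixed into 140 channels (a 4 x 5 x 7 grid)
sources = np.c_[np.sin(2 * np.pi * 12 * t), np.exp(-((t - 0.3) / 0.05) ** 2)]
mixing = rng.normal(size=(2, 140))
data = sources @ mixing                       # (time, channels), stands in for CSD

ica = FastICA(n_components=2, random_state=0)
spatial_maps = ica.fit_transform(data.T)      # (channels, 2): independent spatial maps
time_courses = ica.mixing_                    # (time, 2): associated time courses
```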
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, the QR decomposition algorithm is used, which can be executed at low cost; the computational complexity is therefore reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
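The computational saving can be sketched as follows: with a QR factorization of the snapshot matrix, the MV weights follow from two triangular solves, never forming or inverting the covariance matrix explicitly. This is a generic QR route to the MV weights under invented shapes, not necessarily the paper's exact scalar-matrix transformation.

```python
import numpy as np
from scipy.linalg import solve_triangular

def mv_weights_qr(snapshots, steering):
    """MV apodization weights w = R^-1 a / (a^H R^-1 a), with the covariance
    R = A^H A / N handled implicitly through QR(A): R is proportional to
    Rq^H Rq, so two O(L^2) triangular solves replace the O(L^3) inversion."""
    _, Rq = np.linalg.qr(snapshots)                     # snapshots: (N, L)
    y = solve_triangular(Rq.conj().T, steering, lower=True)
    x = solve_triangular(Rq, y)                          # x proportional to R^-1 a
    return x / (steering.conj() @ x)

# Toy demo: 200 snapshots over a subarray of L = 16 elements
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 16)) + 1j * rng.normal(size=(200, 16))
a = np.ones(16, dtype=complex)                           # steering vector (broadside)
w = mv_weights_qr(A, a)
assert np.isclose(w.conj() @ a, 1.0)                     # distortionless constraint
```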
Partial information decomposition as a spatiotemporal filter.
Flecker, Benjamin; Alford, Wesley; Beggs, John M; Williams, Paul L; Beer, Randall D
2011-09-01
Understanding the mechanisms of distributed computation in cellular automata requires techniques for characterizing the emergent structures that underlie information processing in such systems. Recently, techniques from information theory have been brought to bear on this problem. Building on this work, we utilize the new technique of partial information decomposition to show that previous information-theoretic measures can confound distinct sources of information. We then propose a new set of filters and demonstrate that they more cleanly separate out the background domains, particles, and collisions that are typically associated with information storage, transfer, and modification in cellular automata.
Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien
2018-01-01
The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576
Fractal analysis of time varying data
Vo-Dinh, Tuan; Sadana, Ajit
2002-01-01
Characteristics of time varying data, such as an electrical signal, are analyzed by converting the data from a temporal domain into a spatial domain pattern. Fractal analysis is performed on the spatial domain pattern, thereby producing a fractal dimension D_F. The fractal dimension indicates the regularity of the time varying data.
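A common concrete instance of such fractal analysis is box counting on the spatial pattern; the sketch below estimates D_F for a binary image, assuming the temporal-to-spatial conversion has already produced the pattern.

```python
import numpy as np

def box_counting_dimension(pattern, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension D_F of a binary 2D pattern:
    slope of log N(k) vs log k, where N(k) counts occupied k x k boxes."""
    counts = []
    for k in sizes:
        h, w = pattern.shape
        trimmed = pattern[: h - h % k, : w - w % k]
        boxes = trimmed.reshape(h // k, k, w // k, k)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # D_F is the negative slope of log counts against log box size
    return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]

# Toy pattern: a noisy diagonal band converted from a hypothetical signal
y, x = np.indices((256, 256))
pattern = (np.abs(y - x) < 8) | (np.random.default_rng(0).random((256, 256)) < 0.01)
print(round(box_counting_dimension(pattern), 2))
```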
2012-09-01
Robust global image registration based on a hybrid algorithm combining Fourier and spatial domain techniques
Crabtree, Peter N.; Seanor, Collin; ...
... demonstrate performance of a hybrid algorithm. These results are from analysis of a set of images of an ISO 12233 [12] resolution chart captured in the ...
Koczyk, Grzegorz; Berezovsky, Igor N.
2008-01-01
Domain hierarchy and closed loops (DHcL) (http://sitron.bccs.uib.no/dhcl/) is a web server that delineates energy hierarchy of protein domain structure and detects domains at different levels of this hierarchy. The server also identifies closed loops and van der Waals locks, which constitute a structural basis for the protein domain hierarchy. The DHcL can be a useful tool for an express analysis of protein structures and their alternative domain decompositions. The user submits a PDB identifier(s) or uploads a 3D protein structure in a PDB format. The results of the analysis are the location of domains at different levels of hierarchy, closed loops, van der Waals locks and their interactive visualization. The server maintains a regularly updated database of domains, closed loop and van der Waals locks for all X-ray structures in PDB. DHcL server is available at: http://sitron.bccs.uib.no/dhcl. PMID:18502776
Spatial organization of chromatin domains and compartments in single chromosomes
NASA Astrophysics Data System (ADS)
Wang, Siyuan; Su, Jun-Han; Beliveau, Brian; Bintu, Bogdan; Moffitt, Jeffrey; Wu, Chao-Ting; Zhuang, Xiaowei
The spatial organization of chromatin critically affects genome function. Recent chromosome-conformation-capture studies have revealed topologically associating domains (TADs) as a conserved feature of chromatin organization, but how TADs are spatially organized in individual chromosomes remains unknown. Here, we developed an imaging method for mapping the spatial positions of numerous genomic regions along individual chromosomes and traced the positions of TADs in human interphase autosomes and X chromosomes. We observed that chromosome folding deviates from the ideal fractal-globule model at large length scales and that TADs are largely organized into two compartments spatially arranged in a polarized manner in individual chromosomes. Active and inactive X chromosomes adopt different folding and compartmentalization configurations. These results suggest that the spatial organization of chromatin domains can change in response to regulation.
Meegan, Daniel V; Honsberger, Michael J M
2005-05-01
Many neuroimaging studies have been designed to differentiate domain-specific processes in the brain. A common design constraint is to use identical stimuli for different domain-specific tasks. For example, an experiment investigating spatial versus identity processing would present compound spatial-identity stimuli in both spatial and identity tasks, and participants would be instructed to attend to, encode, maintain, or retrieve spatial information in the spatial task, and identity information in the identity task. An assumption in such studies is that spatial information will not be processed in the identity task, as it is irrelevant for that task. We report three experiments demonstrating violations of this assumption. Our results suggest that comparisons of spatial and identity tasks in existing neuroimaging studies have underestimated the amount of brain activation that is spatial-specific. For future neuroimaging studies, we recommend unique stimulus displays for each domain-specific task, and event-related measurement of post-stimulus processing.
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree decomposition compared with a regular grid decomposition. Moreover, on the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
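A toy version of the point-balancing idea, reduced to plain median splits; the paper compares two split-point strategies, and its MPI distribution is omitted here.

```python
import numpy as np

def kd_partition(points, depth, axis=0):
    """Recursively bisect points at the median coordinate so each of the
    2**depth leaf blocks gets a near-equal share (k-d tree decomposition)."""
    if depth == 0:
        return [points]
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    nxt = (axis + 1) % points.shape[1]
    return (kd_partition(points[order[:mid]], depth - 1, nxt)
            + kd_partition(points[order[mid:]], depth - 1, nxt))

# Strongly clustered input, where a regular grid would be badly unbalanced
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.2, 0.01, size=(9000, 3)),
                 rng.uniform(size=(1000, 3))])
print([len(b) for b in kd_partition(pts, depth=3)])   # eight 1250-point blocks
```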
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, but direct methods (e.g. finite difference) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation with Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of the efficiency and accuracy of these methods.
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
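The core construction can be sketched with a graph-Laplacian eigenbasis; the ring mesh below is invented, and the FEM/combinatorial discretization choices compared in the paper are collapsed to the plain combinatorial Laplacian.

```python
import numpy as np

def spatial_harmonics(adjacency):
    """Basis functions for spatial harmonic analysis: eigenvectors of the
    combinatorial graph Laplacian L = D - A of the sensor mesh, ordered
    by eigenvalue (i.e., by increasing spatial frequency)."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals, basis = np.linalg.eigh(L)
    return eigvals, basis    # basis[:, 0] is constant; higher columns oscillate

# Toy mesh: 16 sensors on a ring
n = 16
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
eigvals, basis = spatial_harmonics(A)
coeffs = basis.T @ np.random.default_rng(0).normal(size=n)   # analysis step
```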
How does spatial extent of fMRI datasets affect independent component analysis decomposition?
Aragri, Adriana; Scarabino, Tommaso; Seifritz, Erich; Comani, Silvia; Cirillo, Sossio; Tedeschi, Gioacchino; Esposito, Fabrizio; Di Salle, Francesco
2006-09-01
Spatial independent component analysis (sICA) of functional magnetic resonance imaging (fMRI) time series can generate meaningful activation maps and associated descriptive signals, which are useful to evaluate datasets of the entire brain or selected portions of it. Besides computational implications, variations in the input dataset combined with the multivariate nature of ICA may lead to different spatial or temporal readouts of brain activation phenomena. By reducing and increasing a volume of interest (VOI), we applied sICA to different datasets from real activation experiments with multislice acquisition and single or multiple sensory-motor task-induced blood oxygenation level-dependent (BOLD) signal sources with different spatial and temporal structure. Using receiver operating characteristics (ROC) methodology for accuracy evaluation and multiple regression analysis as benchmark, we compared sICA decompositions of reduced and increased VOI fMRI time-series containing auditory, motor and hemifield visual activation occurring separately or simultaneously in time. Both approaches yielded valid results; however, the results of the increased VOI approach were spatially more accurate compared to the results of the decreased VOI approach. This is consistent with the capability of sICA to take advantage of extended samples of statistical observations and suggests that sICA is more powerful with extended rather than reduced VOI datasets to delineate brain activity.
Guo, Qiang; Qi, Liangang
2017-04-10
In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a large enough array size and huge hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degrees of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
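A plain matching-pursuit sketch over a normalized over-complete dictionary; the DCQGA-accelerated atom search of the paper is replaced here by an exhaustive argmax, and the random dictionary is purely illustrative.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=20):
    """Greedy MP: at each step pick the dictionary atom (unit-norm column
    of D) most correlated with the residual and subtract its projection."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(D.T @ residual)))
        c = D[:, k] @ residual
        coeffs[k] += c
        residual -= c * D[:, k]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(256, 1024))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
x = 3.0 * D[:, 10] - 2.0 * D[:, 500]          # signal sparse in the dictionary
coeffs, r = matching_pursuit(x, D)
print(np.flatnonzero(np.abs(coeffs) > 0.5))   # typically recovers atoms 10 and 500
```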
NASA Astrophysics Data System (ADS)
Sancarlos-González, Abel; Pineda-Sanchez, Manuel; Puche-Panadero, Ruben; Sapena-Bano, Angel; Riera-Guasp, Martin; Martinez-Roman, Javier; Perez-Cruz, Juan; Roger-Folch, Jose
2017-12-01
AC lines of industrial busbar systems are usually built using conductors with rectangular cross sections, where each phase can have several parallel conductors to carry high currents. The current density in a rectangular conductor under sinusoidal conditions is not uniform: it depends on the frequency, on the conductor shape, and on the distance between conductors, due to the skin effect and to proximity effects. Unlike circular conductors, there are no closed-form analytical formulas for the frequency-dependent impedance of conductors with rectangular cross-section; one must resort to numerical simulations to obtain the resistance and inductance of the phases, one simulation for each desired frequency and for each distance between the phases' conductors. In contrast, the use of the parametric proper generalized decomposition (PGD) makes it possible to obtain the frequency-dependent impedance of an AC line for a wide range of frequencies and distances between the phases' conductors by solving a single simulation in a 4D domain (spatial coordinates x and y, the frequency, and the separation between conductors). In this way, a general "virtual chart" solution is obtained, which contains the solution for any frequency and any separation of the conductors and stores it in a compact separated-representation form that can easily be embedded in more general software for the design of electrical installations. The approach presented in this work for rectangular conductors can easily be extended to conductors with an arbitrary shape.
A partitioning strategy for nonuniform problems on multiprocessors
NASA Technical Reports Server (NTRS)
Berger, M. J.; Bokhari, S.
1985-01-01
The partitioning of a problem on a domain with unequal work estimates in different subdomains is considered in a way that balances the work load across multiple processors. Such a problem arises, for example, in solving partial differential equations using an adaptive method that places extra grid points in certain subregions of the domain. A binary decomposition of the domain is used to partition it into rectangles requiring equal computational effort. The communication costs of mapping this partitioning onto different multiprocessors (a mesh-connected array, a tree machine, and a hypercube) are then studied. The communication cost expressions can be used to determine the optimal depth of the above partitioning.
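A sketch of the binary decomposition idea: recursively cut a work-estimate grid so the two halves carry near-equal work, alternating the cut axis. The grid and the work values are invented.

```python
import numpy as np

def binary_decompose(work, depth, axis=0):
    """Recursively split a 2D work-estimate grid into 2**depth rectangles
    of roughly equal total work, alternating horizontal/vertical cuts."""
    if depth == 0:
        return [work]
    line = work.sum(axis=1 - axis)                 # work per row (or column)
    cut = int(np.searchsorted(np.cumsum(line), line.sum() / 2.0)) + 1
    cut = min(max(cut, 1), len(line) - 1)          # keep both halves non-empty
    first, second = ((work[:cut], work[cut:]) if axis == 0
                     else (work[:, :cut], work[:, cut:]))
    return (binary_decompose(first, depth - 1, 1 - axis)
            + binary_decompose(second, depth - 1, 1 - axis))

# Work concentrated in one corner, mimicking adaptive grid refinement
w = np.ones((64, 64)); w[:16, :16] = 20.0
print([round(float(r.sum())) for r in binary_decompose(w, depth=2)])
```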
Non-equilibrium theory of arrested spinodal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olais-Govea, José Manuel; López-Flores, Leticia; Medina-Noyola, Magdaleno
The non-equilibrium self-consistent generalized Langevin equation theory of irreversible relaxation [P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E 82, 061503 (2010); 82, 061504 (2010)] is applied to the description of the non-equilibrium processes involved in the spinodal decomposition of suddenly and deeply quenched simple liquids. For model liquids with hard-sphere plus attractive (Yukawa or square well) pair potential, the theory predicts that the spinodal curve, besides being the threshold of the thermodynamic stability of homogeneous states, is also the borderline between the regions of ergodic and non-ergodic homogeneous states. It also predicts that the high-density liquid-glass transition line, whose high-temperature limit corresponds to the well-known hard-sphere glass transition, at lower temperature intersects the spinodal curve and continues inside the spinodal region as a glass-glass transition line. Within the region bounded from below by this low-temperature glass-glass transition and from above by the spinodal dynamic arrest line, we can recognize two distinct domains with qualitatively different temperature dependence of various physical properties. We interpret these two domains as corresponding to full gas-liquid phase separation conditions and to the formation of physical gels by arrested spinodal decomposition. The resulting theoretical scenario is consistent with the corresponding experimental observations in a specific colloidal model system.
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
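The "lossy plus residual" principle in miniature, with a truncated SVD standing in for the matrix/tensor decomposition coder and uniform quantization in place of arithmetic coding; the point of the sketch is the guaranteed absolute-error bound.

```python
import numpy as np

def compress(X, rank, max_abs_err):
    """Lossy layer: rank-r SVD approximation. Residual layer: uniform
    quantization with step 2*max_abs_err, so |error| <= max_abs_err."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    resid_q = np.round((X - lossy) / (2.0 * max_abs_err)).astype(np.int32)
    return U[:, :rank], s[:rank], Vh[:rank], resid_q

def decompress(U, s, Vh, resid_q, max_abs_err):
    return (U * s) @ Vh + resid_q * (2.0 * max_abs_err)

X = np.random.default_rng(0).normal(size=(32, 2048))   # channels x samples
parts = compress(X, rank=8, max_abs_err=0.01)
Xr = decompress(*parts, 0.01)
assert np.max(np.abs(X - Xr)) <= 0.01 + 1e-12           # near-lossless guarantee
```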
Abstract Spatial Reasoning as an Autistic Strength
Stevenson, Jennifer L.; Gernsbacher, Morton Ann
2013-01-01
Autistic individuals typically excel on spatial tests that measure abstract reasoning, such as the Block Design subtest on intelligence test batteries and the Raven’s Progressive Matrices nonverbal test of intelligence. Such well-replicated findings suggest that abstract spatial processing is a relative and perhaps absolute strength of autistic individuals. However, previous studies have not systematically varied reasoning level – concrete vs. abstract – and test domain – spatial vs. numerical vs. verbal, which the current study did. Autistic participants (N = 72) and non-autistic participants (N = 72) completed a battery of 12 tests that varied by reasoning level (concrete vs. abstract) and domain (spatial vs. numerical vs. verbal). Autistic participants outperformed non-autistic participants on abstract spatial tests. Non-autistic participants did not outperform autistic participants on any of the three domains (spatial, numerical, and verbal) or at either of the two reasoning levels (concrete and abstract), suggesting similarity in abilities between autistic and non-autistic individuals, with abstract spatial reasoning as an autistic strength. PMID:23533615
Separate but correlated: The latent structure of space and mathematics across development.
Mix, Kelly S; Levine, Susan C; Cheng, Yi-Ling; Young, Chris; Hambrick, D Zachary; Ping, Raedy; Konstantopoulos, Spyros
2016-09-01
The relations among various spatial and mathematics skills were assessed in a cross-sectional study of 854 children from kindergarten, third, and sixth grades (i.e., 5 to 13 years of age). Children completed a battery of spatial and mathematics tests, and their scores were submitted to exploratory factor analyses both within and across domains. In the within-domain analyses, all of the measures formed single factors at each age, suggesting consistent, unitary structures across this age range. Yet, as in previous work, the two domains were highly correlated, both in terms of overall composite score and in pairwise comparisons of individual tasks. When both spatial and mathematics scores were submitted to the same factor analysis, the two domain-specific factors again emerged, but there also were significant cross-domain factor loadings that varied with age. Multivariate regressions replicated the factor analysis and further revealed that mental rotation was the best predictor of mathematical performance in kindergarten, and visual-spatial working memory was the best predictor of mathematical performance in sixth grade. The mathematical tasks that predicted the most variance in spatial skill were place value (K, 3rd, 6th), word problems (3rd, 6th), calculation (K), fraction concepts (3rd), and algebra (6th). Thus, although spatial skill and mathematics each have strong internal structures, they also share significant overlap, and have particularly strong cross-domain relations for certain tasks.
Intelligent automated surface grid generation
NASA Technical Reports Server (NTRS)
Yao, Ke-Thia; Gelsey, Andrew
1995-01-01
The goal of our research is to produce a flexible, general grid generator for automated use by other programs, such as numerical optimizers. The current trend in the gridding field is toward interactive gridding, which more readily taps into the spatial reasoning abilities of the human user through a graphical interface with a mouse. However, a sometimes fruitful approach to generating new designs is to apply an optimizer with shape modification operators to improve an initial design. For this approach to be useful, the optimizer must be able to automatically grid and evaluate the candidate designs. This paper describes an intelligent gridder that is capable of analyzing the topology of the spatial domain and predicting approximate physical behaviors based on the geometry of the spatial domain to automatically generate grids for computational fluid dynamics simulators. Typically, gridding programs are given a partitioning of the spatial domain to assist the gridder. Our gridder is capable of performing this partitioning itself, which enables it to automatically grid spatial domains of a wide range of configurations.
Matching multiple rigid domain decompositions of proteins
Flynn, Emily; Streinu, Ileana
2017-01-01
We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented into the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examining the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein’s slow motions near the native state. PMID:28141528
Bailey, Vanessa L.; Smith, A. P.; Tfaily, Malak; ...
2017-01-11
Spatial isolation of soil organic carbon (SOC) in different sized pores may be a mechanism by which otherwise labile carbon (C) could be protected in soils. When soil water content increases, the hydrologic connectivity of soil pores also increases, allowing greater transport of SOC and other resources from protected locations to microbially colonized locations more favorable to decomposition. The heterogeneous distribution of specialized decomposers, C, and other resources throughout the soil indicates that the metabolism or persistence of soil C compounds is highly dependent on short-distance transport processes. The objective of this research was to characterize the complexity of C in pore waters held at weak and strong water tensions (effectively, soil solution held behind coarse- and fine-pore throats, respectively) and evaluate the microbial decomposability of these pore waters. We saturated intact soil cores and extracted pore waters with increasing suction pressures to sequentially sample pore waters from increasingly fine pore domains. Ultrahigh resolution mass spectrometry of the SOC was used to profile the major biochemical classes (i.e., lipids, proteins, lignin, carbohydrates, and condensed aromatics) of compounds present in the pore waters; some of these samples were then used as substrates for growth of Cellvibrio japonicus (DSMZ 16018), Streptomyces cellulosae (ATCC ® 25439™), and Trichoderma reesei (QM6a) in 7-day incubations. The soluble C in finer pores was more complex than the soluble C in coarser pores, and the incubations revealed that the more complex C in these fine pores is not recalcitrant. The decomposition of this complex C led to greater losses of C through respiration than the simpler C from coarser pore waters. Our research suggests that soils experiencing repeated cycles of drying and wetting may show repeated cycles of increased CO2 fluxes, driven by i) the transport of C from protected pools into active pools, ii) the chemical quality of the potentially soluble C, and iii) the type of microorganisms most likely to metabolize this C.
NASA Astrophysics Data System (ADS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is non-radiant, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its dynamic imaging process, real-time response, and easy realization. But the LBP algorithm has low spatial resolution due to the inherent ‘soft field’ effect and ‘ill-posed solution’ problems, which greatly limits its applicable range. In this paper, an original data decomposition method is proposed: each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is doubled, effectively reducing the ‘ill-posed solution’ problem. In addition, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
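A rough sketch of the data decomposition: split each row of the sensitivity matrix into its positive and negative parts to double the measurement set, then apply classic LBP. Apportioning each measured value by area weight is an assumption here, not the paper's exact rule.

```python
import numpy as np

def lbp(S, u):
    """Classic linear back projection: image ~ S^T u, normalized by the
    summed absolute sensitivity of each pixel."""
    return (S.T @ u) / (np.abs(S).T @ np.ones(S.shape[0]))

def decompose_data(S, u):
    """Split every measurement into positive- and negative-sensing-area
    parts, doubling the data set; the area-weight apportioning of u is a
    simplification, not the paper's exact scheme."""
    Sp, Sn = np.clip(S, 0.0, None), -np.clip(S, None, 0.0)
    wp = Sp.sum(axis=1) / (Sp.sum(axis=1) + Sn.sum(axis=1) + 1e-12)
    return np.vstack([Sp, Sn]), np.concatenate([wp * u, (1.0 - wp) * u])

rng = np.random.default_rng(0)
S = rng.normal(size=(104, 812))    # e.g., 104 electrode pairs, 812 pixels (invented)
u = S @ rng.random(812)            # synthetic measurements
image = lbp(*decompose_data(S, u))
```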
Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun
2015-10-19
Photoacoustic tomography is a promising and rapidly developing methodology of biomedical imaging. It faces an increasingly urgent problem of reconstructing images from weak and noisy photoacoustic signals, owing to the benefits of extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment are conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with well preserved pattern details. The proposed method demonstrates the imaging potential of photoacoustic tomography in expanding applications.
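The decomposition step can be sketched as a linear fit of the trace against a bank of time-shifted pulses; the Gaussian pulse shape and the least-squares solve are assumptions, as the paper's algorithm and pulse model may differ.

```python
import numpy as np

def pulse_decompose(signal, t, centers, width):
    """Model the photoacoustic trace as a weighted sum of time-shifted
    pulses and solve for the weights, which map to optical absorption."""
    D = np.exp(-(((t[:, None] - centers[None, :]) / width) ** 2))
    weights, *_ = np.linalg.lstsq(D, signal, rcond=None)
    return weights

t = np.linspace(0.0, 10e-6, 2000)                  # 10-microsecond trace
centers = np.linspace(0.5e-6, 9.5e-6, 64)          # candidate pulse arrival times
width = 0.1e-6                                     # assumed pulse width
D = np.exp(-(((t[:, None] - centers[None, :]) / width) ** 2))
true_w = np.zeros(64); true_w[[10, 30]] = [1.0, 0.6]
rng = np.random.default_rng(0)
noisy = D @ true_w + 0.05 * rng.normal(size=t.size)  # low-SNR synthetic trace
weights = pulse_decompose(noisy, t, centers, width)
```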
Chemical stability of molten 2,4,6-trinitrotoluene at high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dattelbaum, Dana M., E-mail: danadat@lanl.gov; Chellappa, Raja S.; Bowden, Patrick R.
2014-01-13
2,4,6-trinitrotoluene (TNT) is a molecular explosive that exhibits chemical stability in the molten phase at ambient pressure. A combination of visual, spectroscopic, and structural (x-ray diffraction) methods coupled to high pressure, resistively heated diamond anvil cells was used to determine the melt and decomposition boundaries to >15 GPa. The chemical stability of molten TNT was found to be limited, existing in a small domain of pressure-temperature conditions below 2 GPa. Decomposition dominates the phase diagram at high temperatures beyond 6 GPa. From the calculated bulk temperature rise, we conclude that it is unlikely that TNT melts on its principal Hugoniot.
Efficient implementation of a 3-dimensional ADI method on the iPSC/860
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Wijngaart, R.F.
1993-12-31
A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.
Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato
2018-05-01
We propose a method to decompose the normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, uncertainty in frequency appears, and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate the physical properties of the normal modes.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objectives of this paper are to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details, while reducing redundant details, artifacts, and distortions.
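The PCA stage of such a fusion pipeline admits a compact sketch: weight each co-registered source image by the leading eigenvector of their 2 × 2 covariance. The wavelet stage and the maximum fusion rule of the paper are omitted, and the input arrays are stand-ins.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two co-registered images with weights from the principal
    eigenvector of their 2x2 covariance matrix (spatial-domain PCA)."""
    cov = np.cov(np.stack([img_a.ravel(), img_b.ravel()]))
    vec = np.linalg.eigh(cov)[1][:, -1]        # leading eigenvector
    w = np.abs(vec) / np.abs(vec).sum()        # normalized fusion weights
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(0)
mri = rng.random((128, 128))                   # stand-ins for MRI and CT slices
ct = 0.5 * mri + 0.5 * rng.random((128, 128))
fused = pca_fuse(mri, ct)
```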
An intelligent user interface for browsing satellite data catalogs
NASA Technical Reports Server (NTRS)
Cromp, Robert F.; Crook, Sharon
1989-01-01
A large-scale, domain-independent spatial data management expert system that serves as a front end to databases containing spatial data is described. This system is unique for two reasons. First, it uses spatial search techniques to generate a list of all the primary keys that fall within a user's spatial constraints prior to invoking the database management system, thus substantially decreasing the amount of time required to answer a user's query. Second, a domain-independent query expert system uses a domain-specific rule base to preprocess the user's English query, effectively mapping a broad class of queries into a smaller subset that can be handled by a commercial natural language processing system. The methods used by the spatial search module and the query expert system are explained, and the system architecture for the spatial data management expert system is described. The system is applied to data from the International Ultraviolet Explorer (IUE) satellite, and results are given.
Detection of the secondary meridional circulation associated with the quasi-biennial oscillation
NASA Astrophysics Data System (ADS)
Ribera, P.; Peña-Ortiz, C.; Garcia-Herrera, R.; Gallego, D.; Gimeno, L.; Hernández, E.
2004-09-01
The quasi-biennial oscillation (QBO) signal in stratospheric zonal and meridional wind, temperature, and geopotential height fields is analyzed based on the use of the National Centers for Environmental Prediction (NCEP) reanalysis (1958-2001). The multitaper method-singular value decomposition (MTM-SVD), a multivariate frequency domain analysis method, is used to detect significant and spatially coherent narrowband oscillations. The QBO is found as the most intense signal in the stratospheric zonal wind. Then, the MTM-SVD method is used to determine the patterns induced by the QBO at every stratospheric level and data field. The secondary meridional circulation associated with the QBO is identified in the obtained patterns. This circulation can be characterized by negative (positive) temperature anomalies associated with adiabatic rising (sinking) motions over zones of easterly (westerly) wind shear and over the subtropics and midlatitudes, while meridional convergence and divergence levels are found separated by a level of maximum zonal wind shear. These vertical and meridional motions form quasi-symmetric circulation cells over both hemispheres, though less intense in the Southern Hemisphere.
Performance of a parallel thermal-hydraulics code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fann, G.I.; Trent, D.S.
The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production-quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserved algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST. BiCGStab also solves the coupled thermal energy equation. Scaling performance for 3 problem sizes (221220 nodes, 358120 nodes, and 701220 nodes) is presented. These problems were run on 2 different parallel machines: IBM-SP and SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128-processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
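A SciPy analogue of the solver configuration described here, diagonally preconditioned BiCGStab, is sketched below; the 2-D Laplacian test matrix is a stand-in and not TEMPEST's actual pressure discretization.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab

# Stand-in sparse system: 2-D Laplacian on an n-by-n grid.
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsr()
b = np.ones(A.shape[0])

# Diagonal (Jacobi) preconditioner.
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x)

x, info = bicgstab(A, b, M=M)
print(info)   # 0 signals convergence
```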
A spectral-finite difference solution of the Navier-Stokes equations in three dimensions
NASA Astrophysics Data System (ADS)
Alfonsi, Giancarlo; Passoni, Giuseppe; Pancaldo, Lea; Zampaglione, Domenico
1998-07-01
A new computational code for the numerical integration of the three-dimensional Navier-Stokes equations in their non-dimensional velocity-pressure formulation is presented. The system of non-linear partial differential equations governing the time-dependent flow of a viscous incompressible fluid in a channel is managed by means of a mixed spectral-finite difference method, in which different numerical techniques are applied: Fourier decomposition is used along the homogeneous directions, second-order Crank-Nicolson algorithms are employed for the spatial derivatives in the direction orthogonal to the solid walls and a fourth-order Runge-Kutta procedure is implemented for both the calculation of the convective term and the time advancement. The pressure problem, cast in the Helmholtz form, is solved with the use of a cyclic reduction procedure. No-slip boundary conditions are used at the walls of the channel and cyclic conditions are imposed at the other boundaries of the computing domain. Results are provided for different values of the Reynolds number at several time steps of integration and are compared with results obtained by other authors.
Dual-domain point diffraction interferometer
Naulleau, Patrick P.; Goldberg, Kenneth Alan
2000-01-01
A hybrid spatial/temporal-domain point diffraction interferometer (referred to as the dual-domain PS/PDI) that is capable of suppressing the scattered-reference-light noise that hinders the conventional PS/PDI is provided. The dual-domain PS/PDI combines the separate noise-suppression capabilities of the widely-used phase-shifting and Fourier-transform fringe pattern analysis methods. The dual-domain PS/PDI relies on both a more restrictive implementation of the image plane PS/PDI mask and a new analysis method to be applied to the interferograms generated and recorded by the modified PS/PDI. The more restrictive PS/PDI mask guarantees the elimination of spatial-frequency crosstalk between the signal and the scattered-light noise arising from scattered-reference-light interfering with the test beam. The new dual-domain analysis method is then used to eliminate scattered-light noise arising from both the scattered-reference-light interfering with the test beam and the scattered-reference-light interfering with the "true" pinhole-diffracted reference light. The dual-domain analysis method has also been demonstrated to provide performance enhancement when using the non-optimized standard PS/PDI design. The dual-domain PS/PDI is essentially a three-tiered filtering system composed of lowpass spatial-filtering the test-beam electric field using the more restrictive PS/PDI mask, bandpass spatial-filtering the individual interferogram irradiance frames making up the phase-shifting series, and bandpass temporal-filtering the phase-shifting series as a whole.
NASA Astrophysics Data System (ADS)
Mokrý, Pavel; Psota, Pavel; Steiger, Kateřina; Václavík, Jan; Vápenka, David; Doleček, Roman; Vojtíšek, Petr; Sládek, Juraj; Lédl, Vít.
2016-11-01
We report on the development and implementation of digital holographic tomography for three-dimensional (3D) observations of domain patterns in ferroelectric single crystals. Ferroelectric materials represent a group of materials whose macroscopic dielectric, electromechanical, and elastic properties are greatly influenced by the presence of domain patterns. Understanding the role of domain patterns in the aforementioned properties requires experimental techniques that allow precise 3D measurement of the spatial distribution of ferroelectric domains in the single crystal. Unfortunately, such techniques are rather limited at this time. The most frequently used technique, piezoelectric atomic force microscopy, allows 2D observations on the ferroelectric sample surface. Optical methods based on birefringence measurements provide parameters of the domain patterns averaged over the sample volume. In this paper, we analyze the possibility that the spatial distribution of the ferroelectric domains can be obtained by means of the measurement of the wavefront deformation of the transmitted optical wave. We demonstrate that the spatial distribution of the ferroelectric domains can be determined by means of the measurement of the spatial distribution of the refractive index. Finally, it is demonstrated that measurements of wavefront deformations generated in ferroelectric polydomain systems with small variations of the refractive index provide data which can be further processed by means of conventional tomographic methods.
The interaction of process and domain in prefrontal cortex during inductive reasoning
Babcock, Laura; Vallesi, Antonino
2015-01-01
Inductive reasoning is an everyday process that allows us to make sense of the world by creating rules from a series of instances. Consistent with accounts of process-based fractionations of the prefrontal cortex (PFC) along the left–right axis, inductive reasoning has been reliably localized to left PFC. However, these results may be confounded by the task domain, which is typically verbal. Indeed, some studies show that right PFC activation is seen with spatial tasks. This study used fMRI to examine the effects of process and domain on the brain regions recruited during a novel pattern discovery task. Twenty healthy young adult participants were asked to discover the rule underlying the presentation of a series of letters in varied spatial locations. The rules were either verbal (pertaining to a single semantic category) or spatial (geometric figures). Bilateral ventrolateral PFC activations were seen for the spatial domain, while the verbal domain showed only left ventrolateral PFC. A conjunction analysis revealed that the two domains recruited a common region of left ventrolateral PFC. The data support a central role of left PFC in inductive reasoning. Importantly, they also suggest that both process and domain shape the localization of reasoning in the brain. PMID:25498406
The interaction of process and domain in prefrontal cortex during inductive reasoning.
Babcock, Laura; Vallesi, Antonino
2015-01-01
Inductive reasoning is an everyday process that allows us to make sense of the world by creating rules from a series of instances. Consistent with accounts of process-based fractionations of the prefrontal cortex (PFC) along the left-right axis, inductive reasoning has been reliably localized to left PFC. However, these results may be confounded by the task domain, which is typically verbal. Indeed, some studies show that right PFC activation is seen with spatial tasks. This study used fMRI to examine the effects of process and domain on the brain regions recruited during a novel pattern discovery task. Twenty healthy young adult participants were asked to discover the rule underlying the presentation of a series of letters in varied spatial locations. The rules were either verbal (pertaining to a single semantic category) or spatial (geometric figures). Bilateral ventrolateral PFC activations were seen for the spatial domain, while the verbal domain showed only left ventrolateral PFC. A conjunction analysis revealed that the two domains recruited a common region of left ventrolateral PFC. The data support a central role of left PFC in inductive reasoning. Importantly, they also suggest that both process and domain shape the localization of reasoning in the brain. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Interaction dynamics of multiple autonomous mobile robots in bounded spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
The Latent Structure of Spatial Skills and Mathematics: A Replication of the Two-Factor Model
ERIC Educational Resources Information Center
Mix, Kelly S.; Levine, Susan C.; Cheng, Yi-Lang; Young, Christopher J.; Hambrick, David Z.; Konstantopoulos, Spyros
2017-01-01
In a previous study, Mix et al. (2016) reported that spatial skill and mathematics were composed of 2 highly correlated, domain-specific factors, with a few cross-domain loadings. The overall structure was consistent across grade (kindergarten, 3rd grade, 6th grade), but the cross-domain loadings varied with age. The present study sought to…
Optimal mapping of irregular finite element domains to parallel processors
NASA Technical Reports Server (NTRS)
Flower, J.; Otto, S.; Salama, M.
1987-01-01
Mapping a solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
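A compact sketch of this idea, annealing an element-to-processor assignment whose cost combines cut edges and workload imbalance, is given below; the move set, penalty weight, and cooling schedule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def anneal_partition(adj, n_parts, n_steps=20000, t0=1.0, cool=0.9995, seed=0):
    """Anneal a random element-to-processor assignment: the cost penalizes
    cut edges (communication) plus workload imbalance (load balance)."""
    rng = np.random.default_rng(seed)
    part = rng.integers(n_parts, size=adj.shape[0])

    def cost(p):
        cut = np.sum(adj * (p[:, None] != p[None, :])) / 2.0
        counts = np.bincount(p, minlength=n_parts)
        return cut + 0.5 * np.sum((counts - len(p) / n_parts) ** 2)

    c, t = cost(part), t0
    for _ in range(n_steps):
        i = rng.integers(len(part))
        old = part[i]
        part[i] = rng.integers(n_parts)          # trial move: reassign one element
        c_new = cost(part)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if c_new <= c or rng.random() < np.exp((c - c_new) / t):
            c = c_new
        else:
            part[i] = old
        t *= cool
    return part
```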
The Effects of City Streets on an Urban Disease Vector
Barbu, Corentin M.; Hong, Andrew; Manne, Jennifer M.; Small, Dylan S.; Quintanilla Calderón, Javier E.; Sethuraman, Karthik; Quispe-Machaca, Víctor; Ancca-Juárez, Jenny; Cornejo del Carpio, Juan G.; Málaga Chavez, Fernando S.; Náquira, César; Levy, Michael Z.
2013-01-01
With increasing urbanization, vector-borne diseases are quickly developing in cities, and urban control strategies are needed. If streets are shown to be barriers to disease vectors, city blocks could be used as a convenient and relevant spatial unit of study and control. Unfortunately, existing spatial analysis tools do not allow for assessment of the impact of an urban grid on the presence of disease agents. Here, we first propose a method to test for the significance of the impact of streets on vector infestation based on a decomposition of Moran's spatial autocorrelation index; and second, develop a Gaussian Field Latent Class model to finely describe the effect of streets while controlling for cofactors and imperfect detection of vectors. We apply these methods to cross-sectional data of infestation by the Chagas disease vector Triatoma infestans in the city of Arequipa, Peru. Our Moran's decomposition test reveals that the distribution of T. infestans in this urban environment is significantly constrained by streets (p<0.05). With the Gaussian Field Latent Class model we confirm that streets provide a barrier against infestation and further show that greater than 90% of the spatial component of the probability of vector presence is explained by the correlation among houses within city blocks. The city block is thus likely to be an appropriate spatial unit to describe and control T. infestans in an urban context. Characteristics of the urban grid can influence the spatial dynamics of vector-borne disease and should be considered when designing public health policies. PMID:23341756
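The Moran's I ingredient of the first method can be sketched as follows; the `blocks` labeling and random neighbor matrix are hypothetical, and the split of the weight matrix into within-block and across-street parts only gestures at the paper's decomposition, whose significance test is not reproduced.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I of values x under a spatial weight matrix w."""
    z = x - x.mean()
    return len(x) * (w * np.outer(z, z)).sum() / (w.sum() * (z ** 2).sum())

# Hypothetical data: infestation indicator per house, block id per house.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200).astype(float)
blocks = rng.integers(0, 20, size=200)
w = rng.random((200, 200)) < 0.05      # stand-in neighbor matrix
w = w | w.T                            # symmetrize
np.fill_diagonal(w, False)

same_block = blocks[:, None] == blocks[None, :]
i_within = morans_i(x, w * same_block)   # autocorrelation within blocks
i_across = morans_i(x, w * ~same_block)  # autocorrelation across streets
print(i_within, i_across)
```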
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. The spectral domain is incorporated as an explanatory feature space for classification accuracy interpolation for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domains yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
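One plausible reading of the spatial-domain, Gaussian-function, all-classes variant is sketched below; the coordinates, agreement labels, and bandwidth are invented for illustration, and a real evaluation would hold out the predicted pixels rather than reuse the training sample.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def gaussian_accuracy_map(xy_train, correct, xy_eval, h):
    """Per-pixel accuracy prediction: Gaussian-kernel smoothing of binary
    map/reference agreement over spatial coordinates."""
    d2 = ((xy_eval[:, None, :] - xy_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / h ** 2)
    return (w * correct).sum(1) / w.sum(1)

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10_000.0, size=(500, 2))   # test-sample pixel coords (m)
correct = rng.integers(0, 2, size=500)           # 1 = map agreed with reference
pred = gaussian_accuracy_map(xy, correct, xy, h=500.0)
print(roc_auc_score(correct, pred))              # AUC of the accuracy map
```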
Singh, Baneshwar; Minick, Kevan J.; Strickland, Michael S.; Wickings, Kyle G.; Crippen, Tawni L.; Tarone, Aaron M.; Benbow, M. Eric; Sufrin, Ness; Tomberlin, Jeffery K.; Pechal, Jennifer L.
2018-01-01
As vertebrate carrion decomposes, there is a release of nutrient-rich fluids into the underlying soil, which can impact associated biological community structure and function. How these changes alter soil biogeochemical cycles is relatively unknown and may prove useful in the identification of carrion decomposition islands that have long lasting, focal ecological effects. This study investigated the spatial (0, 1, and 5 m) and temporal (3–732 days) dynamics of human cadaver decomposition on soil bacterial and arthropod community structure and microbial function. We observed strong evidence of a predictable response to cadaver decomposition that varies over space for soil bacterial and arthropod community structure, carbon (C) mineralization and microbial substrate utilization patterns. In the presence of a cadaver (i.e., 0 m samples), the relative abundance of Bacteroidetes and Firmicutes was greater, while the relative abundance of Acidobacteria, Chloroflexi, Gemmatimonadetes, and Verrucomicrobia was lower when compared to samples at 1 and 5 m. Micro-arthropods were more abundant (15 to 17-fold) in soils collected at 0 m compared to either 1 or 5 m, but overall, micro-arthropod community composition was unrelated to either bacterial community composition or function. Bacterial community structure and microbial function also exhibited temporal relationships, whereas arthropod community structure did not. Cumulative precipitation was more effective in predicting temporal variations in bacterial abundance and microbial activity than accumulated degree days. In the presence of the cadaver (i.e., 0 m samples), the relative abundance of Actinobacteria increased significantly with cumulative precipitation. Furthermore, soil bacterial communities and C mineralization were sensitive to the introduction of human cadavers as they diverged from baseline levels and did not recover completely in approximately 2 years. These data are valuable for understanding ecosystem function surrounding carrion decomposition islands and can be applicable to environmental bio-monitoring and forensic sciences. PMID:29354106
Singh, Baneshwar; Minick, Kevan J; Strickland, Michael S; Wickings, Kyle G; Crippen, Tawni L; Tarone, Aaron M; Benbow, M Eric; Sufrin, Ness; Tomberlin, Jeffery K; Pechal, Jennifer L
2017-01-01
As vertebrate carrion decomposes, there is a release of nutrient-rich fluids into the underlying soil, which can impact associated biological community structure and function. How these changes alter soil biogeochemical cycles is relatively unknown and may prove useful in the identification of carrion decomposition islands that have long lasting, focal ecological effects. This study investigated the spatial (0, 1, and 5 m) and temporal (3-732 days) dynamics of human cadaver decomposition on soil bacterial and arthropod community structure and microbial function. We observed strong evidence of a predictable response to cadaver decomposition that varies over space for soil bacterial and arthropod community structure, carbon (C) mineralization and microbial substrate utilization patterns. In the presence of a cadaver (i.e., 0 m samples), the relative abundance of Bacteroidetes and Firmicutes was greater, while the relative abundance of Acidobacteria, Chloroflexi, Gemmatimonadetes, and Verrucomicrobia was lower when compared to samples at 1 and 5 m. Micro-arthropods were more abundant (15 to 17-fold) in soils collected at 0 m compared to either 1 or 5 m, but overall, micro-arthropod community composition was unrelated to either bacterial community composition or function. Bacterial community structure and microbial function also exhibited temporal relationships, whereas arthropod community structure did not. Cumulative precipitation was more effective in predicting temporal variations in bacterial abundance and microbial activity than accumulated degree days. In the presence of the cadaver (i.e., 0 m samples), the relative abundance of Actinobacteria increased significantly with cumulative precipitation. Furthermore, soil bacterial communities and C mineralization were sensitive to the introduction of human cadavers as they diverged from baseline levels and did not recover completely in approximately 2 years. These data are valuable for understanding ecosystem function surrounding carrion decomposition islands and can be applicable to environmental bio-monitoring and forensic sciences.
3D Staggered-Grid Finite-Difference Simulation of Acoustic Waves in Turbulent Moving Media
NASA Astrophysics Data System (ADS)
Symons, N. P.; Aldridge, D. F.; Marlin, D.; Wilson, D. K.; Sullivan, P.; Ostashev, V.
2003-12-01
Acoustic wave propagation in a three-dimensional heterogeneous moving atmosphere is accurately simulated with a numerical algorithm recently developed under the DOD Common High Performance Computing Software Support Initiative (CHSSI). Sound waves within such a dynamic environment are mathematically described by a set of four, coupled, first-order partial differential equations governing small-amplitude fluctuations in pressure and particle velocity. The system is rigorously derived from fundamental principles of continuum mechanics, ideal-fluid constitutive relations, and reasonable assumptions that the ambient atmospheric motion is adiabatic and divergence-free. An explicit, time-domain, finite-difference (FD) numerical scheme is used to solve the system for both pressure and particle velocity wavefields. The atmosphere is characterized by 3D gridded models of sound speed, mass density, and the three components of the wind velocity vector. Dependent variables are stored on staggered spatial and temporal grids, and centered FD operators possess 2nd-order and 4th-order space/time accuracy. Accurate sound wave simulation is achieved provided grid intervals are chosen appropriately. The gridding must be fine enough to reduce numerical dispersion artifacts to an acceptable level and maintain stability. The algorithm is designed to execute on parallel computational platforms by utilizing a spatial domain-decomposition strategy. Currently, the algorithm has been validated on four different computational platforms, and parallel scalability of approximately 85% has been demonstrated. Comparisons with analytic solutions for uniform and vertically stratified wind models indicate that the FD algorithm generates accurate results with either a vanishing pressure or vanishing vertical-particle velocity boundary condition. Simulations are performed using a kinematic turbulence wind profile developed with the quasi-wavelet method. In addition, preliminary results are presented using high-resolution 3D dynamic turbulent flowfields generated by a large-eddy simulation model of a stably stratified planetary boundary layer. Sandia National Laboratories is operated by Sandia Corporation, a Lockheed Martin Company, for the USDOE under contract 94-AL85000.
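Stripped of the ambient wind terms and reduced to one dimension, the staggered-grid update at the core of such a scheme is the small leapfrog loop below; grid spacing, time step, and medium values are illustrative.

```python
import numpy as np

# 1-D staggered-grid acoustics: pressure p on integer nodes, particle
# velocity v on half nodes, leapfrogged in time (2nd-order space/time).
nx, dx, dt, nt = 400, 1.0, 2.0e-4, 1500   # CFL number c*dt/dx ~ 0.07
c = np.full(nx, 340.0)                    # sound speed (m/s)
rho = np.full(nx, 1.2)                    # mass density (kg/m^3)
kappa = rho * c ** 2                      # adiabatic bulk modulus

p = np.zeros(nx)
v = np.zeros(nx - 1)
p[nx // 2] = 1.0                          # initial pressure pulse

for _ in range(nt):
    # dv/dt = -(1/rho) dp/dx, with rho averaged to the half nodes.
    v -= dt / (0.5 * (rho[1:] + rho[:-1])) * (p[1:] - p[:-1]) / dx
    # dp/dt = -kappa dv/dx; endpoint pressures stay at zero
    # (a vanishing-pressure boundary condition).
    p[1:-1] -= dt * kappa[1:-1] * (v[1:] - v[:-1]) / dx
```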
Wang, B; Switowski, K; Cojocaru, C; Roppo, V; Sheng, Y; Scalora, M; Kisielewski, J; Pawlak, D; Vilaseca, R; Akhouayri, H; Krolikowski, W; Trull, J
2018-01-22
We present an indirect, non-destructive optical method for domain statistics characterization in disordered nonlinear crystals having a homogeneous refractive index and a spatially random distribution of ferroelectric domains. This method relies on the analysis of the wavelength-dependent spatial distribution of the second harmonic in the plane perpendicular to the optical axis, in combination with numerical simulations. We apply this technique to the characterization of two different media, Calcium Barium Niobate and Strontium Barium Niobate, with drastically different statistical distributions of ferroelectric domains.
Power spectral density of Markov texture fields
NASA Technical Reports Server (NTRS)
Shanmugan, K. S.; Holtzman, J. C.
1984-01-01
Texture is an important image characteristic. A variety of spatial domain techniques have been proposed for extracting and utilizing textural features to segment and classify images. For the most part, these spatial domain techniques are ad hoc in nature. A Markov random field model for image texture is discussed. A frequency domain description of image texture is derived in terms of the power spectral density. This model is used for designing optimum frequency domain filters for enhancing, restoring, and segmenting images based on their textural properties.
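For the simplest first-order Markov (AR(1)) texture line the power spectral density has a closed form, which a periodogram check recovers in a few lines; the correlation coefficient and record length below are illustrative.

```python
import numpy as np

# AR(1) Markov line: x[n] = rho*x[n-1] + e[n], with unit-variance noise.
# Closed-form PSD: S(f) = 1 / (1 - 2*rho*cos(2*pi*f) + rho**2).
rho, n = 0.9, 1 << 14
rng = np.random.default_rng(0)
e = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = rho * x[i - 1] + e[i]

f = np.fft.rfftfreq(n)
periodogram = np.abs(np.fft.rfft(x)) ** 2 / n
theory = 1.0 / (1.0 - 2.0 * rho * np.cos(2.0 * np.pi * f) + rho ** 2)
print(np.mean(periodogram / theory))   # ~1: the periodogram scatters around S(f)
```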
The neural basis of novelty and appropriateness in processing of creative chunk decomposition.
Huang, Furong; Fan, Jin; Luo, Jing
2015-06-01
Novelty and appropriateness have been recognized as the fundamental features of creative thinking. However, the brain mechanisms underlying these features remain largely unknown. In this study, we used event-related functional magnetic resonance imaging (fMRI) to dissociate these mechanisms in a revised creative chunk decomposition task in which participants were required to perform different types of chunk decomposition that systematically varied in novelty and appropriateness. We found that novelty processing involved functional areas for procedural memory (caudate), mental rewarding (substantia nigra, SN), and visual-spatial processing, whereas appropriateness processing was mediated by areas for declarative memory (hippocampus), emotional arousal (amygdala), and orthography recognition. These results indicate that non-declarative and declarative memory systems may jointly contribute to the two fundamental features of creative thinking. Copyright © 2015 Elsevier Inc. All rights reserved.
Influence of Heat Treatments on Microstructure and Magnetic Domains in Duplex Stainless Steel S31803
NASA Astrophysics Data System (ADS)
Dille, Jean; Pacheco, Clara Johanna; Camerini, Cesar Giron; Malet, Loic Charles; Nysten, Bernard; Pereira, Gabriela Ribeiro; De Almeida, Luiz Henrique; Alcoforado Rebello, João Marcos
2018-06-01
The influence of heat treatments on microstructure and magnetic domains in duplex stainless steel S31803 is studied using an innovative structural characterization protocol. Electron backscatter diffraction (EBSD) maps as well as magnetic force microscopy (MFM) images acquired on the same region of the sample, before and after heat treatment, are compared. The influence of heat treatments on the phase volumetric fractions is studied, and several structural modifications after heat treatment are highlighted. Three different mechanisms for the decomposition of ferrite into sigma phase and secondary austenite are observed during annealing at 800 °C. MFM analysis reveals that a variety of magnetic domain patterns can exist in one ferrite grain.
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.
Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-11-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
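Assuming the tensorly library is available, a minimal non-negative CP (PARAFAC) run on a synthetic trials × neurons × time spike-count tensor looks like the following; it is a simplified stand-in for the space-by-time decompositions the paper actually compares.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Synthetic spike-count tensor: 60 trials x 40 neurons x 50 time bins.
rng = np.random.default_rng(0)
spikes = tl.tensor(rng.poisson(1.0, size=(60, 40, 50)).astype(float))

# Rank-3 non-negative factorization: per-trial coefficients,
# spatial firing patterns, and temporal firing patterns.
weights, factors = non_negative_parafac(spikes, rank=3, n_iter_max=200)
trial_coef, spatial_patterns, temporal_patterns = factors
print(spatial_patterns.shape, temporal_patterns.shape)   # (40, 3) (50, 3)
```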
Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains
Onken, Arno; Liu, Jian K.; Karunasekara, P. P. Chamanthi R.; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano
2016-01-01
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding. PMID:27814363
Confinement in F4 Exceptional Gauge Group Using Domain Structures
NASA Astrophysics Data System (ADS)
Rafibakhsh, Shahnoosh; Shahlaei, Amir
2017-03-01
We calculate the potential between static quarks in the fundamental representation of the F4 exceptional gauge group using domain structures of the thick center vortex model. As non-trivial center elements are absent, the asymptotic string tension is lost while an intermediate linear potential is observed. SU(2) is a subgroup of F4. Investigating the decomposition of the 26-dimensional representation of F4 into SU(2) representations might explain what accounts for the intermediate linear potential in exceptional groups with no center elements.
Supra-domains: evolutionary units larger than single protein domains.
Vogel, Christine; Berzuini, Carlo; Bashton, Matthew; Gough, Julian; Teichmann, Sarah A
2004-02-20
Domains are the evolutionary units that comprise proteins, and most proteins are built from more than one domain. Domains can be shuffled by recombination to create proteins with new arrangements of domains. Using structural domain assignments, we examined the combinations of domains in the proteins of 131 completely sequenced organisms. We found two-domain and three-domain combinations that recur in different protein contexts with different partner domains. The domains within these combinations have a particular functional and spatial relationship. These units are larger than individual domains and we term them "supra-domains". Amongst the supra-domains, we identified some 1400 (1203 two-domain and 166 three-domain) combinations that are statistically significantly over-represented relative to the occurrence and versatility of the individual component domains. Over one-third of all structurally assigned multi-domain proteins contain these over-represented supra-domains. This means that investigation of the structural and functional relationships of the domains forming these popular combinations would be particularly useful for an understanding of multi-domain protein function and evolution as well as for genome annotation. These and other supra-domains were analysed for their versatility, duplication, their distribution across the three kingdoms of life and their functional classes. By examining the three-dimensional structures of several examples of supra-domains in different biological processes, we identify two basic types of spatial relationships between the component domains: the combined function of the two domains is such that either the geometry of the two domains is crucial and there is a tight constraint on the interface, or the precise orientation of the domains is less important and they are spatially separate. Frequently, the role of the supra-domain becomes clear only once the three-dimensional structure is known. Since this is the case for only a quarter of the supra-domains, we provide a list of the most important unknown supra-domains as potential targets for structural genomics projects.
A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA
NASA Astrophysics Data System (ADS)
Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan
2016-11-01
The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that exploit additional sparsity priors. However, most conventional methods exploit only sparsity priors in the spatial domain. When the CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses spatial and Radon domain regularization models based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, aims at exploiting the sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm suppresses artifacts and preserves details better than algorithms using a spatial domain regularization model alone. Quantitative evaluations also indicate that the proposed learning-based algorithm performs better than dual-domain algorithms without a learned regularization model.
Stabilization and control of distributed systems with time-dependent spatial domains
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1990-01-01
This paper considers the problem of the stabilization and control of distributed systems with time-dependent spatial domains. The evolution of the spatial domains with time is described by a finite-dimensional system of ordinary differential equations, while the distributed systems are described by first-order or second-order linear evolution equations defined on appropriate Hilbert spaces. First, results pertaining to the existence and uniqueness of solutions of the system equations are presented. Then, various optimal control and stabilization problems are considered. The paper concludes with some examples which illustrate the application of the main results.
The Structure of Working Memory Abilities across the Adult Life Span
Hale, Sandra; Rose, Nathan S.; Myerson, Joel; Strube, Michael J; Sommers, Mitchell; Tye-Murray, Nancy; Spehar, Brent
2010-01-01
The present study addresses three questions regarding age differences in working memory: (1) whether performance on complex span tasks decreases as a function of age at a faster rate than performance on simple span tasks; (2) whether spatial working memory decreases at a faster rate than verbal working memory; and (3) whether the structure of working memory abilities is different for different age groups. Adults, ages 20–89 (n=388), performed three simple and three complex verbal span tasks and three simple and three complex spatial memory tasks. Performance on the spatial tasks decreased at faster rates as a function of age than performance on the verbal tasks, but within each domain, performance on complex and simple span tasks decreased at the same rates. Confirmatory factor analyses revealed that domain-differentiated models yielded better fits than models involving domain-general constructs, providing further evidence of the need to distinguish verbal and spatial working memory abilities. Regardless of which domain-differentiated model was examined, and despite the faster rates of decrease in the spatial domain, age group comparisons revealed that the factor structure of working memory abilities was highly similar in younger and older adults and showed no evidence of age-related dedifferentiation. PMID:21299306
Spatial and temporal characteristics of elevated temperatures in municipal solid waste landfills.
Jafari, Navid H; Stark, Timothy D; Thalhamer, Todd
2017-01-01
Elevated temperatures in waste containment facilities can pose health, environmental, and safety risks because they generate toxic gases, pressures, leachate, and heat. In particular, MSW landfills undergo changes in behavior that typically follow a progression of indicators, e.g., elevated temperatures, changes in gas composition, elevated gas pressures, increased leachate migration, slope movement, and unusual and rapid surface settlement. This paper presents two MSW landfill case studies that show the spatial and time-lapse movements of these indicators and identify four zones that illustrate the transition of normal MSW decomposition to the region of elevated temperatures. The spatial zones are gas front, temperature front, and smoldering front. The gas wellhead temperature and the ratio of CH4 to CO2 are used to delineate the boundaries between normal MSW decomposition, gas front, and temperature front. The ratio of CH4 to CO2 and carbon monoxide concentrations along with settlement strain rates and subsurface temperatures are used to delineate the smoldering front. In addition, downhole temperatures can be used to estimate the rate of movement of elevated temperatures, which is important for isolating and containing the elevated temperature in a timely manner. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spatial Distribution of Resonance in the Velocity Field for Transonic Flow over a Rectangular Cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beresh, Steven J.; Wagner, Justin L.; Casper, Katya M.
Pulse-burst particle image velocimetry (PIV) has been used to acquire time-resolved data at 37.5 kHz of the flow over a finite-width rectangular cavity at Mach 0.8. Power spectra of the PIV data reveal four resonance modes that match the frequencies detected simultaneously using high-frequency wall pressure sensors but whose magnitudes exhibit spatial dependence throughout the cavity. Spatio-temporal cross-correlations of velocity to pressure were calculated after bandpass filtering for specific resonance frequencies. Cross-correlation magnitudes express the distribution of resonance energy, revealing local maxima and minima at the edges of the shear layer attributable to wave interference between downstream- and upstream-propagating disturbances. Turbulence intensities were calculated using a triple decomposition and are greatest in the core of the shear layer for higher modes, where resonant energies ordinarily are lower. Most of the energy for the lowest mode lies in the recirculation region and results principally from turbulence rather than resonance. Together, the velocity-pressure cross-correlations and the triple-decomposition turbulence intensities explain the sources of energy identified in the spatial distributions of power spectra amplitudes.
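For a single velocity record, the bandpass-based separation described above reduces to the sketch below; the resonance frequency, filter order, and random stand-in signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Triple decomposition of one PIV velocity record sampled at 37.5 kHz:
# u(t) = U (time mean) + u_coh (resonance band) + u_turb (remainder).
fs, f0 = 37_500.0, 1_900.0            # f0: hypothetical resonance mode (Hz)
u = np.random.randn(60_000)           # stand-in velocity time series

U = u.mean()
sos = butter(4, [0.9 * f0, 1.1 * f0], btype="bandpass", fs=fs, output="sos")
u_coh = sosfiltfilt(sos, u - U)       # coherent (resonant) fluctuation
u_turb = u - U - u_coh                # broadband turbulent remainder
```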
Spatial Distribution of Resonance in the Velocity Field for Transonic Flow over a Rectangular Cavity
Beresh, Steven J.; Wagner, Justin L.; Casper, Katya M.; ...
2017-07-27
Pulse-burst particle image velocimetry (PIV) has been used to acquire time-resolved data at 37.5 kHz of the flow over a finite-width rectangular cavity at Mach 0.8. Power spectra of the PIV data reveal four resonance modes that match the frequencies detected simultaneously using high-frequency wall pressure sensors but whose magnitudes exhibit spatial dependence throughout the cavity. Spatio-temporal cross-correlations of velocity to pressure were calculated after bandpass filtering for specific resonance frequencies. Cross-correlation magnitudes express the distribution of resonance energy, revealing local maxima and minima at the edges of the shear layer attributable to wave interference between downstream- and upstream-propagating disturbances. Turbulence intensities were calculated using a triple decomposition and are greatest in the core of the shear layer for higher modes, where resonant energies ordinarily are lower. Most of the energy for the lowest mode lies in the recirculation region and results principally from turbulence rather than resonance. Together, the velocity-pressure cross-correlations and the triple-decomposition turbulence intensities explain the sources of energy identified in the spatial distributions of power spectra amplitudes.
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and do not utilize the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least-squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transition region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least-squares method is used to remove spectral noise and artificial noise that may have been introduced during the spatial denoising. With the filter window width selected appropriately according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial-only or spectral-only methods, while better preserving local spectral absorption features.
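The spectral-domain stage maps directly onto SciPy's Savitzky-Golay smoother; the cube dimensions and window width below are illustrative, with the window chosen from prior knowledge as the abstract prescribes.

```python
import numpy as np
from scipy.signal import savgol_filter

cube = np.random.rand(128, 128, 224)   # stand-in rows x cols x bands cube

# Cubic Savitzky-Golay (local least-squares) smoothing along each pixel's
# spectrum; window_length encodes the prior knowledge about feature width.
smoothed = savgol_filter(cube, window_length=11, polyorder=3, axis=2)
```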
Decoding rule search domain in the left inferior frontal gyrus
Babcock, Laura; Vallesi, Antonino
2018-01-01
Traditionally, the left hemisphere has been thought to extract mainly verbal patterns of information, but recent evidence has shown that the left Inferior Frontal Gyrus (IFG) is active during inductive reasoning in both the verbal and spatial domains. We aimed to understand whether the left IFG supports inductive reasoning in a domain-specific or domain-general fashion. To do this we used Multi-Voxel Pattern Analysis to decode the representation of domain during a rule search task. Thirteen participants were asked to extract the rule underlying streams of letters presented in different spatial locations. Each rule was either verbal (letters forming words) or spatial (positions forming geometric figures). Our results show that domain was decodable in the left prefrontal cortex, suggesting that this region represents domain-specific information, rather than processes common to the two domains. A replication study with the same participants tested two years later confirmed these findings, though the individual representations changed, providing evidence for the flexible nature of representations. This study extends our knowledge on the neural basis of goal-directed behaviors and on how information relevant for rule extraction is flexibly mapped in the prefrontal cortex. PMID:29547623
Age-related similarities and differences in monitoring spatial cognition.
Ariel, Robert; Moffat, Scott D
2018-05-01
Spatial cognitive performance is impaired in later adulthood but it is unclear whether the metacognitive processes involved in monitoring spatial cognitive performance are also compromised. Inaccurate monitoring could affect whether people choose to engage in tasks that require spatial thinking and also the strategies they use in spatial domains such as navigation. The current experiment examined potential age differences in monitoring spatial cognitive performance in a variety of spatial domains including visual-spatial working memory, spatial orientation, spatial visualization, navigation, and place learning. Younger and older adults completed a 2D mental rotation test, 3D mental rotation test, paper folding test, spatial memory span test, two virtual navigation tasks, and a cognitive mapping test. Participants also made metacognitive judgments of performance (confidence judgments, judgments of learning, or navigation time estimates) on each trial for all spatial tasks. Preference for allocentric or egocentric navigation strategies was also measured. Overall, performance was poorer and confidence in performance was lower for older adults than younger adults. In most spatial domains, the absolute and relative accuracy of metacognitive judgments was equivalent for both age groups. However, age differences in monitoring accuracy (specifically relative accuracy) emerged in spatial tasks involving navigation. Confidence in navigating for a target location also mediated age differences in allocentric navigation strategy use. These findings suggest that with the possible exception of navigation monitoring, spatial cognition may be spared from age-related decline even though spatial cognition itself is impaired in older age.
Discussion summary: Fictitious domain methods
NASA Technical Reports Server (NTRS)
Glowinski, Rowland; Rodrigue, Garry
1991-01-01
Fictitious domain methods are constructed in the following manner: Suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
Rosetti, Carla M; Mangiarotti, Agustín; Wilke, Natalia
2017-05-01
In model lipid membranes with phase coexistence, domain sizes distribute in a very wide range, from the nanometer (reported in vesicles and supported films) to the micrometer (observed in many model membranes). Domain growth by coalescence and Ostwald ripening is slow (minutes to hours), the domain size being correlated with the size of the capture region. Domain sizes thus strongly depend on the number of domains which, in the case of a nucleation process, depends on the oversaturation of the system, on line tension and on the perturbation rate in relation to the membrane dynamics. Here, an overview is given of the factors that affect nucleation or spinodal decomposition and domain growth, and their influence on the distribution of domain sizes in different model membranes is discussed. The parameters analyzed respond to very general physical rules, and we therefore propose a similar behavior for the rafts in the plasma membrane of cells, but with obstructed mobility and with a continuously changing environment. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Furthermore, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition in each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the results are evaluated subjectively and objectively. The subjective and objective evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
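The Lloyd/CVT half of the method (without the Voronoi-particle relaxation toward load balance) fits in a few lines with a k-d tree; the generator count, sample count, and unit-square domain are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(generators, samples, n_iter=50):
    """Monte Carlo Lloyd iterations: assign samples to the nearest
    generator, then move each generator to its region's centroid.
    Each sweep monotonically decreases the (sampled) CVT energy."""
    g = generators.copy()
    for _ in range(n_iter):
        owner = cKDTree(g).query(samples)[1]   # nearest-generator labels
        for k in range(len(g)):
            pts = samples[owner == k]
            if len(pts):
                g[k] = pts.mean(axis=0)
    return g

rng = np.random.default_rng(0)
generators = lloyd_cvt(rng.random((16, 2)), rng.random((20_000, 2)))
```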
Vectorial finite elements for solving the radiative transfer equation
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.
2018-06-01
The discrete ordinates method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase in scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large radiation problems on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.
NASA Astrophysics Data System (ADS)
Chang, Der-Chen; Markina, Irina; Wang, Wei
2016-09-01
The k-Cauchy-Fueter operator D0(k) on the one-dimensional quaternionic space H is the Euclidean version of the spin-k/2 massless field operator on the Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined, and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D0(k) u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ¹(k)(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite dimensional, while the second cohomology group is always trivial.
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation-solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and to exploit modern computer architecture, such as memory and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate performance on ORIGIN 2000 processors and on a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments, and that it is significantly faster and consumes less memory than a formulation based on one of the best available commercial parallel sparse solvers.
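A minimal sketch of the mixed direct/iterative idea described above: subdomain interiors are eliminated with a sparse direct factorization, while the interface Schur-complement system is solved iteratively. The matrix layout, the index sets, and the toy 1D Poisson example are assumptions for illustration, not the paper's actual formulation.

```python
# Sketch (assumptions: SPD matrix, index sets chosen so the interior block
# decouples across subdomains; SuperLU + CG as the direct/iterative pair).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def dd_solve(A, b, interior, interface):
    A = A.tocsr()
    Aii = A[interior][:, interior].tocsc()
    Aib = A[interior][:, interface]
    Abi = A[interface][:, interior]
    Abb = A[interface][:, interface]
    lu = spla.splu(Aii)                      # direct: factor subdomain interiors
    S = spla.LinearOperator(
        (len(interface), len(interface)), dtype=float,
        matvec=lambda x: Abb @ x - Abi @ lu.solve(Aib @ x))  # Schur action
    rhs = b[interface] - Abi @ lu.solve(b[interior])
    xb, _ = spla.cg(S, rhs)                  # iterative: interface solve
    xi = lu.solve(b[interior] - Aib @ xb)
    x = np.empty_like(b, dtype=float)
    x[interior], x[interface] = xi, xb
    return x

n = 20
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
interior = np.r_[0:8, 12:20]     # two decoupled subdomain interiors
interface = np.arange(8, 12)     # separator unknowns
x = dd_solve(A, b, interior, interface)
print(np.linalg.norm(A @ x - b))  # small residual
```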
A physics-motivated Centroidal Voronoi Particle domain decomposition method
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing models, both of the corresponding sub-systems and of the system as a whole, that are adequate and at the same time as simple as possible. In recent works, two new methods for decomposing the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linearly expanded vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents a correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for the linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls much more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode time scale. In this report we combine the two methods in such a way that the resulting algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to support the construction of adequate and at the same time simplest ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. 
Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
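A minimal sketch of the MSSA building block referenced in the abstract above: lagged copies of a multichannel series are stacked into a block trajectory matrix, whose SVD yields spatio-temporal EOFs. The window length, layout conventions, and synthetic two-channel signal are assumptions for illustration.

```python
# Sketch (assumptions: channels-by-time input, window length M, no
# centering or rotation of the ST-EOFs).
import numpy as np

def mssa(X, window):
    """Return MSSA singular values and ST-EOFs of a multichannel series."""
    c, n = X.shape
    k = n - window + 1
    # block trajectory matrix: lagged copies of every channel, stacked
    T = np.vstack([np.stack([X[ch, i:i + window] for i in range(k)], axis=1)
                   for ch in range(c)])
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return s, U            # columns of U are spatio-temporal EOFs

rng = np.random.default_rng(0)
t = np.arange(500)
X = np.stack([np.sin(0.1 * t), np.sin(0.1 * t + 1.0)])
X = X + 0.1 * rng.standard_normal(X.shape)
s, U = mssa(X, window=60)
print(s[:4] / s.sum())     # a leading pair captures the shared oscillation
```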
3-D modeling of ductile tearing using finite elements: Computational aspects and techniques
NASA Astrophysics Data System (ADS)
Gullerud, Arne Stewart
This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process. The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
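The load-balanced scheduling described above rests on coloring a dependency graph so that same-color work items can be processed concurrently. Below is a minimal greedy sketch, assuming a heaviest-first ordering as a stand-in for the thesis's actual weighted coloring algorithm.

```python
# Sketch (assumptions: adjacency as a dict of sets, heaviest-first greedy
# ordering; not the thesis's coloring algorithm).
def greedy_coloring(adj, weights=None):
    """Return a node -> color map with no two neighbors sharing a color."""
    w = weights or {}
    color = {}
    for v in sorted(adj, key=lambda u: -w.get(u, 1)):   # heavy nodes first
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_coloring(adj, weights={2: 5}))   # node 2 is colored first
```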
NASA Astrophysics Data System (ADS)
Shao, X. H.; Zheng, S. J.; Chen, D.; Jin, Q. Q.; Peng, Z. Z.; Ma, X. L.
2016-07-01
The high hardness or yield strength of an alloy is known to benefit from the presence of small-scale precipitates, whose hardening effect is extensively exploited in various engineering materials. Stability of the precipitates is of critical importance in maintaining the high performance of a material under mechanical loading. Long period stacking ordered (LPSO) structures play an important role in tuning the mechanical properties of Mg-alloys. Here, we report that deformation twinning induces the decomposition of lamellar LPSO structures and their re-precipitation in an Mg-Zn-Y alloy. Using atomic resolution scanning transmission electron microscopy (STEM), we directly show that misfit dislocations at the interface between the lamellar LPSO structure and the deformation twin correspond to the decomposition and re-precipitation of the LPSO structure, owing to the effect of the dislocations on the redistribution of Zn/Y atoms. This finding demonstrates that deformation twinning can destabilize complex precipitates. The occurrence of decomposition and re-precipitation, which alters the spatial distribution of the precipitates under plastic loading, may significantly affect precipitation strengthening.
Shao, X. H.; Zheng, S. J.; Chen, D.; Jin, Q. Q.; Peng, Z. Z.; Ma, X. L.
2016-01-01
The high hardness or yield strength of an alloy is known to benefit from the presence of small-scale precipitates, whose hardening effect is extensively exploited in various engineering materials. Stability of the precipitates is of critical importance in maintaining the high performance of a material under mechanical loading. Long period stacking ordered (LPSO) structures play an important role in tuning the mechanical properties of Mg-alloys. Here, we report that deformation twinning induces the decomposition of lamellar LPSO structures and their re-precipitation in an Mg-Zn-Y alloy. Using atomic resolution scanning transmission electron microscopy (STEM), we directly show that misfit dislocations at the interface between the lamellar LPSO structure and the deformation twin correspond to the decomposition and re-precipitation of the LPSO structure, owing to the effect of the dislocations on the redistribution of Zn/Y atoms. This finding demonstrates that deformation twinning can destabilize complex precipitates. The occurrence of decomposition and re-precipitation, which alters the spatial distribution of the precipitates under plastic loading, may significantly affect precipitation strengthening.
Pochikalov, A V; Karelin, D V
2014-01-01
Although many recently published original papers and reviews deal with plant matter decomposition rates and their controls, our understanding of these processes in boreal and high-latitude plant communities, especially in the permafrost areas of our planet, is still very limited. First and foremost, this holds true for the winter period. Here, we present the results of 2-year field observations in south taiga and south shrub tundra ecosystems in European Russia. We pioneered the simultaneous application of two independent methods: classic mass loss estimation by the litter-bag technique, and direct measurement of the CO2 emission (respiration) of the same litter bags with different types of dead plant matter. This approach allowed us to reconstruct the intra-seasonal dynamics of decomposition rates of the main tundra litter fractions with high temporal resolution, to estimate the contribution of different seasons and of defragmentation to plant matter decomposition, and to determine its controlling factors at different temporal scales.
Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi
2017-03-01
Existence of low-SNR regions and rapid phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, and the technique of dual decomposition is used to increase computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017.
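A minimal one-variable sketch of dual decomposition as described above: the problem is split into two subproblems sharing an overlapping variable, and a Lagrange multiplier updated by subgradient ascent enforces congruence between the two copies. The scalar quadratic objectives are assumptions for illustration, not the paper's graph-cut energies.

```python
# Sketch (assumptions: two scalar quadratic subproblems, fixed step size).
def dual_decomposition(a1, a2, iters=200, step=0.2):
    lam = 0.0
    for _ in range(iters):
        x1 = a1 - lam / 2.0      # argmin of (x1 - a1)^2 + lam * x1
        x2 = a2 + lam / 2.0      # argmin of (x2 - a2)^2 - lam * x2
        lam += step * (x1 - x2)  # subgradient ascent drives x1 == x2
    return x1, x2

print(dual_decomposition(1.0, 3.0))   # both copies converge to 2.0
```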
Inundation, vegetation, and sediment effects on litter decomposition in Pacific Coast tidal marshes
Janousek, Christopher; Buffington, Kevin J.; Guntenspergen, Glenn R.; Thorne, Karen M.; Dugger, Bruce D.; Takekawa, John Y.
2017-01-01
The cycling and sequestration of carbon are important ecosystem functions of estuarine wetlands that may be affected by climate change. We conducted experiments across a latitudinal and climate gradient of tidal marshes in the northeast Pacific to evaluate the effects of climate- and vegetation-related factors on litter decomposition. We manipulated tidal exposure and litter type in experimental mesocosms at two sites and used variation across marsh landscapes at seven sites to test for relationships between decomposition and marsh elevation, soil temperature, vegetation composition, litter quality, and sediment organic content. A greater than tenfold increase in manipulated tidal inundation resulted in small increases in decomposition of roots and rhizomes of two species, but no significant change in decay rates of shoots of three other species. In contrast, across the latitudinal gradient, decomposition rates of Salicornia pacifica litter were greater in high marsh than in low marsh. Rates were not correlated with sediment temperature or organic content, but were associated with plant assemblage structure including above-ground cover, species composition, and species richness. Decomposition rates also varied by litter type; at two sites in the Pacific Northwest, the grasses Deschampsia cespitosa and Distichlis spicata decomposed more slowly than the forb S. pacifica. Our data suggest that elevation gradients and vegetation structure in tidal marshes both affect rates of litter decay, potentially leading to complex spatial patterns in sediment carbon dynamics. Climate change may thus have direct effects on rates of decomposition through increased inundation from sea-level rise and indirect effects through changing plant community composition.
Mode Analyses of Gyrokinetic Simulations of Plasma Microturbulence
NASA Astrophysics Data System (ADS)
Hatch, David R.
This thesis presents an analysis of the excitation and role of damped modes in gyrokinetic simulations of plasma microturbulence. To address this question, mode decompositions are used to analyze gyrokinetic simulation data. A mode decomposition can be constructed by projecting a nonlinearly evolved gyrokinetic distribution function onto a set of linear eigenmodes, or alternatively by constructing a proper orthogonal decomposition (POD) of the distribution function. POD decompositions are used to examine the role of damped modes in saturating ion temperature gradient (ITG) driven turbulence. In order to identify the contribution of different modes to the energy sources and sinks, numerical diagnostics for a gyrokinetic energy quantity were developed for the GENE code. The use of these energy diagnostics in conjunction with POD mode decompositions demonstrates that ITG turbulence saturates largely through dissipation by damped modes at the same perpendicular spatial scales as those of the driving instabilities. This defines a picture of turbulent saturation that is very different from both traditional hydrodynamic scenarios and many common theories for the saturation of plasma turbulence. POD mode decompositions are also used to examine the role of subdominant modes in causing magnetic stochasticity in electromagnetic gyrokinetic simulations. It is shown that the magnetic stochasticity, which appears to be ubiquitous in electromagnetic microturbulence, is caused largely by subdominant modes with tearing parity. The application of higher-order singular value decomposition (HOSVD) to the full distribution function from gyrokinetic simulations is also presented, as an effort to demonstrate the ability to characterize and extract insight from a very large, complex, high-dimensional dataset: the 5-D (plus time) gyrokinetic distribution function.
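A minimal sketch of the POD step used throughout the thesis, assuming mean-subtracted snapshots arranged column-wise; the variance-captured criterion is one common convention for choosing the truncation rank.

```python
# Sketch (assumptions: snapshots stored column-wise, mean removed here).
import numpy as np

def pod(snapshots, r):
    """Return the leading r POD modes and the variance they capture."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    captured = (s[:r] ** 2).sum() / (s ** 2).sum()
    return U[:, :r], captured

rng = np.random.default_rng(0)
snaps = rng.standard_normal((200, 40))     # state_dim x n_snapshots
modes, frac = pod(snaps, r=5)
print(modes.shape, round(frac, 3))
```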
How Does the Sparse Memory "Engram" Neurons Encode the Memory of a Spatial-Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in the human brain is not a fixed 2-D picture but a highly dynamic movie serial, integrating information in both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to activities in discrete memory engram (trace) neurons within the dentate gyrus region of the hippocampus and layer 2/3 of the neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering that a particular memory normally represents a natural event, which consists of information in both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as optogenetically induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely that the spatial activation pattern, rather than the temporal activation pattern, of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert activities in the spatial domain into the temporal domain to reconstitute the memory of a natural event? By reviewing the literature, we present here how the memory engram (trace) neurons are selected and consolidated in the brain. We then discuss the main challenges to the memory trace theory. Finally, we provide a plausible model of the memory trace cell network underlying the conversion of neural activities between the spatial domain and the temporal domain. We also discuss how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns.
NASA Astrophysics Data System (ADS)
Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong
2017-09-01
Signal enhancement is a necessary step in seismic data processing. In this paper we utilize the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby further improve the signal-to-noise (S/N) ratio. First, the noisy original data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. Then the noisy IMFs are transformed into the CCT domain. By choosing different thresholds, based on the noise level of each IMF profile, the noise in the original data can be suppressed. Finally, we illustrate the effectiveness of the approach on simulated and field datasets.
MO-FG-204-01: Improved Noise Suppression for Dual-Energy CT Through Entropy Minimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, M; Zhu, L
2015-06-15
Purpose: In dual-energy CT (DECT), noise amplification during signal decomposition significantly limits the utility of basis material images. Since clinically relevant objects contain a limited number of materials, we propose to suppress noise in DECT based on image entropy minimization. An adaptive weighting scheme is employed during noise suppression to improve decomposition accuracy with limited effect on spatial resolution and image texture preservation. Methods: From the decomposed images, we first generate a 2D plot of scattered data points, using basis material densities as coordinates. Data points representing the same material generate a highly asymmetric cluster. We orient an axis by minimizing the entropy of a 1D histogram of these points projected onto the axis. To suppress noise, we replace pixel values of the decomposed images with center-of-mass values in the direction perpendicular to the optimal axis. To limit errors due to cluster overlap, we weight each data point's contribution based on its high- and low-energy CT values and its location within the image. The proposed method's performance is assessed in physical phantom studies, with electron density as the quality metric for decomposition accuracy. Our results are compared to those without noise suppression and to a recently developed iterative method. Results: The proposed method reduces the noise standard deviations of the decomposed images by at least one order of magnitude. On the Catphan phantom, the method largely preserves the spatial resolution and texture of the CT images and limits the induced error in measured electron density to below 1.2%. In the head phantom study, the proposed method performs best in retaining fine, intricate structures. Conclusion: The entropy minimization-based algorithm with adaptive weighting substantially reduces DECT noise while preserving image spatial resolution and texture. Future work will include extensive investigations of material decomposition accuracy beyond the current electron density calculations. This work was supported in part by the National Institutes of Health (NIH) under Grant Number R21 EB012700.
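A minimal sketch of the axis-orientation step described above: project the 2D scatter of basis-material densities onto candidate axes and keep the angle whose 1D projected histogram has minimum Shannon entropy. The angle grid and bin count are assumptions.

```python
# Sketch (assumptions: 180 candidate angles, 128 histogram bins).
import numpy as np

def min_entropy_axis(points, n_angles=180, bins=128):
    """points: (N, 2) basis-material density pairs. Return (entropy, angle)."""
    best = (np.inf, 0.0)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = points @ np.array([np.cos(theta), np.sin(theta)])
        counts, _ = np.histogram(proj, bins=bins)
        p = counts[counts > 0] / counts.sum()
        H = -(p * np.log(p)).sum()        # Shannon entropy of the projection
        if H < best[0]:
            best = (H, theta)
    return best

rng = np.random.default_rng(0)
cluster = rng.standard_normal((5000, 2)) * np.array([3.0, 0.2])  # asymmetric
print(min_entropy_axis(cluster))   # near pi/2: the narrow cluster direction
```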
SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)
NASA Astrophysics Data System (ADS)
Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin
2017-02-01
With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z < 0.14 and coverage of at least 1.5 effective radii for a spatial resolution of 2.5 arcsec full width at half-maximum and field of view of > 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.
Working Memory Systems in the Rat.
Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D
2016-02-08
A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat.
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2018-01-25
The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
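The multiple-shot efficiency argument above (factor the sparse system once, then only cheap forward/backward substitutions per source) can be illustrated with a toy 1D Helmholtz operator and scipy's SuperLU standing in for MUMPS; the discretization and parameter values are assumptions for illustration.

```python
# Sketch (assumptions: toy 1D Helmholtz operator, SuperLU instead of MUMPS).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h, k = 400, 1.0, 0.3
main = np.full(n, -2.0 / h**2 + k**2, dtype=complex)
off = np.full(n - 1, 1.0 / h**2, dtype=complex)
A = sp.diags([off, main, off], [-1, 0, 1]).tocsc()

lu = spla.splu(A)                                  # factor once (expensive)
shots = np.zeros((n, 5), dtype=complex)
shots[np.arange(40, 240, 40), np.arange(5)] = 1.0  # five point sources
fields = lu.solve(shots)                           # cheap substitutions per shot
print(fields.shape)                                # one wavefield per shot
```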
Pair correlation functions for identifying spatial correlation in discrete domains
NASA Astrophysics Data System (ADS)
Gavagnin, Enrico; Owen, Jennifer P.; Yates, Christian A.
2018-06-01
Identifying and quantifying spatial correlation are important aspects of studying the collective behavior of multiagent systems. Pair correlation functions (PCFs) are powerful statistical tools that can provide qualitative and quantitative information about correlation between pairs of agents. Despite the numerous PCFs defined for off-lattice domains, only a few recent studies have considered a PCF for discrete domains. Our work extends the study of spatial correlation in discrete domains by defining a new set of PCFs using two natural and intuitive definitions of distance for a square lattice: the taxicab and uniform metric. We show how these PCFs improve upon previous attempts and compare between the quantitative data acquired. We also extend our definitions of the PCF to other types of regular tessellation that have not been studied before, including hexagonal, triangular, and cuboidal. Finally, we provide a comprehensive PCF for any tessellation and metric, allowing investigation of spatial correlation in irregular lattices for which recognizing correlation is less intuitive.
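A minimal sketch of a taxicab-metric PCF on a square lattice in the spirit of the abstract above: observed pair counts at each distance are normalized by the counts expected if the same number of agents were placed uniformly at random. The brute-force normalization is an assumption suited only to small lattices.

```python
# Sketch (assumptions: square lattice, taxicab metric, brute-force
# normalization over all site pairs).
import numpy as np
from itertools import combinations

def taxicab_pcf(occupied, shape, rmax):
    """Observed pair counts at each distance r, divided by the counts
    expected for uniformly random placement of the same number of agents."""
    def counts(pairs):
        c = np.zeros(rmax + 1)
        for (i1, j1), (i2, j2) in pairs:
            r = abs(i1 - i2) + abs(j1 - j2)
            if r <= rmax:
                c[r] += 1
        return c

    n, N = len(occupied), shape[0] * shape[1]
    obs = counts(combinations(occupied, 2))
    sites = [(i, j) for i in range(shape[0]) for j in range(shape[1])]
    tot = counts(combinations(sites, 2))    # lattice-geometry normalization
    expected = tot * n * (n - 1) / (N * (N - 1))
    safe = np.where(expected > 0, expected, 1.0)
    return np.where(expected > 0, obs / safe, np.nan)

occ = [(2, 3), (2, 4), (3, 3), (10, 12)]
print(np.round(taxicab_pcf(occ, (16, 16), rmax=6), 2))  # >1 means clustering
```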
Preliminary frequency-domain analysis for the reconstructed spatial resolution of muon tomography
NASA Astrophysics Data System (ADS)
Yu, B.; Zhao, Z.; Wang, X.; Wang, Y.; Wu, D.; Zeng, Z.; Zeng, M.; Yi, H.; Luo, Z.; Yue, X.; Cheng, J.
2014-11-01
Muon tomography is an advanced technology to non-destructively detect high atomic number materials. It exploits the multiple Coulomb scattering information of muon to reconstruct the scattering density image of the traversed object. Because of the statistics of muon scattering, the measurement error of system and the data incompleteness, the reconstruction is always accompanied with a certain level of interference, which will influence the reconstructed spatial resolution. While statistical noises can be reduced by extending the measuring time, system parameters determine the ultimate spatial resolution that one system can reach. In this paper, an effective frequency-domain model is proposed to analyze the reconstructed spatial resolution of muon tomography. The proposed method modifies the resolution analysis in conventional computed tomography (CT) to fit the different imaging mechanism in muon scattering tomography. The measured scattering information is described in frequency domain, then a relationship between the measurements and the original image is proposed in Fourier domain, which is named as "Muon Central Slice Theorem". Furthermore, a preliminary analytical expression of the ultimate reconstructed spatial is derived, and the simulations are performed for validation. While the method is able to predict the ultimate spatial resolution of a given system, it can also be utilized for the optimization of system design and construction.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
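A minimal 1D sketch of the SPDE sampling idea described above: draw a Gaussian-field sample by solving a reaction-diffusion equation with a white-noise source on a grid. The finite-difference discretization, boundary conditions, and parameter values are assumptions, not the paper's mixed finite element formulation.

```python
# Sketch (assumptions: 1D grid, homogeneous Dirichlet boundaries, finite
# differences instead of mixed finite elements).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spde_sample(n=256, kappa=10.0, seed=0):
    """One Gaussian-field sample from (kappa^2 - Laplacian) u = white noise."""
    rng = np.random.default_rng(seed)
    h = 1.0 / n
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    A = (kappa**2) * sp.eye(n) - lap
    w = rng.standard_normal(n) / np.sqrt(h)    # discretized white noise
    return spla.spsolve(A.tocsc(), w)

print(spde_sample()[:5])   # correlated field values, new draw per seed
```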
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
Fiber optic sensing technology for detecting gas hydrate formation and decomposition.
Rawn, C J; Leeman, J R; Ulrich, S M; Alford, J E; Phelps, T J; Madden, M E
2011-02-01
A fiber optic-based distributed sensing system (DSS) has been integrated with a large volume (72 l) pressure vessel providing high spatial resolution, time-resolved, 3D measurement of hybrid temperature-strain (TS) values within experimental sediment-gas hydrate systems. Areas of gas hydrate formation (exothermic) and decomposition (endothermic) can be characterized through this proxy by time series analysis of discrete data points collected along the length of optical fibers placed within a sediment system. Data are visualized as an animation of TS values along the length of each fiber over time. Experiments conducted in the Seafloor Process Simulator at Oak Ridge National Laboratory clearly indicate hydrate formation and dissociation events at expected pressure-temperature conditions given the thermodynamics of the CH4-H2O system. The high spatial resolution achieved with fiber optic technology makes the DSS a useful tool for visualizing time-resolved formation and dissociation of gas hydrates in large-scale sediment experiments.
Fiber optic sensing technology for detecting gas hydrate formation and decomposition
NASA Astrophysics Data System (ADS)
Rawn, C. J.; Leeman, J. R.; Ulrich, S. M.; Alford, J. E.; Phelps, T. J.; Madden, M. E.
2011-02-01
A fiber optic-based distributed sensing system (DSS) has been integrated with a large volume (72 l) pressure vessel providing high spatial resolution, time-resolved, 3D measurement of hybrid temperature-strain (TS) values within experimental sediment-gas hydrate systems. Areas of gas hydrate formation (exothermic) and decomposition (endothermic) can be characterized through this proxy by time series analysis of discrete data points collected along the length of optical fibers placed within a sediment system. Data are visualized as an animation of TS values along the length of each fiber over time. Experiments conducted in the Seafloor Process Simulator at Oak Ridge National Laboratory clearly indicate hydrate formation and dissociation events at expected pressure-temperature conditions given the thermodynamics of the CH4-H2O system. The high spatial resolution achieved with fiber optic technology makes the DSS a useful tool for visualizing time-resolved formation and dissociation of gas hydrates in large-scale sediment experiments.
Separation of distinct photoexcitation species in femtosecond transient absorption microscopy
Xiao, Kai; Ma, Ying-Zhong; Simpson, Mary Jane; ...
2016-02-03
Femtosecond transient absorption microscopy is a novel chemical imaging capability with simultaneously high spatial and temporal resolution. Although several powerful data analysis approaches have been developed and successfully applied to separate distinct chemical species in such images, the application of such analysis to distinguish different photoexcited species is rare. In this paper, we demonstrate a combined approach based on phasor and linear decomposition analysis on a microscopic level that allows us to separate the contributions of both excitons and free charge carriers in the observed transient absorption response of a composite organometallic lead halide perovskite film. We found spatial regions where the transient absorption response was predominantly a result of excitons, others where it was predominantly due to charge carriers, and regions consisting of signals from both contributors. Lastly, quantitative decomposition of the transient absorption response curves further enabled us to reveal the relative contribution of each photoexcitation to the measured response at spatially resolved locations in the film.
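A hedged sketch of the linear-decomposition step: each measured transient is expressed as a non-negative mix of an exciton-like and a carrier-like reference kinetic. The biexponential references, time constants, and synthetic pixel trace are invented for illustration; scipy's nnls performs the decomposition.

```python
# Sketch (assumptions: exponential exciton/carrier reference kinetics,
# synthetic pixel trace).
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 5.0, 100)
basis = np.column_stack([np.exp(-t / 0.5),    # fast, exciton-like decay
                         np.exp(-t / 3.0)])   # slow, carrier-like decay
rng = np.random.default_rng(1)
pixel = basis @ np.array([0.7, 0.3]) + 0.01 * rng.standard_normal(t.size)
weights, _ = nnls(basis, pixel)
print(np.round(weights, 2))                   # ~[0.7, 0.3] species shares
```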
Spatial Variability in Decomposition of Organic Carbon Along a Meandering River Floodplain
NASA Astrophysics Data System (ADS)
Sutfin, N. A.; Rowland, J. C.; Tfaily, M. M.; Bingol, A. K.; Washton, N.
2017-12-01
Rivers are an important component of the terrestrial carbon cycle, and floodplains can provide significant storage of organic carbon. Quantification of long-term storage, however, requires determination of the residence time of sediment and the decomposition rate of organic carbon in floodplains. We use Fourier transform ion cyclotron resonance (FTICR) mass spectrometry to examine the organic carbon compounds present in sediment within three floodplain settings: point bars, cutbanks, and abandoned channels. We define decomposition of organic carbon in floodplain sediment as the ratio of the number of protein compounds to lignin compounds, which serve as proxies for microbial-derived and terrestrial-derived organic carbon, respectively. Samples were collected at 0-5 cm, 5-15 cm, and 15-30 cm depth along four transects that span a longitudinal valley distance of 8 km on the East River near Crested Butte, CO. Although no significant longitudinal trends in the decomposition ratio exist among the four transects, the floodplain settings exhibit significant differences. At shallow depths (0-5 cm), there are no significant differences among settings, with the exception of gravel portions of point bars below bankfull flow, where decomposition is highest. Conversely, cutbanks contain significantly lower decomposition ratios than point bars, gravel bars, and abandoned channels when all depth intervals are considered. Point bars exhibit significantly greater protein-to-lignin ratios at the surface than at depth. Higher decomposition ratios along abandoned channels and point bars suggest that frequent wetting and drying periods, abundant oxygen, and continuous downstream movement and decomposition of organic matter occur within the channel. Lower decomposition ratios and consistent trends with depth along cutbanks suggest that these stable surfaces serve as organic carbon reservoirs that could become an increased source of carbon to the channel with increasing bank erosion. Detailed differences in the organic carbon compounds in sediments of cutbanks, point bars, and abandoned channels will be examined in September 2017 using nuclear magnetic resonance (NMR).
Zhou, Rong; Basile, Franco
2017-09-05
A method based on plasmon surface resonance absorption and heating was developed to perform a rapid on-surface protein thermal decomposition and digestion suitable for imaging mass spectrometry (MS) and/or profiling. This photothermal process or plasmonic thermal decomposition/digestion (plasmonic-TDD) method incorporates a continuous wave (CW) laser excitation and gold nanoparticles (Au-NPs) to induce known thermal decomposition reactions that cleave peptides and proteins specifically at the C-terminus of aspartic acid and at the N-terminus of cysteine. These thermal decomposition reactions are induced by heating a solid protein sample to temperatures between 200 and 270 °C for a short period of time (10-50 s per 200 μm segment) and are reagentless and solventless, and thus are devoid of sample product delocalization. In the plasmonic-TDD setup the sample is coated with Au-NPs and irradiated with 532 nm laser radiation to induce thermoplasmonic heating and bring about site-specific thermal decomposition on solid peptide/protein samples. In this manner the Au-NPs act as nanoheaters that result in a highly localized thermal decomposition and digestion of the protein sample that is independent of the absorption properties of the protein, making the method universally applicable to all types of proteinaceous samples (e.g., tissues or protein arrays). Several experimental variables were optimized to maximize product yield, and they include heating time, laser intensity, size of Au-NPs, and surface coverage of Au-NPs. Using optimized parameters, proof-of-principle experiments confirmed the ability of the plasmonic-TDD method to induce both C-cleavage and D-cleavage on several peptide standards and the protein lysozyme by detecting their thermal decomposition products with matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The high spatial specificity of the plasmonic-TDD method was demonstrated by using a mask to digest designated sections of the sample surface with the heating laser and MALDI-MS imaging to map the resulting products. The solventless nature of the plasmonic-TDD method enabled the nonenzymatic on-surface digestion of proteins to proceed with undetectable delocalization of the resulting products from their precursor protein location. The advantages of this novel plasmonic-TDD method include short reaction times (<30 s/200 μm), compatibility with MALDI, universal sample compatibility, high spatial specificity, and localization of the digestion products. These advantages point to potential applications of this method for on-tissue protein digestion and MS-imaging/profiling for the identification of proteins, high-fidelity MS imaging of high molecular weight (>30 kDa) proteins, and the rapid analysis of formalin-fixed paraffin-embedded (FFPE) tissue samples.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
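A minimal noiseless sketch of the SVD precoding that the abstract above builds on: transmitting along the right singular vectors of the channel matrix H decouples the MIMO channel into parallel scalar subchannels. The 4x4 Rayleigh channel and QPSK symbols are assumptions for illustration.

```python
# Sketch (assumptions: 4x4 Rayleigh channel, QPSK symbols, no noise).
import numpy as np

rng = np.random.default_rng(2)
nt = nr = 4
H = (rng.standard_normal((nr, nt))
     + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), nt)  # QPSK
y = H @ (Vh.conj().T @ x)        # precode with the right singular vectors
x_hat = (U.conj().T @ y) / s     # channel decouples into scalar subchannels
print(np.allclose(x_hat, x))     # True: symbols recovered exactly
```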
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) and has a two-level architecture. These properties enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability and low memory consumption, enabling real-time image processing and extensive applications.
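For context, a minimal CPU sketch of the Poisson image-editing system that MDGS accelerates: solve the Poisson equation inside a mask, with Dirichlet boundary values taken from the target image. The sparse direct solve and the assumption that the mask does not touch the image border are illustrative choices, not the paper's GPU solver.

```python
# Sketch (assumptions: grayscale float images, mask strictly inside the
# border, 5-point Laplacian, sparse direct solve on the CPU).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_clone(target, guidance_lap, mask):
    """Solve lap(u) = guidance_lap inside mask; u = target elsewhere."""
    ys, xs = np.nonzero(mask)
    idx = -np.ones(target.shape, dtype=int)
    idx[ys, xs] = np.arange(len(ys))
    A = sp.lil_matrix((len(ys), len(ys)))
    b = guidance_lap[ys, xs].astype(float)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = -4.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if mask[y + dy, x + dx]:
                A[k, idx[y + dy, x + dx]] = 1.0
            else:
                b[k] -= target[y + dy, x + dx]   # Dirichlet boundary value
    u = target.astype(float).copy()
    u[ys, xs] = spla.spsolve(A.tocsc(), b)
    return u

tgt = np.zeros((8, 8)); tgt[:, 4:] = 1.0          # target with a hard edge
msk = np.zeros((8, 8), bool); msk[2:6, 2:6] = True
out = poisson_clone(tgt, np.zeros((8, 8)), msk)   # zero-gradient infill
print(np.round(out[3], 2))                        # smooth membrane interior
```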
Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures
NASA Astrophysics Data System (ADS)
Shatalov, Alexander V.
The present method is based on the Helmholtz velocity decomposition, in which the velocity is written as the sum of an irrotational component (the gradient of a potential) and a rotational component (a correction due to vorticity). Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and a NACA 0012 airfoil. The numerical results are compared against standard methods (the stream function-vorticity and SMAC methods) and data available in the literature. The results demonstrate that the proposed formulation leads to a good approximation, with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs: Filtering functions are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains, such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. The second is a multi-cut Benders decomposition approach, which enhances computational performance. Numerical studies show the effective performance of the SAA in terms of the optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach in computational time and number of iterations.
NASA Technical Reports Server (NTRS)
Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.
2014-01-01
The Moderate Resolution Imaging SpectroRadiometer (MODIS) and the Multi-angle Imaging SpectroRadiometer (MISR) provide regular aerosol observations with global coverage. It is essential to examine the coherency between space- and ground-measured aerosol parameters in representing aerosol spatial and temporal variability, especially in the climate forcing and model validation context. In this paper, we introduce Maximum Covariance Analysis (MCA), also known as Singular Value Decomposition analysis, as an effective way to compare correlated aerosol spatial and temporal patterns between satellite measurements and AERONET data. This technique not only successfully extracts the variability of major aerosol regimes but also allows the simultaneous examination of aerosol variability both spatially and temporally. More importantly, it accommodates the sparsely distributed AERONET data well, for which other spectral decomposition methods, such as Principal Component Analysis, do not yield satisfactory results. The comparison shows overall good agreement between MODIS/MISR and AERONET AOD variability. The correlations between the first three modes of the MCA results for both MODIS/AERONET and MISR/AERONET are above 0.8 for the full data set and above 0.75 for the AOD anomaly data. The correlations between MODIS and MISR modes are also quite high (greater than 0.9). We also examine the extent of spatial agreement between satellite and AERONET AOD data at selected stations. Some sites with disagreements in the MCA results, such as Kanpur, also have low spatial coherency. This should be associated partly with high AOD spatial variability and partly with uncertainties in satellite retrievals due to seasonally varying aerosol types and surface properties.
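A minimal sketch of MCA as described above: the SVD of the cross-covariance matrix between two anomaly fields yields paired spatial patterns and expansion coefficients. The array shapes and the synthetic shared signal are assumptions for illustration.

```python
# Sketch (assumptions: time-by-space anomaly matrices with the time mean
# approximately removed; r leading modes retained).
import numpy as np

def mca(X, Y, r=3):
    """SVD of the cross-covariance of two fields -> paired patterns + PCs."""
    C = X.T @ Y / (X.shape[0] - 1)          # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :r], Vt[:r].T, X @ U[:, :r], Y @ Vt[:r].T, s

rng = np.random.default_rng(0)
common = rng.standard_normal((120, 1))      # shared signal, 120 time steps
X = common @ rng.standard_normal((1, 50)) + 0.1 * rng.standard_normal((120, 50))
Y = common @ rng.standard_normal((1, 30)) + 0.1 * rng.standard_normal((120, 30))
px, py = mca(X, Y)[2:4]
print(abs(np.corrcoef(px[:, 0], py[:, 0])[0, 1]))  # leading PCs co-vary
```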
2009-01-01
Background: The isotopic composition of generalist consumers may be expected to vary in space as a consequence of spatial heterogeneity in isotope ratios, the abundance of resources, and competition. We aim to account for the spatial variation in the carbon and nitrogen isotopic composition of a generalist predatory species across a 500 ha tropical rain forest landscape. We test competing models to account for the relative influence of resources and competitors on the carbon and nitrogen isotopic enrichment of gypsy ants (Aphaenogaster araneoides), taking into account site-specific differences in baseline isotope ratios. Results: We found that 75% of the variance in the fraction of 15N in the tissue of A. araneoides was accounted for by one environmental parameter, the concentration of soil phosphorus. After taking into account landscape-scale variation in baseline resources, the most parsimonious model indicated that colony growth and leaf litter biomass accounted for nearly all of the variance in the δ15N discrimination factor, whereas the δ13C discrimination factor was most parsimoniously associated with colony size and the rate of leaf litter decomposition. There was no indication that competitor density or diversity accounted for spatial differences in the isotopic composition of gypsy ants. Conclusion: Across a 500 ha landscape, soil phosphorus accounted for spatial variation in baseline nitrogen isotope ratios. The δ15N discrimination factor of a higher-order consumer in this food web was structured by bottom-up influences: the quantity and decomposition rate of leaf litter. Stable isotope studies of the trophic biology of consumers may benefit from an explicitly spatial design to account for edaphic properties that alter the baseline at fine spatial grains.
Within outlying mean indexes: refining the OMI analysis for the realized niche decomposition.
Karasiewicz, Stéphane; Dolédec, Sylvain; Lefebvre, Sébastien
2017-01-01
The ecological niche concept has regained interest under environmental change (e.g., climate change, eutrophication, and habitat destruction), especially for studying the impacts on niche shift and conservatism. Here, we propose the within outlying mean indexes (WitOMI), which refine the outlying mean index (OMI) analysis by using its properties in combination with the K-select analysis species marginality decomposition. The purpose is to decompose the ecological niche into subniches associated with the experimental design, i.e., taking into account temporal and/or spatial subsets. WitOMI emphasize the habitat conditions that contribute (1) to the definition of species' niches using all available conditions and, at the same time, (2) to the delineation of species' subniches according to given subsets of dates or sites. The latter aspect allows addressing niche dynamics by highlighting the influence of atypical habitat conditions on species at a given time and/or space. Then, (3) the biological constraint exerted on the species subniche becomes observable within Euclidean space as the difference between the existing fundamental subniche and the realized subniche. We illustrate the decomposition of published OMI analyses using spatial and temporal examples. The species assemblage's subniches are compared along the same environmental gradient, producing a more accurate and precise description of the assemblage niche distribution under environmental change. The WitOMI calculations are available in the open-access R package "subniche."
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high-sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity but have typically been computationally prohibitive for samples of practical size or resolution. Existing NNLS methods use several techniques to keep the computation tractable. In the past, reducing computation time typically meant reducing sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to compute only interactions above a threshold, which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, so named because it is a "dynamite" non-negative least squares algorithm, which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved, as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples, we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
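The TNT algorithm itself is not spelled out in this abstract, but the spatial-domain inverse problem it accelerates is a standard non-negative least squares problem. A minimal scipy sketch follows, with a random matrix G standing in for the real dipole-to-sensor lead-field matrix (all sizes hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_meas, n_src = 200, 50                      # hypothetical problem size
G = rng.standard_normal((n_meas, n_src))     # forward (lead-field) matrix
m_true = np.abs(rng.standard_normal(n_src))  # non-negative source moments
b = G @ m_true + 1e-3 * rng.standard_normal(n_meas)  # noisy field map

# Non-negative least squares: minimize ||G m - b||_2 subject to m >= 0
m_est, res_norm = nnls(G, b)
```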
Wavelet-domain de-noising of OCT images of human brain malignant glioma
NASA Astrophysics Data System (ADS)
Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.
2018-04-01
We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves decomposing the OCT image with the direct fast wavelet transform, thresholding the obtained wavelet spectrum, and applying the inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e., malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising to be a prospective tool for improved characterization of biological tissue using OCT.
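The decompose-threshold-reconstruct pipeline described here can be sketched with PyWavelets; the basis (db4), level, and soft-threshold rule below are illustrative placeholders rather than the optimal filter the authors selected:

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=3, k=3.0):
    """Soft-threshold the detail subbands of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # robust noise estimate (MAD) from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = k * sigma
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```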
Spectral element method for elastic and acoustic waves in frequency domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min
Numerical techniques in time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, the medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, the domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in frequency domain.
NASA Astrophysics Data System (ADS)
Kumar, Ashok; Nunley, Hayden; Marino, Alberto
2016-05-01
Quantum noise reduction (QNR) below the standard quantum limit (SQL) has been a subject of interest for the past two to three decades due to its wide range of applications in quantum metrology and quantum information processing. To date, most of the attention has focused on the study of QNR in the temporal domain. However, many areas in quantum optics, specifically in quantum imaging, could benefit from QNR not only in the temporal domain but also in the spatial domain. With the use of a high-quantum-efficiency electron-multiplying charge-coupled device (EMCCD) camera, we have observed spatial QNR below the SQL in bright narrowband twin light beams generated through a four-wave mixing (FWM) process in hot rubidium atoms. Owing to momentum conservation in this process, the twin beams are momentum-correlated. This leads to spatial quantum correlations and spatial QNR. Our preliminary results show a spatial QNR of over 2 dB with respect to the SQL. Unlike previous results on spatial QNR with faint and broadband photon pairs from parametric down-conversion (PDC), we demonstrate spatial QNR with spectrally and spatially narrowband bright light beams. The results obtained will be useful for quantum protocols based on atom-light interaction and for quantum imaging. Work supported by the W.M. Keck Foundation.
NASA Astrophysics Data System (ADS)
Miehe, Gerhard; Lauterbach, Stefan; Kleebe, Hans-Joachim; Gurlo, Aleksander
2013-02-01
High-resolution transmission electron microscopy (HR-TEM) is used to study, in situ, spatially resolved decomposition in individual nanocrystals of metal hydroxides and oxyhydroxides. This case study reports on the decomposition of indium hydroxide (c-In(OH)3) to bixbyite-type indium oxide (c-In2O3). The electron beam is focused onto a single cube-shaped In(OH)3 crystal of {100} morphology with ca. 35 nm edge length, and a sequence of HR-TEM images is recorded during electron beam irradiation. The frame-by-frame analysis of video sequences allows for the in situ, time-resolved observation of the shape and orientation of the transformed crystals, which in turn enables the evaluation of the kinetics of c-In2O3 crystallization. Supplementary material (video of the transformation) related to this article can be found online at 10.1016/j.jssc.2012.09.022. After irradiation the shape of the parent cube-shaped crystal is preserved; however, its linear dimension (edge length) is reduced by a factor of 1.20. The corresponding spotted selected area electron diffraction (SAED) pattern representing zone [001] of c-In(OH)3 is transformed into a diffuse, strongly textured, ring-like pattern of c-In2O3, indicating that the transformed cube is no longer a single crystal but has disintegrated into individual c-In2O3 domains about 5-10 nm in size. An induction time of approximately 15 s is estimated from the time-resolved Fourier transforms. The volume fraction of the transformed phase (c-In2O3), calculated from the shrinkage of the parent c-In(OH)3 crystal in the recorded HR-TEM images, is used as a measure of the kinetics of c-In2O3 crystallization within the framework of the Avrami-Erofeev formalism. The Avrami exponent of ~3 is characteristic of a reaction mechanism with fast nucleation at the beginning of the reaction and subsequent three-dimensional growth of nuclei at a constant growth rate. The structural transformation path in the reconstructive decomposition of c-In(OH)3 to c-In2O3 is discussed in terms of (i) the displacement of hydrogen atoms, which breaks the hydrogen bonds between OH groups of [In(OH)6] octahedra and finally destabilizes them, and (ii) the transformation of the vertex-shared indium-oxygen octahedra in c-In(OH)3 into vertex- and edge-shared octahedra in c-In2O3.
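As a worked illustration of the kinetics analysis, the fraction-transformed curve can be fitted to the Avrami-Erofeev form by nonlinear least squares. The time and fraction values below are hypothetical; only the functional form and the roughly 15 s induction time follow the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n, t0=15.0):
    """Avrami-Erofeev kinetics with a fixed induction time t0 (seconds)."""
    tt = np.clip(t - t0, 0.0, None)
    return 1.0 - np.exp(-(k * tt) ** n)

# hypothetical irradiation times (s) and transformed volume fractions
t = np.array([20, 40, 60, 90, 120, 180, 240], dtype=float)
alpha = np.array([0.05, 0.22, 0.45, 0.71, 0.86, 0.97, 0.99])

(k_fit, n_fit), _ = curve_fit(avrami, t, alpha, p0=(0.01, 3.0),
                              bounds=([1e-6, 0.5], [1.0, 6.0]))
# n_fit near 3 would indicate fast nucleation followed by
# three-dimensional growth at constant rate, as reported above.
```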
NASA Astrophysics Data System (ADS)
Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire
2017-02-01
In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with high electronic noise. In order to improve the image quality, Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.
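A short sketch of the Walsh-Hadamard coding idea only (not the full DORT filtering or TFM imaging chain): all elements fire simultaneously with orthogonal ±1 codes, and decoding with the transpose recovers the single-element responses with an averaging gain in SNR. Sizes and signals below are synthetic:

```python
import numpy as np
from scipy.linalg import hadamard

n_el = 16                              # array elements (power of two)
H = hadamard(n_el)                     # rows are +/-1 Walsh-Hadamard codes

rng = np.random.default_rng(1)
S = rng.standard_normal((n_el, 1024))  # hypothetical single-element responses
R = H @ S + 0.1 * rng.standard_normal((n_el, 1024))  # coded, noisy receptions

# H @ H.T = n_el * I, so decoding is a transpose; noise is averaged down.
S_est = H.T @ R / n_el
```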
USDA-ARS?s Scientific Manuscript database
The spatial frequency domain imaging technique has recently been developed for determination of the optical properties of food and biological materials. However, accurate estimation of the optical property parameters by the technique is challenging due to measurement errors associated with signal acquis...
Wide-field high spatial frequency domain imaging of tissue microstructure
NASA Astrophysics Data System (ADS)
Lin, Weihao; Zeng, Bixin; Cao, Zili; Zhu, Danfeng; Xu, M.
2018-02-01
Wide-field tissue imaging is usually not capable of resolving tissue microstructure. We present High Spatial Frequency Domain Imaging (HSFDI), a noncontact imaging modality that spatially maps the tissue microscopic scattering structures over a large field of view. Based on an analytical reflectance model of sub-diffusive light from forward-peaked, highly scattering media, HSFDI quantifies the spatially resolved parameters of the light scattering phase function from the reflectance of structured light modulated at high spatial frequencies. We have demonstrated the method on ex vivo cancerous tissue, validating the robustness of HSFDI in providing significant contrast and differentiation of the microstructural parameters between different types and disease states of tissue.
Framework for computing the spatial coherence effects of polycapillary x-ray optics
Zysk, Adam M.; Schoonover, Robert W.; Xu, Qiaofeng; Anastasio, Mark A.
2012-01-01
Despite the extensive use of polycapillary x-ray optics for focusing and collimating applications, there remains a significant need for characterization of the coherence properties of the output wavefield. In this work, we present the first quantitative computational method for calculation of the spatial coherence effects of polycapillary x-ray optical devices. This method employs the coherent mode decomposition of an extended x-ray source, geometric optical propagation of individual wavefield modes through a polycapillary device, output wavefield calculation by ray data resampling onto a uniform grid, and the calculation of spatial coherence properties by way of the spectral degree of coherence. PMID:22418154
Landsat analysis of tropical forest succession employing a terrain model
NASA Technical Reports Server (NTRS)
Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.
1980-01-01
Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis for quantitatively estimating the general spatial variation in the data set, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis, using both sets of data, has suggested that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both elevation and MSS data.
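A polynomial trend surface of the kind used here is an ordinary least-squares fit of a bivariate polynomial to the spatial data; subtracting it isolates the local variation. A minimal numpy sketch, assuming coordinate arrays x, y and values z:

```python
import numpy as np

def trend_surface(x, y, z, degree=5):
    """Fit a bivariate polynomial trend surface z ~ f(x, y) by least squares."""
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]      # i + j <= degree
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    trend = A @ coef
    return coef, terms, z - trend    # residuals carry the local variation
```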
Aging and the Effects of Exploratory Behavior on Spatial Memory.
Varner, Kaitlin M; Dopkins, Stephen; Philbeck, John W
2016-03-01
The present research examined the effect of encoding from multiple viewpoints on scene recall in a group of younger (18-22 years) and older (65-80 years) adults. Participants completed a visual search task, during which they were given the opportunity to examine a room using two sets of windows that partitioned the room differently. Their choice of window set was recorded, to determine whether an association between these choices and spatial memory performance existed. Subsequently, participants were tested for spatial memory of the domain in which the search task was completed. Relative to younger adults, older adults demonstrated an increased tendency to use a single set of windows as well as decreased spatial memory for the domain. Window-set usage was associated with spatial memory, such that older adults who relied more heavily on a single set of windows also had better performance on the spatial memory task. These findings suggest that, in older adults, moderation in exploratory behavior may have a positive effect on memory for the domain of exploration. © The Author(s) 2016.
NASA Astrophysics Data System (ADS)
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axis of elliptical domains changes with time; these subsequently develop homothetically, and the rate ratio is then only pressure dependent. The influence of temperature is inferred from that of pressure. Polygonal shapes are time dependent and evolve into ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement, and the elliptical shape by that of the solid's tensorial properties. The influence of those factors might be modulated by pressure, temperature, interface extent, and, thus, time.
A multi-domain spectral method for time-fractional differential equations
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.
2015-07-01
This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
Jammed Limit of Bijel Structure Formation
Welch, P. M.; Lee, M. N.; Parra-Vasquez, A. N. G.; ...
2017-11-02
Over the past decade, methods to control microstructure in heterogeneous mixtures by arresting spinodal decomposition via the addition of colloidal particles have led to an entirely new class of bicontinuous materials known as bijels. We present a new model for the development of these materials that yields to both numerical and analytical evaluation. This model reveals that a single dimensionless parameter that captures both chemical and environmental variables dictates the dynamics and ultimate structure formed in bijels. We also demonstrate that this parameter must fall within a fixed range in order for jamming to occur during spinodal decomposition, as well as show that known experimental trends for the characteristic domain sizes and time scales for formation are recovered by this model.
Subspace-based interference removal methods for a multichannel biomagnetic sensor array.
Sekihara, Kensuke; Nagarajan, Srikantan S
2017-10-01
In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time-domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently proposed dual signal subspace projection. Our analysis using the notion of the time-domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
Subspace-based interference removal methods for a multichannel biomagnetic sensor array
NASA Astrophysics Data System (ADS)
Sekihara, Kensuke; Nagarajan, Srikantan S.
2017-10-01
Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
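The spatial-domain signal space projection at the root of these methods amounts to projecting sensor data onto the orthogonal complement of the span of the interference lead-field vectors. A minimal numpy sketch, with shapes assumed for illustration:

```python
import numpy as np

def ssp_remove(data, interference_leadfields):
    """Project multichannel data away from the interference subspace.

    data : (n_sensors, n_samples) measured sensor array output
    interference_leadfields : (n_sensors, k) basis of the interference span
    """
    B = interference_leadfields
    # orthogonal projector onto the complement of span(B)
    P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return P @ data
```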
Using home buyers' revealed preferences to define the urban-rural fringe
NASA Astrophysics Data System (ADS)
Lesage, James P.; Charles, Joni S.
2008-03-01
The location of new homes defines the urban-rural fringe and determines many facets of the urban-rural interaction set in motion by construction of new homes in previously rural areas. Home, neighborhood, and school district characteristics play a crucial role in determining the spatial location of new residential construction, which in turn defines the boundary and spatial extent of the urban-rural fringe. We develop and apply a spatial hedonic variant of the Blinder (J Hum Resour 8:436-455, 1973) and Oaxaca (Int Econ Rev 9:693-709, 1973) price decomposition to newer versus older home sales in the Columbus, Ohio metropolitan area during the year 2000. The preferences of buyers of newer homes are compared to those who purchased the nearest neighboring older home located in the same census block group, during the same year. Use of the nearest older home purchased in the same location represents a methodology to control for various neighborhood, socioeconomic-demographic, and school district characteristics that influence home prices. Since newer homes reflect current preferences for home characteristics while older homes reflect past preferences for these characteristics, we use the price differentials between newer and older home sales in the Blinder-Oaxaca decomposition to assess the relative significance of various house characteristics to home buyers.
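For concreteness, the two-fold Blinder-Oaxaca decomposition splits the mean (log) price gap into a part explained by different characteristics (endowments) and a part explained by different implicit prices (coefficients). A numpy sketch under standard OLS assumptions; the X matrices are assumed to carry an intercept column, and all names are illustrative:

```python
import numpy as np

def oaxaca_blinder(X_new, y_new, X_old, y_old):
    """Two-fold Blinder-Oaxaca decomposition of the mean log-price gap.

    X_* : (n, p) hedonic characteristics (first column of ones)
    y_* : (n,) log sale prices
    """
    b_new, *_ = np.linalg.lstsq(X_new, y_new, rcond=None)
    b_old, *_ = np.linalg.lstsq(X_old, y_old, rcond=None)
    xbar_new, xbar_old = X_new.mean(axis=0), X_old.mean(axis=0)
    endowments = (xbar_new - xbar_old) @ b_new      # characteristics part
    coefficients = xbar_old @ (b_new - b_old)       # preferences/prices part
    gap = y_new.mean() - y_old.mean()               # = endowments + coefficients
    return gap, endowments, coefficients
```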
Investigation of coherent structures in a superheated jet using decomposition methods
NASA Astrophysics Data System (ADS)
Sinha, Avick; Gopalakrishnan, Shivasubramanian; Balasubramanian, Sridhar
2016-11-01
A superheated turbulent jet, commonly encountered in many engineering flows, is a complex two-phase mixture of liquid and vapor. The superposition of temporally and spatially evolving coherent vortical motions, known as coherent structures (CS), governs the dynamics of such a jet. Both proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are employed to analyze such vortical motions. PIV data are used in conjunction with the decomposition methods to analyze the CS in the flow. The experiments were conducted using water emanating into a tank containing homogeneous fluid at ambient conditions. Three inlet pressures were employed in the study, all at a fixed inlet temperature. 90% of the total kinetic energy in the mean flow is contained within the first five modes. The scatterplot for any two POD coefficients predominantly showed a circular distribution, representing a strong connection between the two modes. We speculate that the velocity and vorticity contours of the spatial POD basis functions show the presence of Kelvin-Helmholtz (K-H) instability in the flow. From DMD, eigenvalues away from the origin are observed for all the cases, indicating the presence of a non-oscillatory structure. Spatial structures are also obtained from DMD. The authors are grateful to the Confederation of Indian Industry and General Electric India Pvt. Ltd. for partial funding of this project.
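The POD step used here is equivalent to an SVD of the mean-subtracted snapshot matrix, and the singular values give the energy fractions behind statements like "90% of the kinetic energy in the first five modes." A minimal numpy sketch:

```python
import numpy as np

def pod(snapshots):
    """POD of a snapshot matrix (n_points, n_snapshots) of PIV velocity data.

    Returns spatial modes, temporal coefficients, and the fraction of
    fluctuating kinetic energy captured by each mode.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)          # e.g. energy[:5].sum() ~ 0.9
    coeffs = np.diag(s) @ Vt              # temporal coefficients a_k(t)
    return U, coeffs, energy
```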
What Matters in Education: A Decomposition of Educational Outcomes with Multiple Measures
ERIC Educational Resources Information Center
Li, Jinjing; Miranti, Riyana; Vidyattama, Yogi
2017-01-01
Significant variations in educational outcomes across both the spatial and socioeconomic spectra in Australia have been widely debated by policymakers in recent years. This paper examines these variations and decomposes educational outcomes into 3 major input factors: availability of school resources, socioeconomic background, and a latent factor…
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
Edge Triggered Apparatus and Method for Measuring Strain in Bragg Gratings
NASA Technical Reports Server (NTRS)
Froggatt, Mark E. (Inventor)
2003-01-01
An apparatus and method for measuring strain of gratings written into an optical fiber. Optical radiation is transmitted over one or more contiguous predetermined wavelength ranges into a reference optical fiber network and an optical fiber network under test to produce a plurality of reference interference fringes and measurement interference fringes, respectively. The reference and measurement fringes are detected, and the reference fringes trigger the sampling of the measurement fringes. This results in the measurement fringes being sampled at 2π increments of the reference fringes. Each sampled measurement fringe of each wavelength sweep is transformed into a spatial domain waveform. The spatial domain waveforms are summed to form a summation spatial domain waveform that is used to determine the location of each grating with respect to a reference reflector. A portion of each spatial domain waveform that corresponds to a particular grating is determined and transformed into a corresponding frequency spectrum representation. The strain on the grating at each wavelength of optical radiation is determined by computing the difference between the current wavelength and an earlier, zero-strain wavelength measurement.
SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, R; Carson, J
2014-06-15
Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance (the square of the difference of signal intensity) between all pairs of the 8×8×8 blocks was calculated. Variograms (graphs of distance vs. variance) were made, and data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000-block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in time as compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
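The variogram computation being timed here is compact to state: take half the squared intensity differences of all block pairs, bin them by separation distance, and fit a power law to the binned curve. A scipy sketch over the octree block centers (binning choices are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def variogram_exponent(centers, values, n_bins=20):
    """Empirical variogram of block-mean intensities plus a power-law fit.

    centers : (n_blocks, 3) octree block coordinates
    values  : (n_blocks,) mean HU of each block
    """
    h = pdist(centers)                                       # pair distances
    gamma = 0.5 * pdist(values[:, None], metric="sqeuclidean")
    edges = np.linspace(h.min(), h.max(), n_bins + 1)
    idx = np.digitize(h, edges) - 1
    hb, gb = [], []
    for i in range(n_bins):                                  # bin averages
        m = idx == i
        if m.any():
            hb.append(h[m].mean())
            gb.append(gamma[m].mean())
    hb, gb = np.array(hb), np.array(gb)
    (c, expo), _ = curve_fit(lambda x, c, e: c * x**e, hb, gb, p0=(1.0, 1.0))
    return hb, gb, expo
```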
López-Mondéjar, Rubén; Zühlke, Daniela; Becher, Dörte; Riedel, Katharina; Baldrian, Petr
2016-01-01
Evidence shows that bacteria contribute actively to the decomposition of cellulose and hemicellulose in forest soil; however, their role in this process is still unclear. Here we performed the screening and identification of bacteria showing potential cellulolytic activity from litter and organic soil of a temperate oak forest. The genomes of three cellulolytic isolates previously described as abundant in this ecosystem were sequenced, and their proteomes were characterized during growth on plant biomass and on microcrystalline cellulose. Pedobacter and Mucilaginibacter showed complex enzymatic systems containing highly diverse carbohydrate-active enzymes for the degradation of cellulose and hemicellulose, which were functionally redundant for endoglucanases, β-glucosidases, endoxylanases, β-xylosidases, mannosidases, and carbohydrate-binding modules. Luteibacter did not express any glycosyl hydrolases traditionally recognized as cellulases. Instead, cellulose decomposition was likely performed by an expressed GH23 family protein containing a cellulose-binding domain. Interestingly, both plant lignocellulose and crystalline cellulose trigger the production of a wide set of hydrolytic proteins including cellulases, hemicellulases, and other glycosyl hydrolases. Our findings highlight the extensive and unexplored structural diversity of enzymatic systems in cellulolytic soil bacteria and indicate the roles of multiple abundant bacterial taxa in the decomposition of cellulose and other plant polysaccharides. PMID:27125755
High performance Python for direct numerical simulations of turbulent flows
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter
2016-06-01
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython that is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and the 3D parallel Fast Fourier Transforms (FFTs). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW, or any other serial FFT module), NumPy array manipulations, and MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
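In the spirit of the compact slab transform the paper highlights, here is a minimal mpi4py sketch of a forward 3D FFT with slab decomposition (complex transform only, without the paper's rfft bookkeeping or pencil variant); it assumes N is divisible by the number of ranks and would be launched with, e.g., mpiexec -n 4 python fft_slab.py:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
P = comm.Get_size()
N = 16                       # global grid size, assumed divisible by P
Np = N // P                  # slab thickness owned by each rank

def fftn_mpi(u):
    """Forward 3D FFT of a slab-decomposed complex array u of shape (Np, N, N)."""
    U = np.fft.fft2(u, axes=(1, 2))                           # local y-z FFTs
    U = U.reshape(Np, P, Np, N).transpose(1, 0, 2, 3).copy()  # chunk the y axis
    comm.Alltoall(MPI.IN_PLACE, U)                            # swap slab axes
    return np.fft.fft(U.reshape(N, Np, N), axis=0)            # FFT along x

u = np.random.random((Np, N, N)).astype(complex)
u_hat = fftn_mpi(u)
```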
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Pi2 detection using Empirical Mode Decomposition (EMD)
NASA Astrophysics Data System (ADS)
Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz
2017-04-01
Empirical Mode Decomposition has been used as an alternative to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes. Their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of this method to geomagnetic time series.
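A sketch of the detection idea, assuming the third-party PyEMD package ("EMD-signal" on PyPI) and a synthetic magnetometer trace; screening IMFs by their zero-crossing period is a simplification of the time-frequency analysis described above:

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD ("EMD-signal") package

# synthetic 1 Hz magnetometer trace: slow trend + 80 s "Pi2-like" burst + noise
t = np.arange(0.0, 600.0, 1.0)
burst = np.exp(-((t - 300.0) / 60.0) ** 2) * np.sin(2 * np.pi * t / 80.0)
signal = 0.01 * t + burst + 0.1 * np.random.default_rng(2).standard_normal(t.size)

imfs = EMD().emd(signal, t)          # intrinsic mode functions, fast to slow
for k, imf in enumerate(imfs):
    zc = np.count_nonzero(np.diff(np.sign(imf)))   # zero crossings
    period = 2.0 * t[-1] / max(zc, 1)              # crude mean period
    if 40.0 <= period <= 150.0:
        print(f"IMF {k}: mean period ~ {period:.0f} s, inside the Pi2 band")
```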
Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick
2018-01-01
Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilizes SfM models to elucidate specific post-depositional events that affected a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection purposes. Over a period of three months, a single-lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition process of each cadaver. These images were processed through photogrammetric software that creates a 3D model that can be measured, manipulated, and viewed. This project used photogrammetric and geospatial software to map changes in decomposition and movement of the body from original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable tool for the documentation of decomposition. Copyright © 2017 Elsevier B.V. All rights reserved.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in the medical, industrial nondestructive testing, and security inspection fields. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem within the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as for cases with more than two energy channels.
Hou, Tingjun; Zhang, Wei; Case, David A; Wang, Wei
2008-02-29
Many important protein-protein interactions are mediated by peptide recognition modular domains, such as the Src homology 3 (SH3), SH2, PDZ, and WW domains. Characterizing the interaction interface of domain-peptide complexes and predicting binding specificity for modular domains are critical for deciphering protein-protein interaction networks. Here, we propose the use of an energetic decomposition analysis to characterize domain-peptide interactions and the molecular interaction energy components (MIECs), including van der Waals, electrostatic, and desolvation energy between residue pairs on the binding interface. We show a proof-of-concept study on the amphiphysin-1 SH3 domain interacting with its peptide ligands. The structures of the human amphiphysin-1 SH3 domain complexed with 884 peptides were first modeled using virtual mutagenesis and optimized by molecular mechanics (MM) minimization. Next, the MIECs between domain and peptide residues were computed using the MM/generalized Born decomposition analysis. We conducted two types of statistical analyses on the MIECs to demonstrate their usefulness for predicting binding affinities of peptides and for classifying peptides into binder and non-binder categories. First, combining partial least squares analysis and genetic algorithm, we fitted linear regression models between the MIECs and the peptide binding affinities on the training data set. These models were then used to predict binding affinities for peptides in the test data set; the predicted values have a correlation coefficient of 0.81 and an unsigned mean error of 0.39 compared with the experimentally measured ones. The partial least squares-genetic algorithm analysis on the MIECs revealed the critical interactions for the binding specificity of the amphiphysin-1 SH3 domain. Next, a support vector machine (SVM) was employed to build classification models based on the MIECs of peptides in the training set. A rigorous training-validation procedure was used to assess the performances of different kernel functions in SVM and different combinations of the MIECs. The best SVM classifier gave satisfactory predictions for the test set, indicated by average prediction accuracy rates of 78% and 91% for the binding and non-binding peptides, respectively. We also showed that the performance of our approach on both binding affinity prediction and binder/non-binder classification was superior to the performances of the conventional MM/Poisson-Boltzmann solvent-accessible surface area and MM/generalized Born solvent-accessible surface area calculations. Our study demonstrates that the analysis of the MIECs between peptides and the SH3 domain can successfully characterize the binding interface, and it provides a framework to derive integrated prediction models for different domain-peptide systems.
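The regression step on the MIEC feature matrix can be sketched with scikit-learn's PLS implementation (omitting the genetic-algorithm feature selection the authors combine with it); the data below are synthetic stand-ins for the 884-complex MIEC matrix:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# synthetic MIEC matrix: one row per domain-peptide complex, one column per
# residue-pair energy term (van der Waals, electrostatic, desolvation)
rng = np.random.default_rng(3)
X = rng.standard_normal((884, 120))
y = X[:, :10].sum(axis=1) + 0.5 * rng.standard_normal(884)  # mock affinities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()
r = np.corrcoef(y_te, y_pred)[0, 1]   # the paper reports r = 0.81 on its test set
```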
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based, and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree, and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation, and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
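One simple way to realize the complexity-balanced allocation described here is greedy longest-processing-time scheduling: sort polygons by complexity and repeatedly give the next one to the least-loaded process. This is a generic heuristic sketch, not necessarily the exact DMPC assignment rule:

```python
import heapq

def balanced_partition(complexities, n_procs):
    """Assign polygons to processes so per-process complexity is balanced.

    complexities : per-polygon scores, e.g. derived from the boundary count
    and rasterized pixel count of the minimum bounding rectangle.
    Returns one list of polygon indices per process.
    """
    order = sorted(range(len(complexities)),
                   key=lambda i: complexities[i], reverse=True)
    heap = [(0.0, p) for p in range(n_procs)]    # (current load, process id)
    heapq.heapify(heap)
    parts = [[] for _ in range(n_procs)]
    for i in order:
        load, p = heapq.heappop(heap)            # least-loaded process
        parts[p].append(i)
        heapq.heappush(heap, (load + complexities[i], p))
    return parts
```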
How Does the Sparse Memory “Engram” Neurons Encode the Memory of a Spatial–Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in the human brain is not a fixed 2-D picture but a highly dynamic movie series, integrating information in both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of the hippocampus and layer 2/3 of the neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering that a particular memory normally represents a natural event, which consists of information in both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetic-stimulus-induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely that the spatial activation pattern, but not the temporal activation pattern, of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert the activities in the spatial domain into the temporal domain to reconstitute memory of a natural event? By reviewing the literature, here we present how the memory engram (trace) neurons are selected and consolidated in the brain. Then, we discuss the main challenges in the memory trace theory. In the end, we provide a plausible model of the memory trace cell network, underlying the conversion of neural activities between the spatial domain and the temporal domain. We also discuss how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns. PMID:27601979
Selective growth of Pb islands on graphene/SiC buffer layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X. T.; Miao, Y. P.; Ma, D. Y.
2015-02-14
Graphene is fabricated by thermal decomposition of silicon carbide (SiC), and Pb islands are deposited by Pb flux in a molecular beam epitaxy chamber. It is found that graphene domains and the SiC buffer layer coexist. Selective growth of Pb islands on the SiC buffer layer rather than on graphene domains is observed. It can be ascribed to the higher adsorption energy of Pb atoms on the 6√(3) reconstruction of SiC. However, once Pb islands nucleate on graphene domains, they will grow very large owing to the lower diffusion barrier of Pb atoms on graphene. The results are consistent with first-principles calculations. Since Pb atoms on graphene are nearly free-standing, Pb islands grow in even-number mode.
Nested ocean models: Work in progress
NASA Technical Reports Server (NTRS)
Perkins, A. Louise
1991-01-01
The ongoing work of combining three existing software programs into a nested grid oceanography model is detailed. The HYPER domain decomposition program, the SPEM ocean modeling program, and a quasi-geostrophic model written in England are being combined into a general ocean modeling facility. This facility will be used to test the viability and the capability of two-way nested grids in the North Atlantic.
Box schemes and their implementation on the iPSC/860
NASA Technical Reports Server (NTRS)
Chattot, J. J.; Merriam, M. L.
1991-01-01
Research on algorithms for efficiently solving fluid flow problems on massively parallel computers is continued in the present paper. Attention is given to the implementation of a box scheme on the iPSC/860, a massively parallel computer with a peak speed of 10 Gflops and a memory of 128 Mwords. A domain decomposition approach to parallelism is used.
Enhanced visualization of abnormalities in digital-mammographic images
NASA Astrophysics Data System (ADS)
Young, Susan S.; Moore, William E.
2002-05-01
This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that gives it a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone-scaled image (I'). The LUT is designed to optimally map the code value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors to enhance the edge information corresponding to the parenchyma pattern. The output image is formed through a reconstruction filter bank from I' and the enhanced Id.
Spors, Sascha; Buchner, Herbert; Rabenstein, Rudolf; Herbordt, Wolfgang
2007-07-01
The acoustic theory for multichannel sound reproduction systems usually assumes free-field conditions for the listening environment. However, their performance in real-world listening environments may be impaired by reflections at the walls. This impairment can be reduced by suitable compensation measures. For systems with many channels, active compensation is an option, since the compensating waves can be created by the reproduction loudspeakers. Due to the time-varying nature of room acoustics, the compensation signals have to be determined by an adaptive system. The problems associated with the successful operation of multichannel adaptive systems are addressed in this contribution. First, a method for decoupling the adaptation problem is introduced. It is based on a generalized singular value decomposition and is called eigenspace adaptive filtering. Unfortunately, it cannot be implemented in its pure form, since the continuous adaptation of the generalized singular value decomposition matrices to the variable room acoustics is numerically very demanding. However, a combination of this mathematical technique with the physical description of wave propagation yields a realizable multichannel adaptation method with good decoupling properties. It is called wave domain adaptive filtering and is discussed here in the context of wave field synthesis.
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Lesoinne, Michel
1993-01-01
Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local-memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and of the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm is discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence, and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
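As one concrete member of this family of partitioning heuristics, recursive coordinate bisection repeatedly splits the mesh into equal halves along its longest coordinate extent; a minimal numpy sketch (illustrative, not claimed to be the paper's algorithm):

```python
import numpy as np

def rcb(points, n_parts):
    """Recursive coordinate bisection of mesh point coordinates.

    points : (n, d) node or element-centroid coordinates
    n_parts : number of partitions, assumed a power of two
    Returns an integer partition label for every point.
    """
    labels = np.zeros(len(points), dtype=int)

    def split(idx, parts, base):
        if parts == 1:
            labels[idx] = base
            return
        coords = points[idx]
        axis = np.argmax(coords.max(axis=0) - coords.min(axis=0))
        order = idx[np.argsort(coords[:, axis])]   # sort along longest extent
        half = len(order) // 2
        split(order[:half], parts // 2, base)
        split(order[half:], parts // 2, base + parts // 2)

    split(np.arange(len(points)), n_parts, 0)
    return labels
```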
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers based on RISC processor architectures. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
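For intuition, here is a minimal scalar sketch of the Aitken delta-squared acceleration on which the Aitken-Schwarz procedure rests: when the iteration error decays exactly linearly, three iterates suffice to recover the fixed point, which is the sense in which the accelerated Schwarz method can act as a direct solver. The toy fixed-point map below is an assumption for illustration only.

```python
# Aitken delta-squared extrapolation: exact for an exactly linear error decay.
def aitken(x0, x1, x2):
    denom = x2 - 2.0 * x1 + x0
    return x2 - (x2 - x1) ** 2 / denom if denom != 0 else x2

# Toy fixed-point iteration x <- 0.5*x + 1 with fixed point x* = 2.
g = lambda x: 0.5 * x + 1.0
x0 = 0.0
x1, x2 = g(x0), g(g(x0))
print(aitken(x0, x1, x2))  # prints 2.0 exactly: linear convergence is "solved"
```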
NASA Astrophysics Data System (ADS)
Mieloszyk, M.; Opoka, S.; Ostachowicz, W.
2015-07-01
This paper presents an application of Fibre Bragg Grating (FBG) sensors for Structural Health Monitoring (SHM) of an offshore wind energy support structure model. The analysed structure is a tripod equipped with 16 FBG sensors. From the wide variety of Operational Modal Analysis (OMA) methods, the Frequency Domain Decomposition (FDD) technique is used in this paper under the assumption that the input loading is similar to a white noise excitation. The FDD method can be applied using different sets of sensors, i.e. one containing all FBG sensors and another containing only the sensors localised on a particular tripod leg. The cases considered during the investigation were damaged and undamaged scenarios under different support conditions. The damage was simulated as a dismantled flange on an upper brace in one of the tripod legs. First, the model was fixed to an anti-shaker table and investigated in the air under impulse excitations. Next, the tripod was submerged in a water basin in order to check the quality of the measurement set-up under different environmental conditions; in this case the model was excited by regular waves.
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
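A minimal sketch of one forward block Gauss-Seidel sweep on a block tridiagonal system of the kind produced by the multiple-shooting optimality conditions; the block sizes and data are illustrative placeholders, and in practice the sweep would serve as a preconditioner inside a Krylov iteration rather than as a stand-alone solver.

```python
# One forward block GS sweep: D[i] diagonal blocks, L[i] sub-, U[i] super-diagonal.
import numpy as np

def block_gs_sweep(D, L, U, b, x):
    n = len(D)
    for i in range(n):
        r = b[i].copy()
        if i > 0:
            r -= L[i - 1] @ x[i - 1]     # uses already-updated block (forward GS)
        if i < n - 1:
            r -= U[i] @ x[i + 1]
        x[i] = np.linalg.solve(D[i], r)  # diagonal blocks are invertible
    return x

rng = np.random.default_rng(0)
n, m = 4, 3
D = [np.eye(m) * 4 + rng.standard_normal((m, m)) * 0.1 for _ in range(n)]
L = [rng.standard_normal((m, m)) * 0.1 for _ in range(n - 1)]
U = [rng.standard_normal((m, m)) * 0.1 for _ in range(n - 1)]
b = [rng.standard_normal(m) for _ in range(n)]
x = [np.zeros(m) for _ in range(n)]
for _ in range(50):
    x = block_gs_sweep(D, L, U, b, x)
```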
Method and apparatus for wavefront sensing
Bahk, Seung-Whan
2016-08-23
A method of measuring characteristics of a wavefront of an incident beam includes obtaining an interferogram associated with the incident beam passing through a transmission mask and Fourier transforming the interferogram to provide a frequency domain interferogram. The method also includes selecting a subset of harmonics from the frequency domain interferogram, individually inverse Fourier transforming each of the subset of harmonics to provide a set of spatial domain harmonics, and extracting a phase profile from each of the set of spatial domain harmonics. The method further includes removing phase discontinuities in the phase profile, rotating the phase profile, and reconstructing a phase front of the wavefront of the incident beam.
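The steps of the claim can be sketched in one dimension as follows: Fourier transform the interferogram, select one harmonic (sideband), inverse transform it, and unwrap the phase of the resulting spatial-domain harmonic. The carrier frequency, sideband window, and test phase below are assumptions for illustration.

```python
# 1D sketch of harmonic extraction and phase recovery from an interferogram.
import numpy as np

x = np.linspace(0, 1, 1024, endpoint=False)
phase = np.pi * x ** 2                             # unknown wavefront phase
fringe = 1 + np.cos(2 * np.pi * 50 * x + phase)    # interferogram with carrier

F = np.fft.fft(fringe)                             # frequency-domain interferogram
mask = np.zeros_like(F)
mask[40:60] = F[40:60]                             # select one harmonic (sideband)
analytic = np.fft.ifft(mask)                       # spatial-domain harmonic
profile = np.unwrap(np.angle(analytic))            # phase, discontinuities removed
recovered = profile - 2 * np.pi * 50 * x           # subtract the carrier term
```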
Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia
2017-04-01
It is a key step to remove the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns. In the spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool. In the transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe patterns denoising, which is a combination of second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into the ESPI fringe patterns denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe patterns denoising in transform domain and spatial domain, respectively. The experimental results have demonstrated the good performance of the proposed SOOPDE-Shearlet.
NASA Astrophysics Data System (ADS)
Lin, Yu-Cheng; Lin, Yu-Hsuan; Lo, Men-Tzung; Peng, Chung-Kang; Huang, Norden E.; Yang, Cheryl C. H.; Kuo, Terry B. J.
2016-02-01
The complex fluctuations in heart rate variability (HRV) reflect cardiac autonomic modulation and are an indicator of congestive heart failure (CHF). This paper proposes a novel nonlinear approach to HRV investigation, the multi dynamic trend analysis (MDTA) method, based on the empirical mode decomposition algorithm of the Hilbert-Huang transform combined with a variable-sized sliding-window method. Electrocardiographic signal data obtained from the PhysioNet database were used. These data were from subjects with CHF (mean age = 59.4 ± 8.4), an age-matched elderly healthy control group (59.3 ± 10.6), and a healthy young group (30.3 ± 4.8); the HRVs of these subjects were processed using the MDTA method, time domain analysis, and frequency domain analysis. Among all HRV parameters, the MDTA absolute value slope (MDTS) and MDTA deviation (MDTD) exhibited the greatest area under the curve (AUC) of the receiver operating characteristics in distinguishing between the CHF group and the healthy controls (AUC = 1.000) and between the healthy elderly subject group and the young subject group (AUC = 0.834 ± 0.067 for MDTS; 0.837 ± 0.066 for MDTD). The CHF subjects presented with lower MDTA indices than those of the healthy elderly subject group. Furthermore, the healthy elderly subjects exhibited lower MDTA indices than those of the young controls. The MDTA method can adaptively and automatically identify the intrinsic fluctuation on variable temporal and spatial scales when investigating complex fluctuations in the cardiac autonomic regulation effects of aging and CHF.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
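As a hedged sketch of the tight (implicit) coupling strategy, the following uses SciPy's Jacobian-free Newton-Krylov solver with a GMRES inner iteration on a toy nonlinear diffusion residual spanning two subdomains; the stochastic boundary forcing and the path-wise interface flux matching of the paper are omitted, and the grid and coefficient law are assumptions.

```python
# JfNK with GMRES on a toy nonlinear diffusion problem spanning two subdomains.
import numpy as np
from scipy.optimize import newton_krylov

n = 20                     # unknowns per subdomain
h = 1.0 / (2 * n + 1)

def residual(u):
    # div(k(u) grad u) + 1 = 0 with k(u) = u + 1; Newton-Krylov sees the
    # coupled two-subdomain system as a single monolithic residual.
    r = np.empty_like(u)
    up = np.concatenate(([0.0], u, [0.0]))    # Dirichlet boundaries
    for i in range(1, len(up) - 1):
        k_m = 0.5 * (up[i] + up[i - 1]) + 1.0
        k_p = 0.5 * (up[i] + up[i + 1]) + 1.0
        r[i - 1] = (k_p * (up[i + 1] - up[i])
                    - k_m * (up[i] - up[i - 1])) / h**2 + 1.0
    return r

u = newton_krylov(residual, np.zeros(2 * n), method='gmres', f_tol=1e-8)
```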
Airbreathing Propulsion System Analysis Using Multithreaded Parallel Processing
NASA Technical Reports Server (NTRS)
Schunk, Richard Gregory; Chung, T. J.; Rodriguez, Pete (Technical Monitor)
2000-01-01
In this paper, parallel processing is used to analyze the mixing and combustion behavior of hypersonic flow. Preliminary work for a sonic transverse hydrogen jet injected from a slot into a Mach 4 airstream in a two-dimensional duct combustor has been completed [Moon and Chung, 1996]. Our aim is to extend this work to a three-dimensional domain using multithreaded domain decomposition parallel processing based on the flowfield-dependent variation theory. Numerical simulations of chemically reacting flows are difficult because of the strong interactions between the turbulent hydrodynamic and chemical processes. The algorithm must provide an accurate representation of the flowfield, since unphysical flowfield calculations will lead to the faulty loss or creation of species mass fraction, or even premature ignition, which in turn alters the flowfield information. Another difficulty arises from the disparity in time scales between the flowfield and chemical reactions, which may require the use of finite rate chemistry. The situation is more complex when there is a disparity in the length scales involved in turbulence. In order to cope with these complicated physical phenomena, it is our plan to utilize the flowfield-dependent variation theory mentioned above, facilitated by large eddy simulation. Undoubtedly, the proposed computation requires the most sophisticated computational strategies. Multithreaded domain decomposition parallel processing will be necessary in order to reduce both computational time and storage. Without special treatments involved in computer engineering, our attempt to analyze airbreathing combustion appears to be difficult, if not impossible.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kutler, Paul (Technical Monitor)
1998-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.
Theory and experimental evidence of phonon domains and their roles in pre-martensitic phenomena
NASA Astrophysics Data System (ADS)
Jin, Yongmei M.; Wang, Yu U.; Ren, Yang
2015-12-01
Pre-martensitic phenomena, also called martensite precursor effects, have been known for decades while yet remain outstanding issues. This paper addresses pre-martensitic phenomena from new theoretical and experimental perspectives. A statistical mechanics-based Grüneisen-type phonon theory is developed. On the basis of deformation-dependent incompletely softened low-energy phonons, the theory predicts a lattice instability and pre-martensitic transition into elastic-phonon domains via 'phonon spinodal decomposition.' The phase transition lifts phonon degeneracy in cubic crystal and has a nature of phonon pseudo-Jahn-Teller lattice instability. The theory and notion of phonon domains consistently explain the ubiquitous pre-martensitic anomalies as natural consequences of incomplete phonon softening. The phonon domains are characterised by broken dynamic symmetry of lattice vibrations and deform through internal phonon relaxation in response to stress (a particular case of Le Chatelier's principle), leading to previously unexplored new domain phenomenon. Experimental evidence of phonon domains is obtained by in situ three-dimensional phonon diffuse scattering and Bragg reflection using high-energy synchrotron X-ray single-crystal diffraction, which observes exotic domain phenomenon fundamentally different from usual ferroelastic domain switching phenomenon. In light of the theory and experimental evidence of phonon domains and their roles in pre-martensitic phenomena, currently existing alternative opinions on martensitic precursor phenomena are revisited.
NASA Astrophysics Data System (ADS)
Cork, Christopher; Miloslavsky, Alexander; Friedberg, Paul; Luk-Pat, Gerry
2013-04-01
Lithographers had hoped that single patterning would be enabled at the 20nm node by way of EUV lithography. However, due to delays in EUV readiness, double patterning with 193i lithography is currently relied upon for volume production of the 20nm node's metal 1 layer. At the 14nm node, and likely at the 10nm node, LE-LE-LE triple patterning technology (TPT) is one of the favored options [1,2] for patterning the local interconnect and Metal 1 layers. While previous research has focused on TPT for the contact mask, metal layers offer new challenges and opportunities, in particular the ability to decompose design polygons across more than one mask. The extra flexibility offered by the third mask and the ability to leverage polygon stitching both serve to improve compliance. However, ensuring TPT compliance - the task of finding a 3-color mask decomposition for a design - is still difficult. Moreover, scalability concerns multiply the difficulty of triple patterning decomposition, which is an NP-complete problem. Indeed, previous work shows that network sizes above a few thousand nodes or polygons start to take significantly longer times to compute [3], making full-chip decomposition for arbitrary layouts impractical. In practice, Metal 1 layouts can be considered as two separate problem domains, namely decomposition of standard cells and decomposition of IP blocks. Standard cells typically include only a few tens of polygons and should be amenable to fast decomposition. Successive design iterations should resolve compliance issues and improve packing density. Density improvements are multiplied repeatedly as standard cells are placed multiple times. IP blocks, on the other hand, may involve very large networks. This paper evaluates multiple approaches to triple patterning decomposition for the Metal 1 layer. The benefits of polygon stitching, in particular the ability to resolve commonly encountered non-compliant layout configurations and improve packing density, are weighed against the increased difficulty of finding an optimized, legal decomposition and coping with the increased scalability challenges.
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade the equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is widely used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed to use a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise PSD directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1% respectively, at the cost of 17.2% and 10.7% higher hardware complexity. We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm improved the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
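A minimal per-bin frequency-domain LMS update for a 2x2 MIMO equalizer is sketched below, assuming one FFT block of training data; the FFT size, fixed step size mu, and data are placeholders, and the PSD-dependent or noise-PSD-directed normalizations proposed in the paper would replace the fixed step size.

```python
# Per-frequency-bin LMS update for a 2x2 MIMO equalizer (one training block).
import numpy as np

nfft, M = 64, 2
rng = np.random.default_rng(1)
W = np.tile(np.eye(M, dtype=complex), (nfft, 1, 1))  # one MxM matrix per bin

def fdlms_block(W, rx_block, desired_block, mu=0.05):
    X = np.fft.fft(rx_block, axis=0)           # (nfft, M) received spectrum
    D = np.fft.fft(desired_block, axis=0)      # (nfft, M) training spectrum
    for k in range(nfft):
        y = W[k] @ X[k]
        e = D[k] - y
        W[k] += mu * np.outer(e, X[k].conj())  # gradient step per frequency bin
    return W

rx = rng.standard_normal((nfft, M)) + 1j * rng.standard_normal((nfft, M))
W = fdlms_block(W, rx, rx)                     # trivial training: desired = input
```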
Time Frequency Analysis and Spatial Filtering in the Evaluation of Beta ERS After Finger Movement
2001-10-25
Fig. 1: Scheme of the Wavelet Packet decomposition.
NASA Astrophysics Data System (ADS)
Smilowitz, L.; Henson, B. F.; Romero, J. J.; Asay, B. W.; Saunders, A.; Merrill, F. E.; Morris, C. L.; Kwiatkowski, K.; Grim, G.; Mariam, F.; Schwartz, C. L.; Hogan, G.; Nedrow, P.; Murray, M. M.; Thompson, T. N.; Espinoza, C.; Lewis, D.; Bainbridge, J.; McNeil, W.; Rightley, P.; Marr-Lyon, M.
2012-05-01
We report proton transmission images obtained during direct heating of a sample of PBX 9501 (a plastic bonded formulation of the explosive nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX)) prior to the ignition of a thermal explosion. We describe the application of proton radiography using the 800 MeV proton accelerator at Los Alamos National Laboratory to obtain transmission images in these thermal explosion experiments. We have obtained images at two spatial magnifications and viewing both the radial and the transverse axes of a solid cylindrical sample encased in aluminum. During heating we observe the slow evolution of proton transmission through the samples, with particular detail during material flow associated with the HMX β-δ phase transition. We also directly observe the loss of solid density to decomposition associated with elevated temperatures in the volume defining the ignition location in these experiments. We measure a diameter associated with this volume of 1-2 mm, in agreement with previous estimations of the diameter using spatially resolved fast thermocouples.
NASA Astrophysics Data System (ADS)
Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi
2018-03-01
This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. Aiming to eliminate end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for the efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information in the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.
Mellet, E; Jobard, G; Zago, L; Crivello, F; Petit, L; Joliot, M; Mazoyer, B; Tzourio-Mazoyer, N
2014-01-01
The relationship between manual laterality and cognitive skills remains highly controversial. Some studies have reported that strongly lateralised participants had higher cognitive performance in verbal and visuo-spatial domains compared to non-lateralised participants; however, others found the opposite. Moreover, some have suggested that familial sinistrality and sex might interact with individual laterality factors to alter cognitive skills. The present study addressed these issues in 237 right-handed and 199 left-handed individuals. Performance tests covered various aspects of verbal and spatial cognition. A principal component analysis yielded two verbal and one spatial factor scores. Participant laterality assessments included handedness, manual preference strength, asymmetry of motor performance, and familial sinistrality. Age, sex, education level, and brain volume were also considered. No effect of handedness was found, but the mean factor scores in verbal and spatial domains increased with right asymmetry in motor performance. Performance was reduced in participants with a familial history of left-handedness combined with a non-maximal preference strength in the dominant hand. These results elucidated some discrepancies among previous findings in laterality factors and cognitive skills. Laterality factors had small effects compared to the adverse effects of age for spatial cognition and verbal memory, the positive effects of education for all three domains, and the effect of sex for spatial cognition.
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise data of actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities found in the shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising. PMID:25551484
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
NASA Astrophysics Data System (ADS)
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is adopted to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The spectra of the radiation noise of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportion of friction noise and impact noise is evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and a reference for the impact of different components on the radiation noise of the bearing under different rotation speeds.
GPR random noise reduction using BPD and EMD
NASA Astrophysics Data System (ADS)
Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz
2018-04-01
Ground-penetrating radar (GPR) exploration is a high-frequency technology that maps near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation to adjacent traces is nearly zero. This characteristic of random noise, along with the high accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising (BPD) combined with empirical mode decomposition. Our results showed that empirical mode decomposition in combination with BPD provides satisfactory outputs, owing to the sifting process, compared to the time-domain implementation of the BPD method on both synthetic and real examples. Our results demonstrate that because of the high computational costs, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
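The BPD building block can be sketched as follows: with an identity dictionary, the Lagrangian form min_x 0.5*||y - x||^2 + lam*||x||_1 is solved exactly by soft thresholding. In the paper this kind of shrinkage is applied to the intrinsic mode functions obtained from EMD rather than directly to the time-domain trace, and the threshold value below is an illustrative assumption.

```python
# Soft thresholding: the closed-form BPD solution for an identity dictionary.
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(t.size)
denoised = soft_threshold(noisy, lam=0.2)   # lam would be tuned per IMF
```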
Spontaneous Polariton Currents in Periodic Lateral Chains.
Nalitov, A V; Liew, T C H; Kavokin, A V; Altshuler, B L; Rubo, Y G
2017-08-11
We predict spontaneous generation of superfluid polariton currents in planar microcavities with lateral periodic modulation of both the potential and decay rate. A spontaneous breaking of spatial inversion symmetry of a polariton condensate emerges at a critical pumping, and the current direction is stochastically chosen. We analyze the stability of the current with respect to the fluctuations of the condensate. A peculiar spatial current domain structure emerges, where the current direction is switched at the domain walls, and the characteristic domain size and lifetime scale with the pumping power.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Formation of metastable phases by spinodal decomposition
Alert, Ricard; Tierno, Pietro; Casademunt, Jaume
2016-01-01
Metastable phases may be spontaneously formed from other metastable phases through nucleation. Here we demonstrate the spontaneous formation of a metastable phase from an unstable equilibrium by spinodal decomposition, which leads to a transient coexistence of stable and metastable phases. This phenomenon is generic within the recently introduced scenario of the landscape-inversion phase transitions, which we experimentally realize as a structural transition in a colloidal crystal. This transition exhibits a rich repertoire of new phase-ordering phenomena, including the coexistence of two equilibrium phases connected by two physically different interfaces. In addition, this scenario enables the control of sizes and lifetimes of metastable domains. Our findings open a new setting that broadens the fundamental understanding of phase-ordering kinetics, and yield new prospects of applications in materials science. PMID:27713406
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
The popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantifying the system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with an average encoding and decoding time per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods such as fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain-computer interfacing systems.
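A minimal sketch of the integer/fraction split at the core of SPC is given below, under assumed scaling choices (one decimal digit for the integer stream, two pseudo-integer digits for the fractional stream); the paper's actual spatial-domain encoder and subsequent coding stages are omitted.

```python
# Split normalized MCEEG into an integer stream and a pseudo-integer
# fractional stream; merging the two recovers the signal near-losslessly.
import numpy as np

def spc_split(mceeg, frac_digits=2):
    norm = mceeg / np.max(np.abs(mceeg))    # normalize across channels
    integer = np.trunc(norm * 10)           # integer-part stream
    frac = np.round((norm * 10 - integer) * 10**frac_digits).astype(int)
    return integer.astype(int), frac        # two parallel streams

def spc_merge(integer, frac, frac_digits=2):
    return (integer + frac / 10.0**frac_digits) / 10.0

x = np.random.default_rng(3).standard_normal((4, 256))  # 4-channel toy MCEEG
i_part, f_part = spc_split(x)
x_rec = spc_merge(i_part, f_part) * np.max(np.abs(x))   # near-lossless recovery
```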
Electrophysiological Evidence for Domain-General Processes in Task-Switching
Capizzi, Mariagrazia; Ambrosini, Ettore; Arbula, Sandra; Mazzonetto, Ilaria; Vallesi, Antonino
2016-01-01
The ability to flexibly switch between tasks is a hallmark of cognitive control. Despite previous studies that have investigated whether different task-switching types would be mediated by distinct or overlapping neural mechanisms, no definitive consensus has been reached on this question yet. Here, we aimed at directly addressing this issue by recording the event-related potentials (ERPs) elicited by two types of task-switching occurring in the context of spatial and verbal cognitive domains. Source analysis was also applied to the ERP data in order to track the spatial dynamics of brain activity underlying task-switching abilities. In separate blocks of trials, participants had to perform either spatial or verbal switching tasks both of which employed the same type of stimuli. The ERP analysis, which was carried out through a channel- and time-uninformed mass univariate approach, showed no significant differences between the spatial and verbal domains in the modulation of switch and repeat trials. Specifically, relative to repeat trials, switch trials in both domains were associated with a first larger positivity developing over left parieto-occipital electrodes and with a subsequent larger negativity distributed over mid-left fronto-central sites. The source analysis reconstruction for the two ERP components complemented these findings by highlighting the involvement of left-lateralized prefrontal areas in task-switching. Overall, our results join and extend recent research confirming the existence of left-lateralized domain-general task-switching processes. PMID:27047366
Analysis and visualization of single-trial event-related potentials
NASA Technical Reports Server (NTRS)
Jung, T. P.; Makeig, S.; Westerfield, M.; Townsend, J.; Courchesne, E.; Sejnowski, T. J.
2001-01-01
In this study, a linear decomposition technique, independent component analysis (ICA), is applied to single-trial multichannel EEG data from event-related potential (ERP) experiments. Spatial filters derived by ICA blindly separate the input data into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain sources. Both the data and their decomposition are displayed using a new visualization tool, the "ERP image," that can clearly characterize single-trial variations in the amplitudes and latencies of evoked responses, particularly when sorted by a relevant behavioral or physiological variable. These tools were used to analyze data from a visual selective attention experiment on 28 control subjects plus 22 neurological patients whose EEG records were heavily contaminated with blink and other eye-movement artifacts. Results show that ICA can separate artifactual, stimulus-locked, response-locked, and non-event-related background EEG activities into separate components, a taxonomy not obtained from conventional signal averaging approaches. This method allows: (1) removal of pervasive artifacts of all types from single-trial EEG records, (2) identification and segregation of stimulus- and response-locked EEG components, (3) examination of differences in single-trial responses, and (4) separation of temporally distinct but spatially overlapping EEG oscillatory activities with distinct relationships to task events. The proposed methods also allow the interaction between ERPs and the ongoing EEG to be investigated directly. We studied the between-subject component stability of ICA decomposition of single-trial EEG epochs by clustering components with similar scalp maps and activation power spectra. Components accounting for blinks, eye movements, temporal muscle activity, event-related potentials, and event-modulated alpha activities were largely replicated across subjects. Applying ICA and ERP image visualization to the analysis of sets of single trials from event-related EEG (or MEG) experiments can increase the information available from ERP (or ERF) data. Copyright 2001 Wiley-Liss, Inc.
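A hedged sketch of the decomposition step, substituting scikit-learn's FastICA for the ICA variant used in the study: the unmixing output gives temporally independent activations, the columns of the mixing matrix give the corresponding scalp maps, and artifact removal amounts to reconstructing the data from the retained components only. Which component index corresponds to the artifact is data-dependent; here it is assumed known.

```python
# ICA separation of a toy 8-channel "EEG" into rhythm and blink components.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
sources = np.c_[np.sin(np.linspace(0, 8, 1000)),          # "brain" rhythm
                (rng.random(1000) > 0.99).astype(float)]  # "blink" artifact
eeg = sources @ rng.standard_normal((2, 8))               # 8-channel mixtures

ica = FastICA(n_components=2, random_state=0)
activations = ica.fit_transform(eeg)   # (n_samples, n_components) time courses
scalp_maps = ica.mixing_               # (n_channels, n_components) topographies

# Reconstruct keeping only component 0, assumed here to be the brain source.
cleaned = activations[:, :1] @ scalp_maps[:, :1].T
```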
Time-Domain Filtering for Spatial Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. David
1997-01-01
An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
NASA Astrophysics Data System (ADS)
Davarpanah, A.; Babaie, H. A.
2012-12-01
The interaction of the thermally induced stress field of the Yellowstone hotspot (YHS) with existing Basin and Range (BR) fault blocks, over the past 17 m.y., has produced a new, spatially and temporally variable system of normal faults around the Snake River Plain (SRP) in Idaho and the Wyoming-Montana area. Data about the traces of these new cross faults (CF) and older BR normal faults were acquired from a combination of satellite imagery, DEMs, and USGS geological maps and databases at scales of 1:24,000, 1:100,000, 1:250,000, 1:1,000,000, and 1:2,500,000, and classified based on their azimuth in ArcGIS 10. The box-counting fractal dimension (Db) of the BR fault traces, determined by applying the Benoit software, and the anisotropy intensity (ellipticity) of the fractal dimensions, measured with the modified Cantor dust method by applying the AMOCADO software, were measured in two large spatial domains (I and II). The Db and anisotropy of the cross faults were studied in five temporal domains (T1-T5) classified based on the geologic age of successive eruptive centers (12 Ma to recent) of the YHS along the eastern SRP. The fractal anisotropy of the CF system in each temporal domain was also spatially determined in the southern part (domain S1), central part (domain S2), and northern part (domain S3) of the SRP. Line (fault trace) density maps for the BR and CF polylines reveal a higher linear density (trace length per unit area) for the BR traces in spatial domain I, and a higher linear density of the CF traces around the present Yellowstone National Park (S1T5), where most of the seismically active faults are located. Our spatio-temporal analysis reveals that the fractal dimension of the BR system in domain I (Db=1.423) is greater than that in domain II (Db=1.307). It also shows that the anisotropy of the fractal dimension in domain I is less eccentric (axial ratio: 1.242) than that in domain II (1.355), probably reflecting the greater variation in the trend of the BR system in domain I. The CF system in the S1T5 domain has the highest fractal dimension (Db=1.37) and the lowest anisotropy eccentricity (1.23) among the five temporal domains. These values positively correlate with the observed maxima on the fault trace density maps. The major axis of the anisotropy ellipses is consistently perpendicular to the average trend of the normal fault system in each domain, and therefore approximates the orientation of extension for normal faulting in each domain. This gives a NE-SW and NW-SE extension direction for the BR system in domains I and II, respectively. The observed NE-SW orientation of the major axes of the anisotropy ellipses in the youngest T4 and T5 temporal domains, oriented perpendicular to the mean trend of the normal faults in these domains, suggests extension along the NE-SW direction for cross faulting in these areas. The spatial trajectories (form lines) of the minor axes of the anisotropy ellipses, and the mean trend of fault traces in the T4 and T5 temporal domains, define a large parabolic pattern about the axis of the eastern SRP, with its apex at the Yellowstone plateau.
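For reference, a minimal box-counting estimate of the fractal dimension Db of a binary fault-trace raster, in the spirit of (though much simpler than) the Benoit computation used in the study; the raster, box sizes, and the straight-line test trace are illustrative assumptions.

```python
# Box-counting fractal dimension of a binary trace raster.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, mask.shape[0], s):
            for j in range(0, mask.shape[1], s):
                if mask[i:i + s, j:j + s].any():  # box contains a trace pixel
                    n += 1
        counts.append(n)
    # Db is the negative slope of log(count) against log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

mask = np.zeros((128, 128), bool)
mask[np.arange(128), np.arange(128)] = True       # a straight line: Db ~ 1
print(box_counting_dimension(mask))
```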
Two-level Schwarz methods for nonconforming finite elements and discontinuous coefficients
NASA Technical Reports Server (NTRS)
Sarkis, Marcus
1993-01-01
Two-level domain decomposition methods are developed for a simple nonconforming approximation of second order elliptic problems. A bound is established for the condition number of these iterative methods, which grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.
Domain decomposition preconditioners for the spectral collocation method
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio; Sacchilandriani, Giovanni
1988-01-01
Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.
Dynamics of phase separation of binary fluids
NASA Technical Reports Server (NTRS)
Ma, Wen-Jong; Maritan, Amos; Banavar, Jayanth R.; Koplik, Joel
1992-01-01
The results of molecular-dynamics studies of surface-tension-dominated spinodal decomposition of initially well-mixed binary fluids in the absence and presence of gravity are presented. The growth exponent for the domain size and the decay exponent of the potential energy of interaction between the two species with time are found to be 0.6 +/- 0.1, inconsistent with scaling arguments based on dimensional analysis.
Nanosensitive optical coherence tomography for the study of changes in static and dynamic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, S; Subhash, H; Leahy, M
2014-07-31
We briefly discuss the principle of image formation in Fourier domain optical coherence tomography (OCT). The theory of a new approach to improve dramatically the sensitivity of conventional OCT is described. The approach is based on spectral encoding of spatial frequency. Information about the spatial structure is directly translated from the Fourier domain to the image domain as different wavelengths, without compromising the accuracy. Axial spatial period profiles of the structure are reconstructed for any volume of interest within the 3D OCT image with nanoscale sensitivity. An example of application of the nanoscale OCT to probe the internal structure of medico-biological objects, the anterior chamber of an ex vivo rat eye, is demonstrated. (laser biophotonics)
Validation of Distributed Soil Moisture: Airborne Polarimetric SAR vs. Ground-based Sensor Networks
NASA Astrophysics Data System (ADS)
Jagdhuber, T.; Kohling, M.; Hajnsek, I.; Montzka, C.; Papathanassiou, K. P.
2012-04-01
The knowledge of spatially distributed soil moisture is highly desirable for an enhanced hydrological modeling in terms of flood prevention and for yield optimization in combination with precision farming. Especially in mid-latitudes, the growing agricultural vegetation results in an increasing soil coverage along the crop cycle. For a remote sensing approach, this vegetation influence has to be separated from the soil contribution within the resolution cell to extract the actual soil moisture. Therefore a hybrid decomposition was developed for estimation of soil moisture under vegetation cover using fully polarimetric SAR data. The novel polarimetric decomposition combines a model-based decomposition, separating the volume component from the ground components, with an eigen-based decomposition of the two ground components into a surface and a dihedral scattering contribution. Hence, this hybrid decomposition, which is based on [1,2], establishes an innovative way to retrieve soil moisture under vegetation. The developed inversion algorithm for soil moisture under vegetation cover is applied on fully polarimetric data of the TERENO campaign, conducted in May and June 2011 for the Rur catchment within the Eifel/Lower Rhine Valley Observatory. The fully polarimetric SAR data were acquired in high spatial resolution (range: 1.92m, azimuth: 0.6m) by DLR's novel F-SAR sensor at L-band. The inverted soil moisture product from the airborne SAR data is validated with corresponding distributed ground measurements for a quality assessment of the developed algorithm. The in situ measurements were obtained on the one hand by mobile FDR probes from agricultural fields near the towns of Merzenhausen and Selhausen incorporating different crop types and on the other hand by distributed wireless sensor networks (SoilNet clusters) from a grassland test site (near the town of Rollesbroich) and from a forest stand (within the Wüstebach sub-catchment). Each SoilNet cluster incorporates around 150 wireless measuring devices on a grid of approximately 30ha for distributed soil moisture sensing. Finally, the comparison of both distributed soil moisture products results in a discussion on potentials and limitations for obtaining soil moisture under vegetation cover with high resolution fully polarimetric SAR. [1] S.R. Cloude, Polarisation: applications in remote sensing. Oxford, Oxford University Press, 2010. [2] Jagdhuber, T., Hajnsek, I., Papathanassiou, K.P. and Bronstert, A.: A Hybrid Decomposition for Soil Moisture Estimation under Vegetation Cover Using Polarimetric SAR. Proc. of the 5th International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA-ESRIN, Frascati, Italy, January 24-28, 2011, p.1-6.
NASA Astrophysics Data System (ADS)
Pisana, Francesco; Henzler, Thomas; Schönberg, Stefan; Klotz, Ernst; Schmidt, Bernhard; Kachelrieß, Marc
2017-03-01
Dynamic CT perfusion acquisitions are intrinsically high-dose examinations due to repeated scanning. To keep radiation dose under control, relatively noisy images are acquired. Noise is then further enhanced during the extraction of functional parameters from the post-processing of the time attenuation curves (TACs) of the voxels, and normally some smoothing filter must be employed to better visualize any perfusion abnormality, at the price of spatial resolution. In this study we propose a new method to detect perfusion abnormalities that keeps both high spatial resolution and high CNR. To do this, we first perform the singular value decomposition (SVD) of the original noisy spatio-temporal data matrix to extract basis functions of the TACs. We then iteratively cluster the voxels based on a smoothed version of the three most significant singular vectors. Finally, we create high-spatial-resolution 3D volumes in which each voxel is assigned its distance from the centroid of each cluster, showing how functionally similar each voxel is compared to the others. The method was tested on three noisy clinical datasets: one brain perfusion case with an occlusion in the left internal carotid, one healthy brain perfusion case, and one liver case with an enhancing lesion. Our method successfully detected all perfusion abnormalities with higher spatial precision when compared to the functional maps obtained with a commercially available software package. We conclude that this method might be employed to obtain a rapid qualitative indication of functional abnormalities in low-dose dynamic CT perfusion datasets. The method appears to be very robust with respect to both spatial and temporal noise and does not require any special a priori assumption. While being more robust with respect to noise and offering higher spatial resolution and CNR than the functional maps, our method is not quantitative; a potential usage in clinical routine could be as a second reader to assist in map evaluation, or to guide dataset smoothing before the modeling step.
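The core of the method can be sketched as an SVD of the (voxels x time) matrix followed by clustering of the projections onto the three leading singular vectors; the smoothing of the singular vectors and the iterative refinement described above are omitted, and the random data and cluster count below are placeholders.

```python
# SVD of the spatio-temporal data matrix, then k-means on the top-3 projections.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
voxels, frames = 500, 30
tacs = rng.standard_normal((voxels, frames))   # noisy time attenuation curves

U, s, Vt = np.linalg.svd(tacs, full_matrices=False)
features = U[:, :3] * s[:3]                    # per-voxel weights of top-3 basis TACs

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
# Distance of each voxel to its own cluster centroid: a full-resolution map
# of how functionally (dis)similar each voxel is to its cluster.
dist = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
```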
Spatial distribution of enzyme activities along the root and in the rhizosphere of different plants
NASA Astrophysics Data System (ADS)
Razavi, Bahar S.; Zarebanadkouki, Mohsen; Blagodatskaya, Evgenia; Kuzyakov, Yakov
2015-04-01
Extracellular enzymes are important for the decomposition of many biological macromolecules abundant in soil, such as cellulose, hemicelluloses and proteins. Activities of enzymes produced by both plant roots and microbes are the primary biological drivers of organic matter decomposition and nutrient cycling. So far, the acquisition of in situ data on the local activity of different enzymes in soil has been challenging. That is why there is an urgent need for spatially explicit methods, such as 2-D zymography, to determine the variation of enzymes along the roots of different plants. Here, we further developed the zymography technique in order to quantitatively visualize enzyme activities (Spohn and Kuzyakov, 2013) with better spatial resolution. We grew maize (Zea mays L.) and lentil (Lens culinaris) in rhizoboxes under optimum conditions for 21 days to study the spatial distribution of enzyme activity in soil and along roots. We visualized the 2D distribution of the activity of three enzymes: β-glucosidase, leucine aminopeptidase and phosphatase, using fluorogenically labelled substrates. The spatial resolution of the fluorescent images was improved by direct application of a substrate-saturated membrane to the soil-root system. The newly developed direct zymography shows different patterns of spatial distribution of enzyme activity along the roots and soil of different plants. We observed a uniform distribution of enzyme activities along the root system of lentil. The root system of maize, however, demonstrated inhomogeneity of enzyme activities: the apical part of an individual root (the root tip) showed the highest activity. The activity of all enzymes was highest in the vicinity of the roots and decreased towards the bulk soil. Spatial patterns of enzyme activities as a function of distance from the root surface were enzyme specific, with the largest extension for phosphatase. We conclude that improved zymography is a promising in situ technique to analyze, visualize and quantify the spatial distribution of enzyme activities in rhizosphere hotspots. References: Spohn, M., Kuzyakov, Y., 2013. Phosphorus mineralization can be driven by microbial need for carbon. Soil Biology & Biochemistry 61: 69-75.
Quantum image median filtering in the spatial domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Liu, Xiande; Xiao, Hong
2018-03-01
Spatial filtering is a principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although the filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design of the quantum median filter and its applications in image de-noising. To this end, we first present the quantum circuits for three basic modules (Cycle Shift, Comparator, and Swap), and then design two composite modules (Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of n, so that the classical method can be sped up.
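For comparison, the classical counterpart of the median filtering task is a one-liner with SciPy; the quantum circuit described above realizes the same sort-and-select operation over local neighborhoods, with the claimed complexity advantage. The noise level and window size below are illustrative assumptions.

```python
# Classical median filtering baseline: 3x3 neighborhood median vs. impulse noise.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(6)
img = np.full((64, 64), 128.0)
img[rng.random(img.shape) < 0.05] = 255.0   # impulse ("salt") noise
denoised = median_filter(img, size=3)       # 3x3 neighborhood median
```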
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-01-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H; Dolly, S; Zhao, T
Purpose: A prototype reconstruction algorithm that can provide direct electron density (ED) images from single-energy CT scans is currently being developed by Siemens Healthcare GmbH. This feature can eliminate the need for a kV-specific calibration curve in radiation treatment planning. An added benefit is that beam-hardening artifacts are also reduced on direct-ED images due to the underlying material decomposition. This study quantitatively analyzes the reduction of beam-hardening artifacts on direct-ED images and suggests additional clinical uses. Methods: HU and direct-ED images were reconstructed from a head phantom scanned on a Siemens Definition AS CT scanner at five tube potentials: 70, 80, 100, 120 and 140 kV. From these images, the mean, standard deviation (SD), and local noise power spectrum (NPS) were calculated for regions of interest (ROIs) of the same locations and sizes. A complete analysis of beam-hardening artifact reduction and image quality improvement was conducted. Results: As the tube potential increases, ROI means and SDs decrease on both HU and direct-ED images. The mean value differences between HU and direct-ED images are up to 8%, with an absolute value of 2.9. Compared to those on HU images, the SDs are lower on direct-ED images, with differences of up to 26%. Interestingly, the local NPS calculated from direct-ED images shows consistent values in the low spatial frequency domain for images acquired at all tube potential settings, whereas it varies dramatically on HU images. This also confirms the beam-hardening artifact reduction on ED images. Conclusions: The low SDs on direct-ED images and the relatively consistent NPS values in the low spatial frequency domain indicate a reduction of beam-hardening artifacts. The direct-ED image has the potential to assist in more accurate organ contouring and is a better fit for the desired purpose of CT simulations for radiotherapy.
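As an illustration of the ROI statistics reported above, here is a minimal sketch of a common single-ROI local NPS estimator in Python. The normalization convention, pixel spacing, and mean-subtraction detrending are textbook assumptions, not details taken from this study.

# Sketch of a local noise power spectrum (NPS) estimate for one ROI,
# using the common 2-D FFT definition. The pixel spacings (dx, dy) and
# the detrending choice (subtracting the ROI mean) are assumptions.
import numpy as np

def local_nps(roi: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """NPS(fx, fy) = (dx*dy / (Nx*Ny)) * |FFT2(roi - mean)|^2."""
    nx, ny = roi.shape
    detrended = roi - roi.mean()                  # remove the DC level
    spectrum = np.fft.fftshift(np.fft.fft2(detrended))
    return (dx * dy / (nx * ny)) * np.abs(spectrum) ** 2

# ROI statistics of the kind reported in the study: mean, SD, and NPS
roi = np.random.default_rng(1).normal(40.0, 5.0, size=(64, 64))
print(roi.mean(), roi.std(), local_nps(roi, dx=0.5, dy=0.5).shape)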
Direct Numerical Simulation of Complex Turbulence
NASA Astrophysics Data System (ADS)
Hsieh, Alan
Direct numerical simulations (DNS) of spanwise-rotating turbulent channel flow were conducted. The database obtained from these simulations was used to investigate the turbulence generation cycle for simple and complex turbulence. For turbulent channel flow, three theoretical models concerning the formation and evolution of sublayer streaks, three-dimensional hairpin vortices and propagating plane waves were validated using visualizations from the present DNS data. The proper orthogonal decomposition (POD) method was used to verify the existence of the propagating plane waves; a new extension of the POD method was derived to demonstrate these plane waves in a spatial channel model. The analysis of coherent structures was extended to complex turbulence and used to determine the proper computational box size for a minimal flow unit (MFU) at Ro_b < 0.5. Proper realization of Taylor-Görtler vortices in the highly turbulent pressure region was shown to be necessary for acceptably accurate MFU turbulence statistics, which required a minimum spanwise domain length Lz = π. A dependence of MFU accuracy on Reynolds number was also discovered: MFU models required a larger domain to accurately approximate higher-Reynolds-number flows. In addition, the DNS results were used to evaluate several turbulence closure models for momentum and thermal transport in rotating turbulent channel flow. Four nonlinear eddy viscosity turbulence models were tested; among these, Explicit Algebraic Reynolds Stress Models (EARSM) reproduced the Reynolds stress distributions in best agreement with DNS data for rotational flows. The modeled pressure-strain functions of EARSM were shown to strongly influence the Reynolds stress distributions near the wall. Turbulent heat flux distributions obtained from two explicit algebraic heat flux models consistently displayed increasing disagreement with DNS data as the rotation rate increased. Results were also obtained regarding flow control of fully developed, spatially evolving turbulent channel flow using phononic subsurface structures. Fluid-structure interaction (FSI) simulations were conducted by attaching phononic structures to the bottom wall of a turbulent channel flow field, and a reduction of turbulent kinetic energy was observed for different phononic designs.
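Since the analysis above leans on POD, a minimal snapshot-POD sketch via the thin SVD may help; this is the standard construction, not the dissertation's spatial-channel extension, and the data shapes are illustrative.

# Minimal POD of a snapshot matrix via thin SVD: columns are flattened
# flow fields at successive times. POD acts on fluctuations about the mean.
import numpy as np

def pod(snapshots: np.ndarray):
    """snapshots: (n_points, n_times). Returns spatial modes, singular
    values, and (scaled) temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean
    modes, svals, coeffs = np.linalg.svd(fluct, full_matrices=False)
    return modes, svals, coeffs

# Energy fraction captured by the leading modes
X = np.random.default_rng(2).normal(size=(1000, 50))   # points x times
modes, s, a = pod(X)
energy = s**2 / (s**2).sum()
print("leading-mode energy fractions:", energy[:3])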
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow information to be communicated to remote locations without the need for travel and the associated time and expense. Visual information can be communicated using cameras and monitors; an image can capture multiple objects and convey them, via a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques, which are only effective for one or two receivers. Accurately capturing a sound environment consisting of multiple sources, and recreating it for a group of people, is an unsolved problem. This work focuses on new methods of multiple-source 3-D sound environment capture and on applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this way, a spherical harmonic transform is analogous to a Fourier transform: the Fourier transform takes a signal into the frequency domain, while the spherical harmonic transform takes it into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms become available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this work show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
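A minimal sketch of the spherical harmonic decomposition step may be useful: given the pressure at known microphone directions for one frequency bin, the SH coefficients follow from a least-squares fit. The array geometry, truncation order, and test signal below are assumptions for illustration, not the dissertation's array design.

# Least-squares spherical harmonic decomposition of one frequency bin
# of spherical-array data, up to a fixed order N.
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, n, azimuth, polar)

def shd_matrix(azim, polar, order):
    """Columns are Y_n^m evaluated at the microphone directions."""
    cols = [sph_harm(m, n, azim, polar)
            for n in range(order + 1) for m in range(-n, n + 1)]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
n_mics, order = 32, 3                         # 32 mics resolve (3+1)^2 = 16 modes
azim = rng.uniform(0, 2 * np.pi, n_mics)
polar = np.arccos(rng.uniform(-1, 1, n_mics))  # roughly uniform on the sphere
Y = shd_matrix(azim, polar, order)
p = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)  # one freq bin
coeffs, *_ = np.linalg.lstsq(Y, p, rcond=None)  # SH coefficients a_nm
print(coeffs.shape)                              # ((order + 1)^2,)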
Logo recognition using alpha-rooted phase correlation in the radon transform domain
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2009-08-01
Alpha-rooted phase correlation (ARPC) is a recently developed variant of classical phase correlation that includes a Fourier-domain image enhancement operation. ARPC combines classical phase correlation with alpha-rooting to provide tunable image enhancement. The alpha-rooting parameters may be adjusted to trade off the height and width of the ARPC main lobe: a high, narrow main-lobe peak provides high matching accuracy for aligned images but reduced matching performance for misaligned logos, whereas a lower, wider peak trades matching accuracy on aligned logos for improved matching performance on misaligned imagery. Previously, we developed ARPC and used it in the spatial domain for logo recognition as part of an overall automated document analysis problem. However, spatial-domain ARPC performance can be sensitive to logo misalignments, including rotational misalignment. In this paper we use ARPC as a match metric in the Radon transform domain for logo recognition. In the Radon transform domain, rotational misalignments correspond to translations in the Radon transform angle parameter. These translations are captured by ARPC, thereby producing rotation-invariant logo matching. In the paper, we first present an overview of ARPC and then describe the logo matching algorithm. We present numerical performance results demonstrating matching tolerance to rotational misalignments, and we demonstrate robustness of the Radon-domain rotation estimation to noise. We present logo verification and recognition performance results using the proposed approach on a public-domain logo database, and we compare them to results obtained using spatial-domain ARPC and state-of-the-art SURF features for logos in salt-and-pepper noise.
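The following Python sketch shows one plausible single-parameter reading of ARPC: phase correlation with only partial magnitude normalization, so that alpha = 1 recovers classical phase correlation and smaller alpha lowers and widens the main lobe. The published method's separate alpha-rooting parameters are collapsed into this one exponent for brevity; treat it as an assumption, not the paper's exact formulation.

# Hedged sketch of alpha-rooted phase correlation: partial normalization
# of the cross-power spectrum. alpha = 1 gives pure phase correlation.
import numpy as np

def arpc_surface(img_a: np.ndarray, img_b: np.ndarray, alpha: float):
    """Return the correlation surface; its peak gives the translation."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    mag = np.abs(cross)
    mag[mag == 0] = 1.0                          # avoid division by zero
    return np.real(np.fft.ifft2(cross / mag ** alpha))

# Matching a logo against a shifted copy: the peak locates the shift
rng = np.random.default_rng(4)
logo = rng.random((64, 64))
shifted = np.roll(logo, (5, -3), axis=(0, 1))
surface = arpc_surface(shifted, logo, alpha=0.9)
print(np.unravel_index(surface.argmax(), surface.shape))  # (5, 61), i.e. shift (5, -3)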
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei
2016-01-01
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution. PMID:27147376
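To make the similarity-matrix idea concrete, here is a minimal sketch of the Gaussian similarity-weighted neighborhood averaging at the core of PWLS-SBR, applied as a single smoothing pass. The neighborhood radius, Gaussian width, and test images are illustrative assumptions; the full method embeds this operator in the penalized weighted least-square objective rather than applying it directly.

# One pass of similarity-weighted averaging: Gaussian weights on CT-value
# differences, averaged over the high- and low-energy images, applied to
# a decomposed material image.
import numpy as np

def similarity_smooth(ct_hi, ct_lo, image, radius=3, sigma=20.0):
    """Apply a row-normalized similarity matrix to `image`."""
    pad = lambda a: np.pad(a, radius, mode="edge")
    hi, lo, img = pad(ct_hi), pad(ct_lo), pad(image)
    out = np.zeros_like(image, dtype=float)
    norm = np.zeros_like(image, dtype=float)
    r, (h, w) = radius, image.shape
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            sh = lambda a: a[r + di:r + di + h, r + dj:r + dj + w]
            # average the similarities from the two energy channels
            wgt = 0.5 * (np.exp(-(sh(hi) - ct_hi) ** 2 / (2 * sigma**2)) +
                         np.exp(-(sh(lo) - ct_lo) ** 2 / (2 * sigma**2)))
            out += wgt * sh(img)
            norm += wgt
    return out / norm

rng = np.random.default_rng(5)
hi_img = rng.normal(0, 30, (32, 32))
lo_img = hi_img + rng.normal(0, 5, (32, 32))
material = hi_img - lo_img                  # stand-in for a decomposed image
print(similarity_smooth(hi_img, lo_img, material).std(), material.std())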
Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains
Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John
2015-08-18
This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.
Spatial Harmonic Decomposition as a tool for unsteady flow phenomena analysis
NASA Astrophysics Data System (ADS)
Duparchy, A.; Guillozet, J.; De Colombel, T.; Bornard, L.
2014-03-01
Hydropower is already the largest single renewable electricity source today, but its further development will face new deployment constraints, such as large-scale projects in emerging economies and the growth of intermittent renewable energy technologies. The potential role of hydropower as a grid stabilizer leads to operating hydro power plants in "off-design" zones. As a result, new methods of analyzing the associated unsteady phenomena are needed to improve the design of hydraulic turbines. The key idea of the development is to compute a spatial description of a phenomenon by combining signals from several sensors. Spatial harmonic decomposition (SHD) extends the concept of so-called synchronous and asynchronous pulsations by projecting sensor signals onto a linearly independent set of spatial modes. This mathematical approach is very generic, as it can be applied to any distribution of a scalar quantity defined on a closed curve. After a mathematical description of SHD, this paper discusses the impact of instrumentation and provides tools for interpreting SHD signals. Then, as a practical application, SHD is applied to model test measurements in order to capture and describe dynamic pressure fields. In particular, the spatial description of the phenomena provides new tools to separate the part of the pressure fluctuations that contributes to output power instability or mechanical stresses. The study of machine stability in the partial-load operating range in turbine mode, and the comparison between the gap pressure field and radial thrust behavior during turbine brake operation, are both relevant illustrations of the contribution of SHD.
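A minimal sketch of SHD on a closed curve: with sensors at known angular positions, the instantaneous signals are projected onto spatial Fourier modes by least squares, mode 0 being the synchronous (plane) part and higher modes the asynchronous, rotating patterns. The sensor count, angles, and test field below are illustrative assumptions.

# Project sensor signals on a closed curve onto spatial Fourier modes.
import numpy as np

def shd(signals: np.ndarray, angles: np.ndarray, max_mode: int):
    """signals: (n_sensors, n_samples). Returns complex mode amplitudes
    a_k(t) of shape (max_mode + 1, n_samples), fitted by least squares
    for the model p_s(t) = sum_k a_k(t) * exp(i k theta_s)."""
    modes = np.arange(max_mode + 1)
    A = np.exp(1j * np.outer(angles, modes))      # (n_sensors, K + 1)
    amps, *_ = np.linalg.lstsq(A, signals, rcond=None)
    return amps

# Four sensors at 90-degree spacing resolve modes 0 and 1: here a single
# rotating pressure cell shows up as a dominant mode-1 amplitude.
angles = np.deg2rad([0, 90, 180, 270])
t = np.linspace(0, 1, 500)
rotating = np.cos(np.subtract.outer(angles, 2 * np.pi * 5 * t))
amps = shd(rotating, angles, max_mode=1)
print(np.abs(amps).mean(axis=1))                  # mode 1 dominates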
Dual domain watermarking for authentication and compression of cultural heritage images.
Zhao, Yang; Campisi, Patrizio; Kundur, Deepa
2004-03-01
This paper proposes an approach for the combined authentication and compression of color images using a digital watermarking and data hiding framework. The digital watermark comprises two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of the various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual-domain algorithm and is applied to the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.
Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari
2016-01-01
Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960–2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60–70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly increasing temperatures. PMID:26901763
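A toy numerical illustration (not Yasso07 itself) of why the temporal aggregation of climate input matters: when the decomposition rate depends nonlinearly on temperature, the rate evaluated at the long-term mean temperature differs from the mean of the annual rates, consistent with the aggregation sensitivity reported above. The Q10-style response and all parameter values are assumptions.

# Jensen's-inequality illustration of climate aggregation effects on a
# nonlinear decomposition rate.
import numpy as np

def rate(temp_c, k_ref=0.02, q10=2.0, t_ref=10.0):
    """First-order decomposition rate as a function of temperature."""
    return k_ref * q10 ** ((temp_c - t_ref) / 10.0)

rng = np.random.default_rng(6)
annual_temps = rng.normal(2.0, 3.0, 50)     # 50 years of variable climate
print(rate(annual_temps.mean()))            # rate at the long-term mean
print(rate(annual_temps).mean())            # mean of annual rates (larger)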
Development of a Geometric Spatial Visualization Tool
ERIC Educational Resources Information Center
Ganesh, Bibi; Wilhelm, Jennifer; Sherrod, Sonya
2009-01-01
This paper documents the development of the Geometric Spatial Assessment. We detail the development of this instrument which was designed to identify middle school students' strategies and advancement in understanding of four geometric concept domains (geometric spatial visualization, spatial projection, cardinal directions, and periodic patterns)…
O’Halloran, Lydia R.; Borer, Elizabeth T.; Seabloom, Eric W.; MacDougall, Andrew S.; Cleland, Elsa E.; McCulley, Rebecca L.; Hobbie, Sarah; Harpole, W. Stan; DeCrappeo, Nicole M.; Chu, Chengjin; Bakker, Jonathan D.; Davies, Kendi F.; Du, Guozhen; Firn, Jennifer; Hagenah, Nicole; Hofmockel, Kirsten S.; Knops, Johannes M. H.; Li, Wei; Melbourne, Brett A.; Morgan, John W.; Orrock, John L.; Prober, Suzanne M.; Stevens, Carly J.
2013-01-01
Based on regional-scale studies, aboveground production and litter decomposition are thought to positively covary, because they are driven by shared biotic and climatic factors. Until now we have been unable to test whether production and decomposition are generally coupled across climatically dissimilar regions, because we lacked replicated data collected within a single vegetation type across multiple regions, obfuscating the drivers and generality of the association between production and decomposition. Furthermore, our understanding of the relationships between production and decomposition rests heavily on separate meta-analyses of each response, because no studies have simultaneously measured production and the accumulation or decomposition of litter using consistent methods at globally relevant scales. Here, we use a multi-country grassland dataset collected using a standardized protocol to show that live plant biomass (an estimate of aboveground net primary production) and litter disappearance (represented by mass loss of aboveground litter) do not strongly covary. Live biomass and litter disappearance varied at different spatial scales. There was substantial variation in live biomass among continents, sites and plots whereas among continent differences accounted for most of the variation in litter disappearance rates. Although there were strong associations among aboveground biomass, litter disappearance and climatic factors in some regions (e.g. U.S. Great Plains), these relationships were inconsistent within and among the regions represented by this study. These results highlight the importance of replication among regions and continents when characterizing the correlations between ecosystem processes and interpreting their global-scale implications for carbon flux. We must exercise caution in parameterizing litter decomposition and aboveground production in future regional and global carbon models as their relationship is complex. PMID:23405103
Decomposition and organic matter quality in continental peatlands: The ghost of permafrost past
Turetsky, M.R.
2004-01-01
Permafrost patterning in boreal peatlands contributes to landscape heterogeneity, as peat plateaus, palsas, and localized permafrost mounds are interspersed among unfrozen bogs and fens. The degradation of localized permafrost in peatlands alters local topography, hydrology, thermal regimes, and plant communities, and creates unique peatland features called "internal lawns." I used laboratory incubations to quantify carbon dioxide (CO2) production in peat formed under different permafrost regimes (with permafrost, without permafrost, melted permafrost), and explored the relationships among proximate organic matter fractions, nutrient concentrations, and decomposition. Peat within each feature (internal lawn, bog, permafrost mound) is more chemically similar than peat collected within the same province (Alberta, Saskatchewan) or within depth intervals (surface, deep). Internal lawn peat produces more CO2 than the other peatland types. Across peatland features, acid-insoluble material (AIM) and AIM/nitrogen are significant predictors of decomposition. However, within each peatland feature, soluble proximate fractions are better predictors of CO2 production. Permafrost stability in peatlands influences plant and soil environments, which control litter inputs, organic matter quality, and decomposition rates. Spatial patterns of permafrost, as well as ecosystem processes within various permafrost features, should be considered when assessing the fate of soil carbon in northern ecosystems. © 2004 Springer Science+Business Media, Inc.
Weininger, Arthur; Weininger, Susan
2015-01-01
The ability to identify the functional correlates of structural and sequence variation in proteins is a critical capability. We related structures of influenza A N10 and N11 proteins that have no established function to structures of proteins with known function by identifying spatially conserved atoms. We identified atoms with common distributed spatial occupancy in PDB structures of N10 protein, N11 protein, an influenza A neuraminidase, an influenza B neuraminidase, and a bacterial neuraminidase. By superposing these spatially conserved atoms, we aligned the structures and associated molecules. We report spatially and sequence invariant residues in the aligned structures. Spatially invariant residues in the N6 and influenza B neuraminidase active sites were found in previously unidentified spatially equivalent sites in the N10 and N11 proteins. We found the corresponding secondary and tertiary structures of the aligned proteins to be largely identical despite significant sequence divergence. We found structural precedent in known non-neuraminidase structures for residues exhibiting structural and sequence divergence in the aligned structures. In N10 protein, we identified staphylococcal enterotoxin I-like domains. In N11 protein, we identified hepatitis E E2S-like domains, SARS spike protein-like domains, and toxin components shared by alpha-bungarotoxin, staphylococcal enterotoxin I, anthrax lethal factor, clostridium botulinum neurotoxin, and clostridium tetanus toxin. The presence of active site components common to the N6, influenza B, and S. pneumoniae neuraminidases in the N10 and N11 proteins, combined with the absence of apparent neuraminidase function, suggests that the role of neuraminidases in H17N10 and H18N11 emerging influenza A viruses may have changed. The presentation of E2S-like, SARS spike protein-like, or toxin-like domains by the N10 and N11 proteins in these emerging viruses may indicate that H17N10 and H18N11 sialidase-facilitated cell entry has been supplemented or replaced by sialidase-independent receptor binding to an expanded cell population that may include neurons and T-cells. PMID:25706124
Aridity and decomposition processes in complex landscapes
NASA Astrophysics Data System (ADS)
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) arises from slope orientation, through differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22), where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above ground (0.2 mm mesh) and microbes below ground (2 cm depth, 0.2 mm mesh). Four replicates of each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally decreased with increasing aridity, with k going from 0.0025 day⁻¹ on equatorial (dry) facing slopes to 0.0040 day⁻¹ on polar (wet) facing slopes. However, differences in temperature resulting from morning versus afternoon sun on east and west aspects (not captured in the aridity metric) led to poor prediction of decomposition at the sites in the intermediate aridity range. Overall, the results highlight that relatively small differences in microclimate due to slope orientation can have large effects on decomposition. Future research will aim to refine the aridity metric to better resolve small-scale variation in surface temperature, which is important when up-scaling decomposition processes to landscapes.
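For concreteness, the single-pool negative exponential fit used to obtain k from litter-bag mass loss can be sketched as follows; the collection times and mass fractions below are invented for illustration.

# Fit M(t) = M0 * exp(-k t) to the fraction of litter mass remaining.
import numpy as np
from scipy.optimize import curve_fit

def single_pool(t_days, k):
    return np.exp(-k * t_days)            # mass fraction remaining, M0 = 1

t = np.array([30, 60, 120, 210, 365], dtype=float)   # collection times (days)
frac = np.array([0.91, 0.83, 0.68, 0.52, 0.33])      # fraction remaining
(k_hat,), cov = curve_fit(single_pool, t, frac, p0=[0.003])
print(f"k = {k_hat:.4f} per day")  # ~0.003, within the study's 0.0025-0.0040 range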
Changes in Root Decomposition Rates Across Soil Depths
NASA Astrophysics Data System (ADS)
Hicks Pries, C.; Porras, R. C.; Castanha, C.; Torn, M. S.
2016-12-01
Over half of global soil organic carbon (SOC) is stored in subsurface soils (>30 cm). Turnover times of SOC increase with depth, as evidenced by radiocarbon ages of 1,000 to more than 10,000 years in many deep soil horizons, but the reasons for this increase are unclear. Many factors that potentially control SOC decomposition change with depth, such as increased protection of SOC in aggregates or organo-mineral complexes and increased spatial heterogeneity of SOC "hotspots" like roots, which limit the accessibility of SOC to microbes. Lower concentrations of organic matter at depth may inhibit microbial activity through energy limitation, and the microbial community itself changes with depth. To investigate how SOC decomposition differs with depth, we inserted a 13C-labeled fine-root substrate at three depths (15, 50, and 90 cm) in a coniferous forest Alfisol and measured the root carbon remaining in particulate (>2 mm), bulk (<2 mm), free light, and mineral soil fractions over 2.5 years. We also characterized how the microbial community and SOC changed with depth. Initial rates of decomposition were unaffected by soil depth: 50% of root carbon was lost from all depths within the first year. After 2.5 years, however, decomposition rates were affected by soil depth, with only 15% of the root carbon remaining at 15 cm while 35% remained at 90 cm. Microbial communities, based on phospholipid fatty acid analysis, changed with depth: fungal biomarkers decreased, whereas actinomycete biomarkers increased. However, the preferences of different microbial groups for the 13C-labeled root carbon were consistent across depths. In contrast, the amount of mineral-associated SOC did not change with depth. Thus, the decreased decomposition rates in this deep soil are not due to mineral associations limiting SOC availability, but may instead be due to changes in microbial communities, particularly in the microbes needed to carry out the later stages of root decomposition.
Forest Floor Decomposition Following Hurricane Litter Inputs in Several Puerto Rican Forests
Rebecca Ostertag; Frederick N. Scatena; Whendee L. Silver
2003-01-01
Hurricanes affect ecosystem processes by altering resource availability and heterogeneity, but the spatial and temporal signatures of these events on biomass and nutrient cycling processes are not well understood. We examined mass and nutrient inputs of hurricane-derived litter in six tropical forests spanning three life zones in northeastern Puerto Rico after the...
Yongqiang Liu
2003-01-01
The relations between monthly-seasonal soil moisture and precipitation variability are investigated by identifying the coupled patterns of the two hydrological fields using singular value decomposition (SVD). SVD is a technique of principal component analysis similar to empirical orthogonal functions (EOF). However, it is applied to two variables simultaneously and is...
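A minimal sketch of the SVD (maximum covariance) analysis described above: the coupled patterns of two anomaly fields are the singular vectors of their cross-covariance matrix. The field dimensions and random test data are placeholders.

# SVD of the cross-covariance between two anomaly fields yields paired
# coupled patterns (left: soil moisture, right: precipitation).
import numpy as np

rng = np.random.default_rng(7)
n_time, n_soil, n_precip = 120, 200, 200        # months x grid points
soil = rng.normal(size=(n_time, n_soil))        # soil moisture anomalies
precip = rng.normal(size=(n_time, n_precip))    # precipitation anomalies

cov = soil.T @ precip / (n_time - 1)            # cross-covariance matrix
u, s, vt = np.linalg.svd(cov, full_matrices=False)
scf = s**2 / (s**2).sum()                       # squared covariance fraction
print("leading coupled pattern explains", scf[0])
# u[:, 0] and vt[0] are the paired soil-moisture / precipitation patterns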
Dynamic laser speckle angiography achieved by eigen-decomposition filtering.
Li, Chenxi; Wang, Ruikang
2017-06-01
A new approach is proposed for the statistical analysis of laser speckle signals emerging from living biological tissue, based on eigen-decomposition, to separate the dynamic speckle signals due to moving blood cells from the static speckle signals due to static tissue components, thereby achieving angiography of the interrogated tissue in vivo. The proposed approach is tested by imaging mouse ear pinna in vivo, demonstrating its capability to provide detailed microvascular networks with high contrast and high temporal and spatial resolution. It is expected to provide further opportunities for laser speckle imaging in biomedical and clinical applications where the microvascular response to a stimulus or tissue injury is of interest. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
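A minimal sketch of eigen-decomposition filtering of a speckle image stack: form the pixels-by-frames Casorati matrix, take its SVD, and discard the leading components attributable to static tissue, keeping the dynamic remainder. The number of rejected components and the stack dimensions are assumptions, not values from the paper.

# Separate dynamic from static speckle via SVD of the Casorati matrix.
import numpy as np

def eig_filter(stack: np.ndarray, n_static: int = 1) -> np.ndarray:
    """stack: (n_frames, H, W). Returns the dynamic part, same shape."""
    n, h, w = stack.shape
    casorati = stack.reshape(n, h * w).T.astype(float)   # (pixels, frames)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s[:n_static] = 0.0                     # zero the static components
    dynamic = (u * s) @ vt                 # reconstruct without them
    return dynamic.T.reshape(n, h, w)

frames = np.random.default_rng(8).random((50, 64, 64))
print(eig_filter(frames).shape)            # (50, 64, 64)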
Hoffman, A N; Krigbaum, A; Ortiz, J B; Mika, A; Hutchinson, K M; Bimonte-Nelson, H A; Conrad, C D
2011-09-01
Chronic stress results in reversible spatial learning impairments in the Morris water maze that correspond with hippocampal CA3 dendritic retraction in male rats. Whether chronic stress impacts other types of memory domains, and whether these can similarly recover, is unknown. This study assessed the effects of chronic stress, with and without a post-stress delay, on learning and memory deficits within two memory domains, reference and working memory, in the radial arm water maze (RAWM). Three groups of 5-month-old male Sprague-Dawley rats were either not stressed [control (CON)], or restrained (6 h/day for 21 days) and then tested on the RAWM either on the next day [stress immediate (STR-IMM)] or following a 21-day delay [stress delay (STR-DEL)]. Although the groups learned the RAWM task similarly, they differed in the 24-h retention trial. Specifically, the STR-IMM group made more errors in both the spatial reference and working memory domains, and these deficits corresponded with a reduction in apical branch points and length of hippocampal CA3 dendrites. In contrast, the STR-DEL group made significantly fewer errors in both the reference and working memory domains than the STR-IMM group. Moreover, the STR-DEL group showed better RAWM performance in the reference memory domain than the CON group, and this corresponded with restored CA3 dendritic complexity, revealing long-term enhancing actions of chronic stress. These results indicate that chronic stress-induced impairments of spatial working and reference memory, and CA3 dendritic retraction, are reversible. Indeed, chronic stress can have lasting effects that benefit spatial reference memory, and these beneficial effects are independent of CA3 dendritic complexity. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.