Pacanowski, Romain; Salazar Celis, Oliver; Schlick, Christophe; Granier, Xavier; Poulin, Pierre; Cuyt, Annie
2012-11-01
Over the last two decades, much effort has been devoted to accurately measuring Bidirectional Reflectance Distribution Functions (BRDFs) of real-world materials and to using the resulting data efficiently for rendering. Because of their large size, measured BRDFs are difficult to use directly in real-time applications, and fitting the most sophisticated analytical BRDF models is still a complex task. In this paper, we introduce Rational BRDF, a general-purpose and efficient representation for arbitrary BRDFs, based on Rational Functions (RFs). Using an adapted parametrization, we demonstrate how Rational BRDFs offer 1) a more compact and efficient representation using low-degree RFs, 2) an accurate fitting of measured materials with guaranteed control of the residual error, and 3) efficient importance sampling, by applying the same fitting process to determine the inverse of the Cumulative Distribution Function (CDF) generated from the BRDF, for use in Monte Carlo rendering.
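A minimal sketch of the underlying idea (one-variable linearized least-squares rational fitting; the paper's method uses constrained rational approximation with guaranteed error bounds, which this does not reproduce):

```python
import numpy as np

def fit_rational(x, y, dp=2, dq=2):
    """Fit y ~ p(x)/q(x) with deg(p)=dp, deg(q)=dq by solving the
    linearized least-squares problem p(x_i) - y_i*q(x_i) ~ 0, with
    q's constant term fixed to 1 to remove the scale ambiguity."""
    Vp = np.vander(x, dp + 1, increasing=True)         # columns: 1, x, x^2, ...
    Vq = np.vander(x, dq + 1, increasing=True)[:, 1:]  # skip q's constant term
    A = np.hstack([Vp, -(y[:, None]) * Vq])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs[:dp + 1], np.concatenate([[1.0], coeffs[dp + 1:]])

def eval_rational(p, q, x):
    # Coefficients are stored lowest-degree first; polyval wants highest first.
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Recover a known rational function from noiseless samples.
x = np.linspace(0.0, 1.0, 50)
y = (1.0 + 2.0 * x) / (1.0 + 0.5 * x)
p, q = fit_rational(x, y, dp=1, dq=1)
```

A real BRDF fit would be multivariate (over the adapted parametrization of incoming/outgoing directions) and would bound the residual error explicitly rather than minimizing it in the least-squares sense.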
NASA Astrophysics Data System (ADS)
Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke
Quantum Monte Carlo (QMC) calculations of giant molecules and of surface and defect properties of solids have recently become feasible thanks to rapidly expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes. To ensure fast evaluation, the B-spline coefficients are shared within a node but not distributed among nodes. A hybrid representation, which incorporates atomic orbitals near the ions and B-splines in the interstitial regions, offers a more accurate and less memory-demanding description of the orbitals, because they are naturally more atomic-like near the ions and much smoother in between, thus allowing coarser B-spline grids. We will demonstrate the advantage of the hybrid representation over pure B-spline and Gaussian basis sets, and also show significant speed-ups with our new scheme, for example in computing the non-local pseudopotentials. Moreover, we discuss a new algorithm for atomic-orbital initialization, which previously required an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulating large, even inhomogeneous, systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, which are important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running-time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
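The spectrum representation can be sketched as follows (a generic illustration, not the authors' optimized implementation; enumerating all 4**k k-mers is only practical for small k):

```python
from collections import Counter
from itertools import product

def spectrum(seq, k=3, alphabet="ACGT"):
    """Fixed-length k-mer spectrum: count every length-k word in a
    canonical order, so sequences of any length map to vectors of the
    same dimension (4**k for DNA)."""
    words = ("".join(w) for w in product(alphabet, repeat=k))
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(w, 0) for w in words]

def spectrum_similarity(a, b, k=3):
    """Inner product of two spectra: an alignment-free similarity."""
    return sum(x * y for x, y in zip(spectrum(a, k), spectrum(b, k)))
```

Because two sequences are never aligned, the cost is linear in sequence length, which is the source of the speed advantage over alignment-based comparison.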
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
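The core of a grid-based potential lookup can be sketched as follows (an illustration of the general technique, not ATTRACT's actual grid format or its neighbor-list correction):

```python
import numpy as np

def trilinear(grid, spacing, origin, pos):
    """Look up a precomputed potential at an arbitrary position by
    trilinear interpolation on a regular 3-D grid (the caller must
    keep `pos` inside the grid)."""
    f = (np.asarray(pos, dtype=float) - origin) / spacing
    i = np.floor(f).astype(int)          # lower corner indices
    t = f - i                            # fractional offsets in [0, 1)
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                v += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return v

# A grid whose value equals the x index: interpolation recovers x exactly.
grid = np.fromfunction(lambda i, j, k: i, (4, 4, 4))
```

Precomputing the receptor's potential on such a grid turns each per-atom energy evaluation into a constant-time lookup, which is where the reported speed-ups come from.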
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
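The moment-constrained step can be sketched with a plain linear program (a simplified stand-in: the total-weight objective below replaces the paper's entropy-inspired cost, and the candidate nodes are assumed fixed in advance):

```python
import numpy as np
from scipy.optimize import linprog

def quadrature_from_moments(nodes, moments):
    """Find nonnegative weights w_j on candidate nodes x_j such that
    sum_j w_j * x_j**k = m_k for each supplied moment m_k.  The linear
    objective (total weight here) is a placeholder for an
    entropy-inspired cost; linprog enforces w >= 0 via the bounds."""
    K = len(moments)
    A_eq = np.vander(nodes, K, increasing=True).T   # row k holds x_j**k
    res = linprog(c=np.ones(len(nodes)), A_eq=A_eq, b_eq=moments,
                  bounds=(0, None), method="highs")
    return res.x if res.success else None

# Moments of a two-point distribution: weight 0.5 at x=1 and 0.5 at x=3.
nodes = np.array([0.0, 1.0, 2.0, 3.0])
moments = np.array([1.0, 2.0, 5.0])
w = quadrature_from_moments(nodes, moments)
```

The same machinery extends to multivariate moments by replacing the Vandermonde rows with general monomials of the aerosol coordinates.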
Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...
2015-06-04
Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
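The Bag of Bonds featurization can be sketched roughly as follows (an illustrative reconstruction from the description above; the published model's exact bag ordering and scaling may differ):

```python
import numpy as np
from itertools import combinations

def bag_of_bonds(Z, R, bag_sizes):
    """Map a molecule (nuclear charges Z, positions R) to a fixed-length
    vector: group pairwise Coulomb terms Z_i*Z_j/r_ij by element pair,
    sort each bag in descending order, and zero-pad to a fixed size.
    `bag_sizes` pre-declares every element-pair key, e.g. (1, 8)."""
    bags = {key: [] for key in bag_sizes}
    for i, j in combinations(range(len(Z)), 2):
        key = tuple(sorted((Z[i], Z[j])))
        bags[key].append(Z[i] * Z[j] / np.linalg.norm(R[i] - R[j]))
    vec = []
    for key, size in sorted(bag_sizes.items()):
        bag = sorted(bags[key], reverse=True)
        vec.extend(bag + [0.0] * (size - len(bag)))
    return np.array(vec)

# Toy water-like geometry (hypothetical coordinates, arbitrary units).
Z = [8, 1, 1]
R = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
vec = bag_of_bonds(Z, R, {(1, 1): 1, (1, 8): 2})
```

Sorting within each bag makes the vector invariant to atom ordering, and the fixed padding makes vectors from different molecules directly comparable for kernel regression.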
An integral equation representation of wide-band electromagnetic scattering by thin sheets
An efficient, accurate numerical modeling scheme has been developed, based on the integral equation solution to compute electromagnetic (EM) responses of thin sheets over a wide frequency band. The thin-sheet approach is useful for simulating the EM response of a fracture system ...
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
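For orientation, the mapping replaces each of the N discrete electronic states with a pair of classical-like Cartesian variables (x_n, p_n); a commonly quoted form of the resulting Meyer-Miller mapping Hamiltonian (shown here as general background, not as the paper's exact starting point) is

```latex
H_{\mathrm{MM}} \;=\; \frac{P^2}{2M}
  \;+\; \sum_{n,m=1}^{N} V_{nm}(R)\,\tfrac{1}{2}\bigl(x_n x_m + p_n p_m - \delta_{nm}\bigr),
```

whose diagonal terms reduce to the familiar harmonic-oscillator occupation factors and whose off-diagonal terms couple the mapping oscillators through the potential matrix.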
NASA Astrophysics Data System (ADS)
Barkeshli, Sina
A relatively simple and efficient closed form asymptotic representation of the microstrip dyadic surface Green's function is developed. The large parameter in this asymptotic development is proportional to the lateral separation between the source and field points along the planar microstrip configuration. Surprisingly, this asymptotic solution remains accurate even for very small (almost two tenths of a wavelength) lateral separation of the source and field points. The present asymptotic Green's function will thus allow a very efficient calculation of the currents excited on microstrip antenna patches/feed lines and monolithic millimeter and microwave integrated circuit (MIMIC) elements based on a moment method (MM) solution of an integral equation for these currents. The kernel of the latter integral equation is the present asymptotic form of the microstrip Green's function. It is noted that the conventional Sommerfeld integral representation of the microstrip surface Green's function is very poorly convergent when used in this MM formulation. In addition, an efficient exact steepest descent path integral form employing a radially propagating representation of the microstrip dyadic Green's function is also derived which exhibits a relatively faster convergence when compared to the conventional Sommerfeld integral representation. The same steepest descent form could also be obtained by deforming the integration contour of the conventional Sommerfeld representation; however, the radially propagating integral representation exhibits better convergence properties for laterally separated source and field points even before the steepest descent path of integration is used. Numerical results based on the efficient closed form asymptotic solution for the microstrip surface Green's function developed in this work are presented for the mutual coupling between a pair of dipoles on a single layer grounded dielectric slab.
The accuracy of the latter calculations is confirmed by comparison with results based on an exact integral representation for that Green's function.
Representation control increases task efficiency in complex graphical representations.
Moritz, Julia; Meyerhoff, Hauke S; Meyer-Dernbecher, Claudia; Schwan, Stephan
2018-01-01
In complex graphical representations, the relevant information for a specific task is often distributed across multiple spatial locations. In such situations, understanding the representation requires internal transformation processes in order to extract the relevant information. However, digital technology enables observers to alter the spatial arrangement of depicted information and therefore to offload the transformation processes. The objective of this study was to investigate the use of such representation control (i.e. the users' option to decide how information should be displayed) in order to accomplish an information extraction task, in terms of solution time and accuracy. In the representation control condition, the participants were allowed to reorganize the graphical representation and reduce information density. In the control condition, no interactive features were offered. We observed that participants in the representation control condition solved tasks that required reorganization of the maps faster and more accurately than participants without representation control. The present findings demonstrate how processes of cognitive offloading, spatial contiguity, and information coherence interact in knowledge media intended for broad and diverse groups of recipients.
Wavepacket dynamics and the multi-configurational time-dependent Hartree approach
NASA Astrophysics Data System (ADS)
Manthe, Uwe
2017-06-01
Multi-configurational time-dependent Hartree (MCTDH) based approaches are efficient, accurate, and versatile methods for high-dimensional quantum dynamics simulations. Applications range from detailed investigations of polyatomic reaction processes in the gas phase to high-dimensional simulations studying the dynamics of condensed phase systems described by typical solid state physics model Hamiltonians. The present article presents an overview of the different areas of application and provides a comprehensive review of the underlying theory. The concepts and guiding ideas underlying the MCTDH approach and its multi-mode and multi-layer extensions are discussed in detail. The general structure of the equations of motion is highlighted. The representation of the Hamiltonian and the correlated discrete variable representation (CDVR), which provides an efficient multi-dimensional quadrature in MCTDH calculations, are discussed. Methods which facilitate the calculation of eigenstates, the evaluation of correlation functions, and the efficient representation of thermal ensembles in MCTDH calculations are described. Different schemes for the treatment of indistinguishable particles in MCTDH calculations and recent developments towards a unified multi-layer MCTDH theory for systems including bosons and fermions are discussed.
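For orientation, the standard MCTDH wavefunction ansatz (the textbook form the review builds on) expands the f-dimensional wavefunction in time-dependent single-particle functions:

```latex
\Psi(q_1,\ldots,q_f,t) \;=\;
  \sum_{j_1=1}^{n_1}\cdots\sum_{j_f=1}^{n_f}
  A_{j_1\cdots j_f}(t)\,
  \prod_{\kappa=1}^{f}\varphi^{(\kappa)}_{j_\kappa}(q_\kappa,t),
```

where both the expansion coefficients A and the single-particle functions φ are propagated variationally; the multi-layer extension applies this contraction recursively.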
Infrared small target detection in heavy sky scene clutter based on sparse representation
NASA Astrophysics Data System (ADS)
Liu, Depeng; Li, Zhengzhou; Liu, Bing; Chen, Wenhao; Liu, Tianmei; Cao, Lei
2017-09-01
A novel infrared small-target detection method based on sparse representation of sky clutter and target is proposed in this paper to cope with the uncertainty in representing clutter and target. The sky-scene background clutter is described by a fractal random field, and it is perceived and eliminated via sparse representation over a fractal background over-complete dictionary (FBOD). The infrared small-target signal is simulated by a generalized Gaussian intensity model, and it is expressed by a generalized Gaussian target over-complete dictionary (GGTOD), which can describe small targets more efficiently than traditional structured dictionaries. The infrared image is decomposed on the union of the FBOD and GGTOD, and the sparse-representation energies that the target signal and the background clutter yield on the GGTOD differ so distinctly that this energy is adopted to distinguish target from clutter. Experiments are conducted, and the results show that the proposed approach improves small-target detection performance, especially under heavy clutter, because background clutter can be efficiently perceived and suppressed by the FBOD while the varying target can be represented accurately by the GGTOD.
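The decompose-and-compare idea can be sketched in a simplified form (least-squares reconstruction energies stand in for true sparse coding, and the FBOD/GGTOD dictionaries are not reconstructed here):

```python
import numpy as np

def representation_energy(signal, D):
    """Energy of the least-squares reconstruction of `signal` over the
    columns of dictionary D (a stand-in for the sparse-coding step)."""
    coef, *_ = np.linalg.lstsq(D, signal, rcond=None)
    return float(np.sum((D @ coef) ** 2))

def is_target(patch, D_background, D_target):
    """Label a patch as target when the target dictionary captures more
    of its energy than the background dictionary does."""
    return representation_energy(patch, D_target) > representation_energy(patch, D_background)
```

In the actual method the decomposition is sparse over the union of both dictionaries, so each patch is explained jointly rather than fit to each dictionary separately.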
2002-07-01
… Adler-Adler, and Kalbach-Mann representations of the scatter cross sections that are used for some isotopes in ENDF/B-VI are not included. They are not
Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David
2013-05-21
We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
Spectral properties from Matsubara Green's function approach: Application to molecules
NASA Astrophysics Data System (ADS)
Schüler, M.; Pavlyukh, Y.
2018-03-01
We present results for many-body perturbation theory for the one-body Green's function at finite temperatures using the Matsubara formalism. Our method relies on the accurate representation of the single-particle states in standard Gaussian basis sets, allowing us to efficiently compute, among other observables, quasiparticle energies and Dyson orbitals of atoms and molecules. In particular, we challenge the second-order treatment of the Coulomb interaction by benchmarking its accuracy for a well-established test set of small molecules, which also includes systems where the usual Hartree-Fock treatment encounters difficulties. We discuss different schemes for extracting quasiparticle properties and assess their range of applicability. With an accurate solution and compact representation, our method is an ideal starting point to study electron dynamics in time-resolved experiments by the propagation of the Kadanoff-Baym equations.
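For orientation, two standard ingredients of the formalism (general definitions, not specific to this paper's implementation): the fermionic Matsubara frequencies at inverse temperature β, and the Dyson equation linking the interacting Green's function G to the reference G₀ through the self-energy Σ:

```latex
\omega_n = \frac{(2n+1)\pi}{\beta}, \qquad
G(i\omega_n) = G_0(i\omega_n) + G_0(i\omega_n)\,\Sigma(i\omega_n)\,G(i\omega_n).
```

Spectral properties then follow by analytic continuation of G from the imaginary frequencies iωₙ to the real axis.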
Efficient free-form surface representation with application in orthodontics
NASA Astrophysics Data System (ADS)
Yamany, Sameh M.; El-Bialy, Ahmed M.
1999-03-01
Orthodontics is the branch of dentistry concerned with the study of growth of the craniofacial complex. The detection and correction of malocclusion and other dental abnormalities is one of the most important and critical phases of orthodontic diagnosis. This paper introduces a system that can assist in automatic orthodontics diagnosis. The system can be used to classify skeletal and dental malocclusion from a limited number of measurements. This system is not intended to deal with several cases but is aimed at cases more likely to be encountered in epidemiological studies. Prior to the measurement of the orthodontics parameters, the position of the teeth in the jaw model must be detected. A new free-form surface representation is adopted for the efficient and accurate segmentation and separation of teeth from a scanned jaw model. The new representation encodes the curvature and surface normal information into a 2D image. Image segmentation tools are then used to extract structures of high/low curvature. By iteratively removing these structures, individual teeth surfaces are obtained.
Accurate calculation of the geometric measure of entanglement for multipartite quantum states
NASA Astrophysics Data System (ADS)
Teng, Peiyuan
2017-07-01
This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
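For the special case of two qubits, the geometric measure has a closed form via the Schmidt decomposition, which makes a useful sanity check for tensor-decomposition codes (the paper targets general multipartite states, where a genuine rank-1 tensor approximation is required instead of an SVD):

```python
import numpy as np

def geometric_entanglement_2qubit(psi):
    """Geometric measure of entanglement for a two-qubit pure state.
    The maximal overlap with product states equals the largest Schmidt
    coefficient, i.e. the largest singular value s of the reshaped 2x2
    coefficient matrix, so the measure is 1 - s**2."""
    C = np.asarray(psi, dtype=complex).reshape(2, 2)
    s_max = np.linalg.svd(C, compute_uv=False)[0]
    return 1.0 - s_max**2

# Bell state (|00> + |11>)/sqrt(2): maximally entangled two-qubit state.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
```

For more parties the wavefunction becomes a higher-order tensor and the closest product state must be found iteratively, e.g. by alternating rank-1 updates.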
NASA Astrophysics Data System (ADS)
Sakti, Apurba; Gallagher, Kevin G.; Sepulveda, Nestor; Uckun, Canan; Vergara, Claudio; de Sisternes, Fernando J.; Dees, Dennis W.; Botterud, Audun
2017-02-01
We develop three novel enhanced mixed-integer linear representations of the power limit of the battery and its efficiency as a function of the charge and discharge power and the state of charge of the battery, which can be directly implemented in large-scale power systems models and solved with commercial optimization solvers. Using these battery representations, we conduct a techno-economic analysis of the performance of a 10 MWh lithium-ion battery system, testing the effect of a 5-min vs. a 60-min price signal on profits using real-time prices from a selected node in the MISO electricity market. Results show that models of lithium-ion batteries where the power limits and efficiency are held constant overestimate profits by 10% compared to those obtained from an enhanced representation that more closely matches the real behavior of the battery. When the battery system is exposed to a 5-min price signal, the energy arbitrage profitability improves by 60% compared to that from hourly price exposure. These results indicate that a more accurate representation of Li-ion batteries, as well as the market rules that govern the frequency of electricity prices, can play a major role in the estimation of the value of battery technologies for power grid applications.
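A deliberately simplified version of the baseline (constant-efficiency) arbitrage model can be written as a plain linear program; the paper's enhanced representations add integer variables so that power limits and efficiency vary with state of charge, which this sketch omits:

```python
import numpy as np
from scipy.optimize import linprog

def arbitrage_profit(prices, dt, e_max, p_max, eta):
    """Energy-arbitrage profit for a constant-efficiency battery as a
    linear program.  Decision variables: charge c_t and discharge d_t
    (both in [0, p_max]) at each step; stored energy stays in [0, e_max].
    Profit = sum_t price_t * (d_t - c_t) * dt."""
    prices = np.asarray(prices, dtype=float)
    T = len(prices)
    obj = np.concatenate([prices, -prices]) * dt     # minimize -(profit)
    L = np.tril(np.ones((T, T))) * dt                # running sums over time
    # Stored energy after step t: sum_{s<=t} (eta*c_s - d_s/eta) * dt
    soc = np.hstack([eta * L, -L / eta])
    A_ub = np.vstack([soc, -soc])                    # soc <= e_max and soc >= 0
    b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=(0, p_max), method="highs")
    return -res.fun
```

With two price points (buy low, sell high) the optimum is simply to fill the battery at the low price and empty it at the high price, which makes the model easy to validate by hand.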
Dynamic belief state representations.
Lee, Daniel D; Ortega, Pedro A; Stocker, Alan A
2014-04-01
Perceptual and control systems are tasked with the challenge of accurately and efficiently estimating the dynamic states of objects in the environment. To properly account for uncertainty, it is necessary to maintain a dynamical belief state representation rather than a single state vector. In this review, canonical algorithms for computing and updating belief states in robotic applications are delineated, and connections to biological systems are highlighted. A navigation example is used to illustrate the importance of properly accounting for correlations between belief state components, and to motivate the need for further investigations in psychophysics and neurobiology. Copyright © 2014 Elsevier Ltd. All rights reserved.
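The canonical example of such a belief-state update is the Kalman filter, where the belief is a Gaussian maintained as a mean vector plus a full covariance matrix (a textbook sketch, not tied to any particular system in the review):

```python
import numpy as np

def kalman_step(mu, P, z, F, Q, H, R):
    """One predict/update cycle of a Kalman filter.  The belief state is
    the pair (mu, P): mean vector plus full covariance, so correlations
    between state components are tracked explicitly."""
    # Predict under linear dynamics x' = F x + noise with covariance Q.
    mu_p = F @ mu
    P_p = F @ P @ F.T + Q
    # Update with measurement z = H x + noise with covariance R.
    S = H @ P_p @ H.T + R
    K = P_p @ H.T @ np.linalg.inv(S)
    mu_new = mu_p + K @ (z - H @ mu_p)
    P_new = (np.eye(len(mu)) - K @ H) @ P_p
    return mu_new, P_new
```

Dropping the off-diagonal entries of P is exactly the failure mode the navigation example warns about: ignoring correlations between belief components leads to overconfident, inconsistent estimates.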
Modeling of scattering from ice surfaces
NASA Astrophysics Data System (ADS)
Dahlberg, Michael Ross
Theoretical research is proposed to study electromagnetic wave scattering from ice surfaces. A mathematical formulation is developed that is more representative of the electromagnetic scattering from ice, with volume mechanisms included, and capable of handling multiple-scattering effects. This research is essential to advancing the field of environmental science and engineering by enabling more accurate inversion of remote sensing data. The results contribute toward a more accurate representation of the scattering from ice surfaces that is computationally more efficient and can be applied to many remote-sensing applications.
An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet
NASA Technical Reports Server (NTRS)
Gordon, R. A.
1980-01-01
Brouwer's and Brouwer-Lyddane's use of the von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A succinctly simple, pseudo-phenomenologically conceptualized algorithm is introduced which accurately and economically synthesizes the modeling of drag effects. The method epitomizes effortless, efficient computer mechanization. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer ground-based or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities adapt favorably to, and are comparable with, results obtained using complex definitive Cowell method solutions on satellites experiencing significant drag effects.
Cerebellar input configuration toward object model abstraction in manipulation tasks.
Luque, Niceto R; Garrido, Jesus A; Carrillo, Richard R; Coenen, Olivier J-M D; Ros, Eduardo
2011-08-01
It is widely assumed that the cerebellum is one of the main nervous centers involved in correcting and refining planned movement and accounting for disturbances occurring during movement, for instance, due to the manipulation of objects which affect the kinematics and dynamics of the robot-arm plant model. In this brief, we evaluate a way in which a cerebellar-like structure can store a model in the granular and molecular layers. Furthermore, we study how its microstructure and input representations (context labels and sensorimotor signals) can efficiently support model abstraction toward delivering accurate corrective torque values for increasing precision during different-object manipulation. We also describe how the explicit (object-related input labels) and implicit state input representations (sensorimotor signals) complement each other to better handle different models and allow interpolation between two already stored models. This facilitates accurate corrections during manipulations of new objects taking advantage of already stored models.
NASA Astrophysics Data System (ADS)
Salinas, P.; Pavlidis, D.; Xie, Z.; Osman, H.; Pain, C. C.; Jackson, M. D.
2018-01-01
We present a new, high-order, control-volume-finite-element (CVFE) method for multiphase porous media flow with discontinuous 1st-order representation for pressure and discontinuous 2nd-order representation for velocity. The method has been implemented using unstructured tetrahedral meshes to discretize space. The method locally and globally conserves mass. However, unlike conventional CVFE formulations, the method presented here does not require the use of control volumes (CVs) that span the boundaries between domains with differing material properties. We demonstrate that the approach accurately preserves discontinuous saturation changes caused by permeability variations across such boundaries, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than using conventional CVFE methods. We resolve a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media.
Representations and uses of light distribution functions
NASA Astrophysics Data System (ADS)
Lalonde, Paul Albert
1998-11-01
At their lowest level, all rendering algorithms depend on models of local illumination to define the interplay of light with the surfaces being rendered. These models depend both on the representations of light scattering at a surface due to reflection and, to an equal extent, on the representation of light sources and light fields. Both emission and reflection have in common that they describe how light leaves a surface as a function of direction. Reflection also depends on an incident light direction, and emission can depend on the position on the light source. We call the functions representing emission and reflection light distribution functions (LDFs). There are some difficulties in using measured light distribution functions. The data sets are very large: the size of the data grows with the fourth power of the sampling resolution. For example, a bidirectional reflectance distribution function (BRDF) sampled at five degrees angular resolution, which is arguably insufficient to capture highlights and other high-frequency effects in the reflection, can easily require one and a half million samples. Once acquired, these data require some form of interpolation to be usable. Any compression method used must be efficient, both in space and in the time required to evaluate the function at a point or over a range of points. This dissertation examines a wavelet representation of light distribution functions that addresses these issues. A data structure is presented that allows efficient reconstruction of LDFs for a given set of parameters, making the wavelet representation feasible for rendering tasks. Texture mapping methods that take advantage of our LDF representations are examined, as well as techniques for filtering LDFs, and methods for using wavelet-compressed bidirectional reflectance distribution functions (BRDFs) and light sources with Monte Carlo path tracing algorithms.
The wavelet representation effectively compresses BRDF and emission data while inducing only a small error in the reconstructed signal. The representation can be used to efficiently evaluate some of the integrals that appear in shading computations, which allows fast, accurate computation of local shading. The representation can be used to represent light fields and is used to reconstruct views of environments interactively from a precomputed set of views. The representation of the BRDF also allows the efficient generation of reflected directions for Monte Carlo ray tracing applications. The method can be integrated into many different global illumination algorithms, including ray tracers and wavelet radiosity systems.
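The fourth-power growth of tabulated BRDF data cited in the abstract can be checked with a quick back-of-the-envelope script (the uniform angular gridding is an illustrative assumption):

```python
def brdf_samples(res_deg):
    """Sample count for a 4D BRDF tabulated on a uniform angular grid:
    (theta_in x phi_in) x (theta_out x phi_out), theta in [0, 90), phi in [0, 360)."""
    per_direction = int(90 / res_deg) * int(360 / res_deg)
    return per_direction ** 2

five_deg = brdf_samples(5.0)       # about 1.7 million samples at 5-degree resolution
half_step = brdf_samples(2.5)      # halving the step multiplies storage by 2**4 = 16
```

At 5 degrees this already exceeds the "one and a half million samples" mentioned above, and each halving of the angular step multiplies storage by 16, which is what motivates compressed (e.g., wavelet) representations.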
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data: the 3D neurons are first projected into binary images, and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which helps users explore neuron morphologies in an interactive and immersive manner.
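The binary coding idea can be sketched with the classic random-hyperplane hashing scheme (a generic stand-in under stated assumptions, not the specific method of the paper): real-valued feature vectors are projected onto random directions and thresholded at zero, so the Hamming distance between short codes approximates the angle between the original features.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_codes(X, n_bits=256):
    """Compress feature vectors (rows of X) into n_bits-bit binary codes."""
    R = rng.normal(size=(X.shape[1], n_bits))   # random hyperplanes
    return (X @ R > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

# Synthetic stand-ins for neuron feature vectors: two near-duplicates, one unrelated.
base = rng.normal(size=64)
X = np.stack([base,
              base + 0.05 * rng.normal(size=64),   # near-duplicate
              rng.normal(size=64)])                # unrelated
codes = binary_codes(X)
```

Similar features map to codes that differ in only a few bits, so nearest-neighbor retrieval can run on compact codes instead of the full feature vectors.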
A reflection model for eclipsing binary stars
NASA Technical Reports Server (NTRS)
Wood, D. B.
1973-01-01
A highly accurate reflection model has been developed which emphasizes efficiency of computer calculation. It is assumed that the heating of the irradiated star must depend upon the following properties of the irradiating star: (1) effective temperature; (2) apparent area as seen from a point on the surface of the irradiated star; (3) limb darkening; and (4) zenith distance of the apparent centre as seen from a point on the surface of the irradiated star. The algorithm eliminates the need to integrate over the irradiating star while providing a highly accurate representation of the integrated bolometric flux, even for gravitationally distorted stars.
Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.
2015-01-01
Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy, and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months' experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1%, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
NASA Astrophysics Data System (ADS)
Wang, Han; Zhang, Linfeng; Han, Jiequn; E, Weinan
2018-07-01
Recent developments in many-body potential energy representation via deep learning have brought new hopes to addressing the accuracy-versus-efficiency dilemma in molecular simulations. Here we describe DeePMD-kit, a package written in Python/C++ that has been designed to minimize the effort required to build deep learning based representations of potential energy and force fields and to perform molecular dynamics. Potential applications of DeePMD-kit span from finite molecules to extended systems and from metallic systems to chemically bonded systems. DeePMD-kit is interfaced with TensorFlow, one of the most popular deep learning frameworks, making the training process highly automatic and efficient. On the other end, DeePMD-kit is interfaced with high-performance classical molecular dynamics and quantum (path-integral) molecular dynamics packages, i.e., LAMMPS and i-PI, respectively. Thus, upon training, the potential energy and force field models can be used to perform efficient molecular simulations for different purposes. As an example of the many potential applications of the package, we use DeePMD-kit to learn the interatomic potential energy and forces of a water model using data obtained from density functional theory. We demonstrate that the resulting molecular dynamics model accurately reproduces the structural information contained in the original model.
NASA Technical Reports Server (NTRS)
1986-01-01
Digital imaging is the computer-processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments locates objects within an image and measures them to extract quantitative information. Applications are CAT scanners, radiography, and microscopy in medicine, as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology and is accurate and cost-efficient.
Variable Speed Hydrodynamic Model of an Auv Utilizing Cross Tunnel Thrusters
2017-09-01
The models presented account for reduced control surface efficiency at low speeds and build an accurate representation of the behavior of a REMUS (Remote Environmental Measuring Unit) 100 AUV while operating at variable speeds.
Karaboga, Arnaud S; Petronin, Florent; Marchetti, Gino; Souchet, Michel; Maigret, Bernard
2013-04-01
Since 3D molecular shape is an important determinant of biological activity, designing accurate 3D molecular representations is still of high interest. Several chemoinformatic approaches have been developed to try to describe accurate molecular shapes. Here, we present a novel 3D molecular description, namely the harmonic pharma chemistry coefficient (HPCC), combining a ligand-centric pharmacophoric description projected onto a spherical-harmonic-based shape of a ligand. The performance of HPCC was evaluated by comparison to the standard ROCS software in a ligand-based virtual screening (VS) approach using the publicly available directory of useful decoys (DUD) data set comprising over 100,000 compounds distributed across 40 protein targets. Our results were analyzed using commonly reported statistics such as the area under the curve (AUC) and normalized sum of logarithms of ranks (NSLR) metrics. Overall, our HPCC 3D method is globally as efficient as the state-of-the-art ROCS software in terms of enrichment, and slightly better for more than half of the DUD targets. Since it is widely acknowledged that VS results depend strongly on the nature of the protein families, we believe that the present HPCC solution is of interest among current ligand-based VS methods. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated by frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the appearance of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth, which in turn impacts snow redistribution, creating larger drifts. Simulating these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high-resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold-region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales; reduced computational overhead; and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations, sourced from long-term process studies and the current cold-regions literature, allows for comparison of process representations and, importantly, of their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully yield important insights and aid in the development of improved process representations.
Representational Translation with Concrete Models in Organic Chemistry
ERIC Educational Resources Information Center
Stull, Andrew T.; Hegarty, Mary; Dixon, Bonnie; Stieff, Mike
2012-01-01
In representation-rich domains such as organic chemistry, students must be facile and accurate when translating between different 2D representations, such as diagrams. We hypothesized that translating between organic chemistry diagrams would be more accurate when concrete models were used because difficult mental processes could be augmented by…
Hierarchical Boltzmann simulations and model error estimation
NASA Astrophysics Data System (ADS)
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinement allows one to successively improve the result toward the complete Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept for such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
NASA Astrophysics Data System (ADS)
Zieliński, Tomasz G.
2017-11-01
The paper proposes and investigates computationally efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial periodic arrangements of straight fibres are examined, as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide predictions similar to those of a volume element, which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling allowed the effective speeds and damping of acoustic waves propagating in such media to be determined, which brings up a discussion on the correlation between the speed, penetration range, and attenuation of sound waves. Original experiments on manufactured copper-wire samples are presented, and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
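The surrogate idea, training a network once and then evaluating it in place of an expensive solver, can be sketched with a toy stand-in (a tiny numpy MLP fit to a decaying oscillatory "relaxation" curve; the architecture, data, and learning rate are all illustrative assumptions, not the models used in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive viscoelastic response: a relaxation-like curve.
t = np.linspace(0.0, 3.0, 64)[:, None]
y = np.exp(-t) * np.sin(2.0 * t)

# Small MLP surrogate: 1 input -> 16 tanh units -> 1 output.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(t)
loss0 = np.mean((y0 - y) ** 2)          # error before training

lr = 0.02
for _ in range(3000):                    # full-batch gradient descent on MSE
    h, yp = forward(t)
    g = 2.0 * (yp - y) / len(t)          # dL/dyp
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)         # back through tanh
    gW1 = t.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, y1 = forward(t)
loss1 = np.mean((y1 - y) ** 2)           # error after training
```

Once trained, `forward` costs a handful of matrix multiplies per query, which is the source of the speedups reported for ANN surrogates of viscoelastic codes.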
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
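The locality-constrained coding step can be sketched with the standard closed-form solution in the style of Wang et al. (a hedged approximation of the variant used here; the dictionary below is synthetic): each sample is reconstructed from nearby dictionary atoms with a sum-to-one code, penalizing the use of distant atoms.

```python
import numpy as np

def llc_code(x, B, lam=1e-4):
    """Locality-constrained linear coding, closed form.
    x: (d,) sample; B: (k, d) dictionary; returns code c with sum(c) == 1.
    Minimizes ||x - c @ B||^2 + lam * sum(d_j^2 * c_j^2) s.t. sum(c) == 1."""
    k = B.shape[0]
    d = np.linalg.norm(B - x, axis=1)          # locality: distance to each atom
    C = (B - x) @ (B - x).T                    # covariance of shifted atoms
    C += lam * np.diag(d ** 2)                 # locality regularizer
    c = np.linalg.solve(C + 1e-12 * np.eye(k), np.ones(k))
    return c / c.sum()                         # enforce the sum-to-one constraint

# Synthetic dictionary: 8 atoms on the unit circle; code a point between atoms.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
B = np.stack([np.cos(angles), np.sin(angles)], axis=1)
x = np.array([np.cos(0.3), np.sin(0.3)])
c = llc_code(x, B)
```

The reconstruction `c @ B` stays close to `x` while the code concentrates on nearby atoms, which is what makes such codes useful for appearance models in tracking.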
Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A
2014-12-01
The rapid adoption of computer-assisted technologies in modern orthopedic surgery calls for the expansion of computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, one common challenge during the current developmental phase remains the lack of reliable frameworks allowing a fast and precise conversion of the anatomical information acquired through computed tomography to a format acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm from alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.
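The fitting stage can be illustrated with a generic least-squares B-spline fit using the Cox-de Boor basis (a plain-vanilla baseline for intuition, not the two-step control-polygon deformation scheme the paper proposes):

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + p] - knots[i]
    if den > 0:
        left = (u - knots[i]) / den * bspline_basis(i, p - 1, u, knots)
    den = knots[i + p + 1] - knots[i + 1]
    if den > 0:
        right = (knots[i + p + 1] - u) / den * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def fit_bspline(points, n_ctrl=10, p=3):
    """Least-squares fit of a clamped cubic B-spline to ordered 2D points."""
    m = len(points)
    u = np.linspace(0.0, 1.0, m)                       # chordless uniform parameters
    knots = np.concatenate([np.zeros(p),
                            np.linspace(0.0, 1.0, n_ctrl - p + 1),
                            np.ones(p)])
    A = np.array([[bspline_basis(j, p, min(t, 1.0 - 1e-9), knots)
                   for j in range(n_ctrl)] for t in u])
    ctrl, *_ = np.linalg.lstsq(A, points, rcond=None)  # control polygon
    return ctrl, A @ ctrl                              # polygon and fitted curve

# Synthetic "contour" data: a smooth planar curve sampled at 40 points.
t = np.linspace(0.0, 1.0, 40)
pts = np.stack([t, np.sin(2.0 * np.pi * t)], axis=1)
ctrl, fit = fit_bspline(pts, n_ctrl=10, p=3)
```

For a smooth contour, ten cubic control points already bring the fit well under the sub-millimeter tolerances quoted in the abstract (in the normalized units used here).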
Multi-representation ability of students on the problem solving physics
NASA Astrophysics Data System (ADS)
Theasy, Y.; Wiyanto; Sujarwata
2018-03-01
Accuracy in representing knowledge indicates a student's level of understanding. The multi-representation ability of students in solving physics problems was investigated through a qualitative grounded-theory method, implemented with physics education students of Unnes in the 2016/2017 academic year. The forms of representation used are verbal (V), images/diagrams (D), graphs (G), and mathematical (M). Students in the high and low categories used graphical representations (G) accurately 83% and 77.78% of the time, respectively, while students in the medium category used image representations (D) accurately 66% of the time.
On the solution of the Helmholtz equation on regions with corners.
Serkh, Kirill; Rokhlin, Vladimir
2016-08-16
In this paper we solve several boundary value problems for the Helmholtz equation on polygonal domains. We observe that when the problems are formulated as the boundary integral equations of potential theory, the solutions are representable by series of appropriately chosen Bessel functions. In addition to being analytically perspicuous, the resulting expressions lend themselves to the construction of accurate and efficient numerical algorithms. The results are illustrated by a number of numerical examples.
A Coupled Earthquake-Tsunami Simulation Framework Applied to the Sumatra 2004 Event
NASA Astrophysics Data System (ADS)
Vater, Stefan; Bader, Michael; Behrens, Jörn; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Wollherr, Stephanie; van Zelst, Iris
2017-04-01
Large earthquakes along subduction zone interfaces have generated destructive tsunamis near Chile in 1960, Sumatra in 2004, and northeast Japan in 2011. In order to better understand these extreme events, we have developed tools for physics-based, coupled earthquake-tsunami simulations. This simulation framework is applied to the 2004 Indian Ocean M 9.1-9.3 earthquake and tsunami, a devastating event that resulted in the loss of more than 230,000 lives. The earthquake rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh with the software SeisSol. Advantages of this approach include accurate representation of complex fault and sea floor geometries and a parallelized and efficient workflow in high-performance computing environments. Accurate and efficient representation of the tsunami evolution and inundation at the coast is achieved with an adaptive mesh discretizing the shallow water equations with a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme. With the application of the framework to this historic event, we aim to better understand the involved mechanisms between the dynamic earthquake within the earth's crust, the resulting tsunami wave within the ocean, and the final coastal inundation process. Earthquake model results are constrained by GPS surface displacements and tsunami model results are compared with buoy and inundation data. This research is part of the ASCETE Project, "Advanced Simulation of Coupled Earthquake and Tsunami Events", funded by the Volkswagen Foundation.
Nonnegative Matrix Factorization for Efficient Hyperspectral Image Projection
NASA Technical Reports Server (NTRS)
Iacchetta, Alexander S.; Fienup, James R.; Leisawitz, David T.; Bolcar, Matthew R.
2015-01-01
Hyperspectral imaging for remote sensing has prompted development of hyperspectral image projectors that can be used to characterize hyperspectral imaging cameras and techniques in the lab. One such emerging astronomical hyperspectral imaging technique is wide-field double-Fourier interferometry. NASA's current, state-of-the-art, Wide-field Imaging Interferometry Testbed (WIIT) uses a Calibrated Hyperspectral Image Projector (CHIP) to generate test scenes and provide a more complete understanding of wide-field double-Fourier interferometry. Given enough time, the CHIP is capable of projecting scenes with astronomically realistic spatial and spectral complexity. However, this would require a very lengthy data collection process. For accurate but time-efficient projection of complicated hyperspectral images with the CHIP, the field must be decomposed both spectrally and spatially in a way that provides a favorable trade-off between accurately projecting the hyperspectral image and the time required for data collection. We apply nonnegative matrix factorization (NMF) to decompose hyperspectral astronomical datacubes into eigenspectra and eigenimages that allow time-efficient projection with the CHIP. Included is a brief analysis of NMF parameters that affect accuracy, including the number of eigenspectra and eigenimages used to approximate the hyperspectral image to be projected. For the chosen field, the normalized mean squared synthesis error is under 0.01 with just 8 eigenspectra. NMF of hyperspectral astronomical fields better utilizes the CHIP's capabilities, providing time-efficient and accurate representations of astronomical scenes to be imaged with the WIIT.
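The decomposition itself can be sketched with the classic Lee-Seung multiplicative updates on a synthetic flattened datacube (a generic NMF baseline under stated assumptions, not the authors' exact pipeline): the datacube is unfolded into a pixels-by-bands matrix V and factored as V ≈ W H with nonnegative eigenimages W and eigenspectra H.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "datacube": 50 pixels x 20 bands built from 3 nonnegative eigenspectra.
S_true = rng.random((3, 20))        # eigenspectra
A_true = rng.random((50, 3))        # eigenimage weights (flattened spatial maps)
V = A_true @ S_true

k = 3                               # number of eigenspectra/eigenimages to keep
W = rng.random((50, k)) + 0.1       # eigenimages (nonnegative init)
H = rng.random((k, 20)) + 0.1       # eigenspectra (nonnegative init)
eps = 1e-9                          # guards against division by zero

err0 = np.linalg.norm(V - W @ H)
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # Lee-Seung multiplicative updates
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # keep both factors nonnegative
err1 = np.linalg.norm(V - W @ H)
```

The multiplicative updates never introduce negative entries, which matches the physical constraint that projected spectra and spatial weights must be nonnegative; the Frobenius error decreases monotonically, mirroring the small normalized synthesis errors reported for the CHIP.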
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. To overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse-grid degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
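Of the two sampling strategies mentioned, proper orthogonal decomposition is the easier to sketch. Below is a generic POD basis construction via the SVD of a snapshot matrix for a toy parameterized solution; it illustrates only the reduced-basis idea, not the mixed GMsFE discretization itself:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    # Orthonormal basis capturing a (1 - tol) fraction of snapshot energy.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# Snapshots of a toy parameterized field u(x; mu) = sin(pi x) + mu * x^2.
x = np.linspace(0, 1, 200)
mus = np.linspace(0.1, 2.0, 50)
S = np.column_stack([np.sin(np.pi * x) + mu * x**2 for mu in mus])
B = pod_basis(S)                         # intrinsic dimension here is 2
resid = np.linalg.norm(S - B @ (B.T @ S)) / np.linalg.norm(S)
```

Evaluating any new parameter then means solving a tiny problem in the span of `B` rather than the full-order model, which is the source of the online speedup the abstract describes.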
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
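The CRN idea is easy to demonstrate on an immigration-death network, for which dE[X(T)]/dk is known in closed form. The sketch below pairs nominal and perturbed Gillespie runs on identical random streams; the model, rates, and sample sizes are illustrative, and this is the CRN method, not the paper's CRP coupling:

```python
import numpy as np

def ssa(k, gamma, T, rng):
    # Gillespie SSA for: 0 -> X at rate k, X -> 0 at rate gamma*X.
    t, x = 0.0, 0
    while True:
        a0 = k + gamma * x
        t += rng.exponential(1.0 / a0)
        if t > T:
            return x
        if rng.random() < k / a0:
            x += 1
        else:
            x -= 1

def crn_sensitivity(k, gamma, T, dk, n, seed=0):
    # Finite difference of E[X(T)] w.r.t. k; the nominal and perturbed
    # simulations reuse the same random number stream (common random numbers).
    diffs = [(ssa(k + dk, gamma, T, np.random.default_rng([seed, i]))
              - ssa(k, gamma, T, np.random.default_rng([seed, i]))) / dk
             for i in range(n)]
    return float(np.mean(diffs))

k, gamma, T = 5.0, 1.0, 3.0
est = crn_sensitivity(k, gamma, T, dk=1.0, n=3000)
exact = (1 - np.exp(-gamma * T)) / gamma   # analytic dE[X(T)]/dk
```

Because both runs see the same randomness, their difference has far lower variance than two independent runs would, which is exactly the variance-reduction mechanism the abstract exploits.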
NASA Astrophysics Data System (ADS)
Vanicek, Jiri
2014-03-01
Rigorous quantum-mechanical calculations of coherent ultrafast electronic spectra remain difficult. I will present several approaches developed in our group that increase the efficiency and accuracy of such calculations: First, we justified the feasibility of evaluating time-resolved spectra of large systems by proving that the number of trajectories needed for convergence of the semiclassical dephasing representation/phase averaging is independent of dimensionality. Recently, we further accelerated this approximation with a cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. The accuracy of potential energy surfaces was increased by combining the dephasing representation with accurate on-the-fly ab initio electronic structure calculations, including nonadiabatic and spin-orbit couplings. Finally, the inherent semiclassical approximation was removed in the exact quantum Gaussian dephasing representation, in which semiclassical trajectories are replaced by communicating frozen Gaussian basis functions evolving classically with an average Hamiltonian. Among other examples I will present an on-the-fly ab initio semiclassical dynamics calculation of the dispersed time-resolved stimulated emission spectrum of the 54-dimensional azulene. This research was supported by EPFL and by the Swiss National Science Foundation NCCR MUST (Molecular Ultrafast Science and Technology) and Grant No. 200021124936/1.
deBGR: an efficient and near-exact representation of the weighted de Bruijn graph
Pandey, Prashant; Bender, Michael A.; Johnson, Rob; Patro, Rob
2017-01-01
Motivation: Almost all de novo short-read genome and transcriptome assemblers start by building a representation of the de Bruijn Graph of the reads they are given as input. Even when other approaches are used for subsequent assembly (e.g. when one is using ‘long read’ technologies like those offered by PacBio or Oxford Nanopore), efficient k-mer processing is still crucial for accurate assembly, and state-of-the-art long-read error-correction methods use de Bruijn Graphs. Because of the centrality of de Bruijn Graphs, researchers have proposed numerous methods for representing de Bruijn Graphs compactly. Some of these proposals sacrifice accuracy to save space. Further, none of these methods store abundance information, i.e. the number of times that each k-mer occurs, which is key in transcriptome assemblers. Results: We present a method for compactly representing the weighted de Bruijn Graph (i.e. with abundance information) with essentially no errors. Our representation yields zero errors while increasing the space requirements by less than 18–28% compared to the approximate de Bruijn graph representation in Squeakr. Our technique is based on a simple invariant that all weighted de Bruijn Graphs must satisfy, and hence is likely to be of general interest and applicable in most weighted de Bruijn Graph-based systems. Availability and implementation: https://github.com/splatlab/debgr. Contact: rob.patro@cs.stonybrook.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881995
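Functionally, a weighted de Bruijn Graph is a k-mer multiset plus adjacency by (k-1)-overlap. A plain-dict stand-in (nothing like deBGR's compact counting-filter representation, but exposing the same abundance-aware interface) looks like:

```python
from collections import Counter

def weighted_debruijn(reads, k):
    # Weighted de Bruijn Graph as exact k-mer -> abundance counts.
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def successors(counts, kmer):
    # Out-neighbors of a node: extend the (k-1)-suffix by one base.
    return {kmer[1:] + b: counts[kmer[1:] + b]
            for b in "ACGT" if kmer[1:] + b in counts}

reads = ["ACGTACGT", "CGTACGTA"]
g = weighted_debruijn(reads, k=4)
abundance = g["ACGT"]            # how many times the k-mer occurs
nbrs = successors(g, "ACGT")     # weighted out-edges
```

The exact dict costs far more memory than the paper's structure; the point of deBGR is to recover these same counts near-exactly from a compact, approximate representation.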
Refined Zigzag Theory for Laminated Composite and Sandwich Plates
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2009-01-01
A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C^0-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
Visualization of diversity in large multivariate data sets.
Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald
2010-01-01
Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.
Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data
NASA Technical Reports Server (NTRS)
Bose, Tamal
2000-01-01
A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.
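As a reminder of why wavelet representations suit compression and analysis of gridded data, the sketch below implements the plain orthonormal Haar transform in NumPy. This is a textbook illustration, not the project's adaptive algorithm, and it assumes the signal has already been interpolated onto a regular power-of-two grid, which is the abstract's prerequisite step:

```python
import numpy as np

def haar(x):
    # Full multilevel Haar decomposition: list of detail arrays + final average.
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        a, d = ((a[0::2] + a[1::2]) / np.sqrt(2),
                (a[0::2] - a[1::2]) / np.sqrt(2))
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def ihaar(coeffs):
    # Inverse transform: perfect reconstruction for the orthonormal Haar basis.
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

x = np.sin(np.linspace(0, 2 * np.pi, 64))
coeffs = haar(x)
x_rec = ihaar(coeffs)
```

Compression follows by zeroing small detail coefficients before inverting; for smooth signals most of the energy concentrates in a few coefficients.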
Efficient local representations for three-dimensional palmprint recognition
NASA Astrophysics Data System (ADS)
Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua
2013-10-01
Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
2011-01-01
Background Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Results Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. 
Conclusions By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full preservation of data correspondence and provenance. Our representation builds on existing cheminformatics technologies and, by the virtue of RDF specification, remains flexible and amenable to application- and domain-specific annotations without compromising chemical data integration. We conclude that the adoption of a consistent and semantically-enabled chemical specification is imperative for surviving the coming chemical data deluge and supporting systems science research. PMID:21595881
Chepelev, Leonid L; Dumontier, Michel
2011-05-19
ERIC Educational Resources Information Center
Swan, Denise; Goswami, Usha
1997-01-01
Used picture-naming task to identify accurate/inaccurate phonological representations by dyslexic and control children; compared performance on phonological measures for words with precise/imprecise representations. Found that frequency effects in phonological tasks disappeared after considering representational quality, and that availability of…
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Green's function enriched Poisson solver for electrostatics in many-particle systems
NASA Astrophysics Data System (ADS)
Sutmann, Godehard
2016-06-01
A highly accurate method is presented for the construction of the charge density for the solution of the Poisson equation in particle simulations. The method is based on an operator-adjusted source term which can be shown to produce exact results up to numerical precision in the case of a large support of the charge distribution, therefore compensating the discretization error of finite difference schemes. This is achieved by balancing an exact representation of the known Green's function of the regularized electrostatic problem with a discretized representation of the Laplace operator. It is shown that the exact calculation of the potential is possible independent of the order of the finite difference scheme, but the computational efficiency of higher-order methods is found to be superior due to faster convergence to the exact result as a function of the charge support.
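The discretization error that the operator-adjusted source compensates is the standard O(h^2) error of finite differences, illustrated below on a 1-D Poisson problem with a known solution. This is a textbook sketch demonstrating the error scaling only, unrelated to the paper's regularized 3-D solver:

```python
import numpy as np

def poisson_1d(f, n):
    # Solve -u'' = f on (0,1) with u(0)=u(1)=0, second-order central differences.
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
errs = []
for n in (32, 64):
    x, u = poisson_1d(f, n)
    errs.append(np.max(np.abs(u - np.sin(np.pi * x))))
ratio = errs[0] / errs[1]   # ~4 when the grid spacing roughly halves
```

Halving h cuts the error by a factor of about four, confirming second-order convergence; the paper's construction removes this error term entirely for sufficiently smooth (regularized) charges.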
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
NASA Astrophysics Data System (ADS)
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
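The state-space machinery referred to above reduces, in the scalar case, to the standard Kalman recursions. Below is a minimal sketch for tracking a single time-varying parameter under a random-walk smoothness prior; the noise levels and data are illustrative, not the paper's matching-function model:

```python
import numpy as np

def kalman_filter(y, sigma_w2, sigma_v2, m0=0.0, p0=1e6):
    # Local-level model:
    #   state: b_t = b_{t-1} + w_t,  w_t ~ N(0, sigma_w2)  (smoothness prior)
    #   obs:   y_t = b_t + v_t,      v_t ~ N(0, sigma_v2)
    m, p, means = m0, p0, []
    for yt in y:
        p = p + sigma_w2              # predict
        k = p / (p + sigma_v2)        # Kalman gain
        m = m + k * (yt - m)          # update mean
        p = (1 - k) * p               # update variance
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
true_b = np.cumsum(rng.normal(0, 0.1, 300))    # slowly drifting parameter
y = true_b + rng.normal(0, 1.0, 300)           # noisy observations
est = kalman_filter(y, sigma_w2=0.01, sigma_v2=1.0)
mse_filter = float(np.mean((est[50:] - true_b[50:]) ** 2))
mse_obs = float(np.mean((y[50:] - true_b[50:]) ** 2))
```

Fixed-interval smoothing, which the paper also uses, would run a backward pass over these filtered estimates to further reduce the error.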
Effective orthorhombic anisotropic models for wavefield extrapolation
NASA Astrophysics Data System (ADS)
Ibanez-Jacome, Wilson; Alkhalifah, Tariq; Waheed, Umair bin
2014-09-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
Güçlü, Umut; van Gerven, Marcel A J
2017-01-15
Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. We here explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine if individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how dorsal stream responds to natural movies, revealing a correspondence in representations of DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects.
3D shape recovery from image focus using Gabor features
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid
2018-04-01
Recovering an accurate and precise depth map from a set of 2-D images of a target object, each carrying different focus information, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts color information into focus information that is then utilized for recovering the depth map. This article introduces Gabor features as a focus measure for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system and is commonly applied for texture representation. Owing to its low computational complexity, sharp focus-measure curve, robustness to random noise, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
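A Gabor-based focus measure can be sketched directly: filter the patch with a small bank of oriented Gabor kernels and sum the response energy, which drops as defocus removes high frequencies. Everything below (kernel parameters, the box-blur defocus model) is illustrative, not the authors' algorithm:

```python
import numpy as np

def gabor_kernel(sigma=2.0, theta=0.0, lam=4.0, size=11):
    # Real 2-D Gabor kernel (illustrative parameters).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, ker):
    # Plain 'valid' 2-D correlation with NumPy only.
    kh, kw = ker.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def focus_measure(img):
    # Total Gabor response energy over 4 orientations.
    return sum(np.sum(convolve2d(img, gabor_kernel(theta=t)) ** 2)
               for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4))

rng = np.random.default_rng(0)
sharp = rng.random((40, 40))                       # textured patch
blurred = convolve2d(np.pad(sharp, 2, mode='edge'),
                     np.ones((5, 5)) / 25)         # crude defocus model
fm_sharp = focus_measure(sharp)
fm_blurred = focus_measure(blurred)
```

In shape-from-focus, this scalar is computed per pixel window across the image stack, and the depth at each pixel is taken from the frame maximizing it.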
Geospatial Representation, Analysis and Computing Using Bandlimited Functions
2010-02-19
navigation of aircraft and missiles requires detailed representations of gravity and efficient methods for determining orbits and trajectories. However, many...efficient on today's computers. Under this grant, new, computationally efficient, localized representations of gravity have been developed and tested. As a...step in developing a new approach to estimating gravitational potentials, a multiresolution representation for gravity estimation has been proposed.
A roadmap for improving the representation of photosynthesis in Earth System Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, Alistair; Medlyn, Belinda E.; Dukes, Jeffrey S.
Accurate representation of photosynthesis in terrestrial biosphere models (TBMs) is essential for robust projections of global change. However, current representations vary markedly between TBMs, contributing uncertainty to projections of global carbon fluxes.
A roadmap for improving the representation of photosynthesis in Earth System Models
Rogers, Alistair; Medlyn, Belinda E.; Dukes, Jeffrey S.; ...
2016-11-28
Accurate representation of photosynthesis in terrestrial biosphere models (TBMs) is essential for robust projections of global change. However, current representations vary markedly between TBMs, contributing uncertainty to projections of global carbon fluxes.
Edge-SIFT: discriminative binary descriptor for scalable partial-duplicate mobile search.
Zhang, Shiliang; Tian, Qi; Lu, Ke; Huang, Qingming; Gao, Wen
2013-07-01
As the basis of large-scale partial duplicate visual search on mobile devices, image local descriptor is expected to be discriminative, efficient, and compact. Our study shows that the popularly used histogram-based descriptors, such as scale invariant feature transform (SIFT) are not optimal for this task. This is mainly because histogram representation is relatively expensive to compute on mobile platforms and loses significant spatial clues, which are important for improving discriminative power and matching near-duplicate image patches. To address these issues, we propose to extract a novel binary local descriptor named Edge-SIFT from the binary edge maps of scale- and orientation-normalized image patches. By preserving both locations and orientations of edges and compressing the sparse binary edge maps with a boosting strategy, the final Edge-SIFT shows strong discriminative power with compact representation. Furthermore, we propose a fast similarity measurement and an indexing framework with flexible online verification. Hence, the Edge-SIFT allows an accurate and efficient image search and is ideal for computation sensitive scenarios such as a mobile image search. Experiments on a large-scale dataset manifest that the Edge-SIFT shows superior retrieval accuracy to Oriented BRIEF (ORB) and is superior to SIFT in the aspects of retrieval precision, efficiency, compactness, and transmission cost.
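The matching side of any binary descriptor is XOR plus popcount, which is what makes such descriptors cheap on mobile hardware. The toy descriptor below merely thresholds gradient magnitudes of a patch into bits; Edge-SIFT itself compresses oriented edge maps with a boosting strategy, which this sketch does not attempt:

```python
import numpy as np

def binary_edge_descriptor(patch):
    # Toy binary descriptor: threshold gradient magnitude at its median,
    # then pack the resulting bit map into bytes.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    return np.packbits((mag > np.median(mag)).astype(np.uint8))

def hamming(d1, d2):
    # Descriptor distance via XOR + popcount.
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
noisy = patch + rng.normal(0, 0.01, (16, 16))   # near-duplicate patch
other = rng.random((16, 16))                    # unrelated patch
d0, d1, d2 = (binary_edge_descriptor(p) for p in (patch, noisy, other))
h_near = hamming(d0, d1)
h_far = hamming(d0, d2)
```

Near-duplicate patches land a short Hamming distance apart while unrelated patches differ in roughly half their bits, which is the property a partial-duplicate index exploits.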
NASA Astrophysics Data System (ADS)
Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri
2014-03-01
We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0 /S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.
Cognitive, perceptual and action-oriented representations of falling objects.
Zago, Myrka; Lacquaniti, Francesco
2005-01-01
We interact daily with moving objects. How accurate are our predictions about objects' motions? What sources of information do we use? These questions have received wide attention from a variety of different viewpoints. On one end of the spectrum are the ecological approaches assuming that all the information about the visual environment is present in the optic array, with no need to postulate conscious or unconscious representations. On the other end of the spectrum are the constructivist approaches assuming that a more or less accurate representation of the external world is built in the brain using explicit or implicit knowledge or memory besides sensory inputs. Representations can be related to naive physics or to context cue-heuristics or to the construction of internal copies of environmental invariants. We address the issue of prediction of objects' fall at different levels. Cognitive understanding and perceptual judgment of simple Newtonian dynamics can be surprisingly inaccurate. By contrast, motor interactions with falling objects are often very accurate. We argue that the pragmatic action-oriented behaviour and the perception-oriented behaviour may use different modes of operation and different levels of representation.
Knowledge of damage identification about tensegrities via flexibility disassembly
NASA Astrophysics Data System (ADS)
Jiang, Ge; Feng, Xiaodong; Du, Shigui
2017-12-01
Tensegrity structures are composed of continuous cables and discrete struts, which carry tension and compression, respectively. To determine the damage extents of such structures, a new damage identification method based on flexibility disassembly is presented. The method decomposes the structural flexibility matrix into a matrix representing the connectivity between degrees of freedom and a diagonal matrix containing magnitude information, and proceeds in three steps: (1) calculate the perturbation flexibility; (2) compute the flexibility connectivity matrix and the perturbation flexibility parameters; (3) calculate the perturbation stiffness parameters. The efficiency of the proposed method is demonstrated on a numerical example of a pretensioned tensegrity comprising 12 cables and 4 struts. Accurate identification of local damage depends on the availability of good measured data and on an accurate and reasonable algorithm.
NASA Astrophysics Data System (ADS)
Ďuračiová, Renata; Rášová, Alexandra; Lieskovský, Tibor
2017-12-01
When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides the differences in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose to use fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling the spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control. This leads to a simpler process of updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams, which is derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
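The fuzzy similarity and inclusion measures described here can be sketched directly with the standard min/max (Jaccard-style) operators; the membership grades below are illustrative values over raster cells, not the paper's stream data, and other t-norms could be substituted:

```python
# A minimal sketch of fuzzy similarity and inclusion between two uncertain
# spatial objects, each represented by membership grades over a common set
# of cells. The min/max (Jaccard-style) measure is one standard choice.

def fuzzy_similarity(mu_a, mu_b):
    """Jaccard-style similarity: |A ∩ B| / |A ∪ B| with min/max operators."""
    inter = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    union = sum(max(a, b) for a, b in zip(mu_a, mu_b))
    return inter / union if union else 1.0

def fuzzy_inclusion(mu_a, mu_b):
    """Degree to which A is included in B: |A ∩ B| / |A|."""
    inter = sum(min(a, b) for a, b in zip(mu_a, mu_b))
    card_a = sum(mu_a)
    return inter / card_a if card_a else 1.0

stream_dem = [0.0, 0.4, 0.9, 1.0, 0.8]   # derived from the elevation model
stream_map = [0.1, 0.5, 1.0, 1.0, 0.6]   # digitised from a historical map
print(round(fuzzy_similarity(stream_dem, stream_map), 3))  # prints 0.853
```

Objects whose similarity or inclusion degree exceeds a chosen threshold can then be flagged as matches automatically, leaving only borderline cases for manual review.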
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cates, J; Drzymala, R
2014-06-01
Purpose: The purpose of the study was to implement a method for accurate rat brain irradiation using the Gamma Knife Perfexion unit. The system needed to be repeatable, efficient, and dosimetrically and spatially accurate. Methods: A platform ("rat holder") was made such that it is attachable to the Leksell Gamma Knife G Frame. The rat holder utilizes two ear bars contacting bony anatomy and a front tooth bar to secure the rat. The rat holder fits inside the Leksell localizer box, which utilizes fiducial markers to register with the GammaPlan planning system. This method allows for accurate, repeatable setup. A cylindrical phantom was made so that film can be placed axially in the phantom. We then acquired CT image sets of the rat holder and localizer box with both a rat and the phantom. Three treatment plans were created: a plan on the rat CT dataset, a phantom plan with the same prescription dose as the rat plan, and a phantom plan with the same delivery time as the rat plan. Results: Film analysis from the phantom showed that our setup is spatially accurate and repeatable. It is also dosimetrically accurate, with a difference between predicted and measured dose of 2.9%. Film analysis with prescription dose equal between rat and phantom plans showed a difference of 3.8%, showing that our phantom is a good representation of the rat for dosimetry purposes, allowing for +/- 3 mm diameter variation. Film analysis with treatment time equal showed an error of 2.6%, which means we can deliver a prescription dose within 3% accuracy. Conclusion: Our method for irradiation of rat brain has been shown to be repeatable, efficient, and accurate, both dosimetrically and spatially. We can treat a large number of rats efficiently while delivering prescription doses within 3% at millimeter-level accuracy.
Space-time interface-tracking with topology change (ST-TC)
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Buscher, Austin; Asada, Shohei
2014-10-01
To address the computational challenges associated with contact between moving interfaces, such as those in cardiovascular fluid-structure interaction (FSI), parachute FSI, and flapping-wing aerodynamics, we introduce a space-time (ST) interface-tracking method that can deal with topology change (TC). In cardiovascular FSI, our primary target is heart valves. The method is a new version of the deforming-spatial-domain/stabilized space-time (DSD/SST) method, and we call it ST-TC. It includes a master-slave system that maintains the connectivity of the "parent" mesh when there is contact between the moving interfaces. It is an efficient, practical alternative to using unstructured ST meshes, but without giving up on the accurate representation of the interface or consistent representation of the interface motion. We explain the method with conceptual examples and present 2D test computations with models representative of the classes of problems we are targeting.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
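The global-versus-local trade-off described above can be illustrated in one dimension with NumPy's Chebyshev tools; the pressure profile below is a smooth stand-in, not the wing data, and the panel count and polynomial degree are illustrative:

```python
# Sketch of the two pressure-field representations compared in the text:
# a global Chebyshev fit versus piecewise (panel-based) linear interpolation.
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 41)
pressure = np.exp(-4 * x**2)          # smooth stand-in for a pressure profile

# Global representation: one Chebyshev series over the whole domain.
coeffs = C.chebfit(x, pressure, deg=10)
global_err = np.max(np.abs(C.chebval(x, coeffs) - pressure))

# Local representation: linear interpolation on coarse panels.
panels = np.linspace(-1.0, 1.0, 11)
local_err = np.max(np.abs(np.interp(x, panels, np.exp(-4 * panels**2)) - pressure))

print(f"global max error {global_err:.2e}, local max error {local_err:.2e}")
```

For a smooth field the global fit wins easily; the text's point is that sharp local variations (shocks, panel edges) reverse this ranking, motivating the panel-based interpolants.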
A Battery Health Monitoring Framework for Planetary Rovers
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Kulkarni, Chetan Shrikant
2014-01-01
Batteries have seen an increased use in electric ground and air vehicles for commercial, military, and space applications as the primary energy source. An important aspect of using batteries in such contexts is battery health monitoring. Batteries must be carefully monitored such that the battery health can be determined, and end of discharge and end of usable life events may be accurately predicted. For planetary rovers, battery health estimation and prediction is critical to mission planning and decision-making. We develop a model-based approach utilizing computationally efficient and accurate electrochemistry models of batteries. An unscented Kalman filter yields state estimates, which are then used to predict the future behavior of the batteries and, specifically, end of discharge. The prediction algorithm accounts for possible future power demands on the rover batteries in order to provide meaningful results and an accurate representation of prediction uncertainty. The framework is demonstrated on a set of lithium-ion batteries powering a rover at NASA.
An efficient and accurate molecular alignment and docking technique using ab initio quality scoring
Füsti-Molnár, László; Merz, Kenneth M.
2008-01-01
An accurate and efficient molecular alignment technique is presented based on first principle electronic structure calculations. This new scheme maximizes quantum similarity matrices in the relative orientation of the molecules and uses Fourier transform techniques for two purposes. First, building up the numerical representation of true ab initio electronic densities and their Coulomb potentials is accelerated by the previously described Fourier transform Coulomb method. Second, the Fourier convolution technique is applied for accelerating optimizations in the translational coordinates. In order to avoid any interpolation error, the necessary analytical formulas are derived for the transformation of the ab initio wavefunctions in rotational coordinates. The results of our first implementation for a small test set are analyzed in detail and compared with published results of the literature. A new way of refinement of existing shape based alignments is also proposed by using Fourier convolutions of ab initio or other approximate electron densities. This new alignment technique is generally applicable for overlap, Coulomb, kinetic energy, etc., quantum similarity measures and can be extended to a genuine docking solution with ab initio scoring. PMID:18624561
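The Fourier-convolution acceleration of the translational search rests on a standard identity: the overlap similarity of two densities at every shift is a cross-correlation, computable in O(N log N) with FFTs rather than O(N²) by direct summation. A hedged one-dimensional sketch (toy densities, not ab initio ones):

```python
# Sketch of FFT-accelerated translational alignment: the overlap of two
# densities at every circular shift is a cross-correlation, evaluated at
# once via the correlation theorem. The 1D "densities" are toy stand-ins
# for the ab initio electron densities used in the paper.
import numpy as np

def best_shift(rho_a, rho_b):
    """Circular shift of rho_b that maximises its overlap with rho_a."""
    # corr[s] = sum_i rho_a[i] * rho_b[i - s], all shifts at once.
    corr = np.fft.ifft(np.fft.fft(rho_a) * np.conj(np.fft.fft(rho_b))).real
    return int(np.argmax(corr)), corr

rho_a = np.zeros(64); rho_a[20:25] = 1.0   # reference density
rho_b = np.roll(rho_a, -7)                 # same density, shifted by 7 samples
shift, _ = best_shift(rho_a, rho_b)
print(shift)  # prints 7: the shift is recovered
```

In three dimensions the same identity applies grid-point-wise, which is what makes scanning all translations for each trial rotation affordable.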
Li, Yachun; Charalampaki, Patra; Liu, Yong; Yang, Guang-Zhong; Giannarou, Stamatia
2018-06-13
Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures. The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures and data representation and video segmentation methods. We demonstrate the application of the proposed deep learning framework to classify Glioblastoma and Meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving accuracy equal to 99.49%. This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify Glioblastoma and Meningioma tumours. 
The performance evaluation analysis shows the potential clinical value of the technique.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are often used in enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [ McCarty, J.; Parrinello, M. J. Chem. Phys. 2017 , 147 , 204109 ]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables, able to describe in exquisite detail the complex free energy surface of the peptide in a low-dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the dihedral angle arrangement.
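The time-lagged independent component analysis step can be sketched as a generalized eigenproblem on the instantaneous and time-lagged covariance matrices, C(τ)v = λC(0)v, whose leading eigenvectors are the slowest linear combinations of the input features. The trajectory below is synthetic (one slow, one fast coordinate), not an alanine-3 run:

```python
# Minimal TICA sketch: solve C(tau) v = lambda C(0) v for mean-free data.
# Eigenvalues approximate autocorrelations at lag tau, so the top
# eigenvector picks out the slowest coordinate.
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5000, 10
slow = np.cumsum(rng.normal(size=n)) * 0.02   # slowly varying coordinate
fast = rng.normal(size=n)                     # fast, uncorrelated noise
X = np.column_stack([slow, fast])
X = X - X.mean(axis=0)

c0 = X[:-tau].T @ X[:-tau] / (n - tau)        # instantaneous covariance
ctau = X[:-tau].T @ X[tau:] / (n - tau)       # time-lagged covariance
ctau = 0.5 * (ctau + ctau.T)                  # symmetrise

eigvals, eigvecs = np.linalg.eig(np.linalg.solve(c0, ctau))
order = np.argsort(eigvals.real)[::-1]
print("autocorrelation at lag tau:", np.round(eigvals[order].real, 2))
```

The slow coordinate yields an eigenvalue near 1 and the noise an eigenvalue near 0; in the paper's setting the inputs are the six biased dihedral angles rather than raw Cartesian features.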
NASA Astrophysics Data System (ADS)
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in local neighborhoods, and LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract the texture local histogram features (LHOG) at each pixel, which can efficiently capture the complex micro-texture pattern. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Final Report for "Design calculations for high-space-charge beam-to-RF conversion".
DOE Office of Scientific and Technical Information (OSTI.GOV)
David N Smithe
2008-10-17
Accelerator facility upgrades, new accelerator applications, and future design efforts are leading to novel klystron and IOT device concepts, including multiple beams, high-order mode operation, and new geometry configurations of old concepts. At the same time, a new simulation capability, based upon finite-difference "cut-cell" boundaries, has emerged and is transforming the existing modeling and design capability with unparalleled realism, greater flexibility, and improved accuracy. This same new technology can also be brought to bear on a difficult-to-study aspect of the energy recovery linac (ERL), namely the accurate modeling of the exit beam and the design of the beam dump for optimum energy efficiency. We have developed a new capability for design calculations and modeling of a broad class of devices which convert bunched-beam kinetic energy to RF energy, including RF sources such as klystrons, gyro-klystrons, IOTs, TWTs, and other devices in which space-charge effects are important. Recent advances in geometry representation now permit very accurate representation of the curved metallic surfaces common to RF sources, resulting in unprecedented simulation accuracy. In the Phase I work, we evaluated and demonstrated the capabilities of the new geometry representation technology as applied to modeling and design of output cavity components of klystrons, IOTs, and energy-recovery SRF cavities. We identified and prioritized which aspects of the design study process to pursue and improve in Phase II. The development and use of the new accurate geometry modeling technology on RF sources for DOE accelerators will help spark a new generation of modeling and design capability, free from many of the constraints and inaccuracies associated with the previous generation of "stair-step" geometry modeling tools.
This new capability is ultimately expected to impact all fields with high-power RF sources, including DOE fusion research, communications, radar, and other defense applications.
Frickenhaus, Stephan; Kannan, Srinivasaraghavan; Zacharias, Martin
2009-02-01
A direct conformational clustering and mapping approach for peptide conformations based on backbone dihedral angles has been developed and applied to compare conformational sampling of Met-enkephalin using two molecular dynamics (MD) methods. Efficient clustering in dihedrals has been achieved by evaluating all combinations resulting from independent clustering of each dihedral angle distribution, thus resolving all conformational substates. In contrast, Cartesian clustering was unable to accurately distinguish between all substates. Projection of clusters on dihedral principal component (PCA) subspaces did not result in efficient separation of highly populated clusters. However, representation in a nonlinear metric by Sammon mapping was able to separate well the 48 highest populated clusters in just two dimensions. In addition, this approach also allowed us to visualize the transition frequencies between clusters efficiently. Significantly higher transition frequencies between more distinct conformational substates were found for a recently developed biasing-potential replica-exchange MD simulation method, allowing faster sampling of possible substates compared to conventional MD simulations. Although the number of theoretically possible clusters grows exponentially with peptide length, in practice, the number of clusters is only limited by the sampling size (typically much smaller), and therefore the method is well suited also for large systems. The approach could be useful to rapidly and accurately evaluate conformational sampling during MD simulations, to compare different sampling strategies, and eventually to detect kinetic bottlenecks in folding pathways.
The effect of the wind tunnel wall boundary layer on the acoustic testing of propellers
NASA Technical Reports Server (NTRS)
Eversman, Walter
1989-01-01
An approximation based on the representation of the boundary layer by lamina of uniform flow with suitable interlayer boundary conditions is shown to be accurate, efficient, and compatible with finite element formulations. The approximation has been implemented using existing codes to produce a model for assessing the suitability of the acoustic environment in a wind tunnel for the acoustic testing of propellers. It is found that, with suitable acoustic treatment and with measurements made near the propeller and well removed from the walls, the free field directivity and level can be reproduced with good fidelity.
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
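The LSH-style indexing that underlies this kind of fast read classification reduces each sequence to a MinHash signature over its k-mers, so that signature agreement estimates k-mer (Jaccard) similarity without full alignment. A toy sketch of that idea (not GROOT's actual implementation; the k-mer size, hash count, and sequences below are illustrative):

```python
# Toy MinHash sketch: each sequence becomes a short signature of minimum
# salted-hash values over its k-mers; the fraction of matching minima
# estimates the k-mer Jaccard similarity between sequences.
import hashlib

def kmers(seq, k=7):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash(seq, num_hashes=64, k=7):
    """Signature: for each of num_hashes salted hashes, keep the minimum."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.md5(f"{salt}:{km}".encode()).digest()[:8], "big")
            for km in kmers(seq, k)))
    return sig

def signature_similarity(sig_a, sig_b):
    """Fraction of matching minima: estimates k-mer Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

gene = "ATGGCGTACGTTAGCGGATCCGTAGCTAGCTAACGT"
read = gene[5:30]                                  # read drawn from the gene
other = "TTTTTCCCCCAAAAAGGGGGTTTTTCCCCCAAAAA"      # unrelated sequence
print(signature_similarity(minhash(gene), minhash(read)))
```

Reads whose signatures agree strongly with a reference gene's signature are candidates for the subsequent exact local alignment step; an LSH Forest adds a prefix-tree index over such signatures so candidates are found without comparing against every gene.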
Building energy modeling for green architecture and intelligent dashboard applications
NASA Astrophysics Data System (ADS)
DeBlois, Justin
Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, computational fluid dynamics model, and numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration to the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. 
A stochastic study examined the impact of the representation of unpredictable occupancy patterns on model results. Combined, these studies inform modelers and researchers on frameworks for simulating holistically designed architecture and improving the interaction between models and building occupants, in residential and commercial settings.
Effect of familiarity and viewpoint on face recognition in chimpanzees
Parr, Lisa A; Siebert, Erin; Taubert, Jessica
2012-01-01
Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influence of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, like in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions. PMID:22128558
An efficient basis set representation for calculating electrons in molecules
Jones, Jeremiah R.; Rouet, Francois -Henry; Lawler, Keith V.; ...
2016-04-27
The method of McCurdy, Baertschy, and Rescigno is generalised to obtain a straightforward, surprisingly accurate, and scalable numerical representation for calculating the electronic wave functions of molecules. It uses a basis set of product sinc functions arrayed on a Cartesian grid, and yields 1 kcal/mol precision for valence transition energies with a grid resolution of approximately 0.1 bohr. The Coulomb matrix elements are replaced with matrix elements obtained from the kinetic energy operator. A resolution-of-the-identity approximation renders the primitive one- and two-electron matrix elements diagonal; in other words, the Coulomb operator is local with respect to the grid indices. The calculation of contracted two-electron matrix elements among orbitals requires only O(N log N) multiplication operations, not O(N^4), where N is the number of basis functions; N = n^3 on cubic grids. The representation not only is numerically expedient, but also produces energies and properties superior to those calculated variationally. Absolute energies, absorption cross sections, transition energies, and ionisation potentials are reported for 1- (He+, H2+), 2- (H2, He), 10- (CH4), and 56-electron (C8H8) systems.
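The product-sinc basis builds on the one-dimensional sinc grid, for which the kinetic-energy matrix has a well-known closed form (the Colbert-Miller sinc-DVR elements) and the potential is diagonal. A generic sketch at the text's ~0.1 bohr resolution, checked against the harmonic oscillator (this illustrates the sinc basis, not the paper's code):

```python
# 1D sinc-basis (DVR) sketch on a uniform grid: T_ii = pi^2/3 and
# T_ij = 2(-1)^(i-j)/(i-j)^2, both times 1/(2 dx^2) (hbar = m = 1); the
# potential is diagonal on the grid. Diagonalising a harmonic oscillator
# (omega = 1) should recover E_n = n + 1/2.
import numpy as np

n, dx = 121, 0.1                     # grid resolution ~0.1 bohr, as in the text
x = (np.arange(n) - n // 2) * dx

i = np.arange(n)
diff = i[:, None] - i[None, :]
sign = np.where(diff % 2 == 0, 1.0, -1.0)
off_diag = 2.0 * sign / np.where(diff == 0, 1, diff) ** 2
T = np.where(diff == 0, np.pi**2 / 3.0, off_diag) / (2.0 * dx**2)

H = T + np.diag(0.5 * x**2)          # harmonic potential, diagonal on the grid
energies = np.linalg.eigvalsh(H)
print(np.round(energies[:3], 4))     # approximately [0.5, 1.5, 2.5]
```

The paper's 3D construction takes products of such 1D sinc functions, with the resolution-of-the-identity approximation keeping the two-electron terms grid-diagonal.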
Unfitted Two-Phase Flow Simulations in Pore-Geometries with Accurate
NASA Astrophysics Data System (ADS)
Heimann, Felix; Engwer, Christian; Ippisch, Olaf; Bastian, Peter
2013-04-01
The development of better macro scale models for multi-phase flow in porous media is still impeded by the lack of suitable methods for the simulation of such flow regimes on the pore scale. The highly complicated geometry of natural porous media imposes requirements with regard to stability and computational efficiency which current numerical methods fail to meet. Therefore, current simulation environments are still unable to provide a thorough understanding of porous media in multi-phase regimes and still fail to reproduce well-known effects like hysteresis or the more peculiar dynamics of the capillary fringe with satisfying accuracy. Although flow simulations in pore geometries were initially the domain of Lattice-Boltzmann and other particle methods, the development of Galerkin methods for such applications is important as they complement the range of feasible flow and parameter regimes. In the recent past, it has been shown that unfitted Galerkin methods can be applied efficiently to topologically demanding geometries. However, in the context of two-phase flows, the interface of the two immiscible fluids effectively separates the domain into two sub-domains. The exact representation of such setups with multiple independent and time-dependent geometries exceeds the functionality of common unfitted methods. We present a new approach to pore scale simulations with an unfitted discontinuous Galerkin (UDG) method. Utilizing a recursive sub-triangulation algorithm, we extend the UDG method to setups with multiple independent geometries. This approach allows an accurate representation of the moving contact line and the interface conditions, i.e. the pressure jump across the interface. Example simulations in two and three dimensions illustrate and verify the stability and accuracy of this approach.
48 CFR 252.204-7007 - Alternate A, Annual Representations and Certifications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Online Representations and Certifications Application (ORCA) Web site at https://orca.bpn.gov/. After... months, are current, accurate, complete, and applicable to this solicitation (including the business size...
13 CFR 121.411 - What are the size procedures for SBA's Section 8(d) Subcontracting Program?
Code of Federal Regulations, 2014 CFR
2014-01-01
... or sanctioned by SBA) as an accurate representation of a concern's size and ownership characteristics... submission of the offer that the size or socioeconomic representations and certifications made in SAM (or any... determination may include the firm's internal management procedures governing size representation or...
ERIC Educational Resources Information Center
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy relies heavily on visual representations, individuals are especially likely to assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), are relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
Hantush Well Function revisited
NASA Astrophysics Data System (ADS)
Veling, E. J. M.; Maas, C.
2010-11-01
In this paper, we comment on some recent numerical and analytical work to evaluate the Hantush Well Function. We correct an expression found in a Comment by Nadarajah [Nadarajah, S., 2007. A comment on numerical evaluation of Theis and Hantush-Jacob well functions. Journal of Hydrology 338, 152-153] to a paper by Prodanoff et al. [Prodanoff, J.A., Mansur, W.J., Mascarenhas, F.C.B., 2006. Numerical evaluation of Theis and Hantush-Jacob well functions. Journal of Hydrology 318, 173-183]. We subsequently derive another analytic representation based on a generalized hypergeometric function in two variables, and from the hydrological literature we cite an analytic representation by Hunt [Hunt, B., 1977. Calculation of the leaky aquifer function. Journal of Hydrology 33, 179-183]. We have implemented both representations and compared the results. Using a convergence accelerator, Hunt's representation of the Hantush Well Function is efficient and accurate. While checking our implementations, we found that Bear's table of the Hantush Well Function [Bear, J., 1979. Hydraulics of Groundwater. McGraw-Hill, New York, Table 8-6] contains a number of typographical errors that are not present in the original table published by Hantush [Hantush, M.S., 1956. Analysis of data from pumping tests in leaky aquifers. Transactions, American Geophysical Union 37, 702-714]. Finally, we offer a very fast approximation with a maximum relative error of 0.0033 for the parameter range in the table given by Bear.
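The well function discussed above has the standard integral form W(u, β) = ∫_u^∞ exp(−y − β²/(4y))/y dy. As a rough illustration only (not the authors' accelerated Hunt representation or their fast approximation), it can be evaluated by direct quadrature:

```python
import math

def hantush_w(u, beta, t_max=14.0, n=4000):
    """Hantush leaky well function
    W(u, beta) = integral_u^inf exp(-y - beta**2 / (4*y)) / y dy,
    evaluated by trapezoid quadrature after the substitution
    y = u * exp(t), which turns dy/y into dt."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        y = u * math.exp(i * h)
        f = math.exp(-y - beta * beta / (4.0 * y))
        total += (0.5 if i in (0, n) else 1.0) * f  # trapezoid weights
    return total * h

# For beta = 0, W(u, 0) reduces to the Theis well function E1(u).
```

The sanity check at β = 0 against tabulated values of the exponential integral E1 is a convenient way to validate any implementation before exercising the leaky (β > 0) branch.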
NASA Astrophysics Data System (ADS)
Javed, U.; Abdelkefi, A.
2017-07-01
One of the challenging tasks in the analytical modeling of galloping systems is the representation of the galloping force. In this study, the impacts of using different aerodynamic load representations on the dynamics of galloping oscillations are investigated. A distributed-parameter model is considered to determine the response of a galloping energy harvester subjected to a uniform wind speed. For the same experimental data and conditions, various polynomial expressions for the galloping force are proposed in order to determine the possible differences in the variations of the harvester's outputs as well as the type of instability. For the same experimental data of the galloping force, it is demonstrated that the choice of the coefficients of the polynomial approximation may result in a change in the type of bifurcation and in the tip displacement and harvested power amplitudes. A parametric study is then performed to investigate the effects of the electrical load resistance on the harvester's performance when considering different possible representations of the aerodynamic force. It is shown that for low and high values of the electrical resistance, there is a wider range of wind speeds over which the harvester's response is insensitive to the choice of force representation. The performed analysis shows the importance of accurately representing the galloping force in order to efficiently design piezoelectric energy harvesters.
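Quasi-steady galloping analyses typically expand the transverse force coefficient as an odd polynomial in the effective angle of attack, and in such models the sign of the cubic coefficient usually decides whether the bifurcation is supercritical or subcritical, which is why different fits to the same data can change the predicted instability type. A minimal least-squares sketch of such a fit, with purely synthetic data and coefficients (not the paper's measurements):

```python
def fit_odd_cubic(xs, ys):
    """Least-squares fit of y = a1*x + a3*x**3 (odd polynomial, as used
    for quasi-steady galloping force coefficients), solving the 2x2
    normal equations in closed form."""
    s2 = sum(x ** 2 for x in xs)
    s4 = sum(x ** 4 for x in xs)
    s6 = sum(x ** 6 for x in xs)
    b1 = sum(x * y for x, y in zip(xs, ys))
    b3 = sum(x ** 3 * y for x, y in zip(xs, ys))
    det = s2 * s6 - s4 * s4
    a1 = (b1 * s6 - b3 * s4) / det
    a3 = (s2 * b3 - s4 * b1) / det
    return a1, a3
```

Two fits with similar residuals but opposite signs of `a3` would predict qualitatively different post-critical behavior, which is the sensitivity the abstract highlights.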
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
An Improved Representation of Regional Boundaries on Parcellated Morphological Surfaces
Hao, Xuejun; Xu, Dongrong; Bansal, Ravi; Liu, Jun; Peterson, Bradley S.
2010-01-01
Establishing the correspondences of brain anatomy with function is important for understanding neuroimaging data. Regional delineations on morphological surfaces define anatomical landmarks and help to visualize and interpret both functional data and morphological measures mapped onto the cortical surface. We present an efficient algorithm that accurately delineates the morphological surface of the cerebral cortex in real time during generation of the surface using information from parcellated 3D data. With this accurate region delineation, we then develop methods for boundary-preserved simplification and smoothing, as well as procedures for the automated correction of small, misclassified regions to improve the quality of the delineated surface. We demonstrate that our delineation algorithm, together with a new method for double-snapshot visualization of cortical regions, can be used to establish a clear correspondence between brain anatomy and mapped quantities, such as morphological measures, across groups of subjects. PMID:21144708
Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-BandOpenMote Hardware.
Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen
2018-02-02
The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
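The per-state accounting described above can be sketched as a sum of duration × current × voltage over the CPU and radio states of one timeslot. The state names, durations, and current draws below are hypothetical placeholders, not OpenMote/OpenWSN measurements:

```python
def slot_energy_uj(state_profile, voltage_v=3.0):
    """Energy of one TSCH timeslot as the sum over device states of
    duration * current * supply voltage. With durations in microseconds
    and currents in mA, duration*current is charge in nanocoulombs, so
    the sum times voltage is nanojoules; /1000 converts to microjoules."""
    nanojoules = sum(dur_us * cur_ma * voltage_v
                     for dur_us, cur_ma in state_profile.values())
    return nanojoules / 1000.0

# Hypothetical per-state numbers for a transmit slot (illustrative only):
# state -> (duration_us, current_mA)
tx_slot = {
    "cpu_active":   (2000.0, 4.0),
    "radio_tx":     (4256.0, 24.0),
    "radio_rx_ack": (1000.0, 20.0),
    "sleep":        (7744.0, 0.002),
}
```

Summing such per-slot energies over a schedule gives the kind of end-to-end consumption estimate the model is validated against.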
Takashima, Atsuko; Hulzink, Iris; Wagensveld, Barbara; Verhoeven, Ludo
2016-08-01
Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently, suggesting that these words are read more holistically, at the word level, than infrequent and unfamiliar words. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four-day delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to the emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency better than poor learners one month later.
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed-graph representation of the failure-effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error-prone, owing to the intensive, customized effort needed to verify each individual component model, and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
NASA Astrophysics Data System (ADS)
Raczka, B. M.; Bowling, D. R.; Lin, J. C.; Lee, J. E.; Yang, X.; Duarte, H.; Zuromski, L.
2017-12-01
Forests of the Western United States are prone to drought, temperature extremes, forest fires and insect infestation. These disturbances render carbon stocks and land-atmosphere carbon exchanges highly variable and vulnerable to change. Regional estimates of carbon exchange from terrestrial ecosystem models are challenged, in part, by a lack of net ecosystem exchange observations (e.g. flux towers) due to the complex mountainous terrain. Alternatively, carbon estimates based on light-use-efficiency models that depend upon remotely sensed greenness indices are challenged by a weak relationship with GPP during the winter season. Recent advances in the retrieval of remotely sensed solar-induced fluorescence (SIF) have demonstrated a strong seasonal relationship between GPP and SIF for deciduous and grass species and, to a lesser extent, conifer species. This provides an important opportunity to use remotely sensed SIF to calibrate terrestrial ecosystem models, providing a more accurate regional representation of biomass and carbon exchange across mountainous terrain. Here we incorporate both leaf-level fluorescence and leaf-to-canopy radiative transfer, as represented by the SCOPE model, into CLM 4.5 (CLM-SIF). We simulate canopy-level fluorescence at a sub-alpine forest site (Niwot Ridge, Colorado) and test whether these simulations reproduce remotely sensed SIF from a satellite (GOME-2). We found that the average peak SIF during the growing season (2007-2013) was similar between the model and satellite observations (within 15%); however, simulated SIF during the winter season was significantly greater than the satellite observations (5x higher). This implies that the fluorescence yield is overestimated by the model during the winter season. It is important that the modeled representation of seasonal fluorescence yield be improved to provide an accurate seasonal representation of SIF across the Western United States.
Efficient summary statistical representation when change localization fails.
Haberman, Jason; Whitney, David
2011-10-01
People are sensitive to the summary statistics of the visual world (e.g., average orientation/speed/facial expression). We readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that information regarding the summary statistics of a scene is available despite limited conscious access. In a novel experiment, we found that while observers can suffer from change blindness (i.e., not localize where change occurred between two views of the same scene), observers could nevertheless accurately report changes in the summary statistics (or "gist") about the very same scene. In the experiment, observers saw two successively presented sets of 16 faces that varied in expression. Four of the faces in the first set changed from one emotional extreme (e.g., happy) to another (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness). However, when asked about the ensemble (which set was happier, on average), observer performance remained high. Observers were sensitive to the average expression even when they failed to localize any specific object change. That is, even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics in an efficient manner, allowing it to operate despite minimal conscious access to the information presented.
Travnik, Jaden B; Pilarski, Patrick M
2017-07-01
Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
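Tile coding, named above as the dominant linear representation, maps a continuous input to a sparse binary feature vector using several offset tilings; nearby inputs share most active tiles while distant inputs share none. A minimal 1-D sketch (assuming inputs scaled to [0, 1]; not the paper's implementation):

```python
def tile_code(x, n_tilings=8, tiles_per_tiling=10):
    """1-D tile coding: map x in [0, 1] to one active tile index per
    tiling. Tilings are offset by 1/(n_tilings * tiles_per_tiling), so
    nearby inputs activate mostly the same tiles while distant inputs
    share none -- giving sparse, generalizing binary features."""
    active = []
    for t in range(n_tilings):
        offset = t / (n_tilings * tiles_per_tiling)
        idx = min(int((x + offset) * tiles_per_tiling), tiles_per_tiling)
        # each tiling owns its own block of the feature index space
        active.append(t * (tiles_per_tiling + 1) + idx)
    return active
```

The limitation the abstract points to follows directly from this construction: a joint tiling over d input dimensions needs on the order of (tiles per dimension)^d tiles, which is what motivates Kanerva-style alternatives in high dimensions.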
A new algorithm for construction of coarse-grained sites of large biomolecules.
Li, Min; Zhang, John Z H; Xia, Fei
2016-04-05
The development of coarse-grained (CG) models for large biomolecules remains a challenge in multiscale simulations, including a rigorous definition of CG representations for them. In this work, we propose a new stepwise optimization with boundary constraints (SOBC) algorithm to construct the CG sites of large biomolecules, based on the scheme of essential-dynamics coarse-graining. By means of SOBC, we can rigorously derive the CG representations of biomolecules at lower computational cost. The SOBC is particularly efficient for the CG definition of large systems with thousands of residues. The resulting CG sites can be parameterized as a CG model using the normal-mode-analysis-based fluctuation-matching method. Through normal mode analysis, the obtained modes of the CG model can accurately reflect the functionally related slow motions of biomolecules. The SOBC algorithm can be used for the construction of CG sites of large biomolecules such as F-actin and for the study of mechanical properties of biomaterials.
NASA Astrophysics Data System (ADS)
Shakiba, Maryam; Ozer, Hasan; Ziyadi, Mojtaba; Al-Qadi, Imad L.
2016-11-01
The structure-induced rolling resistance of pavements, and its impact on vehicle fuel consumption, is investigated in this study. The structural response of pavement causes additional rolling resistance and fuel consumption of vehicles through deformation of pavement and various dissipation mechanisms associated with inelastic material properties and damping. Accurate and computationally efficient models are required to capture these mechanisms and obtain realistic estimates of changes in vehicle fuel consumption. Two mechanistic-based approaches are currently used to calculate vehicle fuel consumption as related to structural rolling resistance: dissipation-induced and deflection-induced methods. The deflection-induced approach is adopted in this study, and realistic representation of pavement-vehicle interactions (PVIs) is incorporated. In addition to considering viscoelastic behavior of asphalt concrete layers, the realistic representation of PVIs in this study includes non-uniform three-dimensional tire contact stresses and dynamic analysis in pavement simulations. The effects of analysis type, tire contact stresses, pavement viscoelastic properties, pavement damping coefficients, vehicle speed, and pavement temperature are then investigated.
Lanczos algorithm with matrix product states for dynamical correlation functions
NASA Astrophysics Data System (ADS)
Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.
2012-05-01
The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
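The Lanczos recurrence underlying such approaches builds an orthonormal Krylov basis in which a symmetric operator becomes tridiagonal; the MPS machinery and the reorthogonalization discussed above address exactly the loss of orthogonality this plain version suffers in finite precision. A minimal dense-vector sketch (no MPS, no reorthogonalization):

```python
def lanczos(matvec, v0, k):
    """Plain Lanczos iteration: build an orthonormal Krylov basis qs and
    coefficients (alphas, betas) such that the symmetric operator is
    tridiagonal in that basis (alphas on the diagonal, betas off it)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    nrm = dot(v0, v0) ** 0.5
    q = [x / nrm for x in v0]
    q_prev = [0.0] * len(v0)
    qs, alphas, betas, beta = [q], [], [], 0.0
    for _ in range(k):
        w = matvec(q)
        alpha = dot(w, q)
        # three-term recurrence: subtract projections on q and q_prev
        w = [wi - alpha * qi - beta * pi
             for wi, qi, pi in zip(w, q, q_prev)]
        alphas.append(alpha)
        beta = dot(w, w) ** 0.5
        if beta < 1e-12:  # invariant subspace found
            break
        q_prev, q = q, [wi / beta for wi in w]
        betas.append(beta)
        qs.append(q)
    return qs, alphas, betas
```

The "ghost" eigenvalues mentioned in the abstract arise when, in floating point, the computed qs drift away from orthogonality; the ex post reorthogonalization of the MPS basis is one cure.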
Predicting successful tactile mapping of virtual objects.
Brayda, Luca; Campus, Claudio; Gori, Monica
2013-01-01
Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.
When Does Changing Representation Improve Problem-Solving Performance?
NASA Technical Reports Server (NTRS)
Holte, Robert; Zimmer, Robert; MacDonald, Alan
1992-01-01
The aim of changing representation is the improvement of problem-solving efficiency. For the most widely studied family of methods for change of representation, it is shown that the value of a single parameter, called the expansion factor, is critical in determining (1) whether the change of representation will improve or degrade problem-solving efficiency and (2) whether the solutions produced using the change of representation will or will not be exponentially longer than the shortest solution. A method of computing the expansion factor for a given change of representation is sketched in general and described in detail for homomorphic changes of representation. The results are illustrated with homomorphic decompositions of the Towers of Hanoi problem.
Haverd, Vanessa; Cuntz, Matthias; Nieradzik, Lars P.; ...
2016-09-07
CABLE is a global land surface model, which has been used extensively in offline and coupled simulations. While CABLE performs well in comparison with other land surface models, results are impacted by decoupling of transpiration and photosynthesis fluxes under drying soil conditions, often leading to implausibly high water use efficiencies. Here, we present a solution to this problem, ensuring that modelled transpiration is always consistent with modelled photosynthesis, while introducing a parsimonious single-parameter drought response function which is coupled to root water uptake. We further improve CABLE's simulation of coupled soil–canopy processes by introducing an alternative hydrology model with a physically accurate representation of coupled energy and water fluxes at the soil–air interface, including a more realistic formulation of transfer under atmospherically stable conditions within the canopy and in the presence of leaf litter. The effects of these model developments are assessed using data from 18 stations from the global eddy covariance FLUXNET database, selected to span a large climatic range. Marked improvements are demonstrated, with root mean squared errors for monthly latent heat fluxes and water use efficiencies being reduced by 40 %. Results highlight the important roles of deep soil moisture in mediating drought response and litter in dampening soil evaporation.
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR) imaging, hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high-resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in out-of-plane views for radiologic assessment and post-acquisition processing.
Investigation of tDCS volume conduction effects in a highly realistic head model
NASA Astrophysics Data System (ADS)
Wagner, S.; Rampersad, S. M.; Aydin, Ü.; Vorwerk, J.; Oostendorp, T. F.; Neuling, T.; Herrmann, C. S.; Stegeman, D. F.; Wolters, C. H.
2014-02-01
Objective. We investigate volume conduction effects in transcranial direct current stimulation (tDCS) and present a guideline for efficient and yet accurate volume conductor modeling in tDCS using our newly-developed finite element (FE) approach. Approach. We developed a new, accurate and fast isoparametric FE approach for high-resolution geometry-adapted hexahedral meshes and tissue anisotropy. To attain a deeper insight into tDCS, we performed computer simulations, starting with a homogenized three-compartment head model and extending this step by step to a six-compartment anisotropic model. Main results. We are able to demonstrate important tDCS effects. First, we find channeling effects of the skin, the skull spongiosa and the cerebrospinal fluid compartments. Second, current vectors tend to be oriented towards the closest higher conducting region. Third, anisotropic white matter (WM) conductivity causes current flow in directions more parallel to the WM fiber tracts. Fourth, the highest cortical current magnitudes are not only found close to the stimulation sites. Fifth, the median brain current density decreases with increasing distance from the electrodes. Significance. Our results allow us to formulate a guideline for volume conductor modeling in tDCS. We recommend to accurately model the major tissues between the stimulating electrodes and the target areas, while for efficient yet accurate modeling, an exact representation of other tissues is less important. Because for the low-frequency regime in electrophysiology the quasi-static approach is justified, our results should also be valid for at least low-frequency (e.g., below 100 Hz) transcranial alternating current stimulation.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation-evolutionary strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. As a case study, a McPherson strut suspension system is modelled as a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated and experimental models. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
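The rank-based scalarization mentioned above can be illustrated with a generic "sum of ranks" pseudo-objective. This is a minimal sketch of the idea only; the paper's dynamic variant and its 30 objectives are not reproduced here.

```python
# Generic rank-sum scalarization: collapse several objectives (lower is
# better) into a single score by summing each candidate's per-objective rank.
def rank_sum_scores(population_objs):
    """population_objs[i][m] = objective m of candidate i (lower is better)."""
    n, m = len(population_objs), len(population_objs[0])
    scores = [0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: population_objs[i][obj])
        for rank, i in enumerate(order):
            scores[i] += rank
    return scores

objs = [
    [1.0, 5.0],   # best on objective 0, worst on objective 1 -> 0 + 2 = 2
    [2.0, 1.0],   # ranks 1 + 0 = 1 -> best overall
    [3.0, 2.0],   # ranks 2 + 1 = 3
]
scores = rank_sum_scores(objs)
assert scores == [2, 1, 3]
```

Because only orderings enter the score, the objectives need not share units, which is what makes this kind of aggregation convenient for mixed kinematic and compliance targets.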
Density-functional theory simulation of large quantum dots
NASA Astrophysics Data System (ADS)
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
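The Fourier convolution idea for the Hartree potential can be sketched generically: on a periodic grid, V(r) = Σ n(r') g(r − r') is a circular convolution, so it costs O(N log N) with FFTs instead of O(N²) by direct summation. The grid size and softened kernel below are illustrative, not the paper's.

```python
import numpy as np

# Toy 2-D density and a softened, grid-periodic 1/r interaction kernel.
n_side = 16
rng = np.random.default_rng(0)
density = rng.random((n_side, n_side))

idx = np.arange(n_side)
x = np.minimum(idx, n_side - idx)               # periodic distance per axis
dx, dy = np.meshgrid(x, x, indexing="ij")
kernel = 1.0 / np.sqrt(dx**2 + dy**2 + 0.25)    # softening avoids the r=0 pole

# FFT route: transform, multiply, invert.
v_fft = np.real(np.fft.ifft2(np.fft.fft2(density) * np.fft.fft2(kernel)))

# Direct circular convolution for reference (O(N^2); small grids only).
v_direct = np.zeros_like(density)
for i in range(n_side):
    for j in range(n_side):
        v_direct[i, j] = np.sum(density * np.roll(np.roll(kernel, i, 0), j, 1))

assert np.allclose(v_fft, v_direct)
```

The kernel is symmetric under periodic negation, so the rolled correlation above equals the convolution; for nonperiodic boundary conditions one would zero-pad instead.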
Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product
NASA Astrophysics Data System (ADS)
Weyrauch, Michael; Scholz, Daniel
2009-09-01
The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
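The low-order BCH terms can be checked numerically in a commutator representation. This is an independent illustrative check with small random matrices, not the paper's Mathematica code.

```python
import numpy as np

def expm_taylor(a, terms=30):
    """Matrix exponential by Taylor series; adequate for small-norm matrices."""
    out, term = np.eye(a.shape[0]), np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

def comm(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(1)
X = 0.02 * rng.standard_normal((3, 3))
Y = 0.02 * rng.standard_normal((3, 3))

# BCH through third order:
# Z = X + Y + 1/2 [X,Y] + 1/12 ([X,[X,Y]] + [Y,[Y,X]]) + (4th-order terms)
Z = X + Y + 0.5 * comm(X, Y) + (comm(X, comm(X, Y)) + comm(Y, comm(Y, X))) / 12.0

lhs = expm_taylor(X) @ expm_taylor(Y)
rhs = expm_taylor(Z)
assert np.max(np.abs(lhs - rhs)) < 1e-4   # residual is 4th order in the norms
```

For matrices of norm ~0.05 the omitted fourth-order commutators are of order 1e-5, so the truncated series reproduces exp(X)exp(Y) to well within the asserted tolerance.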
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Mathias, Gerald; Tavan, Paul
2014-03-01
We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. 
A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.
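The Born solution that the method generalizes admits a quick back-of-envelope check: ΔG = −(q²/(8πε₀a))(1 − 1/ε_r) for a single ion of radius a in a continuum of relative permittivity ε_r. The radius and permittivity below are illustrative values, not parameters from the paper.

```python
import math

E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F/m
N_AVOGADRO = 6.02214076e23

def born_energy_kj_per_mol(q, radius_m, eps_r):
    """Born solvation free energy, converted to kJ/mol."""
    dg = -(q**2 / (8.0 * math.pi * EPS0 * radius_m)) * (1.0 - 1.0 / eps_r)
    return dg * N_AVOGADRO / 1000.0

# Monovalent ion, illustrative 2 Angstrom radius, water-like eps_r = 80.
dg = born_energy_kj_per_mol(E_CHARGE, 2.0e-10, 80.0)
assert -400.0 < dg < -300.0     # a few hundred kJ/mol, favorable
```

The strong dependence on the radius (ΔG ∝ 1/a) is one reason the atomic volumes, here the Gaussian widths, are such sensitive parameters of continuum models.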
ERIC Educational Resources Information Center
Al Ghanem, Reem
2017-01-01
Accurate and rapid word recognition requires highly-specified phonological, orthographic, and semantic word-specific representations. It has been established that children acquire these representations through phonological decoding in a process known as orthographic learning. Studies examining orthographic learning and its predictors have thus far…
Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth
Just, Marcel Adam; Pan, Lisa; Cherkassky, Vladimir L.; McMakin, Dana; Cha, Christine; Nock, Matthew K.; Brent, David
2017-01-01
The clinical assessment of suicidal risk would be significantly complemented by a biologically-based measure that assesses alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. This study used machine-learning algorithms (Gaussian Naïve Bayes) to identify such individuals (17 suicidal ideators vs 17 controls) with high (91%) accuracy, based on their altered fMRI neural signatures of death and life-related concepts. The most discriminating concepts were death, cruelty, trouble, carefree, good, and praise. A similar classification accurately (94%) discriminated 9 suicidal ideators who had made a suicide attempt from 8 who had not. Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification. The study establishes a biological, neurocognitive basis for altered concept representations in participants with suicidal ideation, which enables highly accurate group membership classification. PMID:29367952
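The classifier family used here, Gaussian Naive Bayes, fits a per-class Gaussian to each feature independently and scores classes by log-posterior. The sketch below uses synthetic two-feature data; the labels merely echo the study's groups and are not its fMRI features.

```python
import math

def fit_gnb(samples, labels):
    """Per-class feature means/variances plus class priors."""
    model = {}
    for c in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == c]
        dims = list(zip(*rows))
        means = [sum(d) / len(d) for d in dims]
        varis = [sum((x - m) ** 2 for x in d) / len(d) + 1e-9
                 for d, m in zip(dims, means)]
        model[c] = (means, varis, len(rows) / len(samples))
    return model

def predict_gnb(model, x):
    def log_post(c):
        means, varis, prior = model[c]
        ll = math.log(prior)
        for xi, m, v in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return ll
    return max(model, key=log_post)

train = [(0.1, 1.0), (0.2, 1.1), (0.9, 0.1), (1.1, 0.2)]
y = ["control", "control", "ideator", "ideator"]
m = fit_gnb(train, y)
assert predict_gnb(m, (0.15, 1.05)) == "control"
assert predict_gnb(m, (1.0, 0.15)) == "ideator"
```

The independence assumption keeps the parameter count tiny, which is why this family remains usable with the very small samples (n = 17 per group) typical of neuroimaging studies.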
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
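The accuracy gain from a Padé-type weighted-average stencil can be seen on the simplest case, the 1-D Helmholtz equation u'' + k²u = 0 with Dirichlet data chosen so that u(x) = sin(kx) is exact. The grid and wavenumber below are illustrative, not taken from the paper.

```python
import numpy as np

k, n = 10.0, 60
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
exact = np.sin(k * x)             # exact solution for u(0)=0, u(1)=sin(k)

def solve(weights):
    """Interior rows: w[0]*u[j-1] + w[1]*u[j] + w[2]*u[j+1] = 0."""
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for j in range(n - 1):
        if j > 0:
            A[j, j - 1] = weights[0]
        A[j, j] = weights[1]
        if j < n - 2:
            A[j, j + 1] = weights[2]
    b[-1] = -weights[2] * exact[-1]   # known boundary value moved to the RHS
    u = np.zeros(n + 1)
    u[-1] = exact[-1]
    u[1:n] = np.linalg.solve(A, b)
    return u

# Standard pointwise second-order stencil.
u2 = solve((1.0 / h**2, -2.0 / h**2 + k**2, 1.0 / h**2))
# Fourth-order compact stencil: k^2 u averaged with weights (1, 10, 1)/12.
u4 = solve((1.0 / h**2 + k**2 / 12.0,
            -2.0 / h**2 + 10.0 * k**2 / 12.0,
            1.0 / h**2 + k**2 / 12.0))

err2 = np.max(np.abs(u2 - exact))
err4 = np.max(np.abs(u4 - exact))
assert err4 < err2 / 10.0         # dispersion error drops by orders of magnitude
```

With only six points per wavelength the pointwise scheme's phase error is already visible, while the weighted-average scheme stays well resolved, which is the dispersion-reduction effect the abstract describes.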
Detailed 3D representations for object recognition and modeling.
Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad
2013-11-01
Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.
Grid cells form a global representation of connected environments.
Carpenter, Francis; Manson, Daniel; Jeffery, Kate; Burgess, Neil; Barry, Caswell
2015-05-04
The firing patterns of grid cells in medial entorhinal cortex (mEC) and associated brain areas form triangular arrays that tessellate the environment [1, 2] and maintain constant spatial offsets to each other between environments [3, 4]. These cells are thought to provide an efficient metric for navigation in large-scale space [5-8]. However, an accurate and universal metric requires grid cell firing patterns to uniformly cover the space to be navigated, in contrast to recent demonstrations that environmental features such as boundaries can distort [9-11] and fragment [12] grid patterns. To establish whether grid firing is determined by local environmental cues, or provides a coherent global representation, we recorded mEC grid cells in rats foraging in an environment containing two perceptually identical compartments connected via a corridor. During initial exposures to the multicompartment environment, grid firing patterns were dominated by local environmental cues, replicating between the two compartments. However, with prolonged experience, grid cell firing patterns formed a single, continuous representation that spanned both compartments. Thus, we provide the first evidence that in a complex environment, grid cell firing can form the coherent global pattern necessary for them to act as a metric capable of supporting large-scale spatial navigation.
Supervoxels for graph cuts-based deformable image registration using guided image filtering
NASA Astrophysics Data System (ADS)
Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.
2017-11-01
We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixel/voxel-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state-of-the-art methods in continuous and discrete image registration, achieving a target registration error of 1.16 mm on average per landmark.
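The key efficiency step is that the graph-cut graph is built on adjacent supervoxels rather than voxels. A minimal illustration (not the authors' pipeline) builds such a region adjacency graph from a labelled 2-D slice:

```python
# One node per supervoxel; edges connect supervoxels that share a face.
def adjacency_from_labels(labels):
    """Return the set of edges between distinct touching supervoxels."""
    rows, cols = len(labels), len(labels[0])
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):        # 4-connectivity
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and labels[r][c] != labels[rr][cc]:
                    edges.add(tuple(sorted((labels[r][c], labels[rr][cc]))))
    return edges

label_map = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]
edges = adjacency_from_labels(label_map)
assert edges == {(0, 1), (0, 2), (1, 3), (2, 3)}
```

Here 16 pixels collapse to a 4-node graph; in 3-D the reduction from millions of voxels to thousands of supervoxels is what makes graph cuts tractable.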
Advances in Distance-Based Hole Cuts on Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Pandya, Shishir A.
2015-01-01
An automatic and efficient method to determine appropriate hole cuts based on distances to the wall and donor stencil maps for overset grids is presented. A new robust procedure is developed to create a closed surface triangulation representation of each geometric component for accurate determination of the minimum hole. Hole boundaries are then displaced away from the tight grid-spacing regions near solid walls to allow grid overlap to occur away from the walls where cell sizes from neighboring grids are more comparable. The placement of hole boundaries is efficiently determined using a mid-distance rule and Cartesian maps of potential valid donor stencils with minimal user input. Application of this procedure typically results in a spatially-variable offset of the hole boundaries from the minimum hole with only a small number of orphan points remaining. Test cases on complex configurations are presented to demonstrate the new scheme.
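The minimum-hole determination relies on a closed boundary representation: a grid point is cut when it lies inside a component. A hedged 2-D analogue (the paper works with closed surface triangulations in 3-D; this is not its code) is the even-odd ray-casting test against a closed polygon:

```python
# Even-odd ray casting: count crossings of a rightward ray with the boundary.
def inside_closed_polygon(pt, poly):
    """True if pt lies inside the closed polygon given as a vertex list."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):                  # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
assert inside_closed_polygon((1.0, 1.0), square)
assert not inside_closed_polygon((3.0, 1.0), square)
```

The test is robust only when the boundary is watertight, which is exactly why the paper invests in a closed surface triangulation per component before cutting holes.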
Inference of alternative splicing from RNA-Seq data with probabilistic splice graphs
LeGault, Laura H.; Dewey, Colin N.
2013-01-01
Motivation: Alternative splicing and other processes that allow for different transcripts to be derived from the same gene are significant forces in the eukaryotic cell. RNA-Seq is a promising technology for analyzing alternative transcripts, as it does not require prior knowledge of transcript structures or genome sequences. However, analysis of RNA-Seq data in the presence of genes with large numbers of alternative transcripts is currently challenging due to efficiency, identifiability and representation issues. Results: We present RNA-Seq models and associated inference algorithms based on the concept of probabilistic splice graphs, which alleviate these issues. We prove that our models are often identifiable and demonstrate that our inference methods for quantification and differential processing detection are efficient and accurate. Availability: Software implementing our methods is available at http://deweylab.biostat.wisc.edu/psginfer. Contact: cdewey@biostat.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23846746
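In a probabilistic splice graph, each transcript is a source-to-sink path in a DAG and its probability is the product of normalized edge probabilities along the path. The tiny graph below is hypothetical, purely to illustrate the representation:

```python
# Enumerate source-to-sink paths of a probabilistic splice graph and their
# probabilities (product of edge probabilities along each path).
def path_probabilities(graph, node="start", prob=1.0, path=()):
    path = path + (node,)
    if node not in graph or not graph[node]:      # sink node
        return {path: prob}
    out = {}
    for nxt, p in graph[node].items():
        out.update(path_probabilities(graph, nxt, prob * p, path))
    return out

# exon1 -> exon2 -> exon3, or exon1 -> exon3 (exon-skipping), probability 0.3.
splice_graph = {
    "start": {"exon1": 1.0},
    "exon1": {"exon2": 0.7, "exon3": 0.3},
    "exon2": {"exon3": 1.0},
}
probs = path_probabilities(splice_graph)
assert len(probs) == 2
assert abs(sum(probs.values()) - 1.0) < 1e-12
```

Because out-edge probabilities are normalized per node, the path probabilities automatically sum to one; the graph encodes exponentially many transcripts with a number of parameters linear in the edges, which is the representation advantage the abstract refers to.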
A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences
Wang, Zhimu; Huang, Yingxiang; Wang, Shuang; Wang, Fei; Jiang, Xiaoqian
2016-01-01
Background Medical concepts are inherently ambiguous and error-prone due to human fallibility, which makes it hard for them to be fully used by classical machine learning methods (eg, for tasks like early stage disease prediction). Objective Our aim was to create a new machine-friendly representation that resembles the semantics of medical concepts. We then developed a sequential predictive model for medical events based on this new representation. Methods We developed novel contextual embedding techniques to combine different medical events (eg, diagnoses, prescriptions, and lab tests). Each medical event is converted into a numerical vector that resembles its “semantics,” via which the similarity between medical events can be easily measured. We developed simple and effective predictive models based on these vectors to predict novel diagnoses. Results We evaluated our sequential prediction model (and standard learning methods) in estimating the risk of potential diseases based on our contextual embedding representation. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.79 on chronic systolic heart failure and an average AUC of 0.67 (over the 80 most common diagnoses) using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Conclusions We propose a general early prognosis predictor for 80 different diagnoses. Our method computes a numeric representation for each medical event to uncover the potential meaning of those events. Our results demonstrate the efficiency of the proposed method, which will benefit patients and physicians by offering more accurate diagnoses. PMID:27888170
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Bo; Kowalski, Karol
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of a single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
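The two-step compression can be sketched on a generic low-rank positive-semidefinite matrix standing in for the integral tensor in matrix form; sizes and thresholds below are illustrative, not the paper's.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Pivoted incomplete Cholesky: A ~ L @ L.T, stopping on the residual diagonal."""
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()
    L = np.zeros((n, 0))
    for _ in range(n):
        if d.max() <= tol:
            break
        i = int(np.argmax(d))                      # largest residual diagonal
        col = (A[:, i] - L @ L[i, :]) / np.sqrt(d[i])
        L = np.column_stack([L, col])
        d = np.maximum(d - col**2, 0.0)
    return L

rng = np.random.default_rng(2)
B = rng.standard_normal((50, 6))
A = B @ B.T                        # rank-6 PSD stand-in for the integral matrix
L = pivoted_cholesky(A)
assert L.shape[1] <= 8             # CD rank tracks the true rank

# Step 2: truncated SVD of the Cholesky factor squeezes the rank further.
U, s, _ = np.linalg.svd(L, full_matrices=False)
keep = s > 1e-8 * s[0]
A_approx = (U[:, keep] * s[keep]**2) @ U[:, keep].T   # L L^T = U S^2 U^T
assert np.allclose(A, A_approx, atol=1e-8)
```

The SVD step matters in practice because the Cholesky vectors are not orthogonal; re-expressing them in a singular basis lets a second threshold discard directions the pivoting kept only marginally.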
Robust Nonrigid Multimodal Image Registration using Local Frequency Maps
Jian, Bing; Vemuri, Baba C.; Marroquin, José L.
2008-01-01
Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance, atlas construction, etc. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps, efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (which is the squared difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture of Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local frequency-based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721
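The Riesz transform itself is a simple FFT-domain multiplier, R_j f = F⁻¹[−i k_j/|k| F f]. The sketch below applies it to an illustrative 2-D plane wave (not the paper's data) and checks the expected quadrature relationship that underlies local frequency estimation.

```python
import numpy as np

# FFT-based 2-D Riesz transform on a plane wave varying along the x-axis.
N = 32
x = np.arange(N)
f = np.cos(2 * np.pi * 3 * x / N)[:, None] * np.ones((1, N))

kx = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K = np.sqrt(KX**2 + KY**2)
K[0, 0] = 1.0                                # avoid 0/0; DC maps to 0 anyway

F = np.fft.fft2(f)
Rx = np.real(np.fft.ifft2(-1j * KX / K * F))

# For cos(w x) the x-component of the Riesz transform is sin(w x): a
# quadrature pair, so atan2(Rx, f) gives a linearly advancing local phase.
expected = np.sin(2 * np.pi * 3 * x / N)[:, None] * np.ones((1, N))
assert np.allclose(Rx, expected, atol=1e-10)
```

Unlike a Gabor filter bank, this needs one FFT pass per component with no tuning of center frequencies, which is the efficiency argument made in contribution (i).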
Code of Federal Regulations, 2012 CFR
2012-04-01
... document is true and accurate and I assume the responsibility for proving such representations. I... representations; The goods comply with all the requirements for preferential tariff treatment specified for those... responsible official of the importer or by the importer's authorized agent having knowledge of the relevant...
40 CFR Appendix A to Part 211 - Compliance Audit Testing Report
Code of Federal Regulations, 2013 CFR
2013-07-01
... accurate representations of this testing. All other information reported here is, to the best of (company name) and (test laboratory name) knowledge, true and accurate. I am aware of the penalties associated...
NASA Astrophysics Data System (ADS)
Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei
2009-10-01
In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
Learning, memory, and the role of neural network architecture.
Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M
2011-06-01
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.
Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay
2013-07-01
In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals and implement this idea on top of the state-of-the-art High Efficiency Video Coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in rate-distortion (R-D) performance at higher bit rates due to the side-information overhead. To overcome this problem, the DCT representation is cascaded at the second stage and applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. Experimental results demonstrate that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Conditions.
Prediction of sound radiated from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1980-01-01
Existing computer codes for calculating the far-field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more computationally efficient by a factor of about three and capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated; these data are required as input for the programs that calculate the sound fields. This new geometry-generating program considerably reduces the time required to generate the input data, which was one of the most time-consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented, and comparisons of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparing the results with simple source solutions.
State Transition Matrix for Perturbed Orbital Motion Using Modified Chebyshev Picard Iteration
NASA Astrophysics Data System (ADS)
Read, Julie L.; Younes, Ahmad Bani; Macomber, Brent; Turner, James; Junkins, John L.
2015-06-01
The Modified Chebyshev Picard Iteration (MCPI) method has recently proven to be highly efficient for a given accuracy compared to several commonly adopted numerical integration methods, as a means to solve for perturbed orbital motion. This method utilizes Picard iteration, which generates a sequence of path approximations, and Chebyshev polynomials, which are orthogonal and enable both efficient and accurate function approximation. The nodes consistent with discrete Chebyshev orthogonality are generated using cosine sampling; this strategy also reduces the Runge effect, and as a consequence of orthogonality, no matrix inversion is required to find the basis function coefficients. The MCPI algorithms considered herein are parallel-structured so that they are immediately well-suited for massively parallel implementation with additional speedup. MCPI has a wide range of applications beyond ephemeris propagation, including the propagation of the State Transition Matrix (STM) for perturbed two-body motion. A solution is achieved for a spherical harmonic series representation of Earth gravity (EGM2008), although the methodology is suitable for application to any gravity model. In this representation, the normalized associated Legendre functions are given and verified numerically. Modifications of the classical algorithm, such as rewriting the STM equations in a second-order cascade formulation, give rise to additional speedup. Timing results for the baseline formulation and this second-order formulation are given.
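A minimal sketch of the Chebyshev-Picard idea (cosine-sampled nodes, no matrix inversion for the coefficients) on a scalar test equation; it leans on NumPy's Chebyshev helpers rather than the MCPI cascade formulation, and the node counts and iteration limits are illustrative assumptions:

```python
import numpy as np

def chebyshev_picard(f, x0, T, n_nodes=32, iters=40):
    """Picard iteration on cosine-sampled (Chebyshev-Lobatto) nodes over [0, T]."""
    tau = -np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1))   # nodes in [-1, 1]
    t = 0.5 * T * (tau + 1.0)                                   # mapped to [0, T]
    x = np.full(n_nodes, float(x0))                             # initial path guess
    for _ in range(iters):
        # Fit f along the current path, integrate the Chebyshev series,
        # and update: x_{k+1}(t) = x0 + integral_0^t f(s, x_k(s)) ds
        c = np.polynomial.chebyshev.chebfit(tau, f(t, x), n_nodes - 1)
        ci = np.polynomial.chebyshev.chebint(c) * (0.5 * T)     # dt/dtau = T/2
        vals = np.polynomial.chebyshev.chebval(tau, ci)
        x = x0 + vals - vals[0]                                 # enforce x(0) = x0
    return t, x

# Test problem: x' = -x, x(0) = 1, whose solution is exp(-t).
t, x = chebyshev_picard(lambda t, x: -x, 1.0, 2.0)
err = np.max(np.abs(x - np.exp(-t)))
```

The update loop touches each node independently, which hints at why the full MCPI algorithm parallelizes so readily.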
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations: first, a copy of a flint artefact is reproduced; the copy is then sliced; a picture is taken of each slice; finally, geometric information is manually determined from the pictures. Such a technique is very time-consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step, obtaining an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted by a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g., shape, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time, and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enables accurate analysis.
Paesani, Francesco
2016-09-20
The central role played by water in fundamental processes relevant to different disciplines, including chemistry, physics, biology, materials science, geology, and climate research, cannot be overemphasized. It is thus not surprising that, since the pioneering work by Stillinger and Rahman, many theoretical and computational studies have attempted to develop a microscopic description of the unique properties of water under different thermodynamic conditions. Consequently, numerous molecular models based on either molecular mechanics or ab initio approaches have been proposed over the years. However, despite continued progress, the correct prediction of the properties of water from small gas-phase clusters to the liquid phase and ice through a single molecular model remains challenging. To a large extent, this is due to the difficulties encountered in the accurate modeling of the underlying hydrogen-bond network, in which both the number and strength of the hydrogen bonds vary continuously as a result of a subtle interplay between energetic, entropic, and nuclear quantum effects. In the past decade, the development of efficient algorithms for correlated electronic structure calculations of small molecular complexes, accompanied by tremendous progress in the analytical representation of multidimensional potential energy surfaces, opened the doors to the design of highly accurate potential energy functions built upon rigorous representations of the many-body expansion (MBE) of the interaction energies. This Account provides a critical overview of the performance of the MB-pol many-body potential energy function through a systematic analysis of energetic, structural, thermodynamic, and dynamical properties as well as of vibrational spectra of water from the gas to the condensed phase.
It is shown that MB-pol achieves unprecedented accuracy across all phases of water through a quantitative description of each individual term of the MBE, with a physically correct representation of both short- and long-range many-body contributions. Comparisons with experimental data probing different regions of the water potential energy surface from clusters to bulk demonstrate that MB-pol represents a major step toward the long-sought-after "universal model" capable of accurately describing the molecular properties of water under different conditions and in different environments. Along this path, future challenges include the extension of the many-body scheme adopted by MB-pol to the description of generic solutes as well as the integration of MB-pol in an efficient theoretical and computational framework to model acid-base reactions in aqueous environments. In this context, given the nontraditional form of the MB-pol energy and force expressions, synergistic efforts by theoretical/computational chemists/physicists and computer scientists will be critical for the development of high-performance software for many-body molecular dynamics simulations.
Sparse representation of electrodermal activity with knowledge-driven dictionaries.
Chaspari, Theodora; Tsiartas, Andreas; Stein, Leah I; Cermak, Sharon A; Narayanan, Shrikanth S
2015-03-01
Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slowly varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries, along with sparse decomposition methods, to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features.
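Greedy sparse decomposition against a dictionary can be sketched with orthogonal matching pursuit; the dictionary below is random Gaussian rather than the knowledge-driven, EDA-specific one the paper builds, and all sizes and the sparse "signal" are invented for the demonstration:

```python
import numpy as np

def omp(D, y, k):
    """Greedy (orthogonal matching pursuit) selection of k atoms from D for y."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        support.append(j)
        # re-fit all selected atoms jointly, then update the residual
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
true = np.zeros(256)
true[[3, 70, 190]] = [1.5, -2.0, 1.0]     # a 3-sparse synthetic signal code
y = D @ true
x = omp(D, y, 3)
err = np.linalg.norm(D @ x - y)
```

A domain-specific dictionary, as in the paper, plays the role of `D` here; the greedy loop is the same.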
Mapped grid methods for long-range molecules and cold collisions
NASA Astrophysics Data System (ADS)
Willner, K.; Dulieu, O.; Masnou-Seeuws, F.
2004-01-01
The paper discusses ways of improving the accuracy of numerical calculations for vibrational levels of diatomic molecules close to the dissociation limit or for ultracold collisions, in the framework of a grid representation. In order to avoid the implementation of very large grids, Kokoouline et al. [J. Chem. Phys. 110, 9865 (1999)] have proposed a mapping procedure through introduction of an adaptive coordinate x subjected to the variation of the local de Broglie wavelength as a function of the internuclear distance R. Some unphysical levels ("ghosts") then appear in the vibrational series computed via a mapped Fourier grid representation. In the present work the choice of the basis set is reexamined, and two alternative expansions are discussed: Sine functions and Hardy functions. It is shown that use of a basis set with fixed nodes at both grid ends is efficient to eliminate "ghost" solutions. It is further shown that the Hamiltonian matrix in the sine basis can be calculated very accurately by using an auxiliary basis of cosine functions, overcoming the problems arising from numerical calculation of the Jacobian J(x) of the R→x coordinate transformation.
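The appeal of a sine basis with fixed nodes at both grid ends can be illustrated with a standard sine-grid calculation of bound levels; the harmonic potential, box, and grid size below are toy choices, not the paper's mapped-coordinate setup:

```python
import numpy as np

def sine_grid_levels(V, a, b, n=200, hbar=1.0, m=1.0):
    """Energy levels on a sine-function grid with fixed nodes at both grid ends."""
    box = b - a
    i = np.arange(1, n + 1)
    x = a + box * i / (n + 1)                       # interior grid points only
    # Unitary (discrete sine) transform between the sine basis and the grid
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(i, i) * np.pi / (n + 1))
    # Kinetic energy is diagonal in the sine basis: (hbar k pi / L)^2 / 2m
    Tk = (hbar * i * np.pi / box) ** 2 / (2.0 * m)
    H = S @ (Tk[:, None] * S) + np.diag(V(x))
    return np.sort(np.linalg.eigvalsh(H))

# Harmonic oscillator (hbar = m = omega = 1): exact levels are v + 1/2.
E = sine_grid_levels(lambda x: 0.5 * x**2, -10.0, 10.0)
```

Because every basis function vanishes at both ends of the grid, no spurious solutions with nonzero boundary values can appear, which is the property the paper exploits to eliminate "ghost" levels.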
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.
Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef
2018-06-01
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
Systems Biology Graphical Notation: Entity Relationship language Level 1 Version 2.
Sorokin, Anatoly; Le Novère, Nicolas; Luna, Augustin; Czauderna, Tobias; Demir, Emek; Haw, Robin; Mi, Huaiyu; Moodie, Stuart; Schreiber, Falk; Villéger, Alice
2015-09-04
The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Entity Relationship language (ER) represents biological entities and their interactions and relationships within a network. SBGN ER focuses on all potential relationships between entities without considering temporal aspects. The nodes (elements) describe biological entities, such as proteins and complexes. The edges (connections) provide descriptions of interactions and relationships (or influences), e.g., complex formation, stimulation and inhibition. Among all three languages of SBGN, ER is the closest to protein interaction networks in biological literature and textbooks, but its well-defined semantics offer superior precision in expressing biological knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William D; Johansen, Hans; Evans, Katherine J
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; Aitharaju, Venkat
Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-produced spatial inconsistency of this morphology, and thus (3) the ability to build reliable, robust, and efficient computational surrogate models to account for this complex nature. Traditional approaches to constructing computational surrogate models have been to average over the fluctuations of the material properties at different length scales. This fails to account for the fine-scale features and fluctuations in morphology and in the material properties of the constituents, as well as fine-scale phenomena such as damage and cracks. In addition, it fails to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required for design under the uncertainty of composites. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales to the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.
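A toy, non-intrusive polynomial chaos sketch may help fix ideas: a scalar response of a single Gaussian input is regressed onto probabilists' Hermite polynomials. The response function, degree, and sample count are invented; real NCF surrogates are multivariate and multiscale:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Non-intrusive PCE: regress samples of a model response g(xi), xi ~ N(0, 1),
# onto probabilists' Hermite polynomials He_k; c[0] then estimates the mean.
rng = np.random.default_rng(2)
xi = rng.standard_normal(20000)
g = np.exp(0.3 * xi)                       # toy "model response"
deg = 6
Phi = np.column_stack(
    [He.hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)]
)
c, *_ = np.linalg.lstsq(Phi, g, rcond=None)
mean_est = c[0]                            # exact mean is exp(0.3**2 / 2)
```

The higher-order coefficients carry the response's variance and shape, which is what makes the expansion usable as a cheap surrogate for uncertainty propagation.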
NASA Astrophysics Data System (ADS)
Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd
2018-04-01
A simple structure with bolted joints consists of the structural components, bolts, and nuts. There are several methods to model structures with bolted joints; however, there is no reliable, efficient, and economic modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates-without-bolts model, the hybrid plates-bolts model, and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints. Results of the FE modal analysis were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation compared the number of nodes, number of elements, elapsed central processing unit (CPU) time, and the total percentage error of each initial FE model against the EMA result. The evaluation showed that the simplified plates-bolts model could most accurately predict the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient, and economic modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. 
It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
ERIC Educational Resources Information Center
Guan, Connie Qun; Liu, Ying; Chan, Derek Ho Leung; Ye, Feifei; Perfetti, Charles A.
2011-01-01
Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form. We report 2 studies that test this hypothesis in…
A Tale of Two Representations: The Misinformation Effect and Children's Developing Theory of Mind.
ERIC Educational Resources Information Center
Templeton, Leslie M.; Wilcox, Sharon A.
2000-01-01
Investigated children's representational ability as a cognitive factor underlying the suggestibility of their eyewitness memory. Found that the eyewitness memory of children lacking multirepresentational abilities or sufficient general memory abilities (most 3- and 4-year-olds) was less accurate than eyewitness memory of those with…
Microwave Workshop for Windows.
ERIC Educational Resources Information Center
White, Colin
1998-01-01
"Microwave Workshop for Windows" consists of three programs that act as teaching aid and provide a circuit design utility within the field of microwave engineering. The first program is a computer representation of a graphical design tool; the second is an accurate visual and analytical representation of a microwave test bench; the third…
A unified data representation theory for network visualization, ordering and coarse-graining
Kovács, István A.; Mizsei, Réka; Csermely, Péter
2015-01-01
Representation of large data sets became a key question of many scientific disciplines in the last decade. Several approaches for network visualization, data ordering and coarse-graining accomplished this goal. However, there was no underlying theoretical framework linking these problems. Here we show an elegant, information theoretic data representation approach as a unified solution of network visualization, data ordering and coarse-graining. The optimal representation is the hardest to distinguish from the original data matrix, measured by the relative entropy. The representation of network nodes as probability distributions provides an efficient visualization method and, in one dimension, an ordering of network nodes and edges. Coarse-grained representations of the input network enable both efficient data compression and hierarchical visualization to achieve high quality representations of larger data sets. Our unified data representation theory will help the analysis of extensive data sets, by revealing the large-scale structure of complex networks in a comprehensible form. PMID:26348923
Age, familiarity, and visual processing schemes.
De Haven, D T; Roberts-Gray, C
1978-10-01
In a partial-report task adults and 5-yr.-old children identified stimuli of two types (common objects and familiar common objects) in two representations (black-and-white line drawings or full color photographs). It was hypothesized that familiar items and photographic representation would enhance the children's accuracy. Although both children and adults were more accurate when the stimuli were from the familiar set, children performed poorly in all stimulus conditions. Results suggest that the age difference in this task reflects the "concrete" nature of the perceptual process in children.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
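The PDF-propagation idea can be illustrated with a minimal bootstrap particle filter. The 1-D random-walk model, noise levels, and particle count below are illustrative assumptions, not the orbit dynamics or the comparison algorithms of the paper:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state (hypothetical
    model; process noise std q, measurement noise std r)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)   # initial ensemble ~ prior
    estimates = []
    for z in observations:
        # propagate each particle through the (random-walk) dynamics
        particles = particles + rng.normal(0.0, q, n_particles)
        # weight by the Gaussian measurement likelihood
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))      # posterior mean
        # multinomial resampling keeps the ensemble from degenerating
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(estimates)
```

Each weighted particle cloud approximates the full PDF, so any statistic (tail mass, higher moments) can be read off the same ensemble; this is the advantage over second-moment filters such as the EKF.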
NASA Astrophysics Data System (ADS)
Alipour, M.; Kibler, K. M.
2017-12-01
Despite advances in flow prediction, managers of ungauged rivers located within broad regions of sparse hydrometeorologic observation still lack prescriptive methods robust to the data challenges of such regions. We propose a multi-objective streamflow prediction framework for regions of minimum observation to select models that balance runoff efficiency with choice of accurate parameter values. We supplement sparse observed data with uncertain or low-resolution information incorporated as `soft' a priori parameter estimates. The performance of the proposed framework is tested against traditional single-objective and constrained single-objective calibrations in two catchments in a remote area of southwestern China. We find that the multi-objective approach performs well with respect to runoff efficiency in both catchments (NSE = 0.74 and 0.72), within the range of efficiencies returned by other models (NSE = 0.67 - 0.78). However, soil moisture capacity estimated by the multi-objective model resonates with a priori estimates (parameter residuals of 61 cm versus 289 and 518 cm for maximum soil moisture capacity in one catchment, and 20 cm versus 246 and 475 cm in the other; parameter residuals of 0.48 versus 0.65 and 0.7 for soil moisture distribution shape factor in one catchment, and 0.91 versus 0.79 and 1.24 in the other). Thus, optimization to a multi-criteria objective function led to very different representations of soil moisture capacity as compared to models selected by single-objective calibration, without compromising runoff efficiency. These different soil moisture representations may translate into considerably different hydrological behaviors. The proposed approach thus offers a preliminary step towards greater process understanding in regions of severe data limitations. 
For instance, the multi-objective framework may be an adept tool to discern between models of similar efficiency to select models that provide the "right answers for the right reasons". Managers may feel more confident to utilize such models to predict flows in fully ungauged areas.
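The Nash-Sutcliffe efficiency (NSE) values quoted above follow the standard definition, sketched below; the full multi-objective function weighing NSE against a priori parameter residuals is specific to the paper and not reproduced here:

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
    to observation variance. 1 is a perfect fit; 0 means the model is no
    better than predicting the observed mean."""
    simulated = np.asarray(simulated, float)
    observed = np.asarray(observed, float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)
```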
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Dictionary Learning Algorithms for Sparse Representation
Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
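The alternation the abstract describes, a sparse coding step followed by a dictionary update, can be sketched as follows. For brevity, simple per-column hard thresholding stands in for FOCUSS, and a MOD-style least-squares update stands in for the paper's dictionary update; both substitutions are illustrative assumptions:

```python
import numpy as np

def learn_dictionary(Y, n_atoms, k=2, n_iter=20, seed=0):
    """Alternate sparse coding (keep the k largest-magnitude coefficients
    per signal, standing in for FOCUSS) with a MOD-style least-squares
    dictionary update D = Y X^+, renormalizing atoms each pass."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # sparse coding: correlate, then threshold to k entries per column
        C = D.T @ Y
        thresh = -np.sort(-np.abs(C), axis=0)[k - 1]
        X = np.where(np.abs(C) >= thresh, C, 0.0)
        # dictionary update: least-squares fit, then unit-norm columns
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```

With n_atoms larger than the signal dimension, this learns an overcomplete dictionary in the sense discussed above, though without the Bayesian CSC priors of the actual algorithms.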
Prostate segmentation by sparse representation based classification
Gao, Yaozong; Liao, Shu; Shen, Dinggang
2012-01-01
Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673
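The residual-based decision at the heart of SRC can be sketched as follows. For brevity, a ridge-regularized code stands in for the elastic-net coding the authors use, and the per-class dictionaries are toy placeholders:

```python
import numpy as np

def src_classify(sample, class_dicts, lam=0.1):
    """Code the sample against each class dictionary (ridge-regularized
    least squares here, where the paper uses the elastic net) and return
    the class with the smallest reconstruction residual."""
    residuals = []
    for D in class_dicts:
        # closed-form ridge solution x = (D^T D + lam I)^-1 D^T y
        x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ sample)
        residuals.append(np.linalg.norm(sample - D @ x))
    return int(np.argmin(residuals))
```

In the pixel-wise setting of the paper, `sample` would be a patch feature vector and the residual comparison yields the prostate/background label.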
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
Striped Data Server for Scalable Parallel Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Jin; Gutsche, Oliver; Mandrichenko, Igor
A columnar data representation is known to be an efficient way for data storage, specifically in cases when the analysis is often done based only on a small fragment of the available data structures. A data representation like Apache Parquet is a step forward from a columnar representation, which splits data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation, which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient data analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We will present an implementation and some performance characteristics of such a data representation mechanism using a distributed no-SQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple, so it allows for a common data model and APIs for a wide range of underlying storage mechanisms such as distributed no-SQL databases and local file systems. Striped storage adopts NumPy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service that hides the server implementation details from the end user, easily exposes data to WAN users, and makes it possible to use well-known, mature data caching solutions to further increase data access efficiency. We are considering the Striped Data Server as the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing.
We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to multiple 100 TB or even PB scale. We will present the striped format, the Striped Data Server architecture, and performance test results.
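A minimal sketch of the striped idea, assuming a plain Python dictionary of NumPy columns (the real service sits behind a web API with no-SQL or file-system backends):

```python
import numpy as np

def make_stripes(column, stripe_size):
    """Split one column into fixed-size stripes (NumPy views, no copying);
    each stripe can then be handed to a separate worker."""
    return [column[i:i + stripe_size]
            for i in range(0, len(column), stripe_size)]

# hypothetical event data: one NumPy array per attribute (columnar layout)
events = {"pt": np.arange(10, dtype=float), "charge": np.ones(10)}
stripes = {name: make_stripes(col, 4) for name, col in events.items()}

# workers would reduce stripes independently; here we just sum them up
total_pt = sum(s.sum() for s in stripes["pt"])
```

The horizontal split is what enables parallel analysis without special hardware: each worker touches only the stripes, and only the columns, it needs.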
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
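The sampling scheme can be sketched with an interpolated quantile approximation; linear interpolation stands in here for the B-spline and rational-spline fits the paper compares:

```python
import numpy as np

def quantile_sampler(data, n_samples, seed=0):
    """Draw samples by evaluating an interpolated empirical quantile
    function (inverse CDF) at uniform variates."""
    xs = np.sort(np.asarray(data, float))
    ps = (np.arange(len(xs)) + 0.5) / len(xs)   # plotting positions
    u = np.random.default_rng(seed).uniform(size=n_samples)
    return np.interp(u, ps, xs)                 # Q(u) by interpolation
```

Because sampling reduces to one interpolation per variate, this is typically much cheaper than inverting an analytic CDF numerically, which is the speed advantage the abstract reports.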
Semantic representation of reported measurements in radiology.
Oberkampf, Heiner; Zillner, Sonja; Overton, James A; Bauer, Bernhard; Cavallaro, Alexander; Uder, Michael; Hammon, Matthias
2016-01-22
In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form, and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time. We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: An oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect. All unclassified findings were evaluated as incorrect. The proposed approach allows the automatic classification of findings with an accuracy of 96.4 % for oncologic reports and 92.9 % for mixed reports. 
The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
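The normal/abnormal classification step can be sketched as a lookup against encoded background knowledge. The thresholds and entity names below are hypothetical placeholders for the ontology content, and plain Python stands in for the SPARQL queries on MCI:

```python
# hypothetical normal-size limits (cm) standing in for ontology knowledge
NORMAL_MAX_CM = {"lymph node": 1.0, "spleen": 12.0}

def classify_finding(entity, size_cm):
    """Classify a measured finding against encoded background knowledge;
    findings with no matching knowledge stay unclassified (and would be
    counted as incorrect in the evaluation above)."""
    limit = NORMAL_MAX_CM.get(entity)
    if limit is None:
        return "unclassified"
    return "abnormal" if size_cm > limit else "normal"
```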
Operations automation using temporal dependency networks
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.
1991-01-01
Precalibration activities for the Deep Space Network are time- and workforce-intensive. Significant gains in availability and efficiency could be realized by intelligently incorporating automation techniques. An approach to automation is presented, based on the use of Temporal Dependency Networks (TDNs). A TDN represents an activity by breaking it down into its component pieces and formalizing the precedence and other constraints associated with lower-level activities. The representations used to implement a TDN are described, along with the underlying system architecture needed to support its use. The commercial applications of this technique are numerous. It has potential for application in any system that requires real-time, system-level control and accurate monitoring of health, status, and configuration in an asynchronous environment.
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro
2018-02-01
The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms Ai of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical-ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for Ai, we also discuss the accuracy in the determination of the phase diagram of this system.
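For reference, the square-well pair potential under discussion can be written as a minimal NumPy sketch (parameter values illustrative):

```python
import numpy as np

def square_well(r, sigma=1.0, lam=1.5, eps=1.0):
    """Square-well pair potential: hard core for r < sigma, a well of
    depth -eps out to lam * sigma, and zero beyond."""
    r = np.asarray(r, dtype=float)
    return np.where(r < sigma, np.inf,
                    np.where(r < lam * sigma, -eps, 0.0))
```

The single parameter λ (the attractive range, 1.1 to 1.8 in the study) controls the width of the well, which is why the perturbation terms Ai can be tabulated as functions of λ.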
A simple filter circuit for denoising biomechanical impact signals.
Subramaniam, Suba R; Georgakis, Apostolos
2009-01-01
We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.
Modification of land-atmosphere interactions by CO2 effects
NASA Astrophysics Data System (ADS)
Lemordant, Leo; Gentine, Pierre
2017-04-01
Plant stomata couple the energy, water and carbon cycles. Increased CO2 modifies the seasonality of the water cycle through stomatal regulation and increased leaf area. As a result, the water saved during the growing season through higher water use efficiency mitigates summer dryness and the impact of potential heat waves. Land-atmosphere interactions and CO2 fertilization together synergistically contribute to increased summer transpiration. This, in turn, alters the surface energy budget and decreases sensible heat flux, mitigating air temperature rise. Accurate representation of the response to higher CO2 levels, and of the coupling between the carbon and water cycles are therefore critical to forecasting seasonal climate, water cycle dynamics and to enhance the accuracy of extreme event prediction under future climate.
Stereotypes and Representations of Aging in the Media
ERIC Educational Resources Information Center
Mason, Susan E.; Darnell, Emily A.; Prifti, Krisiola
2010-01-01
How are older adults presented in print and in the electronic media? Are they underrepresented? Are they accurately portrayed? Based on our examination of several forms of media over a four-month period, we discuss the role of the media in shaping our views on aging. Quantitative and qualitative analyses reveal that media representations often…
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
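The limited-capacity sampling account can be made concrete with a small simulation: average only a few attended items per trial and compare estimate variability for homogeneous versus heterogeneous sets. The capacity and size values below are illustrative assumptions, not the stimuli of the study:

```python
import numpy as np

def judged_mean(sizes, capacity=4, n_trials=1000, seed=0):
    """Simulate mean-size judgments that average only `capacity` randomly
    attended items per trial; return mean and spread of the estimates."""
    rng = np.random.default_rng(seed)
    sizes = np.asarray(sizes, float)
    ests = np.array([rng.choice(sizes, capacity, replace=False).mean()
                     for _ in range(n_trials)])
    return ests.mean(), ests.std()

# homogeneous set: every subsample has the same mean, so no variability;
# heterogeneous set: subsample means scatter around the true mean
_, sd_homog = judged_mean([10.0] * 8)
_, sd_heter = judged_mean([2.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
```

The growing scatter with heterogeneity mirrors the reported accuracy drop, without requiring a parallel ensemble computation over all items.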
Temporal efficiency evaluation and small-worldness characterization in temporal networks
Dai, Zhongxiang; Chen, Yu; Li, Junhua; Fam, Johnson; Bezerianos, Anastasios; Sun, Yu
2016-01-01
Numerous real-world systems can be modeled as networks. To date, most network studies have been conducted assuming stationary network characteristics. Many systems, however, undergo topological changes over time. Temporal networks, which incorporate time into conventional network models, are therefore more accurate representations of such dynamic systems. Here, we introduce a novel generalized analytical framework for temporal networks, which enables 1) robust evaluation of the efficiency of temporal information exchange using two new network metrics and 2) quantitative inspection of the temporal small-worldness. Specifically, we define new robust temporal network efficiency measures by incorporating the time dependency of temporal distance. We propose a temporal regular network model, and based on this plus the redefined temporal efficiency metrics and widely used temporal random network models, we introduce a quantitative approach for identifying temporal small-world architectures (featuring high temporal network efficiency both globally and locally). In addition, within this framework, we can uncover network-specific dynamic structures. Applications to brain networks, international trade networks, and social networks reveal prominent temporal small-world properties with distinct dynamic network structures. We believe that the framework can provide further insight into dynamic changes in the network topology of various real-world systems and significantly promote research on temporal networks. PMID:27682314
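The time dependency of temporal distance can be illustrated with an earliest-arrival computation over network snapshots. This is a simplified stand-in for the paper's redefined efficiency metrics: it allows one hop per snapshot and measures the arrival step:

```python
def temporal_distances(snapshots, source):
    """Earliest-arrival distances on a temporal network given as a list of
    edge snapshots; a node arrives at step t if an edge in snapshot t links
    it to an already-reached node (one hop per snapshot, for simplicity)."""
    reached = {source: 0}
    for t, edges in enumerate(snapshots, start=1):
        newly = {}
        for u, v in edges:
            if u in reached and v not in reached:
                newly[v] = t
            elif v in reached and u not in reached:
                newly[u] = t
        reached.update(newly)   # applied after the snapshot: one hop per step
    return reached
```

A temporal global efficiency then follows by averaging 1/d over node pairs, with unreachable pairs contributing zero, which is what makes the measure robust to disconnection.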
NASA Astrophysics Data System (ADS)
Dong, Weihua; Liao, Hua
2016-06-01
Despite the now-ubiquitous two-dimensional (2D) map, photorealistic three-dimensional (3D) representations of cities (e.g., Google Earth) have gained much attention from scientists and public users as another option. However, there is no consistent evidence on the influence of 3D photorealism on pedestrian navigation. Whether 3D photorealism can communicate cartographic information for navigation with higher effectiveness and efficiency, and lower cognitive workload, than traditional symbolic 2D maps remains unknown. This study explores whether photorealistic 3D representations can facilitate map reading and navigation in digital environments, using a lab-based eye-tracking approach. Here we show the differences between symbolic 2D maps and photorealistic 3D representations based on users' eye-movement and navigation behaviour data. We found that participants using the 3D representation were less effective and less efficient, and required a higher cognitive workload, than those using the 2D map for map reading. However, participants using the 3D representation performed more efficiently in self-localization and orientation at complex decision points. These empirical results can help improve the usability of pedestrian navigation maps in future designs.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
Graphical tensor product reduction scheme for the Lie algebras so(5) = sp(2) , su(3) , and g(2)
NASA Astrophysics Data System (ADS)
Vlasii, N. D.; von Rütte, F.; Wiese, U.-J.
2016-08-01
We develop in detail a graphical tensor product reduction scheme, first described by Antoine and Speiser, for the simple rank 2 Lie algebras so(5) = sp(2), su(3), and g(2). This leads to an efficient practical method to reduce tensor products of irreducible representations into sums of such representations. For this purpose, the 2-dimensional weight diagram of a given representation is placed in a "landscape" of irreducible representations. We provide both the landscapes and the weight diagrams for a large number of representations for the three simple rank 2 Lie algebras. We also apply the algebraic "girdle" method, which is much less efficient for calculations by hand for moderately large representations. Computer code for reducing tensor products, based on the graphical method, has been developed as well and is available from the authors upon request.
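The kind of reduction the graphical scheme automates can be illustrated in the simplest setting, su(2), where the Clebsch-Gordan series has a closed form: the product of spins j1 and j2 decomposes into one irrep for each j from |j1 - j2| to j1 + j2. This is only a one-dimensional analogue of the paper's method; the rank 2 algebras require the 2-D weight-diagram landscapes described above.

```python
def su2_tensor_product(n1, n2):
    """Dimensions of the irreps in the product of two su(2) irreps,
    labelling each irrep by its dimension n = 2j + 1."""
    j1, j2 = (n1 - 1) / 2.0, (n2 - 1) / 2.0
    n_terms = int(j1 + j2 - abs(j1 - j2)) + 1      # j runs |j1-j2| .. j1+j2
    return [int(2 * (abs(j1 - j2) + k) + 1) for k in range(n_terms)]
```

A quick dimension check (the sum of the output dimensions must equal n1 * n2) is the same consistency test one applies when reading a reduction off a weight-diagram landscape.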
Silva, Ana Rita; Pinho, Maria Salomé; Macedo, Luís; Souchay, Céline; Moulin, Christopher
2017-06-01
There is a debate about the ability of patients with Alzheimer's disease to build an up-to-date representation of their memory function, which has been termed mnemonic anosognosia. This form of anosognosia is typified by accurate online evaluations of performance, but dysfunctional or outmoded representations of function more generally. We tested whether people with Alzheimer's disease could adapt or change their representations of memory performance across three different six-week memory training programs using global judgements of learning. We showed that whereas online assessments of performance were accurate, patients continued to make inaccurate overestimations of their memory performance. This was despite the fact that the magnitude of predictions shifted according to the memory training. That is, on some level patients showed an ability to change and retain a representation of performance over time, but it was a dysfunctional one. For the first time in the literature, we were able to support this claim with a correlational analysis based on a large heterogeneous sample of 51 patients with Alzheimer's disease. The results point not to a failure to retain online metamemory information, but rather suggest that this information is never used or incorporated into longer-term representations, supporting but refining the mnemonic anosognosia hypothesis.
Perceived face size in healthy adults.
D'Amour, Sarah; Harris, Laurence R
2017-01-01
Perceptual body size distortions have traditionally been studied using subjective, qualitative measures that assess only one type of body representation-the conscious body image. Previous research on perceived body size has typically focused on measuring distortions of the entire body and has tended to overlook the face. Here, we present a novel psychophysical method for determining perceived body size that taps into implicit body representation. Using a two-alternative forced choice (2AFC), participants were sequentially shown two life-size images of their own face, viewed upright, upside down, or tilted 90°. In one interval, the width or length dimension was varied, while the other interval contained an undistorted image. Participants reported which image most closely matched their own face. An adaptive staircase adjusted the distorted image to hone in on the image that was equally likely to be judged as matching their perceived face as the accurate image. When viewed upright or upside down, face width was overestimated and length underestimated, whereas perception was accurate for the on-side views. These results provide the first psychophysically robust measurements of how accurately healthy participants perceive the size of their face, revealing distortions of the implicit body representation independent of the conscious body image.
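The adaptive staircase described above can be sketched as a simple 1-up/1-down rule on the distortion level, which converges to the level chosen on half the trials, i.e. the point of subjective equality. The deterministic observer model, step size, and tail-averaging estimator below are illustrative assumptions, not the authors' protocol:

```python
def run_staircase(looks_wider, start, step, n_trials=200):
    """1-up/1-down staircase on a width-scale factor `d`: step down when the
    test image is judged wider than the remembered face, up otherwise.
    Returns a PSE estimate from the second half of the trials."""
    d, history = start, []
    for _ in range(n_trials):
        d = d - step if looks_wider(d) else d + step
        history.append(d)
    tail = history[n_trials // 2:]                 # discard the approach phase
    return sum(tail) / len(tail)
```

With a hypothetical observer whose perceived face width is 12% larger than veridical (`lambda d: d > 1.12`), the staircase settles into oscillation around 1.12, mirroring the overestimation of face width the abstract reports for upright viewing.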
DeWall, Ryan J.; Varghese, Tomy
2013-01-01
Thermal ablation procedures are commonly used to treat hepatic cancers and accurate ablation representation on shear wave velocity images is crucial to ensure complete treatment of the malignant target. Electrode vibration elastography is a shear wave imaging technique recently developed to monitor thermal ablation extent during treatment procedures. Previous work has shown good lateral boundary delineation of ablated volumes, but axial delineation was more ambiguous, which may have resulted from the assumption of lateral shear wave propagation. In this work, we assume both lateral and axial wave propagation and compare wave velocity images to those assuming only lateral shear wave propagation in finite element simulations, tissue-mimicking phantoms, and bovine liver tissue. Our results show that assuming bidirectional wave propagation minimizes artifacts above and below ablated volumes, yielding a more accurate representation of the ablated region on shear wave velocity images. Area overestimation was reduced from 13.4% to 3.6% in a stiff-inclusion tissue-mimicking phantom and from 9.1% to 0.8% in a radio-frequency ablation in bovine liver tissue. More accurate ablation representation during ablation procedures increases the likelihood of complete treatment of the malignant target, decreasing tumor recurrence. PMID:22293748
Binary-space-partitioned images for resolving image-based visibility.
Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J
2004-01-01
We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
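The visibility-sorting rule a BSP tree (and hence a BSPI) encodes is compact to state: at every node, first visit the half-space that does not contain the viewpoint, then the node's own geometry, then the near half-space. The sketch below uses vertical split lines and point "scene items" for brevity, an assumption for illustration; real BSPIs partition the reference image into triangles:

```python
class Node:
    """BSP node split by the vertical line x = x_split;
    back child holds x < x_split, front child holds x >= x_split."""
    def __init__(self, x_split, items=(), back=None, front=None):
        self.x_split, self.items = x_split, list(items)
        self.back, self.front = back, front

def back_to_front(node, eye_x, out=None):
    """Collect items in back-to-front order as seen from x = eye_x:
    always recurse into the half-space NOT containing the viewpoint first."""
    if out is None:
        out = []
    if node is None:
        return out
    if eye_x < node.x_split:                       # eye is in the back half
        back_to_front(node.front, eye_x, out)      # far half first
        out.extend(node.items)                     # items on the split line
        back_to_front(node.back, eye_x, out)       # near half last
    else:
        back_to_front(node.back, eye_x, out)
        out.extend(node.items)
        back_to_front(node.front, eye_x, out)
    return out
```

Because the comparison at each node is a single coordinate test, the traversal cost is linear in the number of nodes regardless of viewpoint, which is what makes the encoded ordering cheap enough for real-time warping.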
Bradley, Ian M; Pinto, Ameet J; Guest, Jeremy S
2016-10-01
The use of high-throughput sequencing technologies with the 16S rRNA gene for characterization of bacterial and archaeal communities has become routine. However, the adoption of sequencing methods for eukaryotes has been slow, despite their significance to natural and engineered systems. There are large variations among the target genes used for amplicon sequencing, and for the 18S rRNA gene, there is no consensus on which hypervariable region provides the most suitable representation of diversity. Additionally, it is unclear how much PCR/sequencing bias affects the depiction of community structure using current primers. The present study amplified the V4 and V8-V9 regions from seven microalgal mock communities as well as eukaryotic communities from freshwater, coastal, and wastewater samples to examine the effect of PCR/sequencing bias on community structure and membership. We found that degeneracies on the 3' end of the current V4-specific primers impact read length and mean relative abundance. Furthermore, the PCR/sequencing error is markedly higher for GC-rich members than for communities with balanced GC content. Importantly, the V4 region failed to reliably capture 2 of the 12 mock community members, and the V8-V9 hypervariable region more accurately represents mean relative abundance and alpha and beta diversity. Overall, the V4 and V8-V9 regions show similar community representations over freshwater, coastal, and wastewater environments, but specific samples show markedly different communities. These results indicate that multiple primer sets may be advantageous for gaining a more complete understanding of community structure and highlight the importance of including mock communities composed of species of interest. The quantification of error associated with community representation by amplicon sequencing is a critical challenge that is often ignored. 
When target genes are amplified using currently available primers, differential amplification efficiencies result in inaccurate estimates of community structure. The extent to which amplification bias affects community representation and the accuracy with which different gene targets represent community structure are not known. As a result, there is no consensus on which region provides the most suitable representation of diversity for eukaryotes. This study determined the accuracy with which commonly used 18S rRNA gene primer sets represent community structure and identified particular biases related to PCR amplification and Illumina MiSeq sequencing in order to more accurately study eukaryotic microbial communities. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Robust and efficient anomaly detection using heterogeneous representations
NASA Astrophysics Data System (ADS)
Hu, Xing; Hu, Shiqiang; Xie, Jinhua; Zheng, Shiyou
2015-05-01
Various approaches have been proposed for video anomaly detection. Yet these approaches typically suffer from one or more limitations: they often characterize the pattern using its internal information but ignore its external relationship, which is important for local anomaly detection. Moreover, the high dimensionality and the lack of robustness of the pattern representation may lead to problems, including overfitting, increased computational cost and memory requirements, and a high false alarm rate. We propose a video anomaly detection framework which relies on a heterogeneous representation to account for both the pattern's internal information and its external relationship. The internal information is characterized by slow features learned by slow feature analysis from low-level representations, and the external relationship is characterized by spatial contextual distances. The heterogeneous representation is compact, robust, efficient, and discriminative for anomaly detection. Moreover, both the pattern's internal information and external relationship can be taken into account in the proposed framework. Extensive experiments demonstrate the robustness and efficiency of our approach by comparison with state-of-the-art approaches on widely used benchmark datasets.
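Slow feature analysis, which this abstract uses to learn the pattern's internal information, has a simple linear form: whiten the inputs, then keep the directions in which the temporal derivative has least variance. The toy sketch below (linear SFA on synthetic signals, an illustration rather than the authors' pipeline) recovers a slowly varying source hidden in a fast-varying mixture:

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """X: (T, d) time series; returns the n_features slowest linear features.
    After whitening, slowness = least variance of the temporal derivative,
    i.e. the smallest eigenvalues of the difference covariance."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (evecs / np.sqrt(evals))               # whitened signals
    D = np.cov(np.diff(Z, axis=0), rowvar=False)   # derivative covariance
    _, dvecs = np.linalg.eigh(D)                   # ascending: slowest first
    return Z @ dvecs[:, :n_features]
```

The whitening step matters: without it, directions of large amplitude would dominate the derivative covariance and mask genuinely slow structure.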
ERIC Educational Resources Information Center
German, Tim P.; Hehman, Jessica A.
2006-01-01
Effective belief-desire reasoning requires both specialized representational capacities--the capacity to represent the mental states as such--as well as executive selection processes for accurate performance on tasks requiring the prediction and explanation of the actions of social agents. Compromised belief-desire reasoning in a given population…
USDA-ARS?s Scientific Manuscript database
It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...
Three-dimensional model-based object recognition and segmentation in cluttered scenes.
Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn
2006-10-01
Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
NASA Astrophysics Data System (ADS)
Abbasi, Ashkan; Monadjemi, Amirhassan; Fang, Leyuan; Rabbani, Hossein
2018-03-01
We present a nonlocal weighted sparse representation (NWSR) method for reconstruction of retinal optical coherence tomography (OCT) images. To reconstruct high signal-to-noise ratio, high-resolution OCT images, efficient denoising and interpolation algorithms are necessary, especially when the original data were subsampled during acquisition. However, OCT images suffer from a high level of noise, which makes estimating sparse representations difficult. Thus, the proposed NWSR method merges sparse representations of multiple similar noisy and denoised patches to better estimate a sparse representation for each patch. First, the sparse representation of each patch is independently computed over an overcomplete dictionary, and then a nonlocal weighted sparse coefficient is computed by averaging the representations of similar patches. Since sparsity can reveal relevant information from noisy patches, combining the representations of noisy and denoised patches yields a more robust estimate of the unknown sparse representation. The denoised patches are obtained by applying an off-the-shelf image denoising method, and our method provides an efficient way to exploit information from both sets of representations. Experimental results on denoising and interpolation of spectral-domain OCT images demonstrate the effectiveness of the proposed NWSR method over existing state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Gruyters, Willem; Verboven, Pieter; Rogge, Seppe; Vanmaercke, Simon; Ramon, Herman; Nicolai, Bart
2017-10-01
Freshly harvested horticultural produce requires proper temperature management to maintain its high economic value. To this end, low-temperature storage is crucial for maintaining high product quality. Optimizing both the package design of packed produce and the different steps in the postharvest cold chain can be achieved by numerical modelling of the relevant transport phenomena. This work presents a novel methodology to accurately model both the random filling of produce in a package and the subsequent cooling process. First, a cultivar-specific database of more than 100 realistic CAD models of apple and pear fruit is built with a validated geometrical 3D shape model generator. To represent a realistic picking season accurately, the model generator also takes into account the biological variability of produce shape. Next, a discrete element model (DEM) randomly chooses surface-meshed bodies from the database to simulate the gravitational filling process of produce in a box or bin, using actual mechanical properties of the fruit. A computational fluid dynamics (CFD) model is then developed with the final stacking arrangement of the produce to study the cooling efficiency of packages under several conditions and configurations. Here, a typical precooling operation is simulated to demonstrate the large differences between using actual 3D shapes of the fruit and an equivalent-spheres approach that simplifies the problem drastically. From this study, it is concluded that using a simplified representation of the actual fruit shape may lead to a severe overestimation of cooling performance.
Wen, Liewei; Yang, Sihua; Zhong, Junping; Zhou, Quan; Xing, Da
2017-01-01
Multifunctional nanoparticle-mediated imaging and therapeutic techniques are promising modalities for accurate localization and targeted treatment of cancer in clinical settings. Thermoacoustic (TA) imaging is highly sensitive in detecting the distribution of water, ions, or specific nanoprobes, and provides excellent resolution, good contrast, and superior tissue penetrability. TA therapy is a potential non-invasive approach for the treatment of deep-seated tumors. In this study, human serum albumin (HSA)-functionalized superparamagnetic iron oxide nanoparticle (HSA-SPIO) is used as a multifunctional nanoprobe with clinical application potential for MRI, TA imaging, and treatment of tumors. In addition to being an MRI contrast agent for tumor localization, HSA-SPIO can absorb pulsed microwave energy and transform it into a shockwave via the thermoelastic effect. The TA image reconstructed from the detected TA signal is therefore expected to be a sensitive and accurate representation of HSA-SPIO accumulation in the tumor. More importantly, owing to the selective retention of HSA-SPIO in tumor tissues and the strong TA shockwave at the cellular level, the HSA-SPIO-induced TA effect under pulsed microwave radiation can be used to kill cancer cells with high efficiency and inhibit tumor growth. Furthermore, ultra-short pulsed microwaves with high excitation efficiency and deep penetrability in biological tissues make TA therapy a highly efficient anti-tumor modality on this versatile platform. Overall, HSA-SPIO-mediated MRI and TA imaging offer more comprehensive diagnostic information and enable dynamic visualization of nanoagents in tumorous tissue, thereby guiding tumor-targeted therapy. PMID:28638483
Energy dissipation in the blade tip region of an axial fan
NASA Astrophysics Data System (ADS)
Bizjan, B.; Milavec, M.; Širok, B.; Trenc, F.; Hočevar, M.
2016-11-01
A study of velocity and pressure fluctuations in the tip clearance flow of an axial fan is presented in this paper. Two different rotor blade tip designs were investigated: the standard one with straight blade tips and the modified one with swept-back tip winglets. Comparison of integral sound parameters indicates a significant noise level reduction for the modified blade tip design. To study the underlying mechanisms of the energy conversion and noise generation, a novel experimental method based on simultaneous measurements of local flow velocity and pressure has also been developed and is presented here. The method is based on phase-space analysis using attractors, which enables more accurate identification and determination of the local flow structures and turbulent flow properties. Specific gap flow energy derived from the pressure and velocity time series was introduced as an additional attractor parameter to assess the flow energy distribution and dissipation within the phase space, thus determining characteristic sources of the fan's acoustic emission. The attractors reveal a more efficient conversion of pressure to kinetic flow energy in the case of the modified (tip winglet) fan blade design, and also a reduction in emitted noise levels. The findings of the attractor analysis are in good agreement with the integral fan characteristics (efficiency and noise level), while offering a much more accurate and detailed representation of gap flow phenomena.
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where such systems are widely used for personnel training, critical mission planning, disaster management, and more. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems sufficiently realizes the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, and a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects, including buildings, constructions, vehicles, and pedestrians, in clusters of sequential and parallel damage events.
Modeling cotton (Gossypium spp) leaves and canopy using computer aided geometric design (CAGD)
USDA-ARS?s Scientific Manuscript database
The goal of this research is to develop a geometrically accurate model of cotton crop canopies for exploring changes in canopy microenvironment and physiological function with leaf structure. We develop an accurate representation of the leaves, including changes in three-dimensional folding and orie...
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. For some problems a Lagrangian framework offers benefits, while for others an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
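The Lagrangian transport half of such a hybrid can be sketched as follows. This is a minimal illustration of the Spatial Markov idea (successive particle jumps are correlated through a transition matrix over velocity classes) followed by binning onto an Eulerian grid; the two velocity classes, transition probabilities, jump lengths, and grid parameters are illustrative assumptions, not values from the paper.

```python
import random

# Two assumed velocity classes; successive jumps are correlated via a
# Markov transition matrix (the hallmark of the Spatial Markov model).
P = {"fast": {"fast": 0.7, "slow": 0.3},
     "slow": {"fast": 0.3, "slow": 0.7}}
JUMP = {"fast": 2.0, "slow": 0.5}          # distance advanced per step

def transport(n_particles, n_steps, seed=0):
    """Advance particles; each one's velocity class follows the Markov chain."""
    rng = random.Random(seed)
    positions = [0.0] * n_particles
    states = ["fast"] * n_particles
    for _ in range(n_steps):
        for i in range(n_particles):
            positions[i] += JUMP[states[i]]
            r, acc = rng.random(), 0.0
            for nxt, p in P[states[i]].items():
                acc += p
                if r < acc:
                    states[i] = nxt
                    break
    return positions

def to_eulerian(positions, dx, n_cells):
    """Map Lagrangian particles onto an Eulerian grid (counts per cell),
    where closure terms and reactions would then be evaluated."""
    grid = [0] * n_cells
    for x in positions:
        grid[min(int(x / dx), n_cells - 1)] += 1
    return grid

pos = transport(1000, 20)
conc = to_eulerian(pos, dx=5.0, n_cells=10)
```

Evaluating reactions on the grid rather than between particle pairs is what makes the Eulerian reaction step cheap, as the abstract notes.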
An exploratory study of cognitive load in diagnosing patient conditions.
Workman, Michael; Lesser, Michael F; Kim, Joonmin
2007-06-01
To determine whether the ways in which information is presented to physicians will improve their ability to respond in a timely and accurate manner to acute care needs. The forms of presentation compared traditional textual, chart, and graph representations with equivalent symbolic language representations. To test this objective, our investigation involved two studies of interpreting patient conditions using the two forms of information representation. The first assessed the level of cognitive effort (the outcome variable known as cognitive load), and the second assessed the time and accuracy outcome variables. The first study involved 3rd and 4th year medical students, and the second study involved three board-certified physicians who worked in an intensive care unit of a metropolitan hospital. The first study utilized an all-within-subject design with repeated measures, where pretests served as a control covariate for prior learning and individual differences. The second study utilized a random sampling of records analyzed by two physicians and qualitatively evaluated by board-certified intensivists. The first study indicated that the cognitive load to interpret the symbolic representation was lower than for the more traditional textual, chart, and graphic forms. The second study suggests that experienced physicians may react in a more timely fashion, with at least the same accuracy, when the symbolic language is used than with traditional charts and graphs. The ways in which information is presented to physicians may affect the quality of acute care, such as in intensive, critical, and emergency care units. When information can be presented in symbolic form, it may be cognitively processed more efficiently than when it is presented in the usual textual and chart form, potentially lowering errors in diagnosis and increasing responsiveness to patient conditions.
Visual management of large scale data mining projects.
Shah, I; Hunter, L
2000-01-01
This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.
Scaling and efficiency of PRISM in adaptive simulations of turbulent premixed flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonse, Shaheen R.; Bell, J.B.; Brown, N.J.
1999-12-01
The dominant computational cost in modeling turbulent combustion phenomena numerically with high-fidelity chemical mechanisms is the time required to solve the ordinary differential equations associated with chemical kinetics. One approach to reducing that computational cost is to develop an inexpensive surrogate model that accurately represents the evolution of chemical kinetics. One such approach, PRISM, develops a polynomial representation of the chemistry evolution in a local region of chemical composition space. This representation is then stored for later use. As the computation proceeds, the chemistry evolution for other points within the same region is computed by evaluating these polynomials instead of calling an ordinary differential equation solver. If initial data for advancing the chemistry is encountered that is not in any region for which a polynomial is defined, the methodology dynamically samples that region and constructs a new representation for it. The utility of this approach is determined by the size of the regions over which the representation provides a good approximation to the kinetics and the number of these regions that are necessary to model the subset of composition space that is active during a simulation. In this paper, we assess the PRISM methodology in the context of a turbulent premixed flame in two dimensions. We consider turbulent intensities ranging from weak turbulence that has little effect on the flame to strong turbulence that tears pockets of burning fluid from the main flame. For each case, we explore a range of sizes for the local regions and determine the scaling behavior as a function of region size and turbulent intensity.
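The cache-and-reuse pattern described above can be sketched in a few lines. This is a deliberately tiny stand-in: the "expensive solver" is a one-variable first-order decay with an exact answer, the composition space is one-dimensional, and the per-region surrogate is linear rather than the paper's polynomials; region width, rate constant, and time step are illustrative assumptions.

```python
import math

K, DT = 2.0, 0.1          # assumed rate constant and chemistry time step
REGION_WIDTH = 0.1        # size of a composition-space region
cache = {}                # region index -> (a, b) of the fit y_new ~ a + b*y

def expensive_solver(y):
    """Stand-in for the ODE solver: exact advance of dy/dt = -K*y over DT."""
    return y * math.exp(-K * DT)

def region_of(y):
    return int(y / REGION_WIDTH)

def advance(y):
    """Advance chemistry one step, building a per-region surrogate on first
    visit and reusing it for every later point that lands in the region."""
    r = region_of(y)
    if r not in cache:
        # sample the region's endpoints once and fit y_new = a + b*y
        y0, y1 = r * REGION_WIDTH, (r + 1) * REGION_WIDTH
        f0, f1 = expensive_solver(y0), expensive_solver(y1)
        b = (f1 - f0) / (y1 - y0)
        cache[r] = (f0 - b * y0, b)
    a, b = cache[r]
    return a + b * y

y = 0.95
for _ in range(5):
    y = advance(y)
```

Because this toy chemistry is linear in y, the surrogate reproduces the solver exactly; for real kinetics the region size controls the approximation error, which is exactly the trade-off the paper studies.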
Integrated Multiscale Modeling of Molecular Computing Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
High-order nonlinear susceptibilities of He
NASA Astrophysics Data System (ADS)
Liu, W.-C.; Clark, Charles W.
1996-05-01
High-order nonlinear optical response of noble gases to intense laser radiation is of considerable experimental interest, but is difficult to measure or calculate accurately. We have begun a set of calculations of frequency-dependent nonlinear susceptibilities of He 1s^2, within the framework of Rayleigh-Schrödinger perturbation theory at lowest applicable order, with the goal of providing critically evaluated atomic data for modelling high harmonic generation processes. The atomic Hamiltonian is decomposed in terms of Hylleraas coordinates and spherical harmonics using the formalism of Pont and Shakeshaft (M. Pont and R. Shakeshaft, Phys. Rev. A 51, 257 (1995)), and the hierarchy of inhomogeneous equations of perturbation theory is solved iteratively. A combination of Hylleraas and Frankowski basis functions is used (J. D. Baker, Master's thesis, U. Delaware (1988); J. D. Baker, R. N. Hill, and J. D. Morgan, AIP Conference Proceedings 189, 123 (1989)); the compact Hylleraas basis provides a highly accurate representation of the ground-state wavefunction, whereas the diffuse Frankowski basis functions efficiently reproduce the correct asymptotic structure of the perturbed orbitals.
Age Differences in the Effects of Conscious and Unconscious Thought in Decision Making
Queen, Tara L.; Hess, Thomas M.
2010-01-01
The roles of unconscious and conscious thought in decision making were investigated to examine both (a) boundary conditions associated with the efficacy of each type of thought and (b) age differences in intuitive versus deliberative thought. Participants were presented with two decision tasks, one requiring active deliberation and the other intuitive processing. Younger and older adults then engaged in conscious or unconscious thought processing before making a decision. A manipulation check revealed that younger adults were more accurate in their representations of the decision material than older adults, which accounted for much of the age-related variation in performance when the full sample was considered. When only considering accurate participants, decision making was best when there was congruence between the nature of the information and the thought condition. Thus, unconscious thought was more appropriate when the decision relied on intuitive rather than deliberative processing, whereas the converse was true with conscious thought. Although older adults displayed somewhat less efficient deliberative processing, their ability to process information at the intuitive level was relatively preserved. Additionally, both young and older adults displayed choice-supportive memory. PMID:20545411
A system for saccular intracranial aneurysm analysis and virtual stent planning
NASA Astrophysics Data System (ADS)
Baloch, Sajjad; Sudarsky, Sandra; Zhu, Ying; Mohamed, Ashraf; Geiger, Bernhard; Dutta, Komal; Namburu, Durga; Nias, Puthenveettil; Martucci, Gary; Redel, Thomas
2012-02-01
Recent studies have found correlation between the risk of rupture of saccular aneurysms and their morphological characteristics, such as volume, surface area, neck length, among others. For reliably exploiting these parameters in endovascular treatment planning, it is crucial that they are accurately quantified. In this paper, we present a novel framework to assist physicians in accurately assessing saccular aneurysms and efficiently planning for endovascular intervention. The approach consists of automatically segmenting the pathological vessel, followed by the construction of its surface representation. The aneurysm is then separated from the vessel surface through a graph-cut based algorithm that is driven by local geometry as well as strong prior information. The corresponding healthy vessel is subsequently reconstructed and measurements representing the patient-specific geometric parameters of pathological vessel are computed. To better support clinical decisions on stenting and device type selection, the placement of virtual stent is eventually carried out in conformity with the shape of the diseased vessel using the patient-specific measurements. We have implemented the proposed methodology as a fully functional system, and extensively tested it with phantom and real datasets.
A limit-cycle self-organizing map architecture for stable arm control.
Huang, Di-Wei; Gentili, Rodolphe J; Katz, Garrett E; Reggia, James A
2017-01-01
Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers. Copyright © 2016 Elsevier Ltd. All rights reserved.
Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.
Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice
2015-09-04
The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer a superior precision in expressing biological knowledge.
ERIC Educational Resources Information Center
De Bock, Dirk; Neyens, Deborah; Van Dooren, Wim
2017-01-01
Recent research on the phenomenon of improper proportional reasoning focused on students' understanding of elementary functions and their external representations. So far, the role of basic function properties in students' concept images of functions remained unclear. We add to this research line by investigating how accurate students are in…
Extension of perceived arm length following tool-use: clues to plasticity of body metrics.
Sposito, Ambra; Bolognini, Nadia; Vallar, Giuseppe; Maravita, Angelo
2012-07-01
Humans hold a very accurate representation of the metrics of their body parts. Recent evidence shows that the spatial estimation of body part length, as assessed through a bisection task, is even more accurate than that of non-corporeal extrapersonal objects (Sposito, Bolognini, Vallar, Posteraro, & Maravita, 2009). In the present paper we show that human participants estimate the mid-point of their forearm, which was kept in a radial posture, to be more distal following a 15-min training with a 60 cm-long tool, as compared to pre-tool-use. This outcome is compatible with an increased representation of the participants' forearm length. Control experiments show that this result was not due to a mere distal proprioceptive shift induced by tool-use, and was not replicated following the use of a 20 cm-long, functionally irrelevant tool. These results strongly support the view that, although the inner knowledge of one's own body metrics appears to be one of the more stable features of body representation, body-space interactions requiring the use of tools that extend the natural range of action entail measurable dynamic changes in the representation of body metrics. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI
Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel
2012-01-01
Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras into Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This representational gap prevents taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the novel sensors' knowledge. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body and transitive part-based representation and inference are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yang; Leung, L. Ruby; Fan, Jiwen
This is a collaborative project among North Carolina State University, Pacific Northwest National Laboratory, and Scripps Institution of Oceanography, University of California at San Diego to address the critical need for an accurate representation of aerosol indirect effect in climate and Earth system models. In this project, we propose to develop and improve parameterizations of aerosol-cloud-precipitation feedbacks in climate models and apply them to study the effect of aerosols and clouds on radiation and hydrologic cycle. Our overall objective is to develop, improve, and evaluate parameterizations to enable more accurate simulations of these feedbacks in high resolution regional and global climate models.
North Pacific Mesoscale Coupled Air-Ocean Simulations Compared with Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerovecki, Ivana; McClean, Julie; Koracin, Darko
2014-11-14
The overall objective of this study was to improve the representation of regional ocean circulation in the North Pacific by using high resolution atmospheric forcing that accurately represents mesoscale processes in ocean-atmosphere regional (North Pacific) model configuration. The goal was to assess the importance of accurate representation of mesoscale processes in the atmosphere and the ocean on large scale circulation. This is an important question, as mesoscale processes in the atmosphere which are resolved by the high resolution mesoscale atmospheric models such as Weather Research and Forecasting (WRF), are absent in commonly used atmospheric forcing such as CORE forcing, employed in e.g. the Community Climate System Model (CCSM).
Efficient exact motif discovery.
Marschall, Tobias; Rahmann, Sven
2009-06-15
The motif discovery problem consists of finding over-represented patterns in a collection of biosequences. It is one of the classical sequence analysis problems, but still has not been satisfactorily solved in an exact and efficient manner. This is partly due to the large number of possibilities of defining the motif search space and the notion of over-representation. Even for well-defined formalizations, the problem is frequently solved in an ad hoc manner with heuristics that do not guarantee to find the best motif. We show how to solve the motif discovery problem (almost) exactly on a practically relevant space of IUPAC generalized string patterns, using the p-value with respect to an i.i.d. model or a Markov model as the measure of over-representation. In particular, (i) we use a highly accurate compound Poisson approximation for the null distribution of the number of motif occurrences. We show how to compute the exact clump size distribution using a recently introduced device called probabilistic arithmetic automaton (PAA). (ii) We define two p-value scores for over-representation, the first one based on the total number of motif occurrences, the second one based on the number of sequences in a collection with at least one occurrence. (iii) We describe an algorithm to discover the optimal pattern with respect to either of the scores. The method exploits monotonicity properties of the compound Poisson approximation and is by orders of magnitude faster than exhaustive enumeration of IUPAC strings (11.8 h compared with an extrapolated runtime of 4.8 years). (iv) We justify the use of the proposed scores for motif discovery by showing our method to outperform other motif discovery algorithms (e.g. MEME, Weeder) on benchmark datasets. We also propose new motifs on Mycobacterium tuberculosis. The method has been implemented in Java. It can be obtained from http://ls11-www.cs.tu-dortmund.de/people/marschal/paa_md/.
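The scoring idea in point (ii) can be sketched for the simplest case: score a pattern by the tail probability of its occurrence count under an i.i.d. background. This sketch uses a plain Poisson approximation rather than the paper's more accurate compound Poisson (which also models clumps of overlapping occurrences), and the pattern, sequence length, and base frequencies are illustrative assumptions.

```python
import math

def expected_occurrences(pattern, seq_len, base_probs):
    """Expected count of an exact (non-degenerate) pattern under an i.i.d.
    background model with the given per-base probabilities."""
    p_match = 1.0
    for c in pattern:
        p_match *= base_probs[c]
    return (seq_len - len(pattern) + 1) * p_match

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): the over-representation p-value
    (smaller means the observed count is more surprising)."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

probs = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
lam = expected_occurrences("TATAAT", 10000, probs)
pval = poisson_sf(8, lam)   # p-value for observing >= 8 occurrences
```

The paper's contribution is to make this score exact (via the clump size distribution computed with a probabilistic arithmetic automaton) and to optimize it over a space of IUPAC patterns, which plain enumeration cannot do in reasonable time.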
A Monte Carlo method using octree structure in photon and electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogawa, K.; Maeda, S.
Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations in SPECT data acquisition are generally based on the transport of photons only because the photons being simulated are low energy, and therefore the bremsstrahlung productions by the electrons generated are negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, it is possible to accomplish the high-speed simulation in a simple object with one medium. Here, object description is important in performing the photon and/or electron transport using a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Thus even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished because the number of voxels is much fewer than that of the voxel-based approach which represents an object by a union of the voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading octree string, which is produced by forming an octree structure from a set of serial sections for the object before the simulation; then it transports photons in the geometry. Using the code, if the user just prepares a set of serial sections for the object in which he or she wants to simulate photon trajectories, he or she can perform the simulation automatically using the suboptimal geometry simplified by the octree representation without forming the optimal geometry by handwriting.
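The core data structure can be sketched as follows: subdivide a cubic voxel volume only where the medium is non-uniform, so large homogeneous regions collapse into single leaves and point-in-medium queries stay cheap. This is an illustrative sketch, not the authors' code; the 4x4x4 test volume is an assumption.

```python
def build(voxels, x, y, z, size):
    """voxels[x][y][z] holds a medium id; returns a leaf id for a
    homogeneous block, or a list of 8 child subtrees otherwise."""
    first = voxels[x][y][z]
    if size == 1 or all(
        voxels[x + i][y + j][z + k] == first
        for i in range(size) for j in range(size) for k in range(size)
    ):
        return first                      # homogeneous block -> one leaf
    h = size // 2
    return [build(voxels, x + i, y + j, z + k, h)
            for i in (0, h) for j in (0, h) for k in (0, h)]

def lookup(node, x, y, z, size):
    """Return the medium id at voxel (x, y, z) by descending the octree."""
    while isinstance(node, list):
        h = size // 2
        idx = 4 * (x >= h) + 2 * (y >= h) + (z >= h)
        node = node[idx]
        x, y, z, size = x % h, y % h, z % h, h
    return node

# 4x4x4 volume: mostly medium 0 with a single corner voxel of medium 1
vox = [[[0] * 4 for _ in range(4)] for _ in range(4)]
vox[3][3][3] = 1
tree = build(vox, 0, 0, 0, 4)
```

For this volume the octree needs 8 top-level children plus one subdivided corner instead of 64 uniform voxels, which is the memory and traversal saving the abstract describes.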
Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J
2013-09-01
Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP), and its variants for diffusion in one- and two-dimensions-the return-to-the-plane probability (RTPP), and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. 
Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhiqiang; Chen, Jun; University of Chinese Academy of Sciences, Beijing 100049
Full quantum mechanical calculations of vibrational energies of methane and fluoromethane are carried out using a polyspherical description combining Radau and Jacobi coordinates. The Hamiltonian is built in a potential-optimized discrete variable representation, and vibrational energies are solved using an iterative eigensolver. This new approach can be applied to a large variety of molecules. In particular, we show that it is able to accurately and efficiently compute eigenstates for four different molecules: CH4, CHD3, CH2D2, and CH3F. Very good agreement is obtained with the results reported previously in the literature with different approaches and with experimental data.
Marmarelis, Vasilis Z.; Berger, Theodore W.
2009-01-01
Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input-output data. Results show that the non-parametric models accurately and efficiently replicate the input-output transformations of the parametric models. Volterra kernels thus provide a general and quantitative representation of STP. PMID:18506609
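A non-parametric model of the kind the abstract describes can be sketched as a discrete second-order Volterra series: the output is a first-order convolution with kernel k1 plus a second-order term with kernel k2 over pairs of past inputs. The kernels and the input sequence below are illustrative numbers, not kernels estimated from synaptic data.

```python
def volterra_output(x, k1, k2):
    """y[n] = sum_i k1[i]*x[n-i] + sum_{i,j} k2[i][j]*x[n-i]*x[n-j],
    a discrete second-order Volterra model with memory length len(k1)."""
    M = len(k1)
    y = []
    for n in range(len(x)):
        # input history at lags 0..M-1 (zero-padded before the start)
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
        lin = sum(k1[i] * past[i] for i in range(M))
        quad = sum(k2[i][j] * past[i] * past[j]
                   for i in range(M) for j in range(M))
        y.append(lin + quad)
    return y

k1 = [1.0, 0.5, 0.25]                      # decaying linear memory (assumed)
k2 = [[0.0, 0.1, 0.0],                     # weak pairwise interaction (assumed)
      [0.1, 0.0, 0.0],
      [0.0, 0.0, 0.0]]
y = volterra_output([1.0, 0.0, 1.0, 1.0], k1, k2)
```

The second-order term is what lets the model capture STP-like history dependence: the response to a spike depends on how recently earlier spikes arrived.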
Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies.
Hansen, Katja; Montavon, Grégoire; Biegler, Franziska; Fazli, Siamac; Rupp, Matthias; Scheffler, Matthias; von Lilienfeld, O Anatole; Tkatchenko, Alexandre; Müller, Klaus-Robert
2013-08-13
The accurate and reliable prediction of properties of molecules typically requires computationally intensive quantum-chemical calculations. Recently, machine learning techniques applied to ab initio calculations have been proposed as an efficient approach for describing the energies of molecules in their given ground-state structure throughout chemical compound space (Rupp et al. Phys. Rev. Lett. 2012, 108, 058301). In this paper we outline a number of established machine learning techniques and investigate the influence of the molecular representation on the methods' performance. The best methods achieve prediction errors of 3 kcal/mol for the atomization energies of a wide variety of molecules. Rationales for this performance improvement are given together with pitfalls and challenges when applying machine learning approaches to the prediction of quantum-mechanical observables.
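The workhorse learner in this line of work is kernel ridge regression with a Gaussian kernel over a molecular descriptor. The following sketch shows that pipeline end to end with stdlib-only linear algebra; the one-dimensional "descriptors" and "energies" are toy numbers standing in for real molecular representations (e.g. Coulomb-matrix features) and reference energies.

```python
import math

def gauss(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two scalar descriptors."""
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (fine for tiny sets)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def krr_fit(xs, ys, lam=1e-6):
    """Solve (K + lam*I) alpha = y for the regression weights."""
    K = [[gauss(a, b) + (lam if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    return solve(K, ys)

def krr_predict(xs, alpha, x):
    return sum(a * gauss(xi, x) for a, xi in zip(alpha, xs))

xs, ys = [0.0, 1.0, 2.0], [-10.0, -12.0, -11.0]   # toy descriptor -> energy
alpha = krr_fit(xs, ys)
pred = krr_predict(xs, alpha, 1.0)
```

The paper's point is that with the learner fixed, the choice of descriptor (the `xs` here) largely determines whether chemical accuracy is reached.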
Tang, Guoping; Yuan, Fengming; Bisht, Gautam; ...
2016-01-01
Reactive transport codes (e.g., PFLOTRAN) are increasingly used to improve the representation of biogeochemical processes in terrestrial ecosystem models (e.g., the Community Land Model, CLM). As CLM and PFLOTRAN use explicit and implicit time stepping, respectively, implementation of CLM biogeochemical reactions in PFLOTRAN can result in negative concentrations, which are not physical and can cause numerical instability and errors. The objective of this work is to address the nonnegativity challenge to obtain accurate, efficient, and robust solutions. We illustrate the implementation of a reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant nitrogen uptake reactions and test the implementation at arctic, temperate, and tropical sites. We examine use of scaling back the update during each iteration (SU), log transformation (LT), and downregulating the reaction rate to account for reactant availability limitation to enforce nonnegativity. Both SU and LT guarantee nonnegativity but with implications. When a very small scaling factor occurs due to either consumption or numerical overshoot, and the iterations are deemed converged because of too small an update, SU can introduce excessive numerical error. LT involves multiplication of the Jacobian matrix by the concentration vector, which increases the condition number, decreases the time step size, and increases the computational cost. Neither SU nor LT prevents zero concentration. When the concentration is close to machine precision or 0, a small positive update stops all reactions for SU, and LT can fail due to a singular Jacobian matrix. The consumption rate has to be downregulated such that the solution to the mathematical representation is positive. A first-order rate downregulates consumption and is nonnegative, and adding a residual concentration makes it positive.
For a zero-order rate, or when the reaction rate is not a function of a reactant, representing the availability limitation of each reactant with a Monod substrate-limiting function provides a smooth transition between a zero-order rate when the reactant is abundant and a first-order rate when the reactant becomes limiting. When the half saturation is small, marching through the transition may require small time step sizes to resolve the sharp change within a small range of concentration values. Our results from simple tests and CLM-PFLOTRAN simulations caution against use of SU and indicate that accurate, stable, and relatively efficient solutions can be achieved with LT and with downregulation using a Monod substrate-limiting function and a residual concentration.
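A minimal sketch of the downregulation described above, combining a Monod substrate-limiting factor with a residual concentration. The function name, default residual, and the specific values are illustrative and are not taken from CLM or PFLOTRAN.

```python
def monod_limited_rate(rate_max, c, half_sat, c_res=1e-15):
    """Downregulate a zero-order consumption rate by a Monod factor of the
    reactant concentration c. When c >> half_sat the rate approaches the
    zero-order rate_max; when c << half_sat it falls off as a first-order
    rate ~ rate_max * c / half_sat. Subtracting the residual concentration
    c_res makes consumption stop before c can be driven below c_res."""
    c_eff = max(c - c_res, 0.0)
    return rate_max * c_eff / (half_sat + c_eff)

# abundant reactant: essentially zero-order
r_abundant = monod_limited_rate(1.0, 100.0, 1e-3)
# scarce reactant: essentially first-order in c
r_scarce = monod_limited_rate(1.0, 1e-6, 1e-3)
# at the residual concentration, consumption stops entirely
r_zero = monod_limited_rate(1.0, 1e-15, 1e-3)
```

The sharp-transition caveat in the abstract corresponds to the region c ≈ half_sat: when `half_sat` is tiny, the rate swings from near zero to near `rate_max` over a very narrow concentration range, which the implicit solver must resolve with small time steps.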
Miller, Thomas F.
2017-01-01
We present a coarse-grained simulation model that is capable of simulating the minute-timescale dynamics of protein translocation and membrane integration via the Sec translocon, while retaining sufficient chemical and structural detail to capture many of the sequence-specific interactions that drive these processes. The model includes accurate geometric representations of the ribosome and Sec translocon, obtained directly from experimental structures, and interactions parameterized from nearly 200 μs of residue-based coarse-grained molecular dynamics simulations. A protocol for mapping amino-acid sequences to coarse-grained beads enables the direct simulation of trajectories for the co-translational insertion of arbitrary polypeptide sequences into the Sec translocon. The model reproduces experimentally observed features of membrane protein integration, including the efficiency with which polypeptide domains integrate into the membrane, the variation in integration efficiency upon single amino-acid mutations, and the orientation of transmembrane domains. The central advantage of the model is that it connects sequence-level protein features to biological observables and timescales, enabling direct simulation for the mechanistic analysis of co-translational integration and for the engineering of membrane proteins with enhanced membrane integration efficiency. PMID:28328943
Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation
ERIC Educational Resources Information Center
Schroeder, Sascha; Richter, Tobias; Hoever, Inga
2008-01-01
Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…
An Accurate Projector Calibration Method Based on Polynomial Distortion Representation
Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua
2015-01-01
In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
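The polynomial distortion idea can be sketched as a per-axis least-squares fit of a bivariate polynomial mapping ideal to observed pixel coordinates. The basis degree, the synthetic distortion, and all names below are illustrative choices, not the paper's actual model.

```python
import numpy as np

def poly_terms(x, y):
    # bivariate polynomial basis up to degree 3 (illustrative choice)
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                            x**2 * y, x * y**2, x**3, y**3])

def fit_distortion(ideal, observed):
    """Least-squares fit of per-axis polynomial maps ideal -> observed."""
    A = poly_terms(ideal[:, 0], ideal[:, 1])
    cx, *_ = np.linalg.lstsq(A, observed[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, observed[:, 1], rcond=None)
    return cx, cy

def apply_distortion(cx, cy, pts):
    A = poly_terms(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ cx, A @ cy])

# synthetic check: recover a known low-order distortion exactly
rng = np.random.default_rng(1)
ideal = rng.uniform(-1, 1, size=(200, 2))
observed = ideal + 0.01 * ideal**2   # toy distortion within the model class
cx, cy = fit_distortion(ideal, observed)
corrected = apply_distortion(cx, cy, ideal)
residual = np.abs(corrected - observed).max()
```

Because the toy distortion lies inside the polynomial model class, the fit recovers it to machine precision; with real photodiode measurements the residual instead reflects measurement noise and model truncation.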
Staudacher, Erich M.; Huetteroth, Wolf; Schachtner, Joachim; Daly, Kevin C.
2009-01-01
A central problem facing studies of neural encoding in sensory systems is how to accurately quantify the extent of spatial and temporal responses. In this study, we take advantage of the relatively simple and stereotypic neural architecture found in invertebrates. We combine standard electrophysiological techniques, recently developed population analysis techniques, and novel anatomical methods to form an innovative 4-dimensional view of odor output representations in the antennal lobe of the moth Manduca sexta. This novel approach allows quantification of olfactory responses of characterized neurons with spike time resolution. Additionally, arbitrary integration windows can be used for comparisons with other methods such as imaging. By assigning statistical significance to changes in neuronal firing, this method can visualize activity across the entire antennal lobe. The resulting 4-dimensional representation of antennal lobe output complements imaging and multi-unit experiments yet provides a more comprehensive and accurate view of glomerular activation patterns in spike time resolution. PMID:19464513
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are estimated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
Towards a multilevel cognitive probabilistic representation of space
NASA Astrophysics Data System (ADS)
Tapus, Adriana; Vasudevan, Shrihari; Siegwart, Roland
2005-03-01
This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological perspective, in terms of objects and the relationships between them. The hierarchical representation that we propose permits efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of images taken from the real world that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.
A roadmap for improving the representation of photosynthesis in Earth system models.
Rogers, Alistair; Medlyn, Belinda E; Dukes, Jeffrey S; Bonan, Gordon; von Caemmerer, Susanne; Dietze, Michael C; Kattge, Jens; Leakey, Andrew D B; Mercado, Lina M; Niinemets, Ülo; Prentice, I Colin; Serbin, Shawn P; Sitch, Stephen; Way, Danielle A; Zaehle, Sönke
2017-01-01
Accurate representation of photosynthesis in terrestrial biosphere models (TBMs) is essential for robust projections of global change. However, current representations vary markedly between TBMs, contributing uncertainty to projections of global carbon fluxes. Here we compared the representation of photosynthesis in seven TBMs by examining leaf and canopy level responses of photosynthetic CO2 assimilation (A) to key environmental variables: light, temperature, CO2 concentration, vapor pressure deficit and soil water content. We identified research areas where limited process knowledge prevents inclusion of physiological phenomena in current TBMs and research areas where data are urgently needed for model parameterization or evaluation. We provide a roadmap for new science needed to improve the representation of photosynthesis in the next generation of terrestrial biosphere and Earth system models.
Dependency-based Siamese long short-term memory network for learning sentence representations
Zhu, Wenhao; Ni, Jianyue; Wei, Baogang; Lu, Zhiguo
2018-01-01
Textual representations play an important role in the field of natural language processing (NLP). The efficiency of NLP tasks, such as text comprehension and information extraction, can be significantly improved with proper textual representations. As neural networks are gradually applied to learn the representation of words and phrases, fairly efficient models of learning short text representations have been developed, such as the continuous bag of words (CBOW) and skip-gram models, and they have been extensively employed in a variety of NLP tasks. Because longer texts, such as sentences, have more complex structure, algorithms appropriate for learning short textual representations are not directly applicable to learning long textual representations. One method of learning long textual representations is the Long Short-Term Memory (LSTM) network, which is suitable for processing sequences. However, the standard LSTM does not adequately address the primary sentence structure (subject, predicate and object), which is an important factor for producing appropriate sentence representations. To resolve this issue, this paper proposes the dependency-based LSTM model (D-LSTM). The D-LSTM divides a sentence representation into two parts: a basic component and a supporting component. The D-LSTM uses a pre-trained dependency parser to obtain the primary sentence information and generate supporting components, and it also uses a standard LSTM model to generate the basic sentence components. A weight factor that can adjust the ratio of the basic and supporting components in a sentence is introduced to generate the sentence representation. Compared with the representation learned by the standard LSTM, the sentence representation learned by the D-LSTM contains a greater amount of useful information. The experimental results show that the D-LSTM is superior to the standard LSTM on the Sentences Involving Compositional Knowledge (SICK) data. PMID:29513748
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1993-01-01
Distributed Point Charge Models (PCM) for CO, (H2O)2, and HS-SH molecules have been computed from analytical expressions using multi-center multipole moments. The point charges (set of charges including both atomic and non-atomic positions) exactly reproduce both molecular and segmental multipole moments, thus constituting an accurate representation of the local anisotropy of electrostatic properties. In contrast to other known point charge models, PCM can be used to calculate not only intermolecular, but also intramolecular interactions. Comparison of these results with more accurate calculations demonstrated that PCM can correctly represent both weak and strong (intramolecular) interactions, thus indicating the merit of extending PCM to obtain improved potentials for molecular mechanics and molecular dynamics computational methods.
ERIC Educational Resources Information Center
Alamillo, Laura
2007-01-01
Before the Civil Rights movement, the lack of accurate representations of people of color was evident. Children's literature did not present accurate depictions of Mexican-Americans in the text. Sarapes, sombreros and fiestas were typical symbols used to identify Mexican culture and traditions. The Civil Rights Movement sparked a change for…
ERIC Educational Resources Information Center
Ceci, Stephen J.; Fitneva, Stanka A.; Williams, Wendy M.
2010-01-01
Traditional accounts of memory development suggest that maturation of prefrontal cortex (PFC) enables efficient metamemory, which enhances memory. An alternative theory is described, in which changes in early memory and metamemory are mediated by representational changes, independent of PFC maturation. In a pilot study and Experiment 1, younger…
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, computing these bounds numerically is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner basis. The algebraic representation considerably decreases the computational cost of the error bound. Moreover, using singular points of the algebraic representation, we can obtain accurate break points of the error bound by collisions.
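To make the HB truncation concrete, the sketch below applies a single-harmonic balance to the forced Duffing equation. This toy example is ours, not from the paper: substituting the ansatz x(t) = A cos(ωt) into x'' + x + εx³ = F cos(ωt) and balancing the cos(ωt) terms yields a cubic in the amplitude A, whose real roots are the HB solutions.

```python
import numpy as np

def duffing_hb_amplitude(omega, eps, F):
    """Single-harmonic balance for x'' + x + eps*x^3 = F*cos(omega*t).
    With x = A*cos(omega*t), using cos^3(u) = (3*cos(u) + cos(3u))/4 and
    balancing the cos(omega*t) terms gives
        (3/4)*eps*A^3 + (1 - omega^2)*A - F = 0,
    a cubic whose real roots are the candidate response amplitudes
    (the neglected cos(3*omega*t) term is the truncation error)."""
    roots = np.roots([0.75 * eps, 0.0, 1.0 - omega**2, -F])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# above the linear resonance (omega > 1) with hardening nonlinearity
amps = duffing_hb_amplitude(omega=1.2, eps=0.5, F=0.3)
```

Depending on (ω, ε, F) the cubic has one or three real roots; multiple roots correspond to the jump phenomenon, and the error-bound machinery the paper discusses is what certifies which approximate solutions are close to true periodic orbits.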
Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.
Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2017-02-01
Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole heart functions. It is a challenging task due to the huge complexity of the four chambers including great appearance variations, huge shape deformation and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for one single ventricle, i.e., left ventricle (LV), or bi-ventricles; they cannot be directly used for four chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four chamber volume estimation. We have addressed two key issues, i.e., cardiac image representation and simultaneous four chamber volume estimation, which enables accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL) which can remove irrelevant information and extract discriminative features. We propose direct and simultaneous four-chamber volume estimation by the multioutput sparse latent regression (MSLR), which enables jointly modeling nonlinear input-output relationships and capturing four-chamber interdependence. The proposed method is highly generalized, independent of imaging modalities, which provides a general regression framework that can be extensively used for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance with a correlation coefficient of up to 0.921 with ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac functions.
Sokolis, Dimitrios P; Sassani, Sofia G
2013-05-01
Other than its transport role, the large bowel performs numerous sophisticated functions, e.g. water, electrolyte, and vitamin absorption, optimized by its contractile properties and passive recoil capacity, but these properties have attracted less attention than those of other parts of the gastrointestinal tract. Accordingly, we investigated in vitro the pseudo-elastic properties of tubular specimens from the ascending, mid, and descending colon, and the rectum of healthy Wistar rats under passive quasi-static conditions and a physiologic range of pressures/axial stretches. A neo-Hookean and five-fiber family model was chosen as a microstructure-based material model for its efficiency in producing accurate representations of the three-dimensional inflation/extension data in relation to the underlying microstructure. Guided by our optical microscopy observations, this model took account of isotropic elastin properties and multi-directional collagen organization, but suffered from parameter covariance. Moreover, the contributions to the total model of the neo-Hookean and circumferential-fiber family were negligible, given the tiny amounts of elastin and circumferentially-arranged collagen fibers that were disclosed histologically, and the contributions of the diagonal and radial-fiber families to data representation were similar. The multiaxial response of the intestinal wall was fit equally accurately but without over-parameterization problems by the neo-Hookean and three-fiber (diagonal and axial) family model. The preferred alignment of collagen fibers towards the axial direction bestowed increased axial stiffness to the tissue. The mid colon was the stiffest region by virtue of its greatest material parameters, as validated by its higher collagen content than that of the distal regions. The present findings generate a more cohesive understanding of the large bowel in histomechanical terms, with potential for clinical and biomedical applications.
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
Impact of the basic state and MJO representation on MJO Pacific teleconnections in GCMs
NASA Astrophysics Data System (ADS)
Henderson, S. A.; Maloney, E. D.; Son, S. W.
2017-12-01
Teleconnection patterns induced by the Madden-Julian Oscillation (MJO) are known to significantly alter extratropical weather and climate patterns. However, accurate MJO representation has been difficult for many General Circulation Models (GCMs). Furthermore, many GCMs contain large basic state biases. These issues present challenges to the simulation of MJO teleconnections and, in turn, their associated extratropical impacts. This study examines the impacts of basic state quality and MJO representation on the quality of MJO teleconnection patterns in GCMs from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Results suggest that GCMs assessed to have a good MJO but with large basic state biases have similarly low skill in reproducing MJO teleconnections as GCMs with poor MJO representation. In the good MJO models examined, poor teleconnection quality is associated with large errors in the zonal extent of the Pacific subtropical jet. Whereas the horizontal structure of MJO heating in the Indo-Pacific region is found to have modest impacts on the teleconnection patterns, results suggest that MJO heating east of the dateline can alter the teleconnection pattern characteristics over North America. These findings suggest that in order to accurately simulate the MJO teleconnection patterns and associated extratropical impacts, both the MJO and the basic state must be well represented.
Students' Development of Representational Competence Through the Sense of Touch
NASA Astrophysics Data System (ADS)
Magana, Alejandra J.; Balachandran, Sadhana
2017-06-01
Electromagnetism is an umbrella term encapsulating several different concepts like electric current, electric fields and forces, and magnetic fields and forces, among other topics. However, a number of studies in the past have highlighted the poor conceptual understanding of electromagnetism concepts by students even after instruction. This study aims to identify novel forms of "hands-on" instruction that can result in representational competence and conceptual gain. Specifically, this study aimed to identify if the use of visuohaptic simulations can have an effect on student representations of electromagnetic-related concepts. The guiding question is: How do visuohaptic simulations influence undergraduate students' representations of electric forces? Participants included nine undergraduate students from science, technology, or engineering backgrounds who participated in a think-aloud procedure while interacting with a visuohaptic simulation. The think-aloud procedure was divided into three stages: a prediction stage, a minimally visual haptic stage, and a visually enhanced haptic stage. The results of this study suggest that students accurately characterized and represented the forces felt around a particle, line, and ring charges in the prediction stage, the minimally visual haptic stage, or the visually enhanced haptic stage. Also, some students accurately depicted the three-dimensional nature of the field for each configuration in the two stages that included a tactile mode, where the point charge was the most challenging one.
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
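One way to realize the factored representation is a truncated SVD of the (pixels x channels) data matrix, which is equivalent to PCA on the mean-centered data. The toy image, its rank, and the noise level below are invented for the illustration.

```python
import numpy as np

def spectral_compress(data, n_factors):
    """Factor a (pixels x channels) matrix into spatial scores and spectral
    loadings via SVD of the mean-centered data, keeping n_factors components."""
    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]   # spatial factors (pixels x k)
    loadings = Vt[:n_factors]                   # spectral factors (k x channels)
    return scores, loadings, mean

def spectral_reconstruct(scores, loadings, mean):
    return scores @ loadings + mean

# toy multivariate image: 1000 pixels x 50 channels with rank-3 structure + noise
rng = np.random.default_rng(2)
true = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 50))
data = true + 1e-3 * rng.normal(size=true.shape)
scores, loadings, mean = spectral_compress(data, n_factors=3)
recon = spectral_reconstruct(scores, loadings, mean)
```

Subsequent analyses then operate on the small `scores`/`loadings` factors instead of the full data matrix, which is where the computational savings described in the abstract come from.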
Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří
2013-08-07
Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation, with an amplitude correction derived from the phase-space propagator, and its efficiency, with an improved cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems such as the quartic oscillator the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.
The indexed time table approach for planning and acting
NASA Technical Reports Server (NTRS)
Ghallab, Malik; Alaoui, Amine Mounir
1989-01-01
A representation of symbolic temporal relations, called IxTeT, is discussed that is both powerful enough at the reasoning level for tasks such as plan generation, refinement, and modification, and efficient enough for dealing with real-time constraints in action monitoring and reactive planning. Such a representation for dealing with time is needed in a teleoperated space robot. After a brief survey of known approaches, the computational efficiency of the proposed representation for managing a large database of temporal relations is shown. Reactive planning with IxTeT is described and exemplified through the problem of mission planning and modification for a simple surveying satellite.
Drawing skill is related to the efficiency of encoding object structure
Perdreau, Florian; Cavanagh, Patrick
2014-01-01
Accurate drawing calls on many skills beyond simple motor coordination. A good internal representation of the target object's structure is necessary to capture its proportion and shape in the drawing. Here, we assess two aspects of the perception of object structure and relate them to participants' drawing accuracy. First, we assessed drawing accuracy by computing the geometrical dissimilarity of their drawing to the target object. We then used two tasks to evaluate the efficiency of encoding object structure. First, to examine the rate of temporal encoding, we varied presentation duration of a possible versus impossible test object in the fovea using two different test sizes (8° and 28°). More skilled participants were faster at encoding an object's structure, but this difference was not affected by image size. A control experiment showed that participants skilled in drawing did not have a general advantage that might have explained their faster processing for object structure. Second, to measure the critical image size for accurate classification in the periphery, we varied image size with possible versus impossible object tests centered at two different eccentricities (3° and 8°). More skilled participants were able to categorise object structure at smaller sizes, and this advantage did not change with eccentricity. A control experiment showed that the result could not be attributed to differences in visual acuity, leaving attentional resolution as a possible explanation. Overall, we conclude that drawing accuracy is related to faster encoding of object structure and better access to crowded details. PMID:25469216
NASA Astrophysics Data System (ADS)
Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman
2013-07-01
In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g. because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. 
The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.
REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang
2013-04-30
Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can 'mix and match' mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
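The sample-then-regress workflow that REVEAL automates can be sketched as follows; `expensive_model` and the linear surrogate below are illustrative stand-ins, not REVEAL code:

```python
# Sketch of the ROM workflow: sample the expensive model, fit a cheap
# surrogate, then quantify surrogate accuracy on held-out points.
import random

def expensive_model(x):             # stand-in for an HPC simulation run
    return 1.0 + 2.0 * x + 0.5 * x * x

def fit_linear(xs, ys):
    """Closed-form least-squares line y ~ a + b*x (the regression step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def build_rom(n_samples, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(0.0, 1.0) for _ in range(n_samples)]  # sampling step
    ys = [expensive_model(x) for x in xs]
    a, b = fit_linear(xs, ys)
    rom = lambda x: a + b * x
    # Quantify ROM accuracy on a validation grid (REVEAL automates this).
    err = max(abs(rom(x) - expensive_model(x)) for x in
              [i / 20 for i in range(21)])
    return rom, err
```

If the quantified error is too large, the iterative approach adds samples or swaps in a richer regression family.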
Situation exploration in a persistent surveillance system with multidimensional data
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.
2013-03-01
There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional, multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application is developed for rapid assessment and evaluation of different hypotheses based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and it is demonstrated how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally, a technique for rapid search of tagged information contingent on ranking and confidence is explained for the analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.
NASA Astrophysics Data System (ADS)
Shakib, Farnaz; Huo, Pengfei
Photo-induced proton-coupled electron transfer (PCET) reactions are at the heart of energy conversion in photocatalysis. Here, we apply the recently developed ring-polymer surface-hopping (RPSH) approach to simulate the nonadiabatic dynamics of photo-induced PCET. The RPSH method incorporates ring-polymer (RP) quantization of the proton into the fewest-switches surface-hopping (FSSH) approach. Using two diabatic electronic states, corresponding to the electron donor and acceptor states, we model photo-induced PCET with the proton described by a classically isomorphic ring polymer. With the RPSH method, we obtain numerical results that are comparable to those obtained when the proton is treated quantum mechanically. This accuracy stems from incorporating exact quantum statistics, such as proton tunnelling, into approximate quantum dynamics. Additionally, RPSH offers numerical accuracy along with computational efficiency: compared to the FSSH approach in the vibronic representation, there is no need to explicitly calculate a massive number of vibronic states. This approach opens up the possibility of accurately and efficiently simulating photo-induced PCET with multiple transferring protons or electrons.
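For context, the classical isomorphism referred to above maps the quantum proton onto an n-bead ring polymer governed, in the standard formulation (not spelled out in this abstract; bead-mass conventions vary), by the Hamiltonian

```latex
H_n(\mathbf{p},\mathbf{q}) = \sum_{i=1}^{n}\left[\frac{p_i^2}{2m}
  + \frac{1}{2}\, m\,\omega_n^2\,(q_i - q_{i+1})^2 + V(q_i)\right],
\qquad \omega_n = \frac{n}{\beta\hbar}, \quad q_{n+1} \equiv q_1 .
```

The harmonic springs between neighboring beads are what encode the exact quantum Boltzmann statistics (including tunnelling) that RPSH exploits.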
A review of AirQ Models and their applications for forecasting the air pollution health outcomes.
Oliveri Conti, Gea; Heibati, Behzad; Kloog, Itai; Fiore, Maria; Ferrante, Margherita
2017-03-01
Even though clean air is considered a basic requirement for the maintenance of human health, air pollution continues to pose a significant health threat in developed and developing countries alike. Monitoring and modeling of classic and emerging pollutants are vital to our knowledge of health outcomes in exposed subjects and to our ability to predict them. The ability to anticipate and manage changes in atmospheric pollutant concentrations relies on an accurate representation of the chemical state of the atmosphere. The task of providing the best possible analysis of air pollution thus requires computational tools enabling efficient integration of observational data into models. A number of air quality models have been developed and play an important role in air quality management. Even though a large number of air quality models have been discussed or applied, their heterogeneity makes it difficult to select one approach above the others. This paper provides a brief review of air quality models with respect to several aspects, such as the prediction of health effects.
A pertinent approach to solve nonlinear fuzzy integro-differential equations.
Narayanamoorthy, S; Sathiyapriya, S P
2016-01-01
Fuzzy integro-differential equations form an important part of fuzzy analysis theory, with both theoretical and practical value in analytical dynamics, so an appropriate computational algorithm for solving them is essential. In this article, we use parametric forms of fuzzy numbers and suggest an applicable approach for solving nonlinear fuzzy integro-differential equations using the homotopy perturbation method. A clear and detailed description of the proposed method is provided. Our main objective is to illustrate that the construction of an appropriate convex homotopy in a proper way leads to highly accurate solutions with less computational work. The efficiency of the approximation technique is established via stability and convergence analysis so as to guarantee the performance of the methodology. Numerical examples are presented to verify convergence, and they reveal the validity of the presented numerical technique. Numerical results are tabulated and examined by comparing the obtained approximate solutions with the known exact solutions. Graphical representations of the exact and acquired approximate fuzzy solutions clarify the accuracy of the approach.
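As a reminder of the standard construction the abstract invokes (He's homotopy perturbation method): for an equation A(u) - f(r) = 0 with linear part L and nonlinear part N, one builds the convex homotopy

```latex
H(v,p) = (1-p)\,\big[L(v) - L(u_0)\big] + p\,\big[A(v) - f(r)\big] = 0,
\qquad p \in [0,1],
```

and expands the solution as a power series in the embedding parameter,

```latex
v = v_0 + p\,v_1 + p^2 v_2 + \cdots, \qquad
u = \lim_{p \to 1} v = \sum_{i \ge 0} v_i .
```

The accuracy and convergence of the series hinge on the choice of L and the initial guess u_0, which is the "appropriate convex homotopy" the authors emphasize.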
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2018-01-01
The metabolism of individual organisms and biological communities can be viewed as a network of metabolites connected to each other through chemical reactions. In metabolic networks, chemical reactions transform reactants into products, thereby transferring elements between these metabolites. Knowledge of how elements are transferred through reactant/product pairs allows for the identification of primary compound connections through a metabolic network. However, such information is not readily available and is often challenging to obtain for large reaction databases or genome-scale metabolic models. In this study, a new algorithm was developed for automatically predicting the element-transferring reactant/product pairs using the limited information available in the standard representation of metabolic networks. The algorithm demonstrated high efficiency in analyzing large datasets and provided accurate predictions when benchmarked with manually curated data. Applying the algorithm to the visualization of metabolic networks highlighted pathways of primary reactant/product connections and provided an organized view of element-transferring biochemical transformations. The algorithm was implemented as a new function in the open source software package PSAMM in the release v0.30 (https://zhanglab.github.io/psamm/).
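As a hedged illustration of the reactant/product-pairing idea (not PSAMM's published algorithm), one can greedily pair compounds by the number of atoms their chemical formulas share:

```python
# Toy greedy pairing of reactants with products by multiset overlap of
# element counts. Illustrative only; PSAMM's actual prediction differs.
from collections import Counter
import itertools
import re

def parse_formula(f):
    """Toy parser for formulas like 'C3H4O3' (no parentheses or charges)."""
    counts = Counter()
    for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", f):
        counts[el] += int(n or 1)
    return counts

def pair_score(r, p):
    """Atoms the pair could transfer: elementwise min of the two formulas."""
    ra, pa = parse_formula(r), parse_formula(p)
    return sum(min(ra[e], pa[e]) for e in ra)

def predict_pairs(reactants, products):
    """Greedily pick the reactant/product pair sharing the most atoms."""
    pairs, used_r, used_p = [], set(), set()
    for r, p in sorted(itertools.product(reactants, products),
                       key=lambda rp: -pair_score(*rp)):
        if r not in used_r and p not in used_p and pair_score(r, p) > 0:
            pairs.append((r, p))
            used_r.add(r)
            used_p.add(p)
    return pairs
```

For a toy amination (pyruvate + ammonia to alanine + water), the carbon skeleton pairs with alanine and the ammonia with water, tracing the primary element-transferring connections.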
NASA Astrophysics Data System (ADS)
Orimo, Yuki; Sato, Takeshi; Scrinzi, Armin; Ishikawa, Kenichi L.
2018-02-01
We present a numerical implementation of the infinite-range exterior complex scaling [Scrinzi, Phys. Rev. A 81, 053845 (2010), 10.1103/PhysRevA.81.053845] as an efficient absorbing boundary to the time-dependent complete-active-space self-consistent field method [Sato, Ishikawa, Březinová, Lackner, Nagele, and Burgdörfer, Phys. Rev. A 94, 023405 (2016), 10.1103/PhysRevA.94.023405] for multielectron atoms subject to an intense laser pulse. We introduce Gauss-Laguerre-Radau quadrature points to construct discrete variable representation basis functions in the last radial finite element extending to infinity. This implementation is applied to strong-field ionization and high-harmonic generation in He, Be, and Ne atoms. It efficiently prevents unphysical reflection of photoelectron wave packets at the simulation boundary, enabling accurate simulations with substantially reduced computational cost, even under significant (≈50%) double ionization. For the case of a simulation of high-harmonic generation from Ne, for example, an 80% cost reduction is achieved compared to a mask-function absorption boundary.
Human inferior colliculus activity relates to individual differences in spoken language learning.
Chandrasekaran, Bharath; Kraus, Nina; Wong, Patrick C M
2012-03-01
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural "sharpening" models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models.
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphical techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects that are computer aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to keep the computing time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
Genetic modifications of pigs for medicine and agriculture
Whyte, Jeffrey J.; Prather, Randall S.
2011-01-01
Genetically modified swine hold great promise in the fields of agriculture and medicine. Currently, these swine are being used to optimize production of quality meat, to improve our understanding of the biology of disease resistance, and to reduce waste. In the field of biomedicine, swine are anatomically and physiologically analogous to humans. Alterations of key swine genes in disease pathways provide model animals to improve our understanding of the causes and potential treatments of many human genetic disorders. The completed sequencing of the swine genome will significantly enhance the specificity of genetic modifications and allow for more accurate representations of human disease based on syntenic genes between the two species. Improvements in both methods of gene alteration and the efficiency of model animal production are key to enabling routine use of these swine models in medicine and agriculture. PMID:21671302
NASA Technical Reports Server (NTRS)
Kellner, A.
1987-01-01
Extremely large knowledge sources and efficient knowledge access, which characterize future real-life artificial intelligence applications, represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and a corresponding reasoning mechanism are proposed which are particularly suited to the efficient processing of such large knowledge bases in expert systems.
Examining ion channel properties using free-energy methods.
Domene, Carmen; Furini, Simone
2009-01-01
Recent advances in structural biology have revealed the architecture of a number of transmembrane channels, allowing for these complex biological systems to be understood in atomistic detail. Computational simulations are a powerful tool by which the dynamic and energetic properties, and thereby the function of these protein architectures, can be investigated. The experimentally observable properties of a system are often determined more by energetics than by dynamics, and therefore understanding the underlying free energy (FE) of biophysical processes is of crucial importance. Critical to the accurate evaluation of FE values are the problems of obtaining adequate sampling of complex biological energy landscapes and of obtaining accurate representations of the potential energy of a system, the latter problem having been addressed through the development of molecular force fields. While these challenges are common to all FE methods, depending on the system under study and the questions being asked of it, one technique for FE calculation may be preferable to another, the choice of method and simulation protocol being crucial to achieving efficiency. Applied in a correct manner, FE calculations represent a predictive and affordable computational tool with which to make relevant contact with experiments. This chapter therefore aims to give an overview of the most widely implemented computational methods used to calculate the FE associated with particular biochemical or biophysical events, and to highlight their recent applications to ion channels. Copyright © 2009 Elsevier Inc. All rights reserved.
Li, Jun; Jiang, Bin; Song, Hongwei; ...
2015-04-17
Here, we survey the recent advances in theoretical understanding of quantum state resolved dynamics, using the title reactions as examples. It is shown that the progress was made possible by major developments in two areas. First, an accurate analytical representation of many high-level ab initio points over a large configuration space can now be made with high fidelity and the necessary permutation symmetry. The resulting full-dimensional global potential energy surfaces enable dynamical calculations using either quasi-classical trajectory or, more importantly, quantum mechanical methods. The second advance is the development of accurate and efficient quantum dynamical methods, which are necessary for providing a reliable treatment of quantum effects in reaction dynamics such as tunneling, resonances, and zero-point energy. The powerful combination of the two advances has allowed us to achieve a quantitatively accurate characterization of the reaction dynamics, which unveiled rich dynamical features such as steric steering, strong mode specificity, and bond selectivity. The dependence of reactivity on reactant modes can be rationalized by the recently proposed sudden vector projection model, which attributes the mode specificity and bond selectivity to the coupling of reactant modes with the reaction coordinate at the relevant transition state. The deeper insights provided by these theoretical studies have advanced our understanding of reaction dynamics to a new level.
EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.
Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua
2012-01-01
Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models have been developed, but a similar need remains for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination). We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation of temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76/82) inter-coder agreement on sentence chunking and 72% (72/100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
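As a purely hypothetical sketch of what a frame-based temporal annotation might look like (all field names below are invented for illustration, not EliXR-TIME's actual schema):

```python
# Hypothetical frame for a temporal eligibility criterion; the slot names
# are illustrative inventions, not the published EliXR-TIME classes.
from dataclasses import dataclass

@dataclass
class TemporalFrame:
    event: str             # clinical event the constraint attaches to
    relation: str          # e.g. "within", "before", "after"
    duration_value: float  # magnitude of the temporal window
    duration_unit: str     # e.g. "day", "month", "year"
    anchor: str            # reference time point, e.g. "enrollment"

# Criterion: "myocardial infarction within 6 months prior to enrollment"
frame = TemporalFrame(event="myocardial infarction", relation="within",
                      duration_value=6, duration_unit="month",
                      anchor="enrollment")
```

A frame like this makes the otherwise free-text temporal expression machine-queryable for eligibility determination.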
Know Thyself: Behavioral Evidence for a Structural Representation of the Human Body
Rusconi, Elena; Gonzaga, Mirandola; Adriani, Michela; Braun, Christoph; Haggard, Patrick
2009-01-01
Background Representing one's own body is often viewed as a basic form of self-awareness. However, little is known about structural representations of the body in the brain. Methods and Findings We developed an inter-manual version of the classical “in-between” finger gnosis task: participants judged whether the number of untouched fingers between two touched fingers was the same on both hands, or different. We thereby dissociated structural knowledge about fingers, specifying their order and relative position within a hand, from tactile sensory codes. Judgments following stimulation on homologous fingers were consistently more accurate than trials with no or partial homology. Further experiments showed that structural representations are more enduring than purely sensory codes, are used even when number of fingers is irrelevant to the task, and moreover involve an allocentric representation of finger order, independent of hand posture. Conclusions Our results suggest the existence of an allocentric representation of body structure at higher stages of the somatosensory processing pathway, in addition to primary sensory representation. PMID:19412538
On the representation of many-body interactions in water
Medders, Gregory R.; Gotz, Andreas W.; Morales, Miguel A.; ...
2015-09-09
Our recent work has shown that the many-body expansion of the interaction energy can be used to develop analytical representations of global potential energy surfaces (PESs) for water. In this study, the role of short- and long-range interactions at different orders is investigated by analyzing water potentials that treat the leading terms of the many-body expansion through implicit (i.e., TTM3-F and TTM4-F PESs) and explicit (i.e., WHBB and MB-pol PESs) representations. It is found that explicit short-range representations of 2-body and 3-body interactions, along with a physically correct incorporation of short- and long-range contributions, are necessary for an accurate representation of the water interactions from the gas to the condensed phase. Likewise, a complete many-body representation of the dipole moment surface is found to be crucial to reproducing the correct intensities of the infrared spectrum of liquid water.
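The many-body expansion referenced here decomposes the energy of an N-molecule system into one-body terms and corrections of increasing order:

```latex
E(1,\dots,N) = \sum_{i} E^{(1)}(i)
  + \sum_{i<j} \Delta E^{(2)}(i,j)
  + \sum_{i<j<k} \Delta E^{(3)}(i,j,k) + \cdots,
```

where, for example, the pairwise correction is defined as

```latex
\Delta E^{(2)}(i,j) = E(i,j) - E^{(1)}(i) - E^{(1)}(j).
```

The implicit and explicit potentials compared in the study differ precisely in how they represent these low-order correction terms.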
Sanz López, Josep Maria
2017-10-09
A growing academic discussion has focused on how, in a globalized world, LGBTQ identities are shaped and influenced by different international actors, such as the media. This article analyzes how LGBTQ people from a rural region of a Western country (Spain) feel about their representations in TV series from English-speaking countries. Employing a qualitative approach, this research aims to assess whether the academic conceptualizations used to analyze these identity-formation processes are accurate. In addition, it explores how dominant media representations are being adapted in a region that, although within the West, can serve as a context of a very different nature. The broad rejection of the TV series representations among participants suggests both an inaccuracy in the conceptualizations used by some scholars to understand LGBTQ flows and a problematic LGBTQ representation in media products that goes beyond regions and spaces.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
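Under the Gaussian assumption the presentation argues for, a collision probability can be estimated directly from the position-error covariance. The following is a minimal Monte-Carlo sketch in a 2-D encounter plane (an assumed setup for illustration, not the presentation's method):

```python
# Monte-Carlo collision probability: P(|relative position| < combined
# hard-body radius) for a Gaussian position error with given mean/covariance.
import math
import random

def collision_probability(mean, cov2x2, radius, n=200_000, seed=1):
    rng = random.Random(seed)
    a, b, c = cov2x2[0][0], cov2x2[0][1], cov2x2[1][1]
    # Cholesky factor of [[a, b], [b, c]] to correlate the samples.
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - l21 * l21)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x = mean[0] + l11 * z1
        y = mean[1] + l21 * z1 + l22 * z2
        if x * x + y * y < radius * radius:
            hits += 1
    return hits / n
```

For a zero-mean, identity-covariance error and unit radius the analytic answer is 1 - exp(-1/2) ≈ 0.3935, which the estimator recovers; an inaccurate covariance feeds directly into an inaccurate probability, which is the motivation for validating and correcting the matrices.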
WURCS 2.0 Update To Encapsulate Ambiguous Carbohydrate Structures.
Matsubara, Masaaki; Aoki-Kinoshita, Kiyoko F; Aoki, Nobuyuki P; Yamada, Issaku; Narimatsu, Hisashi
2017-04-24
Accurate representation of structural ambiguity is important for storing carbohydrate structures containing varying levels of ambiguity in the literature and databases. Although many representations for carbohydrates have been developed in the past, a generalized but discrete representation format did not exist. We had previously developed the Web3 Unique Representation of Carbohydrate Structures (WURCS) in an attempt to define a generalizable and unique linear representation for carbohydrate structures. However, it lacked sufficient rules to uniquely describe ambiguous structures. In this work, we updated WURCS to handle such ambiguous monosaccharide structures. In particular, to handle structural ambiguity around (potential) carbonyl groups incidental to carbohydrate analysis, we defined a representation of backbone carbons containing atomic-level ambiguity. As a result, we show that WURCS 2.0 can represent a wider variety of carbohydrate structures containing ambiguous monosaccharides, such as those whose ring closure is undefined or whose anomeric information is unknown. This new format provides a representation of carbohydrates that was not possible before, and it is currently being used by the International Glycan Structure Repository GlyTouCan.
FERN - a Java framework for stochastic simulation and evaluation of reaction networks.
Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf
2008-08-29
Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications, or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation, and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation, and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava.
Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
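A minimal example of the kind of exact stochastic simulation FERN provides is Gillespie's direct method for a single decay channel A → B; this standalone sketch is not FERN code:

```python
# Gillespie direct-method SSA for the single reaction A -> B with rate k.
import math
import random

def gillespie_decay(n_a, k, t_end, seed=42):
    rng = random.Random(seed)
    t, n_b, times = 0.0, 0, [0.0]
    while n_a > 0:
        a0 = k * n_a                               # total propensity
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        if t > t_end:
            break
        n_a -= 1                                   # fire the reaction
        n_b += 1
        times.append(t)
    return n_a, n_b, times
```

Each firing time is drawn exactly from the current propensity, so the trajectory is a statistically exact sample of the chemical master equation for this toy system; FERN generalizes this to arbitrary networks with many channels.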
Characterization of normality of chaotic systems including prediction and detection of anomalies
NASA Astrophysics Data System (ADS)
Engler, Joseph John
Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented as linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to follow random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Because of this characteristic, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed. 
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
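The short-horizon predictability argument above can be made concrete with a largest-Lyapunov-exponent estimate. The following sketch is illustrative only (the logistic map and its parameters are not taken from the dissertation): it averages log|f'(x)| along a trajectory, and a positive result signals exponential divergence of nearby states.

```python
import numpy as np

def largest_lyapunov_logistic(r=4.0, x0=0.4, n=10000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{t+1} = r*x*(1-x) by averaging log|f'(x)| along a trajectory."""
    x = x0
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1 - 2 * x)))  # |f'(x)| = |r(1 - 2x)|
        x = r * x * (1 - x)
    return total / n

lam = largest_lyapunov_logistic()
# For r = 4 the exact value is ln 2 ~ 0.693; a positive exponent means
# an initial uncertainty eps grows roughly as eps * exp(lam * t),
# which is what bounds the useful prediction horizon.
```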
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiolo, M., E-mail: massimo.maiolo@zhaw.ch; ZHAW, Institut für Angewandte Simulation, Grüental, CH-8820 Wädenswil; Vancheri, A., E-mail: alberto.vancheri@supsi.ch
In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA so that all the instruments of this theory become available, together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, enabling the choice of the most appropriate wavelet family for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform makes it possible to isolate and remove the noise from the CG force-field reconstruction by thresholding the basis function coefficients in each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e. liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.
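As a rough illustration of the thresholding step described above, the sketch below applies a one-level Haar transform in plain NumPy and hard-thresholds the detail band; the 1/r "potential" curve and noise level are invented stand-ins, not MSCG data.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage: split a length-2n signal into
    an average (low) band and a difference (high) band, zero out small
    detail coefficients, then invert the orthonormal transform."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-frequency band
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-frequency band
    detail[np.abs(detail) < threshold] = 0.0    # hard thresholding
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Noisy samples of a smooth pairwise curve U(r) = 1/r (illustrative only)
r = np.linspace(0.5, 3.0, 256)
rng = np.random.default_rng(0)
noisy = 1.0 / r + rng.normal(0.0, 0.05, r.size)
clean = haar_denoise(noisy, threshold=0.1)
```

A smooth potential has small detail coefficients, so thresholding removes mostly noise, which is the mechanism the abstract exploits band by band.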
Efficient discretization in finite difference method
NASA Astrophysics Data System (ADS)
Rozos, Evangelos; Koussis, Antonis; Koutsoyiannis, Demetris
2015-04-01
The finite difference method (FDM) is a plausible and simple method for solving partial differential equations. The standard practice is to use an orthogonal discretization to form algebraic approximations of the derivatives of the unknown function, and a grid, much like raster maps, to represent the properties of the function domain. For example, for the solution of the groundwater flow equation, a raster map is required for the characterization of the discretization cells (flow cell, no-flow cell, boundary cell, etc.), and two raster maps are required for the hydraulic conductivity and the storage coefficient. Unfortunately, this simple approach to describing the topology comes with the known disadvantages of the FDM (rough representation of the geometry of the boundaries, wasted computational resources in the unavoidable expansion of grid refinement to all cells of the same column and row, etc.). To overcome these disadvantages, Hunt has suggested an alternative approach to describing the topology: the use of an array of neighbours. This restricts discretization nodes to the representation of the boundary conditions and the flow domain. Furthermore, the geometry of the boundaries is described more accurately using a vector representation. Most importantly, graded meshes can be employed, which are capable of restricting grid refinement to the areas of interest (e.g. regions where the hydraulic head varies rapidly, locations of pumping wells, etc.). In this study, we test the Hunt approach against MODFLOW, a well-established finite difference model, and the Finite Volume Method with Simplified Integration (FVMSI). The results of this comparison are examined and critically discussed.
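A minimal sketch of the neighbour-array topology on a toy steady-state groundwater problem (unit conductivity, hypothetical node layout and boundary heads; the actual Hunt scheme and MODFLOW are far more general):

```python
import numpy as np

# Hypothetical 5-node domain: two fixed-head boundary nodes and three
# interior nodes on a line. The topology is held as a neighbour array
# (adjacency list) rather than a raster grid.
head = np.array([10.0, 0.0, 0.0, 0.0, 2.0])      # initial heads
fixed = np.array([True, False, False, False, True])
neighbours = [[1], [0, 2], [1, 3], [2, 4], [3]]  # array of neighbours

# Jacobi iteration for steady-state flow with uniform conductivity:
# each free node relaxes toward the mean of its neighbours (Laplace eq.)
for _ in range(500):
    new = head.copy()
    for i in range(len(head)):
        if not fixed[i]:
            new[i] = np.mean(head[neighbours[i]])
    head = new
# Steady state is linear between the boundary heads 10 and 2.
```

Because each node carries its own neighbour list, nodes only need to exist where the flow domain and boundaries require them, which is the point of the approach.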
Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.
Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi
2017-09-22
DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The current main prediction approach is based on machine learning, and its accuracy largely depends on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely K-Skip-N-Grams, information-theoretic features, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. The classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained from a combination of the three feature extractions show the best performance in 10-fold cross-validation, both without dimension reduction and with dimension reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced mixed-feature technique. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.
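One of the three feature types, K-Skip-N-Grams, can be sketched as follows, restricted to skip-bigrams for brevity; the protein fragment is an arbitrary toy example, not taken from the paper's datasets.

```python
from collections import Counter

def k_skip_bigrams(seq, k):
    """Count ordered symbol pairs allowing up to k skipped positions
    between the two symbols (the n = 2 case of k-skip-n-grams)."""
    counts = Counter()
    for i in range(len(seq)):
        for gap in range(k + 1):
            j = i + 1 + gap
            if j < len(seq):
                counts[(seq[i], seq[j])] += 1
    return counts

feats = k_skip_bigrams("MKVL", k=1)
# Pairs: (M,K), (M,V), (K,V), (K,L), (V,L), one occurrence each.
```

In practice the counts would be laid out as a fixed-length vector over the 20x20 amino-acid pairs and concatenated with the other two feature families before feeding the SVM.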
Ceci, Stephen J; Fitneva, Stanka A; Williams, Wendy M
2010-04-01
Traditional accounts of memory development suggest that maturation of prefrontal cortex (PFC) enables efficient metamemory, which enhances memory. An alternative theory is described, in which changes in early memory and metamemory are mediated by representational changes, independent of PFC maturation. In a pilot study and Experiment 1, younger children failed to recognize previously presented pictures, yet the children could identify the context in which they occurred, suggesting these failures resulted from inefficient metamemory. Older children seldom exhibited such failure. Experiment 2 established that this was not due to retrieval-time recoding. Experiment 3 suggested that young children's representation of a picture's attributes explained their metamemory failure. Experiment 4 demonstrated that metamemory is age-invariant when representational quality is controlled: When stimuli were equivalently represented, age differences in memory and metamemory declined. These findings do not support the traditional view that as children develop, neural maturation permits more efficient monitoring, which leads to improved memory. These findings support a theory based on developmental-representational synthesis, in which constraints on metamemory are independent of neurological development; representational features drive early memory to a greater extent than previously acknowledged, suggesting that neural maturation has been overimputed as a source of early metamemory and memory failure.
Auditory spatial representations of the world are compressed in blind humans.
Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J
2017-02-01
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
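The compressive power-function fit can be reproduced with an ordinary least-squares regression in log-log space; the distances and judgments below are fabricated for illustration, with an exponent a < 1 indicating compression.

```python
import numpy as np

# Fit judged = k * actual**a by linear regression on the logs.
actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # virtual distance (m)
judged = np.array([1.3, 2.1, 3.2, 4.8, 7.1])    # hypothetical judgments

slope, intercept = np.polyfit(np.log(actual), np.log(judged), 1)
a, k = slope, np.exp(intercept)
# a < 1 and k > 1 reproduce the pattern reported above: nearby sources
# are overestimated, remote sources underestimated.
```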
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique.
Age differences in the effects of conscious and unconscious thought in decision making.
Queen, Tara L; Hess, Thomas M
2010-06-01
The roles of unconscious and conscious thought in decision making were investigated to examine both (a) boundary conditions associated with the efficacy of each type of thought and (b) age differences in intuitive versus deliberative thought. Participants were presented with 2 decision tasks, one requiring active deliberation and the other intuitive processing. Young and older adults then engaged in conscious or unconscious thought processing before making a decision. A manipulation check revealed that young adults were more accurate in their representations of the decision material than older adults, which accounted for much of the age-related variation in performance when the full sample was considered. When only accurate participants were considered, decision making was best when there was congruence between the nature of the information and the thought condition. Thus, unconscious thought was more appropriate when participants relied on intuitive rather than deliberative processing to make their decision, whereas the converse was true with conscious thought. Although older adults displayed somewhat less efficient deliberative processing, their ability to process information at the intuitive level was relatively preserved. Additionally, both young and older adults displayed choice-supportive memory.
Excited state X-ray absorption spectroscopy: Probing both electronic and structural dynamics
NASA Astrophysics Data System (ADS)
Neville, Simon P.; Averbukh, Vitali; Ruberti, Marco; Yun, Renjie; Patchkovskii, Serguei; Chergui, Majed; Stolow, Albert; Schuurman, Michael S.
2016-10-01
We investigate the sensitivity of X-ray absorption spectra, simulated using a general method, to properties of molecular excited states. Recently, Averbukh and co-workers [M. Ruberti et al., J. Chem. Phys. 140, 184107 (2014)] introduced an efficient and accurate L² method for the calculation of excited state valence photoionization cross-sections based on the application of Stieltjes imaging to the Lanczos pseudo-spectrum of the algebraic diagrammatic construction (ADC) representation of the electronic Hamiltonian. In this paper, we report an extension of this method to the calculation of excited state core photoionization cross-sections. We demonstrate that, at the ADC(2)x level of theory, ground state X-ray absorption spectra may be accurately reproduced, validating the method. Significantly, the calculated X-ray absorption spectra of the excited states are found to be sensitive to both geometric distortions (structural dynamics) and the electronic character (electronic dynamics) of the initial state, suggesting that core excitation spectroscopies will be useful probes of excited state non-adiabatic dynamics. We anticipate that the method presented here can be combined with ab initio molecular dynamics calculations to simulate the time-resolved X-ray spectroscopy of excited state molecular wavepacket dynamics.
A texture-based framework for improving CFD data visualization in a virtual environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bivins, Gerrick O'Ron
2005-01-01
In the field of computational fluid dynamics (CFD), accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, ~10,000 nodes or cells, or very fine, with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.
Ordinal feature selection for iris and palmprint recognition.
Sun, Zhenan; Wang, Libin; Tan, Tieniu
2014-09-01
Ordinal measures have been demonstrated to be an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection, with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and interclass matching samples and the weighted sparsity of ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs be well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be obtained efficiently even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso, for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
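A toy instance of such an LP, written with SciPy's linprog, might look as follows; the dissimilarity data, the pairing of samples, and the sparsity weight lam are invented, and the real formulation operates on a much larger ordinal feature pool.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d = 6   # toy ordinal feature pool size
# Per-feature dissimilarities: genuine pairs score low, impostors high.
genuine = rng.uniform(0.0, 0.3, size=(20, d))
impostor = rng.uniform(0.6, 1.0, size=(20, d))
diffs = impostor - genuine          # one margin constraint per pair

# Variables: feature weights w >= 0, then one slack per constraint.
# Minimize  lam * sum(w) + sum(slack)   (sparsity + misclassification)
# subject to  diffs @ w >= 1 - slack    (large-margin separation).
lam = 0.1
m = diffs.shape[0]
c = np.concatenate([lam * np.ones(d), np.ones(m)])
A_ub = np.hstack([-diffs, -np.eye(m)])  # -(diffs @ w) - slack <= -1
b_ub = -np.ones(m)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (d + m))
w = res.x[:d]   # sparse nonnegative weights over the feature pool
```

The L1-style weight penalty drives many entries of w to exactly zero, which is how the LP selects a small subset of ordinal features.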
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
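The reduced-rank construction can be sketched with a truncated SVD, which, unlike an eigenfunction decomposition, exists for any (non-normal) matrix; the random low-rank matrix below is a stand-in for a discretized scattering operator.

```python
import numpy as np

def reduced_rank(S, r):
    """Best rank-r approximation of an arbitrary matrix via truncated
    SVD (Eckart-Young); this exists even when S has no orthonormal
    eigenfunction decomposition, as for lossy, nonradial objects."""
    U, s, Vh = np.linalg.svd(S, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]

rng = np.random.default_rng(0)
S = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))  # rank-8 "operator"
S_noisy = S + 0.01 * rng.normal(size=S.shape)
S8 = reduced_rank(S_noisy, 8)   # keeps the 8 dominant singular modes
```

Truncation discards the noise energy living in the remaining singular directions, so the rank-8 representation is closer to the true operator than the noisy measurement itself.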
Kather, Jakob Nikolas; Marx, Alexander; Reyes-Aldasoro, Constantino Carlos; Schad, Lothar R; Zöllner, Frank Gerrit; Weis, Cleo-Aron
2015-08-07
Blood vessels in solid tumors are not randomly distributed, but are clustered in angiogenic hotspots. Tumor microvessel density (MVD) within these hotspots correlates with patient survival and is widely used both in diagnostic routine and in clinical trials. Still, these hotspots are usually subjectively defined. There is no unbiased, continuous and explicit representation of tumor vessel distribution in histological whole slide images. This shortcoming distorts angiogenesis measurements and may account for ambiguous results in the literature. In the present study, we describe and evaluate a new method that eliminates this bias and makes angiogenesis quantification more objective and more efficient. Our approach involves automatic slide scanning, automatic image analysis and spatial statistical analysis. By comparing a continuous MVD function of the actual sample to random point patterns, we introduce an objective criterion for hotspot detection: An angiogenic hotspot is defined as a clustering of blood vessels that is very unlikely to occur randomly. We evaluate the proposed method in N=11 images of human colorectal carcinoma samples and compare the results to a blinded human observer. For the first time, we demonstrate the existence of statistically significant hotspots in tumor images and provide a tool to accurately detect these hotspots.
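The criterion "very unlikely to occur randomly" can be sketched by comparing quadrat counts against Monte-Carlo simulations of complete spatial randomness (CSR); the synthetic point pattern, grid size, and significance level below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def hotspot_threshold(n_points, grid=8, trials=2000, alpha=0.01, rng=None):
    """Null distribution of the maximum quadrat count under CSR:
    cells exceeding its (1 - alpha) quantile qualify as hotspots."""
    rng = rng or np.random.default_rng(0)
    maxima = np.empty(trials)
    for t in range(trials):
        xy = rng.uniform(0, 1, size=(n_points, 2))
        counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=grid,
                                      range=[[0, 1], [0, 1]])
        maxima[t] = counts.max()
    return np.quantile(maxima, 1 - alpha)

# Synthetic vessel centres with a deliberate cluster near one corner.
rng = np.random.default_rng(1)
vessels = np.vstack([rng.uniform(0, 1, size=(200, 2)),
                     rng.normal(0.1, 0.02, size=(60, 2))])
counts, _, _ = np.histogram2d(vessels[:, 0], vessels[:, 1], bins=8,
                              range=[[0, 1], [0, 1]])
thr = hotspot_threshold(len(vessels), grid=8)
hotspots = counts > thr   # boolean map of statistically significant cells
```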
Weakly Supervised Dictionary Learning
NASA Astrophysics Data System (ADS)
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions, each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimates of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tailed distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
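The heavy-tailed-versus-Gaussian comparison can be illustrated by fitting both distributions and comparing log-likelihoods; the synthetic Cauchy "coefficients" below stand in for real wavelet data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "wavelet coefficients" (illustrative stand-in).
coeffs = stats.cauchy.rvs(loc=0.0, scale=1.5, size=5000, random_state=rng)

loc_c, scale_c = stats.cauchy.fit(coeffs)     # heavy-tailed model
mu, sd = stats.norm.fit(coeffs)               # Gaussian model
ll_cauchy = stats.cauchy.logpdf(coeffs, loc_c, scale_c).sum()
ll_gauss = stats.norm.logpdf(coeffs, mu, sd).sum()
# The Cauchy fit wins decisively: the slow tail decay the abstract
# describes is exactly what the Gaussian likelihood cannot accommodate.
```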
The analytical design of spectral measurements for multispectral remote sensor systems
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)
1979-01-01
The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.
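For a zero-mean Gaussian process, the minimum expected mean-square representation is obtained from the leading eigenvectors of the covariance (a Karhunen-Loève construction). The sketch below assumes a simple exponential covariance, not the empirical scene statistics used in the study.

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)  # assumed covariance

evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

r = 10
Vr = evecs[:, :r]
# Projecting onto the top-r eigenvectors minimizes the expected
# mean-square error, which equals the sum of the discarded eigenvalues
# (normalized here per sample point).
expected_mse = evals[r:].sum() / n

# Empirical check: simulate realizations and reconstruct from r terms.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, n)) @ (evecs * np.sqrt(evals)).T
emp_mse = np.mean((X - X @ Vr @ Vr.T) ** 2)
```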
Building on prior knowledge without building it in.
Hansen, Steven S; Lampinen, Andrew K; Suri, Gaurav; McClelland, James L
2017-01-01
Lake et al. propose that people rely on "start-up software," "causal models," and "intuitive theories" built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse representation-based visual trackers mostly suffer from being time-consuming and lacking robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. First, ELM is used to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background content, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term for the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be calculated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker.
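The ELM stage can be sketched in a few lines: a fixed random hidden layer followed by closed-form ridge regression for the output weights. The "target versus background" data are fabricated and unrelated to the tracker's actual candidate samples.

```python
import numpy as np

def elm_train(X, y, n_hidden=200, reg=1e-3, rng=None):
    """Extreme learning machine: random fixed hidden layer, output
    weights solved in closed form by ridge regression."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class data: "target" vs "background" samples, labels +/-1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 1.0, size=(200, 10)),
               rng.normal(-1.0, 1.0, size=(200, 10))])
y = np.concatenate([np.ones(200), -np.ones(200)])
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

Because training reduces to one linear solve, the classifier is cheap enough to screen candidate samples before the more expensive sparse coding step.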
Automated Diagnosis Coding with Combined Text Representations.
Berndorfer, Stefan; Henriksson, Aron
2017-01-01
Automated diagnosis coding can be provided efficiently by learning predictive models from historical data; however, discriminating between thousands of codes while allowing a variable number of codes to be assigned is extremely difficult. Here, we explore various text representations and classification models for assigning ICD-9 codes to discharge summaries in MIMIC-III. It is shown that the relative effectiveness of the investigated representations depends on the frequency of the diagnosis code under consideration and that the best performance is obtained by combining models built using different representations.
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
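An analog cousin of HDA, the locally competitive algorithm (LCA), exhibits the same circuit structure: feedforward drive, lateral inhibition through the dictionary Gram matrix, and thresholding. This is a sketch of that relative, not the spiking HDA itself; the dictionary and sparse signal are synthetic.

```python
import numpy as np

def soft(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.05, dt=0.05, steps=2000):
    """Locally competitive network: each node integrates its input
    drive minus leak and inhibition from active neighbours; the
    thresholded states converge to a sparse (LASSO-style) code."""
    n = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(n)        # lateral inhibition weights
    drive = Phi.T @ x                  # feedforward drive
    u = np.zeros(n)
    for _ in range(steps):
        a = soft(u, lam)
        u += dt * (drive - u - G @ a)
    return soft(u, lam)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(32, 64))
Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm dictionary columns
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.0, -0.8, 0.6]
x = Phi @ a_true                       # signal with a 3-sparse code
a_hat = lca(x, Phi)
err = np.linalg.norm(Phi @ a_hat - x) / np.linalg.norm(x)
```

Replacing the analog outputs with quantized spikes on the inter-node channels is, loosely, the step from LCA toward the hybrid scheme the abstract describes.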
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.
Yang, Yimin; Wu, Q M Jonathan
2016-11-01
The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification. It offers competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-node architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning, and 2) experimental results on ten image datasets and 16 classification datasets show that the proposed ML-ELM with subnetwork nodes performs competitively with or much better than other conventional feature learning methods.
The Interaction between Semantic Representation and Episodic Memory.
Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen
2018-02-01
The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.
Learning semantic histopathological representation for basal cell carcinoma classification
NASA Astrophysics Data System (ADS)
Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo
2013-03-01
Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval, and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows us to describe histopathological concepts suitable for classification. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is, on average, almost 6% more sensitive than the bag-of-features representation.
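The distance-penalized co-occurrence step can be sketched as follows (the Gaussian decay, its width `sigma`, and the normalization are assumptions made for illustration; the abstract only specifies counting atom co-occurrences while penalizing spatial distance):

```python
import numpy as np

def cooccurrence(labels, positions, n_atoms, sigma=2.0):
    """Spatially penalized atom co-occurrence descriptor: each pair
    of patch labels contributes a weight that decays with the
    distance between the patches."""
    C = np.zeros((n_atoms, n_atoms))
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(positions[i] - positions[j])
            w = np.exp(-d**2 / (2.0 * sigma**2))   # distance penalty
            C[labels[i], labels[j]] += w
            C[labels[j], labels[i]] += w           # keep it symmetric
    return C / max(C.sum(), 1e-12)                 # normalized descriptor

# four patches assigned to 3 dictionary atoms, with 2-D positions
labels = np.array([0, 1, 0, 2])
positions = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [1.0, 1.0]])
C = cooccurrence(labels, positions, n_atoms=3)
print(C.shape)  # (3, 3)
```

The flattened matrix `C` would then serve as the image-level feature vector fed to the per-class SVM.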
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
Despite the availability of high-fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates on energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can explicitly handle equality and inequality constraints throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance, and it is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with Lunar and Solar attraction. Another example considers the optimization of a multiple asteroid rendezvous problem. In both cases, the ability of our proposed methodology to consider non-standard objective functions and constraints is illustrated.
Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuit. The collocation scheme and nonlinear programming algorithm presented in this work, complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.
Human inferior colliculus activity relates to individual differences in spoken language learning
Chandrasekaran, Bharath; Kraus, Nina
2012-01-01
A challenge to learning words of a foreign language is encoding nonnative phonemes, a process typically attributed to cortical circuitry. Using multimodal imaging methods [functional magnetic resonance imaging-adaptation (fMRI-A) and auditory brain stem responses (ABR)], we examined the extent to which pretraining pitch encoding in the inferior colliculus (IC), a primary midbrain structure, related to individual variability in learning to successfully use nonnative pitch patterns to distinguish words in American English-speaking adults. fMRI-A indexed the efficiency of pitch representation localized to the IC, whereas ABR quantified midbrain pitch-related activity with millisecond precision. In line with neural “sharpening” models, we found that efficient IC pitch pattern representation (indexed by fMRI) related to superior neural representation of pitch patterns (indexed by ABR), and consequently more successful word learning following sound-to-meaning training. Our results establish a critical role for the IC in speech-sound representation, consistent with the established role for the IC in the representation of communication signals in other animal models. PMID:22131377
Airplane Mesh Development with Grid Density Studies
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Baker, Timothy J.; Thomas, Scott D.; Lawrence, Scott L.; Rimlinger, Mark J.
1999-01-01
Automatic Grid Generation Wish List: Geometry handling, including CAD clean-up and mesh generation, remains a major bottleneck in the application of CFD methods. There is a pressing need for greater automation in several aspects of geometry preparation in order to reduce setup time and eliminate user intervention as much as possible. Starting from the CAD representation of a configuration, there may be holes or overlapping surfaces which require an intensive effort to establish cleanly abutting surface patches, and collections of many patches may need to be combined for more efficient use of the geometrical representation. Obtaining an accurate and suitable body-conforming grid with an adequate distribution of points throughout the flow field, for the flow conditions of interest, is often the most time-consuming task for complex CFD applications. There is a need for a clean, unambiguous definition of the CAD geometry. Ideally this would be carried out automatically by smart CAD clean-up software. One could also define a standard piecewise-smooth surface representation suitable for use by computational methods and then create software to translate between the various CAD descriptions and the standard representation. Surface meshing remains a time-consuming, user-intensive procedure. There is a need for automated surface meshing, requiring only minimal user intervention to define the overall density of mesh points. The surface mesher should produce well-shaped elements (triangles or quadrilaterals) whose size is determined initially according to the surface curvature with a minimum size for flat pieces, and later refined by the user in other regions if necessary. Present techniques for volume meshing all require some degree of user intervention. There is a need for fully automated and reliable volume mesh generation. In addition, it should be possible to create both surface and volume meshes that meet guaranteed measures of mesh quality (e.g. minimum and maximum angle, stretching ratios, etc.).
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. 
We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
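The construction of the isometric mapping F: M → A from finite samples is in the spirit of Isomap-style manifold learning. A compact sketch under that assumption (k-nearest-neighbor graph geodesics followed by classical MDS; the data set and parameters are illustrative, not the paper's microstructure samples):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap_embed(X, n_neighbors=6, d=1):
    """Isomap-style embedding: geodesic distances are approximated on
    a k-nearest-neighbor graph, then classical MDS recovers
    low-dimensional coordinates."""
    D = squareform(pdist(X))
    n = len(X)
    W = np.full((n, n), np.inf)                  # inf = no edge (dense csgraph)
    for i in range(n):
        nbrs = np.argsort(D[i])[: n_neighbors + 1]
        W[i, nbrs] = D[i, nbrs]
    W = np.minimum(W, W.T)                       # symmetrize the graph
    G = shortest_path(W, method="D")             # graph geodesics (Dijkstra)
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (G**2) @ H                    # classical MDS Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# samples lying on a 1-D curve embedded in R^3
t = np.linspace(0, 3, 40)
X = np.column_stack([np.cos(t), np.sin(t), t])
A = isomap_embed(X, d=1)
print(A.shape)  # (40, 1)
```

The recovered one-dimensional coordinate orders the samples along the curve, which is the sense in which A serves as a low-dimensional parametrization of M.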
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix-based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
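The core H-matrix idea, compressing admissible (well-separated) interaction blocks to low rank, can be illustrated on a single block. The sketch below uses a truncated SVD on a smooth 1/r Laplace-type kernel for clarity; practical H-matrix codes use cheaper factorizations such as ACA, and the oscillatory elastodynamic kernels studied in the paper exhibit larger ranks, which is exactly the limitation discussed:

```python
import numpy as np

def lowrank_block(A, tol=1e-8):
    """Compress one admissible interaction block by truncated SVD:
    keep only singular values above tol * sigma_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k]            # A ~= U @ V, rank k

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 3))            # source cluster
Y = rng.uniform(10.0, 11.0, (200, 3))          # well-separated target cluster
A = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
U, V = lowrank_block(A)
rank = U.shape[1]
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print(rank, err)                               # rank far below 200, tiny error
```

Storing `U` and `V` instead of the full 200 × 200 block is what makes the overall solver fast; for oscillatory kernels the achievable rank grows with frequency, motivating the frequency-range question posed above.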
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant-shape-parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions, s_{i,j}(u,v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. 
We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We investigate Richardson-extrapolation-based techniques to improve time-discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy-dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. We also analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
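The "pre-compute coefficients only once" idea of Topic 2 amounts to factorizing the tridiagonal matrix a single time and reusing the stored multipliers for every right-hand side. A generic Thomas-algorithm sketch of that pattern (not the author's MATLAB code; the matrix below is an arbitrary diagonally dominant example):

```python
import numpy as np

def thomas_factor(a, b, c):
    """Factorize a tridiagonal matrix (sub-, main-, super-diagonals
    a, b, c) once; the stored pivots bp and multipliers cp are then
    reused for every right-hand side."""
    n = len(b)
    bp = b.astype(float).copy()
    cp = np.zeros(n)
    for i in range(1, n):
        cp[i] = a[i] / bp[i - 1]            # elimination multiplier
        bp[i] = b[i] - cp[i] * c[i - 1]     # updated pivot
    return bp, cp

def thomas_solve(c, bp, cp, d):
    """Forward/backward substitution with the precomputed factors."""
    n = len(bp)
    y = d.astype(float).copy()
    for i in range(1, n):
        y[i] -= cp[i] * y[i - 1]
    x = np.empty(n)
    x[-1] = y[-1] / bp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - c[i] * x[i + 1]) / bp[i]
    return x

n = 6
a = np.r_[0.0, -np.ones(n - 1)]             # a[0] is unused
b = 4.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]             # c[-1] is unused
bp, cp = thomas_factor(a, b, c)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
rng = np.random.default_rng(0)
for _ in range(3):                          # many RHS, one factorization
    d = rng.standard_normal(n)
    x = thomas_solve(c, bp, cp, d)
    print(np.allclose(A @ x, d))            # True each time
```

Amortizing the O(n) factorization over all N right-hand-side columns is the source of the reported speed-up for large N.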
NASA Astrophysics Data System (ADS)
Pigazzini, M. S.; Bazilevs, Y.; Ellison, A.; Kim, H.
2017-11-01
In this two-part paper we introduce a new formulation for modeling progressive damage in laminated composite structures. We adopt a multi-layer modeling approach, based on isogeometric analysis (IGA), where each ply or lamina is represented by a spline surface, and modeled as a Kirchhoff-Love thin shell. Continuum damage mechanics is used to model intralaminar damage, and a new zero-thickness cohesive-interface formulation is introduced to model delamination and to permit laminate-level transverse shear compliance. In Part I of this series we focus on the presentation of the modeling framework, validation of the framework using standard Mode I and Mode II delamination tests, and assessment of its suitability for modeling thick laminates. In Part II of this series we focus on the application of the proposed framework to modeling and simulation of damage in composite laminates resulting from impact. The proposed approach has significant accuracy and efficiency advantages over existing methods for modeling impact damage. These stem from the use of IGA-based Kirchhoff-Love shells to represent the individual plies of the composite laminate, while the compliant cohesive interfaces enable transverse shear deformation of the laminate. Kirchhoff-Love shells give a faithful representation of the ply deformation behavior, and, unlike solids or traditional shear-deformable shells, do not suffer from transverse-shear locking in the limit of vanishing thickness. This, in combination with higher-order accurate and smooth representation of the shell midsurface displacement field, allows us to adopt relatively coarse in-plane discretizations without sacrificing solution accuracy. Furthermore, the thin-shell formulation employed does not use rotational degrees of freedom, which gives additional efficiency benefits relative to more standard shell formulations.
NASA Astrophysics Data System (ADS)
Bazilevs, Y.; Pigazzini, M. S.; Ellison, A.; Kim, H.
2017-11-01
In this two-part paper we introduce a new formulation for modeling progressive damage in laminated composite structures. We adopt a multi-layer modeling approach, based on Isogeometric Analysis (IGA), where each ply or lamina is represented by a spline surface, and modeled as a Kirchhoff-Love thin shell. Continuum Damage Mechanics is used to model intralaminar damage, and a new zero-thickness cohesive-interface formulation is introduced to model delamination and to permit laminate-level transverse shear compliance. In Part I of this series we focus on the presentation of the modeling framework, validation of the framework using standard Mode I and Mode II delamination tests, and assessment of its suitability for modeling thick laminates. In Part II of this series we focus on the application of the proposed framework to modeling and simulation of damage in composite laminates resulting from impact. The proposed approach has significant accuracy and efficiency advantages over existing methods for modeling impact damage. These stem from the use of IGA-based Kirchhoff-Love shells to represent the individual plies of the composite laminate, while the compliant cohesive interfaces enable transverse shear deformation of the laminate. Kirchhoff-Love shells give a faithful representation of the ply deformation behavior, and, unlike solids or traditional shear-deformable shells, do not suffer from transverse-shear locking in the limit of vanishing thickness. This, in combination with higher-order accurate and smooth representation of the shell midsurface displacement field, allows us to adopt relatively coarse in-plane discretizations without sacrificing solution accuracy. Furthermore, the thin-shell formulation employed does not use rotational degrees of freedom, which gives additional efficiency benefits relative to more standard shell formulations.
Code of Federal Regulations, 2013 CFR
2013-01-01
... certification by a person with knowledge of the facts that the representations made in the Petition are accurate... that the statements contained in the Petition are true and complete to the best of my knowledge. [Name...
Code of Federal Regulations, 2012 CFR
2012-01-01
... certification by a person with knowledge of the facts that the representations made in the Petition are accurate... that the statements contained in the Petition are true and complete to the best of my knowledge. [Name...
Code of Federal Regulations, 2011 CFR
2011-01-01
... certification by a person with knowledge of the facts that the representations made in the Petition are accurate... that the statements contained in the Petition are true and complete to the best of my knowledge. [Name...
Code of Federal Regulations, 2010 CFR
2010-01-01
... certification by a person with knowledge of the facts that the representations made in the Petition are accurate... that the statements contained in the Petition are true and complete to the best of my knowledge. [Name...
Implicit Self-Importance in an Interpersonal Pronoun Categorization Task.
Fetterman, Adam K; Robinson, Michael D; Gilbertson, Elizabeth P
2014-06-01
Object relations theories emphasize the manner in which the salience/importance of implicit representations of self and other guide interpersonal functioning. Two studies and a pilot test (total N = 304) sought to model such representations. In dyadic contexts, the self is a "you" and the other is a "me", as verified in a pilot test. Study 1 then used a simple categorization task and found evidence for implicit self-importance: The pronoun "you" was categorized more quickly and accurately when presented in a larger font size, whereas the pronoun "me" was categorized more quickly and accurately when presented in a smaller font size. Study 2 showed that this pattern possesses value in understanding individual differences in interpersonal functioning. As predicted, arrogant people scored higher in implicit self-importance in the paradigm. Findings are discussed from the perspective of dyadic interpersonal dynamics.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are speculated, highlighting the need for improved metrology and awareness.
Stochastic Analysis and Design of Heterogeneous Microstructural Materials System
NASA Astrophysics Data System (ADS)
Xu, Hongyi
Advanced materials systems refer to new materials that are comprised of multiple traditional constituents but complex microstructure morphologies, which lead to superior properties over conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructural materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, material informatics, and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion, and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity. The properties of Representative Volume Elements (RVEs) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling, and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. 
Material informatics is studied to efficiently reduce the dimension of microstructure design space. This dissertation develops a machine learning-based methodology to identify the key microstructure descriptors that highly impact properties of interest. In uncertainty quantification, a comparative study on data-driven random process models is conducted to provide guidance for choosing the most accurate model in statistical uncertainty quantification. Two new goodness-of-fit metrics are developed to provide quantitative measurements of random process models' accuracy. The benefits of the proposed methods are demonstrated by the example of designing the microstructure of polymer nanocomposites. This dissertation provides material-generic, intelligent modeling/design methodologies and techniques to accelerate the process of analyzing and designing new microstructural materials system.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained by solving a system of Volterra equations of the first kind. In addition, we develop an ad hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option, and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
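For intuition, a first-kind Volterra equation discretized by a midpoint product-integration rule reduces to a lower-triangular linear system solvable by forward substitution. The sketch below illustrates this on a toy kernel (the authors' regularization procedure and the time-changed Brownian setting are not reproduced; first-kind equations are ill-posed in general, which is why regularization is needed in practice):

```python
import numpy as np

def volterra_first_kind(K, g, T, n):
    """Midpoint product-integration discretization of
    int_0^t K(t, s) f(s) ds = g(t): collocating at t_i = i*h with
    midpoints s_j gives a lower-triangular system for f(s_j)."""
    h = T / n
    t = h * np.arange(1, n + 1)                  # collocation points
    s = h * (np.arange(1, n + 1) - 0.5)          # quadrature midpoints
    A = np.tril(h * K(t[:, None], s[None, :]))   # lower-triangular system
    return s, np.linalg.solve(A, g(t))

# toy problem: K = 1, g(t) = t^2 / 2, exact solution f(s) = s
s, f = volterra_first_kind(lambda t, s: np.ones_like(t * s),
                           lambda t: t**2 / 2.0, T=1.0, n=50)
print(np.max(np.abs(f - s)))   # tiny for this benign kernel
```

For this particular kernel the midpoint rule happens to be exact, so the recovered `f` matches the true solution to rounding error; rougher kernels or noisy data would amplify errors and call for the kind of regularization the paper develops.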
Calculations of rate constants for the three-body recombination of H2 in the presence of H2
NASA Technical Reports Server (NTRS)
Schwenke, David W.
1988-01-01
A new global potential energy hypersurface for H2 + H2 is constructed and quasiclassical trajectory calculations are performed using the resonance complex theory and energy transfer mechanism to estimate the rate of three-body recombination over the temperature range 100 to 5000 K. The new potential is a faithful representation of ab initio electronic structure calculations, is unchanged under the operation of exchanging H atoms, and reproduces the accurate H3 potential as one H atom is pulled away. Included in the fitting procedure are geometries expected to be important when one H2 is near or above the dissociation limit. The dynamics calculations explicitly include the motion of all four atoms and are performed efficiently using a vectorized variable-stepsize integrator. The predicted rate constants are approximately a factor of two smaller than experimental estimates over a broad temperature range.
Animating streamlines with repeated asymmetric patterns for steady flow visualization
NASA Astrophysics Data System (ADS)
Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee
2012-01-01
Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.
Impacts of controlling biomass burning emissions on wintertime carbonaceous aerosol in Europe
NASA Astrophysics Data System (ADS)
Fountoukis, C.; Butler, T.; Lawrence, M. G.; Denier van der Gon, H. A. C.; Visschedijk, A. J. H.; Charalampidis, P.; Pilinis, C.; Pandis, S. N.
2014-04-01
We use a 3-D regional chemical transport model, with the latest advancements in the organic aerosol (OA) treatment, and an updated emission inventory for wood combustion to study the organic aerosol change in response to the replacement of current residential wood combustion technologies with pellet stoves. Simulations show a large decrease of fine organic aerosol (more than 60%) in urban and suburban areas during winter and decreases of 30-50% in elemental carbon levels in large parts of Europe. There is also a considerable decrease (around 40%) of oxidized OA, mostly in rural and remote regions. Total PM2.5 mass is predicted to decrease by 15-40% on average during the winter in continental Europe. Accurate representation of the intermediate volatility precursors of organic aerosol in the emission inventory is crucial in assessing the efficiency of such abatement strategies.
Evaluating WRF Simulations of Urban Boundary Layer Processes during DISCOVER-AQ
NASA Astrophysics Data System (ADS)
Hegarty, J. D.; Henderson, J.; Lewis, J. R.; McGrath-Spangler, E. L.; Scarino, A. J.; Ferrare, R. A.; DeCola, P.; Welton, E. J.
2015-12-01
The accurate representation of processes in the planetary boundary layer (PBL) in meteorological models is of prime importance to air quality and greenhouse gas simulations as it governs the depth to which surface emissions are vertically mixed and influences the efficiency by which they are transported downwind. In this work we evaluate high resolution (~1 km) WRF simulations of PBL processes in the Washington DC - Baltimore and Houston urban areas during the respective DISCOVER-AQ 2011 and 2013 field campaigns using MPLNET micro-pulse lidar (MPL), mini-MPL, airborne high spectral resolution lidar (HSRL), Doppler wind profiler and CALIPSO satellite measurements along with complementary surface and aircraft measurements. We will discuss how well WRF simulates the spatiotemporal variability of the PBL height in the urban areas and the development of fine-scale meteorological features such as bay and sea breezes that influence the air quality of the urban areas studied.
Statistical Study of the Properties of Magnetosheath Lion Roars using MMS observations
NASA Astrophysics Data System (ADS)
Giagkiozis, S.; Wilson, L. B., III
2017-12-01
Intense whistler-mode waves of very short duration are frequently encountered in the magnetosheath. These emissions have been linked to mirror mode waves and the Earth's bow shock. They can efficiently transfer energy between different plasma populations. These electromagnetic waves are commonly referred to as Lion roars (LR), due to the sound generated when the signals are sonified. They are generally observed during dips of the magnetic field that are anti-correlated with increases of density. Using MMS data, we have identified more than 1750 individual LR burst intervals. Each emission was band-pass filtered and further split into >35,000 subintervals, for which the direction of propagation and the polarization were calculated. The analysis of subinterval properties provides a more accurate representation of their true nature than the more commonly used time- and frequency-averaged dynamic spectra analysis. The results of the statistical analysis of the wave properties will be presented.
Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C.; Beymer, David; Rangarajan, Anand
2010-01-01
In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions – specifically Mixture of Gaussians – estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes. PMID:20426043
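For readers unfamiliar with the measure, the closed-form evaluation can be illustrated in one dimension using the quadratic (α = 2) Rényi entropy, where the integral of a product of two Gaussians is itself a Gaussian density. The sketch below is illustrative only (the function names and equal-weight pooling scheme are assumptions, not taken from the paper); note that for α > 1 the quantity is not guaranteed non-negative in general, though it behaves as a divergence for well-separated mixtures:

```python
import math

def gauss(x, mu, var):
    # N(x; mu, var) density
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def renyi2_entropy(weights, mus, variances):
    # H2(p) = -log ∫ p(x)^2 dx; for a GMM, ∫ N_i N_j dx = N(mu_i; mu_j, var_i + var_j)
    s = 0.0
    for wi, mi, vi in zip(weights, mus, variances):
        for wj, mj, vj in zip(weights, mus, variances):
            s += wi * wj * gauss(mi, mj, vi + vj)
    return -math.log(s)

def jr_divergence(gmms):
    # Jensen-Rényi style divergence: entropy of the pooled equal-weight
    # mixture minus the mean entropy of the individual mixtures
    n = len(gmms)
    pw = [w / n for g in gmms for w in g[0]]
    pm = [m for g in gmms for m in g[1]]
    pv = [v for g in gmms for v in g[2]]
    return renyi2_entropy(pw, pm, pv) - sum(renyi2_entropy(*g) for g in gmms) / n
```

A registration algorithm would minimize this quantity with respect to the transformation parameters applied to the component means, which is where the analytic gradient described in the abstract comes in.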
Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C; Beymer, David; Rangarajan, Anand
2009-01-01
In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions--specifically Mixture of Gaussians--estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes.
3D Visualization of Cooperative Trajectories
NASA Technical Reports Server (NTRS)
Schaefer, John A.
2014-01-01
Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trail bird or aircraft, and efficiency improves. The major consequence of this is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, the aircraft are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however, it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
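The core idea of digital partitioning can be sketched in a few lines: each operand is decomposed into low-precision digits, the digit-plane products are performed at the limited accuracy an analog processor can deliver, and the exact result is recovered by recombining partial products with digital weights. The radix and digit count below are illustrative choices, not taken from the paper:

```python
import numpy as np

BASE = 4      # digit radix: each digit needs only 2 bits of analog accuracy
NDIGITS = 4   # 4 base-4 digits cover integers 0..255

def to_digits(a, base=BASE, ndig=NDIGITS):
    # decompose a nonnegative integer array into digits, least significant first
    digits = []
    for _ in range(ndig):
        digits.append(a % base)
        a = a // base
    return digits

def mat_vec_partitioned(M, v):
    # each digit-plane product is a low-precision "analog" operation; the
    # exact result is recovered digitally by weighting with powers of the base
    out = np.zeros(M.shape[0], dtype=np.int64)
    for i, Mi in enumerate(to_digits(M)):
        for j, vj in enumerate(to_digits(v)):
            partial = Mi.astype(float) @ vj.astype(float)  # low-accuracy step
            out += (BASE ** (i + j)) * np.rint(partial).astype(np.int64)
    return out
```

Each partial product involves only small digit values, so the analog stage never needs more dynamic range than one digit-pair product can produce.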
Assessment of auditory distance in a territorial songbird: accurate feat or rule of thumb?
Naguib; Klump; Hillmann; Grießmann; Teige
2000-04-01
Territorial passerines presumably benefit from their ability to use auditory cues to judge the distance to singing conspecifics, by increasing the efficiency of their territorial defence. Here, we report data on the approach of male territorial chaffinches, Fringilla coelebs, to a loudspeaker broadcasting conspecific song simulating a rival at various distances by different amounts of song degradation. Songs were degraded digitally in a computer-simulated forest emulating distances of 0, 20, 40, 80 and 120 m. The approach distance of chaffinches towards the loudspeaker increased with increasing amounts of degradation, indicating a perceptual representation of differences in distance of a sound source. We discuss the interindividual variation of male responses with respect to constraints resulting from random variation of ranging cues provided by the environmental song degradation, the perception accuracy and the decision rules. Copyright 2000 The Association for the Study of Animal Behaviour.
NASA Astrophysics Data System (ADS)
Cardinale, T.; Valva, R.; Lucarelli, M.
2013-02-01
The Summer School of Surveying and 3D modelling in Paestum was an opportunity to explore the use of innovative tools and advanced techniques in the design, implementation and management of surveys of historic and artistic complexes. In general, such methods are used specifically for the development and management of vulnerability maps of existing heritage, and thus for the preventive conservation and valorisation of the built environment. The accurate detection of risk situations and the systematic promotion of highly selected and minimally invasive maintenance practices mean that restoration and the cycles of intervention can be optimized for efficiency, with clear benefits from economic and cultural points of view. The group worked on the survey and 3D modelling of the Temple of Neptune, the Sphinx and the Metope of the Archaeological Park in Paestum.
Noisy metrology: a saturable lower bound on quantum Fisher information
NASA Astrophysics Data System (ADS)
Yousefjani, R.; Salimi, S.; Khorashad, A. S.
2017-06-01
In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound (equivalently a lower bound on the quantum Fisher information) for precision of estimation is introduced. Unlike the bounds previously introduced in the literature, the upper bound is saturable and yields a practical instruction to estimate the parameter through preparing the optimal initial state and optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward and requires only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. Reliability and efficiency of the method to predict the ultimate precision limit are demonstrated by three main examples.
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
NASA Technical Reports Server (NTRS)
Shih, Ming H.; Soni, Bharat K.
1993-01-01
The issue of time efficiency in grid generation is addressed by developing a user-friendly graphical interface for interactive/automatic construction of structured grids around complex turbomachinery/axisymmetric configurations. Accurate, high-fidelity geometry modeling is accomplished by adopting the nonuniform rational B-spline (NURBS) representation. A customized interactive grid generation code, TIGER, has been developed to facilitate the grid generation process for complicated internal, external, and internal-external turbomachinery field simulations. The FORMS Library is utilized to build the user-friendly graphical interface. The algorithm allows a user to redistribute grid points interactively on curves/surfaces using the NURBS formulation with accurate geometric definition. TIGER's features include multiblock, multiduct/shroud, multiblade row, uneven blade count, and patched/overlapping block interfaces. It has been applied to generate grids for various complicated turbomachinery geometries, as well as rocket and missile configurations.
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
Random Versus Blocked Practice to Enhance Mental Representation in Golf Putting.
Fazeli, Davoud; Taheri, HamidReza; Saberi Kakhki, Alireza
2017-06-01
The purpose of this study was to investigate changes in mental representation from either random or blocked practice when engaged in golf putting. Thirty participants were randomly assigned to random practice, blocked practice, and no-practice groups. First, we measured novice golfers' initial mental representation levels and required them to perform 18 putting trials as a pre-test. We then asked random and blocked groups to practice in accordance with their group assignment for six consecutive days (10 blocks each day, 18 trials each). A week after the last practice session, we re-measured all participants' final mental representation levels and required them to perform 18 putting trials to evaluate learning retention through practice. While those engaged in the random practice method putted more poorly during acquisition (i.e., practice) than those in blocked practice, the random practice group experienced more accurate retention during the final putting trials, and they showed a more structured mental representation than those in blocked practice, one that was more similar to that of skilled golfers. These results support the acquisition of a rich mental representation through random versus blocked practice.
A scale-invariant internal representation of time.
Shankar, Karthik H; Howard, Marc W
2012-01-01
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
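The two-stage construction above, a bank of leaky integrators computing a Laplace transform of the stimulus history followed by a linear operator approximating its inverse, can be sketched numerically via Post's inversion formula. Grid sizes and the order k below are illustrative choices, not the authors' parameters; this approximation peaks near k/(k+1) times the true elapsed time, with the scale-invariant spread the abstract describes:

```python
import math
import numpy as np

def leaky_integrators(stimulus, s_vals, dt):
    # dF/dt = -s F + f(t): each unit holds the Laplace transform of the
    # stimulus history at its own decay rate s
    F = np.zeros_like(s_vals)
    for f_t in stimulus:
        F += dt * (-s_vals * F + f_t)
    return F

def approx_inverse(F, s_vals, k=4):
    # Post's formula: T ≈ ((-1)^k / k!) s^(k+1) d^k F / ds^k, read at tau = k/s;
    # the k-th derivative is taken numerically on the s grid
    d = F.copy()
    for _ in range(k):
        d = np.gradient(d, s_vals)
    T = ((-1) ** k / math.factorial(k)) * s_vals ** (k + 1) * d
    return k / s_vals, T

# a brief pulse delivered one time unit in the past
dt, s_vals = 0.001, np.linspace(0.5, 20.0, 400)
stimulus = np.zeros(1000)
stimulus[0] = 1.0 / dt
tau, T = approx_inverse(leaky_integrators(stimulus, s_vals, dt), s_vals)
inner = slice(8, -8)  # drop boundary points where the numerical derivative is poor
peak = tau[inner][np.argmax(T[inner])]  # expected near k/(k+1) * 1.0 = 0.8
```

Units tuned to different decay rates s thus behave like "time cells" whose reconstructed activity peaks at latency roughly proportional to k/s.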
Reduced set averaging of face identity in children and adolescents with autism.
Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina
2015-01-01
Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
3D hierarchical spatial representation and memory of multimodal sensory data
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Dow, Paul A.; Huber, David J.
2009-04-01
This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. 
When controlling various machine/robot degrees of freedom, the desired movements and action can be computed from these different levels in the hierarchy. The most basic embodiment of this machine could be a pan-tilt camera system, an array of microphones, a machine with arm/hand like structure or/and a robot with some or all of the above capabilities. We describe the approach, system and present preliminary results on a real-robotic platform.
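At its core, converting between adjacent levels of such a spatial hierarchy (e.g., head-centered to body-centered coordinates) is a rigid transform; the function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def head_to_body(p_head, R_head, t_head):
    # a 3-D point in head-centered coordinates, re-expressed in the
    # body-centered frame given the head's rotation R and offset t
    return R_head @ p_head + t_head

def body_to_head(p_body, R_head, t_head):
    # inverse mapping, back down the hierarchy
    return R_head.T @ (p_body - t_head)
```

In a full hierarchy, analogous transforms chain eye-to-head, head-to-body, and body-to-world frames, so a sensed location can be stored or recalled at whichever level the intended action requires.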
Fidelity of the representation of value in decision-making
Dowding, Ben A.
2017-01-01
The ability to make optimal decisions depends on evaluating the expected rewards associated with different potential actions. This process is critically dependent on the fidelity with which reward value information can be maintained in the nervous system. Here we directly probe the fidelity of value representation following a standard reinforcement learning task. The results demonstrate a previously-unrecognized bias in the representation of value: extreme reward values, both low and high, are stored significantly more accurately and precisely than intermediate rewards. The symmetry between low and high rewards persisted despite a substantially higher frequency of exposure to high rewards, resulting from preferential exploitation of more rewarding options. The observed variation in fidelity of value representation retrospectively predicted performance on the reinforcement learning task, demonstrating that the bias in representation has an impact on decision-making. A second experiment in which one or other extreme-valued option was omitted from the learning sequence showed that representational fidelity is primarily determined by the relative position of an encoded value on the scale of rewards experienced during learning. Both variability and guessing decreased with the reduction in the number of options, consistent with allocation of a limited representational resource. These findings have implications for existing models of reward-based learning, which typically assume defect-free representation of reward value. PMID:28248958
The Role of Task Understanding on Younger and Older Adults' Performance.
Frank, David J; Touron, Dayna R
2016-12-16
Age-related performance decrements have been linked to inferior strategic choices. Strategy selection models argue that accurate task representations are necessary for choosing appropriate strategies. But no studies to date have compared task representations in younger and older adults. Metacognition research suggests age-related deficits in updating and utilizing strategy knowledge, but other research suggests age-related sparing when information can be consolidated into a coherent mental model. Study 1 validated the use of concept mapping as a tool for measuring task representation accuracy. Study 2 measured task representations before and after a complex strategic task to test for age-related decrements in task representation formation and updating. Task representation accuracy and task performance were equivalent across age groups. Better task representations were related to better performance. However, task representation scores remained fairly stable over the task with minimal evidence of updating. Our findings mirror those in the mental model literature suggesting age-related sparing of strategy use when information can be integrated into a coherent mental model. Future research should manipulate the presence of a unifying context to better evaluate this hypothesis. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
With age comes representational wisdom in social signals.
van Rijsbergen, Nicola; Jaworska, Katarzyna; Rousselet, Guillaume A; Schyns, Philippe G
2014-12-01
In an increasingly aging society, age has become a foundational dimension of social grouping broadly targeted by advertising and governmental policies. However, perception of old age induces mainly strong negative social biases. To characterize their cognitive and perceptual foundations, we modeled the mental representations of faces associated with three age groups (young age, middle age, and old age), in younger and older participants. We then validated the accuracy of each mental representation of age with independent validators. Using statistical image processing, we identified the features of mental representations that predict perceived age. Here, we show that whereas younger people mentally dichotomize aging into two groups, themselves (younger) and others (older), older participants faithfully represent the features of young age, middle age, and old age, with richer representations of all considered ages. Our results demonstrate that, contrary to popular public belief, older minds depict socially relevant information more accurately than their younger counterparts. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
A hierarchical structure for representing and learning fuzzy rules
NASA Technical Reports Server (NTRS)
Yager, Ronald R.
1993-01-01
Yager provides an example in which the flat representation of fuzzy if-then rules leads to unsatisfactory results. Consider a rule base consisting of two rules: if U is 12 then V is 29; if U is (10-15) then V is (25-30). If U = 12, we would get V is G, where G = (25-30). The application of the defuzzification process leads to a selection of V = 27.5. Thus we see that the very specific instruction was not followed. The problem with the technique used is that the most specific information was swamped by the less specific information. In this paper we provide a new structure for the representation of fuzzy if-then rules. The representational form introduced here is called a Hierarchical Prioritized Structure (HPS) representation. Most importantly, in addition to overcoming the problem illustrated in the previous example, this HPS representation has an inherent capability to emulate the learning of general rules and provides a reasonably accurate cognitive mapping of how human beings store information.
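The priority mechanism can be caricatured in crisp form (the actual HPS aggregates fuzzy sets level by level; this sketch keeps only the essential "specific before general" ordering, using the two rules from the example with the general rule's consequent pre-defuzzified to 27.5):

```python
def hps_infer(u, levels):
    # levels are ordered most-specific first; once a level fires, more
    # general rules at lower priority cannot swamp its answer
    for rules in levels:
        for antecedent, consequent in rules:
            if antecedent(u):
                return consequent
    return None

# the two rules from the example, with the specific rule at the top level
levels = [
    [(lambda u: u == 12, 29.0)],            # if U is 12 then V is 29
    [(lambda u: 10 <= u <= 15, 27.5)],      # if U is (10-15) then V is (25-30)
]
```

Unlike the flat rule base, the query U = 12 now returns 29 rather than the defuzzified 27.5, because the general rule is consulted only when no higher-priority rule applies.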
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
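The principal-component representation of calibration uncertainty can be sketched as follows; the synthetic curves, component count, and function names are illustrative assumptions, not the instrument's actual calibration products:

```python
import numpy as np

def pca_basis(curves, n_comp):
    # curves: (n_sim, n_energy) simulated effective-area realizations;
    # returns the mean curve, top components, and per-component std devs
    mean = curves.mean(axis=0)
    _, S, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    return mean, Vt[:n_comp], S[:n_comp] / np.sqrt(len(curves) - 1)

def draw_effective_area(mean, comps, scales, rng):
    # sample one plausible calibration curve: mean + sum_k e_k * sigma_k * v_k
    return mean + (rng.standard_normal(len(scales)) * scales) @ comps
```

A fully Bayesian sampler could then treat the handful of component coefficients e_k as extra parameters, letting the spectral data update the effective area alongside the source model.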
An ellipsoidal representation of human hand anthropometry
NASA Technical Reports Server (NTRS)
Buchholz, Bryan; Armstrong, Thomas J.
1991-01-01
Anthropometric data concerning the geometry of the hand's surface are presently modeled as a function of gross external hand measurements; an effort is made to evaluate the accuracy with which ellipsoids describe the geometry of the hand segments. Graphical comparisons indicate that differences between the ellipsoidal approximations and the breadth and depth measurements were greatest near the joints. On the basis of the present data, a set of overlapping ellipsoids could furnish a more accurate representation of hand geometry for biomechanical models that employ ellipsoidal segment geometry.
Boyer, C; Baujard, V; Scherrer, J R
2001-01-01
Any new user of the Internet may think that retrieving a relevant document is an easy task, especially given the wealth of sources available on this medium, but this is not the case. Even experienced users have difficulty formulating the right query to make the most of a search tool and efficiently obtain an accurate result. The goal of this work is to reduce the time and energy necessary to search for and locate medical and health information. To reach this goal we have developed HONselect [1]. The aim of HONselect is not only to improve efficiency in retrieving documents but also to respond to an increased need for a selection of relevant and accurate documents from a breadth of various knowledge databases including scientific bibliographical references, clinical trials, daily news, multimedia illustrations, conferences, forums, Web sites, clinical cases, and others. The authors based their approach on knowledge representation using the National Library of Medicine's Medical Subject Headings (NLM, MeSH) vocabulary and classification [2,3]. The innovation is to propose a multilingual "one-stop searching" (one Web interface to databases currently in English, French and German) with full navigational and connectivity capabilities. The user may choose from a given selection of related terms the one that best suits their search, navigate the term's hierarchical tree, and directly access a selection of documents from high quality knowledge suppliers such as the MEDLINE database, the NLM's ClinicalTrials.gov server, the NewsPage's daily news, the HON's media gallery, conference listings and MedHunt's Web sites [4, 5, 6, 7, 8, 9]. HONselect, developed by HON, a non-profit organisation [10], is a free, openly available multilingual online tool based on the MeSH thesaurus to index, select, retrieve and display accurate, up to date, high-level and quality documents.
An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping
NASA Astrophysics Data System (ADS)
Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare
2017-04-01
Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and a computationally efficient grid, respectively.
For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
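The idea of the adaptive grid above can be illustrated with a short sketch. The 25 km near-field threshold echoes the abstract, but the other thresholds and the transect logic here are entirely hypothetical, not the authors' calibration:

```python
# Illustrative sketch of an adaptive grid: cell size grows with
# distance from the receiver.  The 80 km threshold and step logic are
# hypothetical; in the paper, thresholds come from where propagation
# loss becomes uniform across a cell.

def cell_size_km(distance_km):
    """Return the grid cell size (km) to use at a given range."""
    if distance_km < 25:      # near field: fine resolution
        return 1
    elif distance_km < 80:    # mid field: intermediate resolution
        return 2
    else:                     # far field: coarse resolution
        return 5

def count_model_runs(max_range_km):
    """Count model executions along a 1-D transect, one per cell."""
    runs, d = 0, 0.0
    while d < max_range_km:
        runs += 1
        d += cell_size_km(d)
    return runs

# A uniform 1 km transect out to 160 km needs 160 runs; the adaptive
# spacing needs 25 fine + 28 medium + 16 coarse = 69 runs.
print(count_model_runs(160))
```

The same idea applied in two dimensions yields the several-fold reduction in model executions reported in the abstract.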
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution
Song, Yantao; Wu, Guorong; Sun, Quansen; Bahrami, Khosro; Li, Chunming; Shen, Dinggang
2015-01-01
Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; these generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the representation coefficients estimated in the image domain may not be optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results compared to counterpart methods with a single-layer dictionary. PMID:26942233
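The conventional single-layer fusion that this framework builds on can be sketched as follows. The similarity kernel and synthetic patches are illustrative stand-ins, not the authors' implementation:

```python
# Minimal sketch of patch-based label fusion: weight atlas patches by
# image-domain similarity to the target patch, then reuse the same
# weights as votes in the label domain.  Data are synthetic; the
# paper's contribution is to refine these weights layer by layer.
import math
import random

random.seed(0)
# Ten atlas patches (16-dim, flattened 4x4); first five labelled 0.
atlas = [[random.gauss(0, 1) for _ in range(16)] for _ in range(10)]
labels = [0] * 5 + [1] * 5
# Target patch: a noisy copy of atlas patch 7 (label 1).
target = [v + random.gauss(0, 0.05) for v in atlas[7]]

# Image-domain weights from patch similarity (nonlocal-means kernel).
dists = [sum((a - t) ** 2 for a, t in zip(p, target)) for p in atlas]
h = min(dists)
weights = [math.exp(-d / h) for d in dists]

# Label-domain fusion: the same coefficients vote for labels.
votes = {0: 0.0, 1: 0.0}
for w, lab in zip(weights, labels):
    votes[lab] += w
predicted = max(votes, key=votes.get)
print(predicted)   # patch 7 dominates the weights, so the label is 1
```

The gap the paper targets is visible here: the weights are chosen purely for image-domain fit, yet are applied unchanged in the label domain.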
Impact of representation of hydraulic structures in modelling a Severn barrage
NASA Astrophysics Data System (ADS)
Bray, Samuel; Ahmadian, Reza; Falconer, Roger A.
2016-04-01
In this study, enhancements to the numerical representation of sluice gates and turbines were made to the hydro-environmental model Environmental Fluid Dynamics Code (EFDC), and applied to the Severn Tidal Power Group Cardiff-Weston Barrage. The extended domain of the EFDC Continental Shelf Model (CSM) allows far-field hydrodynamic impact assessment of the Severn Barrage, pre- and post-enhancement, to demonstrate the importance of accurate hydraulic structure representation. The enhancements were found to significantly affect peak water levels in the Bristol Channel, reducing levels by nearly 1 m in some areas, and even to affect predictions as far afield as the west coast of Scotland, albeit to a far lesser extent. The model was tested for sensitivity to changes in the discharge coefficient, Cd, used in calculating discharge through sluice gates and turbines. The performance of the Severn Barrage was found not to be sensitive to changes in the Cd value; any sensitivity is mitigated by the continual, rather than instantaneous, discharge across the structure. The EFDC CSM can now be said to predict the impacts of tidal range proposals more accurately, and the investigation of sensitivity to Cd improves confidence in the modelling results, despite the uncertainty in this coefficient.
A link prediction approach to cancer drug sensitivity prediction.
Turki, Turki; Wei, Zhi
2017-10-03
Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. The problem persists in part because many available drug sensitivity prediction algorithms consider neither higher-quality cancer cell lines nor new feature representations, both of which lead to more accurate prediction of drug responses. By predicting accurate drug responses to cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal in precision medicine. In this paper, we model cancer drug sensitivity as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC), with statistically significant results. We also propose a link prediction approach to obtain a new feature representation. Compared with an existing approach, the results show that incorporating the new feature representation into the link prediction algorithms significantly improves performance.
Reliability in the Location of Hindlimb Motor Representations in Fischer-344 Rats
Frost, Shawn B.; Iliakova, Maria; Dunham, Caleb; Barbay, Scott; Arnold, Paul; Nudo, Randolph J.
2014-01-01
Object The purpose of the present study was to determine the feasibility of using a common laboratory rat strain for locating cortical motor representations of the hindlimb reliably. Methods Intracortical microstimulation (ICMS) techniques were used to derive detailed maps of the hindlimb motor representations in six adult Fischer-344 rats. Results The organization of the hindlimb movement representation, while variable across individuals in topographic detail, displayed several commonalities. The hindlimb representation was positioned posterior to the forelimb motor representation and posterolateral to the motor trunk representation. The areal extent of the hindlimb representation across the cortical surface averaged 2.00 ± 0.50 mm². Superimposing individual maps revealed an overlapping area measuring 0.35 mm², indicating that the location of the hindlimb representation can be predicted reliably based on stereotactic coordinates. Across the sample of rats, the hindlimb representation was found 1.25–3.75 mm posterior to Bregma, with an average center location approximately 2.6 mm posterior to Bregma. Likewise, the hindlimb representation was found 1–3.25 mm lateral to the midline, with an average center location approximately 2 mm lateral to midline. Conclusions The location of the cortical hindlimb motor representation in Fischer-344 rats can be reliably located based on its stereotactic position posterior to Bregma and lateral to the longitudinal skull suture at midline. The ability to accurately predict the cortical localization of functional hindlimb territories in a rodent model is important, as such animal models are being used increasingly in the development of brain-computer interfaces for restoration of function after spinal cord injury. PMID:23725395
Roldan, Stephanie M.
2017-01-01
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538
Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.
Wang, Xiudong; Gu, Yuantao
2017-05-10
This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. With cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms on both recognition accuracy and computational efficiency.
Multiple Sparse Representations Classification
Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik
2015-01-01
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods.
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
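The SRC decision rule described above (code a patch against each class dictionary, assign it to the class with minimum residual energy) can be sketched as follows. For brevity this uses tiny two-atom dictionaries and plain least squares in place of true sparse coding, so it is an illustration of the decision rule, not of the authors' mSRC method:

```python
# Sketch of the SRC decision rule: represent a sample against each
# class dictionary and pick the class with the smallest residual.
# Real SRC uses large overcomplete dictionaries and sparse coding;
# two-atom dictionaries and least squares stand in here.
import math
import random

random.seed(1)
dim = 16

def rand_vec():
    return [random.gauss(0, 1) for _ in range(dim)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

dicts = {"class_A": [rand_vec(), rand_vec()],
         "class_B": [rand_vec(), rand_vec()]}

# A test sample generated from class_B's atoms:
b1, b2 = dicts["class_B"]
sample = [0.7 * x + 1.3 * y for x, y in zip(b1, b2)]

def residual(atoms, y):
    """Least-squares residual of y against span{atoms} (2 atoms)."""
    d1, d2 = atoms
    # Solve the 2x2 normal equations G c = rhs by hand.
    g11, g12, g22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    r1, r2 = dot(d1, y), dot(d2, y)
    det = g11 * g22 - g12 * g12
    c1 = (g22 * r1 - g12 * r2) / det
    c2 = (g11 * r2 - g12 * r1) / det
    approx = [c1 * a + c2 * b for a, b in zip(d1, d2)]
    return math.sqrt(sum((yi - ai) ** 2 for yi, ai in zip(y, approx)))

predicted = min(dicts, key=lambda c: residual(dicts[c], sample))
print(predicted)   # sample lies in class_B's span, so its residual ≈ 0
```

mSRC generalizes this by drawing several representations per dictionary and using the resulting set of residuals as a richer classification statistic.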
Computer Simulation of Electron Positron Annihilation Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.
2003-10-02
With the launch of the Next Linear Collider drawing closer, there is a pressing need for physicists to develop a fully-integrated computer simulation of the e+e- annihilation process at a center-of-mass energy of 1 TeV. A simulation program acts as the template for future experiments. Either new physics will be discovered, or current theoretical uncertainties will shrink due to more accurate higher-order radiative correction calculations. The existence of an efficient and accurate simulation will help us understand the new data and validate (or veto) some of the theoretical models developed to explain new physics. It should handle well the interfaces between different sectors of physics, e.g., interactions happening at parton level well above the QCD scale, which are described by perturbative QCD, and interactions happening at much lower energy scales, which combine partons into hadrons. It should also achieve competitive speed in real time as the complexity of the simulation increases. This thesis contributes some tools that will be useful for the development of such simulation programs. We begin our study with the development of a new Monte Carlo algorithm intended to perform efficiently in selecting weight-1 events when multiple parameter dimensions are strongly correlated. The algorithm first seeks to model the peaks of the distribution by features, adapting these features to the function using the EM algorithm. The representation of the distribution provided by these features is then improved using the VEGAS algorithm for Monte Carlo integration. The two strategies mesh neatly into an effective multi-channel adaptive representation. We then present a new algorithm for the simulation of parton shower processes in high energy QCD. We want to find an algorithm which is free of negative weights, produces its output as a set of exclusive events, and whose total rate exactly matches the full Feynman amplitude calculation.
Our strategy is to create the whole QCD shower as a tree structure generated by a multiple Poisson process. Working with the whole shower allows us to include correlations between gluon emissions from different sources. QCD destructive interference is controlled by the implementation of "angular ordering," as in the HERWIG Monte Carlo program. We discuss methods for systematically improving the approach to include higher-order QCD effects.
Improving the Operations of the Earth Observing One Mission via Automated Mission Planning
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Tran, Daniel; Rabideau, Gregg; Schaffer, Steve; Mandl, Daniel; Frye, Stuart
2010-01-01
We describe the modeling and reasoning about operations constraints in an automated mission planning system for an earth observing satellite - EO-1. We first discuss the large number of elements that can be naturally represented in an expressive planning and scheduling framework. We then describe a number of constraints that challenge the current state of the art in automated planning systems and discuss how we modeled these constraints as well as discuss tradeoffs in representation versus efficiency. Finally we describe the challenges in efficiently generating operations plans for this mission. These discussions involve lessons learned from an operations model that has been in use since Fall 2004 (called R4) as well as a newer more accurate operations model operational since June 2009 (called R5). We present analysis of the R5 software documenting a significant (greater than 50%) increase in the number of weekly observations scheduled by the EO-1 mission. We also show that the R5 mission planning system produces schedules within 15% of an upper bound on optimal schedules. This operational enhancement has created value of millions of dollars US over the projected remaining lifetime of the EO-1 mission.
Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung
2011-08-01
Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods. Copyright © 2011 Wiley-Liss, Inc.
Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie
2009-08-01
We present a novel knowledge-based system to automatically convert real-life engineering drawings to content-oriented high-level descriptions. The proposed method essentially turns the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.
An improved SRC method based on virtual samples for face recognition
NASA Astrophysics Data System (ADS)
Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao
2018-07-01
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in the real world the number of available training samples is limited and, due to noise interference, the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class difference as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample in a linear way, according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused together to obtain the final classification result. Experimental results on multiple face databases show that our proposed method achieves a very satisfactory classification performance.
Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus
2007-01-01
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
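The exact subthreshold integration from one grid point to the next can be illustrated for the simplest case: a leaky integrate-and-fire neuron with constant input over the step. The parameters are illustrative, and this shows only the exact-propagation idea, not the event-queuing machinery of the paper:

```python
# Exact subthreshold propagation of a leaky integrate-and-fire neuron
# between grid points: tau dV/dt = -V + R*I has the closed-form update
# V(t+h) = V_inf + (V - V_inf) * exp(-h/tau), so no integration error
# accumulates regardless of the step size.  Parameters illustrative.
import math

tau = 10.0   # membrane time constant (ms)
R = 1.0      # membrane resistance (arbitrary units)

def propagate(V, I, h):
    """Advance the membrane potential exactly by a step of length h."""
    V_inf = R * I                        # steady state for this input
    return V_inf + (V - V_inf) * math.exp(-h / tau)

# One 1.0 ms step is equivalent to ten 0.1 ms substeps, up to
# floating-point rounding, because the update is exact:
V_big = propagate(0.0, 2.0, 1.0)
V_small = 0.0
for _ in range(10):
    V_small = propagate(V_small, 2.0, 0.1)
print(abs(V_big - V_small) < 1e-12)   # True
```

The accuracy question in the abstract is therefore not about the subthreshold dynamics, which are exact, but about where between two grid points the threshold crossing (the spike) is placed.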
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
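The temporal binning described above (256 time bins merged to 42) can be sketched on a synthetic mono-exponential decay. The measurement window and lifetime values here are illustrative, not the paper's instrument settings:

```python
# Sketch of temporal binning for a TCSPC decay: merge a 256-bin
# histogram down to 42 bins by summing groups of 6 consecutive bins,
# trading temporal resolution within the decay for more counts per
# bin.  Synthetic mono-exponential decay with ~700 photons, the low
# count regime the abstract reports as fittable after binning.
import random

random.seed(2)
n_bins, window_ns, tau_ns, photons = 256, 12.5, 2.5, 700

decay = [0] * n_bins
for _ in range(photons):
    t = random.expovariate(1.0 / tau_ns)      # photon arrival time
    if t < window_ns:
        decay[int(t / window_ns * n_bins)] += 1

factor = n_bins // 42                          # 6 original bins each
binned = [sum(decay[i * factor:(i + 1) * factor]) for i in range(42)]
# The 4 leftover bins (252..255) are dropped in this simple scheme.

print(len(binned), sum(binned) <= sum(decay))
```

Each merged bin now holds roughly six times the counts of an original bin, which is what stabilizes the exponential fit at low photon numbers.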
Fabelo, Himar; Ortega, Samuel; Ravi, Daniele; Kiran, B Ravi; Sosa, Coralia; Bulters, Diederik; Callicó, Gustavo M; Bulstrode, Harry; Szolna, Adam; Piñeiro, Juan F; Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O'Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major problem in neurosurgery. The diffuse infiltration into the surrounding normal brain by these tumors makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, the identification of the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method taking into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons to accurately determine the tumor boundaries in surgical-time during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study to approach an efficient solution consists of a hybrid framework that combines both supervised and unsupervised machine learning methods. Firstly, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensional reduction algorithm, and performing a K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five hyperspectral images of the surface of the brain affected by glioblastoma tumor in vivo, from five different patients, have been used. The final classification maps obtained have been analyzed and validated by specialists.
These preliminary results are promising, obtaining an accurate delineation of the tumor area.
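The majority-voting step that associates each unsupervised cluster with a class can be sketched on tiny synthetic maps; the arrays and labels here are illustrative, not real hyperspectral data:

```python
# Sketch of the cluster/class fusion step: each unsupervised cluster
# is assigned the class most common among its pixels in the
# supervised (per-pixel) classification map.  3x3 maps, flattened.
from collections import Counter

svm_map     = [0, 0, 1,
               0, 1, 1,
               2, 2, 1]   # supervised per-pixel classes
cluster_map = [0, 0, 1,
               0, 1, 1,
               2, 2, 2]   # unsupervised segmentation (cluster ids)

majority = {}
for c in set(cluster_map):
    classes = [s for s, k in zip(svm_map, cluster_map) if k == c]
    majority[c] = Counter(classes).most_common(1)[0][0]

fused = [majority[k] for k in cluster_map]
print(fused)   # cluster 2 outvotes its one stray pixel
```

Note how the last pixel, classified as 1 by the supervised stage but grouped into cluster 2, is relabelled 2 by the vote; this is the spatial homogenization the fusion provides.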
Kabwama, Silvester; Madroñal, Daniel; Lazcano, Raquel; J-O’Shanahan, Aruma; Bisshopp, Sara; Hernández, María; Báez, Abelardo; Yang, Guang-Zhong; Stanciulescu, Bogdan; Salvador, Rubén; Juárez, Eduardo; Sarmiento, Roberto
2018-01-01
Surgery for brain cancer is a major challenge in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, identifying the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents a novel classification method that takes into account the spatial and spectral characteristics of hyperspectral images to help neurosurgeons accurately determine the tumor boundaries during resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm is a hybrid framework that combines supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the hyperspectral cube, obtained with the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, followed by K-Nearest Neighbors filtering. The information generated by the supervised stage is then combined with a segmentation map obtained via unsupervised clustering with a Hierarchical K-Means algorithm. The fusion is performed with a majority voting approach that associates each cluster with a class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of brains affected by glioblastoma, from five different patients, were used. The final classification maps were analyzed and validated by specialists.
These preliminary results are promising, yielding an accurate delineation of the tumor area. PMID:29554126
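The cluster-to-class fusion step described above can be sketched as a simple majority vote; the function name and the toy label maps below are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_by_majority_vote(class_map, cluster_map):
    """Assign each unsupervised cluster the majority class from a
    supervised pixel-wise classification map (illustrative sketch)."""
    fused = np.empty_like(class_map)
    for cluster_id in np.unique(cluster_map):
        mask = cluster_map == cluster_id
        labels, counts = np.unique(class_map[mask], return_counts=True)
        fused[mask] = labels[np.argmax(counts)]  # majority class wins
    return fused

# Toy 4x4 example: two spatial clusters, noisy per-pixel class labels
classes  = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 1, 1]])
clusters = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]])
print(fuse_by_majority_vote(classes, clusters))  # stray labels are smoothed out
```

The vote cleans up isolated misclassified pixels while keeping the cluster boundaries from the unsupervised stage.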
Particles, Feynman Diagrams and All That
ERIC Educational Resources Information Center
Daniel, Michael
2006-01-01
Quantum fields are introduced in order to give students an accurate qualitative understanding of the origin of Feynman diagrams as representations of particle interactions. Elementary diagrams are combined to produce diagrams representing the main features of the Standard Model.
ENHANCING HSPF MODEL CHANNEL HYDRAULIC REPRESENTATION
The Hydrological Simulation Program - FORTRAN (HSPF) is a comprehensive watershed model, which employs depth-area-volume-flow relationships known as hydraulic function table (FTABLE) to represent stream channel cross-sections and reservoirs. An accurate FTABLE determination for a...
Reconstructing householder vectors from Tall-Skinny QR
Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...
2015-08-05
The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
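The core TSQR idea (factor row blocks independently, then combine the small R factors) can be sketched with a single reduction level; this is an illustrative NumPy version, not the paper's communication-optimal parallel implementation.

```python
import numpy as np

def tsqr(A, n_blocks=4):
    """One-level Tall-Skinny QR sketch: QR each row block independently,
    then QR the stacked R factors and propagate the result back."""
    blocks = np.array_split(A, n_blocks, axis=0)
    qs, rs = zip(*(np.linalg.qr(B) for B in blocks))
    Q2, R = np.linalg.qr(np.vstack(rs))            # combine the small R factors
    # Propagate the combining Q2 back into the per-block Q factors
    Q2_blocks = np.split(Q2, np.cumsum([r.shape[0] for r in rs])[:-1], axis=0)
    Q = np.vstack([q @ q2 for q, q2 in zip(qs, Q2_blocks)])
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 8))                 # tall-skinny: 1000 x 8
Q, R = tsqr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(8)))
```

Each block QR touches only local rows; only the tiny R factors need to be communicated, which is where the communication savings over column-by-column Householder QR come from.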
Representations of non-suicidal self-injury in motion pictures.
Trewavas, Christopher; Hasking, Penelope; McAllister, Margaret
2010-01-01
The aim of this study was to investigate representations of non-suicidal self-injury (NSSI) in popular media. Forty-one motion pictures were viewed, coded, and analyzed. NSSI was correlated with mental illness, child maltreatment, and substance abuse. NSSI was generally portrayed as severe, habitual and covert. Further, depictions of NSSI were often sensationalized and featured prominently. NSSI was less likely to be associated with completed suicide than other psychological factors, but more closely associated with suicide than NSSI is in the community. Although NSSI was associated with psychiatric illness, few characters were receiving psychiatric care at the time of NSSI. However, a significant proportion received support after engaging in NSSI. The portrayal of NSSI is generally accurate regarding correlates and function, but is inaccurately associated with suicide. Implications of the relatively accurate portrayal of NSSI are discussed in light of the potential for imitation, and the possibility of using cinematherapy to promote effective problem resolution.
Issack, Bilkiss B; Roy, Pierre-Nicholas
2005-08-22
An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.
Reading your own lips: common-coding theory and visual speech perception.
Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel; Hale, Sandra; Sommers, Mitchell S
2013-02-01
Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.
Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.
Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K
2011-09-01
Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.
Implicit Self-Importance in an Interpersonal Pronoun Categorization Task
Fetterman, Adam K.; Robinson, Michael D.; Gilbertson, Elizabeth P.
2014-01-01
Object relations theories emphasize the manner in which the salience/importance of implicit representations of self and other guide interpersonal functioning. Two studies and a pilot test (total N = 304) sought to model such representations. In dyadic contexts, the self is a “you” and the other is a “me”, as verified in a pilot test. Study 1 then used a simple categorization task and found evidence for implicit self-importance: The pronoun “you” was categorized more quickly and accurately when presented in a larger font size, whereas the pronoun “me” was categorized more quickly and accurately when presented in a smaller font size. Study 2 showed that this pattern possesses value in understanding individual differences in interpersonal functioning. As predicted, arrogant people scored higher in implicit self-importance in the paradigm. Findings are discussed from the perspective of dyadic interpersonal dynamics. PMID:25419089
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
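The dimension-wise least-squares trick behind such direct-product fits can be illustrated in a few lines: because kron(B1, B2) applied to a coefficient vector is equivalent to B1 @ C @ B2.T on the coefficient matrix, the Kronecker matrix never has to be formed. The polynomial basis, grid, and toy surface below are hypothetical stand-ins for an actual PES.

```python
import numpy as np

def poly_basis(x, deg):
    """1D polynomial basis columns: 1, x, x^2, ..., x^deg."""
    return np.vander(x, deg + 1, increasing=True)

x = np.linspace(-1, 1, 20)
y = np.linspace(-1, 1, 25)
X, Y = np.meshgrid(x, y, indexing="ij")
V = np.exp(-(X**2 + 2 * Y**2))                # toy "PES" values on the grid

B1, B2 = poly_basis(x, 8), poly_basis(y, 8)
# Dimension-wise least squares: C = pinv(B1) @ V @ pinv(B2).T solves the
# same problem as lstsq with the full kron(B1, B2) design matrix,
# but at a tiny fraction of the cost and memory.
C = np.linalg.pinv(B1) @ V @ np.linalg.pinv(B2).T
V_fit = B1 @ C @ B2.T
print(np.max(np.abs(V_fit - V)))              # small residual
```

The full Kronecker design matrix here would be 500 x 81; for a 6- or 9-dimensional PES the explicit product becomes infeasible, which is exactly the scaling advantage the abstract refers to.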
Alternative transitions between existing representations in multi-scale maps
NASA Astrophysics Data System (ADS)
Dumont, Marion; Touya, Guillaume; Duchêne, Cécile
2018-05-01
Map users may have difficulty with multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations, and to ease the transitions across scales. We consider an existing multi-scale map covering the range from 1:25k to 1:100k. Based on hypotheses about the design of intermediate representations, we build custom multi-scale maps with alternative transitions. In the near future, we will conduct a user evaluation to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and the production process of these alternative maps.
Representing Medical Knowledge in a Terminological Language is Difficult
Haimowitz, Ira J.; Patil, Ramesh S.; Szolovits, Peter
1988-01-01
We report on an experiment to use a modern knowledge representation language, NIKL, to express the knowledge of a sophisticated medical reasoning program, ABEL. We are attempting to put the development of more capable medical programs on firmer representational grounds by moving from the ad hoc representations typical of current programs toward more principled representation languages now in use or under construction. Our experience with the project reported here suggests caution, however. Attempts at cleanliness and efficiency in the design of representation languages lead to a poverty of expressiveness that makes it difficult if not impossible to say in such languages what needs to be stated to support the application.
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a recently introduced method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels, in order from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
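A classical centre-surround difference-of-Gaussians (DoG) filter is the usual starting point for GC receptive-field models and can be sketched as follows; the paper's non-classical RF adds further surround terms, and the sigma values here are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(img, sigma_center=1.0, sigma_surround=3.0):
    """Centre-surround response: narrow excitatory Gaussian minus a
    wider inhibitory one (classical DoG receptive field)."""
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                        # a vertical luminance edge
resp = dog_response(img)
# The response is essentially zero in uniform regions and strong near
# the edge: structure is kept, flat areas are discarded.
print(abs(resp[32, 0]), abs(resp[32, 31]))
```

This is the sense in which a GC-style front end "extracts major structural features": the filtered representation is sparse away from contours, which is what later contour-detection and segmentation stages benefit from.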
Developments toward more accurate molecular modeling of liquids
NASA Astrophysics Data System (ADS)
Evans, Tom J.
2000-12-01
The general goal of this research has been to improve upon existing combined quantum mechanics/molecular mechanics (QM/MM) methodologies. Error weighting functions have been introduced into the perturbative Monte Carlo (PMC) method for use with QM/MM. The PMC approach, introduced earlier, provides a means to reduce the number of full self-consistent field (SCF) calculations in simulations using the QM/MM potential by invoking perturbation theory to calculate energy changes due to displacements of a MM molecule. This will allow the ab initio QM/MM approach to be applied to systems that require more advanced, computationally demanding treatments of the QM and/or MM regions. Efforts have also been made to improve the accuracy of the representation of the solvent molecules usually represented by MM force fields. Results from an investigation of the applicability of the embedded density functional theory (EDFT) for studying physical properties of solutions will be presented. In this approach, the solute wavefunction is solved self-consistently in the field of individually frozen electron-density solvent molecules. To test its accuracy, the potential curves for interactions between Li+, Cl- and H2O with a single frozen-density H2O molecule in different orientations have been calculated. With the development of the more sophisticated effective fragment potential (EFP) representation of solvent molecules, a QM/EFP technique was created. This hybrid QM/EFP approach was used to investigate the solvation of Li+ by small clusters of water, as a test case for larger ionic clusters. The EFP appears to provide an accurate representation of the strong interactions that exist between Li+ and H2O. With the QM/EFP methodology comes an increased computational expense, resulting in an even greater need to rely on the PMC approach.
However, while including the PMC into the hybrid QM/EFP technique, it was discovered that the previous implementation of the PMC was done incorrectly, invalidating earlier test results. The PMC implementation was therefore reworked, and tests were performed to investigate the method's usefulness in reducing the computational load of these types of simulations. The results obtained while studying F-(H2O) and F-(H2O)2 show that PMC can be used cautiously to increase computational efficiency.
Incorporating linguistic knowledge for learning distributed word representations.
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experimental results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventional database schemas. Our objective was to perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. For attribute-centered queries, the EAV/CR model was approximately three to five times less efficient than its conventional counterpart. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated as multiple simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in, and compare query performance for, EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
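The schema difference driving these results can be sketched with SQLite: a conventional table answers an attribute-centered query with a plain column filter, while an EAV store must filter on both the attribute name and the value. The table and attribute names below are invented for illustration, not taken from the study's microbiology database.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conventional schema: one column per attribute
cur.execute("CREATE TABLE culture (id INTEGER, organism TEXT, resistance TEXT)")
cur.executemany("INSERT INTO culture VALUES (?, ?, ?)",
                [(1, "E. coli", "ampicillin"), (2, "S. aureus", None)])

# EAV schema: one row per (entity, attribute, value) triple
cur.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                [(1, "organism", "E. coli"), (1, "resistance", "ampicillin"),
                 (2, "organism", "S. aureus")])

# Attribute-centered query, conventional schema: a simple column filter
conv = cur.execute(
    "SELECT id FROM culture WHERE resistance = 'ampicillin'").fetchall()

# Same query against EAV: must match attribute name AND value, and every
# additional attribute in the predicate costs another self-join
eav = cur.execute(
    "SELECT entity FROM eav WHERE attribute = 'resistance' "
    "AND value = 'ampicillin'").fetchall()
print(conv, eav)   # both find entity 1
```

Multi-attribute predicates compound this: each extra attribute in an EAV query adds a self-join on the triple table, which is one plausible source of the three-to-fivefold slowdown the study measured.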
Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.
Keil, Andreas
2006-01-01
Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.
Identification of subsurface structures using electromagnetic data and shape priors
NASA Astrophysics Data System (ADS)
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
A scalable method to improve gray matter segmentation at ultra high field MRI.
Gulban, Omer Faruk; Schneider, Marian; Marquardt, Ingo; Haast, Roy A M; De Martino, Federico
2018-01-01
High-resolution (functional) magnetic resonance imaging (MRI) at ultra high magnetic fields (7 Tesla and above) enables researchers to study how anatomical and functional properties change within the cortical ribbon, along surfaces and across cortical depths. These studies require an accurate delineation of the gray matter ribbon, which often suffers from inclusion of blood vessels, dura mater and other non-brain tissue. Residual segmentation errors are commonly corrected by browsing the data slice-by-slice and manually changing labels. This task becomes increasingly laborious and prone to error at higher resolutions since both work and error scale with the number of voxels. Here we show that many mislabeled, non-brain voxels can be corrected more efficiently and semi-automatically by representing three-dimensional anatomical images using two-dimensional histograms. We propose both a uni-modal (based on first spatial derivative) and multi-modal (based on compositional data analysis) approach to this representation and quantify the benefits in 7 Tesla MRI data of nine volunteers. We present an openly accessible Python implementation of these approaches and demonstrate that editing cortical segmentations using two-dimensional histogram representations as an additional post-processing step aids existing algorithms and yields improved gray matter borders. By making our data and corresponding expert (ground truth) segmentations openly available, we facilitate future efforts to develop and test segmentation algorithms on this challenging type of data.
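The uni-modal (first spatial derivative) variant of the two-dimensional histogram representation can be sketched as follows; the volume is synthetic and the bin count is an arbitrary choice.

```python
import numpy as np

# Represent a 3D image by a 2D histogram of (intensity, gradient
# magnitude): tissue classes that are tedious to separate slice-by-slice
# form distinct clusters in this 2D space.
rng = np.random.default_rng(1)
vol = rng.normal(100, 10, size=(32, 32, 32))   # toy "anatomical" volume
vol[8:24, 8:24, 8:24] += 80                    # a brighter inner block

gx, gy, gz = np.gradient(vol)
gmag = np.sqrt(gx**2 + gy**2 + gz**2)          # first spatial derivative

hist2d, i_edges, g_edges = np.histogram2d(vol.ravel(), gmag.ravel(), bins=64)
# Mislabeled voxels can now be selected by drawing a region in
# (intensity, gradient) space and mapping the selection back to 3D,
# instead of editing labels voxel-by-voxel in each slice.
print(hist2d.shape)
```

Uniform tissue sits at low gradient magnitude while tissue boundaries (and vessels or dura) occupy high-gradient bins, which is what makes region selection in this 2D space an efficient proxy for 3D label editing.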
Zhang, Long; Jia, Lianyin; Ren, Yazhou
2017-01-01
Protein-protein interactions (PPIs) play crucial roles in almost all cellular processes. Although a large number of PPIs have been verified by high-throughput techniques in the past decades, currently known PPI pairs are still far from complete. Furthermore, wet-lab experimental techniques for detecting PPIs are time-consuming and expensive. Hence, it is urgent and essential to develop automatic computational methods to efficiently and accurately predict PPIs. In this paper, a sequence-based approach called DNN-LCTD is developed by combining deep neural networks (DNNs) and a novel local conjoint triad description (LCTD) feature representation. LCTD incorporates the advantages of local description and the conjoint triad; thus, it is able to account for interactions between residues in both continuous and discontinuous regions of amino acid sequences. DNNs can not only learn suitable features from the data by themselves, but also learn and discover hierarchical representations of data. On the PPI data of Saccharomyces cerevisiae, DNN-LCTD achieves superior performance, with accuracy of 93.12%, precision of 93.75%, sensitivity of 93.83%, and area under the receiver operating characteristic curve (AUC) of 97.92%, while needing only 718 s. These results indicate that DNN-LCTD is very promising for predicting PPIs and can be a useful supplementary tool for future proteomics studies. PMID:29117139
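The conjoint-triad component of such a feature representation can be sketched as follows, using the standard seven-class amino-acid grouping; the exact local-segment scheme of LCTD is not reproduced here, and the example sequence is arbitrary.

```python
import numpy as np

# The 20 amino acids are grouped into 7 physicochemical classes, and a
# sequence is encoded by the normalised counts of all 7^3 = 343 class
# triads observed in a sliding window of three residues.
GROUPS = {aa: g for g, aas in enumerate(
    ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]) for aa in aas}

def conjoint_triad(seq):
    counts = np.zeros((7, 7, 7))
    cls = [GROUPS[aa] for aa in seq]
    for a, b, c in zip(cls, cls[1:], cls[2:]):
        counts[a, b, c] += 1
    return (counts / max(counts.sum(), 1)).ravel()   # 343-dim feature vector

f = conjoint_triad("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(f.shape)
```

Because the descriptor depends only on class triads, it is the same length for every sequence, which is what makes it usable as a fixed-size input layer for a DNN; the "local" variant applies the same counting to sequence segments to recover positional information.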
Systems Biology Graphical Notation: Activity Flow language Level 1 Version 1.2.
Mi, Huaiyu; Schreiber, Falk; Moodie, Stuart; Czauderna, Tobias; Demir, Emek; Haw, Robin; Luna, Augustin; Le Novère, Nicolas; Sorokin, Anatoly; Villéger, Alice
2015-09-04
The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Activity Flow language represents the influences of activities among various entities within a network. Unlike SBGN PD and ER, which focus on the entities and their relationships with others, SBGN AF puts the emphasis on the functions (or activities) performed by the entities, and their effects on the functions of the same or other entities. The nodes (elements) describe the biological activities of the entities, such as protein kinase activity, binding activity or receptor activity, which can be easily mapped to Gene Ontology molecular function terms. The edges (connections) provide descriptions of relationships (or influences) between the activities, e.g., positive influence and negative influence. Among all three languages of SBGN, AF is the closest to signaling pathways in biological literature and textbooks, but its well-defined semantics offer superior precision in expressing biological knowledge.
Wang, Jun; Zhang, Long; Jia, Lianyin; Ren, Yazhou; Yu, Guoxian
2017-11-08
Protein-protein interactions (PPIs) play crucial roles in almost all cellular processes. Although a large number of PPIs have been verified by high-throughput techniques in the past decades, the currently known PPI pairs are still far from complete. Furthermore, wet-lab experimental techniques for detecting PPIs are time-consuming and expensive. Hence, it is urgent and essential to develop automatic computational methods to efficiently and accurately predict PPIs. In this paper, a sequence-based approach called DNN-LCTD is developed by combining deep neural networks (DNNs) with a novel local conjoint triad description (LCTD) feature representation. LCTD incorporates the advantages of local description and the conjoint triad, and is thus able to account for interactions between residues in both continuous and discontinuous regions of amino acid sequences. DNNs can not only learn suitable features from the data by themselves, but also discover hierarchical representations of the data. On the PPI data of Saccharomyces cerevisiae, DNN-LCTD achieves superior performance, with an accuracy of 93.12%, a precision of 93.75%, a sensitivity of 93.83%, and an area under the receiver operating characteristic curve (AUC) of 97.92%, while requiring only 718 s. These results indicate that DNN-LCTD is very promising for predicting PPIs and can be a useful supplementary tool for future proteomics studies. PMID:29117139
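The conjoint triad component of the LCTD feature can be illustrated with a short sketch: amino acids are mapped to seven classes, and the frequencies of all 7^3 = 343 class triads along the sequence form the feature vector. The seven-class grouping below is a commonly used one and is an assumption here, not necessarily the exact grouping of the paper; the LCTD variant additionally applies this within local regions of the sequence, which is omitted.

```python
from itertools import product

# Seven amino-acid classes (a commonly used grouping; treat as an assumption).
CLASSES = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
AA_TO_CLASS = {aa: i for i, cls in enumerate(CLASSES) for aa in cls}

def conjoint_triad(seq):
    """Return the 343-dim normalized triad-frequency vector of a protein sequence.
    Nonstandard residues would need extra handling in real data."""
    counts = {t: 0 for t in product(range(7), repeat=3)}
    encoded = [AA_TO_CLASS[aa] for aa in seq]
    for i in range(len(encoded) - 2):
        counts[tuple(encoded[i:i + 3])] += 1
    total = max(sum(counts.values()), 1)
    return [counts[t] / total for t in sorted(counts)]

vec = conjoint_triad("AGVILC")
print(len(vec))  # 343
```

A DNN classifier would then be trained on the concatenated feature vectors of each protein pair.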
Sparse representation and Bayesian detection of genome copy number alterations from microarray data.
Pique-Regi, Roger; Monso-Varona, Jordi; Ortega, Antonio; Seeger, Robert C; Triche, Timothy J; Asgharzadeh, Shahab
2008-02-01
Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) that are associated with the development and behavior of tumors. Advances in microarray technology have allowed for greater resolution in the detection of DNA copy number changes (amplifications or deletions) across the genome. However, the increase in the number of measured signals and the accompanying noise from the array probes present a challenge for accurate and fast identification of the breakpoints that define CNA. This article proposes a novel detection technique that exploits piecewise constant (PWC) vectors to represent genome copy number and sparse Bayesian learning (SBL) to detect CNA breakpoints. First, a compact linear algebra representation of the genome copy number is developed from normalized probe intensities. Second, SBL is applied and optimized to infer the locations where copy number changes occur. Third, a backward elimination (BE) procedure is used to rank the inferred breakpoints, and a cut-off point can be efficiently adjusted in this procedure to control the false discovery rate (FDR). The performance of our algorithm is evaluated using simulated and real genome datasets and compared to other existing techniques. Our approach achieves the highest accuracy and lowest FDR while improving computational speed by several orders of magnitude. The proposed algorithm has been developed into a stand-alone software application (GADA, Genome Alteration Detection Algorithm). http://biron.usc.edu/~piquereg/GADA
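The piecewise-constant (PWC) representation at the core of this approach can be sketched as follows: between breakpoints the copy-number signal is summarized by a single segment mean, so an entire probe profile is captured by a short list of breakpoints and weights. This shows only the representation step, not the sparse Bayesian learning or backward elimination; the function names are illustrative.

```python
def pwc_encode(y, breakpoints):
    """Summarize signal y by segment means between breakpoints (PWC model).
    `breakpoints` lists the indices where a new segment starts."""
    bounds = [0] + list(breakpoints) + [len(y)]
    return [sum(y[a:b]) / (b - a) for a, b in zip(bounds[:-1], bounds[1:])]

def pwc_decode(weights, breakpoints, n):
    """Reconstruct the full-length piecewise-constant signal."""
    bounds = [0] + list(breakpoints) + [n]
    out = []
    for w, (a, b) in zip(weights, zip(bounds[:-1], bounds[1:])):
        out.extend([w] * (b - a))
    return out

# Toy log-ratio profile: a single copy-number gain starting at probe 3
y = [0.0, 0.1, -0.1, 1.0, 0.9, 1.1]
w = pwc_encode(y, [3])  # two segment means, one breakpoint
```

Breakpoint detection then amounts to choosing the sparsest breakpoint set whose reconstruction stays close to the noisy probe data.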
Cohen, Adam S; Sasaki, Joni Y; German, Tamsin C
2015-03-01
Does theory of mind depend on a capacity to reason about representations generally or on mechanisms selective for the processing of mental state representations? In four experiments, participants reasoned about beliefs (mental representations) and notes (non-mental, linguistic representations), which according to two prominent theories are closely matched representations because both are represented propositionally. Reaction times were faster and accuracies higher when participants endorsed or rejected statements about false beliefs than about false notes (Experiment 1), even when statements emphasized representational format (Experiment 2), which should have favored the activation of representation concepts. Experiments 3 and 4 ruled out a counterhypothesis that differences in task demands were responsible for the advantage in belief processing. These results demonstrate for the first time that understanding of mental and linguistic representations can be dissociated even though both may carry propositional content, supporting the theory that mechanisms governing theory of mind reasoning are narrowly specialized to process mental states, not representations more broadly. Extending this theory, we discuss whether less efficient processing of non-mental representations may be a by-product of mechanisms specialized for processing mental states. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Implicit Wiener series analysis of epileptic seizure recordings.
Barbero, Alvaro; Franz, Matthias; van Drongelen, Wim; Dorronsoro, José R; Schölkopf, Bernhard; Grosse-Wentrup, Moritz
2009-01-01
Implicit Wiener series are a powerful tool to build Volterra representations of time series with any degree of non-linearity. A natural question is then whether higher order representations yield more useful models. In this work we shall study this question for ECoG data channel relationships in epileptic seizure recordings, considering whether quadratic representations yield more accurate classifiers than linear ones. To do so we first show how to derive statistical information on the Volterra coefficient distribution and how to construct seizure classification patterns over that information. As our results illustrate, a quadratic model seems to provide no advantages over a linear one. Nevertheless, we shall also show that the interpretability of the implicit Wiener series provides insights into the inter-channel relationships of the recordings.
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground-state energies and dipole moments of 57 different states of atoms and molecules, and to the ground-state energies and 2-RDM elements of the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2(') conditions; the T2(') condition was recently rederived and implies the T2 condition. Using these N-representability conditions, we can usually recover between 100% and 101% of the correlation energy, an accuracy similar to that of CCSD(T), and even better for high-spin states or anionic systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or developing multiple-precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, improvements from replacing the T2 condition with the T2(') condition are typically 0.1-0.5 mhartree. The newly developed multiple-precision version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom: it gives at least 16 significant digits for the energies, where double-precision calculations give only two to eight digits. It also provides physically meaningful results for the Hubbard model in the high-correlation limit.
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
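The piecewise-constant representation described above can be sketched on a toy one-dimensional problem: the continuous state space is partitioned into regions, the value function is stored as one number per region, and dynamic programming backs values up region by region. The toy problem below (a deterministic "move right" chain with a goal region) is purely illustrative and not taken from the paper.

```python
# Piecewise-constant value function over a partition of a 1-D state space.
# Hypothetical toy MDP: the action moves the state one region to the right,
# reward 1 is received on entering the rightmost (absorbing) goal region.
def value_iteration(n_regions=4, gamma=0.9, iters=50):
    V = [0.0] * n_regions          # one constant value per region
    for _ in range(iters):
        new_V = []
        for s in range(n_regions):
            if s == n_regions - 1:  # absorbing goal region
                new_V.append(0.0)
            else:
                reward = 1.0 if s + 1 == n_regions - 1 else 0.0
                new_V.append(reward + gamma * V[s + 1])
        V = new_V
    return V

V = value_iteration()
print(V[:3])  # [0.81, 0.9, 1.0]
```

The paper's contribution is to make this region partition dynamic, refining it only where the value function actually changes.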
Gamma-ray transfer and energy deposition in supernovae
NASA Technical Reports Server (NTRS)
Swartz, Douglas A.; Sutherland, Peter G.; Harkness, Robert P.
1995-01-01
Solutions to the energy-independent (gray) radiative transfer equations are compared to results of Monte Carlo simulations of the Ni-56 and Co-56 decay gamma-ray energy deposition in supernovae. The comparison shows that an effective, purely absorptive, gray opacity, kappa(sub gamma) approximately (0.06 +/- 0.01)Y(sub e) sq cm/g, where Y(sub e) is the total number of electrons per baryon, accurately describes the interaction of gamma-rays with the cool supernova gas and the local gamma-ray energy deposition within the gas. The nature of the gamma-ray interaction process (dominated by Compton scattering in the relativistic regime) creates a weak dependence of kappa(sub gamma) on the optical thickness of the (spherically symmetric) supernova atmosphere: the maximum value of kappa(sub gamma) applies during optically thick conditions, when individual gamma-rays undergo multiple scattering encounters, and the lower bound is reached at the phase characterized by a total Thomson optical depth to the center of the atmosphere tau(sub e) approximately less than 1. The gray opacity reproduces gamma-ray deposition for Type Ia supernova models to within 10% for the epoch from maximum light to t = 1200 days. Our results quantitatively confirm that the quick and efficient solution to the gray transfer problem provides an accurate representation of gamma-ray energy deposition for a broad range of supernova conditions.
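The quoted fit lends itself to a back-of-the-envelope estimate: with kappa_gamma ~ 0.06 Y_e cm^2/g, fully ionized C/O ejecta (Y_e ~ 0.5) give kappa_gamma ~ 0.03 cm^2/g, and the gamma-ray optical depth follows from the mass column density. The sketch below assumes the central coefficient 0.06 and an illustrative column density; both numbers are inputs, not results of this code.

```python
def gray_gamma_opacity(y_e, coeff=0.06):
    """Effective purely absorptive gray gamma-ray opacity ~ coeff * Y_e [cm^2/g]."""
    return coeff * y_e

def gamma_optical_depth(y_e, column_density):
    """tau = kappa_gamma * Sigma, with Sigma the mass column density [g/cm^2]."""
    return gray_gamma_opacity(y_e) * column_density

kappa = gray_gamma_opacity(0.5)        # Y_e = 0.5, e.g. fully ionized C/O
tau = gamma_optical_depth(0.5, 100.0)  # Sigma = 100 g/cm^2 (illustrative)
print(kappa)  # 0.03
```

When tau drops below unity, gamma-rays escape and local energy deposition fails, which is exactly the regime where the lower bound on kappa_gamma applies.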
Hypersonic Shock Wave Computations Using the Generalized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh; Chen, Rui; Cheremisin, Felix G.
2006-11-01
Hypersonic shock structure in diatomic gases is computed by solving the Generalized Boltzmann Equation (GBE), where the internal and translational degrees of freedom are considered in the framework of quantum and classical mechanics respectively [1]. The computational framework available for the standard Boltzmann equation [2] is extended by including both the rotational and vibrational degrees of freedom in the GBE. There are two main difficulties encountered in computation of high Mach number flows of diatomic gases with internal degrees of freedom: (1) a large velocity domain is needed for accurate numerical description of the distribution function resulting in enormous computational effort in calculation of the collision integral, and (2) about 50 energy levels are needed for accurate representation of the rotational spectrum of the gas. Our methodology addresses these problems, and as a result the efficiency of calculations has increased by several orders of magnitude. The code has been validated by computing the shock structure in Nitrogen for Mach numbers up to 25 including the translational and rotational degrees of freedom. [1] Beylich, A., ``An Interlaced System for Nitrogen Gas,'' Proc. of CECAM Workshop, ENS de Lyon, France, 2000. [2] Cheremisin, F., ``Solution of the Boltzmann Kinetic Equation for High Speed Flows of a Rarefied Gas,'' Proc. of the 24th Int. Symp. on Rarefied Gas Dynamics, Bari, Italy, 2004.
NASA Astrophysics Data System (ADS)
Carrera; Valvano; Kulikov
2018-01-01
In this work, a new class of finite elements for the analysis of composite and sandwich shells embedding piezoelectric skins and patches is proposed. The models are coupled through the concept of node-dependent kinematics, whereby the same finite element can adopt a different approximation of the main unknowns at each node by setting a node-wise through-the-thickness approximation basis. In a global/local approach scenario, computational costs can be reduced drastically by assuming refined theories only in those zones/nodes of the structural domain where the resulting strain and stress states, and their electro-mechanical coupling, exhibit a complex distribution. Several numerical investigations are carried out to validate the accuracy and efficiency of the present shell element. An accurate representation of mechanical stresses and electric displacements in localized zones is possible, with reduced computational costs, if the higher-order kinematic capabilities are distributed judiciously. On the contrary, the accuracy of the solution in terms of mechanical displacements and electric potential values depends on the global approximation over the whole structure. The efficacy of the present node-dependent variable kinematic models thus depends on the characteristics of the problem under consideration as well as on the required analysis type.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
2015-01-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704
Lexical Quality and Reading Comprehension in Primary School Children
ERIC Educational Resources Information Center
Richter, Tobias; Isberner, Maj-Britt; Naumann, Johannes; Neeb, Yvonne
2013-01-01
In a cross-sectional study, we examined the relationship between the quality of lexical representations and text comprehension skill in German primary school children (Grades 1-4). We measured the efficiency and accuracy of orthographical, phonological, and meaning representations by means of computerized tests. Text comprehension skill was…
Children's Orthographic Knowledge and Their Word Reading Skill: Testing Bidirectional Relations
ERIC Educational Resources Information Center
Conrad, Nicole J.; Deacon, S. Hélène
2016-01-01
Prominent models of word reading concur that the development of efficient word reading depends on the establishment of lexical orthographic representations in memory. In turn, word reading skills are conceptualised as supporting the development of these orthographic representations. As such, models of word reading development make clear…
Alternative Representations for Algebraic Problem Solving: When Are Graphs Better than Equations?
ERIC Educational Resources Information Center
Mielicki, Marta K.; Wiley, Jennifer
2016-01-01
Successful algebraic problem solving entails adaptability of solution methods using different representations. Prior research has suggested that students are more likely to prefer symbolic solution methods (equations) over graphical ones, even when graphical methods should be more efficient. However, this research has not tested how representation…
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used across industries for structural parts owing to their higher strength-to-weight ratio, better fatigue life, corrosion resistance, and material property tailorability. To fully exploit the capability of composites, the load-carrying capacity of parts made from them must be known. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize their behavior, making the entire testing process time-consuming and costly. The alternative is to use virtual testing tools that can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expense and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must match that observed in experiments; a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of a virtual tool are the savings in time and money, so computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic, and fatigue conditions, and a good virtual testing tool should make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool that can model the progressive failure of composite laminates under different quasi-static loading conditions.
The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.
NASA Astrophysics Data System (ADS)
Luan, Deyu; Zhang, Shengfeng; Wei, Xing; Duan, Zhenya
The aim of this work is to investigate the effect of shaft eccentricity on the flow field and mixing characteristics in a stirred tank with a novel stirrer, the perturbed six-bent-bladed turbine (6PBT). The difference between coaxial and eccentric agitation is studied using computational fluid dynamics (CFD) simulations with the standard k-ε turbulence model, which offer a complete picture of the three-dimensional flow field. To assess the capability of CFD to forecast the mixing process, particle image velocimetry (PIV), which provides an accurate representation of the time-averaged velocity, was used to measure fluid velocity. The test liquid was a 1.25 wt% xanthan gum solution, a pseudoplastic fluid with a yield stress. The comparison of the experimental and simulated mean flow fields demonstrates that calculations based on the Reynolds-averaged Navier-Stokes equations are suitable for obtaining accurate results. The effects of the shaft eccentricity and the stirrer off-bottom distance on the flow pattern, mixing time, and mixing efficiency were extensively analyzed. It is observed that the microstructure of the flow field has a significant effect on the tracer mixing process. Eccentric agitation can change the flow pattern and produce a non-symmetric flow structure, which offers a clear advantage in mixing behavior. Moreover, the mixing rate and mixing efficiency depend on the shaft eccentricity and the stirrer off-bottom distance, increasing correspondingly with both. An efficient mixing process of the pseudoplastic fluid stirred by the 6PBT impeller, with considerably low mixing energy per unit volume, is obtained when the stirrer off-bottom distance, C, is T/3 and the eccentricity, e, is 0.2. These results provide valuable references for improving pseudoplastic fluid agitation technology.
Computable visually observed phenotype ontological framework for plants
2011-01-01
Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that oftentimes cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high-quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research.
The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966
NASA Astrophysics Data System (ADS)
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks, and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
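The randomness characteristic that such methods rely on can be illustrated with one simple feature: the empirical byte entropy of a stream. Well-encrypted data looks uniformly random (entropy near 8 bits/byte), while plaintext protocols score much lower. This sketch shows only feature extraction, not the paper's l1-regularized logistic regression or FGMM classifier.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted/compressed data is near 8."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"the quick brown fox jumps over the lazy dog" * 20
uniform = bytes(range(256)) * 16  # stands in for ciphertext-like data
print(byte_entropy(plaintext) < byte_entropy(uniform))  # True
```

In a full pipeline, several such randomness statistics would form the feature vector on which the sparse classifier is trained.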
A coherent discrete variable representation method on a sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hua-Gen
2017-09-05
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct-product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
NASA Astrophysics Data System (ADS)
Shi, Min; Niu, Zhong-Ming; Liang, Haozhao
2018-06-01
We have combined the complex momentum representation method with the Green's function method in the relativistic mean-field framework to establish the RMF-CMR-GF approach. This new approach is applied to study the halo structure of 74Ca. The continuum level densities of all the resonant states of interest are calculated accurately without introducing any unphysical parameters, and the results are independent of the choice of the integration contour. The single-particle wave functions and densities important for the halo phenomenon in 74Ca are discussed in detail.
1991-07-01
provide poor representations of overdriven detonation. The Jones-Wilkins-Lee-Baker (JWLB) has been formulated to provide a more accurate representation...Chapman-Jouguet state. The resulting equation of state form, named Jones-Wilkins-Lee-Baker (JWLB), is

P = Sum_i A_i [1 - omega/(R_i V)] exp(-R_i V) + omega E / V

where E is the specific internal energy. The JWLB equation of state form is based on a first-order expansion around the principal isentrope:

P_s = Sum_i A_i exp(-R_i V) + C V^-(omega + 1)
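A JWL-type pressure evaluation can be sketched as below. The coefficients in the example call are placeholders, not values from the report, and the function implements the classic JWL-style form with an arbitrary number of exponential terms; the full JWLB formulation additionally generalizes the Gruneisen parameter.

```python
import math

def jwl_pressure(V, E, A, R, omega):
    """JWL-type pressure: P = sum_i A_i*(1 - omega/(R_i*V))*exp(-R_i*V) + omega*E/V.
    V is the relative volume, E the specific internal energy (consistent units)."""
    p = sum(a * (1.0 - omega / (r * V)) * math.exp(-r * V) for a, r in zip(A, R))
    return p + omega * E / V

# Placeholder coefficients for illustration only
P = jwl_pressure(V=1.0, E=0.0, A=[5.0, 0.1], R=[4.0, 1.0], omega=0.3)
```

At large relative volume the exponential terms vanish and the pressure reduces to the ideal-gas-like term omega*E/V, as expected of this functional form.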
The influence of vision, touch, and proprioception on body representation of the lower limbs.
Stone, Kayla D; Keizer, Anouk; Dijkerman, H Chris
2018-04-01
Numerous studies have shown that the representation of the hand is distorted. When participants are asked to localize unseen points on the hand (e.g. the knuckle), it is perceived to be wider and shorter than its physical dimensions. Similar distortions occur when people are asked to judge the distance between two tactile points on the hand; estimates made in the longitudinal direction are perceived as significantly shorter than those made in the transverse direction. Yet, when asked to visually compare the shape and size of one's own hand to a template hand, individuals are accurate at estimating the size of their own hands. Thus, it seems that body representations are, at least in part, a function of the most prominent underlying sensory modality used to perceive the body part. Yet, it remains unknown if the representations of other body parts are similarly distorted. The lower limbs, for example, are structurally and functionally very different from the hands, yet their representation(s) are seldom studied. What does the body representation for the leg look like? And is leg representation dependent on which sense is probed when making judgments about its shape and size? In the current study, we investigated what the representation of the leg looks like in visually-, tactually-, and proprioceptively-guided tasks. Results revealed that the leg, like the hand, is distorted in a highly systematic manner. Distortions seem to rely, at least partly, on sensory input. This is the first study, to our knowledge, to systematically investigate leg representation in healthy individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
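The unsupervised clustering step described above can be sketched with a minimal k-means implementation. In CoSA the points would be sparse-code feature vectors of image pixels or patches over the learned dictionary, and clusters would correspond to land-cover categories; here the points and cluster count are toy values for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over tuples; points stand in for sparse-code features."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # recompute center as the mean of its cluster
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers = kmeans(pts, 2)
```

Change detection then compares the cluster label maps of images taken at different times.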
NASA Astrophysics Data System (ADS)
Prather, Edward
2018-01-01
Astronomy education researchers in the Department of Astronomy at the University of Arizona have been investigating a new framework for getting students to engage in discussions about fundamental astronomy topics. This framework is also intended to provide students with explicit feedback on the correctness and coherency of their mental models of these topics. It builds upon our prior efforts to create productive Pedagogical Discipline Representations (PDR). Students are asked to work collaboratively to generate their own representations (drawings, graphs, data tables, etc.) that reflect important characteristics of astrophysical scenarios presented in class. We have found these representation tasks offer tremendous insight into the broad range of ideas and knowledge students possess after instruction that includes both traditional lecture and active learning strategies. In particular, we find that while some of our students can correctly answer challenging multiple-choice questions on these topics, they struggle to accurately create representations of the same topics themselves. Our work illustrates that some of our students are not developing a robust level of discipline fluency with many core ideas in astronomy, even after engaging with active learning strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.
Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e. the change in a word's contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte, collected during the Russia-Ukraine crisis in 2014–2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks, including real-time event forecasting in social media.
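Representation shift between time windows is commonly quantified as a cosine distance between a word's aligned embeddings from successive windows. A minimal sketch with synthetic vectors (the weekly embeddings below are random stand-ins, not the paper's VKontakte data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical weekly embeddings of one keyword (weeks x dims), e.g. from
# embedding models trained on successive corpus windows and aligned.
weekly_vecs = rng.normal(size=(10, 50))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Representation shift: 1 - cosine similarity between consecutive weeks.
shift = [1.0 - cosine(weekly_vecs[t], weekly_vecs[t + 1])
         for t in range(len(weekly_vecs) - 1)]
```

A predictive model of the kind the abstract describes would then regress future values of this shift series on its past values and on concept-drift features.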
Problem representation and mathematical problem solving of students of varying math ability.
Krawec, Jennifer L
2014-01-01
The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.
Global, long-term surface reflectance records from Landsat
USDA-ARS?s Scientific Manuscript database
Global, long-term monitoring of changes in Earth’s land surface requires quantitative comparisons of satellite images acquired under widely varying atmospheric conditions. Although physically based estimates of surface reflectance (SR) ultimately provide the most accurate representation of Earth’s s...
Air freight hubs and fuel use.
DOT National Transportation Integrated Search
2014-09-01
The aim of the project is to examine air express/freight to (a) come up with a more accurate representation of the types of active links; (b) convert the links to aircraft movements; (c) make a reasonable estimate of fuel/energy use by fleet operatio...
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
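The PCA baseline that the abstract contrasts with ICA can be sketched directly: compressing a non-Gaussian particle cloud with an SVD keeps only second-order structure. The particle cloud below is synthetic; in the paper the samples come from the Particle Filter, and ICA (e.g., scikit-learn's FastICA) would replace the variance-based rotation to retain higher-order statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical particle-filter state samples (particles x state_dim),
# deliberately non-Gaussian (one skewed gamma-distributed component).
particles = np.column_stack([rng.normal(size=2000),
                             rng.gamma(2.0, size=2000),
                             rng.normal(size=2000)])

# PCA via SVD of the centered samples: captures only second-order
# (covariance) structure of the uncertainty region.
Xc = particles - particles.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # 2-component compression
recon = scores @ Vt[:2] + particles.mean(axis=0)

# ICA would instead rotate components toward statistical independence,
# preserving the skewness that PCA's variance criterion ignores.
rmse = np.sqrt(((particles - recon) ** 2).mean())
```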
NASA Astrophysics Data System (ADS)
Bilitza, Dieter
2017-04-01
The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere, and since 2014 it has also been recognized as the ISO (International Standardization Organization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters, combining ground and space measurements. This presentation reviews the steady progress toward an increasingly accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where more data are needed to obtain a more solid and continuous data foundation in space and time. We will also take a look at still-existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion, we will provide an outlook on development activities that may result in significant future improvements of the accurate representation of the ionosphere in the IRI model.
Responses of somatosensory area 2 neurons to actively and passively generated limb movements
London, Brian M.
2013-01-01
Control of reaching movements requires an accurate estimate of the state of the limb, yet sensory signals are inherently noisy, because of both noise at the receptors themselves and the stochastic nature of the information representation by neural discharge. One way to derive an accurate representation from noisy sensor data is to combine it with the output of a forward model that considers both the previous state estimate and the noisy input. We recorded from primary somatosensory cortex (S1) in macaques (Macaca mulatta) during both active and passive movements to investigate how the proprioceptive representation of movement in S1 may be modified by the motor command (through efference copy). We found neurons in S1 that respond to one or both movement types covering a broad distribution from active movement only, to both, to passive movement only. Those neurons that responded to both active and passive movements responded with similar directional tuning. Confirming earlier results, some, but not all, neurons responded before the onset of volitional movements, possibly as a result of efference copy. Consequently, many of the features necessary to combine the forward model with proprioceptive feedback appear to be present in S1. These features would not be expected from combinations of afferent receptor responses alone. PMID:23274308
Content Representation in the Human Medial Temporal Lobe
Liang, Jackson C.; Wagner, Anthony D.
2013-01-01
Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior–posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior–posterior axis of MTL that depends on the nature of event content. PMID:22275474
Exploring the Complexity of Tree Thinking Expertise in an Undergraduate Systematics Course
ERIC Educational Resources Information Center
Halverson, Kristy L.; Pires, Chris J.; Abell, Sandra K.
2011-01-01
Student understanding of biological representations has not been well studied. Yet, we know that to be efficient problem solvers in evolutionary biology and systematics, college students must develop expertise in thinking with a particular type of representation, phylogenetic trees. The purpose of this study was to understand how undergraduates…
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
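The sorting-and-discarding step of the L-statistics modification can be illustrated on a synthetic signal. The signal, the impulsive-noise model, and the discard fraction below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical ambiguity-domain samples corrupted by impulsive noise.
clean = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))
noisy = clean.copy()
hits = rng.choice(256, size=20, replace=False)
noisy[hits] += 50 * (rng.normal(size=20) + 1j * rng.normal(size=20))

# L-statistics step: sort by magnitude and discard an expected fraction
# of the largest (impulse-dominated) samples; the survivors serve as the
# observations for the sparse TFR reconstruction.
alpha = 0.10                          # assumed fraction of corrupted samples
order = np.argsort(np.abs(noisy))
keep = order[: int((1 - alpha) * len(noisy))]
observations = noisy[keep]
```

The retained samples would then be weighted and fed to the compressive-sensing optimization as described in the abstract.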
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Mixed semiclassical-classical propagators for the Wigner phase space representation
NASA Astrophysics Data System (ADS)
Koda, Shin-ichi
2016-04-01
We formulate mixed semiclassical-classical (SC-Cl) propagators by adding a further approximation to the phase-space SC propagators, which have been formulated in our previous paper [S. Koda, J. Chem. Phys. 143, 244110 (2015)]. We first show that the stationary phase approximation over the operation of the phase-space van Vleck propagator on initial distribution functions results in the classical mechanical time propagation. Then, after dividing the degrees of freedom (DOFs) of the total system into the semiclassical DOFs and the classical DOFs, the SC-Cl van Vleck propagator and the SC-Cl Herman-Kluk (HK) propagator are derived by performing the stationary phase approximation only with respect to the classical DOFs. These SC-Cl propagators naturally decompose into products of the phase-space SC propagators and the classical mechanical propagators when the system does not have any interaction between the semiclassical and the classical DOFs. In addition, we numerically compare the original phase-space HK (full HK) propagator and the SC-Cl HK propagator in terms of accuracy and efficiency, finding that the accuracy of the SC-Cl HK propagator can be comparable to that of the full HK propagator, although the latter is more accurate than the former in general. On the other hand, we confirm that the convergence speed of the SC-Cl HK propagator is faster than that of the full HK propagator. The present numerical tests indicate that the SC-Cl HK propagator can be more accurate than the full HK propagator when both use the same finite number of classical trajectories, owing to the balance between accuracy and efficiency.
Mixed semiclassical-classical propagators for the Wigner phase space representation.
Koda, Shin-Ichi
2016-04-21
We formulate mixed semiclassical-classical (SC-Cl) propagators by adding a further approximation to the phase-space SC propagators, which have been formulated in our previous paper [S. Koda, J. Chem. Phys. 143, 244110 (2015)]. We first show that the stationary phase approximation over the operation of the phase-space van Vleck propagator on initial distribution functions results in the classical mechanical time propagation. Then, after dividing the degrees of freedom (DOFs) of the total system into the semiclassical DOFs and the classical DOFs, the SC-Cl van Vleck propagator and the SC-Cl Herman-Kluk (HK) propagator are derived by performing the stationary phase approximation only with respect to the classical DOFs. These SC-Cl propagators naturally decompose into products of the phase-space SC propagators and the classical mechanical propagators when the system does not have any interaction between the semiclassical and the classical DOFs. In addition, we numerically compare the original phase-space HK (full HK) propagator and the SC-Cl HK propagator in terms of accuracy and efficiency, finding that the accuracy of the SC-Cl HK propagator can be comparable to that of the full HK propagator, although the latter is more accurate than the former in general. On the other hand, we confirm that the convergence speed of the SC-Cl HK propagator is faster than that of the full HK propagator. The present numerical tests indicate that the SC-Cl HK propagator can be more accurate than the full HK propagator when both use the same finite number of classical trajectories, owing to the balance between accuracy and efficiency.
The Search for Efficiency in Arboreal Ray Tracing Applications
NASA Astrophysics Data System (ADS)
van Leeuwen, M.; Disney, M.; Chen, J. M.; Gomez-Dans, J.; Kelbe, D.; van Aardt, J. A.; Lewis, P.
2016-12-01
Forest structure significantly impacts a range of abiotic conditions, including humidity and the radiation regime, all of which affect the rate of net and gross primary productivity. Current forest productivity models typically consider abstract media to represent the transfer of radiation within the canopy. Examples include the representation of forest structure via a layered canopy model, where leaf area and inclination angles are stratified with canopy depth, or as turbid media where leaves are randomly distributed within space or within confined geometric solids such as blocks, spheres, or cones. While these abstract models are known to produce accurate estimates of primary productivity at the stand level, their limited geometric resolution restricts applicability at fine spatial scales, such as the cell, leaf, or shoot levels, thereby not addressing the full potential of assimilating data from laboratory and field measurements with that of remote sensing technology. Recent research efforts have explored the use of laser scanning to capture detailed tree morphology at millimeter accuracy. These data can subsequently be used to combine ray tracing with primary productivity models, providing an ability to explore trade-offs among different morphological traits or assimilate data across spatial scales spanning the leaf to the stand level. Ray tracing has the major advantage of allowing the most accurate structural description of the canopy, and can directly exploit new 3D structural measurements, e.g., from laser scanning. However, the biggest limitation of ray tracing models is their high computational cost, which currently limits their use for large-scale applications. In this talk, we explore ways to exploit ray tracing simulations more efficiently and capture this information in a readily computable form for future evaluation, thus potentially enabling large-scale first-principles forest growth modelling applications.
Representation of Ion–Protein Interactions Using the Drude Polarizable Force-Field
2016-01-01
Small metal ions play critical roles in numerous biological processes. Of particular interest is how metalloenzymes are allosterically regulated by the binding of specific ions. Understanding how ion binding affects these biological processes requires atomic models that accurately treat the microscopic interactions with the protein ligands. Theoretical approaches at different levels of sophistication can contribute to a deeper understanding of these systems, although computational models must strike a balance between accuracy and efficiency in order to enable long molecular dynamics simulations. In this study, we present a systematic effort to optimize the parameters of a polarizable force field based on classical Drude oscillators to accurately represent the interactions between ions (K+, Na+, Ca2+, and Cl–) and coordinating amino-acid residues for a set of 30 biologically important proteins. By combining ab initio calculations and experimental thermodynamic data, we derive a polarizable force field that is consistent with a wide range of properties, including the geometries and interaction energies of gas-phase ion/protein-like model compound clusters, and the experimental solvation free-energies of the cations in liquids. The resulting models display significant improvements relative to the fixed-atomic-charge additive CHARMM C36 force field, particularly in their ability to reproduce the many-body electrostatic nonadditivity effects estimated from ab initio calculations. The analysis clarifies the fundamental limitations of the pairwise additivity assumption inherent in classical fixed-charge force fields, and shows its dramatic failures in the case of Ca2+ binding sites. These optimized polarizable models, amenable to computationally efficient large-scale MD simulations, set a firm foundation and offer a powerful avenue to study the roles of the ions in soluble and membrane transport proteins. PMID:25578354
A k-space method for large-scale models of wave propagation in tissue.
Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C
2001-03-01
Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) < or = c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. 
Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
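The core advantage the authors exploit, that spectral operations are exact for homogeneous media, can be seen in a minimal 1-D sketch. Grid size, sound speed, and time step below are arbitrary choices for illustration, not the paper's simulation parameters:

```python
import numpy as np

# Minimal 1-D sketch of the idea behind k-space/pseudospectral methods:
# spatial derivatives (here, advection of a right-going pulse) are exact
# in Fourier space, so the step introduces no grid-dispersion error.
N, dx, c0, dt, steps = 256, 1e-3, 1500.0, 1e-7, 100
x = np.arange(N) * dx
k = 2 * np.pi * np.fft.fftfreq(N, dx)

p0 = np.exp(-(((x - x.mean()) / (10 * dx)) ** 2))   # initial pressure pulse

# Exact spectral propagator for a homogeneous medium: a phase ramp
# exp(-i c0 k t) that translates the pulse by c0*t without dispersion.
P = np.fft.fft(p0) * np.exp(-1j * c0 * k * dt * steps)
p = np.real(np.fft.ifft(P))
# c0*dt*steps = 0.015 m = 15 grid cells: p is p0 shifted 15 samples
# (periodically), to machine precision.
```

The full k-space method generalizes this by coupling such spectral spatial operators with a k-t space temporal propagator that remains highly accurate for weakly inhomogeneous media.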
Grouper: A Compact, Streamable Triangle Mesh Data Structure.
Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek
2013-05-08
We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access.
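Grouper's grouping idea, packing two adjacent triangles and a shared vertex into one fixed-size record, can be caricatured with a greedy pairing over a toy triangle list. This is not the published construction algorithm (which orders records for streaming and solves an NP-hard assignment heuristically); it only illustrates why most records amortize to about two references per triangle:

```python
# Toy mesh: each triangle is a tuple of vertex indices.
tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (0, 2, 4)]

# Greedily pair each triangle with a later, still-unused triangle that
# shares at least one vertex; a record holds (tri_a, tri_b, shared_vertex).
records, used = [], set()
for i, t in enumerate(tris):
    if i in used:
        continue
    mate = next((j for j in range(i + 1, len(tris))
                 if j not in used and set(t) & set(tris[j])), None)
    if mate is None:
        records.append((t, None, None))       # lone-triangle record
    else:
        used.add(mate)
        shared = min(set(t) & set(tris[mate]))
        records.append((t, tris[mate], shared))
    used.add(i)
```

With most records covering two triangles, the per-triangle storage approaches the roughly two integer references the paper reports.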
Readmission prediction via deep contextual embedding of clinical concepts.
Xiao, Cao; Ma, Tengfei; Dieng, Adji B; Blei, David M; Wang, Fei
2018-01-01
Hospital readmissions impose substantial costs every year. Many hospital readmissions are avoidable, and excessive hospital readmissions can also be harmful to patients. Accurate prediction of hospital readmission can effectively help reduce readmission risk. However, the complex relationship between readmission and potential risk factors makes readmission prediction a difficult task. The main goal of this paper is to explore deep learning models to distill such complex relationships and make accurate predictions. We propose CONTENT, a deep model that predicts hospital readmissions via learning interpretable patient representations by capturing both local and global contexts from patient Electronic Health Records (EHR) through a hybrid Topic Recurrent Neural Network (TopicRNN) model. The experiment was conducted using the EHR of a real-world Congestive Heart Failure (CHF) cohort of 5,393 patients. The proposed model outperforms state-of-the-art methods in readmission prediction (e.g. 0.6103 ± 0.0130 vs. second best 0.5998 ± 0.0124 in terms of ROC-AUC). The derived patient representations were further utilized for patient phenotyping. The learned phenotypes provide a more precise understanding of readmission risks. Embedding both local and global context in patient representation not only improves prediction performance, but also brings interpretable insights for understanding readmission risks for heterogeneous chronic clinical conditions. This is the first model of its kind to integrate the power of both conventional deep neural networks and probabilistic generative models for highly interpretable deep patient representation learning. Experimental results and case studies demonstrate the improved performance and interpretability of the model.
Efficient Type Representation in TAL
NASA Technical Reports Server (NTRS)
Chen, Juan
2009-01-01
Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieve both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, the most compact result of which we are aware. The type checking time is about 2% of the compilation time.
Representing urban terrain characteristics in mesoscale meteorological and dispersion models is critical to produce accurate predictions of wind flow and temperature fields, air quality, and contaminant transport. A key component of the urban terrain representation is the charac...
Cumulative distribution functions and their use in monitoring programs
Ecological resource monitoring programs typically have estimating the status and change in status as an objective. A well designed and skillfully implemented survey design will produce an accurate representation of the status of the resource at the time the survey was conducted....
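The role of a cumulative distribution function in such monitoring programs can be sketched with an empirical CDF over made-up site measurements; status questions, such as the fraction of sites at or below a threshold, read directly off the curve:

```python
import numpy as np

# Made-up per-site values of some resource metric from one survey.
sample = np.array([3.1, 4.7, 2.2, 5.9, 4.1, 3.8, 6.2, 2.9])

def ecdf(data):
    """Empirical CDF: sorted values vs. cumulative proportion."""
    xs = np.sort(data)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ys

xs, ys = ecdf(sample)
# Status estimate: fraction of sites at or below an assumed threshold of 4.0.
frac_below_4 = ys[np.searchsorted(xs, 4.0, side="right") - 1]
```

Change in status between surveys would then be assessed by comparing the ECDF curves (or selected quantiles) from successive survey years.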
Heart-rate pulse-shift detector
NASA Technical Reports Server (NTRS)
Anderson, M.
1974-01-01
Detector circuit accurately separates and counts phase-shift pulses over wide range of basic pulse-rate frequency, and also provides reasonable representation of full repetitive EKG waveform. Single telemeter implanted in small animal monitors not only body temperature but also animal movement and heart rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Dejun, E-mail: dejun.lin@gmail.com
2015-09-21
Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green's function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms.
A software library based on this algorithm has been implemented in C++11 and has been released.
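The kernel-invariance claim above can be illustrated in a few lines. The sketch below is our own illustration, not the paper's algorithm or its recursion: it evaluates charge-charge and charge-dipole terms for an arbitrary radial kernel f(r), forming the gradient of f(|r|) numerically so that swapping kernels changes nothing structurally. All function names and the dipole sign convention are ours.

```python
import math

def grad_f(f, r, h=1e-6):
    """Central-difference gradient of f(|r|) with respect to the vector r."""
    def norm(v):
        return sum(x * x for x in v) ** 0.5
    g = []
    for i in range(3):
        rp = list(r); rm = list(r)
        rp[i] += h; rm[i] -= h
        g.append((f(norm(rp)) - f(norm(rm))) / (2 * h))
    return g

def pair_energy(q1, q2, p1, p2, r, f):
    """Charge-charge plus charge-dipole terms for an arbitrary kernel f(r).
    r points from particle 1 to particle 2; p1, p2 are point dipoles.
    Sign convention: dipole energy is p . grad(potential), one common choice."""
    dist = sum(x * x for x in r) ** 0.5
    g = grad_f(f, r)
    e = q1 * q2 * f(dist)                             # q1 q2 f(r)
    e += q1 * sum(p * gi for p, gi in zip(p2, g))     # charge 1 with dipole 2
    e -= q2 * sum(p * gi for p, gi in zip(p1, g))     # dipole 1 with charge 2
    return e

# Swapping the kernel changes nothing structurally:
def coulomb(r):
    return 1.0 / r

def yukawa(r):
    return math.exp(-0.5 * r) / r

zero = (0.0, 0.0, 0.0)
e_c = pair_energy(1.0, -1.0, zero, zero, (2.0, 0.0, 0.0), coulomb)   # -> -0.5
e_y = pair_energy(1.0, -1.0, (0.1, 0.0, 0.0), zero, (2.0, 0.0, 0.0), yukawa)
```

The same `pair_energy` body serves both kernels, which is the structural point of the abstract; the paper's actual working equations extend this to arbitrary multipole rank analytically rather than by finite differences.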
Orbital dependent functionals: An atom projector augmented wave method implementation
NASA Astrophysics Data System (ADS)
Xu, Xiao
This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations. In particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. The Fock exchange term is used as a simple, well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blochl has proven to be accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core-electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane-wave basis and implemented PWPAW-KLI code, which will appear in future publications.
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
Using Generative Representations to Evolve Robots. Chapter 1
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2004-01-01
Recent research has demonstrated the ability of evolutionary algorithms to automatically design both the physical structure and software controller of real physical robots. One of the challenges for these automated design systems is to improve their ability to scale to the high complexities found in real-world problems. Here we claim that for automated design systems to scale in complexity they must use a representation which allows for the hierarchical creation and reuse of modules, which we call a generative representation. Not only is the ability to reuse modules necessary for functional scalability, but it is also valuable for improving efficiency in testing and construction. We then describe an evolutionary design system with a generative representation capable of hierarchical modularity and demonstrate it for the design of locomoting robots in simulation. Finally, results from our experiments show that evolution with our generative representation produces better robots than those evolved with a non-generative representation.
NASA Astrophysics Data System (ADS)
Maliavkin, G. P.; Shmyrov, A. S.; Shmyrov, V. A.
2018-05-01
Vicinities of the collinear libration points of the Sun-Earth system are currently quite attractive for space navigation. Various missions have placed Sun-observing spacecraft at the L1 libration point and telescopes at L2 (e.g., the WIND, SOHO, Herschel, and Planck spacecraft). Because collinear libration points are unstable, the motion of a spacecraft in their vicinity must be stabilized. Laws of stabilizing motion control in the vicinity of L1 can be constructed using an analytical representation of the stable invariant manifold, and the efficiency of these control laws depends on the precision of that representation. Within Hill's approximation of the circular restricted three-body problem, in the rotating geocentric coordinate system, one can obtain an analytical representation of the invariant manifold filled with bounded trajectories as a series in powers of the phase variables. Approximate representations of orders one through four can be used to construct four laws of stabilizing feedback motion control under which trajectories approach the manifold. Numerical simulation allows a comparison of how the precision of the representation of the invariant manifold influences the efficiency of the control, expressed as energy consumption (characteristic velocity). It shows that using higher-order approximations in constructing the control laws can significantly reduce the energy consumed in implementing the control compared to the linear approximation.
On-board ephemeris representation for Topex/Poseidon
NASA Technical Reports Server (NTRS)
Salama, Ahmed H.
1990-01-01
The Topex/Poseidon satellite requires real-time on-board knowledge of the satellite and TDRS ephemerides for attitude determination and control and High-Gain Antenna (HGA) pointing. The ephemeris representation concept for the MMS (Multimission Modular Spacecraft) satellites has shown that compressing the predicted ephemeris into a Fourier Power Series (FPS) before uplinking, in conjunction with the On-Board Computer (OBC) ephemeris reconstruction algorithms, is an efficient technique for ephemeris representation. As an MMS-based satellite, Topex/Poseidon has inherited the Landsat ephemeris representation concept, including a daily FPS upload. This paper presents the Topex/Poseidon concept, analysis, and results, including the conclusion that the ephemeris representation duration can be extended to 10 days or more and that convenient weekly uploading can be adopted without an increase in OBC memory requirements.
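The compress-then-reconstruct idea can be sketched as follows. This is a toy illustration with invented function names, not the Topex/Poseidon flight code: a sampled orbital coordinate is projected onto a truncated Fourier series on the ground, and the on-board side evaluates the series from the coefficients alone.

```python
import math

def fit_fps(samples, n_harmonics):
    """Fourier coefficients over one fundamental period, assuming uniform
    sampling (discrete projections). Returns [(a_k, b_k)] for k = 0..n."""
    n = len(samples)
    a0 = sum(samples) / n
    coeffs = [(a0, 0.0)]
    for k in range(1, n_harmonics + 1):
        ak = 2.0 / n * sum(s * math.cos(2 * math.pi * k * i / n)
                           for i, s in enumerate(samples))
        bk = 2.0 / n * sum(s * math.sin(2 * math.pi * k * i / n)
                           for i, s in enumerate(samples))
        coeffs.append((ak, bk))
    return coeffs

def eval_fps(coeffs, t):
    """Reconstruct the coordinate at phase t in [0, 1) -- the on-board step."""
    x = coeffs[0][0]
    for k, (ak, bk) in enumerate(coeffs[1:], start=1):
        x += ak * math.cos(2 * math.pi * k * t) + bk * math.sin(2 * math.pi * k * t)
    return x

# A toy "orbit radius" with two harmonics is recovered from a handful of
# coefficients in place of 256 raw samples:
orbit = [7000.0 + 8.0 * math.cos(2 * math.pi * i / 256)
         + 3.0 * math.sin(4 * math.pi * i / 256) for i in range(256)]
fps = fit_fps(orbit, 4)
err = max(abs(eval_fps(fps, i / 256) - orbit[i]) for i in range(256))
```

A real ephemeris is not exactly periodic, so the flight concept fits the series over the prediction span and re-uplinks coefficients periodically (daily for Landsat, weekly as concluded here); the toy above only shows the compression mechanics.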
A path-oriented matrix-based knowledge representation system
NASA Technical Reports Server (NTRS)
Feyock, Stefan; Karamouzis, Stamos T.
1993-01-01
Experience has shown that designing a good representation is often the key to turning hard problems into simple ones. Most AI (Artificial Intelligence) search/representation techniques are oriented toward an infinite domain of objects and arbitrary relations among them. In reality much of what needs to be represented in AI can be expressed using a finite domain and unary or binary predicates. Well-known vector- and matrix-based representations can efficiently represent finite domains and unary/binary predicates, and allow effective extraction of path information by generalized transitive closure/path matrix computations. In order to avoid space limitations a set of abstract sparse matrix data types was developed along with a set of operations on them. This representation forms the basis of an intelligent information system for representing and manipulating relational data.
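A minimal sketch of the representation described above (our own code, not the system itself): a finite domain with a binary relation stored as one bitmask per row, with path information extracted by a transitive-closure computation over whole rows at a time.

```python
def transitive_closure(rows):
    """rows[i] is a bitmask of the j such that i R j.
    Warshall's algorithm, one bitwise OR per reachable row."""
    n = len(rows)
    closure = list(rows)
    for k in range(n):
        bit_k = 1 << k
        for i in range(n):
            if closure[i] & bit_k:        # i reaches k ...
                closure[i] |= closure[k]  # ... so i reaches all k reaches
    return closure

# Domain {0, 1, 2, 3} with 0 -> 1 -> 2 and 3 isolated:
rows = [0b0010, 0b0100, 0b0000, 0b0000]
tc = transitive_closure(rows)
# tc[0] == 0b0110: node 0 now reaches both 1 and 2
```

Packing each row into a machine word is one simple stand-in for the abstract sparse-matrix types the abstract mentions; a production system would switch between dense bitmasks and sparse adjacency lists depending on fill.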
Overcomplete compact representation of two-particle Green's functions
NASA Astrophysics Data System (ADS)
Shinaoka, Hiroshi; Otsuki, Junya; Haule, Kristjan; Wallerberger, Markus; Gull, Emanuel; Yoshimi, Kazuyoshi; Ohzeki, Masayuki
2018-05-01
Two-particle Green's functions and the vertex functions play a critical role in theoretical frameworks for describing strongly correlated electron systems. However, numerical calculations at the two-particle level often suffer from large computation time and massive memory consumption. We derive a general expansion formula for the two-particle Green's functions in terms of an overcomplete representation based on the recently proposed "intermediate representation" basis. The expansion formula is obtained by decomposing the spectral representation of the two-particle Green's function. We demonstrate that the expansion coefficients decay exponentially, while all high-frequency and long-tail structures in the Matsubara-frequency domain are retained. This representation therefore enables efficient treatment of two-particle quantities and opens a route to the application of modern many-body theories to realistic strongly correlated electron systems.
Spatiotemporal dynamics of similarity-based neural representations of facial identity.
Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene
2017-01-10
Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
The influence of signal type on the internal auditory representation of a room
NASA Astrophysics Data System (ADS)
Teret, Elizabeth
Currently, architectural acousticians make no real distinction between a room impulse response and the auditory system's internal representation of a room. In the absence of a good model for the auditory representation of a room, it is implicitly assumed that our internal representation of a room is independent of the sound source needed to make the room characteristics audible. The extent to which this assumption holds true is examined with perceptual tests. Listeners are presented with various pairs of signals (music, speech, and noise) convolved with synthesized impulse responses of different reverberation times. They are asked to adjust the reverberation of one of the signals to match the other. Analysis of the data shows that the source signal significantly influences perceived reverberance. Listeners are less accurate when matching reverberation times of varied signals than they are with identical signals. Additional testing shows that perception of reverberation can be linked to the existence of transients in the signal.
Computational models of location-invariant orthographic processing
NASA Astrophysics Data System (ADS)
Dandurand, Frédéric; Hannagan, Thomas; Grainger, Jonathan
2013-03-01
We trained three topologies of backpropagation neural networks to discriminate 2000 words (lexical representations) presented at different positions of a horizontal letter array. The first topology (zero-deck) contains no hidden layer, the second (one-deck) has a single hidden layer, and for the last topology (two-deck), the task is divided into two subtasks implemented as two stacked neural networks, with explicit word-centred letters as intermediate representations. All topologies successfully simulated two key benchmark phenomena observed in skilled human reading: transposed-letter priming and relative-position priming. However, the two-deck topology most accurately simulated the ability to discriminate words from nonwords, while containing the fewest connection weights. We analysed the internal representations after training. Zero-deck networks implement a letter-based scheme with a position bias to differentiate anagrams. One-deck networks implement a holographic overlap coding in which representations are essentially letter-based and words are linear combinations of letters. Two-deck networks also implement holographic coding.
AND/OR graph representation of assembly plans
NASA Astrophysics Data System (ADS)
Homem de Mello, Luiz S.; Sanderson, Arthur C.
1990-04-01
A compact representation of all possible assembly plans of a product using AND/OR graphs is presented as a basis for efficient planning algorithms that allow an intelligent robot to pick a course of action according to instantaneous conditions. The AND/OR graph is equivalent to a state transition graph but requires fewer nodes and simplifies the search for feasible plans. Three applications are discussed: (1) the preselection of the best assembly plan, (2) the recovery from execution errors, and (3) the opportunistic scheduling of tasks. An example of an assembly with four parts illustrates the use of the AND/OR graph representation in assembly-plan preselection, based on the weighting of operations according to complexity of manipulation and stability of subassemblies. A hypothetical error situation is discussed to show how a bottom-up search of the AND/OR graph leads to an efficient recovery.
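The best-plan preselection use case can be illustrated with a small sketch (our own encoding and cost model, not the paper's): each subassembly node offers alternative decompositions (the OR branches), each decomposition requires both of the child subassemblies it creates (the AND), and the best plan falls out of a bottom-up minimization.

```python
def best_plan(graph, node, memo=None):
    """Return (cost, plan) for assembling `node`; single parts cost 0.
    `graph` maps a subassembly to its list of (op_cost, left, right)
    decompositions; plan is a nested (node, op_cost, left_plan, right_plan)."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    options = graph.get(node, [])
    if not options:                           # single part: nothing to do
        memo[node] = (0.0, node)
        return memo[node]
    best = None
    for op_cost, left, right in options:      # OR over decompositions
        cl, pl = best_plan(graph, left, memo)
        cr, pr = best_plan(graph, right, memo)
        total = op_cost + cl + cr             # AND: both children are needed
        if best is None or total < best[0]:
            best = (total, (node, op_cost, pl, pr))
    memo[node] = best
    return best

# A four-part product ABCD with two candidate decompositions at the root:
graph = {
    "ABCD": [(2.0, "AB", "CD"), (5.0, "ABC", "D")],
    "AB":   [(1.0, "A", "B")],
    "CD":   [(1.0, "C", "D")],
    "ABC":  [(3.0, "AB", "C")],
}
cost, plan = best_plan(graph, "ABCD")
# cost == 4.0 via the AB/CD route (2 + 1 + 1)
```

In the paper the operation weights encode manipulation complexity and subassembly stability; error recovery corresponds to re-running the same bottom-up search from whatever subassemblies survive the failure.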
Novel transform for image description and compression with implementation by neural architectures
NASA Astrophysics Data System (ADS)
Ben-Arie, Jezekiel; Rao, Raghunath K.
1991-10-01
A general method is described for signal representation using nonorthogonal basis functions composed of Gaussians. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
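The maximum-energy-reduction paradigm can be sketched as plain matching pursuit over a dictionary of Gaussian atoms. This is a simplification: the paper's A* search over Gaussian groups is replaced here by a greedy scan, and all names are ours.

```python
import math

def gaussian(n, center, sigma):
    """Unit-norm Gaussian atom sampled on n points."""
    g = [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in g))
    return [x / norm for x in g]

def matching_pursuit(signal, centers, sigmas, n_atoms):
    """Greedily pick the atom most correlated with the residual, subtract it,
    repeat: each step removes the maximum possible energy for one atom."""
    residual = list(signal)
    atoms = []
    for _ in range(n_atoms):
        best = None
        for c in centers:
            for s in sigmas:
                g = gaussian(len(signal), c, s)
                corr = sum(r * x for r, x in zip(residual, g))
                if best is None or abs(corr) > abs(best[0]):
                    best = (corr, c, s, g)
        corr, c, s, g = best
        residual = [r - corr * x for r, x in zip(residual, g)]
        atoms.append((c, s, corr))
    return atoms, residual

# A single Gaussian bump is captured by one atom almost exactly:
sig = gaussian(64, 20, 3.0)
atoms, res = matching_pursuit(sig, range(0, 64, 4), [2.0, 3.0, 4.0], 1)
energy_left = sum(r * r for r in res)
```

Because the atoms are nonorthogonal, the greedy coefficients are not the least-squares ones; that gap is what the paper's adaptive-lattice scheme addresses in its second method.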
A non-linear dimension reduction methodology for generating data-driven stochastic input models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A is contained in R^d (d<
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically a Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity over the unit sphere S(2). However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S(2). Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes an SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD.
Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
Protein–protein docking by fast generalized Fourier transforms on 5D rotational manifolds
Padhorny, Dzmitry; Kazennov, Andrey; Zerbe, Brandon S.; Porter, Kathryn A.; Xia, Bing; Mottarella, Scott E.; Kholodov, Yaroslav; Ritchie, David W.; Vajda, Sandor; Kozakov, Dima
2016-01-01
Energy evaluation using fast Fourier transforms (FFTs) enables sampling billions of putative complex structures and hence revolutionized rigid protein–protein docking. However, in current methods, efficient acceleration is achieved only in either the translational or the rotational subspace. Developing an efficient and accurate docking method that expands FFT-based sampling to five rotational coordinates is an extensively studied but still unsolved problem. The algorithm presented here retains the accuracy of earlier methods but yields at least 10-fold speedup. The improvement is due to two innovations. First, the search space is treated as the product manifold SO(3)×(SO(3)∖S1), where SO(3) is the rotation group representing the space of the rotating ligand, and (SO(3)∖S1) is the space spanned by the two Euler angles that define the orientation of the vector from the center of the fixed receptor toward the center of the ligand. This representation enables the use of efficient FFT methods developed for SO(3). Second, we select the centers of highly populated clusters of docked structures, rather than the lowest energy conformations, as predictions of the complex, and hence there is no need for very high accuracy in energy evaluation. Therefore, it is sufficient to use a limited number of spherical basis functions in the Fourier space, which increases the efficiency of sampling while retaining the accuracy of docking results. A major advantage of the method is that, in contrast to classical approaches, increasing the number of correlation function terms is computationally inexpensive, which enables using complex energy functions for scoring. PMID:27412858
DOT National Transportation Integrated Search
1980-06-01
The purpose of this report is to provide the tunneling profession with improved practical tools in the technical or design area, which provide more accurate representations of the ground-structure interaction in tunneling. The design methods range fr...
12 CFR 621.14 - Certification of correctness.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REQUIREMENTS Report of Condition and Performance § 621.14 Certification of correctness. Each report of financial condition and performance filed with the Farm Credit Administration shall be certified as having... accurate representation of the financial condition and performance of the institution to which it applies...
Verification of KAM Theory on Earth Orbiting Satellites
2010-03-01
(Table-of-contents residue; recoverable headings: 2.2 The Two Body Problem; 2.3 Geocentric and Geographic coordinates; geocentric latitude; center-of-Earth radius.) ...their gravitational fields a different approach must be used. For the moment the above representation is sufficient, but a more accurate model will be
Latent heat sink in soil heat flux measurements
USDA-ARS?s Scientific Manuscript database
The surface energy balance includes a term for soil heat flux. Soil heat flux is difficult to measure because it includes conduction and convection heat transfer processes. Accurate representation of soil heat flux is an important consideration in many modeling and measurement applications. Yet, the...
Conflicting Rationalities, Knowledge and Values in Scarred Landscapes
ERIC Educational Resources Information Center
Collier, Marcus J.; Scott, Mark
2009-01-01
Incorporating public or local preferences in landscape planning is often discussed with respect to the difficulties associated with accurate representation, stimulating interest and overcoming barriers to participation. Incorporating sectoral and professional preferences may also have the same degree of difficulty where conflicts can arise.…
Diamond Head Revisited with Ammonium Dichromate.
ERIC Educational Resources Information Center
Arrigoni, Edward
1981-01-01
The classroom demonstration using ammonium dichromate to simulate a volcanic eruption can be modified into a more dramatic and accurate representation of the geologic processes involved in the formation of a volcanic crater. The materials, demonstration setup, safety procedures, and applications to instruction are presented. (Author/WB)
Use of Rare Earth Elements in investigations of aeolian processes
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
NASA Astrophysics Data System (ADS)
Dalguer, L. A.; Day, S. M.
2006-12-01
Accuracy in finite difference (FD) solutions to spontaneous rupture problems is controlled principally by the scheme used to represent the fault discontinuity, and not by the grid geometry used to represent the continuum. We have numerically tested three fault representation methods: the Thick Fault (TF) proposed by Madariaga et al. (1998), the Stress Glut (SG) described by Andrews (1999), and the Staggered-Grid Split-Node (SGSN) method proposed by Dalguer and Day (2006), each implemented in the fourth-order velocity-stress staggered-grid (VSSG) FD scheme. The TF and the SG methods approximate the discontinuity through inelastic increments to stress components ("inelastic-zone" schemes) at a set of stress grid points taken to lie on the fault plane. With this type of scheme, the fault surface is indistinguishable from an inelastic zone with a thickness given by a spatial step dx for the SG, and 2dx for the TF model. The SGSN method uses the traction-at-split-node (TSN) approach adapted to the VSSG FD. This method represents the fault discontinuity by explicitly incorporating discontinuity terms at velocity nodes in the grid, with interactions between the "split nodes" occurring exclusively through the tractions (frictional resistance) acting between them. These tractions in turn are controlled by the jump conditions and a friction law. Our 3D test-problem solutions show that the inelastic-zone TF and SG methods perform much more poorly than the SGSN formulation. The SG inelastic-zone method achieved solutions that are qualitatively meaningful and quantitatively reliable to within a few percent. The TF inelastic-zone method did not achieve qualitative agreement with the reference solutions to the 3D test problem, and proved to be sufficiently computationally inefficient that it was not feasible to explore convergence quantitatively. The SGSN method gives very accurate solutions, and is also very efficient.
Reliable solution of the rupture time is reached with a median resolution of the cohesive zone of only ~2 grid points, and efficiency is competitive with the Boundary Integral (BI) method. The results presented here demonstrate that appropriate fault representation in a numerical scheme is crucial to reduce uncertainties in numerical simulations of earthquake source dynamics and ground motion, and therefore important to improving our understanding of earthquake physics in general.
Yin, Xiu-xing; Lin, Yong-gang; Li, Wei; Liu, Hong-wei; Gu, Ya-jing
2015-09-01
A variable-displacement pump controlled pitch system is proposed to mitigate generator power and flap-wise load fluctuations for wind turbines. The pitch system mainly consists of a variable-displacement hydraulic pump, a fixed-displacement hydraulic motor, and a gear set. The hydraulic motor can be accurately regulated by controlling the pump displacement and fluid flows to change the pitch angle through the gear set. The detailed mathematical representation and dynamic characteristics of the proposed pitch system are thoroughly analyzed. An adaptive sliding-mode pump displacement controller and a back-stepping stroke piston controller are designed for the proposed pitch system such that the resulting pitch angle tracks its desired value regardless of external disturbances and uncertainties. The effectiveness and control efficiency of the proposed pitch system and controllers have been verified using a realistic dataset from a 750 kW research wind turbine. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Approximate inference on planar graphs using loop calculus and belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael; Gomez, Vicenc; Kappen, Hilbert
We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006b) makes it possible to express the exact partition function Z of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in Chertkov et al. (2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze in detail both the loop series and the Pfaffian series for models with binary variables and pairwise interactions, and show that the first term of the Pfaffian series can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
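For intuition, the Pfaffian that each series term evaluates has a closed form in the smallest non-trivial case: for a 4x4 antisymmetric matrix, pf(A) = a12·a34 - a13·a24 + a14·a23, with pf(A)² = det(A). The matrix below is an arbitrary illustration, unrelated to any particular graphical model:

```python
# Pfaffian of a 4x4 antisymmetric matrix (0-indexed entries):
#   pf(A) = A[0][1]*A[2][3] - A[0][2]*A[1][3] + A[0][3]*A[1][2]
def pfaffian4(A):
    return A[0][1] * A[2][3] - A[0][2] * A[1][3] + A[0][3] * A[1][2]

A = [[ 0,  1,  2, 3],
     [-1,  0,  4, 5],
     [-2, -4,  0, 6],
     [-3, -5, -6, 0]]
print(pfaffian4(A))  # 1*6 - 2*5 + 3*4 = 8
```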
Using Neural Networks to Describe Tracer Correlations
NASA Technical Reports Server (NTRS)
Lary, D. J.; Mueller, M. D.; Mussa, H. Y.
2003-01-01
Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
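The architecture described above (one hidden layer, eight nodes) can be sketched from scratch. The toy network below is trained with plain stochastic gradient descent on a synthetic single-input curve; Quickprop and the full four-input parametrization of the study are not reproduced here:

```python
import math, random

# One hidden layer of 8 tanh units, fit to a synthetic smooth
# tracer-tracer curve (arbitrary units) by plain SGD.
random.seed(0)
H = 8
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# synthetic compact correlation curve standing in for CH4 -> N2O
data = [(i / 20.0, 0.3 - 0.2 * (i / 20.0) ** 2) for i in range(21)]

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

lr = 0.05
for _ in range(2000):
    for x, y in data:
        out, h = forward(x)
        err = out - y
        for j in range(H):
            dh = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err

print(mse())  # small residual error after training
```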
NASA Astrophysics Data System (ADS)
Lemordant, Léo.; Gentine, Pierre; Stéfanon, Marc; Drobinski, Philippe; Fatichi, Simone
2016-10-01
Plant stomata couple the energy, water, and carbon cycles. We use the framework of Regional Climate Modeling to simulate the 2003 European heat wave and assess how higher levels of surface CO2 may affect such an extreme event through land-atmosphere interactions. Increased CO2 modifies the seasonality of the water cycle through stomatal regulation and increased leaf area. As a result, the water saved during the growing season through higher water use efficiency mitigates summer dryness and the heat wave impact. Land-atmosphere interactions and CO2 fertilization together synergistically contribute to increased summer transpiration. This, in turn, alters the surface energy budget and decreases sensible heat flux, mitigating air temperature rise. Accurate representation of the response to higher CO2 levels and of the coupling between the carbon and water cycles is therefore critical to forecasting seasonal climate and water cycle dynamics, and to enhancing the accuracy of extreme event prediction under future climate.
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster R-CNN (Faster Regional CNN) framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation, and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve the precision. We propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
Infrared target tracking via weighted correlation filter
NASA Astrophysics Data System (ADS)
He, Yu-Jie; Li, Min; Zhang, JinLi; Yao, Jun-Ping
2015-11-01
Design of an effective target tracker is an important and challenging task for many applications due to multiple factors which can cause disturbance in infrared video sequences. In this paper, an infrared target tracking method under the tracking-by-detection framework based on a weighted correlation filter is presented. This method consists of two parts: detection and filtering. For the detection stage, we propose a sequential detection method for the infrared target based on low-rank representation. For the filtering stage, a new multi-feature weighted function is proposed that fuses different target features and takes the importance of the different regions into consideration. The weighted function is then incorporated into a correlation filter to compute a confidence map more accurately, in order to indicate the best target location based on the detection results obtained from the first stage. Extensive experimental results on different video sequences demonstrate that the proposed method performs favorably for detection and tracking compared with baseline methods in terms of efficiency and accuracy.
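In one dimension, and without the learned weighting, a correlation filter's confidence map reduces to circular cross-correlation of a template with the current frame; the peak index indicates the target's displacement. The signals below are arbitrary illustrations, not the paper's features:

```python
# Naive circular cross-correlation as a stand-in for the correlation
# filter's confidence map; real trackers compute this in the Fourier
# domain for efficiency.
def confidence_map(template, frame):
    n = len(frame)
    return [sum(template[i] * frame[(i + shift) % n] for i in range(n))
            for shift in range(n)]

template = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]   # learned appearance
frame    = [0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]   # target moved right by 2
resp = confidence_map(template, frame)
best = max(range(len(resp)), key=resp.__getitem__)     # peak of the map
print(best)  # estimated displacement
```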
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2000-01-01
A new, widely applicable model for local interfacial debonding in composite materials is presented. Unlike its direct predecessors, the new model allows debonding to progress via unloading of interfacial stresses even as global loading of the composite continues. Previous debonding models employed for analysis of titanium matrix composites are surpassed by the accuracy, simplicity, and efficiency demonstrated by the new model. The new model was designed to operate seamlessly within NASA Glenn's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), which was employed to simulate the time- and rate-dependent (viscoplastic) transverse tensile and creep behavior of SiC/Ti composites. MAC/GMC's ability to simulate the transverse behavior of titanium matrix composites has been significantly improved by the new debonding model. Further, results indicate the need for a more accurate constitutive representation of the titanium matrix behavior in order to enable predictions of the composite transverse response, without resorting to recalibration of the debonding model parameters.
Using Deep Learning for Compound Selectivity Prediction.
Zhang, Ruisheng; Li, Juan; Lu, Jingjing; Hu, Rongjing; Yuan, Yongna; Zhao, Zhili
2016-01-01
Compound selectivity prediction plays an important role in identifying potential compounds that bind to the target of interest with high affinity. However, efficient and accurate computational approaches for analyzing and predicting compound selectivity are still lacking. In this paper, we propose two methods to improve the compound selectivity prediction. We employ an improved multitask learning method in Neural Networks (NNs), which not only incorporates both activity and selectivity for other targets, but also uses a probabilistic classifier with a logistic regression. We further improve the compound selectivity prediction by using the multitask learning method in Deep Belief Networks (DBNs) which can build a distributed representation model and improve the generalization of the shared tasks. In addition, we assign different weights to the auxiliary tasks that are related to the primary selectivity prediction task. In contrast to other related work, our methods greatly improve the accuracy of the compound selectivity prediction; in particular, using the multitask learning in DBNs with modified weights obtains the best performance.
A spectral water index based on visual bands
NASA Astrophysics Data System (ADS)
Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed
2013-10-01
Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery data from DubaiSat-1. NOWI exploits the spectral characteristics of water content (using visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes at lower brightness values whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. Analysis has indicated that NOWI has the advantages that it: a) is a pixel-based method that requires no global knowledge of the scene under investigation, b) can be easily implemented in parallel processing, c) is image-independent and requires no training, d) works in different environmental conditions, e) provides high accuracy and efficiency, and f) works directly on the input image without any form of pre-processing.
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
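A minimal numerical illustration of the weighted Schatten-p quantity: raise each singular value to the p-th power, weight it, sum, and take the 1/p root. For a 2x2 matrix the singular values follow in closed form from the eigenvalues of A^T A, so no linear-algebra library is needed:

```python
import math

# Weighted Schatten-p quantity used in WSPQ-style formulations:
#   ||A||_{w,Sp} = (sum_i w_i * sigma_i(A)^p)^(1/p)
def singular_values_2x2(A):
    (a, b), (c, d) = A
    m11, m12, m22 = a*a + c*c, a*b + c*d, b*b + d*d   # entries of A^T A
    mean = (m11 + m22) / 2.0
    disc = math.sqrt(((m11 - m22) / 2.0) ** 2 + m12 * m12)
    return [math.sqrt(max(mean + disc, 0.0)),
            math.sqrt(max(mean - disc, 0.0))]

def weighted_schatten_p(A, weights, p):
    return sum(w * s ** p
               for w, s in zip(weights, singular_values_2x2(A))) ** (1.0 / p)

# with unit weights and p = 1 this reduces to the nuclear norm: 3 + 1 = 4
print(weighted_schatten_p([[3.0, 0.0], [0.0, 1.0]], [1.0, 1.0], 1.0))
```

Choosing p < 1 and down-weighting the large singular values is what lets the formulation approximate the rank function more closely than the plain nuclear norm.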
A SPECT system simulator built on the SolidWorksTM 3D-design package.
Li, Xin; Furenlid, Lars R
2014-08-17
We have developed a GPU-accelerated SPECT system simulator that integrates into the instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorksTM-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorksTM and save them in STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system.
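Tracing a gamma ray through an STL mesh reduces to repeated ray/triangle intersection tests, since STL files are just lists of triangles. Below is the standard Möller-Trumbore test, a sketch rather than the simulator's actual code:

```python
# Möller-Trumbore ray/triangle intersection: returns the distance t along
# the ray to the hit point, or None if the ray misses the triangle.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None        # hit distance along the ray

# ray pointing down the z-axis hits the unit triangle at distance 5
hit = ray_triangle((0.2, 0.2, 5.0), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(hit)
```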
Machine learning strategy for accelerated design of polymer dielectrics
Mannodi-Kanakkithodi, Arun; Pilania, Ghanshyam; Huan, Tran Doan; ...
2016-02-15
The ability to efficiently design new and advanced dielectric polymers is hampered by the lack of sufficient, reliable data on wide polymer chemical spaces, and the difficulty of generating such data given time and computational/experimental constraints. Here, we address the issue of accelerating polymer dielectrics design by extracting learning models from data generated by accurate state-of-the-art first principles computations for polymers occupying an important part of the chemical subspace. The polymers are ‘fingerprinted’ as simple, easily attainable numerical representations, which are mapped to the properties of interest using a machine learning algorithm to develop an on-demand property prediction model. Further, a genetic algorithm is utilised to optimise polymer constituent blocks in an evolutionary manner, thus directly leading to the design of polymers with given target properties. Furthermore, while this philosophy of learning to make instant predictions and design is demonstrated here for the example of polymer dielectrics, it is equally applicable to other classes of materials as well.
Automated Proton Track Identification in MicroBooNE Using Gradient Boosted Decision Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodruff, Katherine
MicroBooNE is a liquid argon time projection chamber (LArTPC) neutrino experiment that is currently running in the Booster Neutrino Beam at Fermilab. LArTPC technology allows for high-resolution, three-dimensional representations of neutrino interactions. A wide variety of software tools for automated reconstruction and selection of particle tracks in LArTPCs are actively being developed. Short, isolated proton tracks, the signal for low-momentum-transfer neutral current (NC) elastic events, are easily hidden in a large cosmic background. Detecting these low-energy tracks will allow us to probe interesting regions of the proton's spin structure. An effective method for selecting NC elastic events is to combine a highly efficient track reconstruction algorithm to find all candidate tracks with highly accurate particle identification using a machine learning algorithm. We present our work on particle track classification using gradient tree boosting software (XGBoost) and the performance on simulated neutrino data.
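The boosting idea can be illustrated with depth-1 decision stumps and a least-squares loss; this is a toy stand-in for XGBoost, with hypothetical track lengths as the single feature and "proton-like" as the positive label:

```python
# Gradient boosting with decision stumps: each round fits a stump to the
# current residuals and adds a shrunken copy of it to the ensemble.
def fit_stump(xs, residuals):
    best = None
    for t in sorted(set(xs)):
        left  = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    return best[1:]                       # (threshold, left val, right val)

def boost(xs, ys, rounds=20, lr=0.5):
    pred, model = [0.0] * len(xs), []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        t, lm, rm = fit_stump(xs, resid)
        model.append((t, lm, rm))
        pred = [p + lr * (lm if x <= t else rm) for x, p in zip(xs, pred)]
    return model

def predict(model, x, lr=0.5):
    return sum(lr * (lm if x <= t else rm) for t, lm, rm in model)

xs = [1.0, 1.5, 2.0, 2.5, 6.0, 7.0, 8.0, 9.0]   # toy track lengths (cm)
ys = [1, 1, 1, 1, 0, 0, 0, 0]                    # short tracks ~ proton-like
model = boost(xs, ys)
```

Real libraries add a regularized second-order objective, feature subsampling, and many trees, but the residual-fitting loop is the same.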
Pan, Han; Jing, Zhongliang; Qiao, Lingfeng; Li, Minzhe
2017-09-25
Image restoration is a difficult and challenging problem in various imaging applications. However, despite the benefits of a single overcomplete dictionary, several challenges remain in capturing the geometric structure of the image of interest. To more accurately represent the local structures of the underlying signals, we propose a new problem formulation for sparse representation with a block-orthogonal constraint. There are three contributions. First, a framework for discriminative structured dictionary learning is proposed, which leads to a smooth manifold structure and quotient search spaces. Second, an alternating minimization scheme is proposed after taking both the cost function and the constraints into account. This is achieved by iteratively alternating between updating the block structure of the dictionary defined on the Grassmann manifold and sparsifying the dictionary atoms automatically. Third, Riemannian conjugate gradient is considered to track local subspaces efficiently with a convergence guarantee. Extensive experiments on various datasets demonstrate that the proposed method outperforms the state-of-the-art methods on the removal of mixed Gaussian-impulse noise.
Interoceptive awareness moderates neural activity during decision-making.
Werner, Natalie S; Schweitzer, Nicola; Meindl, Thomas; Duschek, Stefan; Kambeitz, Joseph; Schandry, Rainer
2013-12-01
The current study examined the relationship between conscious perception of somatic feedback (interoceptive awareness) and neural responses preceding decision-making. Previous research has suggested that decision-making is influenced by body signals from the periphery or the central representation of the periphery. In an event-related fMRI study, participants, whose interoceptive awareness was assessed using a heartbeat perception paradigm, performed the Iowa Gambling Task. The results show a positive relationship between the degree of interoceptive awareness and selection-related activity in the right anterior insula and the left postcentral gyrus. Neural activity within the right anterior insula was associated with decision-making performance only in individuals with accurate, but not inaccurate, interoceptive awareness. These findings support the role of somatic feedback in decision-making processes. They indicate that the right anterior insula holds a representation of somatic markers and that these are more strongly processed with increased interoceptive awareness. Copyright © 2013 Elsevier B.V. All rights reserved.
McCarthy, J. Daniel; Barnes, Lianne N.; Alvarez, Bryan D.; Caplovitz, Gideon Paul
2013-01-01
In grapheme-color synesthesia, graphemes (e.g., numbers or letters) evoke color experiences. It is generally reported that the opposite is not true: colors will not generate experiences of graphemes or their associated information. However, recent research has provided evidence that colors can implicitly elicit symbolic representations of associated graphemes. Here, we examine if these representations can be cognitively accessed. Using a mathematical verification task replacing graphemes with color patches, we find that synesthetes can verify such problems with colors as accurately as with graphemes. Doing so, however, takes time: ~250ms per color. Moreover, we find minimal reaction time switch-costs for switching between computing with graphemes and colors. This demonstrates that given specific task demands, synesthetes can cognitively access numerical information elicited by physical colors, and they do so as accurately as with graphemes. We discuss these results in the context of possible cognitive strategies used to access the information. PMID:24100131
High-order space charge effects using automatic differentiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reusch, Michael F.; Bruhwiler, David L.; Computer Accelerator Physics Conference Williamsburg, Virginia 1996
1997-02-01
The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used to either track an array of particles or construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series and a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach.
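Operator overloading for map construction can be illustrated with first-order dual numbers, the simplest instance of the technique: the same function tracks a plain number or, fed a dual number, also carries the derivative. The thin-lens map below is a generic accelerator-optics example, not TOPKARK code:

```python
# First-order dual numbers via operator overloading: value plus one
# derivative. Full Taylor-map codes carry truncated power series in all
# six phase-space variables instead of a single derivative.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.der + self.der * o.val)  # product rule
    __rmul__ = __mul__

def thin_lens_map(x, xp, f):
    """One thin-lens kick x' -> x' - x/f; works on floats or Duals."""
    return x, xp + (-1.0 / f) * x

x0 = Dual(1e-3, 1.0)          # seed d/dx0 = 1 to differentiate w.r.t. x0
x1, xp1 = thin_lens_map(x0, Dual(0.0), f=2.0)
print(xp1.val, xp1.der)       # kick value and d(xp1)/d(x0) = -1/f
```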
Computer-Based Learning: Interleaving Whole and Sectional Representation of Neuroanatomy
ERIC Educational Resources Information Center
Pani, John R.; Chariker, Julia H.; Naaz, Farah
2013-01-01
The large volume of material to be learned in biomedical disciplines requires optimizing the efficiency of instruction. In prior work with computer-based instruction of neuroanatomy, it was relatively efficient for learners to master whole anatomy and then transfer to learning sectional anatomy. It may, however, be more efficient to continuously…
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). 
In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally grounded explanation. Finally, the model also provided an explanation as to why some studies have failed to report verbal overshadowing. Thus, the present study suggests it is not constructive to discuss whether verbal overshadowing exists or not in an all-or-none manner, and instead suggests a better experimental paradigm to further explore this phenomenon.
eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration
Wu, Guorong; Peng, Xuewei; Ying, Shihui; Wang, Qian; Yap, Pew-Thian; Shen, Dan; Shen, Dinggang
2016-01-01
Effective and efficient spatial normalization of a large population of brain images is critical for many clinical and research studies, but it is technically very challenging. A commonly used approach is to choose a certain image as the template and then align all other images in the population to this template by applying pairwise registration. To avoid the potential bias induced by inappropriate template selection, groupwise registration methods have been proposed to simultaneously register all images to a latent common space. However, current groupwise registration methods do not make full use of image distribution information for more accurate registration. In this paper, we present a novel groupwise registration method that harnesses the image distribution information by capturing the image distribution manifold using a hierarchical graph with its nodes representing the individual images. More specifically, a low-level graph describes the image distribution in each subgroup, and a high-level graph encodes the relationship between representative images of subgroups. Given the graph representation, we can register all images to the common space by dynamically shrinking the graph on the image manifold. The topology of the entire image distribution is always maintained during graph shrinkage. Evaluations on two datasets, one of 80 elderly individuals and one of 285 infants, indicate that our method can yield promising results. PMID:26800361
Lawson, Rebecca
2014-02-01
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature and thus pose a difficult problem for linear control design. One approach is to obtain a first-order approximation through numerical perturbation at a given configuration and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the recursive derivation of the equations of motion step by step. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; they are then transformed into a relative-coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It is also more accurate because analytical perturbation circumvents numerical differentiation and the associated numerical operations that can accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison with a numerical perturbation method, on a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
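The contrast between analytical and numerical linearization can be sketched on a far simpler system than the paper's recursive multibody formulation: a single pendulum with state x = [theta, omega] and torque input u. The exact Jacobians are written down in closed form and compared against forward finite differences, whose truncation and round-off errors are exactly the kind the abstract says analytical perturbation avoids. All parameters here are illustrative.

```python
import numpy as np

# Toy comparison: analytical vs. numerical (finite-difference) linearization
# of pendulum dynamics x_dot = f(x, u), with x = [theta, omega].
g, L, m = 9.81, 1.0, 1.0

def f(x, u):
    theta, omega = x
    return np.array([omega, -(g / L) * np.sin(theta) + u / (m * L**2)])

def linearize_analytic(x0):
    # Exact Jacobians df/dx and df/du at the operating point.
    theta, _ = x0
    A = np.array([[0.0, 1.0], [-(g / L) * np.cos(theta), 0.0]])
    B = np.array([[0.0], [1.0 / (m * L**2)]])
    return A, B

def linearize_numeric(x0, u0, eps=1e-6):
    # First-order forward differences: subject to truncation/round-off error.
    A = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps
                         for e in np.eye(2)])
    B = ((f(x0, u0 + eps) - f(x0, u0)) / eps).reshape(2, 1)
    return A, B

x0, u0 = np.array([0.3, 0.0]), 0.0
Aa, Ba = linearize_analytic(x0)
An, Bn = linearize_numeric(x0, u0)
print(np.abs(Aa - An).max())  # small but nonzero finite-difference error
```

For a multibody chain, the analytical route propagates these first-order approximations through the recursive kinematics instead of perturbing each coordinate numerically, which is the source of both the speed and accuracy gains claimed.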
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, Steven M.
2001-01-01
Since most advanced material systems (for example, metallic-, polymer-, and ceramic-based systems) currently being researched and evaluated are intended for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model describing the deformation behavior of the materials of interest, extensive research efforts have been devoted over the years to phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center, in conjunction with the University of Akron, has emphasized concurrently addressing three important and related areas: 1) mathematical formulation; 2) algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; and 3) parameter estimation for characterizing the model. This concurrent perspective on constitutive modeling has made it possible to overcome the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis: 1) the lack of efficient and robust integration algorithms, and 2) the difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.
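The "algorithmic developments for updating (integrating) the stress" can be illustrated with a deliberately small stand-in for the hereditary models discussed: a one-dimensional Maxwell viscoelastic element, sigma_dot = E*(eps_dot - sigma/eta), integrated with a backward-Euler (implicit, unconditionally stable) update. The parameters and the model itself are illustrative choices, not the NASA/Akron formulation.

```python
# Backward-Euler stress update for a 1-D Maxwell viscoelastic model
# (toy stand-in for time-dependent hereditary constitutive models).
# Implicit solve of: sigma_new = sigma + E*deps - (E*dt/eta)*sigma_new
E, eta, dt = 100.0, 50.0, 0.01   # modulus, viscosity, time step (illustrative)

def stress_update(sigma, deps):
    return (sigma + E * deps) / (1.0 + E * dt / eta)

sigma, eps_rate = 0.0, 0.1
for _ in range(2000):                             # constant strain rate:
    sigma = stress_update(sigma, eps_rate * dt)   # stress creeps to steady state

print(sigma)  # approaches the viscous steady state eta*eps_rate = 5.0
```

The implicit form is solved in closed form here; for multi-axial hereditary models the same one-step update becomes a small nonlinear system per integration point, which is why robust integrators are listed as one of the two major obstacles.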
Yan, Zheping; Xu, Da; Chen, Tao; Zhang, Wei; Liu, Yibo
2018-01-01
Unmanned underwater vehicles (UUVs) have recently seen rapid development as mobile sensor networks for investigating, surveying, and exploring the underwater environment. The goal of this paper is to develop a practical and efficient formation control method to improve the work efficiency of multi-UUV sensor networks. Distributed leader-follower formation controllers are designed based on state feedback and a consensus algorithm. Considering that each vehicle is subject to model uncertainties and current disturbances, a second-order integral UUV model with a nonlinear function is established using the state feedback linearization method under current disturbances. For unstable communication among UUVs, communication failure and acoustic link noise interference are considered. Two-layer random switching communication topologies are proposed to solve the problem of communication failure. For acoustic link noise interference, an accurate representation of valid communication information and noise stripping are necessary when designing controllers. Effective communication topology weights are designed to represent the validity of communication information corrupted by noise. Utilizing state feedback and noise stripping, sufficient conditions for designing formation controllers are proposed to ensure that the UUV formation achieves consensus under model uncertainties, current disturbances, and unstable communication. The stability of the formation controllers is proven by the Lyapunov-Razumikhin theorem, and their validity is verified by simulation results. PMID:29473919
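The consensus mechanism underlying leader-follower control can be sketched in its simplest discrete-time scalar form: each follower repeatedly replaces its state with a weighted average of its neighbors' states, and any follower (directly or indirectly) connected to the leader drags the group to the leader's value. The weights and topology below are illustrative, not the paper's noise-stripped, second-order controller.

```python
import numpy as np

# Toy discrete-time leader-follower consensus on a fixed topology.
# Row i of W gives follower i's weights over [leader, f0, f1, f2, f3];
# rows sum to 1, so each update is a weighted neighbor average.
rng = np.random.default_rng(1)
leader = 5.0
x = rng.uniform(0, 10, size=4)          # follower states
W = np.array([[0.5, 0.0, 0.5, 0.0, 0.0],   # f0 hears leader and f1
              [0.0, 0.5, 0.0, 0.5, 0.0],   # f1 hears f0 and f2
              [0.0, 0.0, 0.5, 0.0, 0.5],   # f2 hears f1 and f3
              [0.5, 0.0, 0.0, 0.0, 0.5]])  # f3 hears leader and itself

for _ in range(100):
    full = np.concatenate(([leader], x))
    x = W @ full                         # weighted neighbor averaging

print(x)  # all followers converge to the leader's state
```

Because the graph is connected to the leader, the follower subsystem is a strict contraction and every state converges to the leader's value; in the paper, the analogous weights are additionally modulated to reflect the validity of noise-corrupted acoustic links.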
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in rotating machine processes. Developing an effective condition monitoring approach for rolling bearings is critical to accurately identifying the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, in which interval-valued features are used to efficiently recognize and classify machine states. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation of aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. The monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
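The feature extractor named in the abstract, multi-scale permutation entropy, is compact enough to sketch directly: coarse-grain the signal by averaging non-overlapping windows, then measure the normalized Shannon entropy of the ordinal patterns of consecutive samples. This is a generic implementation of the textbook method, not the authors' full VMD-plus-GHMM pipeline.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def coarse_grain(x, scale):
    # Multi-scale step: average non-overlapping windows of length `scale`.
    n = len(x) // scale * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def permutation_entropy(x, order=3):
    # Count ordinal patterns of `order` consecutive samples and return the
    # normalized Shannon entropy of their distribution, in [0, 1].
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * log(p) for p in probs) / log(factorial(order))

rng = np.random.default_rng(0)
noise, ramp = rng.normal(size=2000), np.arange(2000.0)
for scale in (1, 2, 4):
    print(scale,
          permutation_entropy(coarse_grain(ramp, scale)),   # 0: fully ordered
          permutation_entropy(coarse_grain(noise, scale)))  # near 1: disordered
```

A healthy bearing and a faulted one produce vibration signals whose ordinal-pattern distributions differ across scales, which is what makes these entropies usable as state features for the downstream hidden Markov classifier.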
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions, and review the methods most commonly used in robotics and control. Within PI theory, the question of how to compute becomes a question of importance sampling. Efficient importance samplers are state-feedback controllers, and using them requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross-entropy method, and derive a gradient descent method that learns feedback controllers under an arbitrary parametrisation. We refer to this method as the path integral cross-entropy method, or PICE, and illustrate it on some simple examples. PI control methods can also be used to estimate the posterior distribution in latent state models; in neuroscience, such problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
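The central role of importance sampling can be shown with a generic rare-event example (this is standard importance sampling, not the PICE algorithm itself): estimating P(X > 4) for X ~ N(0, 1). Naive Monte Carlo almost never samples the event, while sampling from a shifted proposal N(4, 1) and reweighting by the density ratio recovers the probability cheaply; the better the proposal matches the target, the lower the variance, which is exactly the criterion an adaptive scheme like PICE optimizes.

```python
import numpy as np

# Importance sampling for the rare event {X > 4}, X ~ N(0, 1).
rng = np.random.default_rng(0)
n = 100_000

naive = (rng.standard_normal(n) > 4).mean()     # almost always (near) zero

y = rng.standard_normal(n) + 4.0                # proposal q = N(4, 1)
# likelihood ratio p(y)/q(y) for p = N(0,1), q = N(4,1):
weights = np.exp(-0.5 * y**2 + 0.5 * (y - 4.0)**2)
is_est = np.mean((y > 4) * weights)

print(naive, is_est)  # true value is 1 - Phi(4) ≈ 3.17e-5
```

In PI control the role of the proposal is played by a state-feedback controller: a good controller concentrates trajectory samples where the Feynman-Kac weight is large, so learning the controller and reducing sampler variance are the same problem.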
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series that converge slowly for high-frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here makes the implementation of the spectral method simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, boundary integral equations with a combined single- and double-layer representation are used in the present paper; this ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary condition, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
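The exponential convergence claimed for the circular geometry ultimately rests on a classical fact: the trapezoidal rule is spectrally accurate for smooth periodic integrands, which is what the boundary integrals become once the kernel singularity is removed. A minimal demonstration on a smooth 2π-periodic test function (not the scattering kernel itself):

```python
import numpy as np

# Spectral accuracy of the trapezoidal rule on the circle: for a smooth
# 2*pi-periodic integrand the error decays exponentially in n.
def trapezoid_periodic(f, n):
    t = 2 * np.pi * np.arange(n) / n
    return (2 * np.pi / n) * f(t).sum()

f = lambda t: np.exp(np.cos(t))        # smooth, 2*pi-periodic test function
exact = 2 * np.pi * np.i0(1.0)         # integral equals 2*pi * I0(1)
errs = [abs(trapezoid_periodic(f, n) - exact) for n in (4, 8, 16)]
print(errs)  # error drops by orders of magnitude each time n doubles
```

Doubling the number of boundary nodes therefore does not halve the error but squares-away its exponent, which is why a Fourier/Nyström discretization of the desingularized integral equation converges so much faster than low-order panel methods.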
Galaxy halo expansions: a new biorthogonal family of potential-density pairs
NASA Astrophysics Data System (ADS)
Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn; Erkal, Denis
2018-05-01
Efficient expansions of the gravitational field of (dark) haloes have two main uses in the modelling of galaxies: first, they provide a compact representation of numerically constructed (or real) cosmological haloes, incorporating the effects of triaxiality, lopsidedness or other distortion. Secondly, they provide the basis functions for self-consistent field expansion algorithms used in the evolution of N-body systems. We present a new family of biorthogonal potential-density pairs constructed using the Hankel transform of the Laguerre polynomials. The lowest order density basis functions are double-power-law profiles cusped like ρ ˜ r-2+1/α at small radii with asymptotic density fall-off like ρ ˜ r-3-1/(2α). Here, α is a parameter satisfying α ≥ 1/2. The family therefore spans the range of inner density cusps found in numerical simulations, but has much shallower - and hence more realistic - outer slopes than the corresponding members of the only previously known family deduced by Zhao and exemplified by Hernquist & Ostriker. When α = 1, the lowest order density profile has an inner density cusp of ρ ˜ r-1 and an outer density slope of ρ ˜ r-3.5, similar to the famous Navarro, Frenk & White (NFW) model. For this reason, we demonstrate that our new expansion provides a more accurate representation of flattened NFW haloes than the competing Hernquist-Ostriker expansion. We utilize our new expansion by analysing a suite of numerically constructed haloes and providing the distributions of the expansion coefficients.
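The quoted asymptotic slopes of the lowest-order density basis function can be checked numerically with a hypothetical double-power-law stand-in that has, by construction, inner slope -2 + 1/α and outer slope -3 - 1/(2α); this profile is an illustrative interpolation between the two limits, not the actual Hankel-Laguerre basis function of the paper.

```python
import numpy as np

# Double-power-law stand-in with the abstract's inner and outer slopes:
# rho(r) = r^inner * (1 + r)^(outer - inner), alpha >= 1/2.
def rho(r, alpha):
    inner = -2.0 + 1.0 / alpha
    outer = -3.0 - 1.0 / (2.0 * alpha)
    return r**inner * (1.0 + r)**(outer - inner)

def log_slope(r, alpha, h=1e-5):
    # d(ln rho)/d(ln r) via a multiplicative finite difference.
    return (np.log(rho(r * np.exp(h), alpha)) - np.log(rho(r, alpha))) / h

alpha = 1.0  # NFW-like member of the family
print(log_slope(1e-6, alpha), log_slope(1e6, alpha))  # ≈ -1 and ≈ -3.5
```

At α = 1 the slopes match the abstract's NFW-like case (ρ ~ r⁻¹ inside, ρ ~ r⁻³·⁵ outside), which is shallower at large radii than the r⁻⁴ falloff of the lowest-order Hernquist-Ostriker basis function and is the stated reason the new expansion fits NFW haloes more accurately.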
Zhu, Xiaolei; Yarkony, David R
2016-01-28
We have recently introduced a diabatization scheme, which simultaneously fits and diabatizes adiabatic ab initio electronic wave functions, Zhu and Yarkony J. Chem. Phys. 140, 024112 (2014). The algorithm uses derivative couplings in the defining equations for the diabatic Hamiltonian, H(d), and fits all its matrix elements simultaneously to adiabatic state data. This procedure ultimately provides an accurate, quantifiably diabatic, representation of the adiabatic electronic structure data. However, optimizing the large number of nonlinear parameters in the basis functions and adjusting the number and kind of basis functions from which the fit is built, which provide the essential flexibility, has proved challenging. In this work, we introduce a procedure that combines adiabatic state and diabatic state data to efficiently optimize the nonlinear parameters and basis function expansion. Further, we consider using direct properties based diabatizations to initialize the fitting procedure. To address this issue, we introduce a systematic method for eliminating the debilitating (diabolical) singularities in the defining equations of properties based diabatizations. We exploit the observation that if approximate diabatic data are available, the commonly used approach of fitting each matrix element of H(d) individually provides a starting point (seed) from which convergence of the full H(d) construction algorithm is rapid. The optimization of nonlinear parameters and basis functions and the elimination of debilitating singularities are, respectively, illustrated using the 1,2,3,4(1)A states of phenol and the 1,2(1)A states of NH3, states which are coupled by conical intersections.
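The diabatic/adiabatic relationship the fit exploits can be illustrated with the smallest possible case: a 2x2 diabatic Hamiltonian H^d(R) whose entries are smooth in the nuclear coordinate, while the adiabatic energies are its eigenvalues. A constant off-diagonal coupling turns the diabatic crossing into an avoided crossing. The linear-vibronic parameters below are toy values, not the phenol or NH3 data of the paper.

```python
import numpy as np

# Toy 2-state diabatic Hamiltonian: smooth diagonal crossing at R = 0,
# constant coupling lifting it into an avoided crossing.
def hd(R, gap_slope=1.0, coupling=0.1):
    return np.array([[gap_slope * R,  coupling      ],
                     [coupling,      -gap_slope * R]])

R = np.linspace(-1, 1, 201)
adiabatic = np.array([np.linalg.eigvalsh(hd(r)) for r in R])  # ascending pairs
min_gap = (adiabatic[:, 1] - adiabatic[:, 0]).min()
print(min_gap)  # 2 * coupling, reached at R = 0: the avoided crossing
```

Fitting H^d rather than the adiabatic surfaces directly keeps every fitted function smooth even where the adiabatic states have cusps or conical intersections, which is why the seeded element-by-element fit is such an effective starting point for the full construction algorithm.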
Memory reactivation in healthy aging: evidence of stimulus-specific dedifferentiation.
St-Laurent, Marie; Abdi, Hervé; Bondad, Ashley; Buchsbaum, Bradley R
2014-03-19
We investigated how aging affects the neural specificity of mental replay, the act of conjuring up past experiences in one's mind. We used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis to quantify the similarity between brain activity elicited by the perception and memory of complex multimodal stimuli. Young and older human adults viewed and mentally replayed short videos from long-term memory while undergoing fMRI. We identified a wide array of cortical regions involved in visual, auditory, and spatial processing that supported stimulus-specific representation at perception as well as during mental replay. Evidence of age-related dedifferentiation was subtle at perception but more salient during mental replay, and age differences at perception could not account for older adults' reduced neural reactivation specificity. Performance on a post-scan recognition task for video details correlated with neural reactivation in young but not in older adults, indicating that in-scan reactivation benefited post-scan recognition in young adults, but that some older adults may have benefited from alternative rehearsal strategies. Although young adults recalled more details about the video stimuli than older adults on a post-scan recall task, patterns of neural reactivation correlated with post-scan recall in both age groups. These results demonstrate that the mechanisms supporting recall and recollection are linked to accurate neural reactivation in both young and older adults, but that age affects how efficiently these mechanisms can support memory's representational specificity in a way that cannot simply be accounted for by degraded sensory processes.
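The pattern-similarity logic of the analysis can be sketched on synthetic data (not fMRI): correlate each stimulus's activity pattern at perception with the patterns during mental replay, and call reactivation "specific" when matching stimulus pairs are more similar than mismatched ones. The noise mixture below is an arbitrary illustrative choice.

```python
import numpy as np

# Synthetic stimulus-specific reactivation: replay patterns are a noisy
# copy of the corresponding perception patterns.
rng = np.random.default_rng(0)
n_stim, n_voxels = 5, 200
perception = rng.normal(size=(n_stim, n_voxels))
replay = 0.6 * perception + 0.8 * rng.normal(size=(n_stim, n_voxels))

# Cross-correlation matrix: rows = perception patterns, cols = replay patterns.
sim = np.corrcoef(perception, replay)[:n_stim, n_stim:]
specificity = np.diag(sim).mean() - sim[~np.eye(n_stim, dtype=bool)].mean()
print(specificity)  # > 0: matching pairs more similar than mismatches
```

Age-related dedifferentiation corresponds to this diagonal advantage shrinking, either because replay patterns are noisier copies of perception or because the perception patterns themselves are less distinct across stimuli.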
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
NASA Astrophysics Data System (ADS)
Hämmerle, M.; Lukač, N.; Chen, K.-C.; Koma, Zs.; Wang, C.-K.; Anders, K.; Höfle, B.
2017-09-01
Information about the 3D structure of understory vegetation is of high relevance in forestry research and management (e.g., for complete biomass estimations). However, it has hardly been investigated systematically with state-of-the-art methods such as static terrestrial laser scanning (TLS) or laser scanning from unmanned aerial vehicle platforms (ULS). A prominent challenge for scanning forests is posed by occlusion, calling for proper TLS scan position or ULS flight line configurations in order to achieve an accurate representation of understory vegetation. The aim of our study is to examine the effect of TLS or ULS scanning strategies on (1) the height of individual understory trees and (2) understory canopy height raster models. We simulate full-waveform TLS and ULS point clouds of a virtual forest plot captured from various combinations of max. 12 TLS scan positions or 3 ULS flight lines. The accuracy of the respective datasets is evaluated with reference values given by the virtually scanned 3D triangle mesh tree models. TLS tree height underestimations range up to 1.84 m (15.30 % of tree height) for single TLS scan positions, but combining three scan positions reduces the underestimation to a maximum of 0.31 m (2.41 %). Combining ULS flight lines also improves tree height representation, with a maximum underestimation of 0.24 m (2.15 %). The presented simulation approach offers a complementary source of information for efficient planning of field campaigns aiming at understory vegetation modelling.
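The evaluation metric behind the reported numbers can be sketched directly: tree height from a point cloud is the highest return, so occlusion that removes upper-crown points translates one-for-one into height underestimation relative to the mesh reference. The synthetic "point cloud" below is an illustrative uniform sample, not the full-waveform simulation of the study.

```python
import numpy as np

# Tree height underestimation from an occluded (simulated) point cloud,
# measured against a known mesh reference height.
rng = np.random.default_rng(0)
true_height = 12.0                            # reference from the 3D mesh model
z = rng.uniform(0.0, true_height, size=5000)  # return heights along the tree

occluded = z[z < 10.5]                        # top of crown hidden from one scan

def underestimation(points_z, ref):
    err = ref - points_z.max()                # highest return vs. reference
    return err, 100.0 * err / ref

err_m, err_pct = underestimation(occluded, true_height)
print(err_m, err_pct)  # ≈ 1.5 m, ≈ 12.5 % for this occlusion cutoff
```

Adding scan positions or flight lines amounts to taking the union of several such clouds, each occluded differently, so the maximum return climbs back toward the true crown top, which is the mechanism behind the drop from 1.84 m to 0.31 m in the TLS results.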