NASA Astrophysics Data System (ADS)
Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng
2018-02-01
Because islanding detection is easily affected by grid disturbances, a detection device may misjudge events, causing the photovoltaic system to be taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding and grid disturbances. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing step applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis, with entropy as the output, from which the intrinsic features distinguishing islanding from grid disturbance can be extracted. With these features, deep learning is used to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so mistaken disconnection of the photovoltaic system from the power grid can be avoided.
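The feature-extraction stage described above can be sketched as follows. This is a minimal illustration, not the paper's actual configuration: the Haar wavelet, the trajectory-window length, the decomposition depth, and the test signals are all assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: returns (approximation, detail)."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def singular_spectrum_entropy(c, window=8):
    """Shannon entropy of the normalized singular spectrum of the
    trajectory matrix built from the coefficient sequence c."""
    n = len(c) - window + 1
    traj = np.stack([c[i:i + window] for i in range(n)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mrsse_features(signal, levels=3):
    """Multi-resolution singular spectrum entropy: one entropy per scale
    (detail bands plus the final approximation)."""
    feats, a = [], np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(singular_spectrum_entropy(d))
    feats.append(singular_spectrum_entropy(a))
    return np.array(feats)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)                 # steady grid-like waveform
noisy = clean + 0.5 * rng.standard_normal(t.size)  # disturbance-like signal
f_clean, f_noisy = mrsse_features(clean), mrsse_features(noisy)
```

A disturbance-like broadband signal spreads its singular spectrum across many components, so its per-scale entropies come out higher than those of an ordered waveform; a classifier is then trained on such feature vectors.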
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), a collimated scanning X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is well suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method can suppress noise and improve the correlation and signal-to-noise ratio, thus proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014).
NASA Astrophysics Data System (ADS)
Hao, Qiushi; Zhang, Xin; Wang, Yan; Shen, Yi; Makis, Viliam
2018-07-01
Acoustic emission (AE) technology is sensitive to subliminal rail defects; however, strong wheel-rail rolling contact noise under high-speed conditions has severely impeded the detection of rail defects using traditional denoising methods. In this context, this paper develops an adaptive detection method for rail cracks that combines multiresolution analysis with an improved adaptive line enhancer (ALE). To obtain detailed multiresolution information about transient crack signals at low computational cost, a lifting-scheme-based undecimated wavelet packet transform is adopted. To capture the impulsive character of crack signals, a Shannon-entropy-improved ALE is proposed as a signal-enhancement approach, in which Shannon entropy is introduced to improve the cost function. A rail defect detection plan based on the proposed method for high-speed conditions is then put forward. Theoretical analysis and experimental verification demonstrate that the proposed method has superior performance in enhancing the rail defect AE signal and reducing the strong background noise, offering an effective multiresolution approach for rail defect detection under high-speed, strong-noise conditions.
Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund
2012-01-01
Background The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel-intensity-based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected and its parameters estimated, or a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdfs via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy.
Additionally, we predict species richness, or diversity, based on the Shannon entropy of pixel intensity. To test our approach, we use the green band of Landsat images for a water conservation area in the Florida Everglades. We validate our predictions against data of species occurrences over a twenty-eight-year period for both wet and dry seasons. Our method correctly predicts 73% of species richness. For species turnover, the newly proposed KL divergence prediction performance is nearly 100% accurate. This represents a significant improvement over the more conventional Shannon entropy difference, which provides 85% accuracy. Furthermore, we find that changes in soil and water patterns, as measured by fluctuations of the Shannon entropy for the red and blue bands respectively, are positively correlated with changes in vegetation. The fluctuations are smaller in the wet season than in the dry season. Conclusions/Significance Texture-based statistical multiresolution image analysis is a promising method for quantifying interseasonal differences and, consequently, the degree to which vegetation, soil, and water patterns vary. The proposed automated method for quantifying species richness and turnover can also provide analysis at higher spatial and temporal resolution than is currently obtainable from expensive monitoring campaigns, thus enabling more prompt, more cost-effective inference and decision-making support regarding anomalous variations in biodiversity. Additionally, a matrix-based visualization of the statistical multiresolution analysis is presented to facilitate both insight and quick recognition of anomalous data. PMID:23115629
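The two entropy-based quantities above can be sketched with histogram estimates throughout (a non-parametric stand-in for the generalized-Gaussian subband model the paper adopts); the bin count and the synthetic "seasonal" bands are assumptions for illustration.

```python
import numpy as np

def shannon_entropy(pixels, bins=64):
    """Shannon entropy (nats) of the pixel-intensity histogram."""
    p, _ = np.histogram(pixels, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def kl_divergence(p_pixels, q_pixels, bins=64, eps=1e-12):
    """KL(P||Q) between two intensity histograms; used here as the
    turnover measure between two seasons' images."""
    p, _ = np.histogram(p_pixels, bins=bins, range=(0.0, 1.0))
    q, _ = np.histogram(q_pixels, bins=bins, range=(0.0, 1.0))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(1)
wet = rng.beta(2, 5, size=(128, 128)).ravel()  # synthetic "wet season" band
dry = rng.beta(5, 2, size=(128, 128)).ravel()  # synthetic "dry season" band
h_wet, h_dry = shannon_entropy(wet), shannon_entropy(dry)
turnover = kl_divergence(wet, dry)
```

Identical images give zero divergence, and the divergence grows as the two seasons' intensity distributions separate, which is what makes it usable as a turnover indicator.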
Torres, M E; Añino, M M; Schlotthauer, G
2003-12-01
It is well known that, from a dynamical point of view, sudden variations in physiological parameters which govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool takes, from the multiresolution entropies, the ability to show these changes as statistical variations at each scale. These variations are held in the corresponding principal component. Appropriately combining these techniques with a statistical changes detector, a complexity change detection algorithm is obtained. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations and the automatic detector is applied to real and simulated biological signals.
NASA Astrophysics Data System (ADS)
Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana
2018-01-01
This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. Climatic data from NCEP are used to train the proposed models (Jan. '69 to Dec. '94), which are then applied to the corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25-degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) models is used to model the representative PCs and obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability than stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
Entropy as a measure of diffusion
NASA Astrophysics Data System (ADS)
Aghamohammadi, Amir; Fatollahi, Amir H.; Khorrami, Mohammad; Shariati, Ahmad
2013-10-01
The time variation of entropy, as an alternative to the variance, is proposed as a measure of the diffusion rate. It is shown that for linear and time-translationally invariant systems having a large-time limit for the density, at large times the entropy tends exponentially to a constant. For systems with no stationary density, at large times the entropy is logarithmic with a coefficient specifying the speed of the diffusion. As an example, the large-time behaviors of the entropy and the variance are compared for various types of fractional-derivative diffusions.
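The logarithmic large-time behavior described above can be checked directly for free one-dimensional diffusion, where the density remains Gaussian with variance 2Dt. The diffusion coefficient below is an arbitrary illustrative value.

```python
import numpy as np

def gaussian_entropy(var):
    """Differential entropy (nats) of a 1-D Gaussian with variance var."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

D = 0.3                                   # illustrative diffusion coefficient
t = np.array([1.0, 10.0, 100.0, 1000.0])
S = gaussian_entropy(2 * D * t)           # free diffusion: var(t) = 2*D*t

# With no stationary density, the entropy grows logarithmically in time;
# the slope with respect to ln(t) is 1/2 per spatial dimension:
slopes = np.diff(S) / np.diff(np.log(t))
```

The constant slope of S against ln t is the "coefficient specifying the speed of the diffusion" in the abstract's sense, while a confined system would instead show S saturating exponentially to a constant.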
Diffusive mixing and Tsallis entropy
O'Malley, Daniel; Vesselinov, Velimir V.; Cushman, John H.
2015-04-29
Brownian motion, the classical diffusive process, maximizes the Boltzmann-Gibbs entropy. The Tsallis q-entropy, which is non-additive, was developed as an alternative to the classical entropy for systems which are non-ergodic. A generalization of Brownian motion is provided that maximizes the Tsallis entropy rather than the Boltzmann-Gibbs entropy. This process is driven by a Brownian measure with a random diffusion coefficient. In addition, the distribution of this coefficient is derived as a function of q for 1 < q < 3. Applications to transport in porous media are considered.
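A minimal sketch of the q-entropy itself (not of the generalized Brownian process the paper constructs), using the standard definition S_q = (1 - sum_i p_i^q)/(q - 1):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1); reduces to
    the Boltzmann-Gibbs (Shannon) entropy in the limit q -> 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p = np.full(8, 1.0 / 8.0)              # uniform distribution over 8 states
s_bg = tsallis_entropy(p, 1.0)         # Boltzmann-Gibbs limit: ln 8
s_2 = tsallis_entropy(p, 2.0)          # 1 - 1/8

# Non-additivity for independent subsystems A and B:
# S_q(A,B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
pa = np.full(2, 0.5)
s_a = tsallis_entropy(pa, 2.0)
s_joint = tsallis_entropy(np.outer(pa, pa).ravel(), 2.0)
```

The last three lines verify the non-additive composition rule that motivates the abstract's description of the Tsallis entropy as an alternative for non-ergodic systems.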
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen; Magin, Richard L.
2016-07-01
Analytical solutions to the fractional diffusion equation are often obtained by using Laplace and Fourier transforms, which conveniently encode the order of the time and space derivatives (α and β) as non-integer powers of the conjugate transform variables (s and k) for the spectral and spatial frequencies, respectively. This study presents a new solution to the fractional diffusion equation obtained using the Laplace transform and expressed as a Fox H-function. This result clearly illustrates the kinetics of the underlying stochastic process in terms of the Laplace spectral frequency and entropy. The spectral entropy is numerically calculated using the direct integration method and the adaptive Gauss-Kronrod quadrature algorithm. Here, the properties of spectral entropy are investigated for the cases of sub-diffusion and super-diffusion. We find that the overall spectral entropy decreases with increasing α and β, and that the normal or Gaussian case, with α = 1 and β = 2, has the lowest spectral entropy (i.e., less information is needed to describe the state of a Gaussian process). In addition, as the neighborhood over which the entropy is calculated increases, the spectral entropy decreases, which implies a spatial averaging or coarse graining of the material properties. Consequently, the spectral entropy is shown to provide a new way to characterize the temporal correlation of anomalous diffusion. Future studies should be designed to examine changes of spectral entropy in physical, chemical and biological systems undergoing phase changes, chemical reactions and tissue regeneration.
The Shannon entropy as a measure of diffusion in multidimensional dynamical systems
NASA Astrophysics Data System (ADS)
Giordano, C. M.; Cincotta, P. M.
2018-05-01
In the present work, we introduce two new estimators of chaotic diffusion based on the Shannon entropy. Using theoretical, heuristic and numerical arguments, we show that the entropy, S, provides a measure of the diffusion extent of a given small initial ensemble of orbits, while an indicator related with the time derivative of the entropy, S', estimates the diffusion rate. We show that in the limiting case of near ergodicity, after an appropriate normalization, S' coincides with the standard homogeneous diffusion coefficient. The very first application of this formulation to a 4D symplectic map and to the Arnold Hamiltonian reveals very successful and encouraging results.
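As a rough illustration of the entropy-based estimator (using plain Brownian walkers rather than orbits of a symplectic map, and an assumed cell size and step size), the Shannon entropy of the ensemble's cell-occupation distribution grows as the initially concentrated ensemble spreads:

```python
import numpy as np

def ensemble_entropy(positions, cell=0.5):
    """Shannon entropy of the cell-occupation distribution of an ensemble."""
    cells = np.floor(positions / cell).astype(int)
    _, counts = np.unique(cells, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
n_orbits, n_steps = 2000, 200
pos = np.zeros((n_orbits, 2))      # small initial ensemble at the origin
S = []
for step in range(n_steps):
    pos += 0.1 * rng.standard_normal((n_orbits, 2))   # diffusive spreading
    if (step + 1) % 50 == 0:
        S.append(ensemble_entropy(pos))
```

In the paper's formulation, a suitably normalized time derivative of this entropy recovers the homogeneous diffusion coefficient in the near-ergodic limit; here the monotone growth of S simply tracks the diffusion extent.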
New Insights into the Fractional Order Diffusion Equation Using Entropy and Kurtosis.
Ingo, Carson; Magin, Richard L; Parrish, Todd B
2014-11-01
Fractional order derivative operators offer a concise description to model multi-scale, heterogeneous and non-local systems. Specifically, in magnetic resonance imaging, there has been recent work to apply fractional order derivatives to model the non-Gaussian diffusion signal, which is ubiquitous in the movement of water protons within biological tissue. To provide a new perspective for establishing the utility of fractional order models, we apply entropy for the case of anomalous diffusion governed by a fractional order diffusion equation generalized in space and in time. This fractional order representation, in the form of the Mittag-Leffler function, gives an entropy minimum for the integer case of Gaussian diffusion and greater values of spectral entropy for non-integer values of the space and time derivatives. Furthermore, we consider kurtosis, defined as the normalized fourth moment, as another probabilistic description of the fractional time derivative. Finally, we demonstrate the implementation of anomalous diffusion, entropy and kurtosis measurements in diffusion weighted magnetic resonance imaging in the brain of a chronic ischemic stroke patient.
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with the vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
A Subband Coding Method for HDTV
NASA Technical Reports Server (NTRS)
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways
Galinsky, Vitaly L.; Frank, Lawrence R.
2015-01-01
We have developed a method for the simultaneous estimation of local diffusion and global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
Entropy-scaling laws for diffusion coefficients in liquid metals under high pressures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Qi-Long, E-mail: qlcao@mail.ustc.edu.cn; Shao, Ju-Xiang; Wang, Fan-Hou, E-mail: eatonch@gmail.com
2015-04-07
Molecular dynamics simulations of liquid copper and tungsten are used to investigate the empirical entropy-scaling laws D* = A exp(B S_ex), proposed independently by Rosenfeld and Dzugutov for the diffusion coefficient, under high-pressure conditions. We show that the scaling laws hold rather well for these metals under high-pressure conditions. Furthermore, both the original diffusion coefficients and the reduced diffusion coefficients exhibit an Arrhenius relationship D_M = D_M^0 exp(-E_M/(k_B T)), (M = un, R, D), where the activation energy E_M increases with increasing pressure, while the diffusion pre-exponential factors (D_R^0 and D_D^0) are nearly independent of pressure and element. The pair correlation entropy S_2 depends linearly on the reciprocal temperature, S_2 = -E_S/T, where the activation energy E_S increases with increasing pressure. In particular, the ratios of the activation energies (E_un, E_R, and E_D) obtained from the diffusion coefficients to the activation energy E_S obtained from the entropy remain constant over the whole pressure range. Therefore, the entropy-scaling laws for the diffusion coefficients and the Arrhenius law are linked via the temperature dependence of the entropy.
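A sketch of the two-body (pair correlation) entropy and the Rosenfeld/Dzugutov scaling follows; the model g(r), the density, and the constants A and B are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pair_correlation_entropy(r, g, rho):
    """Two-body excess entropy per particle (units of k_B):
    S_2 = -2*pi*rho * integral of [g ln g - g + 1] * r^2 dr."""
    safe_g = np.where(g > 0, g, 1.0)
    integrand = np.where(g > 0, g * np.log(safe_g) - g + 1.0, 1.0)
    f = integrand * r ** 2
    # trapezoidal rule, written out to avoid version-specific numpy names
    return float(-2.0 * np.pi * rho * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

# Illustrative g(r): hard core below r = 1, damped oscillation about 1 beyond
r = np.linspace(0.01, 10.0, 2000)
g = np.where(r > 1.0,
             1.0 + np.exp(-(r - 1.0)) * np.sin(2 * np.pi * (r - 1.0)),
             0.0)
S2 = pair_correlation_entropy(r, g, rho=0.8)

# Rosenfeld/Dzugutov scaling D* = A * exp(B * S_ex), with S_2 as a proxy
A, B = 0.6, 0.8          # assumed constants, for illustration only
D_star = A * np.exp(B * S2)
```

Since structural ordering makes S_2 more negative, the scaling law predicts a lower reduced diffusion coefficient for more strongly structured liquids, which is the qualitative content of the entropy-scaling relation.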
On Entropy Production in the Madelung Fluid and the Role of Bohm's Potential in Classical Diffusion
NASA Astrophysics Data System (ADS)
Heifetz, Eyal; Tsekov, Roumen; Cohen, Eliahu; Nussinov, Zohar
2016-07-01
The Madelung equations map the non-relativistic time-dependent Schrödinger equation into hydrodynamic equations of a virtual fluid. While the von Neumann entropy remains constant, we demonstrate that an increase of the Shannon entropy, associated with this Madelung fluid, is proportional to the expectation value of its velocity divergence. Hence, the Shannon entropy may grow (or decrease) due to an expansion (or compression) of the Madelung fluid. These effects result from the interference between solutions of the Schrödinger equation. Growth of the Shannon entropy due to expansion is common in diffusive processes. However, in the latter the process is irreversible while the processes in the Madelung fluid are always reversible. The relations between interference, compressibility and variation of the Shannon entropy are then examined in several simple examples. Furthermore, we demonstrate that for classical diffusive processes, the "force" accelerating diffusion has the form of the positive gradient of the quantum Bohm potential. Expressing then the diffusion coefficient in terms of the Planck constant reveals the lower bound given by the Heisenberg uncertainty principle in terms of the product between the gas mean free path and the Brownian momentum.
Guan, Yue; Li, Weifeng; Jiang, Zhuoran; Chen, Ying; Liu, Song; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-12-01
This study aimed to develop whole-lesion apparent diffusion coefficient (ADC)-based entropy-related parameters of cervical cancer to preliminarily assess intratumoral heterogeneity of this lesion in comparison to adjacent normal cervical tissues. A total of 51 women (mean age, 49 years) with cervical cancers confirmed by biopsy underwent 3-T pelvic diffusion-weighted magnetic resonance imaging with b values of 0 and 800 s/mm² prospectively. ADC-based entropy-related parameters including first-order entropy and second-order entropies were derived from the whole tumor volume as well as adjacent normal cervical tissues. Intraclass correlation coefficient, Wilcoxon test with Bonferroni correction, Kruskal-Wallis test, and receiver operating characteristic curve were used for statistical analysis. All the parameters showed excellent interobserver agreement (all intraclass correlation coefficients > 0.900). Entropy, entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean were significantly higher, whereas entropy(H)_range and entropy(H)_std were significantly lower in cervical cancers compared to adjacent normal cervical tissues (all P < .0001). The Kruskal-Wallis test showed no significant differences among the values of the various second-order entropies, including entropy(H)_0, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean. All second-order entropies had a larger area under the receiver operating characteristic curve than first-order entropy in differentiating cervical cancers from adjacent normal cervical tissues. Further, entropy(H)_45, entropy(H)_90, entropy(H)_135, and entropy(H)_mean had the same largest area under the receiver operating characteristic curve of 0.867. Whole-lesion ADC-based entropy-related parameters of cervical cancers were developed successfully, which showed initial potential in characterizing intratumoral heterogeneity in comparison to adjacent normal cervical tissues.
Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
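The first-order (histogram) and second-order (grey-level co-occurrence) entropies can be sketched as below. The grey-level count, the offsets, and the synthetic "tumor"/"normal" patches are assumptions for illustration; the 135° direction, which needs signed offsets, is omitted for brevity.

```python
import numpy as np

def first_order_entropy(img, levels=16):
    """First-order entropy (bits) of the quantized map's histogram."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    p = np.bincount(q.ravel(), minlength=levels) / q.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def glcm_entropy(img, dx, dy, levels=16):
    """Second-order entropy (bits) from the grey-level co-occurrence
    matrix for one non-negative offset (dx, dy)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    h, w = q.shape
    a = q[0:h - dy, 0:w - dx]
    b = q[dy:h, dx:w]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)
    p = glcm / glcm.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
tumor = rng.random((64, 64))                    # heterogeneous "lesion" patch
normal = np.clip(0.5 + 0.02 * rng.standard_normal((64, 64)), 0.0, 0.999)

offsets = [(1, 0), (1, 1), (0, 1)]              # 0, 45, and 90 degrees
e1_tumor = first_order_entropy(tumor)
e1_normal = first_order_entropy(normal)
e2_tumor = float(np.mean([glcm_entropy(tumor, dx, dy) for dx, dy in offsets]))
e2_normal = float(np.mean([glcm_entropy(normal, dx, dy) for dx, dy in offsets]))
```

A heterogeneous patch spreads its intensities (and intensity pairs) over many bins, so both entropy orders come out higher than for homogeneous tissue, mirroring the tumor-versus-normal contrast reported in the abstract.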
Analyze the dynamic features of rat EEG using wavelet entropy.
Feng, Zhouyan; Chen, Hang
2005-01-01
Wavelet entropy (WE), a new method of complexity measure for non-stationary signals, was used to investigate the dynamic features of rat EEGs under three vigilance states. The EEGs of the freely moving rats were recorded with implanted electrodes and were decomposed into four components of delta, theta, alpha and beta by using multi-resolution wavelet transform. Then, the wavelet entropy curves were calculated as a function of time. The results showed that there were significant differences among the average WEs of EEGs recorded under the vigilance states of waking, slow wave sleep (SWS) and rapid eye movement (REM) sleep. The changes of WE had different relationships with the four power components under different states. Moreover, there was evident rhythm in EEG WEs of SWS sleep for most experimental rats, which indicated a reciprocal relationship between slow waves and sleep spindles in the micro-states of SWS sleep. Therefore, WE can be used not only to distinguish the long-term changes in EEG complexity, but also to reveal the short-term changes in EEG micro-state.
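The wavelet entropy itself can be sketched as the Shannon entropy of the relative wavelet energies across bands. The Haar wavelet, four decomposition levels, the sampling rate, and the synthetic "SWS-like"/"waking-like" signals are illustrative assumptions, not the study's recording setup.

```python
import numpy as np

def haar_decompose(x, levels=4):
    """Haar multi-resolution decomposition: detail coefficients for each
    level (finest first) plus the final approximation."""
    coeffs, a = [], np.asarray(x, float)
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def wavelet_entropy(x, levels=4):
    """WE = -sum_j p_j ln p_j, with p_j the relative wavelet energy of band j."""
    energies = np.array([np.sum(c ** 2) for c in haar_decompose(x, levels)])
    p = energies / energies.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(3)
t = np.arange(1024) / 256.0                 # 4 s at an assumed 256 Hz rate
sws_like = np.sin(2 * np.pi * 2 * t)        # delta-dominated, ordered signal
wake_like = rng.standard_normal(t.size)     # broadband, disordered signal
we_sws = wavelet_entropy(sws_like)
we_wake = wavelet_entropy(wake_like)
```

An ordered, delta-dominated signal concentrates its energy in one band (low WE), while a broadband waking-like signal spreads energy across bands (high WE), which is the contrast between vigilance states the study exploits.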
Delchini, Marc O.; Ragusa, Jean C.; Ferguson, Jim
2017-02-17
A viscous regularization technique, based on the local entropy residual, was proposed by Delchini et al. (2015) to stabilize the nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations using an artificial viscosity technique. This viscous regularization is modulated by the local entropy production and is consistent with the entropy minimum principle. However, Delchini et al. (2015) only based their work on the hyperbolic parts of the Grey Radiation-Hydrodynamic equations and thus omitted the relaxation and diffusion terms present in the material energy and radiation energy equations. Here in this paper, we extend the theoretical grounds for the method and derive an entropy minimum principle for the full set of nonequilibrium-diffusion Grey Radiation-Hydrodynamic equations. This further strengthens the applicability of the entropy viscosity method as a stabilization technique for radiation-hydrodynamic shock simulations. Radiative shock calculations using constant and temperature-dependent opacities are compared against semi-analytical reference solutions, and we present a procedure to perform spatial convergence studies of such simulations.
Diffusivity anomaly in modified Stillinger-Weber liquids
NASA Astrophysics Data System (ADS)
Sengupta, Shiladitya; Vasisht, Vishwas V.; Sastry, Srikanth
2014-01-01
By modifying the tetrahedrality (the strength of the three body interactions) in the well-known Stillinger-Weber model for silicon, we study the diffusivity of a series of model liquids as a function of tetrahedrality and temperature at fixed pressure. Previous work has shown that at constant temperature, the diffusivity exhibits a maximum as a function of tetrahedrality, which we refer to as the diffusivity anomaly, in analogy with the well-known anomaly in water upon variation of pressure at constant temperature. We explore to what extent the structural and thermodynamic changes accompanying changes in the interaction potential can help rationalize the diffusivity anomaly, by employing the Rosenfeld relation between diffusivity and the excess entropy (over the ideal gas reference value), and the pair correlation entropy, which provides an approximation to the excess entropy in terms of the pair correlation function. We find that in the modified Stillinger-Weber liquids, the Rosenfeld relation works well above the melting temperatures but exhibits deviations below, with the deviations becoming smaller for smaller tetrahedrality. Further we find that both the excess entropy and the pair correlation entropy at constant temperature go through maxima as a function of the tetrahedrality, thus demonstrating the close relationship between structural, thermodynamic, and dynamical anomalies in the modified Stillinger-Weber liquids.
Multiresolution 3-D reconstruction from side-scan sonar images.
Coiras, Enrique; Petillot, Yvan; Lane, David M
2007-02-01
In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
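The Lambertian diffuse model at the core of the inversion states that the returned intensity is proportional to reflectivity times the cosine of the incidence angle. A 1-D sketch, with illustrative geometry and names (not the authors' implementation):

```python
import numpy as np

def lambertian_return(x, h, altitude, reflectivity=1.0):
    """Intensity I = R * cos(theta) for a sonar at (0, altitude) insonifying a
    seabed profile h(x); theta is the angle between the surface normal and the ray."""
    dhdx = np.gradient(h, x)
    nx, nz = -dhdx, np.ones_like(x)          # unnormalized surface normal
    nn = np.hypot(nx, nz)
    rx, rz = x, h - altitude                 # ray from sonar to seabed point
    rn = np.hypot(rx, rz)
    cos_theta = np.clip(-(rx * nx + rz * nz) / (nn * rn), 0.0, 1.0)
    return reflectivity * cos_theta

x = np.linspace(1.0, 50.0, 200)
intensity = lambertian_return(x, np.zeros_like(x), altitude=10.0)
# for a flat seabed the return decays with range as the grazing angle shrinks
```

Inverting this forward model for h(x), R(x), and the beam pattern is the expectation-maximization-style optimization described above.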
NASA Astrophysics Data System (ADS)
Sasaki, Youhei; Takehiro, Shin-ichi; Ishiwatari, Masaki; Yamada, Michio
2018-03-01
Linear stability analysis of anelastic thermal convection in a rotating spherical shell with entropy diffusivities varying in the radial direction is performed. The structures of critical convection are obtained for four different radial distributions of entropy diffusivity: (1) κ is constant, (2) κT0 is constant, (3) κρ0 is constant, and (4) κρ0T0 is constant, where κ is the entropy diffusivity, T0 is the temperature of the basic state, and ρ0 is the density of the basic state. The ratio of inner and outer radii, the Prandtl number, the polytropic index, and the density ratio are 0.35, 1, 2, and 5, respectively. The value of the Ekman number is 10^-3 or 10^-5. In case (1), where the setup is the same as that of the anelastic dynamo benchmark (Jones et al., 2011), the structure of critical convection is concentrated near the outer boundary of the spherical shell around the equator. However, in cases (2), (3) and (4), the convection columns attach to the inner boundary of the spherical shell. A rapidly rotating annulus model for anelastic systems is developed by assuming that the convection structure is uniform in the axial direction, taking into account the strong effect of the Coriolis force. The annulus model explains well the characteristics of critical convection obtained numerically, such as the critical azimuthal wavenumber, frequency, Rayleigh number, and the cylindrically radial location of convection columns. The radial distribution of entropy diffusivity, or more generally, the diffusion properties in the entropy equation, is important for the convection structure, because it determines the distribution of the radial basic entropy gradient, which is crucial for the location of convection columns.
Stokes-Einstein relation and excess entropy in Al-rich Al-Cu melts
NASA Astrophysics Data System (ADS)
Pasturel, A.; Jakse, N.
2016-07-01
We investigate the conditions for the validity of the Stokes-Einstein relation that connects diffusivity to viscosity in melts using entropy-scaling relationships developed by Rosenfeld. Employing ab initio molecular dynamics simulations to determine transport and structural properties of liquid Al1-xCux alloys (with composition x ≤ 0.4), we first show that reduced self-diffusion coefficients and viscosities, according to Rosenfeld's formulation, scale with the two-body approximation of the excess entropy except the reduced viscosity for x = 0.4. Then, we use our findings to evidence that the Stokes-Einstein relation using effective atomic radii is not valid in these alloys while its validity can be related to the temperature dependence of the partial pair-excess entropies of both components. Finally, we derive a relation between the ratio of the self-diffusivities of the components and the ratio of their pair excess entropies.
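The Stokes-Einstein relation tested here links D and η through an effective radius. A quick numerical check, with made-up melt-like values for illustration only:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_d(eta, T, radius, c=6.0):
    """D = k_B * T / (c * pi * eta * r); c = 6 (stick) or 4 (slip) boundary condition."""
    return K_B * T / (c * math.pi * eta * radius)

# e.g. a ~1 Angstrom atom in a melt with viscosity ~1 mPa*s at 1000 K
D = stokes_einstein_d(eta=1.0e-3, T=1000.0, radius=1.0e-10)
```

Deviations from this relation, as reported above for Al-Cu, manifest as a temperature-dependent effective radius.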
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
On the asymptotic behavior of a subcritical convection-diffusion equation with nonlocal diffusion
NASA Astrophysics Data System (ADS)
Cazacu, Cristian M.; Ignat, Liviu I.; Pazoto, Ademir F.
2017-08-01
In this paper we consider a subcritical model that involves nonlocal diffusion and a classical convective term. In spite of the nonlocal diffusion, we obtain an Oleinik-type estimate similar to the case when the diffusion is local. First we prove that the entropy solution can be obtained by adding a small viscous term μu_xx and letting μ → 0. Then, by using uniform Oleinik estimates for the viscous approximation, we are able to prove the well-posedness of entropy solutions with L^1 initial data. Using a scaling argument and hyperbolic estimates given by Oleinik's inequality, we obtain the first term in the asymptotic behavior of the nonnegative solutions. Finally, the large-time behavior of sign-changing solutions is proved using the classical flux-entropy method and estimates for the nonlocal operator.
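The vanishing-viscosity construction can be illustrated numerically on a local model problem. The sketch below regularizes an inviscid Burgers-type conservation law with a small μu_xx term; this local toy stands in for the paper's nonlocal operator, and the Rusanov discretization is our own choice:

```python
import numpy as np

def step(u, dx, dt, mu):
    """One explicit step of u_t + (u^2/2)_x = mu*u_xx on a periodic grid
    (Rusanov flux for the convective part, centered differences for diffusion)."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    fp, fm = 0.5 * up**2, 0.5 * um**2
    a = np.abs(u).max()                                   # local wave-speed bound
    div = (fp - fm) / (2 * dx) - a * (up - 2 * u + um) / (2 * dx)
    lap = (up - 2 * u + um) / dx**2
    return u - dt * div + dt * mu * lap

n, mu = 200, 1.0e-2
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx, dt = x[1] - x[0], 2.0e-3
u = np.sin(x)
for _ in range(500):                                      # integrate to t = 1
    u = step(u, dx, dt, mu)
```

The scheme is conservative (total mass is preserved) and satisfies a discrete maximum principle under the CFL restriction; as μ → 0 the profiles converge to the entropy solution.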
Mittal, Jeetain; Errington, Jeffrey R; Truskett, Thomas M
2007-08-30
Static measures such as density and entropy, which are intimately connected to structure, have featured prominently in modern thinking about the dynamics of the liquid state. Here, we explore the connections between self-diffusivity, density, and excess entropy for two of the most widely used model "simple" liquids, the equilibrium Lennard-Jones and square-well fluids, in both bulk and confined environments. We find that the self-diffusivity data of the Lennard-Jones fluid can be approximately collapsed onto a single curve (i) versus effective packing fraction and (ii) in appropriately reduced form versus excess entropy, as suggested by two well-known scaling laws. Similar data collapse does not occur for the square-well fluid, a fact that can be understood on the basis of the nontrivial effects that temperature has on its static structure. Nonetheless, we show that the implications of confinement for the self-diffusivity of both of these model fluids, over a broad range of equilibrium conditions, can be predicted on the basis of knowledge of the bulk fluid behavior and either the effective packing fraction or the excess entropy of the confined fluid. Excess entropy is perhaps the most preferable route due to its superior predictive ability and because it is a standard, unambiguous thermodynamic quantity that can be readily predicted via classical density functional theories of inhomogeneous fluids.
Excess entropy scaling for the segmental and global dynamics of polyethylene melts.
Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C
2014-11-28
The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and the torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the self-associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain-length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain-length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Liu, Ti C.; Mitra, Sunanda
1996-06-01
Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on a wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of the invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint image yields proper classification.
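The pipeline described (wavelet decomposition, then uniform scalar quantization, then entropy coding) can be sketched with a one-level 2-D Haar transform; the Haar filter and quantization step below are illustrative stand-ins for the paper's actual filter bank:

```python
import numpy as np

def haar_step(x, axis):
    """Single-level Haar analysis along one axis (even length assumed)."""
    x = np.moveaxis(x, axis, -1)
    lo = (x[..., 0::2] + x[..., 1::2]) / 2.0
    hi = (x[..., 0::2] - x[..., 1::2]) / 2.0
    return np.moveaxis(lo, -1, axis), np.moveaxis(hi, -1, axis)

def ihaar_step(lo, hi, axis):
    """Exact inverse of haar_step."""
    lo, hi = np.moveaxis(lo, axis, -1), np.moveaxis(hi, axis, -1)
    out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
    out[..., 0::2], out[..., 1::2] = lo + hi, lo - hi
    return np.moveaxis(out, -1, axis)

def haar2d(img):
    lo, hi = haar_step(img, 1)
    ll, lh = haar_step(lo, 0)
    hl, hh = haar_step(hi, 0)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    return ihaar_step(ihaar_step(ll, lh, 0), ihaar_step(hl, hh, 0), 1)

def entropy_bits(q):
    """First-order entropy (bits/symbol) of quantized coefficients."""
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.outer(np.arange(8.0), np.ones(8))      # smooth test ramp
ll, lh, hl, hh = haar2d(img)
q_hh = np.round(hh / 1.0).astype(int)           # uniform quantizer, step 1
```

For smooth content the detail subbands quantize to nearly all zeros, which is where the entropy coder gains over direct coding of pixel values.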
Fozouni, Niloufar; Chopp, Michael; Nejad-Davarani, Siamak P.; Zhang, Zheng Gang; Lehman, Norman L.; Gu, Steven; Ueno, Yuji; Lu, Mei; Ding, Guangliang; Li, Lian; Hu, Jiani; Bagher-Ebadian, Hassan; Hearshen, David; Jiang, Quan
2013-01-01
Background To overcome the limitations of conventional diffusion tensor magnetic resonance imaging resulting from the assumption of a Gaussian diffusion model for characterizing voxels containing multiple axonal orientations, Shannon's entropy was employed to evaluate white matter structure in the human brain and in brain remodeling after traumatic brain injury (TBI) in a rat. Methods Thirteen healthy subjects were investigated using a Q-ball based DTI data sampling scheme. FA and entropy values were measured in white matter bundles, white matter fiber crossing areas, different gray matter (GM) regions and cerebrospinal fluid (CSF). Axonal densities from the same regions of interest (ROIs) were evaluated in Bielschowsky and Luxol fast blue stained autopsy (n = 30) brain sections by light microscopy. As a case demonstration, a Wistar rat subjected to TBI and treated with bone marrow stromal cells (MSC) 1 week after TBI was employed to illustrate the superior ability of entropy over FA in detecting reorganized crossing axonal bundles, as confirmed by histological analysis with Bielschowsky and Luxol fast blue staining. Results Unlike FA, entropy was less affected by axonal orientation and more affected by axonal density. A significant agreement (r = 0.91) was detected between entropy values from the in vivo human brain and histologically measured axonal density from post mortem sections of the same brain structures. The MSC-treated TBI rat demonstrated that the entropy approach is superior to FA in detecting axonal remodeling after injury. Compared with FA, entropy detected new axonal remodeling regions with crossing axons, confirmed with immunohistological staining. Conclusions Entropy measurement is more effective in distinguishing axonal remodeling after injury, when compared with FA. Entropy is also more sensitive to axonal density than axonal orientation, and thus may provide a more accurate reflection of axonal changes that occur in neurological injury and disease.
PMID:24143186
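Shannon's entropy, as used here, is simply the entropy of a histogram of voxel values; the bin count and range below are arbitrary illustrative choices:

```python
import numpy as np

def histogram_entropy(values, bins=32, value_range=(0.0, 1.0)):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# a uniform spread of values maximizes entropy; a single peak minimizes it
spread = histogram_entropy(np.linspace(0.0, 1.0, 3200))
peaked = histogram_entropy(np.full(3200, 0.5))
```

With 32 bins the entropy is bounded by log2(32) = 5 bits, attained for a perfectly uniform distribution.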
A slow atomic diffusion process in high-entropy glass-forming metallic melts
NASA Astrophysics Data System (ADS)
Chen, Changjiu; Wong, Kaikin; Krishnan, Rithin P.; Embs, Jan P.; Chathoth, Suresh M.
2018-04-01
Quasi-elastic neutron scattering has been used to study atomic relaxation processes in high-entropy glass-forming metallic melts with different glass-forming ability (GFA). The momentum transfer dependence of mean relaxation time shows a highly collective atomic transport process in the alloy melts with the highest and lowest GFA. However, a jump diffusion process is the long-range atomic transport process in the intermediate GFA alloy melt. Nevertheless, atomic mobility close to the melting temperature of these alloy melts is quite similar, and the temperature dependence of the diffusion coefficient exhibits a non-Arrhenius behavior. The atomic mobility in these high-entropy melts is much slower than that of the best glass-forming melts at their respective melting temperatures.
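The jump-diffusion signature seen in QENS is commonly modeled by a Q-dependent quasi-elastic linewidth that is Fickian (DQ²) at small Q and saturates at 1/τ0 at large Q. A Singwi-Sjölander-type sketch; the specific model choice is our assumption, not necessarily the authors':

```python
def jump_hwhm(Q, D, tau0):
    """Quasi-elastic half-width Gamma(Q) = D*Q^2 / (1 + D*Q^2*tau0):
    ~D*Q^2 for small Q (continuous diffusion), -> 1/tau0 for large Q (jumps)."""
    return D * Q**2 / (1.0 + D * Q**2 * tau0)

# dimensionless illustration: D = 1, residence time tau0 = 1
small_q = jump_hwhm(1.0e-3, 1.0, 1.0)   # Fickian regime
large_q = jump_hwhm(1.0e3, 1.0, 1.0)    # jump-limited plateau
```

Fitting the measured Γ(Q) against a DQ² law versus this saturating form is how collective versus jump-like transport is distinguished.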
TransCut: interactive rendering of translucent cutouts.
Li, Dongping; Sun, Xin; Ren, Zhong; Lin, Stephen; Tong, Yiying; Guo, Baining; Zhou, Kun
2013-03-01
We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces—all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible with two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. but at substantially higher speeds. This accuracy and efficiency is obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm to significantly accelerate our DE solver while adapting to the frequent changes in topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.
Turing pattern dynamics and adaptive discretization for a super-diffusive Lotka-Volterra model.
Bendahmane, Mostafa; Ruiz-Baier, Ricardo; Tian, Canrong
2016-05-01
In this paper we analyze the effects of introducing the fractional-in-space operator into a Lotka-Volterra competitive model describing population super-diffusion. First, we study how cross super-diffusion influences the formation of spatial patterns: a linear stability analysis is carried out, showing that cross super-diffusion triggers Turing instabilities, whereas classical (self) super-diffusion does not. In addition we perform a weakly nonlinear analysis yielding a system of amplitude equations, whose study shows the stability of Turing steady states. A second goal of this contribution is to propose a fully adaptive multiresolution finite volume method that employs shifted Grünwald gradient approximations, and which is tailored for a larger class of systems involving fractional diffusion operators. The scheme is aimed at efficient dynamic mesh adaptation and substantial savings in computational burden. A numerical simulation of the model was performed near the instability boundaries, confirming the behavior predicted by our analysis.
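The shifted Grünwald approximation referenced above builds the fractional derivative from binomial-type weights w_k = (-1)^k C(α, k), computed by the standard recursion. A minimal sketch:

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Weights w_k = (-1)^k * binom(alpha, k) via the recursion
    w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_deriv_shifted(u, h, alpha):
    """Shifted Grünwald-Letnikov estimate of d^alpha u / dx^alpha on a grid:
    (1/h^alpha) * sum_k w_k * u_{i-k+1} (one-point shift, boundary terms dropped)."""
    n = len(u)
    w = grunwald_weights(alpha, n)
    out = np.zeros(n)
    for i in range(n):
        for k in range(min(i + 2, n)):
            j = i - k + 1
            if 0 <= j < n:
                out[i] += w[k] * u[j]
    return out / h**alpha
```

A useful sanity check: for α = 2 the weights collapse to the classical second-difference stencil [1, -2, 1], so the scheme reproduces the ordinary Laplacian exactly on quadratics.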
Diffusion imaging quality control via entropy of principal direction distribution.
Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A
2013-11-15
Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, which are often only visible when investigating groups of DWIs or a derived diffusion model, such as the most commonly employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts.
Such residual artifacts cause a directional bias in the measured PD and are here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially those caused by vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies. Copyright © 2013 Elsevier Inc. All rights reserved.
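The PD entropy can be sketched by folding each principal direction onto a half-circle (an axis has no sign), histogramming the angles, and taking the Shannon entropy. The 2-D reduction and bin count below are simplifications of the regional 3-D measure:

```python
import numpy as np

def pd_entropy(dirs, n_bins=18):
    """Entropy (bits) of the distribution of principal directions.
    `dirs` is (N, 2); antipodal vectors are identified (angle mod pi)."""
    ang = np.mod(np.arctan2(dirs[:, 1], dirs[:, 0]), np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi))
    p = hist[hist > 0] / len(dirs)
    return float(-(p * np.log2(p)).sum())

theta = np.linspace(0.0, np.pi, 180, endpoint=False)
uniform = np.column_stack([np.cos(theta), np.sin(theta)])   # well-scattered PDs
clustered = np.tile([1.0, 0.0], (180, 1))                   # one dominant direction
```

A dominant direction artifact collapses the distribution into a few bins, which shows up as an abnormally low entropy value for the region.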
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...
2015-02-25
We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
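Ehrenfest's urn can be simulated in a few lines: each step moves a randomly chosen marble to the other urn, and the occupancy drifts to the entropy-maximizing 50/50 split. Marble count, step count, and seed below are arbitrary:

```python
import math
import random

def ehrenfest(n_marbles=100, steps=10_000, seed=0):
    """Two-urn Ehrenfest model; returns the trajectory of urn-A occupancy,
    starting with all marbles in urn A."""
    rng = random.Random(seed)
    n_a, traj = n_marbles, []
    for _ in range(steps):
        # pick a marble uniformly at random; move it to the other urn
        if rng.randrange(n_marbles) < n_a:
            n_a -= 1
        else:
            n_a += 1
        traj.append(n_a)
    return traj

def log_multiplicity(n_total, n_a):
    """ln C(N, n): entropy (in units of k_B) of the macrostate with n_a marbles in urn A."""
    return (math.lgamma(n_total + 1) - math.lgamma(n_a + 1)
            - math.lgamma(n_total - n_a + 1))

traj = ehrenfest()
late_mean = sum(traj[5000:]) / 5000.0   # equilibrium average ~ N/2
```

The dynamics relax toward n = N/2 precisely because that macrostate maximizes the multiplicity C(N, n), which is the lottery's entropy.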
Multiple Diffusion Mechanisms Due to Nanostructuring in Crowded Environments
Sanabria, Hugo; Kubota, Yoshihisa; Waxham, M. Neal
2007-01-01
One of the key questions regarding intracellular diffusion is how the environment affects molecular mobility. Mostly, intracellular diffusion has been described as hindered, and the physical reasons for this behavior are: immobile barriers, molecular crowding, and binding interactions with immobile or mobile molecules. Using results from multi-photon fluorescence correlation spectroscopy, we describe how immobile barriers and crowding agents affect translational mobility. To study the hindrance produced by immobile barriers, we used sol-gels (silica nanostructures) that consist of a continuous solid phase and aqueous phase in which fluorescently tagged molecules diffuse. In the case of molecular crowding, translational mobility was assessed in increasing concentrations of 500 kDa dextran solutions. Diffusion of fluorescent tracers in both sol-gels and dextran solutions shows clear evidence of anomalous subdiffusion. In addition, data from the autocorrelation function were analyzed using the maximum entropy method as adapted to fluorescence correlation spectroscopy data and compared with the standard model that incorporates anomalous diffusion. The maximum entropy method revealed evidence of different diffusion mechanisms that had not been revealed using the anomalous diffusion model. These mechanisms likely correspond to nanostructuring in crowded environments and to the relative dimensions of the crowding agent with respect to the tracer molecule. Analysis with the maximum entropy method also revealed information about the degree of heterogeneity in the environment as reported by the behavior of diffusive molecules. PMID:17040979
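The anomalous-subdiffusion model fitted to the FCS autocorrelation has a simple closed form; a 2-D sketch is given below. Note that the maximum entropy method described above instead resolves a whole distribution of diffusion times, which this single-component model does not attempt:

```python
def fcs_anomalous_2d(tau, n_mol, tau_d, alpha):
    """2-D FCS autocorrelation with anomalous exponent alpha:
    G(tau) = (1/N) / (1 + (tau/tau_d)^alpha); alpha < 1 indicates subdiffusion."""
    return (1.0 / n_mol) / (1.0 + (tau / tau_d) ** alpha)

g0 = fcs_anomalous_2d(0.0, n_mol=10.0, tau_d=1.0e-3, alpha=0.8)      # amplitude 1/N
ghalf = fcs_anomalous_2d(1.0e-3, n_mol=10.0, tau_d=1.0e-3, alpha=0.8)  # half-decay at tau_d
```

Whenever a single stretched component fits poorly, a sum of such components with MEM-regularized weights reveals the multiple diffusion mechanisms reported here.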
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm, BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of the sparse signals, namely the fiber orientations and volume fractions. The data are modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
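The dictionary picture in BusineX — the signal as a nonnegative mixture of exponential decays along candidate fiber directions — can be illustrated with a toy linear unmixing. Plain least squares stands in for the sparse Bayesian inference, and all numerical values are made up:

```python
import numpy as np

# b-values (s/mm^2) and candidate apparent diffusivities (mm^2/s) -- illustrative
b = np.linspace(0.0, 3000.0, 10)
diffusivities = np.array([0.5e-3, 1.0e-3, 2.0e-3])

# dictionary: one exponential-decay atom per candidate direction/diffusivity
A = np.exp(-np.outer(b, diffusivities))

f_true = np.array([0.3, 0.0, 0.7])      # sparse volume fractions
signal = A @ f_true                      # noiseless synthetic measurement

# recover the dictionary weights (sparse Bayesian learning would add
# per-weight priors to drive most fractions exactly to zero)
f_est, *_ = np.linalg.lstsq(A, signal, rcond=None)
```

With noisy, under-sampled q-space data the unregularized inverse becomes unstable, which is exactly where the sparsity priors of the Bayesian formulation pay off.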
Diffusion Entropy: A Potential Neuroimaging Biomarker of Bipolar Disorder in the Temporal Pole.
Spuhler, Karl; Bartlett, Elizabeth; Ding, Jie; DeLorenzo, Christine; Parsey, Ramin; Huang, Chuan
2018-02-01
Despite much research, bipolar depression remains poorly understood, with no clinically useful biomarkers for its diagnosis. The paralimbic system has become a target for biomarker research, with paralimbic structural connectivity commonly reported to distinguish bipolar patients from controls in tractography-based diffusion MRI studies, despite inconsistent findings in voxel-based studies. The purpose of this analysis was to validate existing findings with traditional diffusion MRI metrics and investigate the utility of a novel diffusion MRI metric, entropy of diffusion, in the search for bipolar depression biomarkers. We performed group-level analysis on 9 un-medicated (6 medication-naïve; 3 medication-free for at least 33 days) bipolar patients in a major depressive episode and 9 matched healthy controls to compare: (1) average mean diffusivity (MD) and fractional anisotropy (FA); and (2) MD and FA histogram entropy (a statistical measure of distribution homogeneity) in the amygdala, hippocampus, orbitofrontal cortex and temporal pole. We also conducted classification analyses with leave-one-out and separate testing dataset (N = 11) approaches. We did not observe statistically significant differences in average MD or FA between the groups in any region. However, in the temporal pole, we observed significantly lower MD entropy in bipolar patients; this finding suggests a regional difference in MD distributions in the absence of an average difference. This metric allowed us to accurately distinguish bipolar patients from controls in leave-one-out (accuracy = 83%) and prediction (accuracy = 73%) analyses. This novel application of diffusion MRI not only yielded an interesting separation between bipolar patients and healthy controls, but also accurately classified bipolar patients versus controls. © 2017 Wiley Periodicals, Inc.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
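In the special case of an undirected graph with no state- or path-dependent constraints, the derived transition probabilities reduce to the known Ruelle-Bowen form p_ij = A_ij ψ_j/(λ ψ_i), where λ and ψ are the Perron eigenvalue and eigenvector of the adjacency matrix A. The sketch below is an illustrative reduction to that constraint-free case (not the authors' general constrained construction); it checks that the stationary distribution weights central nodes more heavily than peripheral ones, reflecting path multiplicity.

```python
import numpy as np

def max_entropy_walk(A):
    """Maximum path-entropy (Ruelle-Bowen) random walk on a graph with
    symmetric adjacency matrix A, with no additional constraints."""
    w, V = np.linalg.eig(A)
    k = np.argmax(w.real)
    lam = w[k].real                       # Perron eigenvalue
    psi = np.abs(V[:, k].real)            # Perron eigenvector (positive)
    # transition probabilities: p_ij = A_ij * psi_j / (lam * psi_i)
    P = A * psi[None, :] / (lam * psi[:, None])
    pi = psi * psi                        # stationary weights for symmetric A
    return P, pi / pi.sum()

# 4-node path graph: the ME walk concentrates stationary probability
# on the two central nodes, which sit on more long paths
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P, pi = max_entropy_walk(A)
```

Row-stochasticity follows from Aψ = λψ, and stationarity of π_i ∝ ψ_i² follows from the symmetry of A.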
Entropy-based artificial viscosity stabilization for non-equilibrium Grey Radiation-Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delchini, Marc O., E-mail: delchinm@email.tamu.edu; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim, E-mail: jim.morel@tamu.edu
2015-09-01
The entropy viscosity method is extended to the non-equilibrium Grey Radiation-Hydrodynamic equations. The method employs a viscous regularization to stabilize the numerical solution. The artificial viscosity coefficient is modulated by the entropy production and peaks at shock locations. The added dissipative terms are consistent with the entropy minimum principle. A new functional form of the entropy residual, suitable for the Radiation-Hydrodynamic equations, is derived. We demonstrate that the viscous regularization preserves the equilibrium diffusion limit. The equations are discretized with a standard Continuous Galerkin Finite Element Method and a fully implicit temporal integrator within the MOOSE multiphysics framework. The method of manufactured solutions is employed to demonstrate second-order accuracy in both the equilibrium diffusion and streaming limits. Several typical 1-D radiation-hydrodynamic test cases with shocks (from Mach 1.05 to Mach 50) are presented to establish the ability of the technique to capture and resolve shocks.
The nexus between geopolitical uncertainty and crude oil markets: An entropy-based wavelet analysis
NASA Astrophysics Data System (ADS)
Uddin, Gazi Salah; Bekiros, Stelios; Ahmed, Ali
2018-04-01
The global financial crisis and the subsequent geopolitical turbulence in energy markets have brought increased attention to proper statistical modeling, especially of crude oil markets. In particular, we utilize a time-frequency decomposition approach based on wavelet analysis to explore the inherent dynamics and the causal interrelationships between various types of geopolitical, economic and financial uncertainty indices and oil markets. Via the introduction of a mixed discrete-continuous multiresolution analysis, we employ the entropic criterion for the selection of the optimal decomposition level of a maximal overlap discrete wavelet transform (MODWT), as well as the continuous-time coherency and phase measures for the detection of business cycle (a)synchronization. Overall, a strong heterogeneity in the revealed interrelationships is detected over time and across scales.
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
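The diffusion entropy analysis mentioned above can be sketched in a few lines: build displacement distributions from overlapping windows of the fluctuation record, estimate the Shannon entropy of the resulting PDF at each window length t, and read the growth exponent off the scaling relation S(t) = A + δ ln t. The minimal sketch below uses uncorrelated Gaussian noise, for which δ ≈ 0.5; it is an illustration of the generic method, not the authors' code.

```python
import numpy as np

def diffusion_entropy(xi, times, bins=60):
    """Shannon entropy S(t) of the PDF of window sums x(t) of a series xi.
    For a self-affine process, S(t) = A + delta*ln(t)."""
    cs = np.concatenate(([0.0], np.cumsum(xi)))
    S = []
    for t in times:
        x = cs[t:] - cs[:-t]                        # all window sums of length t
        p, edges = np.histogram(x, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        S.append(-np.sum(p * np.log(p)) * dx)       # differential entropy estimate
    return np.array(S)

rng = np.random.default_rng(0)
xi = rng.normal(size=200_000)                       # uncorrelated Gaussian noise
times = np.array([10, 20, 40, 80, 160])
S = diffusion_entropy(xi, times)
delta = np.polyfit(np.log(times), S, 1)[0]          # growth exponent estimate
```

For ordinary diffusion the PDF is Gaussian with variance ∝ t, so S(t) grows as 0.5 ln t and the fitted slope recovers δ ≈ 0.5.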
Two-phase thermodynamic model for computing entropies of liquids reanalyzed
NASA Astrophysics Data System (ADS)
Sun, Tao; Xian, Jiawei; Zhang, Huai; Zhang, Zhigang; Zhang, Yigang
2017-11-01
The two-phase thermodynamic (2PT) model [S.-T. Lin et al., J. Chem. Phys. 119, 11792-11805 (2003)] provides a promising paradigm to efficiently determine the ionic entropies of liquids from molecular dynamics. In this model, the vibrational density of states (VDoS) of a liquid is decomposed into a diffusive gas-like component and a vibrational solid-like component. By treating the diffusive component as hard sphere (HS) gas and the vibrational component as harmonic oscillators, the ionic entropy of the liquid is determined. Here we examine three issues crucial for practical implementations of the 2PT model: (i) the mismatch between the VDoS of the liquid system and that of the HS gas; (ii) the excess entropy of the HS gas; (iii) the partition of the gas-like and solid-like components. Some of these issues have not been addressed before, yet they profoundly change the entropy predicted from the model. Based on these findings, a revised 2PT formalism is proposed and successfully tested in systems with Lennard-Jones potentials as well as many-atom potentials of liquid metals. Aside from being capable of performing quick entropy estimations for a wide range of systems, the formalism also supports fine-tuning to accurately determine entropies at specific thermal states.
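The starting point of any 2PT calculation is the VDoS itself, obtained as the power spectrum of the atomic velocities (Wiener-Khinchin theorem). The sketch below uses an Ornstein-Uhlenbeck velocity trace as a stand-in for one MD velocity component and checks the normalization of the resulting density of states; the gas/solid decomposition discussed in the abstract is not shown, and all parameter values are illustrative.

```python
import numpy as np

kB, m, T = 1.0, 1.0, 1.0          # reduced units (illustrative)
dt, gamma, n = 0.01, 1.0, 2**17

# Ornstein-Uhlenbeck velocity trace as a stand-in for MD output
rng = np.random.default_rng(1)
v = np.zeros(n)
sig = np.sqrt(2*gamma*kB*T/m*dt)
for i in range(1, n):
    v[i] = v[i-1]*(1 - gamma*dt) + sig*rng.normal()

# one-sided VDoS: s(nu) = (2m/kB T) x power spectral density of v
spec = np.abs(np.fft.rfft(v))**2 * dt / n
s = 2*m/(kB*T) * spec
nu = np.fft.rfftfreq(n, dt)
dof = s.sum() * (nu[1] - nu[0])   # integrates to ~1 per velocity component
```

The normalization check (one degree of freedom per velocity component) is what makes the subsequent partition of s(ν) into gas-like and solid-like fractions meaningful.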
Melting properties of Pt and its transport coefficients in liquid states under high pressures
NASA Astrophysics Data System (ADS)
Wang, Pan-Pan; Shao, Ju-Xiang; Cao, Qi-Long
2016-11-01
Molecular dynamics (MD) simulations of the melting and transport properties in liquid states of platinum for the pressure range (50-200 GPa) are reported. The melting curve of platinum is consistent with previous ab initio MD simulation results and the first-principles melting curve. Calculated results for the pressure dependence of fusion entropy and fusion volume show that the fusion entropy and the fusion volume decrease with increasing pressure, and the ratio of the fusion volume to fusion entropy roughly reproduces the melting slope, which has a moderate decrease along the melting line. The Arrhenius law well describes the temperature dependence of self-diffusion coefficients and viscosity under high pressure, and the diffusion activation energy decreases with increasing pressure, while the viscosity activation energy increases with increasing pressure. In addition, the entropy-scaling law, proposed by Rosenfeld under ambient pressure, still holds well for liquid Pt under high pressure conditions.
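The Arrhenius description used above, D(T) = D0 exp(-Ea/kB T), is fitted in practice as a straight line in ln D versus 1/T. The sketch below recovers the activation energy and prefactor from synthetic diffusion coefficients; the numerical values are made up for illustration and are not data from the paper.

```python
import numpy as np

kB = 8.617e-5                         # Boltzmann constant, eV/K

# hypothetical liquid-metal self-diffusion data (illustrative values only)
Ea_true, D0_true = 0.55, 4.0e-8       # eV, m^2/s
T = np.array([2200., 2500., 2800., 3100.])
D = D0_true * np.exp(-Ea_true/(kB*T))

# Arrhenius fit: ln D = ln D0 - (Ea/kB) * (1/T)
slope, intercept = np.polyfit(1.0/T, np.log(D), 1)
Ea = -slope*kB                        # recovered activation energy (eV)
D0 = np.exp(intercept)                # recovered prefactor (m^2/s)
```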
Roushangar, Kiyoumars; Alizadeh, Farhad; Adamowski, Jan
2018-08-01
Understanding precipitation on a regional basis is an important component of water resources planning and management. The present study outlines a methodology based on continuous wavelet transform (CWT) and multiscale entropy (CWME), combined with self-organizing map (SOM) and k-means clustering techniques, to measure and analyze the complexity of precipitation. Historical monthly precipitation data from 1960 to 2010 at 31 rain gauges across Iran were preprocessed by CWT. The multi-resolution CWT approach segregated the major features of the original precipitation series by unfolding the structure of the time series which was often ambiguous. The entropy concept was then applied to components obtained from CWT to measure dispersion, uncertainty, disorder, and diversification of subcomponents. Based on different validity indices, k-means clustering captured homogenous areas more accurately, and additional analysis was performed based on the outcome of this approach. The 31 rain gauges in this study were clustered into 6 groups, each one having a unique CWME pattern across different time scales. The results of clustering showed that hydrologic similarity (multiscale variation of precipitation) was not based on geographic contiguity. According to the pattern of entropy across the scales, each cluster was assigned an entropy signature that provided an estimation of the entropy pattern of precipitation data in each cluster. Based on the pattern of mean CWME for each cluster, a characteristic signature was assigned, which provided an estimation of the CWME of a cluster across scales of 1-2, 3-8, and 9-13 months relative to other stations. The validity of the homogeneous clusters demonstrated the usefulness of the proposed approach to regionalize precipitation. 
Further analysis based on wavelet coherence (WTC) was performed by selecting central rain gauges in each cluster and analyzing them against temperature, wind, the Multivariate ENSO Index (MEI), and the East Atlantic (EA) and North Atlantic Oscillation (NAO) indices. The results revealed that all climatic features except the NAO influenced precipitation in Iran during the 1960-2010 period. Copyright © 2018 Elsevier Inc. All rights reserved.
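The pipeline described above (wavelet decomposition, entropy of the energy distribution across scales, then clustering of the entropy signatures) can be sketched compactly. The illustration below substitutes a Haar discrete wavelet transform for the paper's CWT and a tiny numpy k-means for the SOM/k-means step, and runs on synthetic series rather than rain-gauge data; it shows that a noisy and a quasi-periodic series receive distinct entropy signatures and land in different clusters.

```python
import numpy as np

def haar_details(x, levels):
    """Detail coefficients per scale from a Haar DWT (stand-in for the CWT)."""
    a = np.asarray(x, float)
    out = []
    for _ in range(levels):
        n = len(a) // 2 * 2
        out.append((a[0:n:2] - a[1:n:2]) / np.sqrt(2))
        a = (a[0:n:2] + a[1:n:2]) / np.sqrt(2)
    return out

def wavelet_energy_entropy(x, levels=4):
    """Shannon entropy of the relative wavelet energy across scales."""
    e = np.array([np.sum(d**2) for d in haar_details(x, levels)])
    p = e / e.sum()
    return -np.sum(p * np.log(p))

def kmeans(X, k, iters=50):
    C = X[:k].copy()                       # deterministic initialization
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
    return lab

rng = np.random.default_rng(2)
series = [rng.normal(size=1024) for _ in range(3)] + \
         [np.sin(np.arange(1024)*2*np.pi/8) + 0.1*rng.normal(size=1024)
          for _ in range(3)]
feats = np.array([[wavelet_energy_entropy(s)] for s in series])
labels = kmeans(feats, 2)
```

White noise spreads its energy across scales (high entropy), while the quasi-periodic series concentrates energy at one scale (low entropy), so the two groups separate cleanly.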
Two Dimensional Drug Diffusion Between Nanoparticles and Fractal Tumors
NASA Astrophysics Data System (ADS)
Samioti, S. E.; Karamanos, K.; Tsiantis, A.; Papathanasiou, A.; Sarris, I.
2017-11-01
Drug delivery methods based on nanoparticles are among the most promising medical applications of nanotechnology to treat cancer. It is observed that drug released by nanoparticles to cancer tumors may be driven by diffusion. A fractal tumor boundary of triangular Von Koch shape is considered here, and the diffusion mechanism is studied for different drug concentrations and increased fractality. A high-order finite element method based on the FEniCS library is applied on fine meshes to fully resolve these irregular boundaries. Drug concentration, its transfer rates, and entropy production are calculated for boundaries of up to fourth-order fractal iteration. We observed that the diffusion rate diminishes for successive prefractal generations. Also, the entropy production around the system changes greatly as the order of the fractal curve increases. The results locate with precision the active sites where most of the diffusion takes place and through which the drug reaches the tumor.
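The triangular Von Koch prefractal boundary used in these simulations is generated by recursively replacing each segment with the four-segment Koch generator. The sketch below builds the fourth-order prefractal of a unit segment and verifies the standard counts (4^n segments, total length (4/3)^n); it illustrates only the boundary construction, not the paper's finite element solver.

```python
import numpy as np

def koch(p0, p1, order):
    """Points of the order-n Koch prefractal replacing segment p0->p1."""
    if order == 0:
        return [np.asarray(p0, float), np.asarray(p1, float)]
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = (p1 - p0) / 3
    a, b = p0 + v, p0 + 2*v
    rot = np.array([[0.5, -np.sqrt(3)/2],
                    [np.sqrt(3)/2, 0.5]])       # rotation by +60 degrees
    peak = a + rot @ v                           # apex of the bump
    pts = []
    for q0, q1 in [(p0, a), (a, peak), (peak, b), (b, p1)]:
        pts.extend(koch(q0, q1, order - 1)[:-1]) # drop shared endpoints
    pts.append(p1)
    return pts

pts = koch([0, 0], [1, 0], 4)                    # fourth-order prefractal
nseg = len(pts) - 1
L = sum(np.linalg.norm(pts[i+1] - pts[i]) for i in range(nseg))
```

Each iteration multiplies the number of segments by 4 and the total length by 4/3, which is why successive prefractal generations present an ever longer, rougher interface to the diffusing drug.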
Image Analysis Using Quantum Entropy Scale Space and Diffusion Concepts
2009-11-01
images using a combination of analytic methods and prototype Matlab and Mathematica programs. We investigated concepts of generalized entropy and... Schmidt strength from quantum logic gate decomposition. This form of entropy gives a measure of the nonlocal content of an entangling logic gate... We recall that the Schmidt number is an indicator of entanglement, but not a measure of entanglement. For instance, let us compare
NASA Astrophysics Data System (ADS)
Weinketz, Sieghard
1998-07-01
The reordering kinetics of a diffusion lattice-gas system of adsorbates with nearest- and next-nearest-neighbor interactions on a square lattice is studied within a dynamic Monte Carlo simulation, as it evolves towards equilibrium from a given initial configuration at a constant temperature. The diffusion kinetics proceeds through adsorbate hoppings to empty nearest-neighboring sites (Kawasaki dynamics). The Monte Carlo procedure allows a "real" time definition from the local transition rates, and the configurational entropy and internal energy can be obtained from the lattice configuration at any instant t by counting the local clusters and using the C2 approximation of the cluster variation method. These state functions are then used in their nonequilibrium form as a direct measure of reordering over time. Different reordering processes are analyzed within this approach, presenting a rich variety of behaviors. It can also be shown that the time derivative of entropy (times temperature) is always equal to or lower than the time derivative of energy, and that the reordering path is always strongly dependent on the initial order, presenting in some cases an "invariance" of the entropy function to the magnitude of the interactions as far as the final order is unaltered.
Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman
2015-01-01
A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. 
Compared to the intensity-only JE-MAP algorithm, the WJE-MAP algorithm resulted in comparable regional mean values to those from the maximum likelihood algorithm while reducing noise. Achieving robust performance in various noise-level simulation and patient studies, the WJE-MAP algorithm demonstrates its potential in clinical quantitative PET imaging.
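The joint entropy measure underlying both the JE-MAP and WJE-MAP priors is computed from the 2-D intensity histogram of the functional and anatomical images. The toy sketch below (synthetic arrays, not PET/MR data, and without the wavelet subband extension) shows the key property the prior exploits: an image that shares structure with the anatomical reference has lower joint entropy than an unrelated one.

```python
import numpy as np

def joint_entropy(a, b, bins=32):
    """Joint Shannon entropy of two images from their 2-D intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
mr = rng.normal(size=(64, 64))                           # stand-in "anatomy"
pet_aligned = 2.0*mr + 0.1*rng.normal(size=mr.shape)     # shares structure
pet_indep = rng.normal(size=mr.shape)                    # no shared structure
je_a = joint_entropy(pet_aligned, mr)
je_i = joint_entropy(pet_indep, mr)
```

Minimizing a JE penalty therefore pulls the reconstruction toward images whose intensity classes co-vary with the anatomy, which is exactly the behavior the MAP prior encodes.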
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory
We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, based on a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited-state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong
2012-11-01
By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (∼0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than that for light sleep stages (∼0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (∼10^2), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
Quantitative characterization of brazing performance for Sn-plated silver alloy fillers
NASA Astrophysics Data System (ADS)
Wang, Xingxing; Peng, Jin; Cui, Datian
2017-12-01
Two types of AgCuZnSn fillers were prepared from BAg50CuZn and BAg34CuZnSn alloys through a combined process of electroplating and thermal diffusion. Models of the wetting entropy and the joint strength entropy of the AgCuZnSn filler metals were established. The wetting entropy of the Sn-plated silver brazing alloys is lower than that of the traditional fillers, while their joint strength entropy is slightly higher than that of the latter. For both the Sn-plated brazing alloys and the traditional filler metal, the wetting entropy follows a trend similar to that of the wetting area. The trend of the joint strength entropy for these fillers is consistent with the tensile strength of the stainless steel joints as the Sn content increases.
NASA Technical Reports Server (NTRS)
Goldberg, Louis F.
1992-01-01
Aspects of the information propagation modeling behavior of integral machine computer simulation programs are investigated in terms of a transmission line. In particular, the effects of pressure-linking and temporal integration algorithms on the amplitude ratio and phase angle predictions are compared against experimental and closed-form analytic data. It is concluded that the discretized, first order conservation balances may not be adequate for modeling information propagation effects at characteristic numbers less than about 24. An entropy transport equation suitable for generalized use in Stirling machine simulation is developed. The equation is evaluated by including it in a simulation of an incompressible oscillating flow apparatus designed to demonstrate the effect of flow oscillations on the enhancement of thermal diffusion. Numerical false diffusion is found to be a major factor inhibiting validation of the simulation predictions with experimental and closed-form analytic data. A generalized false diffusion correction algorithm is developed which allows the numerical results to match their analytic counterparts. Under these conditions, the simulation yields entropy predictions which satisfy Clausius' inequality.
Symmetrization of conservation laws with entropy for high-temperature hypersonic computations
NASA Technical Reports Server (NTRS)
Chalot, F.; Hughes, T. J. R.; Shakib, F.
1990-01-01
Results of Hughes, France, and Mallet are generalized to conservation law systems taking into account high-temperature effects. Symmetric forms of different equation sets are derived in terms of entropy variables. First, the case of a general divariant gas is studied; it can be specialized to the usual Navier-Stokes equations, as well as to situations where the gas is vibrationally excited, and undergoes equilibrium chemical reactions. The case of gas in thermochemical nonequilibrium is considered next. Transport phenomena, and in particular mass diffusion, are examined in the framework of symmetric advective-diffusive systems.
A multiresolution halftoning algorithm for progressive display
NASA Astrophysics Data System (ADS)
Mukherjee, Mithun; Sharma, Gaurav
2005-01-01
We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
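The baseline the paper modifies is classic single-resolution error diffusion (Floyd-Steinberg): quantize each pixel to a display level and push the quantization error onto unprocessed neighbors so that local average tone is preserved. The sketch below shows that baseline at 1 bit per pixel, not the paper's multi-resolution, multi-level progressive variant.

```python
import numpy as np

def error_diffusion(img):
    """Floyd-Steinberg error diffusion of a [0,1] grayscale image to 1 bit."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0     # quantize to nearest level
            out[y, x] = new
            err = old - new                      # diffuse error forward
            if x+1 < w:             f[y, x+1]   += err * 7/16
            if y+1 < h and x > 0:   f[y+1, x-1] += err * 3/16
            if y+1 < h:             f[y+1, x]   += err * 5/16
            if y+1 < h and x+1 < w: f[y+1, x+1] += err * 1/16
    return out

gray = np.full((32, 32), 0.25)     # flat 25% gray patch
half = error_diffusion(gray)       # binary halftone with ~25% pixels on
```

Because the error is never discarded (except at image borders), the halftone's mean stays close to the input tone, which is the property the progressive in-place buffer updates must maintain across resolutions.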
Comparison of liquid-state anomalies in Stillinger-Weber models of water, silicon, and germanium
NASA Astrophysics Data System (ADS)
Dhabal, Debdas; Chakravarty, Charusita; Molinero, Valeria; Kashyap, Hemant K.
2016-12-01
We use molecular dynamics simulations to compare and contrast the liquid-state anomalies in the Stillinger-Weber models of monatomic water (mW), silicon (Si), and germanium (Ge) over a fairly wide range of temperatures and densities. The relationships between structure, entropy, and mobility, as well as the extent of the regions of anomalous behavior, are discussed as a function of the degree of tetrahedrality. We map out the cascade of density, structural, pair entropy, excess entropy, viscosity, and diffusivity anomalies for these three liquids. Among the three liquids studied here, only mW displays anomalies in the thermal conductivity, and this anomaly is evident only at very low temperatures. Diffusivity and viscosity, on the other hand, show pronounced anomalous regions for the three liquids. The temperature of maximum density of the three liquids shows re-entrant behavior consistent with either singularity-free or liquid-liquid critical point scenarios proposed to explain thermodynamic anomalies. The order-map, which shows the evolution of translational versus tetrahedral order in liquids, is different for Ge than for Si and mW. We find that although the monatomic water reproduces several thermodynamic and dynamic properties of rigid-body water models (e.g., SPC/E, TIP4P/2005), its sequence of anomalies follows, the same as Si and Ge, the silica-like hierarchy: the region of dynamic (diffusivity and viscosity) anomalies encloses the region of structural anomalies, which in turn encloses the region of density anomaly. The hierarchy of the anomalies based on excess entropy and Rosenfeld scaling, on the other hand, reverses the order of the structural and dynamic anomalies, i.e., predicts that the three Stillinger-Weber liquids follow a water-like hierarchy of anomalies. 
We investigate the scaling of diffusivity, viscosity, and thermal conductivity with the excess entropy of the liquid and find that for dynamical properties that present anomalies there is no universal scaling of the reduced property with excess entropy for the whole range of temperatures and densities. Instead, Rosenfeld's scaling holds for all the three liquids at high densities and high temperatures, although deviations from simple exponential dependence are observed for diffusivity and viscosity at lower temperatures and intermediate densities. The slope of the scaling of transport properties obtained for Ge is comparable to that obtained for simple liquids, suggesting that this low tetrahedrality liquid, although it stabilizes a diamond crystal, is already close to simple liquid behavior for certain properties.
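Rosenfeld's excess-entropy scaling discussed above posits an exponential relation between a reduced transport coefficient and the excess entropy per particle, D* ≈ A exp(α s_ex). A minimal sketch of how the scaling parameters are extracted by a linear fit in log space; the data below are illustrative numbers, not simulation results:

```python
import numpy as np

# Rosenfeld excess-entropy scaling: D* ~ A * exp(alpha * s_ex), where
# D* = D * rho**(1/3) * (m / (kB*T))**(1/2) is the reduced diffusivity
# and s_ex = S_ex / (N*kB) is the excess entropy per particle (negative).
# Synthetic "measured" values standing in for simulation data:
s_ex = np.array([-1.2, -1.6, -2.0, -2.4, -2.8])   # excess entropy per particle
D_red = 0.6 * np.exp(0.8 * s_ex)                  # reduced diffusivity

# A linear fit of ln(D*) against s_ex recovers the scaling parameters;
# deviations from this line at low T signal breakdown of the scaling.
alpha, lnA = np.polyfit(s_ex, np.log(D_red), 1)
print(round(alpha, 3), round(np.exp(lnA), 3))
```

In practice the fit would be restricted to the high-density, high-temperature state points where, as the abstract notes, the scaling holds.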
Picosecond to nanosecond dynamics provide a source of conformational entropy for protein folding.
Stadler, Andreas M; Demmel, Franz; Ollivier, Jacques; Seydel, Tilo
2016-08-03
Myoglobin can be trapped in fully folded structures, partially folded molten globules, and unfolded states under stable equilibrium conditions. Here, we report an experimental study on the conformational dynamics of different folded conformational states of apo- and holomyoglobin in solution. Global protein diffusion and internal molecular motions were probed by neutron time-of-flight and neutron backscattering spectroscopy on the picosecond and nanosecond time scales. Global protein diffusion was found to depend on the α-helical content of the protein, suggesting that charges on the macromolecule increase the short-time diffusion of the protein. With regard to the molten globules, a gel-like phase due to protein entanglement and interactions with neighbouring macromolecules was evident from a reduction of the global diffusion coefficients on the nanosecond time scale. Diffusion coefficients, residence and relaxation times of internal protein dynamics, and root mean square displacements of localised internal motions were determined for the investigated structural states. The difference in conformational entropy ΔSconf of the protein between the unfolded and the partially or fully folded conformations was extracted from the measured root mean square displacements. Using thermodynamic parameters from the literature and the experimentally determined ΔSconf values, we could identify the entropic contribution of the hydration shell ΔShydr for the different folded states. Our results point out the relevance of the conformational entropy of the protein and of the hydration shell for the stability and folding of myoglobin.
NASA Astrophysics Data System (ADS)
Sangireddy, H.; Passalacqua, P.; Stark, C. P.
2013-12-01
Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability density functions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We demonstrate this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges, within the GeoNet framework developed by Passalacqua et al. [2010], and compute pdfs for hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state.
We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic processes. Also, we explore the variability in hillslope length scales as a function of hillslope diffusivity coefficients and critical shear stress in natural landscapes and show that we can infer signatures of dominant geomorphic processes by analyzing characteristic topographic length scales present in topography. References: Beven, K., & Kirkby, M. J. (1979). A physically based variable contributing area model of basin hydrology. Hydrological Sciences Bulletin, 24, 43-69. Howard, A. D. (1994). A detachment-limited model of drainage basin evolution. Water Resources Research, 30(7), 2261-2285. Passalacqua, P., Do Trung, T., Foufoula-Georgiou, E., Sapiro, G., & Dietrich, W. E. (2010). A geometric framework for channel network extraction from lidar: Nonlinear diffusion and geodesic paths. Journal of Geophysical Research: Earth Surface (2003-2012), 115(F1). Sangireddy, H., Passalacqua, P., & Stark, C. P. (2012). Multi-resolution estimation of lidar-DTM surface flow metrics to identify characteristic topographic length scales. EP13C-0859, AGU Fall Meeting 2012. Stark, C. P., & Stark, G. J. (2001). A channelization model of landscape evolution. American Journal of Science, 301(4-5), 486-512. Tucker, G. E., Catani, F., Rinaldo, A., & Bras, R. L. (2001). Statistical analysis of drainage density from digital terrain data. Geomorphology, 36(3), 187-202.
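The topoindex of Beven and Kirkby [1979] used above is ln(a / tan β), where a is the specific contributing area and tan β the local slope. A minimal sketch of computing it and its inter-quartile range, with synthetic stand-in values in place of DTM-derived fields:

```python
import numpy as np

# Topoindex (topographic wetness index): TI = ln(a / tan(beta)).
# The arrays below are illustrative stand-ins for values that would be
# derived from a lidar DTM by flow routing and slope estimation.
area = np.array([10.0, 50.0, 200.0, 1000.0])   # specific catchment area
slope = np.array([0.30, 0.15, 0.08, 0.02])     # tan(beta), dimensionless

topoindex = np.log(area / slope)

# Inter-quartile range, the spread measure compared across resolutions
# in the multi-resolution analysis described above.
q1, q3 = np.percentile(topoindex, [25, 75])
print(topoindex.round(2), round(q3 - q1, 2))
```

Large values (high contributing area, gentle slope) mark valley bottoms; small values mark steep ridges, which is why scale breaks in the topoindex pdf track process transitions.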
NASA Astrophysics Data System (ADS)
Virtanen, P.; Vischi, F.; Strambini, E.; Carrega, M.; Giazotto, F.
2017-12-01
We discuss the quasiparticle entropy and heat capacity of a dirty superconductor/normal metal/superconductor junction. In the case of short junctions, the inverse proximity effect extending in the superconducting banks plays a crucial role in determining the thermodynamic quantities. In this case, commonly used approximations can violate thermodynamic relations between supercurrent and quasiparticle entropy. We provide analytical and numerical results as a function of different geometrical parameters. Quantitative estimates for the heat capacity can be relevant for the design of caloritronic devices or radiation sensor applications.
Large Eddy Simulation of Entropy Generation in a Turbulent Mixing Layer
NASA Astrophysics Data System (ADS)
Sheikhi, Reza H.; Safari, Mehdi; Hadi, Fatemeh
2013-11-01
The entropy transport equation is considered in large eddy simulation (LES) of turbulent flows. The irreversible entropy generation in this equation provides a more general description of subgrid scale (SGS) dissipation due to heat conduction, mass diffusion and viscosity effects. A new methodology is developed, termed the entropy filtered density function (En-FDF), to account for all individual entropy generation effects in turbulent flows. The En-FDF represents the joint probability density function of entropy, frequency, velocity and scalar fields within the SGS. An exact transport equation is developed for the En-FDF, which is modeled by a system of stochastic differential equations, incorporating the second law of thermodynamics. The modeled En-FDF transport equation is solved by a Lagrangian Monte Carlo method. The methodology is employed to simulate a turbulent mixing layer involving transport of passive scalars and entropy. Various modes of entropy generation are obtained from the En-FDF and analyzed. Predictions are assessed against data generated by direct numerical simulation (DNS). The En-FDF predictions are in good agreement with the DNS data.
Estimation of absolute solvent and solvation shell entropies via permutation reduction
NASA Astrophysics Data System (ADS)
Reinhard, Friedemann; Grubmüller, Helmut
2007-01-01
Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N^3) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
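The permutation reduction described above relabels identical solvent molecules in each frame so that every particle stays close to a reference position, which is exactly a linear assignment problem. A minimal sketch using SciPy's Hungarian-type solver on illustrative random data (not an actual solvent trajectory):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Reference configuration of N identical solvent "molecules"; a later
# frame is the same set of particles with labels scrambled and a small
# diffusive displacement added. Purely illustrative data.
N = 20
ref = rng.uniform(0.0, 10.0, size=(N, 3))
perm = rng.permutation(N)
frame = ref[perm] + 0.05 * rng.normal(size=(N, 3))

# Permutation reduction: solve the linear assignment problem on the
# matrix of squared distances (O(N^3) Hungarian-type algorithm) to
# find the relabeling that minimizes total squared displacement.
cost = cdist(frame, ref, metric="sqeuclidean")
row, col = linear_sum_assignment(cost)
relabeled = np.empty_like(frame)
relabeled[col] = frame[row]

# After relabeling, every particle sits near its reference position,
# so quasiharmonic or PCA entropy estimators become applicable.
print(np.max(np.linalg.norm(relabeled - ref, axis=1)))
```

The key effect is that the permuted trajectory no longer wanders through the factorially many equivalent labelings of the solvent, which is what defeats direct entropy estimation.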
SHORT-TERM SOLAR FLARE PREDICTION USING MULTIRESOLUTION PREDICTORS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu Daren; Huang Xin; Hu Qinghua
2010-01-20
Multiresolution predictors of solar flares are constructed by a wavelet transform and sequential feature extraction method. Three predictors-the maximum horizontal gradient, the length of the neutral line, and the number of singular points-are extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. A maximal overlap discrete wavelet transform is used to decompose the sequence of predictors into four frequency bands. In each band, four sequential features-the maximum, the mean, the standard deviation, and the root mean square-are extracted. The multiresolution predictors in the low-frequency band reflect trends in the evolution of newly emerging fluxes. The multiresolution predictors in the high-frequency band reflect the changing rates in emerging flux regions. The variation of emerging fluxes is decoupled by the wavelet transform in different frequency bands. The information amount of these multiresolution predictors is evaluated by the information gain ratio. It is found that the multiresolution predictors in the lowest and highest frequency bands contain the most information. Based on these predictors, a C4.5 decision tree algorithm is used to build the short-term solar flare prediction model. It is found that the performance of the short-term solar flare prediction model based on the multiresolution predictors is greatly improved.
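The band decomposition plus per-band feature extraction described above can be sketched compactly. As a stand-in for the maximal overlap discrete wavelet transform, the sketch uses a simple undecimated (à trous) Haar decomposition, and the predictor sequence is synthetic rather than real magnetogram data:

```python
import numpy as np

def atrous_haar(x, levels):
    """Redundant (undecimated) Haar decomposition, a simple stand-in for
    the maximal overlap discrete wavelet transform. Returns `levels`
    detail bands plus the final smooth approximation."""
    approx = x.astype(float)
    bands = []
    for j in range(levels):
        smooth = 0.5 * (approx + np.roll(approx, 2 ** j))  # circular average
        bands.append(approx - smooth)                      # detail at scale 2^j
        approx = smooth
    bands.append(approx)
    return bands

def band_features(band):
    """The four sequential features extracted in each frequency band."""
    return {"max": band.max(), "mean": band.mean(),
            "std": band.std(), "rms": np.sqrt(np.mean(band ** 2))}

# Synthetic predictor sequence (e.g. daily maximum horizontal gradient);
# illustrative only, not derived from actual magnetograms.
t = np.arange(64)
series = np.sin(2 * np.pi * t / 32) + 0.3 * np.sin(2 * np.pi * t / 4)

bands = atrous_haar(series, levels=4)
features = [band_features(b) for b in bands]
print(len(bands), sorted(features[0]))
```

The resulting per-band feature vectors are what a classifier such as a C4.5 decision tree would consume; the telescoping construction guarantees the bands sum back to the original series.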
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656
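The maximum entropy analytic continuation itself is beyond a short sketch, but the relation it targets is the standard Green-Kubo expression for the self-diffusion constant in terms of the real-time velocity autocorrelation function. A minimal numerical illustration, with an assumed exponential VACF in reduced units rather than data continued from imaginary time:

```python
import numpy as np

# Green-Kubo relation underlying the method:
#   D = (1/3) * integral_0^inf <v(0) . v(t)> dt.
# The real-time VACF is modeled here as a simple exponential decay
# (illustrative assumption; the paper obtains the real-time function by
# maximum entropy analytic continuation of imaginary-time data).
kT_over_m = 1.0        # kB*T/m in reduced units (assumed)
tau = 0.5              # velocity correlation time (assumed)
t = np.linspace(0.0, 20.0, 4001)
vacf = 3.0 * kT_over_m * np.exp(-t / tau)   # <v(0).v(t)> in 3D

# Trapezoidal integration of the VACF; the analytic value is kT*tau/m.
D = float(np.sum((vacf[1:] + vacf[:-1]) * np.diff(t)) / 2.0) / 3.0
print(D)
```

With a realistic VACF from simulation the integrand oscillates and decays, but the quadrature step is identical.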
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobotka, L. G. (Department of Physics, Washington University, St. Louis, Missouri 63130); Charity, R. J.
2006-01-15
The caloric curve for mononuclear configurations is studied with a model that allows for both increased surface diffuseness and self-similar expansion. The evolution of the effective mass with density and excitation is included in a schematic fashion. The entropies, extracted in a local-density approximation, confirm that nuclei possess a soft mode that is predominantly a surface expansion. We also find that the mononuclear caloric curve (temperature versus excitation energy) exhibits a plateau. Thus a plateau should be the expectation with or without a multifragmentation-like phase transition. This conclusion is relevant only for reactions that populate the mononuclear region of phase space.
Nonlocal approach to nonequilibrium thermodynamics and nonlocal heat diffusion processes
NASA Astrophysics Data System (ADS)
El-Nabulsi, Rami Ahmad
2018-04-01
We study some aspects of nonequilibrium thermodynamics and heat diffusion processes based on Suykens's nonlocal-in-time kinetic energy approach recently introduced in the literature. A number of properties and insights are obtained, in particular the emergence of oscillating entropy and of nonlocal diffusion equations relevant to a number of physical and engineering problems. Several features are derived and discussed in detail.
NASA Astrophysics Data System (ADS)
Fellner, Klemens; Tang, Bao Quoc
2018-06-01
The convergence to equilibrium for renormalised solutions to nonlinear reaction-diffusion systems is studied. The considered reaction-diffusion systems arise from chemical reaction networks with mass action kinetics and satisfy the complex balanced condition. By applying the so-called entropy method, we show that if the system does not have boundary equilibria, i.e. equilibrium states lying on the boundary of R_+^N, then any renormalised solution converges exponentially to the complex balanced equilibrium at a rate that can be computed explicitly up to a finite-dimensional inequality. This inequality is proven via a contradiction argument, and thus not constructively. An explicit method of proof, however, is provided for a specific application modelling a reversible enzyme reaction, by exploiting the specific structure of the conservation laws. Our approach is also useful for studying the trend to equilibrium for systems possessing boundary equilibria. More precisely, to show the convergence to equilibrium for systems with boundary equilibria, we establish a sufficient condition in terms of a modified finite-dimensional inequality along trajectories of the system. By assuming this condition, which roughly means that the system produces too much entropy to stay close to a boundary equilibrium for infinite time, the entropy method shows exponential convergence to equilibrium for renormalised solutions to complex balanced systems with boundary equilibria.
An artificial nonlinear diffusivity method for supersonic reacting flows with shocks
NASA Astrophysics Data System (ADS)
Fiorina, B.; Lele, S. K.
2007-03-01
A computational approach for modeling interactions between shock waves, contact discontinuities and reaction zones with a high-order compact scheme is investigated. To prevent the formation of spurious oscillations around shocks, artificial nonlinear viscosity [A.W. Cook, W.H. Cabot, A high-wavenumber viscosity for high resolution numerical method, J. Comput. Phys. 195 (2004) 594-601] based on a high-order derivative of the strain rate tensor is used. To capture temperature and species discontinuities, a nonlinear diffusivity based on the entropy gradient is added. It is shown that the damping of 'wiggles' is controlled by the model constants and is largely independent of the mesh size and the shock strength. The same holds for the numerical shock thickness and allows a determination of the L2 error. In the shock tube problem, with fluids of different initial entropy separated by the diaphragm, an artificial diffusivity is required to accurately capture the contact surface. Finally, the method is applied to a shock wave propagating into a medium with non-uniform density/entropy and to a CJ detonation wave. A multi-dimensional formulation of the model is presented and is illustrated by a 2D oblique wave reflection from an inviscid wall, by a 2D supersonic blunt body flow and by a Mach reflection problem.
Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Safari, Mehdi
Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid scale (SGS) entropy flux and the entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, entitled the entropy FDF (En-FDF), is developed and utilized in the form of the joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in a closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all the unclosed filtered moments. This methodology is applied for LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with the data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, sources of irreversibility are predicted and analyzed.
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
2013-01-01
A new variational principle of steady states is found by introducing an integrated type of energy dissipation (or entropy production) instead of instantaneous energy dissipation. This new principle is valid both in linear and nonlinear transport phenomena. Prigogine's dream has now been realized by this new general principle of minimum "integrated" entropy production (or energy dissipation). This new principle does not contradict the Onsager-Prigogine principle of minimum instantaneous entropy production in the linear regime, but it is conceptually different from the latter, which does not hold in the nonlinear regime. Applications of this theory to electric conduction, heat conduction, particle diffusion and chemical reactions are presented. The irreversibility (or positive entropy production) and the long time tail problem in Kubo's formula are also discussed in the Introduction and the last section. This constitutes a complementary explanation of our theory of entropy production given in the previous papers (M. Suzuki, Physica A 390 (2011) 1904 and M. Suzuki, Physica A 391 (2012) 1074) and provided the motivation for the present investigation of the variational principle.
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
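Diffusion entropy analysis, one of the methods named above, estimates the Shannon entropy S(t) of the displacement pdf at increasing window sizes; for Gaussian diffusion S(t) grows as 0.5·ln t, while Lévy-like diffusion yields a larger slope. A minimal sketch on synthetic Gaussian increments (not actual eye-movement data):

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_entropy(increments, window_sizes, bin_width=0.25):
    """Shannon entropy of the diffusion pdf at each window size.
    A fixed bin width across scales is required for the log scaling
    S(t) = const + delta*ln(t) to hold; delta = 0.5 for Gaussian
    diffusion, delta > 0.5 for Levy-like diffusion."""
    walk = np.cumsum(increments)
    entropies = []
    for w in window_sizes:
        disp = walk[w:] - walk[:-w]                 # displacements over windows
        edges = np.arange(disp.min(), disp.max() + bin_width, bin_width)
        p, _ = np.histogram(disp, bins=edges)
        p = p[p > 0] / p.sum()
        entropies.append(-np.sum(p * np.log(p)))
    return np.array(entropies)

windows = np.array([4, 8, 16, 32, 64, 128])
S = diffusion_entropy(rng.normal(size=200_000), windows)

# Scaling exponent delta from a fit of S against ln(window size).
delta = np.polyfit(np.log(windows), S, 1)[0]
print(round(delta, 2))
```

Applied to heavy-tailed (Lévy) increments, the same procedure would return a slope above 0.5, which is the signature the study reports for eye movements.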
NASA Astrophysics Data System (ADS)
Sadeghi, Pegah; Safavinejad, Ali
2017-11-01
Radiative entropy generation through a gray absorbing, emitting, and scattering planar medium at radiative equilibrium with diffuse-gray walls is investigated. The radiative transfer equation and the radiative entropy generation equations are solved using the discrete ordinates method. Components of the radiative entropy generation are considered for two different boundary conditions: two walls at a prescribed temperature, and mixed boundary conditions, in which one wall is at a prescribed temperature and the other is at a prescribed heat flux. The effect of wall emissivities, optical thickness, single scattering albedo, and anisotropic-scattering factor on the entropy generation is investigated in detail. The results reveal that entropy generation in the system mainly arises from irreversible radiative transfer at the wall with the lower temperature. The total entropy generation rate for the system with prescribed temperatures at the walls remarkably increases as wall emissivity increases; conversely, for the system with mixed boundary conditions, the total entropy generation rate slightly decreases. Furthermore, as the optical thickness increases, the total entropy generation rate remarkably decreases for the system with prescribed temperatures at the walls; nevertheless, for the system with mixed boundary conditions, the total entropy generation rate increases. The variation of the single scattering albedo does not considerably affect the total entropy generation rate. This parametric analysis demonstrates that the optical thickness and the wall emissivities have a significant effect on the entropy generation in a system at radiative equilibrium. Identifying the parameters that most strongly affect radiative entropy generation provides an opportunity to optimize the design or increase the overall performance and efficiency of systems at radiative equilibrium by applying entropy minimization techniques.
Diffusion models for innovation: s-curves, networks, power laws, catastrophes, and entropy.
Jacobsen, Joseph J; Guastello, Stephen J
2011-04-01
This article considers which models for the diffusion of innovation would be most relevant to the dynamics of early 21st century technologies. The article presents an overview of diffusion models and examines the adoption S-curve, network theories, difference models, influence models, geographical models, a cusp catastrophe model, and self-organizing dynamics that emanate from principles of network configuration and principles of heat diffusion. The diffusion dynamics that are relevant to information technologies and energy-efficient technologies are compared. Finally, principles of nonlinear dynamics for innovation diffusion that could be used to rehabilitate the global economic situation are discussed.
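The adoption S-curve examined above is, in its simplest form, the logistic growth model dN/dt = rN(1 - N/K). A minimal sketch of its closed-form solution, with illustrative parameter values:

```python
import numpy as np

# Logistic ("S-curve") adoption model: dN/dt = r * N * (1 - N/K),
# with growth rate r, market size K, and initial adopters N0.
# Parameter values are illustrative, not fitted to any dataset.
r, K, N0 = 0.5, 1000.0, 10.0
t = np.linspace(0.0, 30.0, 301)

# Closed-form solution of the logistic equation.
N = K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

# Adoption starts near N0, saturates at K, and inflects at N = K/2,
# which is where the adoption *rate* peaks.
print(round(N[0]), round(N[-1]))
```

The network, influence, and catastrophe models surveyed in the article can be read as progressively relaxing the assumptions (homogeneous mixing, fixed K, smooth dynamics) built into this baseline.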
On the Consequences of Clausius-Duhem Inequality for Electrolyte Solutions
NASA Astrophysics Data System (ADS)
Reis, Martina; Bassi, Adalberto Bono Maurizio Sacchi
2014-03-01
Based on the fundamentals of thermostatics, non-equilibrium thermodynamics theories frequently employ an entropy inequality where the entropy flux is collinear with the heat flux and the entropy supply is proportional to the energy supply. Although this assumption is suitable for many material bodies, e.g. heat-conducting viscous fluids, there is a class of materials for which these assumptions are not valid. By assuming that the entropy flux and the entropy supply are constitutive quantities, in this work it is demonstrated that the entropy flux for a reacting ionic mixture of non-volatile solutes presents a non-collinear term due to the diffusive fluxes. The consequences of the collinearity between the entropy flux and the heat flux, as well as of the proportionality of the entropy supply and the energy supply, for the stability of chemical systems are also investigated. Furthermore, by considering an electrolyte solution of non-volatile solutes in phase equilibrium with water vapor, and the constitutive nature of the entropy flux, the stability of a vapor-electrolyte solution interface is studied. Although this work deals only with electrolyte solutions, the results presented can easily be extended to more complex chemically reacting systems. The first author acknowledges financial support from CNPq (National Counsel of Technological and Scientific Development).
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barletti, Luigi, E-mail: luigi.barletti@unifi.it
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
Wavelet bases on the L-shaped domain
NASA Astrophysics Data System (ADS)
Jouini, Abdellatif; Lemarié-Rieusset, Pierre Gilles
2013-07-01
We present in this paper two elementary constructions of multiresolution analyses on the L-shaped domain D. In the first one, we describe a direct method to define an orthonormal multiresolution analysis. In the second one, we use the decomposition method for constructing a biorthogonal multiresolution analysis. These analyses are adapted to the study of the Sobolev spaces H^s(D) (s ∈ N).
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models, which combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
Scaling of the entropy budget with surface temperature in radiative-convective equilibrium
NASA Astrophysics Data System (ADS)
Singh, Martin S.; O'Gorman, Paul A.
2016-09-01
The entropy budget of the atmosphere is examined in simulations of radiative-convective equilibrium with a cloud-system resolving model over a wide range of surface temperatures from 281 to 311 K. Irreversible phase changes and the diffusion of water vapor account for more than half of the irreversible entropy production within the atmosphere, even in the coldest simulation. As the surface temperature is increased, the atmospheric radiative cooling rate increases, driving a greater entropy sink that must be matched by greater irreversible entropy production. The entropy production resulting from irreversible moist processes increases at a similar fractional rate as the entropy sink and at a lower rate than that implied by Clausius-Clapeyron scaling. This allows the entropy production from frictional drag on hydrometeors and on the atmospheric flow to also increase with warming, in contrast to recent results for simulations with global climate models in which the work output decreases with warming. A set of approximate scaling relations is introduced for the terms in the entropy budget as the surface temperature is varied, and many of the terms are found to scale with the mean surface precipitation rate. The entropy budget provides some insight into changes in frictional dissipation in response to warming or changes in model resolution, but it is argued that frictional dissipation is not closely linked to other measures of convective vigor.
Thermalization of entanglement.
Zhang, Liangsheng; Kim, Hyungwon; Huse, David A
2015-06-01
We explore the dynamics of the entanglement entropy near equilibrium in highly entangled pure states of two quantum-chaotic spin chains undergoing unitary time evolution. We examine the relaxation to equilibrium from initial states with either less or more entanglement entropy than the equilibrium value, as well as the dynamics of the spontaneous fluctuations of the entanglement that occur in equilibrium. For the spin chain with a time-independent Hamiltonian and thus an extensive conserved energy, we find slow relaxation of the entanglement entropy near equilibration. Such slow relaxation is absent in a Floquet spin chain with a Hamiltonian that is periodic in time and thus has no local conservation law. Therefore, we argue that slow diffusive energy transport is responsible for the slow relaxation of the entanglement entropy in the Hamiltonian system.
Automated diagnosis of interstitial lung diseases and emphysema in MDCT imaging
NASA Astrophysics Data System (ADS)
Fetita, Catalin; Chang Chien, Kuang-Che; Brillet, Pierre-Yves; Prêteux, Françoise
2007-09-01
Diffuse lung diseases (DLD) include a heterogeneous group of non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. Characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for the detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns locally contrasting with their neighborhood and of size smaller than the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis-reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, among the same class of diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyabe, Kanji; Guiochon, Georges A
2005-06-01
Surface diffusion on adsorbents made of silica gels bonded to C1, C4, C8, and C18 alkyl ligands was studied in reversed-phase liquid chromatography (RPLC) from the viewpoints of two extrathermodynamic relationships: enthalpy-entropy compensation (EEC) and the linear free-energy relationship (LFER). First, the values of the surface diffusion coefficient (Ds), normalized by the density of the alkyl ligands, were analyzed with the modified Arrhenius equation, following the four approaches proposed in earlier research. This showed that an actual EEC resulting from substantial physicochemical effects occurs for surface diffusion and suggested a mechanistic similarity of molecular migration by surface diffusion, irrespective of the alkyl chain length. Second, a new model based on EEC was derived to explain the LFER between the logarithms of Ds measured under different RPLC conditions. This showed that the changes of free energy, enthalpy, and entropy of surface diffusion are linearly correlated with the carbon number in the alkyl ligands of the bonded phases and that the contribution of the C18 ligand to the changes of the thermodynamic parameters corresponds to that of the C10 ligand. The new LFER model correlates the slope and intercept of the LFER to the compensation temperatures derived from the EEC analyses and to several parameters characterizing the molecular contributions to the changes in enthalpy and entropy. Finally, the new model was used to estimate Ds under various RPLC conditions. The values of Ds estimated from only two original experimental Ds data points were in agreement with the corresponding experimental Ds values, with relative errors of about 20%, irrespective of the RPLC conditions.
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bremer, P-T; Edelsbrunner, H; Hamann, B
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
Entropy production of active particles and for particles in active baths
NASA Astrophysics Data System (ADS)
Pietzonka, Patrick; Seifert, Udo
2018-01-01
Entropy production of an active particle in an external potential is identified through a thermodynamically consistent minimal lattice model that includes the chemical reaction providing the propulsion and ordinary translational noise. In the continuum limit, a unique expression follows, comprising a direct contribution from the active process and an indirect contribution from ordinary diffusive motion. From the corresponding Langevin equation, this physical entropy production cannot be inferred through the conventional, yet here ambiguous, comparison of forward and time-reversed trajectories. Generalizations to several interacting active particles and passive particles in a bath of active ones are presented explicitly, further ones are briefly indicated.
Entropy in sound and vibration: towards a new paradigm.
Le Bot, A
2017-01-01
This paper describes a discussion on the method and the status of a statistical theory of sound and vibration, called statistical energy analysis (SEA). SEA is a simple theory of sound and vibration in elastic structures that applies when the vibrational energy is diffusely distributed. We show that SEA is a thermodynamical theory of sound and vibration, based on a law of exchange of energy analogous to the Clausius principle. We further investigate the notion of entropy in this context and discuss its meaning. We show that entropy is a measure of information lost in the passage from the classical theory of sound and vibration and SEA, its thermodynamical counterpart.
Compression of the Global Land 1-km AVHRR dataset
Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.
1996-01-01
Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first-order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
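The masked-region scheme above lends itself to a compact illustration: in a quadtree, uniform blocks collapse to single leaves, which is why an image that is 80 per cent mask compresses so well. The following is a minimal sketch under that idea; the function names and block layout are illustrative, not the authors' implementation.

```python
def quadtree_encode(mask, x=0, y=0, size=None):
    """Recursively encode a square binary mask (list of lists of 0/1).
    Returns 0 or 1 for a uniform block, else a tuple of four child
    encodings in (NW, NE, SW, SE) order."""
    if size is None:
        size = len(mask)
    first = mask[y][x]
    if all(mask[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first                                  # uniform block: one leaf
    h = size // 2
    return (quadtree_encode(mask, x,     y,     h),   # NW
            quadtree_encode(mask, x + h, y,     h),   # NE
            quadtree_encode(mask, x,     y + h, h),   # SW
            quadtree_encode(mask, x + h, y + h, h))   # SE

def count_leaves(node):
    """Number of stored leaves, a proxy for encoded size."""
    if isinstance(node, tuple):
        return sum(count_leaves(c) for c in node)
    return 1

# A mostly-masked 4x4 block: 16 pixels collapse to 7 quadtree leaves.
mask = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
tree = quadtree_encode(mask)
```

The savings grow with the uniformity of the mask, which matches the reported 0.143 per cent figure for the almost entirely uniform non-land regions.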
Adaptive multi-resolution Modularity for detecting communities in networks
NASA Astrophysics Data System (ADS)
Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He
2018-02-01
Community structure is a common topological property of complex networks, which attracted much attention from various fields. Optimizing quality functions for community structures is a kind of popular strategy for community detection, such as Modularity optimization. Here, we introduce a general definition of Modularity, by which several classical (multi-resolution) Modularity can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of different Modularity. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of the multi-resolution Modularity in community detection. The adaptive Modularity, as a kind of multi-resolution method, can naturally solve the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnecting of communities and delay the breakup of communities in heterogeneous networks; and thus it is expected to generate the stable community structures in networks more effectively and have stronger tolerance against the second-type limit of Modularity.
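A common form of multi-resolution Modularity, which the general definition above subsumes, introduces a resolution parameter gamma into the Newman-Girvan quality function: Q(gamma) = sum over communities of [L_c/m - gamma*(d_c/2m)^2], with gamma = 1 recovering classical Modularity. A minimal sketch (illustrative names, not the paper's exact formulation):

```python
def modularity(edges, communities, gamma=1.0):
    """Multi-resolution Modularity Q(gamma) of an undirected graph.
    edges: list of (u, v) pairs; communities: list of node lists."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        nodes = set(comm)
        l_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(deg.get(n, 0) for n in nodes)
        q += l_c / m - gamma * (d_c / (2.0 * m)) ** 2
    return q

# Two triangles joined by a bridge edge: at gamma = 1 the natural
# two-community split scores higher than the all-in-one partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split = modularity(edges, [[0, 1, 2], [3, 4, 5]])
lumped = modularity(edges, [[0, 1, 2, 3, 4, 5]])
```

Raising gamma favors smaller communities and lowering it favors larger ones, which is how scanning gamma probes community structure at different scales.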
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
NASA Astrophysics Data System (ADS)
Chen, Lulin; Chen, Chang W.; Parker, Kevin J.
1997-04-01
Mammographic images are often of relatively low contrast and poor sharpness with non-stationary background or clutter and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray scale morphological filtering followed by multiresolution fusion and present a unified general filtering form called the local operating transformation for whitening filtering and adaptive thresholding. The gray scale morphological filters are used to remove all large areas that are considered as non-stationary background or clutter variations, i.e., to prewhiten images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter which is directly related through the wavelet transforms to multiresolution analysis is exploited for microcalcification feature detection. At the multiresolution fusion stage, the region growing techniques are used in each resolution level. The parent-child relations between resolution levels are adopted to make final detection decision. FROC is computed from test on the Nijmegen database.
Multifractal diffusion entropy analysis: Optimal bin width of probability histograms
NASA Astrophysics Data System (ADS)
Jizba, Petr; Korbel, Jan
2014-11-01
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin-width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of Rényi’s entropy and the mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on a scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the method proposed we compare the multifractal δ-spectrum for various bin-widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. Connection between the δ-spectrum and Rényi’s q parameter is also discussed and elucidated on a simple example of multiscale time series.
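The core step of diffusion entropy analysis, whose bin-width sensitivity the method above addresses, is to histogram trajectory positions at a given time and compute the Shannon entropy S = -sum p ln p; for ordinary diffusion S(t) grows like delta*ln(t) with delta = 0.5. A minimal sketch on a simple random walk (illustrative, not the authors' estimator):

```python
import math
import random

def shannon_entropy(samples, bin_width):
    """Shannon entropy of a histogram of samples with the given bin width."""
    counts = {}
    for x in samples:
        b = math.floor(x / bin_width)
        counts[b] = counts.get(b, 0) + 1
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def walk_positions(t, realizations=2000):
    """Endpoints of `realizations` simple random walks of t +/-1 steps."""
    return [sum(random.choice((-1, 1)) for _ in range(t))
            for _ in range(realizations)]

random.seed(0)
s10 = shannon_entropy(walk_positions(10), bin_width=2.0)
s100 = shannon_entropy(walk_positions(100), bin_width=2.0)
# the displacement distribution widens with t, so entropy increases
```

The bin width enters directly in `shannon_entropy`; too-narrow bins inflate the estimate from sampling noise and too-wide bins wash out the scaling, which is the trade-off the optimal-width criterion quantifies.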
Excess Entropy Scaling Law for Diffusivity in Liquid Metals
Jakse, N.; Pasturel, A.
2016-01-01
Understanding how dynamic properties depend on the structure and thermodynamics of liquids is a long-standing open problem in condensed matter physics. A very simple approach is based on Dzugutov's scaling law, developed for model fluids, in which a universal (i.e. species-independent) relation connects the pair excess entropy of a liquid to its reduced diffusion coefficient. However, its application to “real” liquids remains uncertain because it is unclear whether the hard sphere (HS) reference fluid used to reduce the parameters can describe the complex interactions that occur in these liquids. Here we use ab initio molecular dynamics simulations to calculate both structural and dynamic properties at different temperatures for a wide series of liquid metals including Al, Au, Cu, Li, Ni, Ta, Ti, Zn as well as liquid Si and B. From this analysis, we demonstrate that the Dzugutov scheme can be applied successfully if a self-consistent method is used to determine the packing fraction of the hard sphere reference fluid, together with the Carnahan-Starling approach to express the excess entropy. PMID:26862002
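The two ingredients of the Dzugutov law can be sketched directly: the pair excess entropy obtained from a radial distribution function g(r), s2 = -2*pi*rho * Integral of [g ln g - g + 1] r^2 dr, and the exponential relation D* = A*exp(s2) for the reduced diffusion coefficient, with A = 0.049 the constant Dzugutov proposed for model fluids. The g(r) below is a toy profile, not simulation data:

```python
import math

def pair_excess_entropy(r, g, rho):
    """s2 from tabulated g(r) by trapezoidal integration."""
    def f(gi):
        # integrand factor g ln g - g + 1 (defined as 1 at g = 0)
        return (gi * math.log(gi) if gi > 0 else 0.0) - gi + 1.0
    total = 0.0
    for i in range(len(r) - 1):
        fa = f(g[i]) * r[i] ** 2
        fb = f(g[i + 1]) * r[i + 1] ** 2
        total += 0.5 * (fa + fb) * (r[i + 1] - r[i])
    return -2.0 * math.pi * rho * total

def dzugutov_reduced_diffusion(s2, a=0.049):
    """Dzugutov scaling estimate D* = A * exp(s2)."""
    return a * math.exp(s2)

# toy g(r): excluded core below r = 0.9, one broad peak, decay to 1
r = [0.05 * i for i in range(1, 200)]
g = [0.0 if x < 0.9 else 1.0 + 0.5 * math.exp(-(x - 1.1) ** 2 / 0.05)
     for x in r]
s2 = pair_excess_entropy(r, g, rho=0.8)
dstar = dzugutov_reduced_diffusion(s2)
```

Since the integrand g ln g - g + 1 is nonnegative, s2 is always negative, so the predicted reduced diffusivity falls as the liquid becomes more structured.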
Chang, Shou-Yi; Li, Chen-En; Huang, Yi-Chung; Hsu, Hsun-Feng; Yeh, Jien-Wei; Lin, Su-Jien
2014-01-01
We report multi-component high-entropy materials as extraordinarily robust diffusion barriers and clarify the highly suppressed interdiffusion kinetics in the multi-component materials from structural and thermodynamic perspectives. The failures of six alloy barriers with different numbers of elements, from unitary Ti to senary TiTaCrZrAlRu, against the interdiffusion of Cu and Si were characterized, and experimental results indicated that, with more elements incorporated, the failure temperature of the barriers increased from 550 to 900°C. The activation energy of Cu diffusion through the alloy barriers was determined to increase from 110 to 163 kJ/mole. Mechanistic analyses suggest that, structurally, severe lattice distortion strains and a high packing density caused by different atom sizes, and, thermodynamically, a strengthened cohesion provide a total increase of 55 kJ/mole in the activation energy of substitutional Cu diffusion, and are believed to be the dominant factors of suppressed interdiffusion kinetics through the multi-component barrier materials. PMID:24561911
Ding, Guangliang; Chen, Jieli; Chopp, Michael; Li, Lian; Yan, Tao; Davoodi-Bojd, Esmaeil; Li, Qingjiang; Davarani, Siamak Pn; Jiang, Quan
2017-01-01
Diffusion-related magnetic resonance imaging parametric maps may be employed to characterize white matter of brain. We hypothesize that entropy of diffusion anisotropy may be most effective for detecting therapeutic effects of bone marrow stromal cell treatment of ischemia in type 2 diabetes mellitus rats. Type 2 diabetes mellitus was induced in adult male Wistar rats. These rats were then subjected to 2 h of middle cerebral artery occlusion, and received bone marrow stromal cell (5 × 10^6, n = 8) or an equal volume of saline (n = 8) via tail vein injection at three days after middle cerebral artery occlusion. Magnetic resonance imaging was performed on day one and then weekly for five weeks post middle cerebral artery occlusion. The diffusion metrics complementarily permitted characterization of axons and axonal myelination. All six magnetic resonance imaging diffusion metrics, confirmed by histological measures, demonstrated that bone marrow stromal cell treatment significantly (p < 0.05) improved magnetic resonance imaging diffusion indices of white matter in type 2 diabetes mellitus rats after middle cerebral artery occlusion compared with the saline-treated rats. Superior to the fractional anisotropy metric that provided measures related to organization of neuronal fiber bundles, the entropy metric can also identify microstructures and low-density axonal fibers of cerebral tissue after stroke in type 2 diabetes mellitus rats. © The Author(s) 2015.
Free energy and entropy of a dipolar liquid by computer simulations
NASA Astrophysics Data System (ADS)
Palomar, Ricardo; Sesé, Gemma
2018-02-01
Thermodynamic properties for a system composed of dipolar molecules are computed. Free energy is evaluated by means of the thermodynamic integration technique, and it is also estimated by using a perturbation theory approach, in which every molecule is modeled as a hard sphere within a square well, with an electric dipole at its center. The hard sphere diameter, the range and depth of the well, and the dipole moment have been calculated from properties easily obtained in molecular dynamics simulations. Connection between entropy and dynamical properties is explored in the liquid and supercooled states by using instantaneous normal mode calculations. A model is proposed in order to analyze translation and rotation contributions to entropy separately. Both contributions decrease upon cooling, and a logarithmic correlation between excess entropy associated with translation and the corresponding proportion of imaginary frequency modes is encountered. Rosenfeld scaling law between reduced diffusion and excess entropy is tested, and the origin of its failure at low temperatures is investigated.
Zhao, Yong; Hong, Wen-Xue
2011-11-01
Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbolic entropy to identify near-infrared spectra of special quality eggs. The authors selected normal eggs, free range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured the near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregation approximation algorithm and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbolic entropy can serve as a new feature extraction method for near-infrared spectra.
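The feature-extraction step can be sketched as follows: a spectrum is z-normalized, discretized into a small alphabet (an aggregate-approximation step in the spirit of SAX), and the Shannon entropy of the resulting symbol distribution becomes the feature. The 4-symbol Gaussian breakpoints and all names below are illustrative, not the authors' exact pipeline.

```python
import math

def symbolize(series, breakpoints=(-0.6745, 0.0, 0.6745)):
    """z-normalize and map each value to a symbol 0..len(breakpoints)."""
    mean = sum(series) / len(series)
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series)) or 1.0
    symbols = []
    for x in series:
        z = (x - mean) / sd
        symbols.append(sum(z > b for b in breakpoints))
    return symbols

def symbol_entropy(symbols):
    """Shannon entropy of the empirical symbol distribution."""
    n = len(symbols)
    probs = [symbols.count(s) / n for s in set(symbols)]
    return -sum(p * math.log(p) for p in probs)

flat = [1.0] * 64                                  # featureless "spectrum"
varied = [math.sin(0.3 * i) + 0.1 * i for i in range(64)]
e_flat = symbol_entropy(symbolize(flat))
e_varied = symbol_entropy(symbolize(varied))
```

A structureless spectrum maps to a single symbol and zero entropy, while real spectral variation spreads mass over the alphabet, which is what makes the entropy discriminative between egg classes.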
Measuring the Performance and Intelligence of Systems: Proceedings of the 2001 PerMIS Workshop
2001-09-04
35 1.1 Interval Mathematics for Analysis of Multiresolutional Systems V. Kreinovich, Univ. of Texas, R. Alo, Univ. of Houston-Downtown ... the possible combinations. In non-deterministic real-time systems, the problem is compounded by the uncertainty in the execution times of various ... multiresolutional, multiscale) in their essence because of the multiresolutional character of the meaning of words [Rieger, 01]. In integrating systems, the presence of a
Techniques and potential capabilities of multi-resolutional information (knowledge) processing
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes, irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.
Multiresolution analysis of characteristic length scales with high-resolution topographic data
NASA Astrophysics Data System (ADS)
Sangireddy, Harish; Stark, Colin P.; Passalacqua, Paola
2017-07-01
Characteristic length scales (CLS) define landscape structure and delimit geomorphic processes. Here we use multiresolution analysis (MRA) to estimate such scales from high-resolution topographic data. MRA employs progressive terrain defocusing, via convolution of the terrain data with Gaussian kernels of increasing standard deviation, and calculation at each smoothing resolution of (i) the probability distributions of curvature and topographic index (defined as the ratio of slope to area in log scale) and (ii) characteristic spatial patterns of divergent and convergent topography identified by analyzing the curvature of the terrain. The MRA is first explored using synthetic 1-D and 2-D signals whose CLS are known. It is then validated against a set of MARSSIM (a landscape evolution model) steady state landscapes whose CLS were tuned by varying hillslope diffusivity and simulated noise amplitude. The known CLS match the scales at which the distributions of topographic index and curvature show scaling breaks, indicating that the MRA can identify CLS in landscapes based on the scaling behavior of topographic attributes. Finally, the MRA is deployed to measure the CLS of five natural landscapes using meter resolution digital terrain model data. CLS are inferred from the scaling breaks of the topographic index and curvature distributions and equated with (i) small-scale roughness features and (ii) the hillslope length scale.
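The defocusing idea above can be sketched in 1-D: convolve a terrain profile with Gaussian kernels of increasing standard deviation and track how a curvature statistic decays with smoothing scale; a break in that decay marks a characteristic length scale. This is purely illustrative of the MRA mechanics, not the authors' 2-D implementation.

```python
import math

def gaussian_smooth(signal, sigma):
    """Convolve with a normalized Gaussian kernel, clamping at boundaries."""
    half = int(3 * sigma) + 1
    kernel = [math.exp(-(k * k) / (2 * sigma * sigma))
              for k in range(-half, half + 1)]
    s = sum(kernel)
    kernel = [w / s for w in kernel]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in zip(range(-half, half + 1), kernel):
            j = min(max(i + k, 0), n - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

def curvature_spread(signal):
    """Standard deviation of the discrete second difference (curvature)."""
    curv = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]
    mean = sum(curv) / len(curv)
    return math.sqrt(sum((c - mean) ** 2 for c in curv) / len(curv))

# synthetic profile: fine-scale ripples superposed on a broad ridge
profile = [math.sin(0.05 * i) + 0.2 * math.sin(1.5 * i) for i in range(400)]
spreads = [curvature_spread(gaussian_smooth(profile, s))
           for s in (1.0, 2.0, 4.0, 8.0)]
```

Progressive defocusing suppresses fine-scale curvature first, so the spread drops steeply until the smoothing scale passes the ripple wavelength and then flattens; locating that break is the 1-D analogue of reading CLS off the curvature distributions.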
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes unfolding over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for the study of the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change over time of gene expression at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The grouped genes identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides new horizons and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
Schob, Stefan; Beeskow, Anne; Dieckow, Julia; Meyer, Hans-Jonas; Krause, Matthias; Frydrychowicz, Clara; Hirsch, Franz-Wolfgang; Surov, Alexey
2018-05-31
Medulloblastomas are the most common central nervous system tumors in childhood. Treatment and prognosis strongly depend on histology and transcriptomic profiling; however, the proliferative potential also has prognostic value. Our study aimed to investigate correlations between histogram profiling of diffusion-weighted images and further microarchitectural features. Seven patients (median age 14.6 years, minimum 2 years, maximum 20 years; 5 male, 2 female) were included in this retrospective study. Using a Matlab-based analysis tool, histogram analysis of whole apparent diffusion coefficient (ADC) volumes was performed. ADC entropy revealed a strong inverse correlation with the expression of the proliferation marker Ki67 (r = -0.962, p = 0.009) and with total nuclear area (r = -0.888, p = 0.044). Furthermore, ADC percentiles, most of all ADCp90, showed significant correlations with Ki67 expression (r = 0.902, p = 0.036). Diffusion histogram profiling of medulloblastomas provides valuable in vivo information which can potentially be used for risk stratification and prognostication. In particular, entropy proved to be the most promising imaging biomarker. However, further studies are warranted.
The DOSY experiment provides insights into the protegrin-lipid interaction
NASA Astrophysics Data System (ADS)
Malliavin, T. E.; Louis, V.; Delsuc, M. A.
1998-02-01
The measurement of translational diffusion by PFG NMR has seen renewed interest with the development of DOSY experiments. Extracting diffusion coefficients from these experiments requires an inverse Laplace transform. We present here the use of the Maximum Entropy technique to perform this transform, together with an application of the method to the protegrin-lipid interaction. We show that analysis by DOSY experiments makes it possible to determine some features of this interaction.
NASA Astrophysics Data System (ADS)
Xiang-Guo, Meng; Hong-Yi, Fan; Ji-Suo, Wang
2018-04-01
This paper proposes a kind of displaced thermal states (DTS) and explores how this kind of optical field emerges, using the entangled state representation. The results show that the DTS can be generated by a coherent state passing through a diffusion channel with diffusion coefficient κ only when κt = (e^{ℏν/k_B T} − 1)^{−1}. Its statistical properties, such as the mean photon number, Wigner function and entropy, are also investigated.
Vázquez, J. L.
2010-01-01
The goal of this paper is to state the optimal decay rate for solutions of the nonlinear fast diffusion equation and, in self-similar variables, the optimal convergence rates to Barenblatt self-similar profiles and their generalizations. It relies on the identification of the optimal constants in some related Hardy–Poincaré inequalities and concludes a long series of papers devoted to generalized entropies, functional inequalities, and rates for nonlinear diffusion equations. PMID:20823259
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm based on multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals were recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
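The feature-vector construction above can be sketched with a Haar wavelet packet decomposition: every node at the chosen depth contributes one Shannon-entropy value computed from its normalized coefficient energies, and the concatenation forms the initial feature vector. The paper's exact wavelet and network are not reproduced here; this is a stdlib-only illustration.

```python
import math

def haar_step(x):
    """One Haar analysis step: approximation and detail halves."""
    s2 = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s2 for i in range(len(x) // 2)]
    return approx, detail

def wavelet_packet_nodes(x, levels):
    """Full wavelet packet tree: both halves of every node are split,
    yielding 2**levels leaf nodes."""
    nodes = [x]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def coeff_entropy(coeffs):
    """Shannon entropy of the normalized coefficient energies."""
    energy = sum(c * c for c in coeffs)
    if energy == 0:
        return 0.0
    p = [c * c / energy for c in coeffs if c != 0]
    return -sum(pi * math.log(pi) for pi in p)

signal = [math.sin(0.4 * i) for i in range(64)]
features = [coeff_entropy(n) for n in wavelet_packet_nodes(signal, levels=3)]
```

Subtracting a background entropy map and keeping the most discriminative components, as the abstract describes, would then operate on vectors like `features` before they reach the RBF network.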
2013-10-01
cancer for improving the overall specificity. Our recent work has focused on testing retrospective Maximum Entropy and Compressed Sensing of the 4D...terparts and increases the entropy or sparsity of the reconstructed spectrum by narrowing the peak linewidths and de-noising smaller features. This, in...tightened' beyond the standard deviation of the noise in an effort to reduce the RMSE and reconstruction non-linearity, but this prevents the
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
Meng, Jie; Zhu, Lijing; Zhu, Li; Wang, Huanhuan; Liu, Song; Yan, Jing; Liu, Baorui; Guan, Yue; Ge, Yun; He, Jian; Zhou, Zhengyang; Yang, Xiaofeng
2016-10-22
To explore the role of apparent diffusion coefficient (ADC) histogram shape related parameters in early assessment of treatment response during the concurrent chemo-radiotherapy (CCRT) course of advanced cervical cancers. This prospective study was approved by the local ethics committee and informed consent was obtained from all patients. Thirty-two patients with advanced cervical squamous cell carcinomas underwent diffusion weighted magnetic resonance imaging (b values, 0 and 800 s/mm²) before CCRT, at the end of the 2nd and 4th week during CCRT, and immediately after CCRT completion. Whole-lesion ADC histogram analysis generated several histogram shape related parameters including skewness, kurtosis, s-sDav, width, and standard deviation, as well as first-order entropy and second-order entropies. The averaged ADC histograms of the 32 patients were generated to visually observe dynamic changes of the histogram shape following CCRT. All parameters except width and standard deviation showed significant changes during CCRT (all P < 0.05), and their variation trends fell into four different patterns. Skewness and kurtosis both showed a high early decline rate (43.10%, 48.29%) at the end of the 2nd week of CCRT. All entropies decreased significantly from the 2nd week of CCRT onward. The shape of the averaged ADC histogram also changed markedly following CCRT. ADC histogram shape analysis holds potential for monitoring early tumor response in patients with advanced cervical cancers undergoing CCRT.
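Several of the histogram-shape parameters above (skewness, kurtosis, first-order entropy) have standard definitions that can be computed directly from the voxel values. A minimal numpy sketch, with an arbitrary bin count and synthetic ADC-like values (the paper's s-sDav and second-order entropies are not reproduced):

```python
import numpy as np

def adc_histogram_features(adc_values, bins=64):
    """Histogram-shape features of an ADC volume: skewness, kurtosis,
    and first-order (Shannon) entropy of the bin probabilities."""
    x = np.asarray(adc_values, dtype=float)
    z = (x - x.mean()) / x.std()
    skewness = float((z ** 3).mean())
    kurtosis = float((z ** 4).mean())          # 3.0 for a Gaussian
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                               # drop empty bins
    entropy = float(-(p * np.log2(p)).sum())   # bits
    return {"skewness": skewness, "kurtosis": kurtosis, "entropy": entropy}

rng = np.random.default_rng(1)
# synthetic ADC values around 1.2e-3 mm^2/s, purely illustrative
feats = adc_histogram_features(rng.normal(1.2e-3, 2.0e-4, size=10_000))
```

For a near-Gaussian sample, skewness is close to 0 and kurtosis close to 3, which is a quick check that the moments are normalized as intended.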
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.
2016-10-01
Non-Gaussian (anomalous) diffusion is widespread in biological tissues, where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model for the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers for differentiating between the microstructure of white and gray matter.
In addition, we note that the fractal derivative model has practical advantages over the existing models from the perspective of computational accuracy and efficiency.
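Spectral entropy, used above as a complexity measure, has a simple generic definition as the Shannon entropy of the normalized power spectrum. A minimal numpy sketch of that definition (not the paper's diffusion-signal pipeline; the signals are synthetic):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum, in bits:
    near 0 for a pure tone, near log2(N) for white noise."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
tone = np.sin(2.0 * np.pi * 32.0 * t)       # power concentrated in one bin
rng = np.random.default_rng(2)
noise = rng.normal(size=1024)               # power spread across all bins
h_tone = spectral_entropy(tone)
h_noise = spectral_entropy(noise)
```

The tone's spectrum is concentrated in a single frequency bin, so its entropy is far lower than that of the noise, illustrating how the measure separates ordered from disordered structure.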
Detecting causality in policy diffusion processes.
Grabow, Carsten; Macinko, James; Silver, Diana; Porfiri, Maurizio
2016-08-01
A universal question in network science entails learning about the topology of interaction from collective dynamics. Here, we address this question by examining diffusion of laws across US states. We propose two complementary techniques to unravel determinants of this diffusion process: information-theoretic union transfer entropy and event synchronization. In order to systematically investigate their performance on law activity data, we establish a new stochastic model to generate synthetic law activity data based on plausible networks of interactions. Through extensive parametric studies, we demonstrate the ability of these methods to reconstruct networks, varying in size, link density, and degree heterogeneity. Our results suggest that union transfer entropy should be preferred for slowly varying processes, which may be associated with policies attending to specific local problems that occur only rarely or with policies facing high levels of opposition. In contrast, event synchronization is effective for faster enactment rates, which may be related to policies involving Federal mandates or incentives. This study puts forward a data-driven toolbox, drawing on dynamical systems, information theory, and complex networks, to explain the determinants of legal activity in political science.
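Event synchronization, one of the two techniques named above, can be illustrated with a simplified variant of the Quian Quiroga estimator: count events in one series that fall within a tolerance window of an event in the other, and normalize. This sketch is not the paper's exact estimator (and the transfer-entropy companion is omitted); the enactment years are made up:

```python
import numpy as np

def event_synchronization(tx, ty, tau):
    """Simplified event synchronization between two event-time series:
    fraction of events in each series that occur within +/- tau of an
    event in the other, normalized to lie in [0, 1]."""
    tx, ty = np.asarray(tx, float), np.asarray(ty, float)

    def matched(a, b):
        # number of events in a with a partner in b within tolerance tau
        return sum(1 for t in a if np.any(np.abs(b - t) <= tau))

    return (matched(tx, ty) + matched(ty, tx)) / (2.0 * np.sqrt(len(tx) * len(ty)))

# hypothetical enactment years for two states
years_a = [1991.0, 1994.0, 2001.0, 2008.0]
q_same = event_synchronization(years_a, years_a, tau=0.5)          # fully synchronized
q_none = event_synchronization(years_a, [1950.0, 1960.0], tau=0.5)  # no overlap
```

Identical series give the maximum value 1.0, disjoint series give 0.0, so the measure behaves as a normalized synchrony score.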
Gihr, Georg Alexander; Horvath-Rizea, Diana; Garnov, Nikita; Kohlhof-Meinecke, Patricia; Ganslandt, Oliver; Henkes, Hans; Meyer, Hans Jonas; Hoffmann, Karl-Titus; Surov, Alexey; Schob, Stefan
2018-02-01
Presurgical grading, estimation of growth kinetics, and other prognostic factors are becoming increasingly important for selecting the best therapeutic approach for meningioma patients. Diffusion-weighted imaging (DWI) provides microstructural information and reflects tumor biology. A novel DWI approach, histogram profiling of apparent diffusion coefficient (ADC) volumes, provides more distinct information than conventional DWI. Therefore, our study investigated whether ADC histogram profiling distinguishes low-grade from high-grade lesions and reflects Ki-67 expression and progesterone receptor status. Pretreatment ADC volumes of 37 meningioma patients (28 low-grade, 9 high-grade) were used for histogram profiling. WHO grade, Ki-67 expression, and progesterone receptor status were evaluated. Comparative and correlative statistics investigating the association between histogram profiling and neuropathology were performed. The entire ADC profile (p10, p25, p75, p90, mean, median) was significantly lower in high-grade versus low-grade meningiomas. The lower percentiles, mean, and modus showed significant correlations with Ki-67 expression. Skewness and entropy of the ADC volumes were significantly associated with progesterone receptor status and Ki-67 expression. ROC analysis revealed entropy to be the most accurate parameter for distinguishing low-grade from high-grade meningiomas. ADC histogram profiling provides a distinct set of parameters, which help differentiate low-grade from high-grade meningiomas. Also, histogram metrics correlate significantly with histological surrogates of the respective proliferative potential. More specifically, entropy proved to be the most promising imaging biomarker for presurgical grading. Both entropy and skewness were significantly associated with progesterone receptor status and Ki-67 expression and therefore should be investigated further as predictors of prognostically relevant tumor biological features.
Since absolute ADC values vary between MRI scanners of different vendors and field strengths, their use is more limited in the presurgical setting.
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...
2016-01-01
We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions. It uses adaptive, fast harmonic analysis methods with guaranteed precision, based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
Jin, Ke; Zhang, Chuan; Zhang, Fan; ...
2018-03-07
To investigate the compositional effects on thermal-diffusion kinetics in concentrated solid-solution alloys, interdiffusion in seven diffusion couples, with alloys ranging from binary to quinary, is systematically studied. Alloys with higher compositional complexity generally exhibit lower diffusion coefficients against homologous temperature; an exception is that diffusion in NiCoFeCrPd is faster than in NiCoFeCr and NiCoCr. While the derived diffusion parameters suggest that diffusion in medium- and high-entropy alloys is overall more retarded than in pure metals and binary alloys, they depend strongly on the specific constituents. These comparative features are captured by computational thermodynamics approaches using a self-consistent database.
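Diffusion parameters of the kind derived above (a pre-exponential factor D0 and an activation energy Q) are conventionally extracted from an Arrhenius fit, D = D0·exp(−Q/RT). A minimal numpy sketch on synthetic data (the values are illustrative, not the paper's measurements):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def fit_arrhenius(temperatures, diffusivities):
    """Fit D = D0 * exp(-Q / (R*T)) by linear regression of ln(D) vs 1/T.
    Returns (D0, Q) with Q in J/mol."""
    inv_t = 1.0 / np.asarray(temperatures, float)
    ln_d = np.log(np.asarray(diffusivities, float))
    slope, intercept = np.polyfit(inv_t, ln_d, 1)
    return float(np.exp(intercept)), float(-slope * R)

# synthetic couple data with known D0 = 1e-4 m^2/s and Q = 250 kJ/mol
T = np.array([1100.0, 1200.0, 1300.0, 1400.0])
D = 1e-4 * np.exp(-250e3 / (R * T))
d0_fit, q_fit = fit_arrhenius(T, D)
```

Because ln(D) is exactly linear in 1/T here, the fit recovers the input parameters, which is the basic consistency check before applying the procedure to noisy measurements.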
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
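The core consistency argument above (apply the transfer function to a pdf, not to an averaged intensity) can be shown with a toy example; the paper's 4D Gaussian-mixture machinery is not reproduced, and the bin layout is arbitrary:

```python
import numpy as np

def shade_from_pdf(pdf_bins, transfer_function):
    """Apply a 1D transfer function to a voxel's intensity pdf: the
    rendered value is the expectation E_p[TF(i)], which stays consistent
    across resolution levels, unlike TF applied to the mean intensity."""
    p = pdf_bins / pdf_bins.sum()
    return float((p * transfer_function).sum())

# a coarse voxel covering half dark material (bin 2), half bright (bin 13)
pdf = np.zeros(16)
pdf[2] = pdf[13] = 0.5

tf = np.zeros(16)
tf[12:] = 1.0  # transfer function highlighting only bright intensities

e_tf = shade_from_pdf(pdf, tf)           # bright material stays visible: 0.5
naive = float(tf[round((2 + 13) / 2)])   # TF of the averaged intensity: 0.0
```

Standard down-sampling averages the two materials into an intermediate intensity that the transfer function maps to zero, so the bright structure vanishes at coarse levels; the pdf-based expectation preserves its contribution.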
Computer Science Techniques Applied to Parallel Atomistic Simulation
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro
1998-03-01
Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-01-01
The RubiX [1] algorithm combines the high SNR characteristics of low resolution data with the high spatial specificity of high resolution data to extract microstructural tissue parameters from diffusion MRI. In this paper, we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.
Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong
2017-09-03
The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method for optimizing processing parameters in an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency, and the mechanical properties of the micro-system are considered together. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to pass the macroscopic results to the microscopic system as boundary conditions, connecting the different scales. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient, and entropy-enthalpy values, are calculated under different processing parameters using molecular dynamics. A low-entropy region is then obtained from the interrelation among entropy-enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity), and processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption, and higher energy quality simultaneously. Finally, nine groups of experiments were carried out to verify the simulation results. The results show that the low-entropy optimization method can effectively reduce void content and further improve the mechanical properties of laminates.
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
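In the single-Gaussian case, the maximum-entropy density consistent with a mean and covariance is the Gaussian with those moments, so the resampling step amounts to moment matching. A minimal numpy sketch of that special case (the paper's Gaussian-mixture construction is not reproduced):

```python
import numpy as np

def max_entropy_resample(particles, weights, rng):
    """Replace a weighted ensemble by samples from the Gaussian matching
    its first two moments -- the maximum-entropy density under mean and
    covariance constraints (single-Gaussian case only)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    x = np.asarray(particles, float)
    mean = (w[:, None] * x).sum(axis=0)
    diff = x - mean
    cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0)
    return rng.multivariate_normal(mean, cov, size=len(x))

rng = np.random.default_rng(3)
parts = rng.normal([0.0, 5.0], 1.0, size=(500, 2))  # equal-weight toy ensemble
new_parts = max_entropy_resample(parts, np.ones(500), rng)
```

The resampled ensemble has (approximately) the same mean and covariance as the original, but its members are fresh equal-weight samples, which is the point of the parametric resampling step.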
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies showed that all vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters at multiple resolutions nor the distinguishability of vascular parameters across different data groups has been discussed. In this paper, we propose a quantitative analysis method for vascular networks at multiple resolutions, analyzing the sensitivity of vascular parameters across resolutions and estimating the distinguishability of vascular parameters between data groups. Combining sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters, nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction count, and segment number were five parameters that could better distinguish vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length, and segment number were not only insensitive to resolution but could also better distinguish vascular networks from different groups, providing guidance for the quantification of vascular networks in multi-resolution frameworks.
Gradient-based multiresolution image fusion.
Petrović, Vladimir S; Xydeas, Costas S
2004-02-01
A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
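The per-pixel feature selection at the heart of gradient-map fusion can be illustrated with a toy example: at each pixel, keep the input gradient vector with the largest magnitude. This shows only the selection idea, not the paper's full scheme with QMF-derived gradient filters and pyramid reconstruction:

```python
import numpy as np

def fuse_gradient_maps(grads):
    """Fuse per-image gradient maps by keeping, at every pixel, the
    gradient vector with the largest magnitude (a common selection rule)."""
    g = np.stack(grads)                   # (n_images, H, W, 2)
    mag = np.linalg.norm(g, axis=-1)      # (n_images, H, W)
    winner = mag.argmax(axis=0)           # (H, W): index of strongest input
    h, w = winner.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return g[winner, ii, jj]              # (H, W, 2) fused gradient map

# image A carries strong horizontal gradients, image B weaker vertical ones
ga = np.zeros((4, 4, 2)); ga[..., 0] = 2.0
gb = np.zeros((4, 4, 2)); gb[..., 1] = 1.0
fused = fuse_gradient_maps([ga, gb])
```

Because A's gradient magnitude dominates everywhere in this toy case, the fused map equals A's; in realistic inputs the winner varies per pixel, which is what lets salient features from every input survive fusion.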
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Fogtmann, Mads; Seshamani, Sharmishtaa; Kroenke, Christopher; Cheng, Xi; Chapman, Teresa; Wilm, Jakob; Rousseau, François
2014-01-01
This paper presents an approach to 3-D diffusion tensor image (DTI) reconstruction from multi-slice diffusion weighted (DW) magnetic resonance imaging acquisitions of the moving fetal brain. Motion scatters the slice measurements in the spatial and spherical diffusion domain with respect to the underlying anatomy. Previous image registration techniques have been described to estimate the between-slice fetal head motion, allowing the reconstruction of a 3-D diffusion estimate on a regular grid using interpolation. We propose an Approach to Unified Diffusion Sensitive Slice Alignment and Reconstruction (AUDiSSAR) that explicitly formulates a process for diffusion-direction-sensitive DW-slice-to-DTI-volume alignment. This also incorporates image resolution modeling to iteratively deconvolve the effects of the imaging point spread function using the multiple views provided by thick slices acquired in different anatomical planes. The algorithm is implemented using a multi-resolution iterative scheme, and multiple real and synthetic data are used to evaluate the performance of the technique. An accuracy experiment using synthetically created motion data of an adult head and an experiment using synthetic motion added to a sedated fetal monkey dataset show a significant improvement in motion-trajectory estimation compared to state-of-the-art approaches. The performance of the method is then evaluated on challenging but clinically typical in utero fetal scans of four different human cases, showing improved rendition of cortical anatomy and extraction of white matter tracts. While the experimental work focuses on DTI reconstruction (second-order tensor model), the proposed reconstruction framework can employ any 5-D diffusion volume model that can be represented by the spatial parameterizations of an orientation distribution function.
Economics and Maximum Entropy Production
NASA Astrophysics Data System (ADS)
Lorenz, R. D.
2003-04-01
Price differentials, sales volume, and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One respect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical-mechanical tendency of systems to seek a maximum in production is very evident in economics, where the profit motive is very clear. Noting the common link among 1/f noise, power laws, Self-Organized Criticality, and Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Dynamics of two-dimensional monolayer water confined in hydrophobic and charged environments.
Kumar, Pradeep; Han, Sungho
2012-09-21
We perform molecular dynamics simulations to study the effect of charged surfaces on the intermediate- and long-time dynamics of water in nanoconfinements. Here, we use the transferable interaction potential with five points (TIP5P) model of a water molecule confined between both hydrophobic and charged surfaces. For a single molecular layer of water between the surfaces, we find that the temperature dependence of the lateral diffusion constant of water remains Arrhenius up to very high temperatures, with a high activation energy. In the case of charged surfaces, however, the dynamics of water in the intermediate time regime is drastically modified, presumably due to the transient coupling of the dipoles of water molecules with electric field fluctuations induced by charges on the confining surfaces. Specifically, the lateral mean square displacements display a distinct super-diffusive behavior at the intermediate time scale, defined as the time scale between the ballistic and diffusive regimes. This change in the intermediate-time dynamics in charged confinement leads to an enhancement of the long-time dynamics, as reflected in an increasing diffusion constant. We introduce a simple model as a possible explanation of the super-diffusive behavior and find it to be in good agreement with our simulation results. Furthermore, we find that confinement and surface polarity enhance the low-frequency vibrations in confinement compared to bulk water. By introducing a new effective length scale of coupling between translational and orientational motions, we find that the length scale increases with the increasing strength of the surface polarity. Further, we calculate the correlation between the diffusion constant and the excess entropy and find a disordering effect of polar surfaces on the structure of water. Finally, we find that the empirical relation between the diffusion constant and the excess entropy holds for a monolayer of water in nanoconfinement.
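The diagnostic behind the ballistic/super-diffusive/diffusive classification above is the scaling exponent of the mean square displacement, MSD(t) ~ t^α (α ≈ 1 diffusive, 1 < α < 2 super-diffusive, α ≈ 2 ballistic). A generic numpy sketch on synthetic trajectories, not the paper's analysis pipeline:

```python
import numpy as np

def msd(traj, max_lag):
    """Mean square displacement of a trajectory of shape (n_steps, dims)."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def scaling_exponent(msd_vals, dt=1.0):
    """Slope of log MSD vs log t: ~1 diffusive, >1 super-diffusive,
    ~2 ballistic."""
    lags = dt * np.arange(1, len(msd_vals) + 1)
    slope, _ = np.polyfit(np.log(lags), np.log(msd_vals), 1)
    return float(slope)

rng = np.random.default_rng(4)
brownian = np.cumsum(rng.normal(size=(20_000, 2)), axis=0)   # random walk
ballistic = np.outer(np.arange(20_000), [1.0, 0.5])          # constant velocity
a_diff = scaling_exponent(msd(brownian, 50))
a_ball = scaling_exponent(msd(ballistic, 50))
```

The random walk yields an exponent near 1 and the constant-velocity trajectory exactly 2, bracketing the super-diffusive regime reported in the abstract.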
Thermodynamics of viscoelastic rate-type fluids with stress diffusion
NASA Astrophysics Data System (ADS)
Málek, Josef; Průša, Vít; Skřivan, Tomáš; Süli, Endre
2018-02-01
We propose thermodynamically consistent models for viscoelastic fluids with a stress diffusion term. In particular, we derive variants of compressible/incompressible Maxwell/Oldroyd-B models with a stress diffusion term in the evolution equation for the extra stress tensor. It is shown that the stress diffusion term can be interpreted either as a consequence of a nonlocal energy storage mechanism or as a consequence of a nonlocal entropy production mechanism, while different interpretations of the stress diffusion mechanism lead to different evolution equations for the temperature. The benefits of the knowledge of the thermodynamical background of the derived models are documented in the study of nonlinear stability of equilibrium rest states. The derived models open up the possibility to study fully coupled thermomechanical problems involving viscoelastic rate-type fluids with stress diffusion.
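As a schematic illustration of the model class discussed above (not the paper's exact compressible/incompressible variants or notation), a diffusive Oldroyd-B evolution equation adds a Laplacian term to the usual upper-convected dynamics of the extra stress τ; here λ is the relaxation time, η_p the polymeric viscosity, 𝐃 the symmetric velocity gradient, and δ an assumed stress-diffusion coefficient:

```latex
\lambda\,\overset{\triangledown}{\boldsymbol{\tau}} + \boldsymbol{\tau}
  = 2\eta_p \mathbf{D} + \lambda\,\delta\,\Delta\boldsymbol{\tau},
\qquad
\overset{\triangledown}{\boldsymbol{\tau}}
  = \partial_t \boldsymbol{\tau} + (\mathbf{v}\cdot\nabla)\boldsymbol{\tau}
  - (\nabla\mathbf{v})\,\boldsymbol{\tau} - \boldsymbol{\tau}\,(\nabla\mathbf{v})^{\mathsf{T}}
```

Per the abstract, whether the δΔτ term is attributed to nonlocal energy storage or to nonlocal entropy production changes the temperature evolution equation that accompanies it.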
NASA Astrophysics Data System (ADS)
Chávez, Yoshua; Chacón-Acosta, Guillermo; Dagdug, Leonardo
2018-05-01
Axial diffusion in channels and tubes of smoothly-varying geometry can be approximately described as one-dimensional diffusion in the entropy potential with a position-dependent effective diffusion coefficient, by means of the modified Fick–Jacobs equation. In this work, we derive analytical expressions for the position-dependent effective diffusivity for two-dimensional asymmetric varying-width channels, and for three-dimensional curved midline tubes, formed by straight walls. To this end, we use a recently developed theoretical framework using the Frenet–Serret moving frame as the coordinate system (2016 J. Chem. Phys. 145 074105). For narrow tubes and channels, an effective one-dimensional description reducing the diffusion equation to a Fick–Jacobs-like equation in general coordinates is used. From this last equation, one can calculate the effective diffusion coefficient applying Neumann boundary conditions.
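The modified Fick-Jacobs description referred to above is commonly closed with the Reguera-Rubí form of the position-dependent effective diffusivity; for a symmetric 2D channel of half-width w(x) it reads D(x) = D0 [1 + w'(x)²]^(-1/3) (with exponent -1/2 often quoted for 3D tubes). The sketch below uses that standard closure as an assumption; it is not the exact expressions derived in the paper for asymmetric channels and curved-midline tubes.

```python
import numpy as np

# Sketch of the Reguera-Rubi position-dependent effective diffusivity for a
# symmetric 2D channel of half-width w(x):
#     D(x) = D0 * (1 + w'(x)**2)**(-1/3)
# This is the standard Fick-Jacobs-type closure, assumed here for
# illustration; the paper derives more general expressions.
def effective_diffusivity_2d(x, w, D0=1.0):
    dwdx = np.gradient(w, x)                 # numerical w'(x)
    return D0 * (1.0 + dwdx**2) ** (-1.0 / 3.0)

x = np.linspace(0.0, 10.0, 1001)
w = 1.0 + 0.5 * np.sin(2 * np.pi * x / 10)   # smoothly varying half-width
Dx = effective_diffusivity_2d(x, w)
print(Dx.min(), Dx.max())                    # D(x) <= D0, equality where w'(x) = 0
```

The effective diffusivity is reduced wherever the wall slope is large, which is how the one-dimensional reduction accounts for the entropic bottlenecks of the full geometry.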
A cross-diffusion system derived from a Fokker-Planck equation with partial averaging
NASA Astrophysics Data System (ADS)
Jüngel, Ansgar; Zamponi, Nicola
2017-02-01
A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-01-01
A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) nontruncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder were shown to result in severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel size, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
Alexandrescu, A. T.; Rathgeb-Szabo, K.; Rumpel, K.; Jahnke, W.; Schulthess, T.; Kammerer, R. A.
1998-01-01
Backbone 15N relaxation parameters (R1, R2, 1H-15N NOE) have been measured for a 22-residue recombinant variant of the S-peptide in its free and S-protein bound forms. NMR relaxation data were analyzed using the "model-free" approach (Lipari & Szabo, 1982). Order parameters obtained from "model-free" simulations were used to calculate 1H-15N bond vector entropies using a recently described method (Yang & Kay, 1996), in which the form of the probability density function for bond vector fluctuations is derived from a diffusion-in-a-cone motional model. The average change in 1H-15N bond vector entropies for residues T3-S15, which become ordered upon binding of the S-peptide to the S-protein, is −12.6 ± 1.4 J mol⁻¹ residue⁻¹ K⁻¹. 15N relaxation data suggest a gradient of decreasing entropy values moving from the termini toward the center of the free peptide. The difference between the entropies of the terminal and central residues is about −12 J mol⁻¹ residue⁻¹ K⁻¹, a value comparable to the average entropy change per residue upon complex formation. Similar entropy gradients are evident in NMR relaxation studies of other denatured proteins. Taken together, these observations suggest that denatured proteins may contain entropic contributions from non-local interactions. Consequently, calculations that model the entropy of a residue in a denatured protein as that of a residue in a di- or tri-peptide might overestimate the magnitude of entropy changes upon folding. PMID:9521116
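The Yang & Kay (1996) diffusion-in-a-cone method cited above maps a model-free order parameter S² to a conformational entropy via S_conf = k_B ln{π[3 − (1 + 8S)^(1/2)]}, where S = (S²)^(1/2). A minimal sketch of the entropy change between two states, using hypothetical S² values chosen only to illustrate the magnitude (not the paper's fitted values):

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K), to express entropy per mole of residues

# Yang & Kay (1996) diffusion-in-a-cone entropy from a model-free order
# parameter S^2:  S_conf = kB * ln( pi * (3 - sqrt(1 + 8*S)) ), S = sqrt(S^2).
# The S^2 values below are hypothetical illustrations, not fitted data.
def bond_vector_entropy(S2):
    S = np.sqrt(S2)
    return R * np.log(np.pi * (3.0 - np.sqrt(1.0 + 8.0 * S)))  # J/(mol K)

dS = bond_vector_entropy(0.85) - bond_vector_entropy(0.50)  # bound minus free
print(f"Delta S ~ {dS:.1f} J/(mol K)")
```

With these illustrative order parameters the per-residue entropy change comes out near −11 J mol⁻¹ K⁻¹, the same order as the −12.6 ± 1.4 J mol⁻¹ residue⁻¹ K⁻¹ reported in the abstract.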
Evaluation of the entropy consistent Euler flux on 1D and 2D test problems
NASA Astrophysics Data System (ADS)
Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad
2012-06-01
Most CFD simulations yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, thus compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and hope that the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserved part, preserving entropy for smooth flows, and a numerical diffusion part that produces the proper amount of entropy, consistent with the second law. The entropy consistent flux has been tested on several two-dimensional cases. The first case is a Mach 3 flow over a forward facing step. The second case is a flow over a NACA 0012 airfoil, while the third case is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared mainly with the Roe flux. The results herein show that the EC flux does not capture the unphysical rarefaction shock, unlike the Roe flux, and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
NASA Astrophysics Data System (ADS)
Ashworth, J. R.; Sheplev, V. S.
1997-09-01
Layered coronas between two reactant minerals can, in many cases, be attributed to diffusion-controlled growth with local equilibrium. This paper clarifies and unifies the previous approaches of various authors to the simplest form of modelling, which uses no assumed values for thermochemical quantities. A realistic overall reaction must be estimated from measured overall proportions of minerals and their major element compositions. Modelling is not restricted to a particular number of components S, relative to the number of phases Φ. If Φ > S + 1, the overall reaction is a combination of simultaneous reactions. The stepwise method, solving for the local reaction at each boundary in turn, is extended to allow for recurrence of a mineral (its presence in two parts of the layer structure separated by a gap). The equations are also given in matrix form. A thermodynamic stability criterion is derived, determining which layer sequence is truly stable if several are computable from the same inputs. A layer structure satisfying the stability criterion has a greater growth rate (and greater rate of entropy production) than the other computable layer sequences. This criterion of greatest entropy production is distinct from Prigogine's theorem of minimum entropy production, which distinguishes the stationary or quasi-stationary state from other states of the same layer sequence. The criterion leads to modification of previous results for coronas comprising hornblende, spinel, and orthopyroxene between olivine (Ol) and plagioclase (Pl). The outcome supports the previous inference that Si, and particularly Al, commonly behave as immobile relative to other cation-forming major elements. The affinity (-ΔG) of a corona-forming reaction is estimated, using previous estimates of the diffusion coefficient and the duration t of reaction, together with a new model quantity (-ΔG)*.
For an example of the Ol + Pl reaction, a rough calculation gives (-ΔG) > 1.7RT (per mole of Pl consumed, based on a 24-oxygen formula for Pl). At 600-700°C, this represents (-ΔG) > 10 kJ mol⁻¹ and departure from the equilibrium temperature by at least ~100°C. The lower end of this range is petrologically reasonable and, for t < 100 Ma, corresponds to a Fick's-law diffusion coefficient for Al, D_Al > 10⁻²⁵ m² s⁻¹, larger than expected for lattice diffusion but consistent with fluid-absent grain-boundary diffusion and small concentration gradients.
Wave and pseudo-diffusion equations from squeezed states
NASA Technical Reports Server (NTRS)
Daboul, Jamil
1993-01-01
We show that the probability distributions P_n(q,p;y) := |⟨n|p,q;y⟩|², which are obtained from squeezed states, obey an interesting partial differential equation, to which we give two intuitive interpretations: as a wave equation in one space dimension, and as a pseudo-diffusion equation. We also study the corresponding Wehrl entropies S_n(y), and we show that they have minima at zero squeezing, y = 0.
Entropy production and rectification efficiency in colloid transport along a pulsating channel
NASA Astrophysics Data System (ADS)
Florencia Carusela, M.; Rubi, J. Miguel
2018-06-01
We study the current rectification of particles moving in a pulsating channel under the influence of an applied force. We show the existence of different rectification scenarios in which entropic and energetic effects compete. The effect can be quantified by means of a rectification coefficient that is analyzed in terms of the force, the frequency, and the diffusion coefficient. The energetic cost of the motion of the particles, expressed in terms of the entropy production, depends on the importance of the entropic contribution to the total force. Rectification is more important at low values of the applied force, when entropic effects become dominant. In this regime, the entropy production is not invariant under reversal of the applied force. The phenomenon observed could be used to optimize transport in microfluidic devices or in biological channels.
Fluctuation theorem: A critical review
NASA Astrophysics Data System (ADS)
Malek Mansour, M.; Baras, F.
2017-10-01
The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems, and the resulting stochastic thermodynamics, is analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.
NASA Astrophysics Data System (ADS)
Baig, Mohammad Saad; Chakraborty, Brahmananda; Ramaniah, Lavanya M.
2016-05-01
NaF-ZrF4 is used as a waste incinerator and as a coolant in Generation IV reactors. Structural and dynamical properties of the molten NaF-ZrF4 system were studied, along with Onsager coefficients and Maxwell-Stefan (MS) diffusivities, by applying the Green-Kubo formalism and molecular dynamics (MD) simulations. The zirconium ions are found to be 8-fold coordinated with fluoride ions for all temperatures and concentrations. All the diffusive flux correlations show back-scattering. Even though the MS diffusivities are expected to depend only weakly on the composition, because of the decoupling of the thermodynamic factor, the diffusivity ĐNa-F shows interesting behavior with increasing concentration of ZrF4. This is because of network formation in NaF-ZrF4. Positive entropy constraints have been plotted to validate the negative diffusivities observed.
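The Green-Kubo formalism invoked above obtains a self-diffusion coefficient as one third of the time integral of the velocity autocorrelation function (VACF), D = (1/3) ∫₀^∞ ⟨v(0)·v(t)⟩ dt. A minimal sketch with a synthetic exponentially decaying VACF standing in for MD data:

```python
import numpy as np

# Green-Kubo sketch:  D = (1/3) * integral of the VACF over time.
# A synthetic exponentially decaying VACF stands in for MD output; a real
# calculation would average v(0).v(t) over atoms and time origins, and
# back-scattering would appear as a negative dip in the VACF.
t = np.linspace(0.0, 5.0, 5001)      # time grid (e.g. ps)
tau, C0 = 0.1, 3.0                   # assumed decay time and <v(0).v(0)>
vacf = C0 * np.exp(-t / tau)

# trapezoidal integration of the VACF
D = np.sum(0.5 * (vacf[1:] + vacf[:-1]) * np.diff(t)) / 3.0
print(D)                             # analytic value: C0 * tau / 3 = 0.1
```

Cross-correlations of the fluxes, integrated the same way, give the off-diagonal Onsager coefficients mentioned in the abstract.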
Hinoue, Teruo; Ikeda, Eiji; Watariguchi, Shigeru; Kibune, Yasuyuki
2007-01-01
Thermal modulation voltammetry (TMV) with laser heating was successfully performed at an aqueous|nitrobenzene (NB) solution microinterface, by taking advantage of the fact that laser light with a wavelength of 325.0 nm is optically transparent to the aqueous solution but opaque to the NB solution. When the laser beam impinges upon the interface from the aqueous solution side, the temperature around the interface is raised through thermal diffusion subsequent to the light-to-heat conversion that follows optical absorption by the NB solution near the interface. Based on this principle, we achieved a fluctuating temperature perturbation around the interface for TMV by periodically irradiating the interface with the laser beam. The fluctuating temperature perturbation influences the currents for transfer of an ion across the interface, producing fluctuating currents synchronized with the perturbation through the temperature coefficients of several variables concerning the transfer, such as the standard transfer potential and the diffusion coefficient of the ion. Consequently, TMV has the possibility of providing information about the standard entropy change of transfer, corresponding to the temperature coefficient of the standard transfer potential, and the temperature coefficient of the diffusion coefficient. In this work, an aqueous|NB solution interface 30 μm in diameter was irradiated with the laser beam at 10 Hz, and the currents synchronized with the periodic irradiation were recorded as a function of the potential difference across the interface in order to construct a TM voltammogram. TM voltammograms were measured for transfer of tetramethylammonium, tetraethylammonium, tetrapropylammonium, and tetra-n-butylammonium ions from the aqueous solution to the NB solution, and the standard entropy change of transfer was determined for each ion, according to an analytical procedure based on a mathematical expression of the TM voltammogram.
Comparison of the values obtained in this work with the literature values has proved that TMV with laser heating is available for the determination of the standard entropy change of transfer for an ion.
Multilevel Sequential² Monte Carlo for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth
2018-09-01
The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
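The fully adaptive tempering mentioned above is typically implemented by choosing the next inverse temperature so that the effective sample size (ESS) of the incremental importance weights hits a target fraction of the particle count. A minimal sketch of that step, with synthetic particle log-likelihoods standing in for expensive PDE evaluations (the ESS target and bisection scheme are common practice, an assumption here rather than the paper's exact criterion):

```python
import numpy as np

# Adaptive tempering sketch for an SMC sampler: pick the next inverse
# temperature beta' so that the ESS of the incremental weights
# w_i = exp((beta' - beta) * loglik_i) equals a target fraction of n.
def ess(logw):
    w = np.exp(logw - logw.max())    # stabilized normalized weights
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def next_beta(loglik, beta, target_frac=0.5, tol=1e-10):
    n = len(loglik)
    if ess((1.0 - beta) * loglik) >= target_frac * n:
        return 1.0                   # can jump straight to the posterior
    lo, hi = beta, 1.0
    while hi - lo > tol:             # bisection on the temperature increment
        mid = 0.5 * (lo + hi)
        if ess((mid - beta) * loglik) < target_frac * n:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
loglik = rng.normal(-50.0, 5.0, size=1000)   # synthetic log-likelihoods
beta1 = next_beta(loglik, beta=0.0)
print(beta1)                                 # strictly between 0 and 1 here
```

In the multilevel setting of the paper, a step like this competes against a bridging step to a finer PDE discretisation, and the sampler adaptively picks whichever is warranted.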
NASA Astrophysics Data System (ADS)
Higuchi, Saki; Kato, Daiki; Awaji, Daisuke; Kim, Kang
2018-03-01
We present a study using molecular dynamics simulations based on the Fermi-Jagla potential model, which is the continuous version of the mono-atomic core-softened Jagla model [J. Y. Abraham, S. V. Buldyrev, and N. Giovambattista, J. Phys. Chem. B 115, 14229 (2011)]. This model shows the water-like liquid-liquid phase transition between high-density and low-density liquids at the liquid-liquid critical point. In particular, the slope of the coexistence line becomes weakly negative, which is expected to represent one of the anomalies of liquid polyamorphism. In this study, we examined the density, dynamic, and thermodynamic anomalies in the vicinity of the liquid-liquid critical point. The boundaries of density, self-diffusion, shear viscosity, and excess entropy anomalies were characterized. Furthermore, these anomalies are connected according to Rosenfeld's scaling relationship between the excess entropy and the transport coefficients such as diffusion and viscosity. The results demonstrate the hierarchical and nested structures regarding the thermodynamic and dynamic anomalies of the Fermi-Jagla model.
Free Energy Landscape of Cellulose as a Driving Factor in the Mobility of Adsorbed Water.
Kulasinski, Karol
2017-06-06
The diffusion coefficient of water adsorbed in hydrophilic porous materials, such as noncrystalline cellulose, depends on water activity. Faster diffusion at higher water concentrations is observed in experimental and modeling studies. In this paper, two asymptotic water concentrations, near-vacuum and fully saturated, are investigated at the surface of crystalline cellulose with molecular dynamics simulations. An increasing water concentration leads to significant changes in the free energy landscape due to perturbation of the local electrostatic potential. Smoothing of strong energy minima, corresponding to sorption sites, and the formation of a layered structure facilitate water transport in the vicinity of cellulose. The determined transition probabilities and hydrogen bond stability reflect the changes in the energy landscape. As a result of a concentration increase, the emerging basins of attraction and the spreading out of those existing in the diluted state lead to an increase in water entropy. Thermal fluctuations of cellulose are demonstrated to rearrange the landscape in the diluted limit, increase adsorbed water entropy, and decrease the water-cellulose H-bond lifetime.
Maximum Path Information and Fokker Planck Equation
NASA Astrophysics Data System (ADS)
Li, Wei; Wang, Q. A.; LeMehaute, A.
2008-04-01
We present a rigorous method to derive the nonlinear Fokker-Planck (FP) equation of anomalous diffusion directly from a generalization of the principle of least action of Maupertuis proposed by Wang [Chaos, Solitons & Fractals 23 (2005) 1253] for smooth or quasi-smooth irregular dynamics evolving in a Markovian process. The FP equation obtained may take two different but equivalent forms. It was also found that the diffusion constant may depend on both q (the index of Tsallis entropy [J. Stat. Phys. 52 (1988) 479]) and the time t.
NASA Technical Reports Server (NTRS)
Colson, Russell O.; Haskin, Larry A.; Crane, Daniel
1990-01-01
Results are presented on determinations of the reduction potentials of selected ions in diopsidic melt, and their temperature dependence, using linear sweep voltammetry. Diffusion coefficients were measured for cations of Eu, Mn, Cr, and In. Enthalpies and entropies of reduction were determined for the cations V(V), Cr(3+), Mn(2+), Mn(3+), Fe(2+), Cu(2+), Mo(VI), Sn(IV), and Eu(3+). Reduction potentials were used to study the structural state of cations in the melt.
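The extraction of reduction entropies and enthalpies from the temperature dependence of a reduction potential rests on the standard electrochemical relations ΔG = −nFE, ΔS = nF(∂E/∂T), and ΔH = ΔG + TΔS. A minimal sketch with hypothetical E(T) values (not the paper's measurements):

```python
import numpy as np

# Standard electrochemical thermodynamics from E(T):
#     dG = -n*F*E,   dS = n*F*(dE/dT),   dH = dG + T*dS
# The E(T) values are hypothetical, chosen linear in T for illustration.
F = 96485.0                                  # Faraday constant, C/mol
n = 1                                        # electrons transferred
T = np.array([1600.0, 1650.0, 1700.0])       # melt temperatures, K
E = np.array([-0.520, -0.505, -0.490])       # reduction potential, V

dEdT = np.polyfit(T, E, 1)[0]                # temperature coefficient, V/K
dS = n * F * dEdT                            # J/(mol K)
dG = -n * F * E[1]                           # J/mol at 1650 K
dH = dG + T[1] * dS                          # J/mol at 1650 K
print(dS, dH)
```

With the assumed 0.3 mV/K slope this gives ΔS of about 29 J mol⁻¹ K⁻¹, illustrating how a modest temperature coefficient of the potential translates into the reduction entropy.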
Implications of pressure diffusion for shock waves
NASA Technical Reports Server (NTRS)
Ram, Ram Bachan
1989-01-01
The report deals with the possible implications of pressure diffusion for shocks in one dimensional traveling waves in an ideal gas. From this new hypothesis all aspects of such shocks can be calculated except shock thickness. Unlike conventional shock theory, the concept of entropy is not needed or used. Our analysis shows that temperature rises near a shock, which is of course an experimental fact; however, it also predicts that very close to a shock, density increases faster than pressure. In other words, a shock itself is cold.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yong, E-mail: 83229994@qq.com; Ge, Hao, E-mail: haoge@pku.edu.cn; Xiong, Jie, E-mail: jiexiong@umac.mo
Fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. There exist very few results for steady-state fluctuation theorem of sample entropy production rate in terms of large deviation principle for diffusion processes due to the technical difficulties. Here we give a proof for the steady-state fluctuation theorem of a diffusion process in magnetic fields, with explicit expressions of the free energy function and rate function. The proof is based on the Karhunen-Loève expansion of complex-valued Ornstein-Uhlenbeck process.
Survey and analysis of multiresolution methods for turbulence data
Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; ...
2015-11-10
This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, to capture the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both velocities and active scalar (density) fields and their derivatives, spectra, and the properties of constant density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. In conclusion, the final section provides recommendations for the best decomposition algorithms based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
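The compression idea being compared above — transform, truncate small coefficients, reconstruct — can be illustrated with the simplest member of the wavelet family, a single-level Haar transform. This is a toy 1D sketch, not the production-grade B-spline/curvelet/surfacelet machinery evaluated in the paper:

```python
import numpy as np

# Toy multi-resolution compression: one level of a Haar wavelet transform,
# truncation of small detail coefficients, and reconstruction.
def haar_level(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 512)) + 0.02 * rng.normal(size=512)
a, d = haar_level(signal)
d_trunc = np.where(np.abs(d) > 0.05, d, 0.0) # keep only large detail coefficients
recon = haar_inverse(a, d_trunc)
err = np.sqrt(np.mean((recon - signal) ** 2))
print(err)                                   # small RMS error despite truncation
```

The transform is exactly invertible when no coefficients are dropped; truncation trades a small reconstruction error for a much sparser representation, which is the trade-off the paper quantifies for turbulence fields.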
Multiresolution saliency map based object segmentation
NASA Astrophysics Data System (ADS)
Yang, Jian; Wang, Xin; Dai, ZhenYou
2015-11-01
Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used to initialize parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get more precise results without predefined pixel types. A concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map based object segmentation method is simple and efficient.
NASA Astrophysics Data System (ADS)
Jakse, N.; Pasturel, A.
2016-12-01
We perform ab initio molecular dynamics simulations to study structural and transport properties in liquid Al₁₋ₓCuₓ alloys, with copper composition x ≤ 0.4, in relation to the applicability of the Stokes-Einstein (SE) equation in these melts. To begin, we find that self-diffusion coefficients and viscosity are composition dependent, while their temperature dependence follows an Arrhenius-type behavior, except for x = 0.4 at low temperature. Then, we find that the applicability of the SE equation is also composition dependent, and its breakdown in the liquid regime above the liquidus temperature can be related to different local ordering around each species. In this case, we emphasize the difficulty of extracting effective atomic radii from interatomic distances found in liquid phases, but we see a clear correlation between transport properties and local ordering described through the structural entropy approximated by the two-body contribution. We use these findings to reformulate the SE equation within the framework of Rosenfeld's scaling law in terms of partial structural entropies, and we demonstrate that the breakdown of the SE relation can be related to their temperature dependence. Finally, we also use this framework to derive a simple relation between the ratio of the self-diffusivities of the components and the ratio of their partial structural entropies.
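The two-body approximation to the structural (excess) entropy used above is the standard pair-correlation integral s₂/k_B = −2πρ ∫ [g(r) ln g(r) − g(r) + 1] r² dr, and Rosenfeld's scaling posits a reduced diffusivity of the form D* ≈ a exp(b s_ex). A sketch with a model g(r) and assumed constants a, b (illustrative values, not the paper's fits):

```python
import numpy as np

# Two-body excess entropy from a pair correlation function g(r):
#     s2/kB = -2*pi*rho * Integral [ g*ln(g) - g + 1 ] * r^2 dr
# combined with Rosenfeld scaling D* ~ a * exp(b * s_ex). The model g(r)
# and constants a = 0.6, b = 0.8 are illustrative assumptions.
def pair_excess_entropy(r, g, rho):
    # integrand g*ln(g) - g + 1; the hard-core limit g -> 0 gives 1
    glng = np.where(g > 1e-12, g * np.log(np.clip(g, 1e-12, None)), 0.0)
    integrand = glng - g + 1.0
    dr = r[1] - r[0]
    return -2.0 * np.pi * rho * np.sum(integrand * r**2) * dr

r = np.linspace(0.0, 5.0, 2001)              # reduced units
g = np.where(r < 1.0, 0.0,                   # hard core, then damped oscillations
             1.0 + 0.5 * np.exp(-(r - 1.0)) * np.cos(2.0 * np.pi * (r - 1.0)))
rho = 0.8                                    # reduced number density (assumed)
s2 = pair_excess_entropy(r, g, rho)          # in units of kB; negative in liquids
D_star = 0.6 * np.exp(0.8 * s2)              # Rosenfeld form, a and b assumed
print(s2, D_star)
```

Because ordering makes s₂ more negative, the exponential scaling directly links the local-order differences around each species to the species-resolved diffusivities discussed in the abstract.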
NASA Astrophysics Data System (ADS)
Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.
2008-12-01
Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that provides the possibility to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of "scale-evolutivity", which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale-range is studied. In this case, called "parabolic scaling", the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.
Liquid Aluminum: Atomic diffusion and viscosity from ab initio molecular dynamics
Jakse, Noel; Pasturel, Alain
2013-01-01
We present a study of the dynamic properties of liquid aluminum using density-functional theory within the local-density (LDA) and generalized gradient (GGA) approximations. We determine the temperature dependence of the self-diffusion coefficient as well as the viscosity using direct methods. Comparisons with experimental data favor the LDA approximation for computing the dynamic properties of liquid aluminum. We show that the GGA approximation induces more important backscattering effects, due to an enhancement of the icosahedral short range order (ISRO), that directly impact dynamic properties like the self-diffusion coefficient. All these results are then used to test the Stokes-Einstein relation and the universal scaling law relating the diffusion coefficient and the excess entropy of a liquid. PMID:24190311
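The Stokes-Einstein test mentioned above compares independently computed D and η through D = k_B T / (c π η a), with c = 4 for slip and c = 6 for stick boundary conditions. A sketch with rough illustrative values for liquid aluminum near melting (not the paper's LDA/GGA results):

```python
import numpy as np

# Stokes-Einstein sketch:  D = kB*T / (c*pi*eta*a), c = 4 (slip) or 6 (stick).
# The viscosity and radius are rough illustrative values for liquid Al near
# its melting point, not computed ab initio results.
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 943.0                  # K, just above the melting point of Al (933 K)
eta = 1.3e-3               # Pa s (illustrative viscosity)
a = 1.43e-10               # m, atomic radius of Al (illustrative)

D_slip = kB * T / (4.0 * np.pi * eta * a)
D_stick = kB * T / (6.0 * np.pi * eta * a)
print(D_slip, D_stick)     # both in the typical liquid-metal range of m^2/s
```

Deviations between a diffusivity computed this way and one obtained directly from the mean square displacement are what signal a breakdown of the SE relation.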
Wang, Kun-Ching
2015-01-14
The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should be a good feature set for emotion classification in speech. Furthermore, multi-resolution analysis of texture can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real-life conditions, an acoustic activity detection (AAD) algorithm is applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally-occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotional recognition in speech.
First-principles calculation of entropy for liquid metals.
Desjarlais, Michael P
2013-12-01
We demonstrate the accurate calculation of entropies and free energies for a variety of liquid metals using an extension of the two-phase thermodynamic (2PT) model based on a decomposition of the velocity autocorrelation function into gas-like (hard sphere) and solid-like (harmonic) subsystems. The hard sphere model for the gas-like component is shown to give systematically high entropies for liquid metals as a direct result of the unphysical Lorentzian high-frequency tail. Using a memory function framework we derive a generally applicable velocity autocorrelation and frequency spectrum for the diffusive component which recovers the low-frequency (long-time) behavior of the hard sphere model while providing for realistic short-time coherence and high-frequency tails to the spectrum. This approach provides a significant increase in the accuracy of the calculated entropies for liquid metals and is compared to ambient pressure data for liquid sodium, aluminum, gallium, tin, and iron. The use of this method for the determination of melt boundaries is demonstrated with a calculation of the high-pressure bcc melt boundary for sodium. With the significantly improved accuracy available with the memory function treatment for softer interatomic potentials, the 2PT model for entropy calculations should find broader application in high energy density science, warm dense matter, planetary science, geophysics, and material science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baig, Mohammad Saad, E-mail: saad110baig@gmail.com; Chakraborty, Brahmananda; Ramaniah, Lavanya M.
NaF-ZrF{sub 4} is used as a waste incinerator and as a coolant in Generation IV reactors. Structural and dynamical properties of the molten NaF-ZrF{sub 4} system were studied, along with Onsager coefficients and Maxwell-Stefan (MS) diffusivities, applying the Green-Kubo formalism and molecular dynamics (MD) simulations. The zirconium ions are found to be 8-fold coordinated with fluoride ions at all temperatures and concentrations. All the diffusive flux correlations show back-scattering. Even though the MS diffusivities are expected to depend only weakly on composition because of the decoupling of the thermodynamic factor, the diffusivity Đ{sub Na-F} shows interesting behavior with increasing concentration of ZrF{sub 4}. This is because of network formation in NaF-ZrF{sub 4}. Positive entropy constraints have been plotted to validate the negative diffusivities observed.
The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...
Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.
Cowlagi, Raghvendra V; Tsiotras, Panagiotis
2012-10-01
We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
Coupling diffusion and maximum entropy models to estimate thermal inertia
USDA-ARS?s Scientific Manuscript database
Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...
Automatic brain tumor detection in MRI: methodology and statistical validation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Islam, Mohammad A.; Shaik, Jahangheer; Parra, Carlos; Ogg, Robert
2005-04-01
Automated brain tumor segmentation and detection are immensely important in medical diagnostics because they provide information about anatomical structures, as well as about potential abnormal tissue, necessary for appropriate surgical planning. In this work, we propose a novel automated brain tumor segmentation technique based on multiresolution texture information that combines fractional Brownian motion (fBm) and wavelet multiresolution analysis. Our wavelet-fractal technique combines the excellent multiresolution localization property of wavelets with the texture-extraction capability of fractals. We demonstrate the efficacy of our technique by successfully segmenting pediatric brain MR images (MRIs) from St. Jude Children's Research Hospital. We use a self-organizing map (SOM) as our clustering tool, wherein we exploit both pixel intensity and multiresolution texture features to obtain the segmented tumor. Our test results show that our technique successfully segments abnormal brain tissues in a set of T1 images. In the next step, we design a classifier using a feed-forward (FF) neural network to statistically validate the presence of tumor in MRI using both the multiresolution texture and the pixel intensity features. We estimate the corresponding receiver operating characteristic (ROC) curve based on the true positive fractions and false positive fractions obtained from our classifier at different threshold values. The ROC curve, which can be considered a gold standard for assessing the competence of a classifier, is obtained to ascertain the sensitivity and specificity of our classifier. We observe that at a threshold of 0.4 we achieve a true positive value of 1.0 (100%) while sacrificing only a 0.16 (16%) false positive value for the set of 50 T1 MRIs analyzed in this experiment.
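The wavelet multiresolution step of such a texture pipeline can be illustrated with a single level of the 2-D Haar decomposition. This is a generic pure-Python sketch, not the authors' implementation (real pipelines use an optimized discrete wavelet transform):

```python
# Sketch: one level of the 2-D Haar wavelet decomposition, the kind of
# multiresolution step used in wavelet-based texture analysis.
def haar2d_level(img):
    """Split an even-sized grayscale image into LL, LH, HL, HH subbands."""
    rows, cols = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for r in range(0, rows, 2):
        ll, lh, hl, hh = [], [], [], []
        for c in range(0, cols, 2):
            a, b = img[r][c],     img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ll.append((a + b + d + e) / 4.0)   # approximation (coarse image)
            lh.append((a - b + d - e) / 4.0)   # horizontal detail
            hl.append((a + b - d - e) / 4.0)   # vertical detail
            hh.append((a - b - d + e) / 4.0)   # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

flat = [[7.0] * 4 for _ in range(4)]
LL, LH, HL, HH = haar2d_level(flat)
# A constant image has all its energy in LL; the detail bands vanish.
```

Recursing on `LL` yields the multiresolution pyramid from which texture features (here combined with fBm-based fractal features) are extracted at each scale.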
Diffuse-interface model for rapid phase transformations in nonequilibrium systems.
Galenko, Peter; Jou, David
2005-04-01
A thermodynamic approach to rapid phase transformations within a diffuse interface in a binary system is developed. Assuming an extended set of independent thermodynamic variables formed by the union of the classic set of slow variables and the space of fast variables, we introduce finite speeds for heat and solute diffusive propagation as the interface advances. To describe transformations within the diffuse interface, we use the phase-field model, which allows us to follow steep but smooth changes of phase within the width of the diffuse interface. Governing equations of the phase-field model are derived for the hyperbolic model, a model with memory, and a model of nonlinear evolution of transformation within the diffuse interface. The consistency of the model is proved by verifying that the condition of positive entropy production holds and by the outcomes of the fluctuation-dissipation theorem. A comparison with existing sharp-interface and diffuse-interface versions of the model is given.
The Entropy of Non-Ergodic Complex Systems — a Derivation from First Principles
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Hanel, Rudolf
In information theory the four Shannon-Khinchin (SK) axioms determine Boltzmann-Gibbs entropy, S = -∑_i p_i log p_i, as the unique entropy. Physics is different from information in the sense that physical systems can be non-ergodic or non-Markovian. To characterize such strongly interacting statistical systems - complex systems in particular - within a thermodynamical framework, it might be necessary to introduce generalized entropies. A series of such entropies has been proposed in the past decades. Until now, the understanding of their fundamental origin and their deeper relations to complex systems has remained unclear. To clarify the situation we note that non-ergodicity explicitly violates the fourth SK axiom. We show that by relaxing this axiom the entropy generalizes to S = ∑_i Γ(d + 1, 1 - c log p_i), where Γ is the incomplete gamma function, and c and d are scaling exponents. All recently proposed entropies compatible with the first three SK axioms appear to be special cases. We prove that each statistical system is uniquely characterized by the pair of scaling exponents (c, d), which defines equivalence classes for all systems. The corresponding distribution functions are special forms of Lambert-W exponentials containing, as special cases, Boltzmann, stretched-exponential and Tsallis distributions (power laws) - all widely abundant in nature. This derivation is the first ab initio justification for generalized entropies. We next show how the phase-space volume of a system is related to its generalized entropy, and provide a concise criterion for when it is not of Boltzmann-Gibbs type but assumes a generalized form. We show that generalized entropies only become relevant when the dynamically (statistically) relevant fraction of degrees of freedom in a system vanishes in the thermodynamic limit. These are systems where the bulk of the degrees of freedom is frozen.
Systems governed by generalized entropies are therefore systems whose phase-space volume effectively collapses to a lower-dimensional 'surface'. We explicitly illustrate the situation for accelerating random walks and a spin system on a constant-connectancy network. We argue that generalized entropies should be relevant for self-organized critical systems such as sand piles, for spin systems which form meta-structures such as vortices, domains, instantons, etc., and for problems associated with anomalous diffusion.
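A minimal numerical sketch of the generalized entropy above, assuming the (c, d) parametrization as stated in the abstract. The upper incomplete gamma function is evaluated by plain quadrature so only the standard library is needed:

```python
# Sketch of the generalized entropy S_{c,d} = sum_i Gamma(d+1, 1 - c*ln p_i).
# The quadrature cutoff and step are illustrative choices.
import math

def upper_incomplete_gamma(s, x, n=20000, cutoff=50.0):
    """Gamma(s, x) = integral_x^inf t^(s-1) e^(-t) dt (trapezoidal rule)."""
    hi = x + cutoff                      # integrand is negligible beyond this
    h = (hi - x) / n
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = 0.5 * (f(x) + f(hi))
    for i in range(1, n):
        total += f(x + i * h)
    return total * h

def generalized_entropy(p, c, d):
    """S_{c,d} for a normalized distribution p; (c, d) = (1, 1) recovers
    the Boltzmann-Gibbs case up to an affine transformation."""
    return sum(upper_incomplete_gamma(d + 1, 1.0 - c * math.log(pi))
               for pi in p if pi > 0)

p_uniform = [0.25] * 4
S = generalized_entropy(p_uniform, c=1.0, d=1.0)
print(S)
```

For d = 1 the incomplete gamma has the closed form Γ(2, x) = (x + 1)e^(-x), which makes the quadrature easy to check.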
NASA Astrophysics Data System (ADS)
Tanaka, Masayuki; Cardoso, Rui; Bahai, Hamid
2018-04-01
In this work, the Moving Particle Semi-implicit (MPS) method is enhanced for multi-resolution problems with different resolutions in different parts of the domain, utilising a particle-splitting algorithm for the finer resolution and a particle-merging algorithm for the coarser resolution. The Least Square MPS (LSMPS) method is used for higher stability and accuracy. Novel boundary conditions are developed for the treatment of wall and pressure boundaries in the Multi-Resolution LSMPS method. A wall is represented by polygons for effective simulation of fluid flows with complex wall geometries, and the pressure boundary condition allows arbitrary inflow and outflow, making the method easier to use in simulations of channel flows. The accuracy of the proposed method was verified through simulations of channel flows and free-surface flows.
Multiresolution and Explicit Methods for Vector Field Analysis and Visualization
NASA Technical Reports Server (NTRS)
Nielson, Gregory M.
1997-01-01
This is a request for a second renewal (3d year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
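Tangent-curve computation of the kind described in the report can be sketched with a classical fourth-order Runge-Kutta integrator over a toy vector field. This is a generic illustration, not the project's code:

```python
# Sketch: computing a tangent curve (streamline) of a 2-D vector field
# with classical RK4 integration. The rigid-rotation field is a toy
# example whose tangent curves are circles around the origin.
def rk4_step(v, p, h):
    """One Runge-Kutta 4 step along vector field v from point p."""
    x, y = p
    k1 = v(x, y)
    k2 = v(x + 0.5*h*k1[0], y + 0.5*h*k1[1])
    k3 = v(x + 0.5*h*k2[0], y + 0.5*h*k2[1])
    k4 = v(x + h*k3[0], y + h*k3[1])
    return (x + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6.0,
            y + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6.0)

def tangent_curve(v, p0, h=0.01, steps=628):
    """Integrate a tangent curve of v starting at p0."""
    pts = [p0]
    for _ in range(steps):
        pts.append(rk4_step(v, pts[-1], h))
    return pts

rot = lambda x, y: (-y, x)          # rigid rotation about the origin
curve = tangent_curve(rot, (1.0, 0.0))
```

For visualization on curvilinear grids, the same stepper is applied to a field interpolated from grid samples; the quality of that interpolation is what the report's numerical work addresses.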
Spider-web inspired multi-resolution graphene tactile sensor.
Liu, Lu; Huang, Yu; Li, Fengyu; Ma, Ying; Li, Wenbo; Su, Meng; Qian, Xin; Ren, Wanjie; Tang, Kanglai; Song, Yanlin
2018-05-08
Multi-dimensional accurate response and smooth signal transmission are critical challenges in the advancement of multi-resolution recognition and complex environment analysis. Inspired by the structure-activity relationship between the discrepant microstructures of the spiral and radial threads in a spider web, we designed and printed graphene with porous and densely-packed microstructures and integrated them into a multi-resolution graphene tactile sensor. The three-dimensional (3D) porous graphene structure provides multi-dimensional deformation responses. The laminar densely-packed graphene structure contributes excellent conductivity with flexible stability. The spider-web-inspired printed pattern enables orientational and locational motion tracking. The multi-structure construction from a single graphene material can integrate discrepant electronic properties with remarkable flexibility, which will attract enormous attention for electronic skin, wearable devices and human-machine interactions.
NASA Astrophysics Data System (ADS)
Sagis, Leonard M. C.
2001-03-01
In this paper, we develop a theory for the calculation of the surface diffusion coefficient for an arbitrarily curved fluid-fluid interface. The theory is valid for systems in hydrodynamic equilibrium, with zero mass-averaged velocities in the bulk and interfacial regions. We restrict our attention to systems with isotropic bulk phases and an interfacial region that is isotropic in the plane parallel to the dividing surface. The dividing surface is assumed to be a simple interface, without memory effects or yield stresses. We derive an expression for the surface diffusion coefficient in terms of two parameters of the interfacial region: the coefficient for plane-parallel diffusion, D^(AB)_aa(ξ), and the driving force d^(B)_I||(ξ). This driving force is the parallel component of the driving force for diffusion in the interfacial region. We derive an expression for this driving force using the entropy balance.
Spatiotemporal chaos in the dynamics of buoyantly and diffusively unstable chemical fronts
NASA Astrophysics Data System (ADS)
Baroni, M. P. M. A.; Guéron, E.; De Wit, A.
2012-03-01
Nonlinear dynamics resulting from the interplay between diffusive and buoyancy-driven Rayleigh-Taylor (RT) instabilities of autocatalytic traveling fronts are analyzed numerically for various values of the relevant parameters. These are the Rayleigh numbers of the reactant A and autocatalytic product B solutions, as well as the ratio D = DB/DA between the diffusion coefficients of the two key chemical species. The interplay between the coarsening dynamics characteristic of the RT instability and the constant short-wavelength modulation of the diffusive instability can lead in some regimes to complex dynamics dominated by an irregular succession of births and deaths of fingers. By using spectral entropy measurements, we characterize the transition between order and spatial disorder in this system. The analysis of the power spectrum and autocorrelation function, moreover, identifies similarities between the various spatial patterns. The contribution of the diffusive instability to the complex dynamics is discussed.
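The spectral-entropy order measure mentioned above can be sketched as the Shannon entropy of a normalized power spectrum. This is a generic illustration with a hand-rolled DFT (not the authors' code); a concentrated spectrum gives low entropy (order), a spread-out one gives high entropy (disorder):

```python
# Sketch: spectral entropy of a 1-D signal as an order/disorder measure.
import cmath, math

def power_spectrum(signal):
    """One-sided power spectrum via a direct DFT (O(n^2), for clarity)."""
    n = len(signal)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        spec.append(abs(s) ** 2)
    return spec

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum."""
    spec = power_spectrum(signal)
    total = sum(spec)
    probs = [s / total for s in spec if s > 0]
    return -sum(p * math.log(p) for p in probs)

n = 64
# Pure sine: all power in one mode -> near-zero spectral entropy.
sine = [math.sin(2 * math.pi * 4 * j / n) for j in range(n)]
# Irregular (deterministic pseudo-random) signal: power spread widely.
noisy = [((j * 2654435761) % 97) / 97.0 - 0.5 for j in range(n)]
print(spectral_entropy(sine), spectral_entropy(noisy))
```

In the paper's 2-D setting the same quantity is computed from the spatial power spectrum of the finger pattern at each time step.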
On Entropy Generation and the Effect of Heat and Mass Transfer Coupling in a Distillation Process
NASA Astrophysics Data System (ADS)
Burgos-Madrigal, Paulina; Mendoza, Diego F.; López de Haro, Mariano
2018-01-01
The entropy production rates as obtained from the exergy analysis, entropy balance and the nonequilibrium thermodynamics approach are compared for two distillation columns. The first case is a depropanizer column involving a mixture of ethane, propane, n-butane and n-pentane. The other is a weighed sample of Mexican crude oil distilled with a pilot scale fractionating column. The composition, temperature and flow profiles, for a given duty and operating conditions in each column, are obtained with the Aspen Plus V8.4 software by using the RateFrac model with a rate-based nonequilibrium column. For the depropanizer column the highest entropy production rate is found in the central trays where most of the mass transfer occurs, while in the second column the highest values correspond to the first three stages (where the vapor mixture is in contact with the cold liquid reflux), and to the last three stages (where the highest temperatures take place). The importance of the explicit inclusion of thermal diffusion in these processes is evaluated. In the depropanizer column, the effect of the coupling between heat and mass transfer is found to be negligible, while for the fractionating column it becomes appreciable.
Lin, Shiang-Tai; Maiti, Prabal K; Goddard, William A
2010-06-24
Presented here is the two-phase thermodynamic (2PT) model for the calculation of energy and entropy of molecular fluids from the trajectory of molecular dynamics (MD) simulations. In this method, the density of states (DoS) functions (including the normal modes of translation, rotation, and intramolecular vibration) are determined from the Fourier transform of the corresponding velocity autocorrelation functions. A fluidicity parameter (f), extracted from the thermodynamic state of the system derived from the same MD, is used to partition the translation and rotation modes into a diffusive, gas-like component (with 3Nf degrees of freedom) and a nondiffusive, solid-like component. The thermodynamic properties, including the absolute value of entropy, are then obtained by applying quantum statistics to the solid component and hard sphere/rigid rotor thermodynamics to the gas component. The 2PT method produces exact thermodynamic properties of the system in two limiting states: the nondiffusive solid state (where the fluidicity is zero) and the ideal gas state (where the fluidicity becomes unity). We examine the 2PT entropy for various water models (F3C, SPC, SPC/E, TIP3P, and TIP4P-Ew) at ambient conditions and find good agreement with literature results obtained with other simulation techniques. We also validate the entropy of water in the liquid and vapor phases along the vapor-liquid equilibrium curve from the triple point to the critical point. We show that this method produces converged liquid-phase entropy in tens of picoseconds, making it an efficient means for extracting thermodynamic properties from MD simulations.
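The first step of the 2PT method, obtaining a density of states from the velocity autocorrelation function (VACF), can be sketched as a discrete cosine transform. The exponentially decaying VACF below is a synthetic stand-in for one measured from an MD trajectory, not real simulation data:

```python
# Sketch: density of states S(nu) ~ 2 * integral_0^inf VACF(t) cos(2*pi*nu*t) dt,
# evaluated with the trapezoidal rule. A purely exponential (diffusive)
# VACF yields a Lorentzian spectrum, largest at zero frequency -- the
# gas-like component of the 2PT decomposition.
import math

def density_of_states(vacf, dt, freqs):
    """Cosine transform of a sampled VACF at the given frequencies."""
    out = []
    m = len(vacf)
    for nu in freqs:
        acc = 0.5 * vacf[0]
        for j in range(1, m - 1):
            acc += vacf[j] * math.cos(2 * math.pi * nu * j * dt)
        acc += 0.5 * vacf[-1] * math.cos(2 * math.pi * nu * (m - 1) * dt)
        out.append(2.0 * dt * acc)
    return out

dt, tau = 0.01, 0.2                            # ps; illustrative decay time
vacf = [math.exp(-j * dt / tau) for j in range(2000)]
dos = density_of_states(vacf, dt, freqs=[0.0, 1.0, 5.0])
print(dos)   # analytic result is 2*tau / (1 + (2*pi*nu*tau)^2)
```

In the full method, S(0) fixes the diffusive (gas-like) fraction, and the remaining solid-like spectrum is integrated with quantum harmonic weighting to obtain the entropy.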
NASA Astrophysics Data System (ADS)
Choi, Jae Young; Kim, Dae Hoe; Choi, Seon Hyeong; Ro, Yong Man
2012-03-01
We investigated the feasibility of using multiresolution Local Binary Pattern (LBP) texture analysis to reduce false-positive (FP) detections in a computerized mass detection framework. A novel approach for extracting LBP features is devised to differentiate masses from normal breast tissue on mammograms. In particular, to characterize the LBP texture patterns of the boundaries of masses, as well as to preserve the spatial structure pattern of the masses, two individual LBP texture patterns are extracted from the core region and the ribbon region of each ROI. These two texture patterns are combined to produce the so-called multiresolution LBP feature of a given ROI. The proposed LBP texture analysis of the information in the mass core region and its margin has clearly proven to be significant and is not sensitive to the precise location of the boundaries of masses. In this study, 89 mammograms were collected from the public MIAS database (DB). To perform a more realistic assessment of the FP reduction process, the LBP texture analysis was applied directly to a total of 1,693 regions of interest (ROIs) automatically segmented by a computer algorithm. A Support Vector Machine (SVM) was applied for the classification of mass ROIs versus ROIs containing normal tissue. Receiver Operating Characteristic (ROC) analysis was conducted to evaluate the classification accuracy and its improvement using multiresolution LBP features. With multiresolution LBP features, the classifier achieved an average area under the ROC curve, A_z, of 0.956 during testing. In addition, the proposed LBP features outperform other state-of-the-art features designed for false-positive reduction.
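The basic 8-neighbour LBP code underlying such features can be sketched in a few lines. This is a generic illustration, not the authors' pipeline; a multiresolution variant would recompute the codes at several radii and scales:

```python
# Sketch: basic 8-neighbour local binary pattern (LBP) codes and their
# normalized histogram, the standard building block of LBP texture features.
def lbp_code(img, r, c):
    """8-bit LBP code of pixel (r, c): neighbours >= centre set a bit."""
    centre = img[r][c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes over interior pixels."""
    hist = [0] * 256
    n = 0
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
            n += 1
    return [h / n for h in hist]

flat = [[5] * 5 for _ in range(5)]
h = lbp_histogram(flat)
# In a flat region every neighbour equals the centre, so every code is 255.
```

The histograms from the core and ribbon regions would then be concatenated to form the combined feature vector fed to the SVM.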
NASA Astrophysics Data System (ADS)
Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik
2018-05-01
Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
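The finite-difference time-domain family of schemes that the simulation builds on can be illustrated with a minimal 1-D acoustic update. The grid size, wave speed and source below are illustrative; real transcranial models are 3-D, heterogeneous, and run on the GPU as described above:

```python
# Sketch: a minimal 1-D FDTD update for the scalar acoustic wave equation,
# p_tt = c^2 p_xx, with a second-order leapfrog scheme and fixed ends.
import math

def fdtd_1d(nx=400, nt=300, c=1500.0, dx=1e-3):
    dt = 0.4 * dx / c                       # CFL number 0.4 < 1: stable
    r2 = (c * dt / dx) ** 2
    p_prev = [0.0] * nx
    p = [0.0] * nx
    p[nx // 2] = 1.0                        # initial pressure impulse
    for _ in range(nt):
        p_next = [0.0] * nx                 # fixed (p = 0) boundary ends
        for i in range(1, nx - 1):
            p_next[i] = (2 * p[i] - p_prev[i]
                         + r2 * (p[i + 1] - 2 * p[i] + p[i - 1]))
        p_prev, p = p, p_next
    return p

field = fdtd_1d()
print(max(abs(v) for v in field))
```

A multi-resolution version subdivides the grid only near the skull and focus, which is the source of the speedups reported in the abstract.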
On Thermodiffusion and Gauge Transformations for Thermodynamic Fluxes and Driving Forces
NASA Astrophysics Data System (ADS)
Goldobin, D. S.
2017-12-01
We discuss molecular diffusion transport in infinitely dilute liquid solutions under nonisothermal conditions. This discussion is motivated by a recurring misinterpretation of thermodynamic transport equations written in terms of chemical potential in the presence of a temperature gradient. The transport equations contain contributions arising from a gauge transformation related to the fact that the chemical potential is determined only up to a summand of the form (AT + B) with arbitrary constants A and B, where A stems from the invariance of entropy with respect to shifts by a constant value and B from the corresponding invariance of the potential energy. The coefficients of the cross-effect terms in the thermodynamic fluxes are contributed by this gauge transformation and, generally, are not the actual physical cross-effect transport coefficients. Our treatment is based on consideration of the entropy balance and suggests a promising route toward evaluating the thermal diffusion constant from first principles. We also discuss the impossibility of "barodiffusion" for dilute solutions, understood in the sense of a diffusion flux driven by the pressure gradient itself. When one speaks of "barodiffusion" terms in the literature, these terms typically represent drift in an external potential force field (e.g., electric or gravitational fields), where in the final equations the specific force on molecules is substituted by an expression involving the hydrostatic pressure gradient that this external force field produces. Obviously, interpreting the latter as barodiffusion is fragile and may hinder accounting for the diffusion fluxes produced by the pressure gradient itself.
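The gauge freedom described above can be written compactly. The flux decomposition below is a schematic illustration of the point, not the paper's exact notation:

```latex
% Chemical potential is defined only up to a gauge:
\mu \;\to\; \mu' = \mu + A T + B, \qquad A,\, B = \text{const}.
% Writing the diffusion flux phenomenologically as
J = -L_{\mu}\,\nabla\mu - L_{T}\,\nabla T,
% invariance of the physical flux J under the gauge requires
L_{\mu}' = L_{\mu}, \qquad L_{T}' = L_{T} - A\,L_{\mu},
% so the coefficient multiplying \nabla T is gauge dependent and is not,
% by itself, the physical thermodiffusion coefficient.
```

Substituting ∇μ' = ∇μ + A∇T into the primed flux and collecting terms shows J is unchanged, which is the invariance the abstract appeals to.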
NASA Astrophysics Data System (ADS)
Nadi, Fatemeh; Tzempelikos, Dimitrios
2018-01-01
In this work, apples of cv. Golden Delicious were cut into slices 5 and 7 mm thick and then vacuum dried at 50, 60 and 70 °C and a pressure of 0.02 bar. The thin-layer drying kinetics were studied, and mass transfer properties, specifically the effective moisture diffusivity and the convective mass transfer coefficient, were evaluated using Fick's diffusion equation. Also, thermodynamic parameters of the process, i.e. enthalpy (ΔH), entropy (ΔS) and Gibbs free energy (ΔG), were determined. Colour properties were evaluated as one of the important indicators of food quality and marketability. The determination of mass transfer parameters and thermodynamic properties of vacuum-dried apple slices has not been discussed much in the literature. In conclusion, Nadi's model best fitted the observed drying data. Thermodynamic properties were determined based on the temperature dependence of the drying constant of the Henderson and Pabis model. The enthalpy and entropy diminished, while the Gibbs free energy increased, with increasing drying temperature; therefore, it was possible to verify that variation in the diffusion process in the apple during drying depends on energetic contributions of the surrounding environment. The obtained results showed that the diffusivity increased by 69%, while the mass transfer coefficient increased even more, by 75%, over a temperature variation of 20 °C. The increase in the dimensionless Biot number was 20%.
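The Arrhenius-type analysis used to obtain activation parameters from the temperature dependence of a drying constant can be sketched as follows. The rate constants below are synthetic (generated from an assumed activation energy), not the paper's measurements:

```python
# Sketch: fitting ln k vs 1/T (Arrhenius) to recover the activation
# energy, then the activation enthalpy dH = Ea - R*T.
import math

R = 8.314  # gas constant, J/(mol*K)

def linear_fit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def activation_energy(temps_K, ks):
    """Arrhenius: ln k = ln k0 - Ea/(R*T); fit ln k against 1/T."""
    slope, _ = linear_fit([1.0 / T for T in temps_K],
                          [math.log(k) for k in ks])
    return -slope * R   # J/mol

# Synthetic drying constants generated with Ea = 30 kJ/mol:
Ea_true = 30e3
temps = [323.15, 333.15, 343.15]            # 50, 60, 70 degC
ks = [2.0e-3 * math.exp(-Ea_true / (R * T)) for T in temps]
Ea = activation_energy(temps, ks)

T_mean = sum(temps) / len(temps)
dH = Ea - R * T_mean                        # activation enthalpy
print(Ea, dH)
```

ΔS and ΔG then follow from transition-state-style relations once a pre-exponential convention is fixed, which is the step where treatments in the literature differ.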
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masset, F. S.; Casoli, J., E-mail: masset@fis.unam.m, E-mail: jules.casoli@cea.f, E-mail: masset@fis.unam.m
2010-11-10
We provide torque formulae for low-mass planets undergoing type I migration in gaseous disks. These torque formulae put special emphasis on the horseshoe drag, which is prone to saturation: the asymptotic value reached by the horseshoe drag depends on a balance between coorbital dynamics (which tends to cancel out or saturate the torque) and diffusive processes (which tend to restore the unperturbed disk profiles, thereby desaturating the torque). We entertain the question of this asymptotic value and derive torque formulae that give the total torque as a function of the disk's viscosity and thermal diffusivity. The horseshoe drag features two components: one that scales with the vortensity gradient and another that scales with the entropy gradient and constitutes the most promising candidate for halting inward type I migration. Our analysis, which is complemented by numerical simulations, recovers characteristics already noted by numericists, namely, that the viscous timescale across the horseshoe region must be shorter than the libration time in order to avoid saturation and that, provided this condition is satisfied, the entropy-related part of the horseshoe drag remains large if the thermal timescale is shorter than the libration time. Side results include a study of the Lindblad torque as a function of thermal diffusivity and a contribution to the corotation torque arising from vortensity viscously created at the contact discontinuities that appear at the horseshoe separatrices. For the convenience of the reader mostly interested in the torque formulae, Section 8 is self-contained.
NASA Astrophysics Data System (ADS)
Nadi, Fatemeh; Tzempelikos, Dimitrios
2018-07-01
In this work, apples of cv. Golden Delicious were cut into slices 5 and 7 mm thick and then vacuum dried at 50, 60 and 70 °C and a pressure of 0.02 bar. Thin-layer drying kinetics were studied, and mass transfer properties, specifically effective moisture diffusivity and the convective mass transfer coefficient, were evaluated using Fick's diffusion equation. Thermodynamic parameters of the process, i.e. enthalpy (ΔH), entropy (ΔS) and Gibbs free energy (ΔG), were also determined. Colour properties were evaluated as one of the important indicators of food quality and marketability. Determination of mass transfer parameters and thermodynamic properties of vacuum-dried apple slices has received little attention in the literature. Nadi's model fitted the observed drying data best. Thermodynamic properties were determined from the temperature dependence of the drying constant of the Henderson and Pabis model, leading to the conclusion that the variation in drying kinetics depends on the energy contribution of the surrounding environment. The enthalpy and entropy diminished while the Gibbs free energy increased with increasing drying temperature; it was therefore possible to verify that variation in the diffusion process in the apple during drying depends on energetic contributions of the environment. The results showed that diffusivity increased by 69%, while the mass transfer coefficient increased even more, by 75%, for a 20 °C rise in drying temperature. The corresponding increase in the dimensionless Biot number was 20%.
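The thermodynamic quantities above follow from the Arrhenius behaviour of the Henderson-Pabis drying constant via activated-complex theory. A minimal sketch of the standard relations; the activation energy Ea and pre-exponential factor A below are illustrative placeholders, not values from the paper:

```python
import numpy as np

R = 8.314          # J/(mol K), universal gas constant
KB = 1.380649e-23  # J/K, Boltzmann constant
HP = 6.62607e-34   # J s, Planck constant

def thermo_from_arrhenius(Ea, A, T):
    """Activation enthalpy, entropy and Gibbs free energy from the
    Arrhenius parameters of a drying constant k(T) = A*exp(-Ea/(R*T)),
    using activated-complex theory."""
    dH = Ea - R * T                              # J/mol
    dS = R * (np.log(A) - np.log(KB * T / HP))   # J/(mol K)
    dG = dH - T * dS                             # J/mol
    return dH, dS, dG

# Hypothetical Arrhenius parameters, not fitted values from the study
Ea, A = 30e3, 1e-2   # J/mol, 1/s
for T in (323.15, 333.15, 343.15):   # 50, 60, 70 degC
    dH, dS, dG = thermo_from_arrhenius(Ea, A, T)
    print(f"T={T:.2f} K  dH={dH:.0f} J/mol  dS={dS:.1f} J/(mol K)  dG={dG:.0f} J/mol")
```

With any positive Ea and a small pre-exponential factor these relations reproduce the trend reported in the abstract: ΔH and ΔS decrease while ΔG increases as the drying temperature rises.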
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
Larson-Miller Constant of Heat-Resistant Steel
NASA Astrophysics Data System (ADS)
Tamura, Manabu; Abe, Fujio; Shiba, Kiyoyuki; Sakasegawa, Hideo; Tanigawa, Hiroyasu
2013-06-01
Long-term rupture data for 79 types of heat-resistant steels including carbon steel, low-alloy steel, high-alloy steel, austenitic stainless steel, and superalloy were analyzed, and a constant for the Larson-Miller (LM) parameter was obtained in the current study for each material. The calculated LM constant, C, is approximately 20 for heat-resistant steels and alloys, except for high-alloy martensitic steels with high creep resistance, for which C ≈ 30. The apparent activation energy was also calculated, and the LM constant was found to be proportional to the apparent activation energy with a high correlation coefficient, which suggests that the LM constant is a material constant possessing intrinsic physical meaning. The contribution of the entropy change to the LM constant is not small, especially for several martensitic steels with large values of C. Deformation of such martensitic steels should be accompanied by an entropy change of at least 10 times the gas constant, in addition to the entropy change due to self-diffusion.
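The Larson-Miller parameter ties absolute temperature T and rupture time t_r together as P = T(C + log10 t_r). A small sketch of how C can be recovered from two rupture tests at the same stress, assuming both share one value of P (the numbers are synthetic, not data from the study):

```python
import math

def lm_parameter(T, t_r, C):
    """Larson-Miller parameter P = T*(C + log10(t_r)),
    with T in kelvin and rupture time t_r in hours."""
    return T * (C + math.log10(t_r))

def lm_constant(T1, t1, T2, t2):
    """Recover the LM constant C from two rupture tests at the same
    stress: equating T1*(C + log t1) = T2*(C + log t2) and solving."""
    return (T2 * math.log10(t2) - T1 * math.log10(t1)) / (T1 - T2)

# Synthetic pair generated with C = 20 and P = 25000 (illustrative only)
T1, t1 = 800.0, 10 ** 11.25
T2, t2 = 900.0, 10 ** (25000 / 900 - 20)
print(lm_constant(T1, t1, T2, t2))  # recovers ~20
```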
A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map
NASA Astrophysics Data System (ADS)
Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad
2016-06-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on grayscale image encryption using dynamic harmony search (DHS). In this research, a chaotic map is first used to create cipher images, and then maximum entropy and a minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, diffusion of the plain image is performed, with DHS maximizing entropy as the fitness function. In the second step, horizontal and vertical permutations are applied to the best cipher image obtained in the previous step, with DHS minimizing the correlation coefficient as the fitness function. The simulation results show that the proposed method attains a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001.
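The two fitness measures driving the DHS search are standard image statistics. A minimal sketch assuming 8-bit grayscale images; the combined fitness and its weight w are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image; an ideal
    cipher image approaches the maximum of 8."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adjacent_correlation(img):
    """Correlation of horizontally adjacent pixel pairs; an ideal
    cipher image approaches 0."""
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(x, y)[0, 1])

def fitness(img, w=1.0):
    """Illustrative combined objective (lower is better): maximize
    entropy, minimize |adjacent correlation|."""
    return -shannon_entropy(img) + w * abs(adjacent_correlation(img))
```

A good cipher image scores low on this objective; a smooth natural image (high pixel correlation, non-uniform histogram) scores high.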
Xu, Wei; Cao, Maosen; Ding, Keqin; Radzieński, Maciej; Ostachowicz, Wiesław
2017-01-01
Carbon fiber reinforced polymer laminates are increasingly used in the aerospace and civil engineering fields. Identifying cracks in carbon fiber reinforced polymer laminated beam components is of considerable significance for ensuring the integrity and safety of whole structures. With the development of high-resolution measurement technologies, mode-shape-based crack identification in such laminated beam components has become an active research focus. Despite its sensitivity to cracks, however, this method is susceptible to noise. To address this deficiency, this study proposes a new concept of multi-resolution modal Teager–Kaiser energy, which is the Teager–Kaiser energy of a mode shape represented in multi-resolution, for identifying cracks in carbon fiber reinforced polymer laminated beams. The efficacy of this concept is analytically demonstrated by identifying cracks in Timoshenko beams with general boundary conditions; and its applicability is validated by diagnosing cracks in a carbon fiber reinforced polymer laminated beam, whose mode shapes are precisely acquired via non-contact measurement using a scanning laser vibrometer. The analytical and experimental results show that multi-resolution modal Teager–Kaiser energy is capable of designating the presence and location of cracks in these beams under noisy environments. The proposed method holds promise for developing crack identification systems for carbon fiber reinforced polymer laminates. PMID:28773016
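The discrete Teager–Kaiser energy operator at the heart of this method is only three terms per sample. A minimal sketch, with a sampled sine standing in for a measured mode shape:

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator,
    psi[x](n) = x(n)**2 - x(n-1)*x(n+1), for interior samples."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a smooth mode shape sampled from sin(w*n) the operator returns the
# constant sin(w)**2; a local stiffness change (crack) breaks this
# smoothness and shows up as a sharp peak in the output.
n = np.arange(200)
mode = np.sin(0.1 * n)
print(teager_kaiser(mode)[:3])
```

The identity sin²(ωn) − sin(ω(n−1))·sin(ω(n+1)) = sin²(ω) makes the operator exactly flat on a pure sinusoid, which is why deviations localize damage so cleanly.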
Multiresolution persistent homology for excessively large biomolecular datasets
NASA Astrophysics Data System (ADS)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei
2015-10-01
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large-scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the dataset and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
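A rigidity density of the flexibility-rigidity-index type can be sketched directly from its definition. The Gaussian-style kernel, the parameter names eta and kappa, and the toy atom positions below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def rigidity_density(grid_pts, atoms, eta, kappa=2.0):
    """Flexibility-rigidity-index style rigidity density,
    mu(r) = sum_j exp(-(|r - r_j| / eta)**kappa).
    eta acts as the resolution parameter: a small eta resolves
    atomic detail, a large eta blurs out to domain-scale features,
    which is what lets the filtration focus on a chosen scale."""
    d = np.linalg.norm(grid_pts[:, None, :] - atoms[None, :, :], axis=2)
    return np.exp(-(d / eta) ** kappa).sum(axis=1)

# Two toy "atoms" and two probe points (purely illustrative)
atoms = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
probe = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
print(rigidity_density(probe, atoms, eta=1.0))
```

Sub-level (or super-level) sets of this density over a grid would then feed a cubical-complex filtration in place of the raw point cloud.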
Multi-resolution extension for transmission of geodata in a mobile context
NASA Astrophysics Data System (ADS)
Follin, Jean-Michel; Bouju, Alain; Bertrand, Frédéric; Boursier, Patrice
2005-03-01
A solution is proposed for the management of multi-resolution vector data in a mobile spatial information visualization system. The client-server architecture and the models of data and transfer of the system are presented first. The aim of this system is to reduce data exchanged between client and server by reusing data already present on the client side. Then, an extension of this system to multi-resolution data is proposed. Our solution is based on the use of increments in a multi-scale database. A database architecture where data sets for different predefined scales are precomputed and stored on the server side is adopted. In this model, each object representing the same real world entities at different levels of detail has to be linked beforehand. Increments correspond to the difference between two datasets with different levels of detail. They are transmitted in order to increase (or decrease) the detail to the client upon request. They include generalization and refinement operators allowing transitions between the different levels. Finally, a framework suited to the transfer of multi-resolution data in a mobile context is presented. This allows reuse of data locally available at different levels of detail and, in this way, reduces the amount of data transferred between client and server.
Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study
NASA Astrophysics Data System (ADS)
Bayraktar, Bulent; Analoui, Mostafa
2004-05-01
Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two purposes (smoothing while keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors regularized the original PM algorithm to overcome its drawbacks. We compared the denoising performance of such 'fundamental' AD algorithms with one of the most powerful multiresolution tools available today, namely, wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm perform comparably with wavelet shrinkage denoising, saving considerable computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: optical coherence tomography (OCT).
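The original Perona–Malik scheme that the compared algorithms build on fits in a few lines. A minimal sketch using one of PM's two classic conductance functions, g(s) = exp(-(s/K)^2); the parameter values are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, K=0.1, lam=0.2):
    """Explicit 4-neighbour Perona-Malik anisotropic diffusion.
    The conductance g shrinks the flux across strong gradients
    (edges) while allowing smoothing inside flat regions.
    lam <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / K) ** 2)
    for _ in range(n_iter):
        # neighbour differences with replicated (Neumann) borders
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u   # north
        ds = p[2:, 1:-1] - u    # south
        de = p[1:-1, 2:] - u    # east
        dw = p[1:-1, :-2] - u   # west
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the per-pair fluxes are antisymmetric and the replicated border contributes zero flux, the scheme conserves the image mean while reducing noise variance.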
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step.
This approach basically consisted in 1) decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, 2) generating one statistical downscaling model per time-scale, and 3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
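The three-step procedure (decompose, fit one model per scale, sum the scale-dependent models) can be sketched with a simple redundant moving-average multiresolution split in place of a full discrete wavelet transform, and a per-scale least-squares slope in place of the downscaling models; both substitutions are simplifying assumptions for illustration:

```python
import numpy as np

def multires_decompose(x, levels=3):
    """A-trous-style multiresolution split: repeatedly smooth with a
    widening moving average; each detail is the difference between
    successive smooths, so the components sum back to x exactly."""
    comps, current = [], np.asarray(x, dtype=float)
    for j in range(levels):
        w = 2 ** (j + 1) + 1
        kernel = np.ones(w) / w
        pad = np.pad(current, w // 2, mode="edge")
        smooth = np.convolve(pad, kernel, mode="valid")
        comps.append(current - smooth)   # detail at scale j
        current = smooth
    comps.append(current)                # final smooth (trend)
    return comps

def scalewise_fit(predictor, predictand, levels=3):
    """One least-squares slope per scale, summed into a single
    reconstruction (a scalar stand-in for per-scale ESD models)."""
    px = multires_decompose(predictor, levels)
    py = multires_decompose(predictand, levels)
    rec = np.zeros_like(np.asarray(predictand, dtype=float))
    for cx, cy in zip(px, py):
        beta = np.dot(cx, cy) / np.dot(cx, cx)  # per-scale slope
        rec += beta * cx
    return rec
```

The additive split is what allows a different predictor/predictand relationship at each time-scale, which a single direct regression cannot express.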
NASA Astrophysics Data System (ADS)
Bringuier, E.
2009-11-01
The paper analyses particle diffusion from a thermodynamic standpoint. The main goal of the paper is to highlight the conceptual connection between particle diffusion, which belongs to non-equilibrium statistical physics, and mechanics, which deals with particle motion, at the level of third-year university courses. We start out from the fact that, near equilibrium, particle transport should occur down the gradient of the chemical potential. This yields Fick's law with two additional advantages. First, splitting the chemical potential into 'mechanical' and 'chemical' contributions shows how transport and mechanics are linked through the diffusivity-mobility relationship. Second, splitting the chemical potential into entropic and energetic contributions discloses the respective roles of entropy maximization and energy minimization in driving diffusion. The paper addresses first unary diffusion, where there is only one mobile species in an immobile medium, and next turns to binary diffusion, where two species are mobile with respect to each other in a fluid medium. The interrelationship between unary and binary diffusivities is brought out and it is shown how binary diffusion reduces to unary diffusion in the limit of high dilution of one species amidst the other one. Self- and mutual diffusion are considered and contrasted within the thermodynamic framework; self-diffusion is a time-dependent manifestation of the Gibbs paradox of mixing.
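The splitting described above can be written compactly for a dilute mobile species in an external potential U (standard textbook relations rather than equations quoted from the paper):

```latex
% Chemical potential of a dilute mobile species in an external potential U:
\mu = \mu_0(T) + k_{\mathrm{B}} T \ln c + U(\mathbf{r})
% Flux down the chemical-potential gradient, with mobility M:
\mathbf{J} = -\,M c\,\nabla\mu
           = \underbrace{-\,M k_{\mathrm{B}} T\,\nabla c}_{\text{entropic (Fick)}}
             \;\underbrace{-\,M c\,\nabla U}_{\text{mechanical drift}}
```

Identifying the entropic term with Fick's law gives D = M k_B T, the diffusivity-mobility (Einstein) relationship the abstract refers to; the drift term carries the 'mechanical' contribution.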
Poveda, Ferran; Gil, Debora; Martí, Enric; Andaluz, Albert; Ballester, Manel; Carreras, Francesc
2013-10-01
Deeper understanding of the myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented an agreement. We sought to deepen this knowledge through advanced computer graphical representations of the myocardial fiber architecture by diffusion tensor magnetic resonance imaging. We performed automatic tractography reconstruction of unsegmented diffusion tensor magnetic resonance imaging datasets of a canine heart from the public database of the Johns Hopkins University. Full-scale tractographies have been built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors of the diffusion tensor volumes. We also introduced a novel multiscale visualization technique in order to obtain a simplified tractography. This methodology retains the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart. Output analysis of our tractographic representations showed exact correlation with low-level details of myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array. Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3-dimensional levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by F. Torrent-Guasp. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.
Self-diffusion in MgO--a density functional study.
Runevall, Odd; Sandberg, Nils
2011-08-31
Density functional theory calculations have been performed to study self-diffusion in magnesium oxide, a model material for a wide range of ionic compounds. Formation energies and entropies of Schottky defects and divacancies were obtained by means of total energy and phonon calculations in supercell configurations. Transition state theory was used to estimate defect migration rates, with migration energies taken from static calculations, and the corresponding frequency factors estimated from the phonon spectrum. In all static calculations we corrected for image effects using either a multipole expansion or an extrapolation to the low concentration limit. It is shown that both methods give similar results. The results for self-diffusion of Mg and O confirm the previously established picture, namely that in materials of nominal purity, Mg diffuses extrinsically by a single vacancy mechanism, while O diffuses intrinsically by a divacancy mechanism. Quantitatively, the current results are in very good agreement with experiments concerning O diffusion, while for Mg the absolute diffusion rate is generally underestimated by a factor of 5-10. The reason for this discrepancy is discussed.
Second law of thermodynamics in volume diffusion hydrodynamics in multicomponent gas mixtures
NASA Astrophysics Data System (ADS)
Dadzie, S. Kokou
2012-10-01
We present the thermodynamic structure of a new continuum flow model for multicomponent gas mixtures. The continuum model is based on a volume diffusion concept involving specific species. It is independent of the observer's reference frame and enables straightforward tracking of a selected species within a mixture composed of a large number of constituents. A method to derive the second law and the constitutive equations accompanying the model is presented. Using the configuration of a rotating fluid, we illustrate an example of non-classical flow physics predicted by the new contributions in the entropy and constitutive equations.
James Wickham; Collin Homer; James Vogelmann; Alexa McKerrow; Rick Mueler; Nate Herold; John Coulston
2014-01-01
The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies' needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the...
An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.
2016-06-01
Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.
Multiresolution analysis of Bursa Malaysia KLCI time series
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed
2017-05-01
In general, a time series is simply a sequence of numbers collected at regular intervals over a period. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity data, and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using time domain as well as frequency domain analysis. Thereafter, in-sample forecasting can be executed for the system of interest. In this study, multiresolution analysis, with the assistance of discrete wavelet transforms (DWT) and the maximal overlap discrete wavelet transform (MODWT), is used to pinpoint special characteristics of Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
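A self-contained Haar version of the DWT used in such analyses can be written in a few lines; the orthonormal Haar filters are the simplest choice, and MODWT or other wavelet families would follow the same decompose/reconstruct pattern:

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Orthonormal Haar DWT. Returns [approx, d_levels, ..., d_1];
    len(x) must be divisible by 2**levels."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))  # detail (finest first)
        a = (even + odd) / np.sqrt(2)              # approximation
    return [a] + details[::-1]

def haar_idwt(coeffs):
    """Inverse Haar DWT (perfect reconstruction)."""
    a = coeffs[0]
    for d in coeffs[1:]:
        even = (a + d) / np.sqrt(2)
        odd = (a - d) / np.sqrt(2)
        a = np.empty(2 * a.size)
        a[0::2], a[1::2] = even, odd
    return a
```

Because the transform is orthonormal it preserves energy across scales, so the variance of daily closing prices or returns can be attributed scale by scale, which is exactly what multiresolution analysis of an index series exploits.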
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
Frenetic Bounds on the Entropy Production
NASA Astrophysics Data System (ADS)
Maes, Christian
2017-10-01
We give a systematic derivation of positive lower bounds for the expected entropy production (EP) rate in classical statistical mechanical systems obeying a dynamical large deviation principle. The logic is the same for the return to thermodynamic equilibrium as it is for steady nonequilibria working under the condition of local detailed balance. We recover there recently studied "uncertainty" relations for the EP, appearing in studies about the effectiveness of mesoscopic machines. In general our refinement of the positivity of the expected EP rate is obtained in terms of a positive and even function of the expected current(s) which measures the dynamical activity in the system, a time-symmetric estimate of the changes in the system's configuration. Also underdamped diffusions can be included in the analysis.
Cool Core Clusters from Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rasia, E.; Borgani, S.; Murante, G.; Planelles, S.; Beck, A. M.; Biffi, V.; Ragone-Figueroa, C.; Granato, G. L.; Steinborn, L. K.; Dolag, K.
2015-11-01
We present results obtained from a set of cosmological hydrodynamic simulations of galaxy clusters, aimed at comparing predictions with observational data on the diversity between cool-core (CC) and non-cool-core (NCC) clusters. Our simulations include the effects of stellar and active galactic nucleus (AGN) feedback and are based on an improved version of the smoothed particle hydrodynamics code GADGET-3, which ameliorates gas mixing and better captures gas-dynamical instabilities by including a suitable artificial thermal diffusion. In this Letter, we focus our analysis on the entropy profiles, the primary diagnostic we used to classify the degree of cool-coreness of clusters, and the iron profiles. In keeping with observations, our simulated clusters display a variety of behaviors in entropy profiles: they range from steadily decreasing profiles at small radii, characteristic of CC systems, to nearly flat core isentropic profiles, characteristic of NCC systems. Using observational criteria to distinguish between the two classes of objects, we find that they occur in similar proportions in both simulations and observations. Furthermore, we also find that simulated CC clusters have profiles of iron abundance that are steeper than those of NCC clusters, which is also in agreement with observational results. We show that the capability of our simulations to generate a realistic CC structure in the cluster population is due to AGN feedback and artificial thermal diffusion: their combined action allows us to naturally distribute the energy extracted from super-massive black holes and to compensate for the radiative losses of low-entropy gas with short cooling time residing in the cluster core.
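The limiting behavior of the Kappa distribution discussed above can be illustrated with a minimal Python sketch, assuming the standard unnormalized form of the Kappa speed distribution with hypothetical parameter values: it reduces to a Maxwellian as κ → ∞ while retaining a heavier tail at finite κ.

```python
import math

def kappa_dist(v, theta=1.0, kappa=4.0):
    """Unnormalized Kappa speed distribution: (1 + v^2/(kappa*theta^2))^-(kappa+1)."""
    return (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwellian(v, theta=1.0):
    """Unnormalized Maxwellian limit, recovered as kappa -> infinity."""
    return math.exp(-v * v / (theta * theta))

# At large kappa the Kappa distribution approaches the Maxwellian at fixed speed,
# while at small kappa it keeps an enhanced (power-law) tail at high speeds.
v = 1.5
print(kappa_dist(v, kappa=1e6), maxwellian(v))
print(kappa_dist(3.0, kappa=2.0), maxwellian(3.0))
```

This is only the empirical fitting form referred to in the abstract, not the Fokker-Planck solution itself.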
Nketiah, Gabriel; Elschot, Mattijs; Kim, Eugene; Teruel, Jose R; Scheenen, Tom W; Bathen, Tone F; Selnæs, Kirsten M
2017-07-01
To evaluate the diagnostic relevance of T2-weighted (T2W) MRI-derived textural features relative to quantitative physiological parameters derived from diffusion-weighted (DW) and dynamic contrast-enhanced (DCE) MRI in Gleason score (GS) 3+4 and 4+3 prostate cancers. 3T multiparametric MRI was performed on 23 prostate cancer patients prior to prostatectomy. Textural features [angular second moment (ASM), contrast, correlation, entropy], apparent diffusion coefficient (ADC), and DCE pharmacokinetic parameters (Ktrans and Ve) were calculated from index tumours delineated on the T2W, DW, and DCE images, respectively. The associations between the textural features and prostatectomy GS and the MRI-derived parameters, and the utility of the parameters in differentiating between GS 3+4 and 4+3 prostate cancers, were assessed statistically. ASM and entropy correlated significantly (p < 0.05) with both GS and median ADC. Contrast correlated moderately with median ADC. The textural features correlated insignificantly with Ktrans and Ve. GS 4+3 cancers had significantly lower ASM and higher entropy than 3+4 cancers, but insignificant differences in median ADC, Ktrans, and Ve. The combined texture-MRI parameters yielded higher classification accuracy (91%) than the individual parameter sets. T2W MRI-derived textural features could serve as potential diagnostic markers, sensitive to the pathological differences in prostate cancers. • T2W MRI-derived textural features correlate significantly with Gleason score and ADC. • T2W MRI-derived textural features differentiate Gleason score 3+4 from 4+3 cancers. • T2W image textural features could augment tumour characterization.
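The four textural features named above (ASM, contrast, correlation, entropy) are standard Haralick statistics of a normalized gray-level co-occurrence matrix. A minimal sketch, not the authors' pipeline, might compute them as follows:

```python
import math

def glcm_features(P):
    """Haralick-style texture features from a gray-level co-occurrence matrix.
    P is a square matrix of co-occurrence counts or probabilities; it is
    normalized to sum to 1 before the statistics are computed."""
    n = len(P)
    total = sum(sum(row) for row in P)
    P = [[p / total for p in row] for row in P]
    asm_ = sum(p * p for row in P for p in row)                       # angular second moment
    contrast = sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    entropy = -sum(p * math.log(p) for row in P for p in row if p > 0)
    mu_i = sum(i * P[i][j] for i in range(n) for j in range(n))
    mu_j = sum(j * P[i][j] for i in range(n) for j in range(n))
    var_i = sum((i - mu_i) ** 2 * P[i][j] for i in range(n) for j in range(n))
    var_j = sum((j - mu_j) ** 2 * P[i][j] for i in range(n) for j in range(n))
    cov = sum((i - mu_i) * (j - mu_j) * P[i][j] for i in range(n) for j in range(n))
    # Correlation is undefined for a constant matrix; fall back to 1.0 there.
    corr = cov / math.sqrt(var_i * var_j) if var_i > 0 and var_j > 0 else 1.0
    return {"ASM": asm_, "contrast": contrast, "correlation": corr, "entropy": entropy}

# A purely diagonal (homogeneous) matrix: high ASM, zero contrast, correlation 1
print(glcm_features([[0.5, 0.0], [0.0, 0.5]]))
```

The abstract's finding that higher-grade tumours show lower ASM and higher entropy corresponds to a flatter, more disordered co-occurrence matrix.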
Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong
2017-01-01
The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aimed at optimizing processing parameters in an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro and meso scales are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established that inputs the macroscopic results into the microscopic system as its boundary condition, allowing communication between the different scales. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient, and entropy–enthalpy values, are calculated under different processing parameters based on the molecular dynamics method. A low-entropy region is then obtained in terms of the interrelation among entropy–enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively and further improve the mechanical properties of laminates. PMID:28869520
2002-01-01
their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous ...cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested
Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.
2016-06-01
The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
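A minimal sketch of the three discrepancy indices, assuming the usual definitions (PSE as under-segmented area over total reference area, NSR as the normalized mismatch between reference and corresponding segment counts, ED2 as their Euclidean combination); the area and count inputs here are hypothetical:

```python
import math

def ed2_indices(ref_areas, under_seg_areas, n_ref, n_corresponding):
    """Segmentation discrepancy indices.
    PSE: total under-segmented area / total reference polygon area.
    NSR: |m - v| / m, with m reference polygons and v corresponding segments.
    ED2: sqrt(PSE^2 + NSR^2), combining both geometric and arithmetic discrepancy."""
    pse = sum(under_seg_areas) / sum(ref_areas)
    nsr = abs(n_ref - n_corresponding) / n_ref
    return pse, nsr, math.sqrt(pse ** 2 + nsr ** 2)

# Perfect segmentation: no under-segmented area, one segment per reference polygon
print(ed2_indices([100.0, 250.0], [0.0], 2, 2))
# Imperfect case: 10% area discrepancy and one extra segment
print(ed2_indices([100.0], [10.0], 1, 2))
```

Lower ED2 indicates a segmentation geometrically and arithmetically closer to the reference polygons.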
Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H
2012-12-01
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
Wood, W.T.; Hart, P.E.; Hutchinson, D.R.; Dutta, N.; Snyder, F.; Coffin, R.B.; Gettrust, J.F.
2008-01-01
To determine the impact of seeps and focused flow on the occurrence of shallow gas hydrates, several seafloor mounds in the Atwater Valley lease area of the Gulf of Mexico were surveyed with a wide range of seismic frequencies. Seismic data were acquired with a deep-towed, Helmholz resonator source (220-820 Hz); a high-resolution, Generator-Injector air-gun (30-300 Hz); and an industrial air-gun array (10-130 Hz). Each showed a significantly different response in this weakly reflective, highly faulted area. Seismic modeling and observations of reversed-polarity reflections and small scale diffractions are consistent with a model of methane transport dominated regionally by diffusion but punctuated by intense upward advection responsible for the bathymetric mounds, as well as likely advection along pervasive filamentous fractures away from the mounds.
Investigating Diffusion and Entropy with Carbon Dioxide-Filled Balloons
ERIC Educational Resources Information Center
Jadrich, James; Bruxvoort, Crystal
2010-01-01
Fill an ordinary latex balloon with helium gas and you know what to expect. Over the next day or two the volume will decrease noticeably as helium escapes from the balloon. So what happens when a latex balloon is filled with carbon dioxide gas? Surprisingly, carbon dioxide balloons deflate at rates as much as an order of magnitude faster than…
Nonlinear Analysis of Surface EMG Time Series of Back Muscles
NASA Astrophysics Data System (ADS)
Dolton, Donald C.; Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-10-01
A nonlinear analysis of surface electromyography time series of subjects with and without low back pain is presented. The mean-square displacement and entropy show anomalous diffusive behavior in the intermediate time range 10 ms < t < 1 s. This behavior implies the presence of correlations in the signal. We discuss the shape of the power spectrum of the signal.
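Anomalous diffusive behavior of the kind reported above is conventionally quantified by the exponent α in MSD ~ t^α (α = 1 is normal diffusion). A small sketch, using synthetic data rather than the EMG series, estimates α by a log-log least-squares fit:

```python
import math

def msd_exponent(times, msd):
    """Least-squares slope of log(MSD) versus log(t), i.e. alpha in MSD ~ t^alpha.
    alpha = 1 indicates normal diffusion; alpha != 1 flags anomalous behavior."""
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Synthetic superdiffusive data with MSD = t^1.5 recovers alpha = 1.5
ts = [1, 2, 4, 8, 16]
print(msd_exponent(ts, [t ** 1.5 for t in ts]))
```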
High Interfacial Barriers at Narrow Carbon Nanotube-Water Interfaces.
Varanasi, Srinivasa Rao; Subramanian, Yashonath; Bhatia, Suresh K
2018-06-26
Water displays anomalous fast diffusion in narrow carbon nanotubes (CNTs), a behavior that has been reproduced in both experimental and simulation studies. However, little is reported on the effect of bulk water-CNT interfaces, which is critical to exploiting the fast transport of water across narrow carbon nanotubes in actual applications. Using molecular dynamics simulations, we investigate here the effect of such interfaces on the transport of water across arm-chair CNTs of different diameters. Our results demonstrate that diffusion of water is significantly retarded in narrow CNTs due to bulk regions near the pore entrance. The slowdown of dynamics can be attributed to the presence of large energy barriers at bulk water-CNT interfaces. The presence of such intense barriers at the bulk-CNT interface arises due to the entropy contrast between the bulk and confined regions, with water molecules undergoing high translational and rotational entropy gain on entering from the bulk to the CNT interior. The intensity of such energy barriers decreases with increase in CNT diameter. These results are very important for emerging technological applications of CNTs and other nanoscale materials, such as in nanofluidics, water purification, nanofiltration, and desalination, as well as for biological transport processes.
NASA Astrophysics Data System (ADS)
Choi, Won-Mi; Jo, Yong Hee; Sohn, Seok Su; Lee, Sunghak; Lee, Byeong-Joo
2018-01-01
Although high-entropy alloys (HEAs) are attracting interest, the physical metallurgical mechanisms related to their properties have mostly not been clarified, and this limits wider industrial applications, in addition to the high alloy costs. We clarify the physical metallurgical reasons for the materials phenomena (sluggish diffusion and micro-twinning at cryogenic temperatures) and investigate the effect of individual elements on solid solution hardening for the equiatomic CoCrFeMnNi HEA based on atomistic simulations (Monte Carlo, molecular dynamics and molecular statics). A significant number of stable vacant lattice sites with high migration energy barriers exists and is thought to cause the sluggish diffusion. We predict that the hexagonal close-packed (hcp) structure is more stable than the face-centered cubic (fcc) structure at 0 K, which we propose as the fundamental reason for the micro-twinning at cryogenic temperatures. The alloying effect on the critical resolved shear stress (CRSS) is well predicted by the atomistic simulation, is used for the design of non-equiatomic fcc HEAs with improved strength, and is experimentally verified. This study demonstrates the applicability of the proposed atomistic approach combined with a thermodynamic calculation technique to the computational design of advanced HEAs.
Chaotic dynamics of large-scale double-diffusive convection in a porous medium
NASA Astrophysics Data System (ADS)
Kondo, Shutaro; Gotoda, Hiroshi; Miyano, Takaya; Tokuda, Isao T.
2018-02-01
We have studied chaotic dynamics of large-scale double-diffusive convection of a viscoelastic fluid in a porous medium from the viewpoint of dynamical systems theory. A fifth-order nonlinear dynamical system modeling the double-diffusive convection is theoretically obtained by incorporating the Darcy-Brinkman equation into transport equations through a physical dimensionless parameter representing porosity. We clearly show that the chaotic convective motion becomes much more complicated with increasing porosity. The degree of dynamic instability during chaotic convective motion is quantified by two important measures: the network entropy of the degree distribution in the horizontal visibility graph and the Kaplan-Yorke dimension in terms of Lyapunov exponents. We also present an interesting on-off intermittent phenomenon in the probability distribution of time intervals exhibiting nearly complete synchronization.
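The Kaplan-Yorke dimension used above can be computed directly from a Lyapunov spectrum. A short sketch with a hypothetical Lorenz-like spectrum (not the paper's fifth-order porous-medium model):

```python
def kaplan_yorke(lyapunov_exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D_KY = j + (lambda_1 + ... + lambda_j) / |lambda_{j+1}|,
    where j is the largest index for which the partial sum stays non-negative."""
    exps = sorted(lyapunov_exponents, reverse=True)
    partial = 0.0
    for k, lam in enumerate(exps):
        if partial + lam < 0:
            return k + partial / abs(lam)
        partial += lam
    return float(len(exps))  # partial sums never go negative

# Hypothetical Lorenz-like spectrum: one positive, one zero, one strongly negative
print(kaplan_yorke([0.9, 0.0, -14.5]))  # 2 + 0.9/14.5
```

A fully stable system (all exponents negative) yields dimension 0, while chaotic attractors give a non-integer value between the integer phase-space dimensions.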
Fast Diffusion to Self-Similarity: Complete Spectrum, Long-Time Asymptotics, and Numerology
NASA Astrophysics Data System (ADS)
Denzler, Jochen; McCann, Robert J.
2005-03-01
The complete spectrum is determined for the operator on the Sobolev space W^{1,2}_ρ(R^n) formed by closing the smooth functions of compact support with respect to the weighted norm. Here the Barenblatt profile ρ is the stationary attractor of the rescaled diffusion equation in the fast, supercritical regime (n-2)/n < m < 1; the same diffusion dynamics represent the steepest descent down an entropy E(u) on probability measures with respect to the Wasserstein distance d_2. Formally, the operator H = Hess_ρ E is the Hessian of this entropy at its minimum ρ, so the spectral gap H ≥ α := 2 - n(1-m) found below suggests the sharp rate of asymptotic convergence from any centered initial data 0 ≤ u(0,x) ∈ L^1(R^n) with second moments. This bound improves various results in the literature, and suggests the conjecture that the self-similar solution u(t,x) = R(t)^{-n} ρ(x/R(t)) is always slowest to converge. The higher eigenfunctions, which are polynomials with hypergeometric radial parts, and the presence of continuous spectrum yield additional insight into the relations between symmetries of R^n and the flow. Thus the rate of convergence can be improved if we are willing to replace the distance to ρ with the distance to its nearest mass-preserving dilation (or, still better, affine image). The strange numerology of the spectrum is explained in terms of the number of moments of ρ.
A multi-resolution approach for optimal mass transport
NASA Astrophysics Data System (ADS)
Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen
2007-09-01
Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
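In one dimension the optimal L2 transport map reduces to monotone (sorted) matching, which gives a compact illustration of the transport cost that the paper's gradient-descent scheme computes in higher dimensions; this sketch is that 1D special case, not the scheme itself:

```python
def ot_cost_1d(xs, ys):
    """Optimal L2 transport cost between two equal-size 1D point clouds.
    In 1D the optimal map is the monotone rearrangement: sort both clouds
    and match them in order; the cost is the mean squared displacement."""
    xs, ys = sorted(xs), sorted(ys)
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Translating a cloud by c costs exactly c^2 under the squared distance
print(ot_cost_1d([0.0, 1.0, 2.0], [3.0, 4.0, 5.0]))  # 9.0
```

Because the inputs are sorted internally, the cost is invariant to the order in which the points are supplied, mirroring the fact that optimal transport acts on measures rather than labeled samples.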
The Impact of Entropy on the Spatial Organization of Synaptonemal Complexes within the Cell Nucleus
Fritsche, Miriam; Reinholdt, Laura G.; Lessard, Mark; Handel, Mary Ann; Bewersdorf, Jörg; Heermann, Dieter W.
2012-01-01
We employ 4Pi microscopy to study synaptonemal complex (SC) organization in mouse spermatocyte nuclei, allowing for the three-dimensional reconstruction of the SC's backbone arrangement. Additionally, we model the SCs in the cell nucleus as confined, self-avoiding polymers whose chain ends are attached to the envelope of the confining cavity and diffuse along it. This work helps to elucidate the role of entropy in shaping pachytene SC organization. The framework provided by the complex interplay between SC polymer rigidity, tethering and confinement is able to qualitatively explain features of SC organization, such as mean squared end-to-end distances, mean squared center-of-mass distances, or SC density distributions. However, it fails in correctly assessing SC entanglement within the nucleus. In fact, our analysis of the 4Pi microscopy images reveals a higher ordering of SCs within the nuclear volume than what is expected by our numerical model. This suggests that while effects of entropy impact SC organization, the dedicated action of proteins or actin cables is required to fine-tune the spatial ordering of SCs within the cell nucleus. PMID:22574147
Melting of Simple Solids and the Elementary Excitations of the Communal Entropy
NASA Astrophysics Data System (ADS)
Bongiorno, Angelo
2010-03-01
The melting phase transition of simple solids is addressed through the use of atomistic computer simulations. Three transition metals (Ni, Au, and Pt) and a semiconductor (Si) are considered in this study. Iso-enthalpic molecular dynamics simulations are used to compute caloric curves across the solid-to-liquid phase transition of a periodic crystalline system, to construct the free energy functions of the solid and liquid phases, and thus to derive the thermodynamic limits of the melting point, latent heat and entropy of fusion of the material. The computational strategy used in this study yields accurate estimates of melting parameters, allows the superheating and supercooling temperature limits to be determined, and gives access to the atomistic mechanisms mediating the melting process. In particular, it is found that the melting phase transition in simple solids is driven by exchange steps involving a few atoms and preserving the crystalline structure. These self-diffusion phenomena correspond to the elementary excitations of the communal entropy and, as their rate depends on the local material cohesivity, they mediate both the homogeneous and non-homogeneous melting processes in simple solids.
Memory beyond memory in heart beating, a sign of a healthy physiological condition.
Allegrini, P; Grigolini, P; Hamilton, P; Palatella, L; Raffaelli, G
2002-04-01
We describe two types of memory and illustrate each using artificial and actual heartbeat data sets. The first type of memory, yielding anomalous diffusion, implies the inverse power-law nature of the waiting time distribution and the second the correlation among distinct times, and consequently also the occurrence of many pseudoevents, namely, not genuinely random events. Using the method of diffusion entropy analysis, we establish the scaling that would be determined by the real events alone. We prove that the heart beating of healthy patients reveals the existence of many more pseudoevents than in the patients with congestive heart failure.
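A generic sketch of diffusion entropy analysis (not the authors' exact procedure): sum the series over windows of length t, histogram the resulting displacements, and read the scaling exponent δ from the growth of the Shannon entropy, S(t) = A + δ ln t. The bin width and the random-walk test signal below are illustrative choices:

```python
import math
import random
from collections import Counter

def diffusion_entropy(xi, window_lengths, bin_width=1.0):
    """For each window length t, form displacements x(t) = sum of the series
    over every window of length t, histogram them, and return the Shannon
    entropy S(t). For a scaling pdf, S(t) grows linearly in ln(t) with slope
    equal to the scaling exponent delta."""
    entropies = []
    for t in window_lengths:
        disp = [sum(xi[i:i + t]) for i in range(len(xi) - t + 1)]
        counts = Counter(round(d / bin_width) for d in disp)
        n = sum(counts.values())
        entropies.append(-sum(c / n * math.log(c / n) for c in counts.values()))
    return entropies

# Uncorrelated +/-1 steps: displacements spread as sqrt(t), so S(t) increases with t
random.seed(0)
steps = [random.choice([-1, 1]) for _ in range(20000)]
S = diffusion_entropy(steps, [4, 16, 64])
print(S)
```

For genuinely random events the slope of S against ln t recovers ordinary diffusive scaling; pseudoevents of the kind discussed above inflate the apparent event count without changing the scaling set by the real events.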
Senary refractory high-entropy alloy HfNbTaTiVZr
Gao, Michael C.; Zhang, B.; Yang, S.; ...
2015-09-03
Discovery of new single-phase high-entropy alloys (HEAs) is important to understand HEA formation mechanisms. The present study reports computational design and experimental validation of a senary HEA, HfNbTaTiVZr, in a body-centered cubic structure. The phase diagrams and thermodynamic properties of this senary system were modeled using the CALPHAD method. Its atomic structure and diffusion constants were studied using ab initio molecular dynamics simulations. Here, the microstructure of the as-cast HfNbTaTiVZr alloy was studied using X-ray diffraction and scanning electron microscopy, and the microsegregation in the as-cast state was found to qualitatively agree with the solidification predictions from CALPHAD. Supported by both simulation and experimental results, the HEA formation rules are discussed.
NASA Astrophysics Data System (ADS)
Mezzacappa, A.; Calder, A. C.; Bruenn, S. W.; Blondin, J. M.; Guidry, M. W.; Strayer, M. R.; Umar, A. S.
1998-01-01
We couple two-dimensional hydrodynamics to realistic one-dimensional multigroup flux-limited diffusion neutrino transport to investigate proto-neutron star convection in core-collapse supernovae, and more specifically, the interplay between its development and neutrino transport. Our initial conditions, time-dependent boundary conditions, and neutrino distributions for computing neutrino heating, cooling, and deleptonization rates are obtained from one-dimensional simulations that implement multigroup flux-limited diffusion and one-dimensional hydrodynamics. The development and evolution of proto-neutron star convection are investigated for both 15 and 25 M⊙ models, representative of the two classes of stars with compact and extended iron cores, respectively. For both models, in the absence of neutrino transport, the angle-averaged radial and angular convection velocities in the initial Ledoux unstable region below the shock after bounce achieve their peak values in ~20 ms, after which they decrease as the convection in this region dissipates. The dissipation occurs as the gradients are smoothed out by convection. This initial proto-neutron star convection episode seeds additional convectively unstable regions farther out beneath the shock. The additional proto-neutron star convection is driven by successive negative entropy gradients that develop as the shock, in propagating out after core bounce, is successively strengthened and weakened by the oscillating inner core. The convection beneath the shock distorts its sphericity, but on the average the shock radius is not boosted significantly relative to its radius in our corresponding one-dimensional models. In the presence of neutrino transport, proto-neutron star convection velocities are too small relative to bulk inflow velocities to result in any significant convective transport of entropy and leptons. This is evident in our two-dimensional entropy snapshots, which in this case appear spherically symmetric. 
The peak angle-averaged radial and angular convection velocities are orders of magnitude smaller than they are in the corresponding ``hydrodynamics-only'' models. A simple analytical model supports our numerical results, indicating that the inclusion of neutrino transport reduces the entropy-driven (lepton-driven) convection growth rates and asymptotic velocities by a factor ~3 (50) at the neutrinosphere and a factor ~250 (1000) at ρ = 10^12 g cm^-3, for both our 15 and 25 M⊙ models. Moreover, when transport is included, the initial postbounce entropy gradient is smoothed out by neutrino diffusion, whereas the initial lepton gradient is maintained by electron capture and neutrino escape near the neutrinosphere. Despite the maintenance of the lepton gradient, proto-neutron star convection does not develop over the 100 ms duration typical of all our simulations, except in the instance where ``low-test'' initial conditions are used, which are generated by core-collapse and bounce simulations that neglect neutrino-electron scattering and ion-ion screening corrections to neutrino-nucleus elastic scattering. Models favoring the development of proto-neutron star convection either by starting with more favorable, albeit artificial (low-test), initial conditions or by including transport corrections that were ignored in our ``fiducial'' models were considered. Our conclusions nonetheless remained the same. Evidence of proto-neutron star convection in our two-dimensional entropy snapshots was minimal, and, as in our fiducial models, the angle-averaged convective velocities when neutrino transport was included remained orders of magnitude smaller than their counterparts in the corresponding hydrodynamics-only models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Richa Naja, E-mail: ltprichanaja@gmail.com; Chakraborty, Brahmananda; Ramaniah, Lavanya M.
In this work our main objective is to compute dynamical correlations, Onsager coefficients and Maxwell-Stefan (MS) diffusivities for the molten salt LiF-KF mixture at various thermodynamic states through the Green–Kubo formalism for the first time. The equilibrium molecular dynamics (MD) simulations were performed using the BHM potential for the LiF–KF mixture. The velocity autocorrelation functions involving Li ions reflect the endurance of cage dynamics, or backscattering, with temperature. The magnitude of the Onsager coefficients for all pairs increases with increasing temperature. Interestingly, most of the Onsager coefficients have almost maximum magnitude at the eutectic composition, indicating the most dynamic character of the eutectic mixture. MS diffusivity, and hence diffusion, for all ion pairs increases in the system with increasing temperature. The smooth variation of the diffusivity values rules out any network formation in the mixture. A striking feature is the noticeable concentration dependence of the MS diffusivity of the cation-cation pair, Đ(Li-K), which remains negative for most of the concentration range but changes sign to become positive at higher LiF concentrations. The negative MS diffusivity is acceptable as it satisfies the non-negative entropy constraint governed by the 2nd law of thermodynamics. This high diffusivity also supports the candidacy of molten salts as coolants.
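The Green–Kubo route to a self-diffusion coefficient integrates the velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt. A minimal sketch using a synthetic exponentially decaying VACF (not simulation data; the decay time and amplitude are hypothetical):

```python
import math

def green_kubo_diffusivity(vacf, dt):
    """Self-diffusion coefficient from a sampled velocity autocorrelation
    function: D = (1/3) * integral of <v(0).v(t)> dt, evaluated with the
    trapezoidal rule on a uniform time grid of spacing dt."""
    integral = dt * (sum(vacf) - 0.5 * (vacf[0] + vacf[-1]))
    return integral / 3.0

# Synthetic VACF <v(0).v(t)> = v0 * exp(-t/tau) has the analytic result
# D = v0 * tau / 3, which the numerical integral should reproduce.
v0, tau, dt = 1.0, 2.0, 0.001
vacf = [v0 * math.exp(-k * dt / tau) for k in range(20000)]
print(green_kubo_diffusivity(vacf, dt))  # close to 2/3
```

In practice the VACF is averaged over particles and time origins from the MD trajectory, and the integral is truncated once the correlation has decayed into noise; backscattering of the kind described above appears as a negative lobe in the VACF that lowers D.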
An efficient multi-resolution GA approach to dental image alignment
NASA Astrophysics Data System (ADS)
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving an increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth, we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions, and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
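The Hausdorff similarity measure mentioned above can be sketched for small 2D point sets as follows; this is a direct O(nm) implementation of the symmetric distance, not the paper's GA search over affine parameters:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2D point sets:
    the larger of the two directed distances, where the directed distance
    from P to Q is the worst-case nearest-neighbor distance."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# A unit horizontal shift of a point set gives Hausdorff distance 1
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = [(x + 1.0, y) for (x, y) in A]
print(hausdorff(A, B))
```

In an alignment loop, the candidate affine transform is applied to the query tooth's edge points and the (partial) Hausdorff distance to the reference edge points serves as the fitness the GA minimizes.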
The Incremental Multiresolution Matrix Factorization Algorithm
Ithapu, Vamsi K.; Kondor, Risi; Johnson, Sterling C.; Singh, Vikas
2017-01-01
Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct “global” factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision. PMID:29416293
Community detection for fluorescent lifetime microscopy image segmentation
NASA Astrophysics Data System (ADS)
Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar
2014-03-01
A multiresolution community detection (CD) method was suggested in a recent work as an efficient method for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes. In the current paper, we further explore this method in FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background or "solvent" in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions. It offers more than 1.25 times the diversity, based on the Shannon index, of the latter method in selecting clusters with distinct average FLTs in NIR FLIM images.
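The Shannon diversity index used for the comparison above is a short computation over cluster proportions; a minimal sketch with hypothetical cluster sizes:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over cluster
    proportions p_i = counts_i / total. Higher H' means the segmentation
    spreads pixels more evenly across distinct clusters."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

# Four equally sized clusters give the maximum H' = ln(4);
# a single cluster gives H' = 0 (no diversity).
print(shannon_index([25, 25, 25, 25]), shannon_index([100]))
```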
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
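A one-level Haar transform is the simplest instance of the wavelet machinery described above; this sketch (not the authors' enhancement functions) decomposes a signal, reconstructs it exactly, and optionally amplifies the detail coefficients, which is the crudest form of scale-space feature enhancement:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar transform on an even-length signal:
    returns (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail, gain=1.0):
    """Reconstruct the signal; gain > 1 amplifies details (edge enhancement)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + gain * d) / s, (a - gain * d) / s]
    return out

x = [4.0, 2.0, 5.0, 5.0]
a, d = haar_step(x)
print(haar_inverse(a, d))          # exact reconstruction of x
print(haar_inverse(a, d, gain=2))  # local contrast exaggerated
```

Real mammographic enhancement applies such weightings across several decomposition levels and orientations; the principle, reweighting coefficients before inversion, is the same.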
NASA Technical Reports Server (NTRS)
Weber, L. A.
1971-01-01
Thermophysical property data for oxygen at pressures below 5000 psia have been extrapolated to higher pressures (5000-10,000 psia) in the temperature range 100-600 R. The tables include density, entropy, enthalpy, internal energy, speed of sound, specific heat, thermal conductivity, viscosity, thermal diffusivity, Prandtl number, and dielectric constant.
Hands-on-Entropy, Energy Balance with Biological Relevance
NASA Astrophysics Data System (ADS)
Reeves, Mark
2015-03-01
Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the important contribution of entropy in driving fundamental biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALEUP pedagogy). This has enabled students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course, and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce complex biological processes and structures in order to model them mathematically, accounting for both deterministic and probabilistic processes. The students test these models in simulations and in laboratory experiments that are biologically relevant, such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront random forces and traditional forces in problems, simulations, and laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions.
This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
NASA Astrophysics Data System (ADS)
Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei
2017-07-01
Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the function of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data used in processing and representation should be of special concern for scientific analysis and evaluation. In this study, we propose a method for processing large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formalize its level-of-detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.
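A minimal sketch of a point-additive TIN construction of this kind (illustrative only; `max_err` stands in for the cartographic data-quality standard, and the real method additionally handles feature constraints, partitioning and quadtree tiling):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def point_additive_tin(xy, z, max_err):
    """Greedy point-additive TIN: start from the domain corners, then
    repeatedly insert the sample with the largest vertical error until the
    TIN meets the accuracy standard. Returns indices of kept vertices."""
    x_min, y_min = xy.min(axis=0)
    x_max, y_max = xy.max(axis=0)
    corners = [int(np.argmin(np.abs(xy - np.array(c)).sum(axis=1)))
               for c in ([x_min, y_min], [x_min, y_max],
                         [x_max, y_min], [x_max, y_max])]
    chosen = list(dict.fromkeys(corners))
    while True:
        interp = LinearNDInterpolator(xy[chosen], z[chosen])
        err = np.abs(interp(xy) - z)
        worst = int(np.nanargmax(err))
        if err[worst] <= max_err:
            return chosen
        chosen.append(worst)

# Demo: a smooth synthetic "terrain" sampled on a 32x32 grid
gx, gy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
xy = np.column_stack([gx.ravel(), gy.ravel()])
z = np.exp(-((gx - 0.4) ** 2 + (gy - 0.6) ** 2) / 0.05).ravel()
vertices = point_additive_tin(xy, z, max_err=0.05)
print(len(vertices), "of", xy.shape[0], "points kept")
```

Tightening `max_err` from layer to layer yields exactly the kind of pyramid described above: each coarser TIN is a controlled-accuracy subset of the terrain samples.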
NASA Astrophysics Data System (ADS)
Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.
2012-03-01
Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs). Dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks that deals with the difficulties caused by non-uniform heating, which affects low spatial frequencies and hinders the detection of relevant points in the image. The proposed methodology, based on multi-resolution analysis, is robust to low ROI contrast and non-uniform heating. It includes local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; we use a Gaussian window because the thermal behavior is well modeled by smooth Gaussian contours. The Gaussian scale is used to analyze details in the image through multi-resolution analysis, avoiding the problems of low contrast, non-uniform heating and selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology performs as well as or better than other dedicated algorithms proposed in the state of the art.
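The Gaussian-window correlation step can be sketched as a matched filter followed by local-maximum selection (a simplified stand-in for the full methodology; the synthetic thermogram, window size and threshold below are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_rois(image, sigma=4.0, threshold=0.5):
    """Matched-filter-style ROI detection: correlate with a Gaussian window
    (thermal ROIs are well modeled by smooth Gaussian contours), then keep
    local maxima above a contrast threshold."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    peaks = (smoothed == maximum_filter(smoothed, size=15)) & (smoothed > threshold)
    return np.argwhere(peaks)  # (row, col) of candidate ROI centers

# Synthetic thermogram: two warm spots plus a ramp mimicking non-uniform heating
yy, xx = np.mgrid[0:128, 0:128]
spot = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))
img = 0.2 * (xx / 128.0) + spot(40, 40) + spot(90, 100)
print(detect_rois(img))
```

The amplitude threshold is what rejects the slowly varying heating ramp; the multi-scale analysis in the paper removes the need to hand-pick `sigma`.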
NASA Astrophysics Data System (ADS)
Reeves, Mark
2014-03-01
Entropy changes underlie the physics that dominates biological interactions. Indeed, introductory biology courses often begin with an exploration of the qualities of water that are important to living systems. However, one idea that is not explicitly addressed in most introductory physics or biology textbooks is the dominant contribution of entropy in driving important biological processes towards equilibrium. From diffusion to cell-membrane formation, to electrostatic binding in protein folding, to the functioning of nerve cells, entropic effects often act to counterbalance deterministic forces such as electrostatic attraction and, in so doing, allow for effective molecular signaling. A small group of biology, biophysics and computer science faculty have worked together for the past five years to develop curricular modules (based on SCALEUP pedagogy) that enable students to create models of stochastic and deterministic processes. Our students are first-year engineering and science students in the calculus-based physics course, and they are not expected to know biology beyond the high-school level. In our class, they learn to reduce seemingly complex biological processes and structures to tractable models that include deterministic processes and simple probabilistic inference. The students test these models in simulations and in laboratory experiments that are biologically relevant. The students are challenged to bridge the gap between statistical parameterization of their data (mean and standard deviation) and simple model-building by inference. This allows the students to quantitatively describe realistic cellular processes such as diffusion, ionic transport, and ligand-receptor binding. Moreover, the students confront "random" forces and traditional forces in problems, simulations, and laboratory exploration throughout the year-long course as they move from traditional kinematics through thermodynamics to electrostatic interactions.
This talk will present a number of these exercises, with particular focus on the hands-on experiments done by the students, and will give examples of the tangible material that our students work with throughout the two-semester sequence of their course on introductory physics with a bio focus. Supported by NSF DUE.
Precipitation behavior of AlxCoCrFeNi high entropy alloys under ion irradiation
NASA Astrophysics Data System (ADS)
Yang, Tengfei; Xia, Songqin; Liu, Shi; Wang, Chenxu; Liu, Shaoshuai; Fang, Yuan; Zhang, Yong; Xue, Jianming; Yan, Sha; Wang, Yugang
2016-08-01
Materials performance is central to the satisfactory operation of current and future nuclear energy systems because of the severe irradiation environment in reactors. Searching for structural materials with excellent irradiation tolerance is crucial for developing the next generation of nuclear reactors. Here, we report the irradiation responses of a novel multi-component alloy system, the high entropy alloy (HEA) AlxCoCrFeNi (x = 0.1, 0.75 and 1.5), focusing on its precipitation behavior. It is found that the single-phase system, Al0.1CoCrFeNi, exhibits great phase stability against ion irradiation: no precipitates are observed even at the highest fluence. In contrast, numerous coherent precipitates are present in both multi-phase HEAs. Based on irradiation-induced/enhanced precipitation theory, the excellent structural stability against precipitation of Al0.1CoCrFeNi is attributed to its high configurational entropy and low atomic diffusion, which reduce the thermodynamic driving force and kinetically restrain precipitate formation, respectively. For the multi-phase HEAs, phase separation and the formation of ordered phases reduce the system's configurational entropy, resulting in precipitation behavior similar to that of the corresponding conventional binary or ternary alloys. This study demonstrates the structural stability of single-phase HEAs under irradiation and provides important implications for the search for HEAs with higher irradiation tolerance.
Multiresolution With Super-Compact Wavelets
NASA Technical Reports Server (NTRS)
Lee, Dohyung
2000-01-01
The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for remote storage, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demand incurs a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has a rather simple setting, computational field simulation data needs more careful treatment when applying the multiresolution technique. While image data sits on a regularly spaced grid, simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes.
Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have an arbitrary order of approximation. The advantages of the multiresolution algorithm are that no special treatment is required at the boundaries of the interval, and that the application to functions which are only piecewise continuous (internal boundaries) can be efficiently implemented. In this presentation, Beam's supercompact wavelets are generalized to higher dimensions using multidimensional scaling and wavelet functions rather than alternating the directions as in the 1D version. As a demonstration of actual 3D data compression, supercompact wavelet transforms are applied to a 3D data set of wing tip vortex flow solutions (2.5 million grid points). It is shown that a high data compression ratio (around 50:1) can be achieved for both vector and scalar data sets.
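A toy 1D analogue (plain orthonormal Haar rather than Beam and Warming's supercompact multi-wavelets, and a synthetic smooth signal in place of flow data) illustrates why thresholding wavelet coefficients compresses smooth fields so well:

```python
import numpy as np

def haar_fwd(x):
    """Full multilevel orthonormal Haar transform of a length-2^k signal."""
    x = x.astype(float).copy()
    n = len(x)
    while n > 1:
        a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)   # coarse averages
        d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)   # details
        x[: n // 2], x[n // 2 : n] = a, d
        n //= 2
    return x

def haar_inv(c):
    """Inverse multilevel Haar transform (exact reconstruction)."""
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n : 2 * n].copy()
        c[: 2 * n : 2] = (a + d) / np.sqrt(2)
        c[1 : 2 * n : 2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

# Smooth "flow-like" signal: a few coefficients carry almost all the energy
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
signal = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)
coeffs = haar_fwd(signal)
kept = np.where(np.abs(coeffs) > 0.01 * np.abs(coeffs).max(), coeffs, 0.0)
ratio = coeffs.size / np.count_nonzero(kept)
err = np.linalg.norm(haar_inv(kept) - signal) / np.linalg.norm(signal)
print(f"compression {ratio:.0f}:1, relative L2 error {err:.3f}")
```

Discontinuities (shocks, shear layers) would leave large fine-scale coefficients exactly where they occur, which is why compact, piecewise-polynomial bases suit simulation data.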
Topology-guided deformable registration with local importance preservation for biomedical images
NASA Astrophysics Data System (ADS)
Zheng, Chaojie; Wang, Xiuying; Zeng, Shan; Zhou, Jianlong; Yin, Yong; Feng, Dagan; Fulham, Michael
2018-01-01
The demons registration (DR) model is well recognized for its deformation capability. However, it might lead to misregistration due to an erroneous diffusion direction when there are no overlaps between corresponding regions. To address these shortcomings, we propose a novel registration energy function that introduces a topology energy and incorporates a local energy function into the DR in a progressive registration scheme. The topology energy, derived from the topological information of the images, serves as a direction inference to guide the diffusion transformation and retain the merits of DR. The local energy constrains the deformation disparity of neighbouring pixels to maintain important local texture and density features. The energy function is minimized in a progressive scheme steered by a topology tree graph, and we refer to the method as topology-guided deformable registration (TDR). We validated our TDR on 20 pairs of synthetic images with Gaussian noise, 20 phantom PET images with artificial deformations and 12 pairs of clinical PET-CT studies. We compared it to three methods: (1) a free-form deformation registration method, (2) energy-based DR and (3) multi-resolution DR. The experimental results show that our TDR outperformed the other three methods in regard to structural correspondence and preservation of locally important information, including texture and density, while retaining global correspondence.
Maxwell-Stefan diffusion and dynamical correlation in molten LiF-KF: A molecular dynamics study
NASA Astrophysics Data System (ADS)
Jain, Richa Naja; Chakraborty, Brahmananda; Ramaniah, Lavanya M.
2016-05-01
In this work, our main objective is to compute dynamical correlations, Onsager coefficients and Maxwell-Stefan (MS) diffusivities for the molten salt LiF-KF mixture at various thermodynamic states through the Green-Kubo formalism for the first time. Equilibrium molecular dynamics (MD) simulations were performed using the BHM potential for the LiF-KF mixture. The velocity autocorrelation functions involving Li ions reflect the persistence of cage dynamics, or backscattering, with temperature. The magnitude of the Onsager coefficients for all pairs increases with temperature. Interestingly, most of the Onsager coefficients reach nearly their maximum magnitude at the eutectic composition, indicating that the eutectic mixture is the most dynamic. The MS diffusivity, and hence diffusion, for all ion pairs increases with increasing temperature. The smooth variation of the diffusivity values rules out any network formation in the mixture. A striking feature is the noticeable concentration dependence of the MS diffusivity of the cation-cation pair, ĐLi-K, which remains negative for most of the concentration range but changes sign to become positive at higher LiF concentrations. The negative MS diffusivity is acceptable, as it satisfies the non-negative entropy constraint imposed by the second law of thermodynamics. This high diffusivity also supports the candidacy of the molten salt as a coolant.
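The Green-Kubo route to a diffusion coefficient can be sketched generically as D = ∫₀^∞ ⟨v(0)v(t)⟩ dt (here in 1D; an Ornstein-Uhlenbeck velocity process stands in for real MD data, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
kT_over_m, gamma, dt = 1.0, 2.0, 0.01     # illustrative Langevin parameters
n_traj, n_steps = 4000, 400               # 4000 independent "ions"

# Exact discrete Ornstein-Uhlenbeck update for each velocity; the analytic
# VACF is <v(0)v(t)> = (kT/m) exp(-gamma*t), so D = kT/(m*gamma) = 0.5 here.
decay = np.exp(-gamma * dt)
noise_std = np.sqrt(kT_over_m * (1.0 - decay ** 2))
v = rng.normal(0.0, np.sqrt(kT_over_m), n_traj)
traj = np.empty((n_steps, n_traj))
for i in range(n_steps):
    traj[i] = v
    v = decay * v + noise_std * rng.normal(size=n_traj)

# Green-Kubo: integrate the ensemble-averaged VACF (trapezoidal rule)
vacf = np.array([np.mean(traj[0] * traj[k]) for k in range(n_steps)])
D = dt * (0.5 * vacf[0] + vacf[1:-1].sum() + 0.5 * vacf[-1])
print(D)  # close to the analytic value 0.5
```

Onsager coefficients follow the same pattern with cross-correlations of species flux instead of single-particle velocity autocorrelations.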
Inferring Diffusion Dynamics from FCS in Heterogeneous Nuclear Environments
Tsekouras, Konstantinos; Siegel, Amanda P.; Day, Richard N.; Pressé, Steve
2015-01-01
Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show—first using synthetic data—that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell’s nucleus as well as 2) in the cell’s cytoplasm and 3) in solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. PMID:26153697
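As a minimal illustration of the conventional model-fitting step that the authors move beyond (a one-component 2D diffusion model fitted to synthetic data; not their maximum-entropy code, and all parameter values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def g_2d(t, g0, tau_d):
    """One-component 2D diffusion model for the FCS autocorrelation G(t)."""
    return g0 / (1.0 + t / tau_d)

# Synthetic "measurement": known amplitude and diffusion time, 1% noise
t = np.logspace(-5, 1, 200)               # lag times (s)
rng = np.random.default_rng(1)
g_obs = g_2d(t, 0.1, 1e-3) * (1.0 + 0.01 * rng.normal(size=t.size))

popt, _ = curve_fit(g_2d, t, g_obs, p0=[0.05, 1e-4])
g0_fit, tau_fit = popt
print(g0_fit, tau_fit)
```

When binding/unbinding or crowding is present, a single such model misfits G(t), which is exactly the regime where anomalous-diffusion fits, or the maximum-entropy inversion described above, are used instead.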
Li, Jia-Han; Webb, Kevin J; Burke, Gerald J; White, Daniel A; Thompson, Charles A
2006-05-01
A multiresolution direct binary search iterative procedure is used to design small dielectric irregular diffractive optical elements that have subwavelength features and achieve near-field focusing below the diffraction limit. Designs with a single focus or with two foci, depending on wavelength or polarization, illustrate the possible functionalities available from the large number of degrees of freedom. These examples suggest that the concept of such elements may find applications in near-field lithography, wavelength-division multiplexing, spectral analysis, and polarization beam splitters.
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
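A scalar frequency-domain Wiener filter (a simplified stand-in for the Wiener-matrix filter described above, using a synthetic scene and Gaussian PSF of my own choosing) illustrates the restoration step:

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-4):
    """Frequency-domain Wiener deconvolution: F_hat = H* G / (|H|^2 + NSR)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

# Synthetic smooth scene, centered Gaussian PSF, light additive noise
yy, xx = np.mgrid[0:64, 0:64]
scene = (np.exp(-((yy - 32) ** 2 + (xx - 20) ** 2) / 60.0)
         + np.exp(-((yy - 20) ** 2 + (xx - 45) ** 2) / 30.0))
psf = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 8.0)
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
rng = np.random.default_rng(2)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H)) + 1e-4 * rng.normal(size=scene.shape)
restored = wiener_restore(blurred, psf)
print(np.mean((blurred - scene) ** 2), np.mean((restored - scene) ** 2))
```

The noise-to-signal term `nsr` plays the role that the full noise model plays in the Wiener-matrix formulation: it caps the gain at frequencies where blur has destroyed the signal, preventing noise amplification.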
MR-CDF: Managing multi-resolution scientific data
NASA Technical Reports Server (NTRS)
Salem, Kenneth
1993-01-01
MR-CDF is a system for managing multi-resolution scientific data sets. It is an extension of the popular CDF (Common Data Format) system. MR-CDF provides a simple functional interface to client programs for storage and retrieval of data. Data is stored so that low resolution versions of the data can be provided quickly. Higher resolutions are also available, but not as quickly. By managing data with MR-CDF, an application can be relieved of the low-level details of data management, and can easily trade data resolution for improved access time.
A network of discrete events for the representation and analysis of diffusion dynamics.
Pintus, Alberto M; Pazzona, Federico G; Demontis, Pierfranco; Suffritti, Giuseppe B
2015-11-14
We developed a coarse-grained description of the phenomenology of diffusive processes, in terms of a space of discrete events and its representation as a network. Once a proper classification of the discrete events underlying the diffusive process is carried out, their transition matrix is calculated on the basis of molecular dynamics data. This matrix can be represented as a directed, weighted network where nodes represent discrete events, and the weight of edges is given by the probability that one follows the other. The structure of this network reflects dynamical properties of the process of interest in such features as its modularity and the entropy rate of nodes. As an example of the applicability of this conceptual framework, we discuss here the physics of diffusion of small non-polar molecules in a microporous material, in terms of the structure of the corresponding network of events, and explain on this basis the diffusivity trends observed. A quantitative account of these trends is obtained by considering the contribution of the various events to the displacement autocorrelation function.
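The construction of the event network and the entropy rate of its nodes can be sketched as follows (illustrative; the event classification from MD data is replaced here by a symbolic sequence):

```python
import numpy as np

def transition_network(events):
    """Row-stochastic transition matrix between discrete event types."""
    states = sorted(set(events))
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for a, b in zip(events[:-1], events[1:]):
        P[idx[a], idx[b]] += 1          # edge weight = transition count
    P /= P.sum(axis=1, keepdims=True)   # normalize to probabilities
    return states, P

def entropy_rate(P):
    """h = sum_i pi_i * sum_j -P_ij ln P_ij, with pi the stationary vector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

# A strictly periodic event sequence is fully predictable: entropy rate 0
states, P = transition_network(list("ABCABCABC"))
print(entropy_rate(P))
```

A higher node entropy rate means the next event is less predictable from the current one, which is the kind of dynamical signature the paper relates to diffusivity trends.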
Gaussian fluctuation of the diffusion exponent of virus capsid in a living cell nucleus
NASA Astrophysics Data System (ADS)
Itto, Yuichi
2018-05-01
In their work [4], Bosse et al. experimentally showed that virus capsids exhibit not only normal diffusion but also anomalous diffusion in the nucleus of a living cell. There, it was found that the distribution of fluctuations of the diffusion exponent characterizing them takes the Gaussian form, which is, quite remarkably, the same form for two different types of the virus. This suggests a high robustness of such fluctuations. Here, the statistical property of local fluctuations of the diffusion exponent of the virus capsid in the nucleus is studied. A maximum-entropy-principle approach (originally proposed for a different virus in a different cell) is applied to obtain the fluctuation distribution of the exponent. The largeness of the number of blocks identified with local areas of interchromatin corrals is also examined based on the experimental data. It is shown that the Gaussian distribution of the local fluctuations can be derived, in accordance with the above form. In addition, it is quantified how the fluctuation distribution on a long time scale differs from the Gaussian distribution.
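For context, the generic maximum-entropy calculation that yields a Gaussian (sketched here in its simplest form; the paper imposes its constraints on the local diffusion-exponent statistics rather than directly on a mean and variance) maximizes the differential entropy at fixed first and second moments:

```latex
\max_{p}\; S[p] = -\int p(\alpha)\,\ln p(\alpha)\,d\alpha
\quad\text{subject to}\quad
\int p\,d\alpha = 1,\qquad
\int \alpha\,p\,d\alpha = \bar{\alpha},\qquad
\int (\alpha-\bar{\alpha})^{2}\,p\,d\alpha = \sigma^{2}.
```

The stationarity condition with Lagrange multipliers gives p(α) ∝ exp(-λ₁α - λ₂α²), i.e. the Gaussian p(α) = (2πσ²)^(-1/2) exp[-(α - ᾱ)²/(2σ²)].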
Preferential diffusion in concentrated solid solution alloys: NiFe, NiCo and NiCoCr
Zhao, Shijun; Osetsky, Yuri; Zhang, Yanwen
2017-02-13
In single-phase concentrated solid-solution alloys (CSAs), including high entropy alloys (HEAs), remarkable mechanical properties are exhibited, as well as extraordinary corrosion and radiation resistance compared to pure metals and dilute alloys. However, the mechanisms responsible for these properties are unknown in many cases. In this work, we employ ab initio molecular dynamics based on density functional theory to study the diffusion of interstitial atoms in Ni and Ni-based face-centered cubic CSAs, including NiFe, NiCo and NiCoCr. We model the defect trajectories over >100 ps and estimate tracer diffusion coefficients, correlation factors and activation energies. Furthermore, we found that the diffusion mass transport in CSAs is not only slower than that in the pure components, i.e. sluggish diffusion, but also chemically non-homogeneous. The results obtained here can be used in understanding and predicting atomic segregation and phase separation in CSAs under irradiation conditions.
Saltas, V.; Chroneos, A.; Cooper, Michael William D.; ...
2016-01-01
In the present work, the defect properties of oxygen self-diffusion in PuO2 are investigated over a wide temperature (300-1900 K) and pressure (0-10 GPa) range by combining molecular dynamics simulations and thermodynamic calculations. Based on the well-established cBΩ thermodynamic model, which connects the activation Gibbs free energy of diffusion with the bulk elastic and expansion properties, various point defect parameters such as the activation enthalpy, activation entropy, and activation volume were calculated as functions of T and P. Molecular dynamics calculations provided the necessary bulk properties for the proper implementation of the thermodynamic model, in the absence of any relevant experimental data. The estimated compressibility and thermal expansion coefficient of the activation volume are found to be more than one order of magnitude greater than the corresponding values for bulk plutonia. As a result, the diffusion mechanism is discussed in the context of the temperature and pressure dependence of the activation volume.
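The cBΩ model's central relations can be sketched numerically: g_act = cBΩ, with activation entropy s_act = -dg/dT and activation enthalpy h_act = g + T·s. All numbers below are illustrative placeholders, not the PuO2 values from the study:

```python
import numpy as np

c = 0.05                                   # dimensionless model constant (assumed)
T = np.linspace(300.0, 1900.0, 9)          # temperature range, K
B = 200e9 * (1.0 - 1.0e-4 * (T - 300.0))   # bulk modulus, Pa (toy linear softening)
Omega = 4.0e-29                            # mean atomic volume, m^3 (assumed)

g = c * B * Omega                          # activation Gibbs free energy, J
s = -np.gradient(g, T)                     # activation entropy, J/K
h = g + T * s                              # activation enthalpy, J
eV = 1.602176634e-19
print(g[0] / eV, h[0] / eV)                # activation energies in eV
```

Because the bulk modulus softens with temperature, dg/dT is negative and the activation entropy comes out positive, which is the qualitative behavior the model builds in.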
Nakajo, Masanori; Fukukura, Yoshihiko; Hakamada, Hiroto; Yoneyama, Tomohide; Kamimura, Kiyohisa; Nagano, Satoshi; Nakajo, Masayuki; Yoshiura, Takashi
2018-02-22
Apparent diffusion coefficient (ADC) histogram analyses have been used to differentiate tumor grades and predict therapeutic responses in various anatomic sites with moderate success. To determine the ability of diffusion-weighted imaging (DWI) with a whole-tumor ADC histogram analysis to differentiate benign peripheral neurogenic tumors (BPNTs) from soft tissue sarcomas (STSs). Retrospective study, single institution. In all, 25 BPNTs and 31 STSs. Two-b-value DWI (b-values = 0 and 1000 s/mm²) was performed at 3.0T. The whole-tumor ADC histogram parameters were calculated by two radiologists and compared between BPNTs and STSs. Nonparametric tests were performed for comparisons between BPNTs and STSs; P < 0.05 was considered statistically significant. The ability of each parameter to differentiate STSs from BPNTs was evaluated using area under the curve (AUC) values derived from a receiver operating characteristic curve analysis. The mean ADC and all percentile parameters were significantly lower in STSs than in BPNTs (P < 0.001-0.009), with AUCs of 0.703-0.773. However, the coefficient of variation (P = 0.020, AUC = 0.682) and skewness (P = 0.012, AUC = 0.697) were significantly higher in STSs than in BPNTs. Kurtosis (P = 0.295) and entropy (P = 0.604) did not differ significantly between BPNTs and STSs. Whole-tumor ADC histogram parameters, except kurtosis and entropy, differed significantly between BPNTs and STSs. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
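The histogram parameters and AUCs referred to above can be computed as in this sketch (synthetic ADC values; the group means, bin count and sample sizes are hypothetical, not the study's data):

```python
import numpy as np

def adc_histogram_params(adc):
    """Whole-tumor ADC histogram parameters of the kind compared in the study."""
    adc = np.asarray(adc, float)
    m, s = adc.mean(), adc.std()
    z = (adc - m) / s
    hist, _ = np.histogram(adc, bins=32)
    p = hist[hist > 0] / adc.size
    return {
        "mean": m,
        "p10": np.percentile(adc, 10),
        "cv": s / m,                       # coefficient of variation
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4) - 3.0,
        "entropy": float(-(p * np.log2(p)).sum()),
    }

def auc_mann_whitney(neg, pos):
    """ROC AUC via the rank-sum statistic; counting pos < neg mirrors
    'lower ADC indicates malignancy'."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    wins = ((pos[:, None] < neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(neg) * len(pos))

rng = np.random.default_rng(3)
benign = rng.normal(1.8, 0.2, 25)    # hypothetical mean ADCs (x10^-3 mm^2/s)
sarcoma = rng.normal(1.1, 0.2, 31)
print(auc_mann_whitney(benign, sarcoma))
```

The Mann-Whitney formulation is exactly equivalent to the area under the empirical ROC curve, which is why rank-based AUCs pair naturally with the nonparametric group tests used in the study.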
Li, Guanchen; von Spakovsky, Michael R
2016-09-01
This paper presents a nonequilibrium thermodynamic model for the relaxation of a local, isolated system in nonequilibrium using the principle of steepest entropy ascent (SEA), which can be expressed as a variational principle in thermodynamic state space. The model is able to arrive at the Onsager relations for such a system. Since no assumption of local equilibrium is made, the conjugate fluxes and forces are intrinsic to the subspaces of the system's state space and are defined using the concepts of hypoequilibrium state and nonequilibrium intensive properties, which describe the nonmutual equilibrium status between subspaces of the thermodynamic state space. The Onsager relations are shown to be a thermodynamic kinematic feature of the system independent of the specific details of the micromechanical dynamics. Two kinds of relaxation processes are studied with different constraints (i.e., conservation laws) corresponding to heat and mass diffusion. Linear behavior in the near-equilibrium region as well as nonlinear behavior in the far-from-equilibrium region are discussed. Thermodynamic relations in the equilibrium and near-equilibrium realm, including the Gibbs relation, the Clausius inequality, and the Onsager relations, are generalized to the far-from-equilibrium realm. The variational principle in the space spanned by the intrinsic conjugate fluxes and forces is expressed via the quadratic dissipation potential. As an application, the model is applied to the heat and mass diffusion of a system represented by a single-particle ensemble, which can also be applied to a simple system of many particles. Phenomenological transport coefficients are also derived in the near-equilibrium realm.
Shi, Yuping; Huang, Limin; Soh, Ai Kah; Weng, George J; Liu, Shuangyi; Redfern, Simon A T
2017-09-11
Electrocaloric (EC) materials show promise in eco-friendly solid-state refrigeration and integrable on-chip thermal management. While direct measurement of EC thin-films still remains challenging, a generic theoretical framework for quantifying the cooling properties of rich EC materials including normal-, relaxor-, organic- and anti-ferroelectrics is imperative for exploiting new flexible and room-temperature cooling alternatives. Here, we present a versatile theory that combines the Master equation with Maxwell relations and analytically relates the macroscopic cooling responses in EC materials to the intrinsic diffuseness of phase transitions and correlation characteristics. Under increased electric fields, both EC entropy and adiabatic temperature changes increase quadratically initially, followed by further linear growth and eventual gradual saturation. The upper bound of the entropy change (ΔS_max) is limited by distinct correlation volumes (V_cr) and transition diffuseness. The linearity between V_cr and the transition diffuseness is emphasized, while ΔS_max = 300 kJ/(K·m³) is obtained for Pb0.8Ba0.2ZrO3. The ΔS_max in antiferroelectric Pb0.95Zr0.05TiO3, Pb0.8Ba0.2ZrO3 and polymeric ferroelectrics scales proportionally with V_cr^(-2.2), owing to the one-dimensional structural constraint on lattice-scale depolarization dynamics; whereas ΔS_max in relaxor and normal ferroelectrics scales as ΔS_max ~ V_cr^(-0.37), which tallies with a dipolar interaction exponent of 2/3 in EC materials and the well-proven fractional dimensionality of 2.5 for ferroelectric domain walls.
NASA Astrophysics Data System (ADS)
Qian, Hong; Kjelstrup, Signe; Kolomeisky, Anatoly B.; Bedeaux, Dick
2016-04-01
Nonequilibrium thermodynamics (NET) investigates processes in systems out of global equilibrium. On a mesoscopic level, it provides a statistical dynamic description of various complex phenomena such as chemical reactions, ion transport, diffusion, thermochemical, thermomechanical and mechanochemical fluxes. In the present review, we introduce a mesoscopic stochastic formulation of NET by analyzing entropy production in several simple examples. The fundamental role of nonequilibrium steady-state cycle kinetics is emphasized. The statistical mechanics of Onsager’s reciprocal relations in this context is elucidated. Chemomechanical, thermomechanical, and enzyme-catalyzed thermochemical energy transduction processes are discussed. It is argued that mesoscopic stochastic NET in phase space provides a rigorous mathematical basis of fundamental concepts needed for understanding complex processes in chemistry, physics and biology. This theory is also relevant for nanoscale technological advances.
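A minimal numerical example of steady-state entropy production in cycle kinetics (a three-state toy model with illustrative rates; detailed balance gives zero, a driven cycle gives a positive rate):

```python
import numpy as np

def stationary(K):
    """Stationary distribution of a rate matrix K (K[i,j] = rate i -> j)."""
    n = K.shape[0]
    L = K.T - np.diag(K.sum(axis=1))      # generator transpose: L @ pi = 0
    A = np.vstack([L, np.ones(n)])        # append the normalization row
    pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)
    return pi

def entropy_production(K):
    """NESS entropy production rate:
    e_p = sum_{i<j} (J_ij - J_ji) * ln(J_ij / J_ji), with J_ij = pi_i K_ij."""
    pi = stationary(K)
    ep = 0.0
    for i in range(K.shape[0]):
        for j in range(i + 1, K.shape[0]):
            Jij, Jji = pi[i] * K[i, j], pi[j] * K[j, i]
            if Jij > 0 and Jji > 0:
                ep += (Jij - Jji) * np.log(Jij / Jji)
    return ep

K_eq = np.array([[0., 1., 2.],            # symmetric rates: detailed balance
                 [1., 0., 1.],
                 [2., 1., 0.]])
K_ness = np.array([[0., 3., 1.],          # clockwise rates exceed reverse:
                   [1., 0., 3.],          # a driven nonequilibrium cycle
                   [3., 1., 0.]])
print(entropy_production(K_eq), entropy_production(K_ness))
```

The vanishing of every net cycle flux, and with it the entropy production, is exactly the detailed-balance condition; a sustained cycle current is the signature of a nonequilibrium steady state.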
NASA Astrophysics Data System (ADS)
Santillán, Moisés; Qian, Hong
2013-01-01
We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g. free energy F, entropy S, entropy production ep, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.
Thorneywork, Alice L; Rozas, Roberto E; Dullens, Roel P A; Horbach, Jürgen
2015-12-31
We compare experimental results from a quasi-two-dimensional colloidal hard sphere fluid to a Monte Carlo simulation of hard disks with small particle displacements. The experimental short-time self-diffusion coefficient D(S) scaled by the diffusion coefficient at infinite dilution, D(0), strongly depends on the area fraction, pointing to significant hydrodynamic interactions at short times in the experiment, which are absent in the simulation. In contrast, the area fraction dependence of the experimental long-time self-diffusion coefficient D(L)/D(0) is in quantitative agreement with D(L)/D(0) obtained from the simulation. This indicates that the reduction in the particle mobility at short times due to hydrodynamic interactions does not lead to a proportional reduction in the long-time self-diffusion coefficient. Furthermore, the quantitative agreement between experiment and simulation at long times indicates that hydrodynamic interactions effectively do not affect the dependence of D(L)/D(0) on the area fraction. In light of this, we discuss the link between structure and long-time self-diffusion in terms of a configurational excess entropy and do not find a simple exponential relation between these quantities for all fluid area fractions.
Theory of Transport of Long Polymer Molecules through Carbon Nanotube Channels
NASA Technical Reports Server (NTRS)
Wei, Chenyu; Srivastava, Deepak
2003-01-01
A theory of transport of long-chain polymer molecules through carbon nanotube (CNT) channels is developed using the Fokker-Planck equation and direct molecular dynamics (MD) simulations. The mean transport or translocation time tau is found to depend on the chemical potential energy, entropy, and diffusion coefficient. A power-law dependence tau ∝ N^2 is found, where N is the number of monomers in a molecule. For 10^5-unit long polyethylene molecules, tau is estimated to be approximately 1 μs. The diffusion coefficient of long polymer molecules inside CNTs, like that of short ones, is found to be a few orders of magnitude larger than in ordinary silicate-based zeolite systems.
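The tau ∝ N^2 scaling above can be checked numerically. The sketch below fits a power-law exponent to hypothetical translocation times (illustrative values, not the paper's MD data), with a prefactor chosen so a 10^5-unit chain gives tau of order 1 μs:

```python
import numpy as np

# Hypothetical translocation times obeying tau = c * N^2 (illustrative only).
N = np.array([100, 300, 1000, 3000, 10000])
tau = 1e-16 * N**2            # seconds; prefactor chosen so tau(1e5) ~ 1e-6 s
# Fit the exponent on a log-log scale.
slope, intercept = np.polyfit(np.log(N), np.log(tau), 1)
print(round(slope, 2))        # power-law exponent -> 2.0
```

The same log-log regression applied to real MD translocation times would recover the empirical exponent rather than the assumed value of 2.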
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime
2017-04-01
Geophysical signals oscillate over several time-scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variability in hydrological variations, and investigating its determinism, is an important issue in a context of climate change, as this variability can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale-dependent linkages between large-scale climatic variations and hydrological responses at the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling of precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (a so-called multiresolution ESD model) using SLP as a large-scale predictor greatly improved simulation of the low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model in explaining variability at all time-scales. A sensitivity analysis of the model to the choice of scale and wavelet function was also conducted; whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for each time-scale, which resulted from the wavelet decomposition, revealed different structures according to time-scale, suggesting possibly different determinisms. In particular, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic.
Moreover, in accordance with previous studies, the wavelet components detected in SLP and precipitation on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation. Work is currently being conducted including SST over the Atlantic in order to gain further insight into this mechanism.
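The additive multiresolution decomposition that underlies such a model can be sketched with a simple smoothing-based scheme, a stand-in for the wavelet transform used in the paper (the signal and the scales here are invented):

```python
import numpy as np

def multiresolution(x, levels=3):
    """Additive multiresolution decomposition via successive smoothing:
    each component captures the variability at one scale, and all
    components sum back to the original signal exactly."""
    comps, smooth = [], x.astype(float)
    for j in range(1, levels + 1):
        w = 2 ** j + 1                       # odd window of width ~2^j
        kernel = np.ones(w) / w
        smoother = np.convolve(np.pad(smooth, w // 2, mode="edge"),
                               kernel, mode="valid")
        comps.append(smooth - smoother)      # detail at scale ~2^j
        smooth = smoother
    comps.append(smooth)                     # residual low-frequency trend
    return comps

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
comps = multiresolution(x)
print(np.allclose(sum(comps), x))            # -> True: decomposition is additive
```

In the downscaling setting, a separate regression model would then be fit per component, with SLP fields as predictors.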
Singh, Minerva; Evans, Damian; Tan, Boun Suy; Nin, Chan Samean
2015-01-01
At present, there is very limited information on the ecology, distribution, and structure of Cambodia's tree species to warrant suitable conservation measures. The aim of this study was to assess various methods of analysis of aerial imagery for characterization of the forest mensuration variables (i.e., tree height and crown width) of selected tree species found in the forested region around the temples of Angkor Thom, Cambodia. Object-based image analysis (OBIA) was used (using multiresolution segmentation) to delineate individual tree crowns from very-high-resolution (VHR) aerial imagery and light detection and ranging (LiDAR) data. Crown width and tree height values that were extracted using multiresolution segmentation showed a high level of congruence with field-measured values of the trees (Spearman's rho 0.782 and 0.589, respectively). Individual tree crowns that were delineated from aerial imagery using multiresolution segmentation had a high level of segmentation accuracy (69.22%), whereas tree crowns delineated using watershed segmentation underestimated the field-measured tree crown widths. Both spectral angle mapper (SAM) and maximum likelihood (ML) classifications were applied to the aerial imagery for mapping of selected tree species. The latter was found to be more suitable for tree species classification. Individual tree species were identified with high accuracy. Inclusion of textural information further improved species identification, albeit marginally. Our findings suggest that VHR aerial imagery, in conjunction with OBIA-based segmentation methods (such as multiresolution segmentation) and supervised classification techniques are useful for tree species mapping and for studies of the forest mensuration variables.
An ROI multi-resolution compression method for 3D-HEVC
NASA Astrophysics Data System (ADS)
Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan
2017-09-01
3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and compression of multi-resolution preprocessed videos as alternative data according to the network conditions. First, the semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined utilizing the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringler, Todd; Ju, Lili; Gunzburger, Max
2008-11-14
During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
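The density-driven mesh idea can be illustrated in one dimension: Lloyd's algorithm with a user-defined density yields a centroidal Voronoi tessellation whose cells shrink where the density is high. This is only a toy analogue of SCVT generation (the real method works on the sphere, and the density function below is invented):

```python
import numpy as np

def cvt_1d(n_gen=8, density=lambda x: 1 + 4 * x, iters=200, seed=1):
    """Lloyd's algorithm for a 1D centroidal Voronoi tessellation on [0, 1]
    with a user-defined density controlling local resolution."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, 4000)               # fine sampling of the domain
    w = density(x)
    gen = np.sort(rng.random(n_gen))          # random initial generators
    for _ in range(iters):
        owner = np.argmin(np.abs(x[:, None] - gen[None, :]), axis=1)
        for i in range(n_gen):                # move generators to weighted centroids
            m = owner == i
            if m.any():
                gen[i] = np.average(x[m], weights=w[m])
    return gen

gen = cvt_1d()
spacing = np.diff(gen)
print(spacing[0] > spacing[-1])               # -> True: cells shrink where density is high
```

The SCVT construction replaces the 1D nearest-neighbor assignment with spherical Voronoi regions, but the fixed-point iteration is the same in spirit.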
Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound
NASA Astrophysics Data System (ADS)
Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.
2015-12-01
Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of the needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a gain of 40%), and better robustness and confidence were confirmed in practical experiments.
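The frequency-selection step can be sketched in 1D: a small bank of Gabor filters is applied to a synthetic signal and the frequency with the strongest response is selected (the frequencies and signal below are illustrative, not the paper's 3D filter bank):

```python
import numpy as np

def gabor_kernel(freq, sigma=8.0, n=65):
    """1D complex Gabor filter: a Gaussian-windowed complex exponential."""
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)

# Synthetic signal containing a 0.1 cycles/sample oscillation.
t = np.arange(512)
signal = np.sin(2 * np.pi * 0.1 * t)
freqs = [0.05, 0.1, 0.2]                       # candidate bank frequencies
energy = [np.abs(np.convolve(signal, gabor_kernel(f), mode="valid")).max()
          for f in freqs]
print(freqs[int(np.argmax(energy))])           # -> 0.1 (matched frequency wins)
```

In the paper's setting the filters are 3D and oriented, and voxels with strong responses are classified as needle candidates.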
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-03
Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements pose special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. The factorization results are then used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
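A minimal sketch of the coarse-to-fine idea, using multiplicative-update NMF as a stand-in for the constrained pure-component factorization (the two-component spectra, shapes, and parameters below are all invented): the coarse factorization warm-starts the fine one, so only a few refinement iterations are needed at full resolution.

```python
import numpy as np

def nmf(D, k, W0=None, H0=None, iters=500, seed=0):
    """Multiplicative-update NMF (stand-in for the constrained MCR solver)."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    W = rng.random((m, k)) if W0 is None else W0.copy()
    H = rng.random((k, n)) if H0 is None else H0.copy()
    for _ in range(iters):
        H *= (W.T @ D) / (W.T @ W @ H + 1e-12)
        W *= (D @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic two-component mixture data (rows: times, cols: wavenumbers).
t = np.linspace(0, 1, 200)[:, None]
wn = np.linspace(0, 1, 400)[None, :]
C = np.hstack([np.exp(-3 * t), 1 - np.exp(-3 * t)])        # concentration profiles
S = np.vstack([np.exp(-((wn - 0.3) ** 2) / 0.005),
               np.exp(-((wn - 0.7) ** 2) / 0.005)])        # pure spectra
D = C @ S

# Coarse level: factorize every 4th spectral channel, then upsample H
# (piecewise-constant) and reuse both factors as a warm start.
Wc, Hc = nmf(D[:, ::4], 2, iters=500)
H0 = np.repeat(Hc, 4, axis=1)[:, :D.shape[1]]
W, H = nmf(D, 2, W0=Wc, H0=H0, iters=200)                  # short refinement
rel = np.linalg.norm(D - W @ H) / np.linalg.norm(D)
print(rel)                                                 # small relative residual
```

The paper's algorithm additionally enforces rotational-ambiguity and hard-model constraints at each level; this sketch keeps only the multilevel warm-starting.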
Entropy-based separation of yeast cells using a microfluidic system of conjoined spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kai-Jian; Qin, S.-J., E-mail: shuijie.qin@gmail.com; Bai, Zhong-Chen
2013-11-21
A physical model is derived to create a biological cell separator that is based on controlling the entropy in a microfluidic system having conjoined spherical structures. A one-dimensional simplified model of this three-dimensional problem, in terms of the corresponding effects of entropy on the Brownian motion of particles, is presented. This dynamic mechanism is based on the Langevin equation from statistical thermodynamics and takes advantage of the characteristics of the Fokker-Planck equation. This mechanism can be applied to manipulate biological particles inside a microfluidic system with identical, conjoined, spherical compartments. The theoretical analysis is verified with a rapid and simple technique for separating yeast cells in these conjoined, spherical microfluidic structures. The experimental results largely match our theoretical model, and we further analyze the parameters that can be used to control this separation mechanism. Both numerical simulations and experimental results show that the motion of the particles depends on the geometrical boundary conditions of the microfluidic system and the initial concentration of the diffusing material. This theoretical model can be implemented in future biophysics devices for the optimized design of passive cell sorters.
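The one-dimensional reduction can be sketched as overdamped Langevin dynamics in the entropic potential A(x) = -kT ln w(x) of a periodically varying channel width w(x), in the spirit of a Fick-Jacobs reduction (the width profile and all parameters below are illustrative, not the paper's geometry):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 2 * np.pi
w = lambda x: 1.5 + np.cos(x)                  # periodic channel width (> 0)
dwdx = lambda x: -np.sin(x)
kT, gamma, dt, steps = 1.0, 1.0, 1e-3, 20000

x = rng.uniform(0, L, 2000)                    # ensemble of Brownian particles
for _ in range(steps):
    force = kT * dwdx(x) / w(x)                # -dA/dx with A = -kT ln w(x)
    x += force / gamma * dt \
         + np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(x.size)
    x %= L                                     # periodic boundary

wide = np.minimum(x, L - x) < 0.5              # near x = 0 the channel is widest
narrow = np.abs(x - np.pi) < 0.5               # near x = pi it is narrowest
print(wide.sum() > narrow.sum())               # -> True: particles favor wide regions
```

Because the equilibrium density is proportional to w(x), species with different diffusivities relax toward this distribution at different rates, which is the handle the separator exploits.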
Dix, James A.; Diamond, Jared M.; Kivelson, Daniel
1974-01-01
The translational diffusion coefficient and the partition coefficient of a spin-labeled solute, di-t-butyl nitroxide, in an aqueous suspension of dipalmitoyl lecithin vesicles have been studied by electron spin resonance spectroscopy. When the lecithin is cooled through its phase transition temperature near 41°C, some solute is “frozen out” of the bilayer, and the standard partial molar enthalpy and entropy of partition become more positive by factors of 8 and 6, respectively. However, the apparent diffusion constant in the lecithin phase is only slightly smaller than that in water, both above and below the transition temperature. The fraction of bilayer volume within which solute is distributed may increase with temperature, contributing to the positive enthalpy of partition. Comparison of time constants suggests that there is a permeability barrier to this solute in the periphery of the bilayer. PMID:4360944
Parallel object-oriented, denoising system using wavelet multiresolution analysis
Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.
2005-04-12
The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
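The wavelet-thresholding core of such a system can be sketched as follows (single-processor, Haar wavelet, soft thresholding; the parallel partitioning and communication steps of the patent are omitted, and the signal and threshold are invented):

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal Haar transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # detail coefficients
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=4, thresh=0.5):
    """Multilevel Haar transform + soft thresholding of detail coefficients."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_fwd(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(2)
clean = np.repeat([0., 4., -2., 3.], 64)        # piecewise-constant signal
noisy = clean + 0.5 * rng.standard_normal(256)
out = denoise(noisy)
print(np.mean((out - clean)**2) < np.mean((noisy - clean)**2))  # -> True
```

With the threshold set to zero, the transform round-trips exactly, which is the property the multiresolution pipeline relies on.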
NASA Astrophysics Data System (ADS)
Bower, Dan J.; Sanan, Patrick; Wolf, Aaron S.
2018-01-01
The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. Crucially, in this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy itself. First we present a simplified model with semi-analytical solutions that highlights the large dynamic range of ∂S/∂r, around 12 orders of magnitude, for physically relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple yet stable numerical scheme able to capture the large dynamic range of ∂S/∂r and hence provide a flexible and robust method for time-integrating the energy equation. Using insight gained from the simplified model, we consider a full model, which includes energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. The computational framework we devise is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations that arise in other science and engineering disciplines, particularly for non-linear functional forms of the diffusion coefficient.
Mekkaoui, Choukri; Reese, Timothy G.; Jackowski, Marcel P.; Cauley, Stephen F.; Setsompop, Kawin; Bhat, Himanshu; Sosnovik, David E.
2017-01-01
Purpose To develop a clinically feasible whole-heart free-breathing diffusion-tensor (DT) magnetic resonance (MR) imaging approach with an imaging time of approximately 15 minutes to enable three-dimensional (3D) tractography. Materials and Methods The study was compliant with HIPAA and the institutional review board and required written consent from the participants. DT imaging was performed in seven healthy volunteers and three patients with pulmonary hypertension by using a stimulated echo sequence. Twelve contiguous short-axis sections and six four-chamber sections that covered the entire left ventricle were acquired by using simultaneous multisection (SMS) excitation with a blipped-controlled aliasing in parallel imaging readout. Rate 2 and rate 3 SMS excitation was defined as two and three times accelerated in the section axis, respectively. Breath-hold and free-breathing images with and without SMS acceleration were acquired. Diffusion-encoding directions were acquired sequentially, spatiotemporally registered, and retrospectively selected by using an entropy-based approach. Myofiber helix angle, mean diffusivity, fractional anisotropy, and 3D tractograms were analyzed by using paired t tests and analysis of variance. Results No significant differences (P > .63) were seen between breath-hold rate 3 SMS and free-breathing rate 2 SMS excitation in transmural myofiber helix angle, mean diffusivity (mean ± standard deviation, [0.89 ± 0.09] × 10⁻³ mm²/sec vs [0.9 ± 0.09] × 10⁻³ mm²/sec), or fractional anisotropy (0.43 ± 0.05 vs 0.42 ± 0.06). Three-dimensional tractograms of the left ventricle with no SMS and rate 2 and rate 3 SMS excitation were qualitatively similar.
Conclusion Free-breathing DT imaging of the entire human heart can be performed in approximately 15 minutes without section gaps by using SMS excitation with a blipped-controlled aliasing in parallel imaging readout, followed by spatiotemporal registration and entropy-based retrospective image selection. This method may lead to clinical translation of whole-heart DT imaging, enabling broad application in patients with cardiac disease. © RSNA, 2016 Online supplemental material is available for this article. PMID:27681278
Understanding of the Elemental Diffusion Behavior in Concentrated Solid Solution Alloys
Zhang, Chuan; Zhang, Fan; Jin, Ke; ...
2017-07-13
As one of the core effects on high-temperature structural stability, the so-called “sluggish diffusion effect” in high-entropy alloys (HEAs) has attracted much attention. Experimental investigations of the diffusion kinetics have been carried out in a few HEA systems, such as Al-Co-Cr-Fe-Ni and Co-Cr-Fe-Mn-Ni. However, the mechanisms behind this effect remain unclear. To better understand the diffusion kinetics of the HEAs, a combined computational/experimental approach is employed in the current study. In the present work, a self-consistent atomic mobility database is developed for the face-centered cubic (fcc) phase of the Co-Cr-Fe-Mn-Ni quinary system. The simulated diffusion coefficients and concentration profiles using this database can well describe the experimental data from both this work and the literature. The validated mobility database is then used to calculate the tracer diffusion coefficients of Ni in the subsystems of the Co-Cr-Fe-Mn-Ni system with equiatomic ratios. Comparison of these calculated diffusion coefficients reveals that the diffusion of Ni is not inevitably more sluggish with an increasing number of components in the subsystem, even at the same homologous temperature. Taking advantage of computational thermodynamics, the diffusivities of alloying elements with composition and/or temperature are also calculated. Furthermore, these calculations provide an overall picture of the diffusion kinetics within the Co-Cr-Fe-Mn-Ni system.
Asymmetry dependence of the caloric curve for mononuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoel, C.; Sobotka, L. G.; Charity, R. J.
2007-01-15
The asymmetry dependence of the caloric curve, for mononuclear configurations, is studied as a function of neutron-to-proton asymmetry with a model that allows for independent variation of the neutron and proton surface diffusenesses. The evolution of the effective mass with density and excitation is included in a schematic fashion and the entropies are extracted in a local density approximation. The plateau in the caloric curve displays only a slight sensitivity to the asymmetry.
Inferring diffusion dynamics from FCS in heterogeneous nuclear environments.
Tsekouras, Konstantinos; Siegel, Amanda P; Day, Richard N; Pressé, Steve
2015-07-07
Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show, first using synthetic data, that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell's nucleus, 2) the cell's cytoplasm, and 3) solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
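The central point, that a sum of ordinary diffusion components can masquerade as anomalous diffusion, can be checked with a toy calculation: a two-component correlation function is fit by a single anomalous model G(t) = 1/(1 + (t/τ)^α) via grid search (the component weights and time constants below are invented, not the paper's data):

```python
import numpy as np

# Two ordinary diffusion components with well-separated correlation times.
t = np.logspace(-2, 2, 200)
G_mix = 0.5 / (1 + t / 0.1) + 0.5 / (1 + t / 10.0)

# Grid search over a single anomalous-diffusion model.
best = (np.inf, 1.0, 1.0)
for alpha in np.linspace(0.3, 1.0, 71):
    for tau in np.logspace(-1, 1, 81):
        G_anom = 1 / (1 + (t / tau) ** alpha)
        err = np.max(np.abs(G_anom - G_mix))
        if err < best[0]:
            best = (err, alpha, tau)
err, alpha, tau = best
print(err < 0.1, alpha < 1.0)   # -> True True: a sub-diffusive model fits well
```

The fitted exponent α < 1 would normally be read as sub-diffusion, even though the underlying model contains only ordinary diffusion plus binding, which is exactly the ambiguity the maximum entropy analysis is designed to resolve.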
Patel, H C; Tokarski, J S; Hopfinger, A J
1997-10-01
The purpose of this study was to identify the key physicochemical molecular properties of polymeric materials responsible for gaseous diffusion in the polymers. Quantitative structure-property relationships (QSPRs) were constructed using a genetic algorithm on a training set of 16 polymers for which CO2, N2, and O2 diffusion constants were measured. Nine physicochemical properties of each of the polymers were used in the trial basis set for QSPR model construction. Linear cross-correlation matrices were constructed and investigated for collinearity among the members of the training sets. Common water diffusion measures for a limited training set of six polymers were used to construct a "semi-QSPR" model. The bulk modulus of the polymer was overwhelmingly found to be the dominant physicochemical polymer property governing CO2, N2 and O2 diffusion. Some secondary physicochemical properties controlling diffusion, including conformational entropy, were also identified as correlation descriptors. Very significant QSPR diffusion models were constructed for all three gases. Cohesive energy was identified as the main physicochemical property correlating with aqueous diffusion measures. The dominant role of polymer bulk modulus in gaseous diffusion makes it difficult to develop criteria for selective transport of gases through polymers. Moreover, high bulk moduli are predicted to be necessary for effective gas barrier materials. This property requirement may limit the processing and packaging features of the material. Aqueous diffusion in polymers may occur by a different mechanism than gaseous diffusion, since bulk modulus does not correlate with aqueous diffusion, but rather cohesive energy of the polymer does.
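The descriptor-selection logic can be sketched with a toy linear QSPR: synthetic diffusion data are regressed on invented descriptors, and the regression recovers the dominant one. This uses ordinary least squares rather than the paper's genetic algorithm, and all descriptor values and coefficients are made up:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16                                          # same training-set size as the paper
# Invented, uncorrelated polymer descriptors (arbitrary units).
bulk_modulus = rng.uniform(1, 5, n)
cohesive_energy = rng.uniform(1, 5, n)
entropy_term = rng.uniform(1, 5, n)
# Synthetic log-diffusivities: bulk modulus dominates by construction.
log_D = -1.2 * bulk_modulus + 0.1 * entropy_term + 0.05 * rng.standard_normal(n)

X = np.column_stack([bulk_modulus, cohesive_energy, entropy_term, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_D, rcond=None)
print(np.argmax(np.abs(coef[:3])) == 0)         # -> True: bulk modulus dominates
```

A genetic algorithm, as used in the study, would instead search over descriptor subsets and functional forms, but the end product is the same kind of ranked descriptor model.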
On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center
NASA Technical Reports Server (NTRS)
Sturner, Steven J.
2000-01-01
We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approximately 500 km/s. We have previously shown that a velocity this large could be maintained over the 10^6-year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approximately 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275-1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approximately 1.5-2 solar masses, which is in good agreement with both other observations and theoretical production rates.
Detecting Blind Fault with Fractal and Roughness Factors from High Resolution LiDAR DEM at Taiwan
NASA Astrophysics Data System (ADS)
Cheng, Y. S.; Yu, T. T.
2014-12-01
Blind faults produce no obvious fault scarp, and the traditional method of mapping these hidden geological structures is through clusters of seismicity. However, a seismic network cannot capture enough events, or sufficiently complete clusters, to chart the location of every potentially active blind fault within a short period of time. High-resolution DEMs gathered by LiDAR record the actual terrain despite vegetation cover. A 1-meter interval DEM of a mountainous region of Taiwan is analyzed with fractal, entropy and roughness calculations implemented in MATLAB code. By combining these measures, regions of non-sedimentary deposits are charted automatically. The possible blind fault associated with the Chia-Sen earthquake in southern Taiwan serves as the testing ground. GIS layers help remove differences between geological formations, and a multi-resolution fractal index is then computed around the target region. The type of fault movement controls the distribution of the fractal index, while the scale of the blind fault governs the degree of change in the index. Landslides induced by rainfall and/or earthquakes alter the geomorphology more strongly than a blind fault does, so special treatment is required to remove these effects. Because Taiwan is highly weathered, the trace left on the DEM by a blind-fault rupture may be erased when the recurrence interval exceeds a few hundred years; this is one of the obstacles to finding possible blind faults in Taiwan.
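The fractal index mentioned above can be illustrated with a box-counting dimension estimate; this is a generic sketch in Python rather than the authors' MATLAB code, and the occupied-cell inputs are synthetic.

```python
import math

def box_counting_dimension(cells, size):
    """Estimate the fractal (box-counting) dimension of a set of occupied
    grid cells inside a size x size window: count occupied boxes at a series
    of box sizes s and fit the slope of log N(s) against log(1/s)."""
    sizes = [s for s in (1, 2, 4, 8, 16) if s <= size]
    logs = []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in cells}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # least-squares slope of log N versus log(1/s)
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# A filled square region has dimension 2, a straight line dimension 1.
square = {(x, y) for x in range(32) for y in range(32)}
line = {(x, 0) for x in range(32)}
print(round(box_counting_dimension(square, 32), 2))  # → 2.0
print(round(box_counting_dimension(line, 32), 2))    # → 1.0
```

On a DEM, the occupied cells would typically come from thresholding slope or curvature in a moving window, so the index varies spatially across the terrain.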
Entropic Analysis of Electromyography Time Series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Sung, Paul
2005-03-01
We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. In a typical EMG measurement, the voltage is measured every millisecond; observing back muscle fatigue over one minute therefore yields a time series with 60,000 entries. We characterize the complexity of the time series by computing the time dependence of the Shannon entropy. Analysis of the time series from the relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activity is much larger for healthy individuals than for individuals with LBP. In general, the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
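The Shannon entropy computation described above can be sketched as follows; this is a generic histogram-based estimator applied to a synthetic signal, not the authors' analysis code or real EMG data.

```python
import math, random

def shannon_entropy(samples, bins=16):
    """Shannon entropy (in bits) of a 1-D signal, estimated from a
    histogram of the sample amplitudes."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in samples:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# Entropy as a function of time: evaluate on expanding prefixes of the
# signal, mimicking entropy-vs-time curves for a 60,000-sample recording.
random.seed(0)
signal = [random.gauss(0, 1) for _ in range(60000)]
curve = [shannon_entropy(signal[:n]) for n in (1000, 10000, 60000)]
print([round(h, 2) for h in curve])
```

With 16 bins the entropy is bounded by log2(16) = 4 bits; lower values indicate a more concentrated, less variable amplitude distribution.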
Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system
NASA Astrophysics Data System (ADS)
Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo
2014-05-01
Multichannel imaging systems have several potential applications, such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view (FOV) of the system. As a result, these systems provide a wide FOV while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed so that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on the different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from captured images of a slanted-edge test target. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems equipped with smart imaging capabilities.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region at a finest selected level of detail, a multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is combined with the view parameters to form a view-dependent error metric. Elements with the coarsest resolution are chosen as an initial representation, and those elements of the initial representation that are at least partially within the view volume are placed in a split queue ordered by the value of the view-dependent error metric. A determination is made whether the number of elements in the queue meets or exceeds a predetermined number, or whether the largest error metric is less than or equal to a selected upper error metric bound; until the determination is positive, the element at the head of the queue is force split and the resulting elements are inserted into the queue. The resulting multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
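The split-queue refinement loop described above can be sketched with a priority queue; this is a minimal illustration of the force-split idea, not the patented implementation, and the assumption that each split yields two children at half the parent's error is invented for the example.

```python
import heapq

def refine(initial, max_elements=8, error_bound=1.0):
    """Force-split the element with the largest view-dependent error until the
    element budget is met or the worst error falls below the bound.
    `initial`: list of (error, depth) coarse elements; splitting is assumed
    to produce two children at half the parent's error."""
    heap = [(-err, depth) for err, depth in initial]   # max-heap via negation
    heapq.heapify(heap)
    while len(heap) < max_elements and -heap[0][0] > error_bound:
        err, depth = heapq.heappop(heap)               # err is negative here
        for _ in range(2):                             # force split into 2 children
            heapq.heappush(heap, (err / 2.0, depth + 1))
    return sorted((-e, d) for e, d in heap)

print(refine([(8.0, 0)]))   # the budget of 8 elements is reached at depth 3
```

Greedily splitting the worst element first is what keeps the output set near-optimal for a given element budget, which is the core of the "optimal-quality reduced-resolution" claim.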
Isolator-combustor interaction in a dual-mode scramjet engine
NASA Technical Reports Server (NTRS)
Pratt, David T.; Heiser, William H.
1993-01-01
A constant-area diffuser, or 'isolator', is required in both the ramjet and scramjet operating regimes of a dual-mode engine configuration in order to prevent unstarts due to pressure feedback from the combustor. Because the nature of the combustor-isolator interaction is different in the two operational modes, attention is given here to the use of thermal versus kinetic energy coordinates to visualize these interaction processes. The results of the analysis indicate that the isolator must accommodate severe flow separation at combustor entry, and that its entropy-generating characteristics are more severe than those of an equivalent oblique shock. A constant-area diffuser is only marginally able to contain the equivalent normal shock required for subsonic combustor entry.
Discrete and continuum links to a nonlinear coupled transport problem of interacting populations
NASA Astrophysics Data System (ADS)
Duong, M. H.; Muntean, A.; Richardson, O. M.
2017-07-01
We are interested in exploring interacting particle systems that can be seen as microscopic models for a particular structure of coupled transport flux arising when different populations are jointly evolving. The scenarios we have in mind are inspired by the dynamics of pedestrian flows in open spaces and are intimately connected to cross-diffusion and thermo-diffusion problems holding a variational structure. The tools we use include a suitable structure of the relative entropy controlling TV-norms, the construction of Lyapunov functionals and particular closed-form solutions to nonlinear transport equations, a hydrodynamics limiting procedure due to Philipowski, as well as the construction of numerical approximates to both the continuum limit problem in 2D and to the original interacting particle systems.
Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey
2018-05-01
Diffusion-weighted imaging (DWI) has the potential to reflect histopathological architecture. A novel imaging approach, histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s mm⁻². Histogram analysis was performed as a whole-muscle measurement using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, the percentiles ADCp10, ADCp25, ADCp75 and ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, a blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1 autoantibodies. Kurtosis correlated inversely with CRP (r = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (r = -0.43, p = 0.11 and r = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy were different between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for the detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
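The whole-muscle histogram parameters listed above can be sketched as follows; this is a generic re-implementation in Python, not the custom Matlab application, and the voxel values are simulated rather than patient data.

```python
import math, random

def adc_histogram_parameters(adc_values, bins=32):
    """Summary statistics of the voxel-value distribution of an ADC map."""
    n = len(adc_values)
    xs = sorted(adc_values)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    pct = lambda p: xs[min(int(p / 100.0 * n), n - 1)]
    lo, hi = xs[0], xs[-1]
    w = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / w), bins - 1)] += 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return {
        "mean": mean, "min": xs[0], "max": xs[-1],
        "p10": pct(10), "p25": pct(25), "median": pct(50),
        "p75": pct(75), "p90": pct(90),
        "skewness": sum((x - mean) ** 3 for x in xs) / (n * var ** 1.5),
        "kurtosis": sum((x - mean) ** 4 for x in xs) / (n * var ** 2),
        "entropy": entropy,
    }

random.seed(1)
voxels = [random.gauss(1.4e-3, 2e-4) for _ in range(5000)]  # simulated ADC map, mm^2/s
params = adc_histogram_parameters(voxels)
```

A near-Gaussian voxel distribution gives a kurtosis close to 3 and a skewness close to 0; departures from these values are what make the higher-order histogram parameters informative.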
A theory for fracture of polymeric gels
NASA Astrophysics Data System (ADS)
Mao, Yunwei; Anand, Lallit
2018-06-01
A polymeric gel is a cross-linked polymer network swollen with a solvent. If the concentration of the solvent or the deformation is increased to substantial levels, especially in the presence of flaws, then the gel may rupture. Although various theoretical aspects of the coupling of fluid permeation with large deformation of polymeric gels are reasonably well understood and modeled in the literature, the understanding and modeling of the effects of fluid diffusion on the damage and fracture of polymeric gels is still in its infancy. In this paper we formulate a thermodynamically-consistent theory for fracture of polymeric gels, a theory which accounts for the coupled effects of fluid diffusion, large deformations, damage, and also the gradient effects of damage. The particular constitutive equations for fracture of a gel proposed in our paper contain two essential new ingredients: (i) Our constitutive equation for the change in free energy of a polymer network accounts not only for changes in the entropy, but also for changes in the internal energy due to the stretching of the Kuhn segments of the polymer chains in the network. (ii) The damage and failure of the polymer network is taken to occur by chain scission, a process which is driven by the changes in the internal energy of the stretched polymer chains in the network, and not directly by changes in the configurational entropy of the polymer chains. The theory developed in this paper is numerically implemented in the open-source finite element code MOOSE, by writing our own application. Using this simulation capability we report on our study of the fracture of a polymeric gel, and some interesting phenomena which show the importance of the diffusion of the fluid on the fracture response of the gel are highlighted.
Marsh, Lorraine
2015-01-01
Many systems in biology rely on binding of ligands to target proteins in a single high-affinity conformation with a favorable ΔG. Alternatively, interactions of ligands with protein regions that allow diffuse binding, distributed over multiple sites and conformations, can exhibit favorable ΔG because of their higher entropy. Diffuse binding may be biologically important for multidrug transporters and carrier proteins. A fine-grained computational method for numerical integration of total binding ΔG arising from diffuse regional interaction of a ligand in multiple conformations using a Markov Chain Monte Carlo (MCMC) approach is presented. This method yields a metric that quantifies the influence on overall ligand affinity of ligand binding to multiple, distinct sites within a protein binding region. This metric is essentially a measure of dispersion in equilibrium ligand binding and depends on both the number of potential sites of interaction and the distribution of their individual predicted affinities. Analysis of test cases indicates that, for some ligand/protein pairs involving transporters and carrier proteins, diffuse binding contributes greatly to total affinity, whereas in other cases the influence is modest. This approach may be useful for studying situations where "nonspecific" interactions contribute to biological function.
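The Monte Carlo integration idea can be sketched with a toy Metropolis sampler over discrete sites (energies in units of kT); the site energies are invented, and this is only an illustration of how a diffuse multi-site region can rival a single high-affinity site, not the paper's actual method.

```python
import math, random

def effective_dG(site_energies, steps=200000, seed=0):
    """Estimate -ln Z (in kT units) for a ligand hopping among discrete sites,
    via Metropolis sampling; uses the identity Z = exp(-E_ref) / p(ref),
    with site 0 as the reference."""
    rng = random.Random(seed)
    current = 0
    visits = [0] * len(site_energies)
    for _ in range(steps):
        proposal = rng.randrange(len(site_energies))
        dE = site_energies[proposal] - site_energies[current]
        if dE <= 0 or rng.random() < math.exp(-dE):
            current = proposal
        visits[current] += 1
    p_ref = visits[0] / steps
    return site_energies[0] + math.log(p_ref)   # = -ln Z

# One deep site versus four shallower sites: the diffuse region approaches
# the single-site free energy because of its higher entropy (ln 4 ≈ 1.39 kT).
single = effective_dG([-5.0])          # exactly -5.0
diffuse = effective_dG([-3.6] * 4)     # ≈ -3.6 - ln 4 ≈ -4.99
```

The gap between the two estimates is exactly the entropic term ln(number of comparable sites), which is the dispersion effect the paper's metric quantifies.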
A thermodynamic framework for thermo-chemo-elastic interactions in chemically active materials
NASA Astrophysics Data System (ADS)
Zhang, XiaoLong; Zhong, Zheng
2017-08-01
In this paper, a general thermodynamic framework is developed to describe the thermo-chemo-mechanical interactions in elastic solids undergoing mechanical deformation, imbibition of diffusive chemical species, chemical reactions and heat exchanges. Fully coupled constitutive relations and evolution laws for irreversible fluxes are provided based on entropy imbalance and the stoichiometry that governs the reactions. A special feature of the framework is that the change of the Helmholtz free energy is attributed to separate contributions of the diffusion-swelling process and the chemical reaction-dilation process. Both the extent of reaction and the concentrations of diffusive species are taken as independent state variables, which describe the reaction-activated responses, with the underlying variation of the microstructure and properties of a material, in an explicit way. A specialized isothermal formulation for isotropic materials is proposed that can properly account for volumetric constraints from material incompressibility under chemo-mechanical loadings, in which inhomogeneous deformation is associated with reaction and diffusion under various kinetic time scales. This framework can be easily applied to model the transient volumetric swelling of a solid caused by imbibition of external chemical species and the simultaneous chemical dilation arising from reactions between the diffusing species and the solid.
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and of reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.
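The compression/decoding pipeline can be illustrated with a minimal 1-D Haar wavelet transform and coefficient thresholding; this is a generic sketch (the actual work targets 3-D finite element data), and the input length is assumed to be a power of two.

```python
def haar_forward(x):
    """In-place-style Haar transform: repeatedly replace the current
    approximation by pairwise averages followed by pairwise half-differences."""
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        det = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avg + det
        n = half
    return out

def haar_inverse(c):
    out = list(c)
    n = 1
    while n < len(out):
        avg, det = out[:n], out[n:2 * n]
        out[:2 * n] = [v for a, d in zip(avg, det) for v in (a + d, a - d)]
        n *= 2
    return out

def compress(x, threshold):
    """Zero out small detail coefficients: most of the signal survives in few terms."""
    return [c if abs(c) >= threshold else 0.0 for c in haar_forward(x)]

signal = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
rec = haar_inverse(compress(signal, 0.2))   # close to signal, from 2 coefficients
```

Only the coefficients above the threshold need to be transmitted, which is the basis of wavelet compression for networked visualization.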
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel of an image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramid-based multi-resolution coarse-to-fine search strategy: the pyramid is used to obtain multi-resolution images; an iterative relationship from the highest (coarsest) level down to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
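The coarse-to-fine pyramid strategy can be sketched in one dimension with pure translation (the paper estimates 2-D affine parameters); the signals below are synthetic, and the brute-force matcher stands in for the gradient-based flow step.

```python
import math

def downsample(x):
    return [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]

def best_shift(a, b, center, radius):
    """Brute-force the integer shift of b relative to a near `center`."""
    def cost(s):
        pairs = [(a[i], b[i + s]) for i in range(len(a)) if 0 <= i + s < len(b)]
        return sum((u - v) ** 2 for u, v in pairs) / len(pairs)
    return min(range(center - radius, center + radius + 1), key=cost)

def pyramid_shift(a, b, levels=3):
    """Coarse-to-fine: estimate at the coarsest level, then double the
    estimate and refine within a small radius at each finer level."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyr.append((a, b))
    shift = 0
    for pa, pb in reversed(pyr):        # coarsest level first
        shift = best_shift(pa, pb, 2 * shift, radius=2)
    return shift

frame0 = [math.sin(i / 7) + 0.3 * math.sin(i / 3) for i in range(128)]
frame1 = frame0[5:]                     # frame0 advanced by 5 samples
print(pyramid_shift(frame0, frame1))    # → -5
```

Each level only searches a radius of 2 around the doubled coarse estimate, which is how the pyramid recovers a large displacement that a single-level local search would miss.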
Active pixel sensor array with multiresolution readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An imaging device formed as a monolithic complementary metal oxide semiconductor (CMOS) integrated circuit in an industry-standard CMOS process. The integrated circuit includes a focal plane array of pixel cells, each cell including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, and a charge coupled device section formed on the substrate adjacent the photogate, having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low-light imaging.
Multiscale geometric modeling of macromolecules II: Lagrangian representation
Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger's functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses, as well as multiresolution curvature characterizations. The resulting fine-resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interactions and ion channel dynamics, while our coarse-resolution representations highlight the compatibility of protein-ligand bindings and the possibility of protein-protein interactions. PMID:23813599
Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z
2014-01-01
Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media
NASA Astrophysics Data System (ADS)
Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo
2016-04-01
The proposed methodology was originally developed by our research team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) a multiresolution representation of heterogeneity, as well as of all other input and output variables; 3) an accurate, adaptive and efficient solution strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Each variable is therefore analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we present recent improvements to the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been considered as multiscale. A novel adaptive implicit Fup integration scheme is therefore developed that resolves all time scales within each global time step: the algorithm uses smaller time steps only on the lines where the solution changes rapidly.
The application of Fup basis functions enables continuous approximation in time, simple interpolation across different temporal lines, and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil, needed for the accurate calculation of spatial derivatives. Whereas the common approach for wavelets and splines uses a finite difference operator, we develop here a collocation operator that includes both solution values and the differential operator. In this way, the new, improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, the results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.
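The adaptive-grid idea, fine resolution only where the solution demands it, can be sketched with simple dyadic refinement driven by a linear-interpolation error test; this toy uses a generic hat-function criterion and a synthetic front-like profile, not the Fup basis or real aquifer data.

```python
import math

def refine_grid(f, a, b, tol, depth=12):
    """Dyadic refinement: insert the midpoint of an interval only when linear
    interpolation from its endpoints misses f at the midpoint by more than tol."""
    pts = {a, b}
    def recurse(lo, hi, d):
        mid = (lo + hi) / 2
        if d > 0 and abs(f(mid) - (f(lo) + f(hi)) / 2) > tol:
            pts.add(mid)
            recurse(lo, mid, d - 1)
            recurse(mid, hi, d - 1)
    recurse(a, b, depth)
    return sorted(pts)

# A front-like profile, steep near x = 0.3 and flat elsewhere: the adaptive
# grid clusters its points around the front and stays coarse in flat regions.
front = lambda x: math.tanh(50 * (x - 0.3))
grid = refine_grid(front, 0.0, 1.0, tol=1e-3)
```

The same principle, keeping only the basis functions whose coefficients exceed a tolerance, is what allows each variable to carry its own minimal adaptive grid.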
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Saurav, E-mail: saurav7188@gmail.com, E-mail: cyz118212@chemistry.iitd.ac.in; Chakravarty, Charusita
Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes.
Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.
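The pair-correlation approximation to the excess entropy can be sketched numerically: per particle and in units of kB, s2 = -2*pi*rho times the integral of [g ln g - g + 1] r^2 dr. The density and g(r) profiles below are toy inputs, not the simulation data.

```python
import math

def pair_entropy(r, g, rho):
    """Two-body excess entropy per particle, in units of k_B:
    s2 = -2*pi*rho * Int [g(r) ln g(r) - g(r) + 1] r^2 dr (trapezoidal rule)."""
    integrand = [((gi * math.log(gi) - gi + 1.0) if gi > 0 else 1.0) * ri * ri
                 for ri, gi in zip(r, g)]
    integral = sum((integrand[i] + integrand[i + 1]) / 2 * (r[i + 1] - r[i])
                   for i in range(len(r) - 1))
    return -2.0 * math.pi * rho * integral

r = [0.01 * i for i in range(1, 1001)]
# ideal gas: g = 1 everywhere gives zero pair entropy
ideal = pair_entropy(r, [1.0] * len(r), rho=0.033)
# a structured fluid (a coordination-shell peak in g) lowers the pair entropy
peaked = pair_entropy(r, [1.8 if 0.9 <= ri <= 1.1 else 1.0 for ri in r], rho=0.033)
```

Because the integrand is non-negative for any g > 0, s2 is never positive: any structuring of the fluid, whether by pressure or by ions, shows up as a more negative pair entropy, which is what makes the isotherm alignment possible.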
NASA Astrophysics Data System (ADS)
Prasad, Saurav; Chakravarty, Charusita
2016-06-01
NASA Astrophysics Data System (ADS)
Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.
2017-03-01
In the present context of global change, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant across time-scales (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrates discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) from a large-scale predictor (SLP over the Euro-Atlantic sector).
This approach consists of three steps: 1) decomposing the large-scale climate and hydrological signals (SLP field, precipitation or streamflow) using discrete wavelet multiresolution analysis; 2) generating a statistical downscaling model per time-scale; 3) summing all the scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of the basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in both precipitation and streamflow. For instance, the post-1980 period, characterized by particularly high amplitudes of interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation and streamflow on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
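The three-step scale-wise downscaling idea can be sketched with a crude two-scale decomposition (a moving average plus its residual) and one linear model per scale; the data are synthetic, not Seine records, and the moving-average split is a stand-in for discrete wavelet multiresolution analysis.

```python
import math, random

def moving_average(x, w=8):
    """Centered moving average: the 'slow' scale component."""
    return [sum(x[max(0, i - w):i + w + 1]) / len(x[max(0, i - w):i + w + 1])
            for i in range(len(x))]

def fit_linear(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return b, my - b * mx

def rmse(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

random.seed(2)
slp = [math.sin(i / 20) + random.gauss(0, 0.3) for i in range(400)]
slp_slow = moving_average(slp)
slp_fast = [a - b for a, b in zip(slp, slp_slow)]
# synthetic predictand: responds differently at the slow and fast scales
flow = [2.0 * s + 0.5 * f + random.gauss(0, 0.05)
        for s, f in zip(slp_slow, slp_fast)]

# steps 1-3: decompose predictand the same way, fit one model per scale, sum
flow_slow = moving_average(flow)
flow_fast = [a - b for a, b in zip(flow, flow_slow)]
models = [fit_linear(slp_slow, flow_slow), fit_linear(slp_fast, flow_fast)]
recon = [models[0][0] * s + models[0][1] + models[1][0] * f + models[1][1]
         for s, f in zip(slp_slow, slp_fast)]

# single-scale baseline: one regression on the raw predictor
b, c = fit_linear(slp, flow)
recon_single = [b * x + c for x in slp]
```

Because the slow and fast responses have different coefficients, the scale-wise model reconstructs the predictand with a lower error than the single regression, which is the effect the multiresolution ESD model exploits.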
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, William; Zucker, Jeremy; Baxter, Douglas
We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ODE-based optimization approach based on Marcelin's 1910 mass action equation is used to obtain the maximum entropy distribution, (2) the predicted metabolite concentrations are compared to those generally expected from experiment using a loss function from which post-translational regulation of enzymes is inferred, (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes, and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjogreen, B.; Sandham, N. D.; Hadjadj, A.; Kwak, Dochan (Technical Monitor)
2000-01-01
In a series of papers by Olsson (1994, 1995), Olsson & Oliger (1994), Strand (1994), Gerritsen & Olsson (1996), Yee et al. (1999a,b, 2000) and Sandham & Yee (2000), the nonlinear stability of the compressible Euler and Navier-Stokes equations, including physical boundaries, was studied, and the corresponding discrete analogue of nonlinearly stable high order schemes, including boundary schemes, was developed, extended and evaluated for various fluid flows. High order here refers to spatial schemes that are essentially fourth-order or higher away from shock and shear regions. The objective of this paper is to give an overview of the progress of the low dissipative high order shock-capturing schemes proposed by Yee et al. (1999a,b, 2000). This class of schemes consists of simple non-dissipative high order compact or non-compact central spatial differencings and adaptive nonlinear numerical dissipation operators to minimize the use of numerical dissipation. The amount of numerical dissipation is further minimized by applying the scheme to the entropy splitting form of the inviscid flux derivatives, and by rewriting the viscous terms to minimize odd-even decoupling before the application of the central scheme (Sandham & Yee). The efficiency and accuracy of these schemes are compared with spectral, TVD and fifth-order WENO schemes. A new approach of Sjogreen & Yee (2000) utilizing non-orthogonal multi-resolution wavelet basis functions as sensors to dynamically determine the appropriate amount of numerical dissipation to be added to the non-dissipative high order spatial scheme at each grid point will also be discussed.
Numerical experiments of long time integration of smooth flows, shock-turbulence interactions, direct numerical simulations of a 3-D compressible turbulent plane channel flow, and various mixing layer problems indicate that these schemes are especially suitable for practical complex problems in nonlinear aeroacoustics, rotorcraft dynamics, direct numerical simulation or large eddy simulation of compressible turbulent flows at various speeds including high-speed shock-turbulence interactions, and general long time wave propagation problems. These schemes, including entropy splitting, have also been extended to freestream preserving schemes on curvilinear moving grids for a thermally perfect gas (Vinokur & Yee 2000).
Allnér, Olof; Foloppe, Nicolas; Nilsson, Lennart
2015-01-22
Molecular dynamics simulations of E. coli glutaredoxin1 in water have been performed to relate the dynamical parameters and entropy obtained in NMR relaxation experiments with results extracted from simulated trajectory data. NMR relaxation is the most widely used experimental method to obtain data on the dynamics of proteins, but it is limited to relatively short timescales and to motions of backbone amides or, in some cases, ¹³C-H vectors. By relating the experimental data to the all-atom picture obtained in molecular dynamics simulations, valuable insights into the interpretation of the experiment can be gained. We have estimated the internal dynamics and their timescales by calculating the generalized order parameters (O) for different time windows. We then calculate the quasiharmonic entropy (S) and compare it to the entropy calculated from the NMR-derived generalized order parameter of the amide vectors. Special emphasis is put on characterizing dynamics that are not expressed through the motions of the amide group. The NMR and MD methods suffer from complementary limitations, with NMR being restricted to local vectors and dynamics on a timescale determined by the rotational diffusion of the solute, while in simulations it may be difficult to obtain sufficient sampling to ensure convergence of the results. We also evaluate the amount of sampling obtained with molecular dynamics simulations, and how it is affected by the length of individual simulations, by clustering of the sampled conformations. We find that two structural turns act as hinges, allowing the α helix between them to undergo large, long timescale motions that cannot be detected in the time window of the NMR dipolar relaxation experiments. We also show that the entropy obtained from the amide vector does not account for correlated motions of adjacent residues.
Finally, we show that the sampling in a total of 100 ns molecular dynamics simulation can be increased by around 50%, by dividing the trajectory into 10 replicas with different starting velocities.
NASA Astrophysics Data System (ADS)
Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.
2018-01-01
We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics within Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.
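As a schematic illustration only (notation assumed, not taken from the paper): writing n(x, t) for the density, τ for the relaxation time of the memory function, D for the diffusion coefficient, with a Caputo time derivative of order γ and a Riesz space derivative of order α, a commonly studied generalized Cattaneo-type equation with spatial and temporal fractality reads

```latex
\tau^{\gamma}\,\frac{\partial^{1+\gamma} n(x,t)}{\partial t^{1+\gamma}}
  \;+\; \frac{\partial n(x,t)}{\partial t}
  \;=\; D\,\frac{\partial^{\alpha} n(x,t)}{\partial |x|^{\alpha}},
\qquad 0 < \gamma \le 1,\quad 1 < \alpha \le 2 .
```

For γ = 1 and α = 2 this reduces to the classical Cattaneo (telegrapher's) equation, and additionally letting τ → 0 recovers ordinary diffusion. The paper's exact form, derived from its concrete memory-function model, may differ.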
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
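The maximum-entropy ingredient can be sketched on a toy problem: given a hypothetical 3-state prior MSM and a single constraint on the stationary average of an observable f, exponentially tilt the transition rows and solve for the multiplier by bisection. This is a generic max-ent update, not the authors' C2M2 code, and it omits trajectory-based constraints entirely:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix (left eigenvector)."""
    w, v = np.linalg.eig(P.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    return pi / pi.sum()

def tilt(P, f, lam):
    """Max-ent style update: P_ij -> P_ij * exp(lam * f_j), rows renormalized."""
    Q = P * np.exp(lam * f)[None, :]
    return Q / Q.sum(axis=1, keepdims=True)

def correct_msm(P, f, target, lo=-20.0, hi=20.0, tol=1e-10):
    """Bisect on the Lagrange multiplier until the stationary mean of f
    under the tilted chain matches the target value."""
    def mean(lam):
        return stationary(tilt(P, f, lam)) @ f
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return tilt(P, f, 0.5 * (lo + hi))

# Hypothetical 3-state prior MSM and per-state observable f
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
f = np.array([0.0, 1.0, 2.0])
Q = correct_msm(P, f, target=1.3)   # corrected MSM hitting the constraint
```

The tilted chain is, among all chains satisfying the constraint, the one closest to the prior in relative entropy, which is the sense in which the whole MSM is corrected "minimally".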
On the Stability of Shocks with Particle Pressure
NASA Astrophysics Data System (ADS)
Finazzi, Stefano; Vietri, Mario
2008-11-01
We perform a linear stability analysis for corrugations of a Newtonian shock, with particle pressure included, for an arbitrary diffusion coefficient. We study first the dispersion relation for homogeneous media, showing that, besides the conventional pressure waves and entropy/vorticity disturbances, two new perturbation modes exist, dominated by the particles' pressure and damped by diffusion. We show that, due to particle diffusion into the upstream region, the fluid will be perturbed upstream as well; we treat these perturbations in the short-wavelength (WKBJ) regime. We then show how to construct a corrugational mode for the shock itself, that is, one where the shock executes free oscillations (possibly damped or growing) and sheds perturbations away from itself; this global mode requires the new modes. Then, using the perturbed Rankine-Hugoniot conditions, we show that this leads to the determination of the corrugational eigenfrequency. We solve numerically the equations for the eigenfrequency in the WKBJ regime for the models of Amato & Blasi, showing that they are stable. We then discuss the differences between our treatment and previous work.
NASA Astrophysics Data System (ADS)
Gardiner, Thomas
2013-10-01
Anisotropic thermal diffusion in magnetized plasmas is an important physical phenomenon for a diverse set of physical conditions ranging from astrophysical plasmas to MFE and ICF. Yet numerically simulating this phenomenon accurately poses significant challenges when the computational mesh is misaligned with respect to the magnetic field. Particularly when the temperature gradients are unresolved, one frequently finds entropy-violating solutions, with heat flowing from cold to hot zones, for χ∥/χ⊥ ≥ 10², which is substantially smaller than the range of interest, which can reach 10¹⁰ or higher. In this talk we present a new implicit algorithm for solving the anisotropic thermal diffusion equations and demonstrate its characteristics on what has become a fairly standard set of test problems in the literature. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2013-5687A.
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.
1998-02-01
We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
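The coefficient-width statistic can be sketched with PyWavelets on a surrogate series. The wavelet choice (db2) and the random-walk surrogate are assumptions; real R-R data would be needed to reproduce the discriminating 16-32 interval window (detail levels 4-5 in this indexing):

```python
import numpy as np
import pywt

def wavelet_widths(rr, wavelet="db2", max_level=6):
    """Std of detail coefficients per level m; level m probes scale 2**m
    heartbeat intervals (the paper's window corresponds to m = 4, 5)."""
    coeffs = pywt.wavedec(rr, wavelet, level=max_level)
    # coeffs = [a_max, d_max, ..., d_1], so d_m sits at index -m
    return {m: float(np.std(coeffs[-m])) for m in range(1, max_level + 1)}

rng = np.random.default_rng(1)
# Toy surrogate R-R series only (random walk around 0.8 s)
normal = np.cumsum(rng.standard_normal(1024)) * 0.01 + 0.8
widths = wavelet_widths(normal)
```

Classification then reduces to thresholding the width at the discriminating scale, since the abstract reports that the two patient groups' widths form disjoint sets there.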
NASA Astrophysics Data System (ADS)
Li, Bao Qiong; Wang, Xue; Li Xu, Min; Zhai, Hong Lin; Chen, Jing; Liu, Jin Jin
2018-01-01
Fluorescence spectroscopy with an excitation-emission matrix (EEM) is a fast and inexpensive technique and has been applied to the detection of a very wide range of analytes. However, serious scattering and overlapping signals hinder the applications of EEM spectra. In this contribution, the multi-resolution capability of Tchebichef moments was investigated in depth and applied to the analysis of two EEM data sets (data set 1 consisted of valine-tyrosine-valine, tryptophan-glycine and phenylalanine, and data set 2 included vitamin B1, vitamin B2 and vitamin B6) for the first time. By means of Tchebichef moments of different orders, different information in the EEM spectra can be represented. It is owing to this multi-resolution capability that the overlapping problem was solved and the information of chemicals and scatterings was separated. The obtained results demonstrated that the Tchebichef moment method is very effective, providing a promising tool for the analysis of EEM spectra. It is expected that the applications of the Tchebichef moment method could be developed and extended to complex systems such as biological fluids, food and the environment, to deal with practical problems (overlapped peaks, unknown interferences, baseline drifts, and so on) with other spectra.
Wavelet processing techniques for digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse-to-fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
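The reconstruct-with-reweighted-scales step can be sketched with PyWavelets' 2-D transform. The db2 wavelet, level count and weight values are illustrative assumptions, not the paper's choices:

```python
import numpy as np
import pywt

def enhance(image, weights, wavelet="db2"):
    """Reweight detail levels of a 2-D wavelet transform and reconstruct.

    weights[i] multiplies the i-th detail level (coarse to fine); constant,
    linear or exponential weight sequences mimic the scale-space emphasis
    described in the abstract. The approximation band is left untouched.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=len(weights))
    new = [coeffs[0]]
    for w, (cH, cV, cD) in zip(weights, coeffs[1:]):
        new.append((w * cH, w * cV, w * cD))
    return pywt.waverec2(new, wavelet)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
out = enhance(img, weights=[1.0, 2.0, 4.0])  # exponential emphasis of fine scales
```

With all weights equal to 1 the transform's perfect-reconstruction property returns the original image, which is a useful sanity check on any such pipeline.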
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu
Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable with coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
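A resolution-tunable rigidity density in this spirit can be sketched as a sum of Gaussian kernels whose width sets the resolution; the kernel form and parameters here are assumptions, and the persistent-homology filtration built on top of the density is not shown:

```python
import numpy as np

def rigidity_density(points, grid, sigma):
    """FRI-style rigidity density: sum of Gaussian kernels of width sigma.

    Large sigma gives a coarse (low-resolution) density, small sigma a fine
    one; filtering the sublevel/superlevel sets of this density is what the
    multiresolution filtration analyzes instead of the raw point cloud.
    """
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

rng = np.random.default_rng(3)
pts = rng.random((50, 2))                       # toy point cloud
xs = np.linspace(0.0, 1.0, 20)
grid = np.array([(x, y) for x in xs for y in xs])
coarse = rigidity_density(pts, grid, sigma=0.3)   # sees only large-scale shape
fine = rigidity_density(pts, grid, sigma=0.03)    # resolves individual points
```

The computational saving comes from the coarse density varying slowly, so large datasets can be represented on modest grids at the scale of interest.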
Estimating Thermal Inertia with a Maximum Entropy Boundary Condition
NASA Astrophysics Data System (ADS)
Nearing, G.; Moran, M. S.; Scott, R.; Ponce-Campos, G.
2012-04-01
Thermal inertia, P [Jm-2s-1/2K-1], is a physical property of the land surface which determines resistance to temperature change under seasonal or diurnal heating. It is a function of the volumetric heat capacity, c [Jm-3K-1], and thermal conductivity, k [Wm-1K-1], of the soil near the surface: P = √(ck). Thermal inertia of soil varies with moisture content due to the difference between the thermal properties of water and air, and a number of studies have demonstrated that it is feasible to estimate soil moisture given thermal inertia (e.g. Lu et al., 2009; Murray and Verhoef, 2007). We take the common approach to estimating thermal inertia using measurements of surface temperature by modeling the Earth's surface as a 1-dimensional homogeneous diffusive half-space. In this case, surface temperature is a function of the ground heat flux (G) boundary condition and thermal inertia, and a daily value of P was estimated by matching measured and modeled diurnal surface temperature fluctuations. The difficulty is in measuring G; we demonstrate that the new maximum entropy production (MEP) method for partitioning net radiation into surface energy fluxes (Wang and Bras, 2011) provides a suitable boundary condition for estimating P. Adding the diffusion representation of heat transfer in the soil reduces the number of free parameters in the MEP model from two to one, and we provide a sensitivity analysis which suggests that, for the purpose of estimating P, it is preferable to parameterize the coupled MEP-diffusion model by the ratio of the thermal inertia of the soil to the effective thermal inertia of convective heat transfer to the atmosphere.
We used this technique to estimate thermal inertia at two semiarid, non-vegetated locations in the Walnut Gulch Experimental Watershed in southeast AZ, USA and compared these estimates to estimates of P made using the Xue and Cracknell (1995) solution for a linearized ground heat flux boundary condition, and we found that the MEP-diffusion model produced superior thermal inertia estimates. The MEP-diffusion estimates also agreed well with P estimates made using a boundary condition measured with buried flux plates. We further demonstrated the new method using diurnal surface temperature fluctuations estimated from day/night MODIS image pairs and, excluding instances where the soil was extremely dry, found a strong relationship between estimated thermal inertia and measured 5 cm soil moisture. Lu, S., Ju, Z.Q., Ren, T.S. & Horton, R. (2009). A general approach to estimate soil water content from thermal inertia. Agricultural and Forest Meteorology, 149, 1693-1698. Murray, T. & Verhoef, A. (2007). Moving towards a more mechanistic approach in the determination of soil heat flux from remote measurements - I. A universal approach to calculate thermal inertia. Agricultural and Forest Meteorology, 147, 80-87. Wang, J.F. & Bras, R.L. (2011). A model of evapotranspiration based on the theory of maximum entropy production. Water Resources Research, 47. Xue, Y. & Cracknell, A.P. (1995). Advanced thermal inertia modeling. International Journal of Remote Sensing, 16, 431-446.
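For the homogeneous diffusive half-space described above, a sinusoidal ground heat flux G = G₀ cos(ωt) produces a surface temperature oscillation of amplitude G₀/(P√ω), a textbook relation that can be inverted for P. The sketch below uses this closed form with assumed illustrative values; it is not the paper's full MEP-diffusion estimator:

```python
import numpy as np

OMEGA = 2.0 * np.pi / 86400.0   # diurnal angular frequency [s^-1]

def surface_temp_amplitude(g0, p):
    """Half-space driven by G = g0*cos(w*t): temperature amplitude g0/(P*sqrt(w))."""
    return g0 / (p * np.sqrt(OMEGA))

def estimate_thermal_inertia(g0, dT):
    """Invert the amplitude relation for P [J m^-2 s^-1/2 K^-1]."""
    return g0 / (dT * np.sqrt(OMEGA))

true_P = 1200.0   # plausible dry-soil thermal inertia (assumed value)
g0 = 150.0        # ground-heat-flux amplitude [W m^-2] (assumed value)
dT = surface_temp_amplitude(g0, true_P)   # "measured" diurnal amplitude
P_hat = estimate_thermal_inertia(g0, dT)
```

In practice G₀ is the hard part to measure, which is exactly why the abstract substitutes the MEP partitioning of net radiation for a direct flux measurement.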
The effect of different control point sampling sequences on convergence of VMAT inverse planning
NASA Astrophysics Data System (ADS)
Pardo Montero, Juan; Fenwick, John D.
2011-04-01
A key component of some volumetric-modulated arc therapy (VMAT) optimization algorithms is the progressive addition of control points to the optimization. This idea was introduced in Otto's seminal VMAT paper, in which a coarse sampling of control points was used at the beginning of the optimization and new control points were progressively added one at a time. A different form of the methodology is also present in the RapidArc optimizer, which adds new control points in groups called 'multiresolution levels', each doubling the number of control points in the optimization. This progressive sampling accelerates convergence, improving the results obtained, and has similarities with the ordered subset algorithm used to accelerate iterative image reconstruction. In this work we have used a VMAT optimizer developed in-house to study the performance of optimization algorithms which use different control point sampling sequences, most of which fall into three different classes: doubling sequences, which add new control points in groups such that the number of control points in the optimization is (roughly) doubled; Otto-like progressive sampling, which adds one control point at a time; and equi-length sequences, which contain several multiresolution levels each with the same number of control points. Results are presented in this study for two clinical geometries, prostate and head-and-neck treatments. A dependence of the quality of the final solution on the number of starting control points has been observed, in agreement with previous works. We have found that some sequences, especially E20 and E30 (equi-length sequences with 20 and 30 multiresolution levels, respectively), generate better results than a 5 multiresolution level RapidArc-like sequence.
The final value of the cost function is reduced by up to 20%, such reductions leading to small improvements in dosimetric parameters characterizing the treatments: slightly more homogeneous target doses and better sparing of the organs at risk.
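The three sequence classes can be sketched directly; the start and end values below are illustrative, not taken from the study:

```python
def doubling_sequence(n_start, n_final):
    """Control points per multiresolution level, roughly doubling each level."""
    seq = [n_start]
    while seq[-1] < n_final:
        seq.append(min(2 * seq[-1], n_final))
    return seq

def equi_length_sequence(n_levels, n_final):
    """E-type sequence: n_levels levels, each adding the same number of points
    (e.g. n_levels=20 gives an E20-style sequence)."""
    step = n_final // n_levels
    return [step * (i + 1) for i in range(n_levels)][:-1] + [n_final]

def otto_like_sequence(n_start, n_final):
    """Otto-style progressive sampling: one control point added at a time."""
    return list(range(n_start, n_final + 1))
```

Each entry is the number of control points active during that stage of the optimization; the optimizer runs to (partial) convergence at each level before the next group is added.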
A multiresolution prostate representation for automatic segmentation in magnetic resonance images.
Alvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2017-04-01
Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose onto the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed over magnetic resonance imaging (MRI), taking advantage of its high soft tissue contrast. Nevertheless, as human intervention is a time-consuming task with high intra- and interobserver variability rates, (semi-)automatic organ delineation tools have emerged to cope with these challenges, reducing the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of most similar prostates in a dataset. The proposed method starts by selecting the set of most similar prostates with respect to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, capturing the main relevant features of the organ boundary. Afterward, the new prostate is automatically segmented by combining the nonrigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed based on morphological operations. The proposed approach was evaluated with respect to expert manual segmentation under a leave-one-out scheme using two public datasets, obtaining averaged Dice coefficients of 82% ± 0.07 and 83% ± 0.06, and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. The proposed multiresolution representation provides a feature space that follows a local salient point criterion and a global rule for the spatial configuration among these points to find the most similar prostates.
This strategy suggests an easy adaptation into the clinical routine as a supporting tool for annotation. © 2017 American Association of Physicists in Medicine.
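The label-fusion step can be illustrated with a crude stand-in: plain weighted voting over registered binary expert masks with scalar similarity weights (the paper's strategy is patch-based and locally weighted; none of that detail is reproduced here, and the synthetic masks are assumptions):

```python
import numpy as np

def fuse_delineations(masks, weights):
    """Weighted voting over registered expert masks: label 1 where the
    normalized weighted vote exceeds 0.5."""
    masks = np.asarray(masks, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    vote = np.tensordot(w, masks, axes=1)
    return (vote > 0.5).astype(np.uint8)

def dice(a, b):
    """Dice overlap coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(4)
truth = np.zeros((32, 32), np.uint8)
truth[8:24, 8:24] = 1
# Three noisy "expert" masks: flip 5% of the pixels of the truth
experts = [np.where(rng.random(truth.shape) < 0.05, 1 - truth, truth)
           for _ in range(3)]
fused = fuse_delineations(experts, weights=[1.0, 1.0, 1.0])
```

Voting suppresses independent per-expert errors, which is why fusing several registered delineations typically beats any single atlas.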
SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.
Li, Dengwang; Yin, Yong
2012-06-01
In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on an improved multi-resolution demons strategy. 4DCT images of lung cancer patients were collected from a General Electric Discovery ST CT scanner at our cancer hospital. All of the images were sorted into groups and reconstructed according to their phases, and each respiratory cycle was divided into 10 phases with a time interval of 10%. Firstly, in our improved demons algorithm we use the gradients of both the reference and floating images as deformation forces and redistribute the forces according to the proportion of the two forces. Furthermore, we introduce an intermediate variable into the cost function to decrease the noise in the registration process. At the same time, a Gaussian multi-resolution strategy and the BFGS method for optimization are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase-images of 4DCT from a lung cancer patient, choosing the images at exhalation as the reference images; all other images were registered onto the reference images. We compared the difference between the floating and reference images before and after registration at two landmarks chosen by an experienced clinician. The method shows good accuracy, demonstrated by a higher similarity measure for registration of 4DCT, and it can register large deformations precisely. Finally, we obtain the tumor target from the deformation fields using the proposed method, which is more accurate than the internal margin (IM) expanded from the Gross Tumor Volume (GTV). Furthermore, we achieve tumor and normal tissue tracking and dose accumulation using 4DCT data. An efficient deformable registration algorithm was proposed using a multi-resolution demons algorithm for 4DCT. © 2012 American Association of Physicists in Medicine.
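The core registration loop can be sketched as a minimal single-resolution Thirion-style demons update (the abstract's algorithm adds bilateral gradient forces, an intermediate variable in the cost function, a Gaussian pyramid and BFGS; none of that is reproduced here, and the blob images are synthetic):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(img, disp):
    """Sample img at x + disp(x) with linear interpolation."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    return map_coordinates(img, [yy + disp[0], xx + disp[1]],
                           order=1, mode="nearest")

def demons(fixed, moving, n_iter=100, sigma=1.0):
    """Minimal Thirion demons: force along the fixed-image gradient,
    Gaussian smoothing of the displacement field as regularization."""
    gy, gx = np.gradient(fixed)
    disp = np.zeros((2,) + fixed.shape)
    for _ in range(n_iter):
        diff = fixed - warp(moving, disp)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        disp[0] += diff * gy / denom
        disp[1] += diff * gx / denom
        disp = gaussian_filter(disp, sigma=(0, sigma, sigma))
    return disp

# Synthetic test: a Gaussian blob and the same blob shifted by ~2 pixels
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 50.0)
moving = np.exp(-((y - 34) ** 2 + (x - 30) ** 2) / 50.0)
disp = demons(fixed, moving)
ssd_before = np.sum((fixed - moving) ** 2)
ssd_after = np.sum((fixed - warp(moving, disp)) ** 2)
```

A multi-resolution version runs this loop on a coarse-to-fine image pyramid, upsampling the displacement field between levels, which is what makes large deformations tractable.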
Generative complexity of Gray-Scott model
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2018-03-01
In the Gray-Scott reaction-diffusion system one reactant is constantly fed into the system, while another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other reactant from the system determine the configurations of concentration profiles: stripes, spots, waves. We calculate the generative complexity, a morphological complexity of concentration profiles grown from a point-wise perturbation of the medium, of the Gray-Scott system for a range of the feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
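A minimal Gray-Scott integrator plus the Shannon-entropy part of the complexity measure can be sketched as follows; the parameter values, grid size and step count are illustrative, not those scanned in the paper:

```python
import numpy as np

def laplacian(z):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def gray_scott(n=64, steps=2000, F=0.035, k=0.065, Du=0.16, Dv=0.08):
    """Grow a pattern from a point-wise perturbation. F is the feed rate,
    k the removal rate, in the common u/v formulation (values assumed)."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    v[n // 2 - 2:n // 2 + 2, n // 2 - 2:n // 2 + 2] = 0.5  # seed perturbation
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1.0 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return v

def shannon_entropy(field, bins=16):
    """Shannon entropy (bits) of the histogram of concentration values."""
    hist, _ = np.histogram(field, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

pattern = gray_scott()
H = shannon_entropy(pattern)
```

Sweeping F and k and recording such complexity measures per grown pattern reproduces the kind of parameter-space map the abstract describes.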
Hoyos-Leyva, Javier D; Bello-Pérez, Luis A; Alvarez-Ramirez, J
2018-09-01
Spherical aggregates can be obtained from taro starch by spray-drying without using bonding agents. Accurate information about the thermal behaviour of spherical aggregates can provide valuable information for assessing their application as an encapsulant. Spherical aggregates of taro starch were obtained by spray-drying and analyzed using dynamic vapour sorption. The use of the Guggenheim, Anderson and de Boer (GAB) model indicated a Type II isotherm pattern with weaker interactions in the multilayer region. Differential enthalpy and entropy estimates reflected a mesoporous microstructure, implying that energetic mechanisms dominate over transport mechanisms in the sorption process. The limitation by energetic mechanisms was corroborated with enthalpy-entropy compensation estimates. The diffusivity coefficient was of the order of 10⁻⁸ m²·s⁻¹, which is in line with results obtained for common materials used for encapsulation purposes. The thermodynamic properties and the lack of a bonding agent indicated the viability of spherical aggregates of taro starch for encapsulation of biocompounds. Copyright © 2018 Elsevier Ltd. All rights reserved.
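A GAB isotherm fit can be sketched with SciPy; the parameter values and synthetic sorption data below are assumptions for illustration only, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, m0, c, k):
    """GAB isotherm: equilibrium moisture content vs water activity aw,
    with monolayer capacity m0 and energy constants c and k."""
    return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

# Synthetic sorption data generated from assumed parameters + 1% noise
aw = np.linspace(0.1, 0.9, 17)
true = dict(m0=0.08, c=10.0, k=0.75)
rng = np.random.default_rng(5)
m = gab(aw, **true) * (1.0 + 0.01 * rng.standard_normal(aw.size))

popt, _ = curve_fit(gab, aw, m, p0=[0.05, 5.0, 0.5])
```

The fitted m0 at several temperatures is what feeds the differential enthalpy/entropy estimates (via Clausius-Clapeyron-type analysis) mentioned in the abstract.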
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
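The maximum-entropy ingredient of such a pipeline can be sketched via its convex dual: fit the density p(x) ∝ exp(Σ λₖ xᵏ) whose moments match the sample moments. This omits the order-statistics scoring machinery entirely and is only a moment-constrained max-ent fit on a bounded grid (grid range and moment count are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(samples, grid, n_moments=2):
    """Maximum entropy density on `grid` matching the first n_moments
    sample moments, found by minimizing the convex dual
    log Z(lam) - lam . m_hat."""
    powers = np.arange(1, n_moments + 1)
    feats = grid[:, None] ** powers[None, :]              # f_k(x) = x^k
    m_hat = (samples[:, None] ** powers[None, :]).mean(0)  # sample moments

    def dual(lam):
        logw = feats @ lam
        mmax = logw.max()                                  # overflow guard
        logZ = mmax + np.log(np.trapz(np.exp(logw - mmax), grid))
        return logZ - lam @ m_hat

    res = minimize(dual, np.zeros(n_moments), method="BFGS")
    w = np.exp(feats @ res.x)
    return w / np.trapz(w, grid)

rng = np.random.default_rng(6)
samples = rng.normal(1.0, 0.5, 5000)
grid = np.linspace(-2.0, 4.0, 400)
pdf = maxent_density(samples, grid)
```

With two moment constraints the solution is (a truncated) Gaussian, which makes this a convenient self-check before moving to harder target densities.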
Vellela, Melissa; Qian, Hong
2009-10-06
Schlögl's model is the canonical example of a chemical reaction system that exhibits bistability. Because biological examples of bistability and switching behaviour are increasingly numerous, this paper presents an integrated deterministic, stochastic and thermodynamic analysis of the model. After a brief review of the deterministic and stochastic modelling frameworks, the concepts of chemical and mathematical detailed balance are discussed and non-equilibrium conditions are shown to be necessary for bistability. Thermodynamic quantities such as the flux, chemical potential and entropy production rate are defined and compared across the two models. In the bistable region, the stochastic model exhibits an exchange of global stability between the two stable states under changes in the pump parameters and volume size. The stochastic entropy production rate shows a sharp transition that mirrors this exchange. A new hybrid model that includes continuous diffusion and discrete jumps is suggested to deal with the multiscale dynamics of the bistable system. Accurate approximations of the exponentially small eigenvalue associated with the time scale of this switching, and of the full time-dependent solution, are calculated using MATLAB. A breakdown of previously known asymptotic approximations on small volume scales is observed through comparison with these and with Monte Carlo results. Finally, the appendix illustrates how the diffusion approximation of the chemical master equation can fail to correctly represent the mesoscopically interesting steady-state behaviour of the system.
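The stochastic side of such an analysis rests on exact simulation of the chemical master equation. A minimal Gillespie (SSA) simulation of the Schlögl reactions can be sketched as follows; the rate constants and pool sizes are a classic illustrative bistable parameter set, assumed here rather than taken from the paper:

```python
import random

def gillespie_schlogl(x0=250, t_end=10.0, seed=1):
    """Gillespie stochastic simulation of the Schlogl model.

    Reactions (A and B pools held at fixed particle numbers):
        A + 2X -> 3X   (c1)      3X -> A + 2X   (c2)
        B -> X         (c3)      X -> B         (c4)
    Parameters are an assumed classic bistable set, for illustration.
    """
    c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
    a_pool, b_pool = 1e5, 2e5
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_end:
        # Propensities of the four reactions at the current state x.
        props = [c1 * a_pool * x * (x - 1) / 2.0,
                 c2 * x * (x - 1) * (x - 2) / 6.0,
                 c3 * b_pool,
                 c4 * x]
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)          # time to next reaction
        r = rng.random() * total             # pick which reaction fires
        acc = 0.0
        for i, p in enumerate(props):
            acc += p
            if r < acc:
                x += 1 if i in (0, 2) else -1
                break
    return x

final_x = gillespie_schlogl()
print(final_x)
```

Running many such trajectories from different seeds reveals the two metastable copy-number states and the rare switching events whose time scale the paper's small eigenvalue quantifies.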
NASA Astrophysics Data System (ADS)
Lalneihpuii, R.; Shrivastava, Ruchi; Mishra, Raj Kumar
2018-05-01
Using a statistical mechanical model with a square-well (SW) interatomic potential within the framework of the mean spherical approximation, we determine the composition-dependent microscopic correlation functions, interdiffusion coefficients, surface tension and chemical ordering in Ag-Cu melts. Further, the Dzugutov universal scaling law of normalized diffusion is verified with the SW potential in binary mixtures. We find that the excess entropy scaling law is valid for SW binary melts. The partial and total structure factors in the attractive and repulsive regions of the interacting potential are evaluated and then Fourier transformed to obtain partial and total radial distribution functions. A good agreement between theoretical and experimental values for the total structure factor and the reduced radial distribution function is observed, which consolidates our model calculations. The well-known Bhatia-Thornton correlation functions are also computed for Ag-Cu melts. The concentration-concentration correlations in the long wavelength limit in liquid Ag-Cu alloys have been analytically derived through the long wavelength limit of the partial correlation functions and applied to demonstrate the chemical ordering and interdiffusion coefficients in binary liquid alloys. We also investigate the concentration-dependent viscosity coefficients and surface tension using the computed diffusion data in these alloys. Our computed results for the structure, transport and surface properties of liquid Ag-Cu alloys obtained with the square-well interatomic interaction are fully consistent with the corresponding experimental values.
Thermodynamic framework for compact q-Gaussian distributions
NASA Astrophysics Data System (ADS)
Souza, Andre M. C.; Andrade, Roberto F. S.; Nobre, Fernando D.; Curado, Evaldo M. F.
2018-02-01
Recent works have associated systems of particles, characterized by short-range repulsive interactions and evolving under overdamped motion, to a nonlinear Fokker-Planck equation within the class of nonextensive statistical mechanics, with a nonlinear diffusion contribution whose exponent is given by ν = 2 - q. The particular case ν = 2 applies to interacting vortices in type-II superconductors, whereas ν > 2 covers systems of particles characterized by short-range power-law interactions, where correlations among particles are taken into account. In the former case, several studies presented a consistent thermodynamic framework based on the definition of an effective temperature θ (presenting experimental values much higher than typical room temperatures T, so that thermal noise could be neglected), conjugated to a generalized entropy sν (with ν = 2). Herein, the whole thermodynamic scheme is revisited and extended to systems of particles interacting repulsively, through short-ranged potentials, described by an entropy sν, with ν > 1, covering the ν = 2 (vortices in type-II superconductors) and ν > 2 (short-range power-law interactions) physical examples. One basic requirement concerns a cutoff in the equilibrium distribution Peq(x) , approached due to a confining external harmonic potential, ϕ(x) = αx2 / 2 (α > 0). The main results achieved are: (a) The definition of an effective temperature θ conjugated to the entropy sν; (b) The construction of a Carnot cycle, whose efficiency is shown to be η = 1 -(θ2 /θ1) , where θ1 and θ2 are the effective temperatures associated with two isothermal transformations, with θ1 >θ2; (c) Thermodynamic potentials, Maxwell relations, and response functions. The present thermodynamic framework, for a system of interacting particles under the above-mentioned conditions, and associated to an entropy sν, with ν > 1, certainly enlarges the possibility of experimental verifications.
Gharekhan, Anita H; Arora, Siddharth; Oza, Ashok N; Sureshkumar, Mundan B; Pradhan, Asima; Panigrahi, Prasanta K
2011-08-01
Using the multiresolution ability of wavelets and the effectiveness of singular value decomposition (SVD) in identifying statistically robust parameters, we find a number of local and global features capturing spectral correlations in the co- and cross-polarized channels of human breast tissues at different scales. The copolarized component, being sensitive to intrinsic fluorescence, shows different behavior for normal, benign, and cancerous tissues in the emission domain of known fluorophores, whereas the perpendicular component, being more prone to the diffusive effect of scattering, reveals differences in the kernel-smoother density estimate applied to the principal components between malignant, normal, and benign tissues. The eigenvectors corresponding to the dominant eigenvalues of the correlation matrix in SVD also exhibit significant differences between the three tissue types, which clearly reflects the differences in spectral correlation behavior. Interestingly, the most significant distinguishing feature manifests in the perpendicular component, corresponding to the porphyrin emission range in the cancerous tissue. The fact that the perpendicular component is strongly influenced by depolarization, and that porphyrin emission in cancerous tissue has been found to be strongly depolarized, may be the possible cause of the above observation.
Towards Online Multiresolution Community Detection in Large-Scale Networks
Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim
2011-01-01
The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
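The local-expansion idea can be sketched as follows. The quality function used here is a generic internal/external edge ratio, a simple stand-in for (not the same as) the paper's quality function:

```python
def local_expand(adj, seed):
    """Greedy local community expansion from a seed vertex.

    Grow the community while adding the best frontier vertex improves
    a simple quality: internal edges / (internal + boundary edges).
    adj maps each vertex to a list of its neighbors.
    """
    comm = {seed}

    def quality(c):
        internal = sum(1 for u in c for v in adj[u] if v in c) / 2
        external = sum(1 for u in c for v in adj[u] if v not in c)
        total = internal + external
        return internal / total if total else 0.0

    improved = True
    while improved:
        improved = False
        frontier = {v for u in comm for v in adj[u]} - comm
        best, best_q = None, quality(comm)
        for v in frontier:
            q = quality(comm | {v})
            if q > best_q:
                best, best_q = v, q
        if best is not None:
            comm.add(best)
            improved = True
    return comm

# Two triangles joined by a single bridge edge: expansion from vertex 0
# should recover the first triangle without ever seeing the full graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
comm = local_expand(adj, 0)
print(sorted(comm))
```

Only the neighborhoods of visited vertices are touched, which is what makes this style of method usable when global information about the network is unavailable.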
A qualitative multiresolution model for counterterrorism
NASA Astrophysics Data System (ADS)
Davis, Paul K.
2006-05-01
This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
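The core ingredients (a signed-distance field, a wavelet transform, and coefficient thresholding) can be illustrated in one dimension. This Haar-based sketch is a toy analogue of the volumetric method, not the authors' implementation:

```python
def haar_fwd(x):
    """Full 1-D Haar decomposition (len(x) must be a power of two)."""
    out, approx = [], list(x)
    while len(approx) > 1:
        pairs = [(approx[2 * i], approx[2 * i + 1])
                 for i in range(len(approx) // 2)]
        approx = [(a + b) / 2 for a, b in pairs]   # coarse averages
        out = [(a - b) / 2 for a, b in pairs] + out  # detail coefficients
    return approx + out

def haar_inv(c):
    """Inverse of haar_fwd."""
    approx, rest = c[:1], c[1:]
    while rest:
        d, rest = rest[:len(approx)], rest[len(approx):]
        approx = [v for a, b in zip(approx, d) for v in (a + b, a - b)]
    return approx

# Toy 1-D signed distance to a "surface" at x = 5, compressed by
# zeroing the finest-scale detail coefficients via thresholding.
signal = [i - 5.0 for i in range(16)]
coeffs = haar_fwd(signal)
kept = [c if abs(c) > 0.6 else 0.0 for c in coeffs]
recon = haar_inv(kept)
err = max(abs(a - b) for a, b in zip(signal, recon))
print(err)
```

Thresholding discards the small fine-scale coefficients while the reconstruction error stays bounded near the surface; the paper's multiresolution support plays the analogous role of deciding which coefficients are significant at each scale.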
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1988-01-01
A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^-1_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).
Umanodan, Tomokazu; Fukukura, Yoshihiko; Kumagae, Yuichi; Shindo, Toshikazu; Nakajo, Masatoyo; Takumi, Koji; Nakajo, Masanori; Hakamada, Hiroto; Umanodan, Aya; Yoshiura, Takashi
2017-04-01
To determine the diagnostic performance of apparent diffusion coefficient (ADC) histogram analysis in diffusion-weighted (DW) magnetic resonance imaging (MRI) for differentiating adrenal adenoma from pheochromocytoma. We retrospectively evaluated 52 adrenal tumors (39 adenomas and 13 pheochromocytomas) in 47 patients (21 men, 26 women; mean age, 59.3 years; range, 16-86 years) who underwent DW 3.0T MRI. Histogram parameters of ADC (b-values of 0 and 200 [ADC200], 0 and 400 [ADC400], and 0 and 800 s/mm2 [ADC800]) - mean, variance, coefficient of variation (CV), kurtosis, skewness, and entropy - were compared between adrenal adenomas and pheochromocytomas using the Mann-Whitney U-test. Receiver operating characteristic (ROC) curves for the histogram parameters were generated to differentiate adrenal adenomas from pheochromocytomas. Sensitivity and specificity were calculated using a threshold criterion that maximizes the average of sensitivity and specificity. Variance and CV of ADC800 were significantly higher in pheochromocytomas than in adrenal adenomas (P < 0.001 and P = 0.001, respectively). With all b-value combinations, the entropy of ADC was significantly higher in pheochromocytomas than in adrenal adenomas (all P ≤ 0.001), and showed the highest area under the ROC curve among the ADC histogram parameters for diagnosing adrenal adenomas (ADC200, 0.82; ADC400, 0.87; ADC800, 0.92), with sensitivity of 84.6% and specificity of 84.6% (cutoff, ≤2.82) for ADC200; sensitivity of 89.7% and specificity of 84.6% (cutoff, ≤2.77) for ADC400; and sensitivity of 94.9% and specificity of 92.3% (cutoff, ≤2.67) for ADC800. ADC histogram analysis of DW MRI can help differentiate adrenal adenoma from pheochromocytoma. Level of Evidence: 3. J. Magn. Reson. Imaging 2017;45:1195-1203. © 2016 International Society for Magnetic Resonance in Medicine.
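Among the histogram parameters, entropy carried the most diagnostic weight. A minimal sketch of a first-order histogram-entropy computation follows; the bin count and the test values are arbitrary illustrative choices, not the study's:

```python
import math

def histogram_entropy(values, bins=32):
    """Shannon entropy (base 2) of a histogram of values.

    A sketch of the kind of first-order histogram parameter used in
    ADC analysis; the bin count is an arbitrary choice here.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp top edge
        counts[idx] += 1
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# A broad (heterogeneous) ADC distribution has higher entropy than a
# narrow (homogeneous) one, which is the behavior exploited to
# separate pheochromocytoma from adenoma.
narrow = [1.0 + 0.01 * (i % 3) for i in range(300)]
broad = [0.5 + 0.005 * i for i in range(300)]
print(histogram_entropy(narrow) < histogram_entropy(broad))
```

Variance, CV, skewness, and kurtosis are computed from the same voxel-value list, so a whole parameter vector per tumor comes almost for free once the ADC map is in hand.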
Wave theory of turbulence in compressible media
NASA Technical Reports Server (NTRS)
Kentzer, C. P.
1975-01-01
An acoustical theory of turbulence was developed to aid in the study of the generation of sound in turbulent flows. The statistical framework adopted is a quantum-like wave dynamical formulation in terms of complex distribution functions. This formulation results in nonlinear diffusion-type transport equations for the probability densities of the five modes of wave propagation: two vorticity modes, one entropy mode, and two acoustic modes. This system of nonlinear equations is closed and complete. The technique of analysis was chosen such that direct applications to practical problems can be obtained with relative ease.
2012-09-01
Report of the Progress: Multi-slice DWI-MRI and 4D EP-COSI were tested in 2 malignant and 3 benign breast cancer patients and 6 healthy...for improving the overall specificity. • We are currently testing retrospective Maximum Entropy and Compressed Sensing of the 4D EP-COSI data so that...
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Buildings change detection has the ability to quantify the temporal effect, on urban areas, for urban evolution studies or damage assessment in disaster cases. In this context, change analysis may involve the available satellite images of different resolutions for quick response. In this paper, to avoid the image resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. The proposed methodology can therefore deal with different pixel sizes to identify new and demolished buildings in urban areas using geometric properties of the objects of interest. After rectifying the desired multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified from the obtained distances that are greater than the RMS value (no match at the same location).
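The Centroid-Coincident Matching step can be sketched as follows, with hypothetical centroid coordinates and a hypothetical RMS threshold; the registration and object-based classification stages of the full pipeline are omitted:

```python
import math

def match_by_centroid(shapes_t0, shapes_t1, rms):
    """Match building shapes across two dates by centroid distance.

    shapes_* are dicts {id: (cx, cy)} of shape centroids. A shape with
    no counterpart within the rms threshold is flagged as changed:
    demolished if it exists only at T0, new if only at T1. A minimal
    sketch of centroid-coincident matching, not the full method.
    """
    def nearest(c, pool):
        return min((math.dist(c, p) for p in pool.values()),
                   default=float("inf"))

    demolished = [k for k, c in shapes_t0.items()
                  if nearest(c, shapes_t1) > rms]
    new = [k for k, c in shapes_t1.items()
           if nearest(c, shapes_t0) > rms]
    return demolished, new

# Hypothetical centroids: "a" persists (slightly shifted), "b" is
# demolished, "c" is newly built.
t0 = {"a": (0.0, 0.0), "b": (10.0, 0.0)}
t1 = {"a'": (0.2, 0.1), "c": (50.0, 50.0)}
matches = match_by_centroid(t0, t1, rms=1.0)
print(matches)
```

Running the match in both directions, as the abstract describes, is what separates the two change classes: T0-only shapes are demolished buildings, T1-only shapes are new ones.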
Multisensor multiresolution data fusion for improvement in classification
NASA Astrophysics Data System (ADS)
Rubeena, V.; Tiwari, K. C.
2016-04-01
The rapid advancements in technology have facilitated the easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield application-dependent significant information that may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse resolution hyperspectral (1 m) LWIR and fine resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling of the data by an interpolation technique to register the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered and features are extracted from all three bands of the RGB data. After feature extraction, Support Vector Machines (SVM) are used for training and classification. To increase the classification accuracy, post-processing steps such as removal of spurious salt-and-pepper noise are applied, followed by a majority-voting filter within objects for better object classification.
Multi-resolution model-based traffic sign detection and tracking
NASA Astrophysics Data System (ADS)
Marinas, Javier; Salgado, Luis; Camplani, Massimo
2012-06-01
In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm under real-time operation constraints, establishing intelligent strategies to simplify the algorithm complexity as much as possible and to speed up the process. Firstly, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy, where the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Taking time constraints into consideration, efficiency is achieved in two ways: on the one hand, a multi-resolution strategy is adopted for segmentation, where global operations are applied only to low-resolution images, increasing the resolution to the maximum only when a potential road sign is being tracked. On the other hand, we take advantage of the expected spacing between traffic signs: the tracking of objects of interest allows the generation of inhibition areas, in which no new traffic signs are expected to appear due to the existence of a sign in the neighborhood. The proposed solution has been tested with real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
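The per-candidate tracking can be illustrated with a stripped-down scalar Kalman filter; this constant-position model with assumed noise parameters is far simpler than a full 2-D sign tracker, but shows the predict/update cycle each candidate would run:

```python
def kalman_1d(z_seq, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter (constant-position model).

    q is the process-noise variance, r the measurement-noise variance
    (both assumed values); returns the filtered state after each
    measurement in z_seq.
    """
    x, p = z_seq[0], 1.0       # initialize from the first measurement
    out = [x]
    for z in z_seq[1:]:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update toward the new measurement
        p *= (1 - k)           # uncertainty shrinks after the update
        out.append(x)
    return out

# Noisy position measurements of a sign that is actually fixed at 4.0.
meas = [4.3, 3.8, 4.1, 3.9, 4.2, 4.0, 3.95]
est = kalman_1d(meas)
print(est[-1])
```

In the full system the predicted state also supplies the inhibition areas: regions where the filter expects an already-tracked sign, so no new candidates need to be searched for there.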
Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M
2014-10-01
Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution in community detection, that enable us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations to target multiscale problems but also resilience in exascale simulations, where faulty processors may lead to gappy, in space-time, simulated fields. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
Temperature Dependence of Nonelectrolyte Permeation across Red Cell Membranes
Galey, W. R.; Owen, J. D.; Solomon, A. K.
1973-01-01
The temperature dependence of permeation across human red cell membranes has been determined for a series of hydrophilic and lipophilic solutes, including urea and two methyl substituted derivatives, all the straight-chain amides from formamide through valeramide and the two isomers, isobutyramide and isovaleramide. The temperature coefficient for permeation by all the hydrophilic solutes is 12 kcal mol-1 or less, whereas that for all the lipophilic solutes is 19 kcal mol-1 or greater. This difference is consonant with the view that hydrophilic molecules cross the membrane by a path different from that taken by the lipophilic ones. The thermodynamic parameters associated with lipophile permeation have been studied in detail. ΔG is negative for adsorption of lipophilic amides onto an oil-water interface, whereas it is positive for transfer of the polar head from the aqueous medium to bulk lipid solvent. Application of absolute reaction rate theory makes it possible to make a clear distinction between diffusion across the water-red cell membrane interface and diffusion within the membrane. Diffusion coefficients and apparent activation enthalpies and entropies have been computed for each process. Transfer of the polar head from the solvent into the interface is characterized by ΔG ‡ = 0 kcal mol-1 and ΔS ‡ negative, whereas both of these parameters have large positive values for diffusion within the membrane. Diffusion within the membrane is similar to what is expected for diffusion through a highly associated viscous fluid. PMID:4708405
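The temperature coefficients quoted above are apparent activation energies obtained from Arrhenius plots. The fit can be sketched as follows, with synthetic permeability data generated from an assumed Ea of 12 kcal/mol (the hydrophilic-solute bound from the abstract, used here only to verify the recovery):

```python
import math

def activation_energy_kcal(temps_k, perms):
    """Apparent activation energy from permeability vs temperature.

    Least-squares slope of ln(P) against 1/T (an Arrhenius plot);
    Ea = -slope * R, with R in kcal mol^-1 K^-1.
    """
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(p) for p in perms]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    r_kcal = 1.987e-3  # gas constant, kcal mol^-1 K^-1
    return -slope * r_kcal

# Synthetic data generated with Ea = 12 kcal/mol should be recovered.
ea_true = 12.0
temps = [283.0, 293.0, 303.0, 313.0]
perms = [math.exp(-ea_true / (1.987e-3 * t)) for t in temps]
ea_est = activation_energy_kcal(temps, perms)
print(round(ea_est, 2))
```

With real data the same fit applied to hydrophilic versus lipophilic solutes would separate the two permeation pathways by their distinct slopes (≤12 vs ≥19 kcal/mol).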
NASA Astrophysics Data System (ADS)
Leow, Alex D.; Zhu, Siwei
2008-03-01
Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the Orientation Diffusion Function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.
Stokes-Einstein relation in liquid iron-nickel alloy up to 300 GPa
NASA Astrophysics Data System (ADS)
Cao, Q.-L.; Wang, P.-P.
2017-05-01
Molecular dynamics simulations were applied to investigate the Stokes-Einstein relation (SER) and the Rosenfeld entropy scaling law (ESL) in liquid Fe0.9Ni0.1 over a sufficiently broad range of temperatures (0.70 < T/Tm < 1.85, where Tm is the melting temperature) and pressures (from 50 GPa to 300 GPa). Our results suggest that the SER and ESL hold well in the normal liquid region and break down in the supercooled region under high-pressure conditions, and the deviation becomes larger with decreasing temperature. In other words, the SER can be used to calculate the viscosity of the liquid Earth's outer core from the self-diffusion coefficients of iron/nickel, and the ESL can be used to predict the viscosity and diffusion coefficients of the liquid Earth's outer core from its structural properties. In addition, the pressure dependence of the effective diameters cannot be ignored when using the SER. Moreover, the ESL provides a useful, structure-based probe for the validity of the SER, while the ratio of the self-diffusion coefficients of the components cannot be used as such a probe.
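Using the SER in the direction described (viscosity from a self-diffusion coefficient) amounts to a one-line formula. The sketch below uses the slip-boundary form of the relation and liquid-iron-like numbers that are assumed for illustration, not taken from the paper:

```python
import math

def ser_viscosity(temp_k, diff_coeff, diameter):
    """Viscosity from a self-diffusion coefficient via Stokes-Einstein.

    eta = k_B T / (2 pi d D), the slip boundary-condition form; the
    prefactor (2 vs 3) and the effective diameter d are modeling
    choices, which is why the abstract stresses the pressure
    dependence of the effective diameter.
    """
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (2 * math.pi * diameter * diff_coeff)

# Order-of-magnitude check with assumed liquid-iron-like numbers:
# D ~ 5e-9 m^2/s, d ~ 2.2e-10 m, T ~ 4000 K.
eta = ser_viscosity(4000.0, 5e-9, 2.2e-10)
print(f"{eta * 1e3:.1f} mPa s")
```

The result lands in the few-mPa·s range typical of estimates for liquid iron at core conditions, which is the kind of consistency check the SER enables when direct viscosity measurement is impossible.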
NASA Astrophysics Data System (ADS)
Pacaud, F.; Micoulaut, M.
2015-08-01
The thermodynamic, dynamic, structural, and rigidity properties of densified liquid germania (GeO2) have been investigated using classical molecular dynamics simulation. We construct from a thermodynamic framework an analytical equation of state for the liquid allowing the possible detection of thermodynamic precursors (extrema of the derivatives of the free energy), which usually indicate the possibility of a liquid-liquid transition. It is found that for the present germania system, such precursors and the possible underlying liquid-liquid transition are hidden by the slowing down of the dynamics with decreasing temperature. In this respect, germania behaves quite differently when compared to parent tetrahedral systems such as silica or water. We then detect a diffusivity anomaly (a maximum of diffusion with changing density/volume) that is strongly correlated with changes in coordinated species, and the softening of bond-bending (BB) topological constraints that decrease the liquid rigidity and enhance transport. The diffusivity anomaly is finally substantiated from a Rosenfeld-type scaling law linked to the pair correlation entropy, and to structural relaxation.
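The pair correlation entropy underlying such Rosenfeld-type scaling is the two-body excess entropy s2. A numerical sketch of its evaluation follows, using the standard expression and a toy hard-core g(r); the integration scheme and cutoff are arbitrary choices here:

```python
import math

def pair_entropy(g_of_r, rho, r_max=10.0, dr=0.001):
    """Two-body excess entropy s2 (in units of k_B) from g(r):

        s2 = -2 pi rho * int_0^inf [g ln g - g + 1] r^2 dr

    evaluated by a simple Riemann sum; g_of_r is a callable and rho
    the number density.
    """
    total = 0.0
    for i in range(1, int(r_max / dr)):
        r = i * dr
        g = g_of_r(r)
        # The g -> 0 limit of (g ln g - g + 1) is 1, handled explicitly.
        integrand = (g * math.log(g) - g + 1.0) if g > 0 else 1.0
        total += integrand * r * r * dr
    return -2.0 * math.pi * rho * total

# Toy g(r): hard-core exclusion below r = 1, ideal-gas value beyond,
# at an assumed density rho = 0.8. The exact answer is -2*pi*rho/3.
g_hard = lambda r: 0.0 if r < 1.0 else 1.0
s2 = pair_entropy(g_hard, rho=0.8)
print(s2)
```

In practice g(r) comes from the simulation's histogrammed pair distances, and s2 computed this way is the structural quantity against which diffusivity is scaled.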
NASA Astrophysics Data System (ADS)
Zhu, Aichun; Wang, Tian; Snoussi, Hichem
2018-03-01
This paper addresses the problems of graphical-model-based human pose estimation in still images, including the diversity of appearances and confounding background clutter. We present a new architecture for estimating human pose using a Convolutional Neural Network (CNN). Firstly, a Relative Mixture Deformable Model (RMDM) is defined for each pair of connected parts to compute the relative spatial information in the graphical model. Secondly, a Local Multi-Resolution Convolutional Neural Network (LMR-CNN) is proposed to train and learn the multi-scale representation of each body part by combining different levels of part context. Thirdly, an LMR-CNN-based hierarchical model is defined to explore the context information of limb parts. Finally, experimental results demonstrate the effectiveness of the proposed deep learning approach for human pose estimation.
Lee, Wen-Li; Chang, Koyin; Hsieh, Kai-Sheng
2016-09-01
Segmenting lung fields in a chest radiograph is essential for automatically analyzing the image. We present an unsupervised method based on a multiresolution fractal feature vector. The feature vector characterizes the lung field region effectively. A fuzzy c-means clustering algorithm is then applied to obtain a satisfactory initial contour, and the final contour is obtained by deformable models. The results show the feasibility and high performance of the proposed method. Furthermore, based on the segmentation of lung fields, the cardiothoracic ratio (CTR) can be measured. The CTR is a simple index for evaluating cardiac hypertrophy. After identifying a suspicious symptom based on the estimated CTR, a physician can suggest that the patient undergo additional extensive tests before a treatment plan is finalized.
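A minimal sketch of the CTR measurement described above, assuming a binary lung-field mask is already available from the segmentation. The width definitions used here (outer lung extent for the thorax, largest inner gap for the heart) are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np

def cardiothoracic_ratio(lung_mask):
    """Estimate the CTR from a binary lung-field mask (rows x cols).

    Thoracic width: widest horizontal extent spanned by both lung fields.
    Cardiac width : widest gap between the inner lung borders.
    """
    thoracic = cardiac = 0
    for r in np.where(lung_mask.any(axis=1))[0]:
        cols = np.where(lung_mask[r])[0]
        thoracic = max(thoracic, cols[-1] - cols[0] + 1)
        run = best = 0
        for v in lung_mask[r, cols[0]:cols[-1] + 1]:
            run = 0 if v else run + 1   # count background between the lungs
            best = max(best, run)
        cardiac = max(cardiac, best)
    return cardiac / thoracic if thoracic else 0.0

# Toy example: two "lungs" spanning 11 columns with a 5-pixel inner gap.
toy = np.zeros((4, 11), dtype=bool)
toy[1, 0:3] = toy[1, 8:11] = True
ratio = cardiothoracic_ratio(toy)  # -> 5/11
```

A CTR above roughly 0.5 is the conventional screening threshold for cardiac hypertrophy, which is the "suspicious symptom" the abstract refers to.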
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Inquiries into the Nature of Free Energy and Entropy in Respect to Biochemical Thermodynamics
NASA Astrophysics Data System (ADS)
Stoner, Clinton D.
2000-09-01
Free energy and entropy are examined in detail from the standpoint of classical thermodynamics. The approach is logically based on the fact that thermodynamic work is mediated by thermal energy through the tendency for nonthermal energy to convert spontaneously into thermal energy and for thermal energy to distribute spontaneously and uniformly within the accessible space. The fact that free energy is a Second-Law, expendable energy that makes it possible for thermodynamic work to be done at finite rates is emphasized. Entropy, as originally defined, is pointed out to be the capacity factor for thermal energy that is hidden with respect to temperature; it serves to evaluate the practical quality of thermal energy and to account for changes in the amounts of latent thermal energies in systems maintained at constant temperature. With entropy thus operationally defined, it is possible to see that TΔS° of the Gibbs standard free energy relation ΔG° = ΔH° − TΔS° serves to account for differences or changes in nonthermal energies that do not contribute to ΔG° and that, since ΔH° serves to account for differences or changes in total energy, complete enthalpy-entropy (ΔH° − TΔS°) compensation must invariably occur in isothermal processes for which TΔS° is finite. A major objective was to clarify the means by which free energy is transferred and conserved in sequences of biological reactions coupled by freely diffusible intermediates. In achieving this objective it was found necessary to distinguish between a 'characteristic free energy' possessed by all First-Law energies in amounts equivalent to the amounts of the energies themselves and a 'free energy of concentration' that is intrinsically mechanical and relatively elusive in that it can appear to be free of First-Law energy. 
The findings in this regard serve to clarify the fact that the transfer of chemical potential energy from one repository to another along sequences of biological reactions of the above sort occurs through transfer of the First-Law energy as thermal energy and transfer of the Second-Law energy as free energy of concentration.
Chakraborty, Brahmananda
2015-08-20
Applying the Green-Kubo formalism and equilibrium molecular dynamics (MD) simulations, we have studied for the first time the dynamic correlations, Onsager coefficients, and Maxwell-Stefan (MS) diffusivities of molten LiF-BeF2, a potential candidate coolant for high-temperature reactors. We observe an unusual composition dependence and, strikingly, a crossover in sign of all the MS diffusivities at a composition of around 7% LiF, where the MS diffusivities between cation-anion pairs (Đ(BeF) and Đ(LiF)) jump from positive to negative values while the MS diffusivity between the cation-cation pair (Đ(LiBe)) turns from negative to positive. Although negative MS diffusivities between cation-cation pairs have been observed in electrolyte solutions, here we report negative MS diffusivity between a cation-anion pair: Đ(BeF) shows a sharp rise around 66% BeF2, reaches its maximum at 70% BeF2, and then decreases almost exponentially, changing sign around 93% BeF2. For low mole fractions of LiF, Đ(BeF) follows Debye-Hückel theory and rises with the square root of the LiF mole fraction, similar to the cation-anion MS diffusivity in aqueous electrolyte solutions. Negative MS diffusivities, while unusual, are shown to satisfy the non-negative entropy constraints at all thermodynamic states, as required by the second law of thermodynamics. We establish a strong correlation between structure and dynamics and predict that the formation of a fluoride polyanion network between Be and F ions, together with Coulomb interactions, is responsible for the sharp variation of the MS diffusivities that controls multicomponent diffusion in LiF-BeF2 and thus strongly affects the performance of the reactor.
Ray Casting of Large Multi-Resolution Volume Datasets
NASA Astrophysics Data System (ADS)
Lux, C.; Fröhlich, B.
2009-04-01
High quality volume visualization through ray casting on graphics processing units (GPU) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size acting as the leaf nodes of the octree. These nodes are the highest resolution of the volume. Coarser resolutions are represented through inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on a finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be locally maintained for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, transfer function and distinct areas of interest. During runtime the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources like hard drives or a network. The CPU memory thereby acts as a secondary-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory. This texture atlas contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the volume ray casting algorithm to access the whole working set of bricks through only a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. 
We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in only a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques like empty-space skipping, adaptive sampling and preintegrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive volume ray casting implementation allows high quality visualizations of massive volume data sets of tens of gigabytes in size on standard desktop workstations.
Nicotine-selective radiation-induced poly(acrylamide/maleic acid) hydrogels
NASA Astrophysics Data System (ADS)
Saraydin, D.; Karadağ, E.; Çaldiran, Y.; Güven, O.
2001-02-01
Nicotine-selective poly(acrylamide/maleic acid) (AAm/MA) hydrogels prepared by γ-irradiation were used in experiments on swelling, diffusion, and interactions with the pharmaceuticals nicotine, nicotinic acid, nicotinamide, and nikethamide. For the AAm/MA hydrogel containing 60 mg maleic acid and irradiated at 5.2 kGy, the studies indicated that swelling increased in the following order: nicotine > nicotinamide > nikethamide > nicotinic acid > water. Diffusion of water and of the pharmaceuticals within the hydrogels was found to be non-Fickian in character. In the binding experiments, the AAm/MA hydrogel sorbed only nicotine and did not sorb nicotinamide, nikethamide, or nicotinic acid. S-type adsorption in Giles's classification system was observed. Some binding and thermodynamic parameters for the AAm/MA hydrogel-nicotine system were calculated using the Scatchard method. The adsorption heat and free energy of this system were found to be negative, whereas the adsorption entropy was positive.
Diffusion of innovations dynamics, biological growth and catenary function
NASA Astrophysics Data System (ADS)
Guseo, Renato
2016-12-01
The catenary function has a well-known role in determining the shape of chains and cables supported at their ends under the force of gravity. This enables design using a specific static equilibrium over space. Its reflected version, the catenary arch, allows the construction of bridges and arches exploiting the dual equilibrium property under uniform compression. In this paper, we emphasize a further connection with well-known aggregate biological growth models over time and the related key paradigms of innovation diffusion (e.g., logistic and Bass distributions over time) that determine self-sustaining evolutionary growth dynamics in naturalistic and socio-economic contexts. Moreover, we prove that the 'local entropy function' related to a logistic distribution is a catenary, and vice versa. This special invariance may be explained, at a deeper level, through Verlinde's conjecture on the origin of gravity as an effect of the entropic force.
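The logistic-catenary invariance can be checked directly: for the standard logistic density f(x) = e^(-x)/(1+e^(-x))^2, one finds 1/sqrt(f(x)) = 2 cosh(x/2), i.e. the exponential of half the 'local entropy' -ln f is exactly a catenary. Note this identification of the local entropy function is our reading, not a verbatim reproduction of the paper's definition.

```python
import math

def logistic_pdf(x):
    """Standard logistic probability density."""
    return math.exp(-x) / (1.0 + math.exp(-x)) ** 2

# exp(-ln f / 2) = 1/sqrt(f(x)) equals 2*cosh(x/2), a catenary:
for x in (-3.0, -0.5, 0.0, 1.7, 4.2):
    assert abs(1.0 / math.sqrt(logistic_pdf(x)) - 2.0 * math.cosh(x / 2.0)) < 1e-9
```

The algebra behind the check: 1/f = (1 + e^(-x))^2 e^x = (e^(x/2) + e^(-x/2))^2 = (2 cosh(x/2))^2.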
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
High-Order Residual-Distribution Schemes for Discontinuous Problems on Irregular Triangular Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2016-01-01
In this paper, we develop second- and third-order non-oscillatory shock-capturing hyperbolic residual-distribution schemes for irregular triangular grids, extending our second- and third-order schemes to discontinuous problems. We present extended first-order N- and Rusanov-scheme formulations for the hyperbolic advection-diffusion system, and demonstrate that the hyperbolic diffusion term does not affect the solution of inviscid problems for vanishingly small viscous coefficients. We then propose second- and third-order blended hyperbolic residual-distribution schemes based on the extended first-order Rusanov scheme. We show that these proposed schemes are extremely accurate in predicting non-oscillatory solutions for discontinuous problems. We also propose a characteristics-based nonlinear wave sensor for accurately detecting shocks, compression, and expansion regions. Using this sensor, we demonstrate that the developed hyperbolic blended schemes do not produce entropy-violating solutions (unphysical shocks). We then verify the design order of accuracy of these blended schemes on irregular triangular grids.
Universal scaling laws of diffusion in two-dimensional granular liquids.
Wang, Chen-Hung; Yu, Szu-Hsuan; Chen, Peilong
2015-06-01
We find, in a two-dimensional air-table granular system, that the reduced diffusion constant D* and excess entropy S2 follow two distinct scaling laws: D* ∼ exp(S2*) for dense liquids and D* ∼ exp(3S2*) for dilute ones. The scaling for dense liquids is very similar to that proposed previously for three-dimensional liquids [M. Dzugutov, Nature (London) 381, 137 (1996); A. Samanta et al., Phys. Rev. Lett. 92, 145901 (2004)]. In the dilute regime, a power law [Y. Rosenfeld, J. Phys.: Condens. Matter 11, 5415 (1999)] also fits our data reasonably well. In our system, particles experience low air-drag dissipation and interact with each other through embedded magnets. These near-conservative many-body interactions are responsible for the measured Gaussian velocity distribution functions and the scaling laws. The dominance of cage relaxations in dense liquids leads to the different scaling laws for the dense and dilute regimes.
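The excess entropy S2 entering these scaling laws is the two-body (pair-correlation) contribution, computable directly from g(r). A sketch of the standard 2D estimator is below; the function name and the trapezoid-rule discretization are our choices, not the authors'.

```python
import numpy as np

def pair_excess_entropy_2d(r, g, rho):
    """Two-body excess entropy per particle (in units of kB) for a 2D fluid:
        s2 = -pi * rho * integral[ (g ln g - (g - 1)) * r dr ]
    r, g : sampled radial distances and pair correlation function g(r)
    rho  : number density (particles per unit area)
    """
    r = np.asarray(r, float)
    g = np.asarray(g, float)
    # Where g -> 0, g ln g -> 0, so the integrand tends to 1.
    y = np.where(g > 0.0, g * np.log(np.where(g > 0.0, g, 1.0)) - (g - 1.0), 1.0) * r
    integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))  # trapezoid rule
    return -np.pi * rho * integral

# Sanity check: an ideal gas has g(r) = 1 everywhere, hence s2 = 0.
r = np.linspace(0.0, 10.0, 1001)
s2_ideal = pair_excess_entropy_2d(r, np.ones_like(r), rho=0.5)
```

For a correlated liquid, s2 is negative, and the reduced diffusion constant is then regressed against it to test the two exponential regimes.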
An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene M.; Drela, Mark
1993-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity- and entropy-capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced when using the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.
Silica Glass Fibers : Modes Of Degradation And Thoughts On Protection
NASA Astrophysics Data System (ADS)
Kruger, Albert A.; Mularie, William M.
1984-03-01
The widely held explanation for the mechanical failure of silicate glasses rests upon the existence of Griffith flaws and the associated free-ion diffusion concept used to model crack growth. However, this theory has consistently failed to provide complete agreement with the experimental results known to those "schooled" in the pertinent literature. This dilemma, coupled with reports of single-valued strengths in fibers, cannot be rationalized by modifying the intrinsic Griffith-flaw distribution to essentially a delta function (this violates entropy). It is for these reasons that the field-enhanced ion diffusion model has been introduced. The inclusion of a term for electrostatic potential in the solution of Fick's second law is shown to be consistent with the experimental results in the existing literature. The results of the work presented herein provide further support for the proposed model and for the implied consequences of chemical corrosion in glass, which results in its subsequent failure.
NASA Astrophysics Data System (ADS)
Sridhar, S.; Touma, Jihad R.
2017-02-01
We study the resonant relaxation (RR) of an axisymmetric, low-mass (or Keplerian) stellar disc orbiting a more massive black hole (MBH). Our recent work on the general kinetic theory of RR is simplified in the standard manner by the neglect of 'gravitational polarization' and applied to a razor-thin axisymmetric disc. The wake of a stellar orbit is expressed in terms of the angular momenta exchanged with other orbits, and used to derive a kinetic equation for RR under the combined actions of self-gravity, 1 PN and 1.5 PN general relativistic effects of the MBH and an arbitrary external axisymmetric potential. This is a Fokker-Planck equation for the stellar distribution function (DF), wherein the diffusion coefficients are given self-consistently in terms of contributions from apsidal resonances between pairs of stellar orbits. The physical kinetics is studied for the two main cases of interest. (1) 'Lossless' discs in which the MBH is not a sink of stars, and disc mass, angular momentum and energy are conserved: we prove that general H-functions can increase or decrease during RR, but the Boltzmann entropy is (essentially) unique in being a non-decreasing function of time. Therefore, secular thermal equilibria are maximum entropy states, with DFs of the Boltzmann form; the two-ring correlation function at equilibrium is computed. (2) Discs that lose stars to the MBH through an 'empty loss cone': we derive expressions for the MBH feeding rates of mass, angular momentum and energy in terms of the diffusive fluxes at the loss-cone boundaries.
Universal bounds on current fluctuations.
Pietzonka, Patrick; Barato, Andre C; Seifert, Udo
2016-05-01
For current fluctuations in nonequilibrium steady states of Markovian processes, we derive four different universal bounds valid beyond the Gaussian regime. Different variants of these bounds apply to either the entropy change or any individual current, e.g., the rate of substrate consumption in a chemical reaction or the electron current in an electronic device. The bounds vary with respect to their degree of universality and tightness. A universal parabolic bound on the generating function of an arbitrary current depends solely on the average entropy production. A second, stronger bound requires knowledge both of the thermodynamic forces that drive the system and of the topology of the network of states. These two bounds are conjectures based on extensive numerics. An exponential bound that depends only on the average entropy production and the average number of transitions per time is rigorously proved. This bound has no obvious relation to the parabolic bound but it is typically tighter further away from equilibrium. An asymptotic bound that depends on the specific transition rates and becomes tight for large fluctuations is also derived. This bound allows for the prediction of the asymptotic growth of the generating function. Even though our results are restricted to networks with a finite number of states, we show that the parabolic bound is also valid for three paradigmatic examples of driven diffusive systems for which the generating function can be calculated using the additivity principle. Our bounds provide a general class of constraints for nonequilibrium systems.
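For reference, the parabolic bound is commonly quoted in the following form (with kB = 1, entropy production rate σ, and steady-state current average j̄); expanding around j = j̄ then yields the now-standard thermodynamic uncertainty relation. This is the commonly cited form of the result, not a verbatim reproduction of the paper's equations.

```latex
% Parabolic bound on the large-deviation rate function I(j) of an
% arbitrary current with steady-state average \bar{j} (k_B = 1):
I(j) \;\le\; \frac{\sigma}{4}\left(\frac{j}{\bar{j}} - 1\right)^{2}
% Expanding to second order around j = \bar{j} gives the thermodynamic
% uncertainty relation for a time-integrated current J observed over time t:
\frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2}{\sigma t}
```

The second line makes the trade-off explicit: reducing the relative uncertainty of any current requires a proportionally larger total entropy production σt.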
NASA Astrophysics Data System (ADS)
Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue
2016-03-01
During traditional multi-resolution infrared and visible image fusion, low-contrast targets may be weakened and become inconspicuous because of opposite DN values in the source images. We therefore propose a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform. The interesting target regions are extracted from the source images using motion features obtained from the modified attention model, and the source images undergo gray-level fusion in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray-level result with appropriate pseudo-colors. Experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1998-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
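A toy sketch of the linear-in-parameters property described above: once the nonlinear map is written as a weighted sum of multi-resolution basis functions, the weights follow from ordinary least squares, and small-weight bases can be pruned to obtain a compact model. The triangular "hat" bases, the dyadic resolution levels, and the pruning threshold below are illustrative assumptions, not the report's actual basis set.

```python
import numpy as np

def hat(x, center, width):
    """Triangular basis function centered at `center` with support `width`."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, None)

def build_design(x, levels):
    """Stack hat bases at dyadic resolutions (2, 3, 5, 9, ... centers)."""
    cols = []
    for lev in range(levels):
        n = 2 ** lev + 1
        width = 1.0 / max(n - 1, 1)
        cols += [hat(x, c, width) for c in np.linspace(0.0, 1.0, n)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2.0 * np.pi * x)                # the "unknown" static system
A = build_design(x, levels=5)              # 36 basis functions in total
w, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear parameter estimation
keep = np.abs(w) > 1e-4                    # prune less-significant bases
resid = np.linalg.norm(A[:, keep] @ w[keep] - y) / np.linalg.norm(y)
```

The fit is linear in w even though the identified map is nonlinear in x, which is exactly the feature the report exploits to make parameter estimation tractable.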
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach that computes optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1997-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
Automated transformation-invariant shape recognition through wavelet multiresolution
NASA Astrophysics Data System (ADS)
Brault, Patrice; Mounier, Hugues
2001-12-01
We present new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensor field of a car: road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, namely rotation, translation and scaling. It also has to be invariant to more local, elastic deformations like perspective (in particular with wide-angle camera lenses) and deformations due to environmental conditions (weather: rain, mist, light reverberation) or to optical and electrical signal noise. To demonstrate our method, an initial shape with a known contour is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed at each contour point is used as the main criterion in the shape matching process. The original part of this work is the use of wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of wavelet MRA in this shape recognition application.
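The curvature criterion can be sketched with the standard parametrization-invariant formula kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), evaluated here with finite differences. This is a generic implementation of contour curvature, not the authors' code.

```python
import numpy as np

def contour_curvature(x, y):
    """Discrete curvature along a sampled contour (x[i], y[i]):
        kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    Invariant under rotation and translation; scaling by s divides kappa by s,
    which is why curvature profiles are normalized before matching across scales.
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# Sanity check: a circle of radius R has constant curvature 1/R.
t = np.linspace(0.0, 2.0 * np.pi, 2001)
kappa = contour_curvature(3.0 * np.cos(t), 3.0 * np.sin(t))
```

In the paper's setting, this per-point curvature profile is what the wavelet descriptors decompose at multiple resolutions before matching.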
Physicochemical application of capillary chromatography
NASA Astrophysics Data System (ADS)
Vasil'ev, A. V.; Aleksandrov, E. N.
1992-04-01
The application of capillary gas chromatography in the determination of the free energy, enthalpy, and entropy of sorption, the saturated vapour pressure and activity coefficients, the assessment of the lipophilicity of volatile compounds, and the study of the properties of polymers and liquid crystals is described. The use of reaction capillary chromatography in kinetic studies of conformational conversions, thermal degradation, and photochemical reactions is examined. Studies on the use of capillary columns for determination of the second virial coefficients and viscosity of gases and the diffusion coefficients in gases, liquids, supercritical fluids, and polymers are analysed. The bibliography includes 114 references.
Effects of ionizing radiations on a pharmaceutical compound, chloramphenicol
NASA Astrophysics Data System (ADS)
Varshney, L.; Patel, K. M.
1994-05-01
Chloramphenicol, a broad spectrum antibiotic, has been irradiated using Cobalt-60 γ radiation and an electron beam at graded radiation doses up to 100 kGy. Several degradation products and free radicals are formed on irradiation. Purity, degradation products, free radicals, discolouration, crystallinity, solubility and the entropy of radiation processing have been investigated. Aqueous solutions undergo extensive radiolysis even at low doses. Physico-chemical, microbiological and toxicological tests do not show significant degradation at the sterilization dose. High performance liquid chromatography (HPLC), differential scanning calorimetry (DSC), UV spectrophotometry, diffuse reflectance spectroscopy (DRS) and electron spin resonance spectroscopy (ESR) techniques were employed for the investigations.
Kinetics and thermodynamics associated with Bi adsorption transitions at Cu and Ni grain boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tai, Kaiping; Feng, Lin; Dillon, Shen J.
The grain boundary diffusivity of Au in Cu and Cu-Bi, and Cu in Ni and Ni-Bi are characterized by secondary ion mass spectroscopy depth profiling. Samples are equilibrated in a Bi containing atmosphere at temperatures above and below the onset of grain boundary adsorption transitions, sometimes called complexion transitions. A simple thermo-kinetic model is used to estimate the relative entropic contributions to the grain boundary energies. The results indicate that the entropy term plays a major role in promoting thermally and chemically induced grain boundary complexion transition.
The Observed Properties of Liquid Helium at the Saturated Vapor Pressure
NASA Astrophysics Data System (ADS)
Donnelly, Russell J.; Barenghi, Carlo F.
1998-11-01
The equilibrium and transport properties of liquid 4He are deduced from experimental observations at the saturated vapor pressure. In each case, the bibliography lists all known measurements. Quantities reported here include density, thermal expansion coefficient, dielectric constant, superfluid and normal fluid densities, first, second, third, and fourth sound velocities, specific heat, enthalpy, entropy, surface tension, ion mobilities, mutual friction, viscosity and kinematic viscosity, dispersion curve, structure factor, thermal conductivity, latent heat, saturated vapor pressure, thermal diffusivity and Prandtl number of helium I, and displacement length and vortex core parameter in helium II.
NASA Technical Reports Server (NTRS)
Mccarty, R. D.; Weber, L. A.
1972-01-01
The tables include entropy, enthalpy, internal energy, density, volume, speed of sound, specific heat, thermal conductivity, viscosity, thermal diffusivity, Prandtl number, and the dielectric constant for 65 isobars. Quantities of special utility in heat transfer and thermodynamic calculations are also included in the isobaric tables. In addition to the isobaric tables, tables for the saturated vapor and liquid are given, which include all of the above properties, plus the surface tension. Tables for the P-T of the freezing liquid, index of refraction, and the derived Joule-Thomson inversion curve are also presented.
A Novel Bit-level Image Encryption Method Based on Chaotic Map and Dynamic Grouping
NASA Astrophysics Data System (ADS)
Zhang, Guo-Ji; Shen, Yan
2012-10-01
In this paper, a novel bit-level image encryption method based on dynamic grouping is proposed. In the proposed method, the plain-image is randomly divided into several groups, and then a bit-level permutation-diffusion process is carried out. The keystream generated by the logistic map is related to the plain-image, which confuses the relationship between the plain-image and the cipher-image. Computer simulation results of statistical analysis, information entropy analysis and sensitivity analysis show that the proposed encryption method is secure and reliable enough to be used in communication applications.
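The plaintext-dependent keystream idea can be sketched in a few lines. This is a minimal illustration only: the value r = 3.99, the seed-mixing rule, and the function names are assumptions for demonstration, and the paper's bit-level permutation and dynamic grouping steps are omitted.

```python
# Toy sketch of a plain-image-dependent logistic-map keystream cipher.
# All parameters and the seed-mixing rule are illustrative assumptions.

def logistic_keystream(seed, n, r=3.99):
    """Iterate x -> r*x*(1-x) and quantize each state to a byte."""
    x = seed
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def encrypt(plain, key):
    # Tie the keystream seed to the plain-image content, so the cipher
    # is sensitive to the plaintext (the confusion step described above).
    seed = (key + sum(plain) / (255.0 * len(plain) + 1.0)) % 1.0
    if seed == 0.0:
        seed = 0.5
    ks = logistic_keystream(seed, len(plain))
    return bytes(p ^ k for p, k in zip(plain, ks)), seed

def decrypt(cipher, seed):
    ks = logistic_keystream(seed, len(cipher))
    return bytes(c ^ k for c, k in zip(cipher, ks))

cipher, seed = encrypt(b"toy image bytes", 0.123)
```

In a real scheme the seed-dependent information would be carried by the key-exchange protocol rather than recomputed from the plaintext.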
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse-graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, of going beyond the two scales of conventional coarse-graining strategies; moreover, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
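The graph-Laplacian starting point of the diffusion-wavelet construction can be illustrated on the simplest case mentioned above, a linear homopolymer. This sketch shows only the Laplacian of an N-bead chain and its smoothest eigenmodes, which play the role of coarse coordinates; it is not the full diffusion-wavelet hierarchy of the paper.

```python
import numpy as np

# Graph Laplacian of an N-bead linear chain (path graph). Diffusion
# wavelets are built from powers of a diffusion operator on such a
# Laplacian; here we only exhibit the Laplacian and its low modes.
def path_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

L = path_laplacian(8)
vals, vecs = np.linalg.eigh(L)
# vals[0] ~ 0 with a constant eigenvector (the center-of-mass mode);
# the next few eigenvectors vary slowly along the chain and act as
# natural coarse-grained degrees of freedom.
```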
Application of particle splitting method for both hydrostatic and hydrodynamic cases in SPH
NASA Astrophysics Data System (ADS)
Liu, W. T.; Sun, P. N.; Ming, F. R.; Zhang, A. M.
2018-01-01
Smoothed particle hydrodynamics (SPH) with numerical diffusive terms shows satisfactory stability and accuracy in some violent fluid-solid interaction problems. However, most simulations use uniform particle distributions, and multi-resolution schemes, which can markedly improve local accuracy and overall computational efficiency, have seldom been applied. In this paper, a dynamic particle splitting method is applied that allows for the simulation of both hydrostatic and hydrodynamic problems. In the splitting algorithm, when a coarse (mother) particle enters the splitting region, it is split into four daughter particles, which inherit the physical parameters of the mother particle. In the particle splitting process, conservation of mass, momentum and energy is ensured. Based on an error analysis, the splitting technique is designed to achieve optimal accuracy at the interface between coarse and refined particles, which is particularly important in the simulation of hydrostatic cases. Finally, the scheme is validated on five basic cases, which demonstrate that the present SPH model with particle splitting is highly accurate and efficient and is capable of simulating a wide range of hydrodynamic problems.
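The conservation property of the mother-to-daughters step can be made concrete. In the sketch below each daughter inherits the mother's velocity and a quarter of its mass, so mass, momentum and kinetic energy are conserved exactly; the placement offsets (eps) and the smoothing-length rescaling (0.5 h) are illustrative choices, not the paper's calibrated values.

```python
import numpy as np

# One splitting event in 2D: mother particle -> four daughters.
# Each daughter record is (position, velocity, mass, smoothing length).
def split(pos, vel, m, h, eps=0.4):
    offs = eps * h * np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
    return [(pos + d, vel.copy(), m / 4.0, 0.5 * h) for d in offs]

pos, vel, m, h = np.zeros(2), np.array([1.0, 2.0]), 4.0, 0.1
daughters = split(pos, vel, m, h)
total_mass = sum(d[2] for d in daughters)
total_mom = sum(d[2] * d[1] for d in daughters)
```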
Simulating shock-bubble interactions at water-gelatin interfaces
NASA Astrophysics Data System (ADS)
Adami, Stefan; Kaiser, Jakob; Bermejo-Moreno, Ivan; Adams, Nikolaus
2016-11-01
Biomedical problems are often driven by fluid dynamics, as in vivo organisms are usually composed of or filled with fluids that (strongly) affect their physics. Additionally, fluid dynamical effects can be used to enhance certain phenomena or to destroy organisms. As examples, we highlight the beneficial potential of shockwave-driven kidney-stone lithotripsy and of sonoporation (acoustic cavitation of microbubbles) to improve drug delivery into cells. During the CTR Summer Program 2016 we performed axisymmetric three-phase simulations of a shock hitting a gas bubble in water near a gelatin interface, mimicking the fundamental process during sonoporation. We used our multi-resolution finite volume method with sharp interface representation (level-set), WENO-5 shock capturing and interface scale separation, and compared the results with a diffuse-interface method. Qualitatively, our simulation results agree well with the reference. Due to the interface treatment, the pressure profiles are sharper in our simulations and bubble collapse dynamics are predicted at shorter time scales. Validation against free-field collapse (Rayleigh collapse) shows very good agreement. The project leading to this application has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No 667483).
Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!
NASA Astrophysics Data System (ADS)
Lambert, Frank L.
1999-10-01
The order of presentation in this article is unusual; its conclusion is first. This is done because the title entails text and lecture examples so familiar to all teachers that most may find a preliminary discussion redundant. Conclusion The dealer shuffling cards in Monte Carlo or Las Vegas, the professor who mixes the papers and books on a desk, the student who tosses clothing about his or her room, the fuel for the huge cranes and trucks that would be necessary to move the nonbonded stones of the Great Pyramid of Cheops all across Egypt: each undergoes physical, thermodynamic entropy increase in these specific processes. The thermodynamic entropy change from human-defined order to disorder in the giant Egyptian stones themselves, in the clothing and books in a room or papers on a desk, and in the millions of cards in the world's casinos is precisely the same: zero. K. G. Denbigh succinctly summarizes the case against identifying changes in position in one macro object or in a group with physical entropy change (1): If one wishes to substantiate a claim or a guess that some particular process involves a change of thermodynamic or statistical entropy, one should ask oneself whether there exists a reversible heat effect, or a change in the number of accessible energy eigenstates, pertaining to the process in question. If not, there has been no change of physical entropy (even though there may have been some change in our "information"). Thus, simply changing the location of everyday macro objects from an arrangement that we commonly judge as orderly (relatively singular) to one that appears disorderly (relatively probable) is a "zero change" in the thermodynamic entropy of the objects because the number of accessible energetic microstates in any of them has not been changed. Finally, although it may appear obvious, a collection of ordinary macro things does not constitute a thermodynamic system as does a group of microparticles.
The crucial difference is that such things are not ceaselessly colliding and exchanging energy under the thermal dominance of their environment as are microparticles. A postulate can be derived from this fundamental criterion: The movement of macro objects from one location to another by an external agent involves no change in the objects' physical (thermodynamic) entropy. The agent of movement undergoes a thermodynamic entropy increase in the process. A needed corollary, considering the number of erroneous statements in print, is: There is no spontaneous tendency in groups of macro objects to become disorderly or randomly scattered. The tendency in nature toward increased entropy does not reside in the arrangement of any chemically unchanging objects but rather in the external agent moving them. It is the sole cause of their transport toward more probable locations. The Error There is no more widespread error in chemistry and physics texts than the identification of a thermodynamic entropy increase with a change in the pattern of a group of macro objects. The classic example is that of playing cards. Shuffling a new deck is widely said to result in an increase in entropy in the cards. This erroneous impression is often extended to all kinds of things when they are changed from humanly designated order to what is commonly considered disorder: a group of marbles to scattered marbles, racked billiard balls to a broken rack, neat groups of papers on a desk to the more usual disarray. In fact, there is no thermodynamic entropy change in the objects in the "after" state compared to the "before". Further, such alterations in arrangement have been used in at least one text to support a "law" that is stated, "things move spontaneously in the direction of maximum chaos or disorder".1 The foregoing examples and "law" seriously mislead the student by focusing on macro objects that are only a passive part of a system. 
They are deceptive in omitting the agent that actually is changed in entropy as it follows the second law: that is, whatever energy source is involved in the process of moving the static macro objects to more probable random locations. Entropy is increased in the shuffler's and in the billiard cue holder's muscles, in the tornado's wind and the earthquake's stress, not in the objects shifted. Chemically unchanged macro things do not spontaneously, by some innate tendency, leap or even slowly lurch toward visible disorder. Energy concentrated in the ATP of a person's muscles or in wind or in earth-stress is ultimately responsible for moving objects and is partly degraded to diffuse thermal energy as a result. Discussion To discover the origin of this text and lecture error, a brief review of some aspects of physical entropy is useful. Of course, the original definition of Clausius, dS = δq(rev)/T, applies to a system plus its surroundings, and the Gibbsian relation, ΔG = ΔH - TΔS,
pertains to a system at constant pressure and constant temperature. Only in the present discussion (where an unfortunate term, information "entropy", must be dealt with) would it be necessary to emphasize that temperature is integral to any physical thermodynamic entropy change described via Clausius or Gibbs. In our era we are surer even than they could be that temperature is indispensable in understanding thermodynamic entropy because it indicates the thermal environment of microparticles in a system. That environment sustains the intermolecular motions whereby molecules continuously interchange energy and are able to access the wide range of energetic microstates available to them. It is this ever-present thermal motion that makes spontaneous change possible, even at constant temperature and in the absence of chemical reaction, because it is the mechanism whereby molecules can occupy new energetic microstates if the boundaries of a system are altered. Prime examples of such spontaneous change are diffusion in fluids and the expansion of gases into vacua, both fundamentally due to the additional translational energetic microstates in the enlarged systems. (Of course, spontaneous endothermic processes ranging from phase changes to chemical reactions are also due to mobile energy-transferring microparticles that can access new rotational and vibrational as well as translational energetic microstates, in the thermal surroundings as well as in the chemical system.) Misinterpretation of the Boltzmann equation for entropy change,
ΔS = R ln(number of energetic microstates after change/number of energetic microstates before change), is the source of much of the confusion regarding the behavior of macro objects. R, the gas constant, embeds temperature in Boltzmann's entropy as integrally as in the Clausius or Gibbs relation and, to repeat, the environment's temperature indicates the degree of energy dispersion that makes access to available energy microstates possible. The Boltzmann equation is revelatory in uniting the macrothermodynamics of classic Clausian entropy with what has been described above as the behavior of a system of microparticles occupying energetic microstates. In discussing how probability enters the Boltzmann equation (i.e., the number of possible energetic microstates and their occupancy by microparticles), texts and teachers often enumerate the many ways a few symbolic molecules can be distributed on lines representing energy levels, or in similar cells or boxes, or with combinations of playing cards. Of course these are good analogs for depicting an energetic microsystem. However, even if there are warnings by the instructor, the use of playing cards as a model is probably intellectually hazardous; these objects are so familiar that the student can too easily warp this macro analog of a microsystem into an example of actual entropic change in the cards. Another major source of confusion about entropy change as the result of simply rearranging macro objects comes from information theory "entropy".2 Claude E. Shannon's 1948 paper began the era of quantification of information, and in it he adopted the word "entropy" to name the quantity that his equation defined (2). This occurred because a friend, the brilliant mathematician John von Neumann, told him "call it entropy; no one knows what entropy really is, so in a debate you will always have the advantage" (3).
Wryly funny for that moment, Shannon's unwise acquiescence has produced enormous scientific confusion due to the increasingly widespread usefulness of his equation and its fertile mathematical variations in many fields other than communications (4, 5). Certainly most non-experts hearing of the widely touted information "entropy" would assume its overlap with thermodynamic entropy. However, the great success of information "entropy" has been in areas totally divorced from experimental chemistry, whose objective macro results are dependent on the behavior of energetic microparticles. Nevertheless, many instructors in chemistry have the impression that information "entropy" is not only relevant to the calculations and conclusions of thermodynamic entropy but may change them. This is not true. There is no invariant function corresponding to energy embedded in each of the hundreds of equations of information "entropy" and thus no analog of temperature universally present in them. In contrast, inherent in all thermodynamic entropy, temperature is the objective indicator of a system's energetic state. Probability distributions in information "entropy" represent human selections; therefore information "entropy" is strongly subjective. Probability distributions in thermodynamic entropy are dependent on the microparticulate and physicochemical nature of the system; limited thereby, thermodynamic entropy is strongly objective. This is not to say that the extremely general mathematics of information theory cannot be modified ad hoc and further specifically constrained to yield results that are identical to Gibbs' or Boltzmann's relations (6). This may be important theoretically, but it is totally immaterial here; such a modification simply supports conventional thermodynamic results without changing them, yielding no lesser nor any greater thermodynamic entropy.
The point is that information "entropy" in all of its myriad nonphysicochemical forms as a measure of information or abstract communication has no relevance to the evaluation of thermodynamic entropy change in the movement of macro objects because such information "entropy" does not deal with microparticles whose perturbations are related to temperature.3 Even those who are very competent chemists and physicists have become confused when they have melded or mixed information "entropy" in their consideration of physical thermodynamic entropy. This is shown by the results in textbooks and by the lectures of professors found on the Internet.1 Overall then, how did such an error (concerning entropy changes in macro objects that are simply moved) become part of mainstream instruction, being repeated in print even by distinguished physicists and chemists? The modern term for distorting a photograph, morphing, is probably the best answer. Correct statements of statistical thermodynamics have been progressively altered so that their dependence on the energetics of atoms and molecules is obliterated for the nonprofessional reader and omitted by some author-scientists. The morphing process can be illustrated by the sequence of statements 1 to 4 below.
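As a numerical aside to the Boltzmann relation discussed above, ΔS = R ln(microstates after/microstates before) can be evaluated for the one kind of case the article endorses. The only physics assumed here is the standard result that doubling the volume available to one mole of an ideal gas doubles the accessible translational microstate count per molecule; for shuffled cards the microstate count is unchanged, so the same formula gives exactly zero.

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Delta S = R ln(W_after / W_before), per mole.
def delta_S(microstate_ratio):
    return R * math.log(microstate_ratio)

gas_doubling = delta_S(2.0)   # gas expanding into an equal vacuum volume
shuffled_deck = delta_S(1.0)  # rearranged macro objects: ratio is 1
```

The first value is R ln 2, about +5.76 J/(mol K); the second is identically zero, which is the article's central point.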
Framework for multi-resolution analyses of advanced traffic management strategies [summary].
DOT National Transportation Integrated Search
2017-01-01
Transportation planning relies extensively on software that can simulate and predict travel behavior in response to alternative transportation networks. However, different software packages view traffic at different scales. Some programs are based on...
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
Multiresolution forecasting for futures trading using wavelet decompositions.
Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B
2001-01-01
We investigate the effectiveness of a financial time-series forecasting strategy that exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
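The decompose, predict-per-scale, recombine structure can be sketched in a deliberately simplified form. Here the autocorrelation-shell wavelet is replaced by a moving-average smooth/detail split and the per-scale MLPs by trivial window predictors; none of this reproduces the paper's models, it only shows the shape of the pipeline.

```python
import numpy as np

# Toy two-scale decomposition: smooth (coarse) + detail (fine), with
# the additive reconstruction property smooth + detail == x.
def decompose(x, w=4):
    smooth = np.convolve(x, np.ones(w) / w, mode="same")
    return smooth, x - smooth

# Per-scale forecasts: a long-window mean at the coarse scale and a
# persistence (last value) predictor at the fine scale, then recombine.
def forecast_next(x, w=4):
    smooth, detail = decompose(x, w)
    return smooth[-w:].mean() + detail[-1]

x = np.sin(np.linspace(0.0, 6.0, 64))  # toy "price" series
smooth, detail = decompose(x)
nxt = forecast_next(x)
```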
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We presented a multiresolution hierarchical classification (MHC) algorithm for differentiating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with a simultaneous increase of cell resolution and residual threshold from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using thin plate spline (TPS) until no further ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with those of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiêp; Pižurica, Aleksandra; Philips, Wilfried
2011-09-01
The shearlet transform is a recent sibling in the family of geometric image representations that provides a traditional multiresolution analysis combined with a multidirectional analysis. In this paper, we present a fast DFT-based analysis and synthesis scheme for the 2D discrete shearlet transform. Our scheme conforms closely to the continuous shearlet theory, provides perfect numerical reconstruction (up to floating point rounding errors) in a non-iterative scheme, and is highly suitable for parallel implementation (e.g. FPGA, GPU). We show that our discrete shearlet representation is also a tight frame and that the redundancy factor of the transform is around 2.6, independent of the number of analysis directions. Experimental denoising results indicate that the transform performs the same as or even better than several related multiresolution transforms, while having a significantly lower redundancy factor.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, as well as those estimated directly from the original image.
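The core of a Fourier-spectrum fractal descriptor can be shown on a 1D profile (the paper applies the analogous idea per scale and direction of the 2D subbands). For a fractal signal the power spectrum follows a power law P(f) ~ f^(-beta), and the fitted log-log slope beta is the descriptor fed to the classifier; the detrending and the simple least-squares fit below are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

# Estimate the spectral power-law exponent beta of a 1D profile from
# a log-log fit of its periodogram: P(f) ~ f^(-beta).
def spectral_slope(x):
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    f = np.fft.rfftfreq(x.size)
    mask = f > 0
    logf, logp = np.log(f[mask]), np.log(p[mask] + 1e-30)
    beta = -np.polyfit(logf, logp, 1)[0]
    return beta

rng = np.random.default_rng(0)
profile = np.cumsum(rng.standard_normal(4096))  # Brownian profile, beta ~ 2
beta = spectral_slope(profile)
```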
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
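The coefficient-segregation step described above can be sketched with a simple threshold rule. The robust noise-scale threshold (k times the median absolute coefficient over 0.6745) is an illustrative stand-in; the paper's HVS-based bit allocation and the local-mean channel are not reproduced here.

```python
import numpy as np

# Split wavelet coefficients into an "edge" part (large magnitudes,
# destined for high-rate scalar quantization) and a "texture" part
# (the remainder, destined for low-rate vector quantization).
def segregate(coeffs, k=3.0):
    t = k * np.median(np.abs(coeffs)) / 0.6745  # robust noise scale
    edge_mask = np.abs(coeffs) > t
    edges = np.where(edge_mask, coeffs, 0.0)
    texture = np.where(edge_mask, 0.0, coeffs)
    return edges, texture

c = np.array([0.1, -0.2, 5.0, 0.05, -4.0, 0.15])
edges, texture = segregate(c)
```

The two parts sum back to the original coefficients, so the split loses nothing before quantization.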
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liakh, Dmitry I
While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).
Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation
NASA Technical Reports Server (NTRS)
Lacaze, Alberto; Meystel, Michael; Meystel, Alex
1994-01-01
This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR) which presents the ASR as a 'baby', that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we will call it an Astro-baby). The learning techniques are rooted in a recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.
Fiore, Julie L.; Kraemer, Benedikt; Koberling, Felix; Erdmann, Rainer; Nesbitt, David J.
2010-01-01
RNA folding thermodynamics are crucial for structure prediction, which requires characterization of both enthalpic and entropic contributions of tertiary motifs to conformational stability. We explore the temperature dependence of RNA folding due to the ubiquitous GAAA tetraloop–receptor docking interaction, exploiting immobilized and freely diffusing single-molecule fluorescence resonance energy transfer (smFRET) methods. The equilibrium constant for intramolecular docking is obtained as a function of temperature (T = 21–47 °C), from which a van’t Hoff analysis yields the enthalpy (ΔH°) and entropy (ΔS°) of docking. Tetraloop–receptor docking is significantly exothermic and entropically unfavorable in 1 mM MgCl2 and 100 mM NaCl, with excellent agreement between immobilized (ΔH° = −17.4 ± 1.6 kcal/mol, and ΔS° = −56.2 ± 5.4 cal mol−1 K−1) and freely diffusing (ΔH° = −17.2 ± 1.6 kcal/mol, and ΔS° = −55.9 ± 5.2 cal mol−1 K−1) species. Kinetic heterogeneity in the tetraloop–receptor construct is unaffected over the temperature range investigated, indicating a large energy barrier for interconversion between the actively docking and nondocking subpopulations. Formation of the tetraloop–receptor interaction can account for ~60% of the ΔH° and ΔS° of P4–P6 domain folding in the Tetrahymena ribozyme, suggesting that it may act as a thermodynamic clamp for the domain. Comparison of the isolated tetraloop–receptor and other tertiary folding thermodynamics supports a theme that enthalpy- versus entropy-driven folding is determined by the number of hydrogen bonding and base stacking interactions. PMID:19186984
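The van't Hoff analysis described above is a straight-line fit: ln K = -ΔH°/(R T) + ΔS°/R, so the slope of ln K against 1/T gives ΔH° and the intercept gives ΔS°. The K values below are synthetic, generated from the paper's reported ΔH° ~ -17 kcal/mol and ΔS° ~ -56 cal/(mol K) purely to show the extraction; they are not measured data.

```python
import numpy as np

R = 1.987e-3  # gas constant in kcal/(mol K), matching the units above

# Synthetic docking equilibrium constants over the T = 294-320 K range.
dH_true, dS_true = -17.0, -0.056          # kcal/mol, kcal/(mol K)
T = np.array([294.0, 300.0, 306.0, 312.0, 320.0])
lnK = -dH_true / (R * T) + dS_true / R

# Van't Hoff fit: slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH_fit = -slope * R      # kcal/mol
dS_fit = intercept * R   # kcal/(mol K)
```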
Bougias, H; Ghiatas, A; Priovolos, D; Veliou, K; Christou, A
2017-05-01
To retrospectively assess the role of the whole-lesion apparent diffusion coefficient (ADC) in the characterization of breast tumors by comparing different histogram metrics, 49 patients with 53 breast lesions underwent magnetic resonance imaging (MRI). ADC histogram parameters, including the mean, mode, 10th/50th/90th percentile, skewness, kurtosis, and entropy ADCs, were derived for the whole-lesion volume in each patient. The Mann-Whitney U-test and the area under the receiver-operating characteristic curve (AUC) were used for statistical analysis. The mean, mode and 10th/50th/90th percentile ADC values were significantly lower in malignant lesions compared with benign ones (all P < 0.0001), while skewness was significantly higher in malignant lesions (P = 0.02). However, no significant difference was found in entropy and kurtosis values between malignant and benign lesions (P = 0.06 and P = 1.00, respectively). Univariate logistic regression showed that the 10th and 50th percentile ADCs yielded the highest AUCs (0.985; 95% confidence interval [CI]: 0.902, 1.000 and 0.982; 95% CI: 0.896, 1.000, respectively), whereas the kurtosis value yielded the lowest AUC (0.500; 95% CI: 0.355, 0.645), indicating that the 10th and 50th percentile ADC values may be more accurate for lesion discrimination. Whole-lesion ADC histogram analysis could be a helpful index in the characterization and differentiation of benign and malignant breast lesions, with the 10th and 50th percentile ADCs being the most accurate discriminators. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
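The histogram metrics listed above can all be computed with plain NumPy on a flattened ADC map. The moment-based skewness/kurtosis and the Shannon entropy over a 64-bin histogram are the usual definitions; the paper does not spell out its exact conventions, so treat these as assumptions.

```python
import numpy as np

# Whole-lesion histogram metrics of an ADC map (e.g. 10^-3 mm^2/s units).
def adc_histogram_metrics(adc, bins=64):
    x = adc.ravel().astype(float)
    mu, sd = x.mean(), x.std()
    hist, edges = np.histogram(x, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": mu,
        "mode": centers[np.argmax(hist)],
        "p10": np.percentile(x, 10),
        "p50": np.percentile(x, 50),
        "p90": np.percentile(x, 90),
        "skewness": ((x - mu) ** 3).mean() / sd ** 3,
        "kurtosis": ((x - mu) ** 4).mean() / sd ** 4,
        "entropy": -(p * np.log2(p)).sum(),
    }

rng = np.random.default_rng(7)
lesion = rng.normal(1.2, 0.25, size=(40, 40))  # synthetic ADC map
m = adc_histogram_metrics(lesion)
```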
NASA Technical Reports Server (NTRS)
Bentz, Daniel N.; Betush, William; Jackson, Kenneth A.
2003-01-01
In this paper we report on two related topics: kinetic Monte Carlo simulations of the steady-state growth of rod eutectics from the melt, and a study of the surface roughness of binary alloys. We have implemented a three-dimensional kinetic Monte Carlo (kMC) simulation with diffusion by pair exchange only in the liquid phase. Entropies of fusion are first chosen to fit the surface roughness of the pure materials, and the bond energies are derived from the equilibrium phase diagram by treating the solid and liquid as regular and ideal solutions, respectively. A simple cubic lattice oriented in the {100} direction is used. Growth of the rods is initiated from columns of pure B material embedded in an A matrix, arranged in a close-packed array with semi-periodic boundary conditions. The simulation cells typically have dimensions of 50 by 87 by 200 unit cells. Steady-state growth is compliant with the Jackson-Hunt model. In the kMC simulations, using the spin-one Ising model, each phase grows faceted or nonfaceted depending on its entropy of fusion. There have been many studies of the surface roughening transition in single-component systems, but none for binary alloy systems. The location of the surface roughening transition for the phases of a eutectic alloy determines whether the eutectic morphology will be regular or irregular. We have conducted a study of surface roughness on the spin-one Ising model with diffusion using kMC. The surface roughness was found to scale with the melting temperature of the alloy as given by the liquidus line on the equilibrium phase diagram. The density of missing lateral bonds at the surface was used as a measure of surface roughness.
NASA Astrophysics Data System (ADS)
He, Y.; Puckett, E. G.; Billen, M. I.; Kellogg, L. H.
2016-12-01
For a convection-dominated system, like convection in the Earth's mantle, accurate modeling of the temperature field in terms of the interaction between convective and diffusive processes is one of the most common numerical challenges. In the geodynamics community, using the Finite Element Method (FEM) with artificial entropy viscosity is a popular approach to resolving this difficulty, but it introduces numerical diffusion. The extra artificial viscosity added into the temperature system will not only oversmooth the temperature field where the convective process dominates, but also change the physical properties by increasing the local material conductivity, which in turn alters the local conservation of energy. Accurate modeling of temperature is especially important in the mantle, where material properties are strongly dependent on temperature. In subduction zones, for example, the rheology of the cold sinking slab depends nonlinearly on the temperature, and physical processes such as slab detachment, rollback, and melting are all sensitively dependent on temperature and rheology. Therefore, methods that overly smooth the temperature may inaccurately represent the physical processes governing subduction, lithospheric instabilities, plume generation, and other aspects of mantle convection. Here we present a method for modeling the temperature field in mantle dynamics simulations using a new solver implemented in the ASPECT software. The new solver for the temperature equation uses a Discontinuous Galerkin (DG) approach, which combines features of both finite element and finite volume methods and is particularly suitable for conservation-law problems whose solutions vary strongly on local scales. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies a local discrete maximum principle, in order to eliminate local overshoots and undershoots in the temperature.
To demonstrate the capabilities of this new method we present benchmark results (e.g., falling sphere) and a simple subduction model with a kinematic surface boundary condition. To evaluate the trade-offs in computational speed and solution accuracy, we present results for the same benchmarks using the finite element entropy viscosity method available in ASPECT.
On quantum Rényi entropies: A new generalization and some properties
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco
2013-12-01
The Rényi entropies constitute a family of information measures that generalizes the well-known Shannon entropy, inheriting many of its properties. They appear in the form of unconditional and conditional entropies, relative entropies, or mutual information, and have found many applications in information theory and beyond. Various generalizations of Rényi entropies to the quantum setting have been proposed, most prominently Petz's quasi-entropies and Renner's conditional min-, max-, and collision entropy. However, these quantum extensions are incompatible and thus unsatisfactory. We propose a new quantum generalization of the family of Rényi entropies that contains the von Neumann entropy, min-entropy, collision entropy, and the max-entropy as special cases, thus encompassing most quantum entropies in use today. We show several natural properties for this definition, including data-processing inequalities, a duality relation, and an entropic uncertainty relation.
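For a single, unconditioned state the quantum Rényi entropies reduce to H_α(ρ) = (1/(1−α)) log Tr ρ^α, which depends only on the eigenvalues of ρ; the special cases named in the abstract arrive as limits of α. A minimal sketch assuming the spectrum of ρ is already available (the function name is ours):

```python
import math

def quantum_renyi_entropy(spectrum, alpha):
    """H_alpha(rho) = log2(sum_i p_i^alpha) / (1 - alpha), p_i = eigenvalues of rho.

    alpha -> 1 recovers the von Neumann entropy, alpha -> inf the min-entropy,
    alpha = 2 the collision entropy, and alpha = 1/2 the max-entropy.
    """
    p = [x for x in spectrum if x > 1e-15]   # drop zero eigenvalues
    if abs(alpha - 1.0) < 1e-12:             # von Neumann limit
        return -sum(x * math.log2(x) for x in p)
    if math.isinf(alpha):                    # min-entropy
        return -math.log2(max(p))
    return math.log2(sum(x ** alpha for x in p)) / (1.0 - alpha)
```

For the maximally mixed qubit (eigenvalues 1/2, 1/2) every member of the family equals 1 bit, and the family is nonincreasing in α for non-uniform spectra.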
The defect chemistry of UO2 ± x from atomistic simulations
NASA Astrophysics Data System (ADS)
Cooper, M. W. D.; Murphy, S. T.; Andersson, D. A.
2018-06-01
Control of the defect chemistry in UO2 ± x is important for manipulating nuclear fuel properties and fuel performance. For example, the uranium vacancy concentration is critical for fission gas release and sintering, while all oxygen and uranium defects are known to strongly influence thermal conductivity. Here the point defect concentrations in thermal equilibrium are predicted using defect energies from density functional theory (DFT) and vibrational entropies calculated using empirical potentials. Electrons and holes have been treated in a similar fashion to other charged defects allowing for structural relaxation around the localized electronic defects. Predictions are made for the defect concentrations and non-stoichiometry of UO2 ± x as a function of oxygen partial pressure and temperature. If vibrational entropy is omitted, oxygen interstitials are predicted to be the dominant mechanism of excess oxygen accommodation over only a small temperature range (1265 K-1350 K), in contrast to experimental observation. Conversely, if vibrational entropy is included oxygen interstitials dominate from 1165 K to 1680 K (Busker potential) or from 1275 K to 1630 K (CRG potential). Below these temperature ranges, excess oxygen is predicted to be accommodated by uranium vacancies, while above them the system is hypo-stoichiometric with oxygen deficiency accommodated by oxygen vacancies. Our results are discussed in the context of oxygen clustering, formation of U4O9, and issues for fuel behavior. In particular, the variation of the uranium vacancy concentrations as a function of temperature and oxygen partial pressure will underpin future studies into fission gas diffusivity and broaden the understanding of UO2 ± x sintering.
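The role of vibrational entropy described here is a competition between formation energy and entropy in the defect free energy, which shifts the temperature at which one accommodation mechanism overtakes another. A toy Boltzmann-factor illustration (all numerical values are illustrative placeholders, not the paper's DFT or empirical-potential results):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def defect_concentration(e_form, s_vib_kb, temperature, n_sites=1.0):
    """Equilibrium site fraction of a point defect:
    c = n_sites * exp(S_vib / k_B) * exp(-E_f / (k_B * T))."""
    return n_sites * math.exp(s_vib_kb) * math.exp(-e_form / (K_B * temperature))

# Two competing defects with hypothetical parameters: the higher-entropy defect
# wins at high temperature despite its larger formation energy.
E1, S1 = 2.0, 3.0   # defect A: E_f = 2.0 eV, S_vib = 3 k_B
E2, S2 = 2.5, 6.0   # defect B: E_f = 2.5 eV, S_vib = 6 k_B

# Free energies E_i - T * k_B * S_i are equal at the crossover temperature:
t_cross = (E2 - E1) / ((S2 - S1) * K_B)
```

Omitting the entropy term (setting S_vib = 0 for both) removes this crossover entirely, which is the sense in which vibrational entropy widens or narrows the temperature window where each defect dominates.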
Galaxy Clusters: A Novel Look at Diffuse Baryons Withstanding Dark Matter Gravity
NASA Astrophysics Data System (ADS)
Cavaliere, A.; Lapi, A.; Fusco-Femiano, R.
2009-06-01
In galaxy clusters, the equilibria of the intracluster plasma (ICP) and of the gravitationally dominant dark matter (DM) are governed by the hydrostatic equation and by the Jeans equation, respectively; in either case gravity is withstood by the corresponding, entropy-modulated pressure. Jeans, with the DM "entropy" set to K ∝ r^α and α ≈ 1.25-1.3 applying from groups to rich clusters, yields our radial α-profiles; these, compared to the empirical Navarro-Frenk-White distribution, are flatter at the center and steeper in the outskirts, as required by recent gravitational lensing data. In the ICP, on the other hand, the entropy run k(r) is mainly shaped by shocks, as steadily set by supersonic accretion of gas at the cluster boundary, and intermittently driven from the center by merging events or by active galactic nuclei (AGNs); the resulting equilibrium is described by the exact yet simple formalism constituting our ICP Supermodel. With two parameters, this accurately represents the runs of density n(r) and temperature T(r) as required by up-to-date X-ray data on surface brightness and spectroscopy for both cool core (CC) and non-cool core (NCC) clusters; the former are marked by a middle temperature peak, whose location is predicted from rich clusters to groups. The Supermodel inversely links the inner runs of n(r) and T(r), and highlights their central scaling with entropy, n_c ∝ k_c^(-1) and T_c ∝ k_c^(0.35), to yield radiative cooling times t_c ≈ 0.3 (k_c/15 keV cm²)^1.2 Gyr. We discuss the stability of the central values so focused: against radiative erosion of k_c in the cool dense conditions of CC clusters, which triggers recurrent AGN activity resetting it back; or against energy inputs from AGNs and mergers, whose effects are saturated by the hot central conditions of NCC clusters. 
From the Supermodel, we also derive as limiting cases the classic polytropic β-models, and the "mirror" model with T(r) ∝ σ²(r) suitable for NCC and CC clusters, respectively; these limiting cases highlight how the ICP temperature T(r) strives to mirror the DM velocity dispersion σ²(r) away from energy and entropy injections. Finally, we discuss how the Supermodel connects information derived from X-ray and gravitational lensing observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Pooja; Ali, Sk. M., E-mail: musharaf@barc.gov.in; Shenoy, K. T.
2015-02-21
Thermodynamic properties of the fluid in the hydrophobic pores of nanotubes are known to be different not only from the bulk phase but also from other conventional confinements. Here, we use a recently developed theoretical scheme, the "two-phase thermodynamic (2PT)" model, to understand the driving forces behind the spontaneous filling of carbon nanotubes (CNTs) with polar (water) and nonpolar (methane) fluids. The CNT confinement is found to be energetically favorable for both water and methane, leading to their spontaneous filling of CNT(6,6). For both systems, the free energy of transfer from bulk to CNT confinement is favored by the increased entropy (TΔS), i.e., increased translational and rotational entropy, which is sufficiently large to overcome the unfavorable increase in enthalpy (ΔE) upon transfer into the CNT. To the best of our knowledge, this is the first time it has been established that the increase in translational entropy during confinement in CNT(6,6) is not unique to a hydrogen-bonding fluid like water but is also observed for nonpolar fluids such as methane. The thermodynamic results are explained in terms of density, structural rigidity, and transport of fluid molecules inside the CNT. The faster diffusion of methane over water in the bulk phase is found to be reversed under confinement in CNT(6,6). The study reveals that although hydrogen bonding plays an important role in the transport of water through CNTs, it is not the sole driving factor, as nonpolar fluids, which cannot form hydrogen bonds, can enter the CNT and flow through it. 
The driving force for filling and transport of water and methane is the enhanced translational and rotational entropy, attributed mainly to the strong correlation between confined fluid molecules and to the availability of more free space for molecular rotation, i.e., the lower density of fluid inside the CNT due to its single-file-like arrangement. To the best of our knowledge, this is perhaps the first study of a nonpolar fluid within a CNT using the 2PT method. Furthermore, the faster flow of the polar fluid (water) over the nonpolar fluid (methane) has been captured for the first time using molecular dynamics simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiolo, M., E-mail: massimo.maiolo@zhaw.ch; ZHAW, Institut für Angewandte Simulation, Grüental, CH-8820 Wädenswil; Vancheri, A., E-mail: alberto.vancheri@supsi.ch
In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA so that all the instruments of this theory become available, together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, enabling the choice of the most appropriate wavelet family for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new, efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform allows the noise to be isolated and removed from the CG force-field reconstruction by thresholding the basis-function coefficients of each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e., liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.
NASA Astrophysics Data System (ADS)
Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko
2011-03-01
Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique which decomposes each chromophore into multi-resolution images, which can be used for identifying different sizes of clusters or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of middle spatial frequency; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.
A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field
NASA Astrophysics Data System (ADS)
Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.
2016-10-01
We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra wide field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low resolution (4‧) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
Upper entropy axioms and lower entropy axioms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jin-Li, E-mail: phd5816@163.com; Suo, Qi
2015-04-15
The paper suggests the concepts of an upper entropy and a lower entropy. We propose a new axiomatic definition, namely, upper entropy axioms, inspired by the axioms of metric spaces, and also formulate lower entropy axioms. We also develop weak upper entropy axioms and weak lower entropy axioms. Their conditions are weaker than those of the Shannon–Khinchin axioms and the Tsallis axioms, while being stronger than those of the axiomatics based on the first three Shannon–Khinchin axioms. The subadditivity and strong subadditivity of entropy are obtained in the new axiomatics. Tsallis statistics is a special case satisfying our axioms. Moreover, different forms of information measures, such as Shannon entropy, Daroczy entropy, Tsallis entropy and other entropies, can be unified under the same axiomatics.
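Two of the information measures unified here, Shannon entropy and Tsallis entropy, are related by a simple limit: the Tsallis form recovers the Shannon form as q → 1. A minimal numerical sketch (natural-log convention assumed; function names are ours):

```python
import math

def shannon_entropy(p):
    """S = -sum p_i ln p_i (natural-log convention)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def tsallis_entropy(p, q):
    """S_q = (1 - sum p_i^q) / (q - 1); recovers Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon_entropy(p)
    return (1.0 - sum(x ** q for x in p if x > 0)) / (q - 1.0)
```

For the uniform distribution over four outcomes, Shannon entropy is ln 4 and Tsallis entropy at q = 2 is 1 − 4·(1/4)² = 3/4.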
DOT National Transportation Integrated Search
2014-07-01
Pavement Condition surveys are carried out periodically to gather information on pavement distresses that will guide decision-making for maintenance and preservation. Traditional methods involve manual pavement inspections which are time-consuming : ...
A new national mosaic of state landcover data
Thomas, I.; Handley, Lawrence R.; D'Erchia, Frank J.; Charron, Tammy M.
2000-01-01
This presentation reviewed current landcover mapping efforts and presented a new preliminary, national mosaic of Gap Analysis Program (GAP) and Multi-Resolution Land Characteristics Consortium (MRLC) landcover data with a discussion of techniques, problems faced, and future refinements.
Framework for multi-resolution analyses of advanced traffic management strategies.
DOT National Transportation Integrated Search
2016-11-01
Demand forecasting models and simulation models have been developed, calibrated, and used in isolation of each other. However, the advancement of transportation system technologies and strategies, the increase in the availability of data, and the unc...
Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure
NASA Astrophysics Data System (ADS)
Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.
2014-08-01
Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. 
Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.
EEG entropy measures in anesthesia
Liang, Zhenhu; Wang, Yinghua; Sun, Xue; Li, Duan; Voss, Logan J.; Sleigh, Jamie W.; Hagihira, Satoshi; Li, Xiaoli
2015-01-01
Highlights: ► Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression. ► Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states. ► Approximate entropy and sample entropy performed best in detecting burst suppression. Objective: Entropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing the effect of anesthetic drugs is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP) in anesthesia induced by GABAergic agents. Methods: Twelve indices were investigated, namely Response Entropy (RE) and State Entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE), and Renyi PE (RPE)]. Two EEG data sets, from sevoflurane-induced and isoflurane-induced anesthesia respectively, were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. Multifractal detrended fluctuation analysis (MDFA), a non-entropy measure, was included for comparison. Results: All the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. The three PE measures outperformed the other entropy indices, with less baseline variability and higher coefficient of determination (R2) and prediction probability; RPE performed best, and ApEn and SampEn discriminated BSP best. 
Additionally, these entropy measures showed an advantage in computation efficiency compared with MDFA. Conclusion: Each entropy index has its advantages and disadvantages in estimating DoA. Overall, it is suggested that the RPE index was a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA. PMID:25741277
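The permutation-entropy family that performed best in this comparison is built from Bandt-Pompe ordinal patterns. A rough sketch of the Shannon and Rényi variants (the embedding order, normalization, and function names are our assumptions, not the study's exact settings):

```python
import math

def permutation_entropy(signal, order=3, alpha=1.0):
    """Ordinal-pattern (Bandt-Pompe) entropy of a 1-D signal, normalized to [0, 1].

    alpha = 1 gives Shannon permutation entropy; other alpha values give the
    Renyi variant H_alpha = log(sum p^alpha) / (1 - alpha).
    """
    counts = {}
    for i in range(len(signal) - order + 1):
        window = signal[i:i + order]
        # ordinal pattern = argsort of the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    p = [c / total for c in counts.values()]
    if abs(alpha - 1.0) < 1e-12:
        h = -sum(x * math.log(x) for x in p)            # Shannon PE
    else:
        h = math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)  # Renyi PE
    return h / math.log(math.factorial(order))  # normalize by max entropy
```

A monotone signal produces a single ordinal pattern and hence zero entropy; increasingly irregular EEG segments drive the value toward 1, which is the sense in which PE tracks anesthetic state.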
Structure, dynamics, and thermodynamics of a family of potentials with tunable softness
NASA Astrophysics Data System (ADS)
Shi, Zane; Debenedetti, Pablo G.; Stillinger, Frank H.; Ginart, Paul
2011-08-01
We investigate numerically the structure, thermodynamics, and relaxation behavior of a family of (n, 6) Lennard-Jones-like glass-forming binary mixtures interacting via pair potentials with variable softness, fixed well depth, and fixed well depth location. These constraints give rise to progressively more negative attractive tails upon softening, for separations greater than the potential energy minimum. Over the range of conditions examined, we find only modest dependence of structure on softness. In contrast, decreasing the repulsive exponent from n = 12 to n = 7 causes the diffusivity to increase by as much as two orders of magnitude at fixed temperature and density, and produces mechanically stable packings (inherent structures) with cohesive energies that are, on average, ~1.7 well depths per particle larger than for the corresponding Lennard-Jones (n = 12) case. The softer liquids have markedly higher entropies and lower Kauzmann temperatures than their Lennard-Jones (n = 12) counterparts, and they remain diffusive down to appreciably lower temperatures. We find that softening leads to a modest increase in fragility.
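The constraint of fixed well depth and fixed well-depth location under variable softness can be written in closed form. One such (n, 6) parameterization is sketched below (our choice of functional form, which may differ from the paper's exact potential):

```python
def mie_n6(r, n=12, eps=1.0, r_min=1.0):
    """Generalized (n, 6) pair potential with fixed well depth eps and fixed
    well-depth location r_min, for any repulsive exponent n > 6:

        u(r) = eps / (n - 6) * (6 * (r_min / r)**n - n * (r_min / r)**6)

    By construction u(r_min) = -eps and u'(r_min) = 0 for every n, so lowering
    n softens the repulsive wall without moving or deepening the well.
    """
    s = r_min / r
    return eps / (n - 6) * (6 * s ** n - n * s ** 6)
```

Evaluating beyond the minimum shows the effect noted in the abstract: for r > r_min the softer n = 7 potential has a more negative attractive tail than the n = 12 case.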
A model of return intervals between earthquake events
NASA Astrophysics Data System (ADS)
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical of anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
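The model described in the last two sentences, renewal mainshocks with power-law waiting times plus Omori-law aftershock clusters, can be sketched as a toy simulation. All parameter values below are illustrative, not fitted to the California catalog:

```python
import random

def simulate_catalog(n_main, beta=1.5, k_omori=5.0, c=0.01, p=1.2,
                     window=1.0, seed=42):
    """Toy earthquake catalog: mainshocks form a renewal process with Pareto
    (power-law) waiting times P(tau > t) ~ t**(-beta); each mainshock spawns
    aftershocks from a nonhomogeneous Poisson process with Omori rate
    K / (c + t)**p, generated by thinning against the bounding rate K / c**p.
    Returns a time-sorted list of (time, is_mainshock) pairs.
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    for _ in range(n_main):
        t += rng.paretovariate(beta)              # power-law waiting time
        events.append((t, True))
        rate_max = k_omori / c ** p               # bound on the Omori rate
        s = 0.0
        while True:
            s += rng.expovariate(rate_max)        # candidate from bounding process
            if s > window:
                break
            if rng.random() < (k_omori / (c + s) ** p) / rate_max:
                events.append((t + s, False))     # accepted aftershock
    events.sort()
    return events
```

Reshuffling experiments like those in the abstract can then be reproduced by permuting the mainshock waiting times or scattering the aftershock clusters and recomputing the scaling exponents.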
Numerical simulation of the hydrodynamical combustion to strange quark matter
NASA Astrophysics Data System (ADS)
Niebergal, Brian; Ouyed, Rachid; Jaikumar, Prashanth
2010-12-01
We present results from a numerical solution to the burning of neutron matter inside a cold neutron star into stable u,d,s quark matter. Our method solves hydrodynamical flow equations in one dimension with neutrino emission from weak equilibrating reactions, and strange quark diffusion across the burning front. We also include entropy change from heat released in forming the stable quark phase. Our numerical results suggest burning front laminar speeds of 0.002-0.04 times the speed of light, much faster than previous estimates derived using only a reactive-diffusive description. Analytic solutions to hydrodynamical jump conditions with a temperature-dependent equation of state agree very well with our numerical findings for fluid velocities. The most important effect of neutrino cooling is that the conversion front stalls at lower density (below ≈2 times saturation density). In a two-dimensional setting, such rapid speeds and neutrino cooling may allow for a flame wrinkle instability to develop, possibly leading to detonation.
Momentum and charge transport in non-relativistic holographic fluids from Hořava gravity
NASA Astrophysics Data System (ADS)
Davison, Richard A.; Grozdanov, Sašo; Janiszewski, Stefan; Kaminski, Matthias
2016-11-01
We study the linearized transport of transverse momentum and charge in a conjectured field theory dual to a black brane solution of Hořava gravity with Lifshitz exponent z = 1. As expected from general hydrodynamic reasoning, we find that both of these quantities are diffusive over distance and time scales larger than the inverse temperature. We compute the diffusion constants and conductivities of transverse momentum and charge, as well as the ratio of shear viscosity to entropy density, and find that they differ from their relativistic counterparts. To derive these results, we propose how the holographic dictionary should be modified to deal with the multiple horizons and differing propagation speeds of bulk excitations in Hořava gravity. When possible, as a check on our methods and results, we use the covariant Einstein-Aether formulation of Hořava gravity, along with field redefinitions, to re-derive our results from a relativistic bulk theory.
Adsorption characteristics of sol gel-derived zirconia for cesium ions from aqueous solutions.
Yakout, Sobhy M; Hassan, Hisham S
2014-07-01
Zirconia powder was synthesized via a sol gel method and placed in a batch reactor for cesium removal investigation. X-ray analysis and Fourier transform infrared spectroscopy were utilized for the evaluation of the developed adsorbent. The adsorption process has been investigated as a function of pH, contact time, and temperature. The adsorption is strongly dependent on the pH of the medium, with the removal efficiency increasing as the pH shifts toward the alkaline range. The process was initially very fast, and the maximum adsorption was attained within 60 min of contact. A pseudo-second-order model and a homogeneous particle diffusion model (HPDM) were found to best correlate the diffusion of cesium into the zirconia particles. Furthermore, the adsorption thermodynamic parameters, namely the standard enthalpy, entropy, and Gibbs free energy, were calculated. The results indicate that cesium adsorption by zirconia is an endothermic (ΔH > 0) process and that cesium ions have a good affinity for the sorbent (ΔS > 0).
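Enthalpy and entropy of adsorption are conventionally extracted from a van't Hoff plot of the equilibrium constant against inverse temperature. A hedged sketch with synthetic data (the ΔH and ΔS values below are illustrative, not the study's measured results):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def vant_hoff_fit(temps_k, k_eq):
    """Least-squares fit of ln K = -dH/(R*T) + dS/R; returns (dH, dS).

    dH > 0 indicates an endothermic process; dS > 0 indicates increased
    randomness at the solid-solution interface on adsorption.
    """
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(k) for k in k_eq]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, intercept * R  # dH in J/mol, dS in J/(mol K)

# Synthetic check: data generated from known dH = +40 kJ/mol, dS = +150 J/(mol K)
temps = [298.0, 308.0, 318.0, 328.0]
ks = [math.exp(-40000.0 / (R * t) + 150.0 / R) for t in temps]
```

ΔG at each temperature then follows from ΔG = ΔH − TΔS, with ΔG < 0 indicating spontaneous adsorption.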
Continuous information flow fluctuations
NASA Astrophysics Data System (ADS)
Rosinberg, Martin Luc; Horowitz, Jordan M.
2016-10-01
Information plays a pivotal role in the thermodynamics of nonequilibrium processes with feedback. However, much remains to be learned about the nature of information fluctuations in small-scale devices and their relation to fluctuations in other thermodynamic quantities, like heat and work. Here we derive a series of fluctuation theorems for information flow and partial entropy production in a Brownian particle model of feedback cooling and extend them to arbitrary driven diffusion processes. We then analyze the long-time behavior of the feedback-cooling model in detail. Our results provide insights into the structure and origin of large deviations of information and thermodynamic quantities in autonomous Maxwell's demons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapas, Luciano C., E-mail: luciano.lapas@unila.edu.br; Ferreira, Rogelma M. S., E-mail: rogelma.maria@gmail.com; Rubí, J. Miguel, E-mail: mrubi@ub.edu
2015-03-14
We analyze the temperature relaxation phenomena of systems in contact with a thermal reservoir that undergoes a non-Markovian diffusion process. From a generalized Langevin equation, we show that the temperature is governed by a Newton's-law-type law of cooling in which the relaxation time depends on the velocity autocorrelation and is thus characterized by the memory function. The analysis of the temperature decay reveals the existence of an anomalous cooling in which the temperature may oscillate. Despite this anomalous behavior, we show that the variation of entropy remains always positive, in accordance with the second law of thermodynamics.
A centroid molecular dynamics study of liquid para-hydrogen and ortho-deuterium.
Hone, Tyler D; Voth, Gregory A
2004-10-01
Centroid molecular dynamics (CMD) is applied to the study of collective and single-particle dynamics in liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The CMD results are compared with the results of classical molecular dynamics, quantum mode coupling theory, a maximum entropy analytic continuation approach, pair-product forward-backward semiclassical dynamics, and available experimental results. The self-diffusion constants are in excellent agreement with the experimental measurements for all systems studied. Furthermore, it is shown that the method is able to adequately describe both the single-particle and collective dynamics of quantum liquids. (c) 2004 American Institute of Physics.
Quantifying edge significance on maintaining global connectivity
Qian, Yuhua; Li, Yebin; Zhang, Min; Ma, Guoshuai; Lu, Furong
2017-01-01
Global connectivity is a crucially important property of networks. The failures of some key edges may lead to breakdown of the whole system, so finding them provides a better understanding of system robustness. Based on topological information, we propose an approach named LE (link entropy) to quantify edge significance in maintaining global connectivity. We then compare LE with six other acknowledged indices of edge significance: edge betweenness centrality, degree product, bridgeness, diffusion importance, topological overlap and k-path edge centrality. Experimental results show that the LE approach performs best in quantifying edge significance for maintaining global connectivity. PMID:28349923
Chaimovich, Aviel; Shell, M Scott
2009-03-28
Recent efforts have attempted to understand many of liquid water's anomalous properties in terms of effective spherically-symmetric pairwise molecular interactions entailing two characteristic length scales (so-called "core-softened" potentials). In this work, we examine the extent to which such simple descriptions of water are representative of the true underlying interactions by extracting coarse-grained potential functions that are optimized to reproduce the behavior of an all-atom model. To perform this optimization, we use a novel procedure based upon minimizing the relative entropy, a quantity that measures the extent to which a coarse-grained configurational ensemble overlaps with a reference all-atom one. We show that the optimized spherically-symmetric water models exhibit notable variations with the state conditions at which they were optimized, reflecting in particular the shifting accessibility of networked hydrogen bonding interactions. Moreover, we find that water's density and diffusivity anomalies are only reproduced when the effective coarse-grained potentials are allowed to vary with state. Our results therefore suggest that no state-independent spherically-symmetric potential can fully capture the interactions responsible for water's unique behavior; rather, the particular way in which the effective interactions vary with temperature and density contributes significantly to anomalous properties.
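For discrete distributions, the relative entropy minimized in this kind of coarse-graining is S_rel = Σᵢ pᵢ ln(pᵢ/qᵢ). A minimal sketch of matching a one-parameter coarse model to a reference ensemble by grid search (the four-state "ensembles" and the model family are hypothetical toys, not the water potentials of the paper):

```python
import numpy as np

def relative_entropy(p, q):
    """S_rel = sum_i p_i ln(p_i / q_i); zero iff the two distributions coincide."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Reference ("all-atom") distribution over 4 states
p_ref = np.array([0.1, 0.2, 0.4, 0.3])

def coarse_model(theta, n=4):
    """Hypothetical one-parameter Boltzmann-like coarse-grained family."""
    w = np.exp(-theta * (np.arange(n) - 2.0) ** 2)
    return w / w.sum()

# Choose the coarse-grained parameter by minimizing the relative entropy
thetas = np.linspace(0.01, 5.0, 500)
scores = [relative_entropy(p_ref, coarse_model(th)) for th in thetas]
theta_star = thetas[int(np.argmin(scores))]
```

The minimum of S_rel over the family measures how well the coarse ensemble can overlap the reference one; it is strictly positive whenever the family cannot reproduce the reference exactly.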
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan
2014-09-01
In this paper, feature extraction and pattern recognition for distributed optical fiber sensing signals are studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
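The wavelet packet Shannon entropy feature can be illustrated with a numpy-only sketch: decompose a signal into a full wavelet packet tree, then take the Shannon entropy of the normalized band-energy distribution. This uses a hand-rolled Haar packet transform for self-containment (the paper uses Daubechies wavelets; the signals below are synthetic stand-ins):

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: return (approximation, detail) half-bands."""
    x = x[: len(x) // 2 * 2].reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def wavelet_packet_bands(x, levels):
    """Full wavelet packet tree: split every band at every level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt += [a, d]
        bands = nxt
    return bands  # 2**levels frequency bands

def shannon_energy_entropy(bands):
    """Shannon entropy of the normalized band-energy distribution."""
    e = np.array([np.sum(b**2) for b in bands])
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)  # energy in few bands
noise = rng.standard_normal(1024)                       # energy spread out
h_tone = shannon_energy_entropy(wavelet_packet_bands(tone, 6))
h_noise = shannon_energy_entropy(wavelet_packet_bands(noise, 6))
```

A narrowband signal concentrates energy in a few packet bands and so has low entropy; broadband noise spreads energy over all 64 level-6 bands and has high entropy, which is what makes the entropy usable as a discriminative feature.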
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a narrow confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. In particular, our core contribution is that by means of the proposed method one can precisely estimate Shannon entropy from limited records.
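Plain (unbalanced) diffusion entropy analysis, on which this estimator builds, can be sketched briefly: treat overlapping window sums of the series as diffusion displacements, estimate the Shannon entropy S(t) of their distribution, and read the scaling exponent δ off the fit S(t) ≈ A + δ ln t. This is a generic illustration, not the paper's balanced correlation-dependent estimator; for uncorrelated Gaussian noise the true δ is 0.5:

```python
import numpy as np

def diffusion_entropy(xi, windows, bins=60):
    """Shannon entropy S(t) of the diffusion pdf built from series xi."""
    s = []
    for t in windows:
        # overlapping-window displacements: sums of t consecutive increments
        disp = np.convolve(xi, np.ones(t), mode="valid")
        p, edges = np.histogram(disp, bins=bins, density=True)
        dx = edges[1] - edges[0]
        mask = p > 0
        s.append(float(-np.sum(p[mask] * np.log(p[mask])) * dx))
    return np.array(s)

rng = np.random.default_rng(1)
xi = rng.standard_normal(20000)        # uncorrelated noise, true delta = 0.5
windows = np.unique(np.logspace(0.5, 2, 12).astype(int))
S = diffusion_entropy(xi, windows)
delta = np.polyfit(np.log(windows), S, 1)[0]  # slope ~ scaling exponent
```

With 2·10⁴ samples the slope lands close to 0.5; the paper's point is precisely that for series of length ~10² this naive histogram estimate becomes badly biased, motivating the balanced estimator.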
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holroyd, R.A.; Schwarz, H.A.; Stradowska, E.
The rate constants for attachment of excess electrons to 1,3-butadiene (k_a) and detachment from the butadiene anion (k_d) in n-hexane are reported. The equilibrium constant, K_eq = k_a/k_d, increases rapidly with pressure and decreases as the temperature increases. At -7 °C attachment is observed at 1 bar. At high pressures the attachment rate is diffusion controlled. The activation energy for detachment is about 21 kcal/mol; detachment is facilitated by the large entropy of activation. The reaction volumes for attachment range from -181 cm³/mol at 400 bar to -122 cm³/mol at 1500 bar and are largely attributed to the electrostriction volume of the butadiene anion (ΔV̄_el). Values of ΔV̄_el calculated by a model, which includes a glassy shell of solvent molecules around the ion, are in agreement with experimental reaction volumes. The analysis indicates the partial molar volume of the electron in hexane is small and probably negative. It is shown that the entropies of reaction are closely related to the partial molar volumes of reaction. 22 refs., 5 figs., 5 tabs.
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2016-12-01
A point source on a plane constantly emits particles which rapidly diffuse and then stick to a growing cluster. The growth probability of a cluster is presented as a sum over all possible scenarios leading to the same final shape. The classical point for the action, defined as minus the logarithm of the growth probability, describes the most probable scenario and reproduces the Laplacian growth equation, which embraces numerous fundamental free boundary dynamics in nonequilibrium physics. For nonclassical scenarios we introduce virtual point sources, in whose presence the action becomes the Kullback-Leibler entropy. Strikingly, this entropy is shown to be the sum of electrostatic energies of layers grown per elementary time unit. Hence the growth probability of the presented nonequilibrium process obeys Gibbs-Boltzmann statistics, which, as a rule, do not apply out of equilibrium. Each layer's probability is expressed as a product of simple factors in an auxiliary complex plane after a properly chosen conformal map. The action in this plane is a sum of Robin functions, which solve the Liouville equation. Finally, we establish connections of our theory with the τ function of the integrable Toda hierarchy and with the Liouville theory for noncritical quantum strings.
Refined two-index entropy and multiscale analysis for complex system
NASA Astrophysics Data System (ADS)
Bian, Songhan; Shang, Pengjian
2016-10-01
As a fundamental concept for describing complex systems, the entropy measure has been proposed in various forms, like Boltzmann-Gibbs (BG) entropy, one-index entropy, two-index entropy, sample entropy, permutation entropy, etc. This paper proposes a new two-index entropy Sq,δ, which we find applicable to measuring the complexity of a wide range of systems in terms of randomness and fluctuation range. For more complex systems, the value of the two-index entropy is smaller and the correlation between the parameter δ and the entropy Sq,δ is weaker. By combining the refined two-index entropy Sq,δ with the scaling exponent h(δ), this paper analyzes the complexities of simulated series and effectively classifies several financial markets in various regions of the world.
Microcanonical entropy for classical systems
NASA Astrophysics Data System (ADS)
Franzosi, Roberto
2018-03-01
The entropy definition in the microcanonical ensemble is revisited. We propose a novel definition for the microcanonical entropy that resolves the debate on the correct definition of the microcanonical entropy. In particular we show that this entropy definition fixes the problem inherent in the exact extensivity of the caloric equation. Furthermore, this entropy reproduces results in agreement with those predicted by the standard Boltzmann entropy when applied to macroscopic systems. By contrast, the predictions obtained with the standard Boltzmann entropy and with the entropy we propose differ for small system sizes. Thus, we conclude that the Boltzmann entropy provides a correct description for macroscopic systems, whereas extremely small systems are better described by the entropy that we propose here.
On S-mixing entropy of quantum channels
NASA Astrophysics Data System (ADS)
Mukhamedov, Farrukh; Watanabe, Noboru
2018-06-01
In this paper, an S-mixing entropy of quantum channels is introduced as a generalization of Ohya's S-mixing entropy. We investigate several properties of the introduced entropy. Moreover, certain relations between the S-mixing entropy and the existing map and output entropies of quantum channels are investigated as well. These relations allow us to find certain connections between separable states and the introduced entropy, yielding a sufficient condition for detecting entangled states. Besides, entropies of qubit and phase-damping channels are calculated.
Multiresolution strategies for the numerical solution of optimal control problems
NASA Astrophysics Data System (ADS)
Jain, Sachin
There exist many numerical techniques for solving optimal control problems, but comparatively little work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed.
The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed. For such problems, high accuracy is desirable only in the immediate future, yet the ultimate mission objectives should be accommodated as well. An intelligent trajectory generation for such situations is thus enabled by introducing the idea of multigrid temporal resolution to solve the associated trajectory optimization problem on a non-uniform grid across time that is adapted to: (i) immediate future, and (ii) potential discontinuities in the state and control variables.
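The core idea of multiresolution grid adaptation, keeping points only where the solution has detail, can be sketched with a generic interpolating-wavelet criterion: predict each fine-level point by linear interpolation from its coarse-level neighbors and retain it only when the prediction error (the detail coefficient) exceeds a threshold. This is a one-level illustration of the general mechanism, not the author's exact algorithm:

```python
import numpy as np

def refine_grid(f, t, threshold=1e-3):
    """Keep a fine-level point when its interpolative detail is large:
    detail = |f(t_k) - linear interpolation from coarser neighbors|."""
    keep = np.zeros(len(t), dtype=bool)
    for k in range(1, len(t) - 1, 2):        # odd (fine-level) points
        pred = 0.5 * (f[k - 1] + f[k + 1])   # prediction from even neighbors
        keep[k] = abs(f[k] - pred) > threshold
    keep[0::2] = True                        # always retain the coarse level
    return t[keep], f[keep]

t = np.linspace(0.0, 1.0, 257)
f = np.where(t < 0.5, 0.0, 1.0)              # bang-bang-like control switch
t_ref, f_ref = refine_grid(f, t, threshold=1e-3)
# Fine-level points survive only next to the discontinuity at t = 0.5
```

On this piecewise-constant "control", every fine-level point away from the switch is predicted exactly and dropped; only the single fine point straddling the discontinuity is retained, which is exactly the clustering behavior described above.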
INTEGRATING MESO-AND MICRO-SIMULATION MODELS TO EVALUATE TRAFFIC MANAGEMENT STRATEGIES, YEAR 2
DOT National Transportation Integrated Search
2017-07-04
In the Year 1 Report, the Arizona State University (ASU) Project Team described the development of a hierarchical multi-resolution simulation platform to test proactive traffic management strategies. The scope was to integrate an easily available mic...
MRLC-LAND COVER MAPPING, ACCURACY ASSESSMENT AND APPLICATION RESEARCH
The National Land Cover Database (NLCD), produced by the Multi-Resolution Land Characteristics (MRLC) provides consistently classified land-cover and ancillary data for the United States. These data support many of the modeling and monitoring efforts related to GPRA goals of Cle...
DOT National Transportation Integrated Search
2016-06-01
In this project the researchers developed a hierarchical multi-resolution traffic simulation system for metropolitan areas, referred to as MetroSim. Categorically, the focus is on integrating two types of simulation: microscopic simulation in which i...
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with the maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
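The two entropy families mentioned here have standard discrete-distribution forms, S_q = (1 − Σᵢ pᵢ^q)/(q − 1) and S_κ = −Σᵢ (pᵢ^{1+κ} − pᵢ^{1−κ})/(2κ), both of which recover the Boltzmann-Gibbs/Shannon entropy in the limits q → 1 and κ → 0. A small numerical check of those limits (the example distribution is arbitrary):

```python
import numpy as np

def shannon(p):
    """Boltzmann-Gibbs/Shannon entropy."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def tsallis(p, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def kaniadakis(p, kappa):
    """Kaniadakis kappa-entropy S_k = -sum (p^(1+k) - p^(1-k)) / (2k)."""
    return float(-np.sum((p ** (1 + kappa) - p ** (1 - kappa)) / (2 * kappa)))

p = np.array([0.5, 0.25, 0.125, 0.125])
s_bg = shannon(p)
s_q = tsallis(p, 1.0001)    # -> s_bg as q -> 1
s_k = kaniadakis(p, 1e-4)   # -> s_bg as kappa -> 0
```

Both one-parameter deformations reduce to the BG value smoothly, which is what allows them to be embedded in the two-parameter family the abstract refers to.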
Multiresolution MR elastography using nonlinear inversion
McGarry, M. D. J.; Van Houten, E. E. W.; Johnson, C. L.; Georgiadis, J. G.; Sutton, B. P.; Weaver, J. B.; Paulsen, K. D.
2012-01-01
Purpose: Nonlinear inversion (NLI) in MR elastography requires discretization of the displacement field for a finite element (FE) solution of the “forward problem”, and discretization of the unknown mechanical property field for the iterative solution of the “inverse problem”. The resolution requirements for these two discretizations are different: the forward problem requires sufficient resolution of the displacement FE mesh to ensure convergence, whereas lowering the mechanical property resolution in the inverse problem stabilizes the mechanical property estimates in the presence of measurement noise. Previous NLI implementations use the same FE mesh to support the displacement and property fields, requiring a trade-off between the competing resolution requirements. Methods: This work implements and evaluates multiresolution FE meshes for NLI elastography, allowing independent discretizations of the displacements and each mechanical property parameter to be estimated. The displacement resolution can then be selected to ensure mesh convergence, and the resolution of the property meshes can be independently manipulated to control the stability of the inversion. Results: Phantom experiments indicate that eight nodes per wavelength (NPW) are sufficient for accurate mechanical property recovery, whereas mechanical property estimation from 50 Hz in vivo brain data stabilizes once the displacement resolution reaches 1.7 mm (approximately 19 NPW). Viscoelastic mechanical property estimates of in vivo brain tissue show that subsampling the loss modulus while holding the storage modulus resolution constant does not substantially alter the storage modulus images. Controlling the ratio of the number of measurements to unknown mechanical properties by subsampling the mechanical property distributions (relative to the data resolution) improves the repeatability of the property estimates, at a cost of modestly decreased spatial resolution. 
Conclusions: Multiresolution NLI elastography provides a more flexible framework for mechanical property estimation compared to previous single mesh implementations. PMID:23039674
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models.
By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
NASA Astrophysics Data System (ADS)
Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.; Houser, C.
2014-12-01
Aeolian systems are ideal natural laboratories for examining self-organization in patterned landscapes, as certain wind regimes generate certain morphologies. Topographic information and scale-dependent analysis offer the opportunity to study such systems and characterize process-form relationships. A statistically based methodology for differentiating aeolian features would enable the quantitative association of certain surface characteristics with certain morphodynamic regimes. We conducted a multi-resolution analysis of LiDAR elevation data to assess scale-dependent morphometric variations in an aeolian landscape in South Texas. For each pixel, mean elevation values are calculated along concentric circles moving outward at 100-meter intervals (i.e. 500 m, 600 m, 700 m from the pixel). The calculated average elevation values, plotted against distance from the pixel of interest as curves, are used to differentiate multi-scalar variations in elevation across the landscape. In this case, it is hypothesized that these curves may be used to quantitatively differentiate certain morphometries from others, much as a spectral signature may be used to classify paved surfaces versus natural vegetation, for example. After generating multi-resolution curves for all the pixels in a selected area of interest (AOI), a Principal Components Analysis is used to highlight commonalities and singularities between the curves generated from pixels across the AOI. Our findings suggest that the resulting components could be used for identification of discrete aeolian features like open sands, trailing ridges and active dune crests, and, in particular, zones of deflation. This new approach to landscape characterization not only mitigates the bias introduced when researchers must select training pixels for morphometric investigations, but can also reveal patterning in aeolian landscapes that would not be as obvious without quantitative characterization.
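The per-pixel "multi-resolution curve" described above, mean elevation on concentric rings at fixed radial intervals, can be sketched on a synthetic grid (the ring half-width, cell size, and test surface are assumptions for illustration, not details from the study):

```python
import numpy as np

def ring_profile(dem, row, col, radii, cell_size=10.0):
    """Mean elevation on concentric rings (in map units) around one pixel."""
    rr, cc = np.indices(dem.shape)
    dist = np.hypot(rr - row, cc - col) * cell_size
    profile = []
    for r in radii:
        ring = (dist >= r - cell_size) & (dist < r + cell_size)
        profile.append(dem[ring].mean())
    return np.array(profile)

# Synthetic test surface: elevation rises radially from the center pixel
n = 201
y, x = np.indices((n, n))
dem = np.hypot(x - 100, y - 100) * 0.5

radii = np.arange(500, 901, 100)          # 500, 600, ..., 900 m
curve = ring_profile(dem, 100, 100, radii, cell_size=10.0)
```

Stacking such curves for every pixel in the AOI yields the matrix on which the Principal Components Analysis described above operates; for this radially rising test surface the ring means grow monotonically with radius.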
Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches
NASA Astrophysics Data System (ADS)
Duchaineau, Mark
2001-06-01
Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)), as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan
2014-03-01
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how best to combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-01-01
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: First, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. PMID:26703596
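Step (2), wavelet-domain image fusion, can be illustrated with a generic one-level Haar sketch: average the low-pass band and keep the stronger detail coefficient from either image. This is a common textbook fusion rule used here for illustration, not the paper's specific hybrid fusing rule:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(img1, img2):
    """Average the low-pass band; keep the larger-magnitude detail coefficient."""
    b1, b2 = haar2d(img1), haar2d(img2)
    ll = (b1[0] + b2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(b1[1:], b2[1:])]
    return ihaar2d(ll, *details)

rng = np.random.default_rng(0)
a = rng.random((8, 8))   # stand-in for a palmprint image
b = rng.random((8, 8))   # stand-in for a vein pattern image
fused = rng and fuse(a, b)
```

Because the transform is exactly invertible, fusing an image with itself returns the image unchanged; fusing two different images blends their coarse structure while preserving the strongest edges of each.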
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
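The detail-transfer idea behind MMA can be caricatured in one dimension: keep the functional image's approximation band and import anatomical detail scaled by a single global regression coefficient. This is a sketch of the global model whose limitations motivate the paper's 3D local extension; the single-level Haar transform and function names are our assumptions:

```python
import numpy as np

def haar1d(x):
    """Single-level 1D Haar analysis: (approximation, detail)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar1d(a, d):
    """Single-level 1D Haar synthesis (inverse of haar1d)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def mma_1d(pet, mri):
    """One-level, 1D caricature of MMA: keep the PET approximation band and
    import anatomical (MRI) detail scaled by one global least-squares
    coefficient.  Assumes the MRI detail band is not identically zero."""
    pa, pd = haar1d(pet)
    ma, md = haar1d(mri)
    alpha = np.dot(md, pd) / np.dot(md, md)   # global detail regression
    return ihaar1d(pa, alpha * md)
```

When the two inputs coincide, the regression coefficient is 1 and the profile is returned unchanged, which is the expected degenerate behaviour of a global model.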
Integrated Multiscale Modeling of Molecular Computing Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were used to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm significantly reduces the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.
Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong
2015-12-12
Quantile based Tsallis entropy in residual lifetime
NASA Astrophysics Data System (ADS)
Khammar, A. H.; Jahanshahi, S. M. A.
2018-02-01
Tsallis entropy is a generalization of type α of the Shannon entropy; it is a nonadditive entropy, unlike the Shannon entropy. Shannon entropy may be negative for some distributions, but Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy function in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of Rényi's residual quantile entropy. We also obtain a relationship between the RQTE and the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes, and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that the RQTE uniquely determines the parent distribution, unlike the residual Tsallis entropy.
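For orientation, the differential Tsallis entropy of order α, and the quantile reformulation that measures like the RQTE build on, can be written as (standard definitions, our notation):

```latex
S_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^\infty f^{\alpha}(x)\,dx\right)
            = \frac{1}{\alpha-1}\left(1 - \int_0^1 \bigl(q(u)\bigr)^{1-\alpha}\,du\right),
\qquad \alpha > 0,\ \alpha \neq 1,
```

where f is the density, q(u) = Q'(u) is the quantile density, and the second form follows from the substitution x = Q(u) together with f(Q(u)) = 1/q(u); as α → 1 both forms reduce to the Shannon entropy. The paper's RQTE is the analogous quantile expression for the residual lifetime X − t given X > t.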
Time-dependent entropy evolution in microscopic and macroscopic electromagnetic relaxation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker-Jarvis, James
This paper is a study of entropy and its evolution in the time and frequency domains upon application of electromagnetic fields to materials. An understanding of entropy and its evolution in electromagnetic interactions bridges the boundaries between electromagnetism and thermodynamics. The approach used here is a Liouville-based statistical-mechanical theory. I show that the microscopic entropy is reversible and the macroscopic entropy satisfies an H theorem. The spectral entropy development can be very useful for studying the frequency response of materials. Using a projection-operator based nonequilibrium entropy, different equations are derived for the entropy and entropy production and are applied to the polarization, magnetization, and macroscopic fields. I begin by proving an exact H theorem for the entropy, progress to application of time-dependent entropy in electromagnetics, and then apply the theory to relevant applications in electromagnetics. The paper concludes with a discussion of the relationship of the frequency-domain form of the entropy to the permittivity, permeability, and impedance.
Entropy flow and entropy production in the human body in basal conditions.
Aoki, I
1989-11-08
Entropy inflow and outflow for the naked human body in basal conditions in the respiration calorimeter due to infrared radiation, convection, evaporation of water and mass flow are calculated using the energetic data obtained by Hardy & Du Bois. Also, the change of entropy content in the body is estimated. The entropy production in the human body is obtained as the change of entropy content minus the net entropy flow into the body. The entropy production thus calculated is positive. The magnitude of entropy production per effective radiating surface area does not show any significant variation between subjects. The entropy production is nearly constant at calorimeter temperatures of 26-32 degrees C; the average in this temperature range is 0.172 J m^-2 s^-1 K^-1. Forced air currents around the human body and also clothing have almost no effect on the entropy production. Thus, the entropy production of the naked human body in basal conditions does not depend on these environmental factors.
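The bookkeeping in the abstract (production = change of entropy content minus net entropy inflow) is simple to state in code. The numbers below are illustrative stand-ins chosen to land near the paper's average, not Aoki's measured fluxes:

```python
# Entropy balance for an open system (here, the body in a calorimeter).
# All rates in J m^-2 s^-1 K^-1 per effective radiating surface area.
s_inflow = 0.300      # illustrative: entropy received (radiation, convection)
s_outflow = 0.470     # illustrative: entropy exported (radiation, evaporation)
ds_content = 0.002    # illustrative: rate of change of entropy content

net_inflow = s_inflow - s_outflow          # negative: net export of entropy
production = ds_content - net_inflow       # internal entropy production

# The second law requires nonnegative internal production.
assert production > 0
print(f"entropy production = {production:.3f} J m^-2 s^-1 K^-1")
```

The sign convention is the point: the body exports more entropy than it receives (negative net inflow), so a small or zero change in entropy content still implies strictly positive internal production.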
Maximum entropy production principle for geostrophic turbulence
NASA Astrophysics Data System (ADS)
Sommeria, J.; Bouchet, F.; Chavanis, P. H.
2003-04-01
In 2D turbulence, complex stirring leads to the formation of steady organized states, once fine scale fluctuations have been filtered out. This self-organization can be explained in terms of statistical equilibrium for vorticity, as the most likely outcome of vorticity parcel rearrangements with the constraints of the conservation laws. A mixing entropy describing the vorticity rearrangements is introduced. Extension to the shallow water system has been proposed by Chavanis P.H. and Sommeria J. (2002), Phys. Rev. E. Generalization to multi-layer geostrophic flows is formally straightforward. Outside equilibrium, eddy fluxes should drive the system toward equilibrium, in the spirit of nonequilibrium linear thermodynamics. This can be formalized in terms of a principle of maximum entropy production (MEP), as shown by Robert and Sommeria (1991), Phys. Rev. Lett. 69. Then a parameterization of eddy fluxes is obtained, involving an eddy diffusivity plus a drift term acting at larger scale. These two terms balance each other at equilibrium, resulting in a nontrivial steady flow, which is the mean state of the statistical equilibrium. Applications of this eddy parametrization will be presented, in the context of oceanic circulation and Jupiter's Great Red Spot. Quantitative tests will be discussed, obtained by comparisons with direct numerical simulations. Kinetic models, inspired from plasma physics, provide a more precise description of the relaxation toward equilibrium, as shown by Chavanis P.H. 2000 ``Quasilinear theory of the 2D Euler equation'', Phys. Rev. Lett. 84. This approach provides relaxation equations with a form similar to the MEP, but not identical. In conclusion, the MEP provides the right trends of the system but its precise justification remains elusive.
Numerical study of the directed polymer in a 1 + 3 dimensional random medium
NASA Astrophysics Data System (ADS)
Monthus, C.; Garel, T.
2006-09-01
The directed polymer in a 1+3 dimensional random medium is known to present a disorder-induced phase transition. For a polymer of length L, the high-temperature phase is characterized by a diffusive behavior for the end-point displacement, R^2 ∼ L, and by free-energy fluctuations of order ΔF(L) ∼ O(1). The low-temperature phase is characterized by an anomalous wandering exponent, R^2/L ∼ L^ω, and by free-energy fluctuations of order ΔF(L) ∼ L^ω, where ω ≈ 0.18. In this paper, we first study the scaling behavior of various properties to localize the critical temperature Tc. Our results concerning R^2/L and ΔF(L) point towards 0.76 < Tc ≤ T2 = 0.79, so our conclusion is that Tc is equal or very close to the upper bound T2 derived by Derrida and coworkers (T2 corresponds to the temperature above which the ratio \overline{Z_L^2}/(\overline{Z_L})^2 remains finite as L → ∞). We then present histograms of the free energy, energy and entropy over disorder samples. For T ≫ Tc, the free-energy distribution is found to be Gaussian. For T ≪ Tc, the free-energy distribution coincides with the ground-state energy distribution, in agreement with the zero-temperature fixed-point picture. Moreover, the entropy fluctuations are of order ΔS ∼ L^{1/2} and follow a Gaussian distribution, in agreement with the droplet predictions, where the free-energy term ΔF ∼ L^ω is a near cancellation of energy and entropy contributions of order L^{1/2}.
Resting State Brain Entropy Alterations in Relapsing Remitting Multiple Sclerosis.
Zhou, Fuqing; Zhuang, Ying; Gong, Honghan; Zhan, Jie; Grossman, Murray; Wang, Ze
2016-01-01
Brain entropy (BEN) mapping provides a novel approach to characterizing brain temporal dynamics, a key feature of the human brain. Using resting state functional magnetic resonance imaging (rsfMRI), reliable and spatially distributed BEN patterns have been identified in the normal brain, suggesting a potential use in clinical populations, since temporal brain dynamics and entropy may be altered in disease conditions. The purpose of this study was to characterize BEN in multiple sclerosis (MS), a neurodegenerative disease that affects millions of people. Since there is currently no cure for MS, developing treatments or medications that can slow its progression represents a high research priority, for which validating a brain marker sensitive to the disease and the related functional impairments is essential. Because MS can start a long time before any measurable symptoms and structural deficits, assessing the dynamic brain activity and correspondingly BEN may provide a critical way to study MS and its progression. Because BEN is new to MS, we aimed to assess BEN alterations in relapsing-remitting MS (RRMS) patients using a patient-versus-control design, to examine the correlation of BEN with clinical measurements, and to check the correlation of BEN with structural brain measures, which have been used more often in MS studies. As compared to controls, RRMS patients showed increased BEN in motor areas, an executive control area, a spatial coordinating area, and the memory system. Increased BEN was related to greater disease severity as measured by the expanded disability status scale (EDSS) and greater tissue damage as indicated by the mean diffusivity. Patients also showed decreased BEN in other regions, which was associated with less disability or fatigue, indicating a disease-related BEN redistribution. Our results suggest BEN as a novel and useful tool for characterizing RRMS.
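BEN mapping rests on an entropy estimate of each voxel's rsfMRI time series, commonly sample entropy. The abstract does not specify the estimator, so the following is a generic SampEn reference implementation (the standard Richman-Moorman definition), not the authors' pipeline:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -log(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    the same pairs still matching at length m+1.  O(N^2) reference version."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()            # conventional default tolerance
    n = x.size
    tm = np.array([x[i:i + m] for i in range(n - m)])        # length-m templates
    tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])   # length-(m+1)
    B = A = 0
    for i in range(len(tm) - 1):
        db = np.max(np.abs(tm[i + 1:] - tm[i]), axis=1)
        da = np.max(np.abs(tm1[i + 1:] - tm1[i]), axis=1)
        B += np.count_nonzero(db <= r)
        A += np.count_nonzero(da <= r)
    return np.inf if A == 0 else -np.log(A / B)
```

A perfectly regular series yields zero entropy, while an irregular one yields a strictly positive value; higher SampEn means less predictable dynamics, which is the quantity the BEN maps spatially.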
Can histogram analysis of MR images predict aggressiveness in pancreatic neuroendocrine tumors?
De Robertis, Riccardo; Maris, Bogdan; Cardobi, Nicolò; Tinazzi Martini, Paolo; Gobbo, Stefano; Capelli, Paola; Ortolani, Silvia; Cingarlini, Sara; Paiella, Salvatore; Landoni, Luca; Butturini, Giovanni; Regi, Paolo; Scarpa, Aldo; Tortora, Giampaolo; D'Onofrio, Mirko
2018-06-01
To evaluate MRI-derived whole-tumour histogram analysis parameters in predicting pancreatic neuroendocrine neoplasm (panNEN) grade and aggressiveness. Pre-operative MR images of 42 consecutive patients with panNEN >1 cm were retrospectively analysed. T1-/T2-weighted images and ADC maps were analysed. Histogram-derived parameters were compared to histopathological features using the Mann-Whitney U test. Diagnostic accuracy was assessed by ROC-AUC analysis; sensitivity and specificity were assessed for each histogram parameter. ADC entropy was significantly higher in G2-3 tumours, with ROC-AUC 0.757; sensitivity and specificity were 83.3 % (95 % CI: 61.2-94.5) and 61.1 % (95 % CI: 36.1-81.7). ADC kurtosis was higher in panNENs with vascular involvement, nodal and hepatic metastases (p = .008, .021 and .008; ROC-AUC = 0.820, 0.709 and 0.820); sensitivity and specificity were 85.7/74.3 % (95 % CI: 42-99.2/56.4-86.9), 36.8/96.5 % (95 % CI: 17.2-61.4/76-99.8) and 100/62.8 % (95 % CI: 56.1-100/44.9-78.1). No significant differences between groups were found for the other histogram-derived parameters (p > .05). Whole-tumour histogram analysis of ADC maps may be helpful in predicting tumour grade, vascular involvement, and nodal and liver metastases in panNENs. ADC entropy and ADC kurtosis are the most accurate parameters for identification of panNENs with malignant behaviour. • Whole-tumour ADC histogram analysis can predict aggressiveness in pancreatic neuroendocrine neoplasms. • ADC entropy and kurtosis are higher in aggressive tumours. • ADC histogram analysis can quantify tumour diffusion heterogeneity. • Non-invasive quantification of tumour heterogeneity can provide adjunctive information for prognostication.
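Whole-tumour histogram analysis of this kind reduces to first-order statistics of the voxel values. A minimal sketch with generic definitions of histogram entropy and excess kurtosis (not the software used in the study; bin count is an assumption):

```python
import numpy as np

def histogram_features(values, bins=64):
    """First-order histogram features used in texture analysis:
    Shannon entropy (bits) of the grey-level histogram, plus excess kurtosis
    of the raw values (normal distribution -> 0).  Assumes non-constant input."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                            # drop empty bins before the log
    entropy = -np.sum(p * np.log2(p))
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    kurtosis = np.mean(z ** 4) - 3.0
    return entropy, kurtosis
```

Higher entropy indicates a more heterogeneous distribution of ADC values across the tumour, which is the quantity the abstract links to grade and metastatic behaviour.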
Configurational Entropy Approach to the Kinetics of Glasses
Di Marzio, Edmund A.; Yang, Arthur J. M.
1997-01-01
A kinetic theory of glasses is developed using equilibrium theory as a foundation. After establishing basic criteria for glass formation and the capability of the equilibrium entropy theory to describe the equilibrium aspects of glass formation, a minimal model for the glass kinetics is proposed. Our kinetic model is based on a trapping description of particle motion in which escapes from deep wells provide the rate-determining steps for motion. The formula derived for the zero-frequency viscosity η(0,T) is log η(0,T) = B − AF(T)/kT, where F is the free energy and T the temperature. Contrast this to the Vogel-Fulcher law log η(0,T) = B + A/(T − Tc). A notable feature of our description is that even though the location of the equilibrium second-order transition in temperature-pressure space is given by the break in the entropy or volume curves, the viscosity and its derivative are continuous through the transition. The new expression for η(0,T) has no singularity at a critical temperature Tc as in the Vogel-Fulcher law, and the behavior reduces to the Arrhenius form in the glass region. Our formula for η(0,T) is discussed in the context of the concepts of strong and fragile glasses, and the experimentally observed connection of specific heat to relaxation response in a homologous series of polydimethylsiloxane is explained. The frequency and temperature dependencies of the complex viscosity η(ω,T), the diffusion coefficient D(ω,T), and the dielectric response ε(ω,T) are also obtained for our kinetic model and found to be consistent with stretched exponential behavior. PMID:27805133
Delayed plastic relaxation limit in SiGe islands grown by Ge diffusion from a local source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanacore, G. M.; Zani, M.; Tagliaferri, A., E-mail: alberto.tagliaferri@polimi.it
2015-03-14
The hetero-epitaxial strain relaxation in nano-scale systems plays a fundamental role in shaping their properties. Here, the elastic and plastic relaxation of self-assembled SiGe islands grown by surface-thermal-diffusion from a local Ge solid source on Si(100) are studied by atomic force and transmission electron microscopies, enabling the simultaneous investigation of the strain relaxation in different dynamical regimes. Islands grown by this technique remain dislocation-free and preserve a structural coherence with the substrate for a base width as large as 350 nm. The results indicate that a delay of the plastic relaxation is promoted by an enhanced Si-Ge intermixing, induced by the surface-thermal-diffusion, which takes place already in the SiGe overlayer before the formation of a critical nucleus. The local entropy of mixing dominates, leading the system toward a thermodynamic equilibrium, where non-dislocated, shallow islands with a low residual stress are energetically stable. These findings elucidate the role of the interface dynamics in modulating the lattice distortion at the nano-scale, and highlight the potential use of our growth strategy to create composition and strain-controlled nano-structures for new-generation devices.
Ponnusami, V; Vikram, S; Srivastava, S N
2008-03-21
Batch sorption experiments were carried out using a novel adsorbent, guava leaf powder (GLP), for the removal of methylene blue (MB) from aqueous solutions. The potential of GLP for adsorption of MB from aqueous solution was found to be excellent. The effects of the process parameters pH, adsorbent dosage, concentration, particle size and temperature were studied. The temperature-concentration interaction effect on dye uptake was studied, and a quadratic model was proposed to predict dye uptake in terms of concentration, time and temperature. The model conforms closely to the experimental data. The model was used to find the optimum temperature and concentration that result in maximum dye uptake. The Langmuir model represents the experimental data well. The maximum dye uptake was found to be 295 mg/g, indicating that GLP can be used as an excellent low-cost adsorbent. Pseudo-first-order, pseudo-second-order and intraparticle diffusion models were tested. From the experimental data it was found that adsorption of MB onto GLP follows pseudo-second-order kinetics. Both external diffusion and intraparticle diffusion play roles in the adsorption process. The free energy of adsorption (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°) were calculated to predict the nature of the adsorption. Adsorption in a packed bed was also evaluated.
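The pseudo-second-order model is usually tested via its linearized integrated form, t/q(t) = 1/(k2·qe^2) + t/qe, so a straight-line fit of t/q against t recovers the equilibrium uptake qe and the rate constant k2. A sketch on synthetic, noiseless data (k2 is an illustrative value; qe is set to the reported 295 mg/g):

```python
import numpy as np

# Pseudo-second-order kinetics: dq/dt = k2 (qe - q)^2, whose integrated,
# linearized form is t/q(t) = 1/(k2 qe^2) + t/qe.
qe_true, k2_true = 295.0, 1.2e-4          # mg/g and g/(mg min); illustrative
t = np.linspace(1.0, 180.0, 30)           # contact time, minutes
q = qe_true**2 * k2_true * t / (1.0 + qe_true * k2_true * t)  # exact solution

# Fit t/q vs t: slope = 1/qe, intercept = 1/(k2 qe^2).
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k2_fit = slope**2 / intercept
```

With experimental data one would compare the correlation coefficient of this fit against the pseudo-first-order and intraparticle-diffusion fits, which is how the abstract's model selection is typically done.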
2004 Army Research Office in Review
2004-01-01
Report excerpts: "... but also for speech in multimedia applications." ELECTRONICS — Uncooled Tunable LWIR Microbolometer: multi- or hyperspectral images contain ... Multiresolution Analysis of NURBS Curves and Surfaces, Jian-Ao Lian, Prairie View A&M University: the multiresolution structure of NURBS (nonuniform rational B-splines) ...
THEMATIC ACCURACY OF MRLC LAND COVER FOR THE EASTERN UNITED STATES
One objective of the MultiResolution Land Characteristics (MRLC) consortium is to map general land-cover categories for the conterminous United States using Landsat Thematic Mapper (TM) data. Land-cover mapping and classification accuracy assessment are complete for the e...
THEMATIC ACCURACY ASSESSMENT OF REGIONAL SCALE LAND COVER DATA
The Multi-Resolution Land Characteristics (MRLC) consortium, a cooperative effort of several U.S. federal agencies, including the U.S. Geological Survey (USGS) EROS Data Center (EDC) and the U.S. Environmental Protection Agency (EPA), has jointly conducted the National Land C...
US LAND-COVER MONITORING AND DETECTION OF CHANGES IN SCALE AND CONTEXT OF FOREST
Disparate land-cover mapping programs, previously focused solely on mission-oriented goals, have organized themselves as the Multi-Resolution Land Characteristics (MRLC) Consortium with a unified goal of producing land-cover nationwide at routine intervals. Under MRLC, United Sta...
Multiresolution Analysis by Infinitely Differentiable Compactly Supported Functions
1992-09-01
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
OpenCL-based vicinity computation for 3D multiresolution mesh compression
NASA Astrophysics Data System (ADS)
Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri
2017-03-01
3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, performance is becoming constrained by material resource usage and the overall computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of the latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. To address this, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
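The triangle-neighborhood (vicinity) relation itself is compact to state. Below is a sequential Python baseline for edge-sharing adjacency, the kind of computation such a pipeline would offload to an OpenCL kernel; this is our sketch, not the paper's implementation:

```python
from collections import defaultdict

def triangle_neighbors(triangles):
    """Edge-based triangle adjacency: two triangles are neighbors when they
    share an undirected edge.  Returns one set of neighbor indices per
    triangle; triangles are (a, b, c) vertex-index tuples."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted(e))].append(t)   # canonical edge key
    nbrs = [set() for _ in triangles]
    for tris in edge_to_tris.values():
        for i in tris:
            for j in tris:
                if i != j:
                    nbrs[i].add(j)
    return nbrs
```

The hash-map pass is linear in the number of triangles, and a GPU version parallelizes naturally over triangles once the edge table is built, which is one plausible reason the reported speedup is achievable.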
NASA Astrophysics Data System (ADS)
Campo, D.; Quintero, O. L.; Bastidas, M.
2016-04-01
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1/Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. Artificial neural networks (ANNs) proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, presenting a high accuracy rate in the classification of emotional states without the need for other classical time-frequency features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
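A minimal stand-in for the feature-extraction stage, assuming a multilevel Haar (Db1) decomposition and a per-band log-energy feature; the paper also uses longer Daubechies filters and additional statistics:

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Multilevel 1D Haar DWT (Db1, the simplest member of the Daubechies
    family); returns [cA_n, cD_n, ..., cD_1], mirroring the usual wavedec
    ordering.  Signal length is assumed divisible by 2**levels."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs[::-1]

def band_features(coeffs):
    """Per-band log-energy, a typical wavelet feature for audio emotion tasks."""
    return [float(np.log1p(np.sum(c ** 2))) for c in coeffs]
```

Each subband isolates a frequency range of the voice signal, so the per-band statistics form a compact feature vector that a classifier such as an ANN can separate by emotional state.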
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-01-01
It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to GPR images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflected wave to a large extent. Then we use the region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
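The region-of-interest step keys on the normalized variance of the amplitude component. A one-dimensional sketch of the idea in Python (window width and threshold are illustrative values, not the paper's):

```python
def normalized_variance(window):
    """Variance of the window divided by its squared mean."""
    m = sum(window) / len(window)
    var = sum((x - m) ** 2 for x in window) / len(window)
    return var / (m * m) if m else 0.0

def detect_regions(trace, width, threshold):
    """Return start indices of sliding windows whose normalized
    variance exceeds `threshold` (candidate target reflections)."""
    hits = []
    for i in range(len(trace) - width + 1):
        if normalized_variance(trace[i:i + width]) > threshold:
            hits.append(i)
    return hits
```

Flat background returns no hits, while a strongly oscillating reflection region does, which is the discriminating behavior the abstract describes.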
Learning target masks in infrared linescan imagery
NASA Astrophysics Data System (ADS)
Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter
1997-04-01
In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step fuses the outputs of the several neural network filters to obtain the final result. To perform this fusion we use a belief network to combine the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
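A Laplacian pyramid stores, at each level, the difference between a signal and a prediction upsampled from the next coarser level. A minimal 1-D sketch in Python (the imagery case is the 2-D analogue; the smoothing kernel here is a stand-in for whatever filter the authors used):

```python
def downsample(signal):
    """Smooth with a [1, 2, 1]/4 kernel, then decimate by 2."""
    n = len(signal)
    smoothed = [(signal[max(i - 1, 0)] + 2 * signal[i] + signal[min(i + 1, n - 1)]) / 4.0
                for i in range(n)]
    return smoothed[::2]

def upsample(signal, length):
    """Nearest-neighbour expansion back to `length` samples."""
    return [signal[min(i // 2, len(signal) - 1)] for i in range(length)]

def laplacian_pyramid(signal, levels):
    """List of detail signals (fine to coarse) plus the final coarse residue."""
    pyramid = []
    current = list(signal)
    for _ in range(levels):
        coarse = downsample(current)
        predicted = upsample(coarse, len(current))
        pyramid.append([c - p for c, p in zip(current, predicted)])
        current = coarse
    pyramid.append(current)
    return pyramid
```

Because each level stores exactly the prediction residual, summing the upsampled coarse levels back up reconstructs the input exactly, which is why per-level filters can be fused without information loss.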
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Mather, Barry A
A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of the historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
On analysis of electroencephalogram by multiresolution-based energetic approach
NASA Astrophysics Data System (ADS)
Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer
2013-10-01
Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is in the case of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt to better understand the mechanism causing the epileptic disorder and to accurately predict the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on data from Kocaeli University's Medical School.
Multiresolution multiscale active mask segmentation of fluorescence microscope images
NASA Astrophysics Data System (ADS)
Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena
2009-08-01
We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.
Progressive simplification and transmission of building polygons based on triangle meshes
NASA Astrophysics Data System (ADS)
Li, Hongsheng; Wang, Yingjie; Guo, Qingsheng; Han, Jiafu
2010-11-01
Digital earth is a virtual representation of our planet and a data integration platform that aims at harnessing multisource, multi-resolution, multi-format spatial data. This paper introduces a research framework integrating progressive cartographic generalization and transmission of vector data. Progressive cartographic generalization provides multi-resolution data from coarse to fine, as key scales and the increments between them, which are not available in the traditional generalization framework. Based on the progressive simplification algorithm, the building polygons are triangulated into meshes and encoded according to the simplification sequence of two basic operations, edge collapse and vertex split. The map data at key scales and the encoded increments between them are stored in a multi-resolution file. As the client submits requests to the server, the coarsest map is transmitted first, followed by the increments. After data decoding and mesh refinement, the building polygons are visualized with more detail. Progressive generalization and transmission of building polygons is demonstrated in the paper.
Multiresolution texture analysis applied to road surface inspection
NASA Astrophysics Data System (ADS)
Paquis, Stephane; Legeay, Vincent; Konik, Hubert; Charrier, Jean
1999-03-01
Technological advances now provide the opportunity to automate pavement distress assessment. This paper deals with an approach to achieving an automatic vision system for road surface classification. Road surfaces are composed of aggregates, which have a particular grain size distribution, and a mortar matrix. From various physical properties and visual aspects, four road families are generated. We present here a tool using a pyramidal process, under the assumption that regions or objects in an image stand out because of their uniform texture. Note that the aim is not to compute another statistical parameter but to incorporate usual criteria into our method. In fact, the road surface classification uses a multiresolution co-occurrence matrix and a hierarchical process through an original intensity pyramid, where a father pixel takes the minimum gray level value of its directly linked children pixels. More precisely, only the matrix diagonal is taken into account and analyzed along the pyramidal structure, which allows the classification to be made.
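The two ingredients named above, a min-value intensity pyramid and the diagonal of a gray-level co-occurrence matrix, can be sketched in Python (horizontal neighbours only, and even image dimensions assumed, for brevity):

```python
def min_pyramid_level(image):
    """Next pyramid level: each father pixel takes the minimum gray
    level of its 2x2 children (assumes even height and width)."""
    h, w = len(image), len(image[0])
    return [[min(image[2 * i][2 * j], image[2 * i][2 * j + 1],
                 image[2 * i + 1][2 * j], image[2 * i + 1][2 * j + 1])
             for j in range(w // 2)] for i in range(h // 2)]

def cooccurrence_diagonal(image, n_levels):
    """Diagonal of the horizontal co-occurrence matrix: counts of
    equal neighbouring gray levels, indexed by gray level."""
    diag = [0] * n_levels
    for row in image:
        for a, b in zip(row, row[1:]):
            if a == b:
                diag[a] += 1
    return diag
```

Tracking how this diagonal evolves as the image is min-pooled level by level is the multiresolution texture signature the classification is based on.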
NASA Astrophysics Data System (ADS)
Maslova, I.; Ticlavilca, A. M.; McKee, M.
2012-12-01
There has been increased interest in wavelet-based streamflow forecasting models in recent years. Often overlooked in this approach are the circularity assumptions of the wavelet transform. We propose a novel technique for minimizing the wavelet decomposition boundary condition effect to produce long-term, up to 12 months ahead, forecasts of streamflow. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data. A hybrid wavelet-multivariate relevance vector machine model is developed for forecasting the streamflow in real time for the Yellowstone River, Uinta Basin, Utah, USA. The inputs of the model utilize only the past monthly streamflow records, decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. This long-term streamflow modeling and forecasting methodology would enable better decision-making and management of water availability risk.
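The boundary effect is easy to see with the shortest filter: how a record is extended past its last sample changes the final detail coefficient. A Haar sketch in Python (the `extend` modes are generic stand-ins, not necessarily the specific boundary rules the paper compares):

```python
import math

def haar_details(signal):
    """Haar detail coefficients over non-overlapping sample pairs."""
    s = math.sqrt(2.0)
    return [(signal[i] - signal[i + 1]) / s
            for i in range(0, len(signal) - 1, 2)]

def extend(signal, mode):
    """Pad one sample so an odd-length record can be transformed."""
    if mode == "periodic":
        return signal + [signal[0]]   # wrap around to the first sample
    if mode == "symmetric":
        return signal + [signal[-1]]  # mirror the last sample
    if mode == "zero":
        return signal + [0.0]
    raise ValueError(mode)
```

For a rising record like [1, 2, 3], periodic extension wraps back to 1 and manufactures a large detail coefficient at the boundary, while symmetric extension gives zero there; in a forecasting model that boundary sits exactly where the forecast is made, which is why the choice of rule matters.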
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen mammographic features more conspicuous without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
Zonal flow evolution and overstability in accretion discs
NASA Astrophysics Data System (ADS)
Vanon, R.; Ogilvie, G. I.
2017-04-01
This work presents a linear analytical calculation on the stability and evolution of a compressible, viscous self-gravitating (SG) Keplerian disc with both horizontal thermal diffusion and a constant cooling time-scale when an axisymmetric structure is present and freely evolving. The calculation makes use of the shearing sheet model and is carried out for a range of cooling times. Although the solutions to the inviscid problem with no cooling or diffusion are well known, it is non-trivial to predict the effect caused by the introduction of cooling and of small diffusivities; this work focuses on perturbations of intermediate wavelengths, therefore representing an extension to the classical stability analysis on thermal and viscous instabilities. For density wave modes, the analysis can be simplified by means of a regular perturbation analysis; considering both shear and thermal diffusivities, the system is found to be overstable for intermediate and long wavelengths for values of the Toomre parameter Q ≲ 2; a non-SG instability is also detected for wavelengths ≳18H, where H is the disc scale-height, as long as γ ≲ 1.305. The regular perturbation analysis does not, however, hold for the entropy and potential vorticity slow modes as their ideal growth rates are degenerate. To understand their evolution, equations for the axisymmetric structure's amplitudes in these two quantities are analytically derived and their instability regions obtained. The instability appears boosted by increasing the value of the adiabatic index and of the Prandtl number, while it is quenched by efficient cooling.
NASA Astrophysics Data System (ADS)
Mishra, V.; Cruise, J.; Mecikalski, J. R.
2017-12-01
Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally been focused on the use of data derived from the microwave bands, and results have often shown that improvements to model simulations have been limited due to the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave deduced surface estimates. In this study, root zone soil moisture estimates from the TIR based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean, and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology Transfer, DSSAT) via the ensemble Kalman Filter.
The study was conducted over the Southeastern United States for the growing seasons from 2015-2017. Soil moisture profiles compared favorably to in situ data and simulated crop yields compared well with observed yields.
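The Principle of Maximum Entropy invoked above can be illustrated with its simplest continuous instance: on a normalized depth interval [0, 1] with only a mean-moisture constraint, the maximum-entropy density is a truncated exponential, and the Lagrange multiplier can be found by bisection. A Python sketch (illustrative of POME only; the paper's formulation also carries the SMAP surface boundary condition):

```python
import math

def trunc_exp_mean(lam):
    """Mean of the maximum-entropy density p(x) ~ exp(-lam * x) on [0, 1]."""
    if abs(lam) < 1e-9:
        return 0.5  # lam -> 0 limit: uniform density
    return 1.0 / lam - 1.0 / (math.exp(lam) - 1.0)

def solve_lambda(target_mean):
    """Bisection for the Lagrange multiplier matching the mean constraint;
    trunc_exp_mean is strictly decreasing in lam."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if trunc_exp_mean(mid) > target_mean:
            lo = mid  # mean too high: need larger lam
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A column mean of 0.5 recovers the uniform profile (lambda near zero), while a drier column mean yields a positive multiplier and an exponentially decaying profile.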
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
Liu, Zhigang; Han, Zhiwei; Zhang, Yang; Zhang, Qiaoge
2014-11-01
Multiwavelets possess better properties than traditional wavelets. Multiwavelet packet transformation has more high-frequency information. Spectral entropy can be applied as an index of the complexity or uncertainty of a signal. This paper defines four multiwavelet packet entropies to extract the features of different transmission line faults, and uses a radial basis function (RBF) neural network to recognize and classify 10 fault types of power transmission lines. First, the preprocessing and postprocessing problems of multiwavelets are presented. Shannon entropy and Tsallis entropy are introduced, and their difference is discussed. Second, multiwavelet packet energy entropy, time entropy, Shannon singular entropy, and Tsallis singular entropy are defined as feature extraction methods for transmission line fault signals. Third, a plan for transmission line fault recognition using multiwavelet packet entropies and an RBF neural network is proposed. Finally, the experimental results show that the plan with the four multiwavelet packet energy entropies defined in this paper achieves better performance in fault recognition. The performance with the SA4 (symmetric antisymmetric) multiwavelet packet Tsallis singular entropy is the best among the combinations of different multiwavelet packets and the four multiwavelet packet entropies.
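The building blocks here, Shannon and Tsallis entropies applied to a band-energy distribution, can be sketched in Python (the paper's versions operate on multiwavelet packet nodes; plain per-band energies stand in for them here):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); Shannon as q -> 1."""
    if q == 1.0:
        return shannon_entropy(p)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def energy_entropy(bands):
    """Shannon entropy of the relative energy distribution across bands,
    e.g. wavelet or multiwavelet packet coefficient vectors."""
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies)
    return shannon_entropy([e / total for e in energies])
```

A fault that concentrates energy in a few packet nodes gives a low energy entropy, while a noisy disturbance spread across nodes gives a high one; that contrast is what the RBF classifier feeds on.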
Uniqueness and characterization theorems for generalized entropies
NASA Astrophysics Data System (ADS)
Enciso, Alberto; Tempesta, Piergiulio
2017-12-01
The requirement that an entropy function be composable is key: it means that the entropy of a compound system can be calculated in terms of the entropy of its independent components. We prove that, under mild regularity assumptions, the only composable generalized entropy in trace form is the Tsallis one-parameter family (which contains Boltzmann-Gibbs as a particular case). This result leads to the use of generalized entropies that are not of trace form, such as Rényi’s entropy, in the study of complex systems. In this direction, we also present a characterization theorem for a large class of composable non-trace-form entropy functions with features akin to those of Rényi’s entropy.
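Composability for the Tsallis family means that, for independent subsystems, S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B), which is easy to verify numerically. A Python sketch:

```python
def tsallis(p, q):
    """One-parameter Tsallis entropy (q != 1); Boltzmann-Gibbs as q -> 1."""
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def product_dist(p, r):
    """Joint distribution of two statistically independent subsystems."""
    return [a * b for a in p for b in r]
```

For q = 1 the cross term vanishes and ordinary additivity of Boltzmann-Gibbs entropy is recovered.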
Fast diffusion of native defects and impurities in perovskite solar cell material CH 3NH 3PbI 3
Yang, Dongwen; Ming, Wenmei; Shi, Hongliang; ...
2016-06-01
CH3NH3PbI3-based solar cells have shown remarkable progress in recent years but have also suffered from structural, electrical, and chemical instabilities related to the soft lattices and the chemistry of these halides. One of the instabilities is ion migration, which may cause current–voltage hysteresis in CH3NH3PbI3 solar cells. Significant ion diffusion and ionic conductivity in CH3NH3PbI3 have been reported; their nature, however, remains controversial. In the literature, the use of different experimental techniques leads to the observation of different diffusing ions (either the iodine or the CH3NH3 ion); the calculated diffusion barriers for native defects scatter over a wide range; the calculated defect formation energies also differ qualitatively. These controversies hinder the understanding and control of ion migration in CH3NH3PbI3. In this paper, we show density functional theory calculations of both the diffusion barriers and the formation energies for native defects (VI+, MAi+, VMA−, and Ii−) and the Au impurity in CH3NH3PbI3. VI+ is found to be the dominant diffusing defect due to its low formation energy and low diffusion barrier. Ii− and MAi+ also have low diffusion barriers, but their formation energies are relatively high. The hopping rate of VI+ is further calculated taking into account the contribution of the vibrational entropy, confirming VI+ as a fast diffuser. We discuss approaches for managing defect population and migration and suggest that chemically modifying surfaces, interfaces, and grain boundaries may be effective in controlling the population of the iodine vacancy and the device polarization. We further show that the formation energy and the diffusion barrier of the Au interstitial in CH3NH3PbI3 are both low. As a result, it is possible that Au can diffuse into CH3NH3PbI3 under bias in devices (e.g., solar cells, photodetectors) with Au/CH3NH3PbI3 interfaces and modify the electronic properties of CH3NH3PbI3.
NASA Astrophysics Data System (ADS)
Schliesser, Jacob M.; Huang, Baiyu; Sahu, Sulata K.; Asplund, Megan; Navrotsky, Alexandra; Woodfield, Brian F.
2018-03-01
We have measured the heat capacities of several well-characterized bulk and nanophase Fe3O4-Co3O4 and Fe3O4-Mn3O4 spinel solid solution samples, from which magnetic properties of transitions and third-law entropies have been determined. The magnetic transitions show several features common to effects of particle and magnetic domain sizes. From the standard molar entropies, excess entropies of mixing have been generated for these solid solutions and compared with configurational entropies determined previously by assuming appropriate cation and valence distributions. The vibrational and magnetic excess entropies for bulk materials are comparable in magnitude to the respective configurational entropies, indicating that excess entropies must be included when analyzing the thermodynamics of mixing. The excess entropies for nanophase materials are even larger than the configurational entropies. Changes in valence, cation distribution, bonding and microstructure between the mixing ions are the likely sources of the positive excess entropies of mixing.
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
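The q-exponential underlying these distributions reduces to the ordinary exponential in the limit q → 1, and is cut off where its argument leaves the positive domain. A Python sketch:

```python
import math

def q_exponential(x, q):
    """exp_q(x) = [1 + (1 - q) x]_+^{1 / (1 - q)}; exp(x) in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # the [.]_+ cutoff
    return base ** (1.0 / (1.0 - q))
```

For q > 1 the tail of exp_q(-x) decays as a power law rather than exponentially, which is why these distributions fit the heavy-tailed, Zipf-Mandelbrot-like data the abstract refers to.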
On the entropy variation in the scenario of entropic gravity
NASA Astrophysics Data System (ADS)
Xiao, Yong; Bai, Shi-Yang
2018-05-01
In the scenario of entropic gravity, entropy varies as a function of the location of the matter, while the tendency to increase entropy appears as gravity. We concentrate on studying the entropy variation of a typical gravitational system with different relative positions between the mass and the gravitational source. The result is that the entropy of the system does not increase when the mass is displaced closer to the gravitational source. This disproves the proposal that entropic gravity arises from thermodynamic entropy, although it does not exclude the possibility that gravity originates from a non-thermodynamic entropy such as entanglement entropy.
Solid T-spline Construction from Boundary Representations for Genus-Zero Geometry
2011-11-14
Completion of the National Land Cover Database (NLCD) 1992-2001 Land Cover Change Retrofit Product
The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods betwe...
Entropy and climate. I - ERBE observations of the entropy production of the earth
NASA Technical Reports Server (NTRS)
Stephens, G. L.; O'Brien, D. M.
1993-01-01
An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
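The long-wave dominance follows from the blackbody approximation for the entropy flux carried by radiation, s = (4/3) F / T: the same energy flux carries far more entropy at terrestrial (~255 K) than at solar (~5778 K) emission temperatures. A Python sketch with illustrative round numbers (240 W m^-2 is a typical global-mean radiative flux; the numbers are not the paper's):

```python
def radiative_entropy_flux(energy_flux_w_m2, emission_temp_k):
    """Blackbody approximation: entropy flux s = (4/3) F / T, in W m^-2 K^-1."""
    return 4.0 * energy_flux_w_m2 / (3.0 * emission_temp_k)

# Outgoing long-wave radiation vs. the same flux of absorbed sunlight:
longwave = radiative_entropy_flux(240.0, 255.0)    # terrestrial emission
shortwave = radiative_entropy_flux(240.0, 5778.0)  # solar emission temperature
```

The roughly twenty-fold ratio between the two is the qualitative point of the abstract: the planet exports far more entropy in the long-wave channel than it imports in the short-wave channel.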
Logarithmic black hole entropy corrections and holographic Rényi entropy
NASA Astrophysics Data System (ADS)
Mahapatra, Subhash
2018-01-01
The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.
Transport Coefficients in weakly compressible turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Erlebacher, Gordon
1996-01-01
A theory of transport coefficients in weakly compressible turbulence is derived by applying Yoshizawa's two-scale direct interaction approximation to the compressible equations of motion linearized about a state of incompressible turbulence. The result is a generalization of the eddy viscosity representation of incompressible turbulence. In addition to the usual incompressible eddy viscosity, the calculation generates eddy diffusivities for entropy and pressure, and an effective bulk viscosity acting on the mean flow. The compressible fluctuations also generate an effective turbulent mean pressure and corrections to the speed of sound. Finally, a prediction unique to Yoshizawa's two-scale approximation is that terms containing gradients of incompressible turbulence quantities also appear in the mean flow equations. The form these terms take is described.
An X-ray image of the violent interstellar medium in 30 Doradus
NASA Technical Reports Server (NTRS)
Wang, Q.; Helfand, D. J.
1991-01-01
A detailed analysis of the X-ray emission from the largest H II region complex in the Local Group, 30 Dor, is presented. Applying a new maximum entropy deconvolution algorithm to the Einstein Observatory data reveals striking correlations among the X-ray, radio, and optical morphologies of the region, with X-ray-emitting bubbles filling cavities surrounded by H-alpha shells and coextensive diffuse X-ray and radio continuum emission from throughout the region. The total X-ray luminosity in the 0.16-3.5 keV band from an area within 160 pc of the central cluster R136 is about 2 × 10^37 erg/s.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
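For intuition, the ordinary relative entropy of coherence of a pure qubit state can be computed directly, since for pure states it reduces to the Shannon entropy of the diagonal populations. This toy sketch does not implement the maximum relative entropy quantifier of the paper, which requires an optimization over incoherent states:

```python
import math

# Toy illustration (not the paper's quantifier): the relative entropy of
# coherence C(rho) = S(diag(rho)) - S(rho) for the pure qubit state
# cos(theta)|0> + sin(theta)|1>, where S is the von Neumann entropy.
# For a pure state S(rho) = 0, so C is just the Shannon entropy of the
# populations.

def shannon(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 1e-15)

def coherence_pure_qubit(theta):
    """Relative entropy of coherence of cos(theta)|0> + sin(theta)|1>."""
    p = math.cos(theta) ** 2
    return shannon([p, 1 - p])

# The maximally coherent state |+> carries one bit of coherence:
assert abs(coherence_pure_qubit(math.pi / 4) - 1.0) < 1e-12
# An incoherent basis state carries none:
assert coherence_pure_qubit(0.0) == 0.0
```

The paper's result is that, in the asymptotic limit, the maximum relative entropy of coherence converges to this relative entropy of coherence.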
Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie
2017-04-27
Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
Badawi, A M; Derbala, A S; Youssef, A M
1999-08-01
Computerized ultrasound tissue characterization has become an objective means for the diagnosis of liver diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent upon the sonographer's experience. This often introduces bias into the diagnostic procedure and limits its objectivity and reproducibility. Computerized tissue characterization that quantitatively assists the sonographer in accurate differentiation and minimizes the degree of risk is thus justified. Fuzzy logic has emerged as one of the most active areas in classification. In this paper, we present an approach that employs fuzzy reasoning techniques to automatically differentiate diffuse liver diseases using numerical quantitative features measured from the ultrasound images. Fuzzy rules were generated from over 140 cases consisting of normal, fatty, and cirrhotic livers. The input to the fuzzy system is an eight-dimensional vector of feature values: the mean gray level (MGL), the 10th percentile, the contrast (CON), the angular second moment (ASM), the entropy (ENT), the correlation (COR), the attenuation (ATTEN), and the speckle separation. The output of the fuzzy system is one of three categories: cirrhotic, fatty, or normal. The steps for differentiating the pathologies are data acquisition and feature extraction, followed by dividing the input spaces of the measured quantitative data into fuzzy sets. Based on expert knowledge, the fuzzy rules are generated and applied using fuzzy inference procedures to determine the pathology. Different membership functions are developed for the input spaces. This approach has yielded very good sensitivity and specificity for classifying diffuse liver pathologies.
This classification technique can be used in the diagnostic process, together with patient history and laboratory, clinical, and pathological examinations.
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine through joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of the Wessel N. symbolic entropy and of base-scale entropy, and two local dynamic ones, the symbolizations of permutation entropy and of differential entropy, constitute four double symbolic joint entropies that accurately detect complexity in chaotic models, namely logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of healthy young subjects have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results show that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
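A minimal sketch of two of the ingredients named above, assuming standard definitions: an ordinal (permutation) symbolization and the joint Shannon entropy of two symbol streams. This is illustrative only and is not the authors' implementation:

```python
import math
from collections import Counter
from itertools import permutations

# Illustrative sketch: map a time series to ordinal-pattern symbols
# (permutation symbolization), then measure the joint Shannon entropy of
# two symbol streams -- the basic building blocks of a "double symbolic
# joint entropy".

def permutation_symbols(x, order=3):
    """Map each window of `order` samples to its ordinal pattern index."""
    pats = {p: i for i, p in enumerate(permutations(range(order)))}
    return [pats[tuple(sorted(range(order), key=lambda k: x[i + k]))]
            for i in range(len(x) - order + 1)]

def joint_entropy(a, b):
    """Shannon entropy (bits) of the paired symbol stream (a_i, b_i)."""
    counts = Counter(zip(a, b))
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A strictly increasing series produces a single ordinal pattern, so its joint entropy with itself is zero; richer dynamics spread probability over more symbol pairs and raise the entropy.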
Effect of entropy on anomalous transport in ITG-modes of magneto-plasma
NASA Astrophysics Data System (ADS)
Yaqub Khan, M.; Qaiser Manzoor, M.; Haq, A. ul; Iqbal, J.
2017-04-01
The ideal gas equation and S = c_v log(P/ρ) (where S is entropy, P is pressure, and ρ is the mass density) define the interconnection of entropy with the temperature and density of plasma. Therefore, different phenomena relating plasma and entropy need to be investigated. By employing the Braginskii transport equations for a nonuniform electron-ion magnetoplasma, two new parameters, the entropy distribution function and the entropy gradient drift, are defined; a new dispersion relation is obtained; and the dependence of anomalous transport on entropy is proved. Some results, like monotonicity, the entropy principle, and the second law of thermodynamics, are proved with the new definition of entropy. This work will open new horizons in fusion processes, not only by controlling entropy in tokamak plasmas (particularly in the pedestal regions of the H-mode) and space plasmas, but also in the engineering sciences.
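As a minimal illustration of the entropy relation quoted above, the sketch below evaluates S = c_v log(P/ρ) and a finite-difference entropy gradient along a one-dimensional profile. The helper functions and the unit choice (c_v = 1) are hypothetical, not from the paper:

```python
import math

# Illustrative sketch: evaluate the entropy relation S = c_v * ln(P / rho)
# quoted in the abstract, and its gradient along a 1-D profile, to show how
# an entropy-gradient quantity could be estimated numerically.

def entropy(P, rho, c_v=1.0):
    """Specific entropy from the ideal-gas relation S = c_v ln(P/rho)."""
    return c_v * math.log(P / rho)

def entropy_gradient(P_profile, rho_profile, dx, c_v=1.0):
    """Central-difference gradient of S on a uniform grid (hypothetical helper)."""
    S = [entropy(P, r, c_v) for P, r in zip(P_profile, rho_profile)]
    return [(S[i + 1] - S[i - 1]) / (2 * dx) for i in range(1, len(S) - 1)]
```

A uniform plasma (constant P and ρ) yields a vanishing entropy gradient, so the associated drift would be zero; any pressure or density nonuniformity produces a finite gradient.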
Quantifying and minimizing entropy generation in AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1997-12-31
Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is on-going at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.
Thermodynamic and Differential Entropy under a Change of Variables
Hnizdo, Vladimir; Gilson, Michael K.
2013-01-01
The differential Shannon entropy of information theory can change under a change of variables (coordinates), but the thermodynamic entropy of a physical system must be invariant under such a change. This difference is puzzling, because the Shannon and Gibbs entropies have the same functional form. We show that a canonical change of variables can, indeed, alter the spatial component of the thermodynamic entropy just as it alters the differential Shannon entropy. However, there is also a momentum part of the entropy, which turns out to undergo an equal and opposite change when the coordinates are transformed, so that the total thermodynamic entropy remains invariant. We furthermore show how one may correctly write the change in total entropy for an isothermal physical process in any set of spatial coordinates. PMID:24436633
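The invariance argument can be checked numerically in a simple case. For a Gaussian with standard deviation σ, the differential entropy is h = ½ ln(2πeσ²); under the canonical scaling x → ax the position entropy gains ln a, while the conjugate momentum scales by 1/a and loses ln a. The Gaussian example is ours, not the paper's:

```python
import math

# Numerical illustration of the paper's point: the differential entropy of a
# Gaussian, h = 0.5 * ln(2*pi*e*sigma^2), shifts by +ln(a) when x -> a*x,
# while the momentum part of a canonical transformation shifts by -ln(a),
# leaving the total invariant.

def gaussian_diff_entropy(sigma):
    """Differential entropy (nats) of a zero-mean Gaussian."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

a = 3.0
h_x = gaussian_diff_entropy(1.0)
h_ax = gaussian_diff_entropy(a)        # scaling x by a scales sigma by a
assert abs((h_ax - h_x) - math.log(a)) < 1e-12

# The conjugate momentum scales by 1/a, so its entropy drops by ln(a):
h_p = gaussian_diff_entropy(1.0 / a)
assert abs((h_ax + h_p) - 2 * h_x) < 1e-12   # total unchanged
```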
Entropy for Mechanically Vibrating Systems
NASA Astrophysics Data System (ADS)
Tufano, Dante
The research contained within this thesis deals with the subject of entropy as defined for and applied to mechanically vibrating systems. This work begins with an overview of entropy as it is understood in the fields of classical thermodynamics, information theory, statistical mechanics, and statistical vibroacoustics. Khinchin's definition of entropy, which is the primary definition used for the work contained in this thesis, is introduced in the context of vibroacoustic systems. The main goal of this research is to establish a mathematical framework for the application of Khinchin's entropy in the field of statistical vibroacoustics by examining the entropy context of mechanically vibrating systems. The introduction of this thesis provides an overview of statistical energy analysis (SEA), a modeling approach to vibroacoustics that motivates this work on entropy. The objective of this thesis is given, followed by a discussion of the intellectual merit of this work as well as a literature review of relevant material. Following the introduction, an entropy analysis of systems of coupled oscillators is performed utilizing Khinchin's definition of entropy. This analysis builds upon the mathematical theory relating to mixing entropy, which is generated by the coupling of vibroacoustic systems. The mixing entropy is shown to provide insight into the qualitative behavior of such systems. Additionally, it is shown that the entropy inequality property of Khinchin's entropy can be reduced to an equality using the mixing entropy concept. This equality can be interpreted as a facet of the second law of thermodynamics for vibroacoustic systems. Following this analysis, an investigation of continuous systems is performed using Khinchin's entropy. It is shown that entropy analyses using Khinchin's entropy are valid for continuous systems that can be decomposed into a finite number of modes.
The results are shown to be analogous to those obtained for simple oscillators, which demonstrates the applicability of entropy-based approaches to real-world systems. Three systems are considered to demonstrate these findings: 1) a rod end-coupled to a simple oscillator, 2) two end-coupled rods, and 3) two end-coupled beams. The aforementioned work utilizes the weak coupling assumption to determine the entropy of composite systems. Following this discussion, a direct method of finding entropy is developed which does not rely on this limiting assumption. The resulting entropy provides a useful benchmark for evaluating the accuracy of the weak coupling approach, and is validated using systems of coupled oscillators. The later chapters of this work discuss Khinchin's entropy as applied to nonlinear and nonconservative systems, respectively. The discussion of entropy for nonlinear systems is motivated by the desire to expand the applicability of SEA techniques beyond the linear regime. The discussion of nonconservative systems is also crucial, since real-world systems interact with their environment, and it is necessary to confirm the validity of an entropy approach for systems that are relevant in the context of SEA. Having developed a mathematical framework for determining entropy under a number of previously unexplored cases, the relationship between thermodynamics and statistical vibroacoustics can be better understood. Specifically, vibroacoustic temperatures can be obtained for systems that are not necessarily linear or weakly coupled. In this way, entropy provides insight into how the power flow proportionality of statistical energy analysis (SEA) can be applied to a broader class of vibroacoustic systems. As such, entropy is a useful tool for both justifying and expanding the foundational results of SEA.
Entropy is more resistant to artifacts than bispectral index in brain-dead organ donors.
Wennervirta, Johanna; Salmi, Tapani; Hynynen, Markku; Yli-Hankala, Arvi; Koivusalo, Anna-Maria; Van Gils, Mark; Pöyhiä, Reino; Vakkuri, Anne
2007-01-01
To evaluate the usefulness of entropy and the bispectral index (BIS) in brain-dead subjects. A prospective, open, nonselective, observational study in a university hospital. 16 brain-dead organ donors. Time-domain electroencephalography (EEG), spectral entropy of the EEG, and BIS were recorded during solid organ harvest. State entropy differed significantly from 0 (isoelectric EEG) for 28%, response entropy for 29%, and BIS for 68% of the total recorded time. The median values during the operation were state entropy 0.0, response entropy 0.0, and BIS 3.0. In four of the 16 organ donors studied the EEG was not isoelectric, and nonreactive rhythmic activity was noted in the time-domain EEG. After excluding the results from subjects with persistent residual EEG activity, state entropy, response entropy, and BIS values differed from zero for 17%, 18%, and 62% of the recorded time, respectively. Median values were 0.0, 0.0, and 2.0 for state entropy, response entropy, and BIS, respectively. The highest index values in entropy and BIS monitoring were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, 50-Hz interference, handling of the donor, ballistocardiography, electromyography, and electrocardiography. Both entropy and BIS showed nonzero values due to artifacts after brain death diagnosis. BIS was more liable to artifacts than entropy. Neither of these indices is a diagnostic tool, and care should be taken when interpreting EEG and EEG-derived indices in the evaluation of brain death.
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Yamaoka, S.; Sueoka, K.; Vanhellemont, J.
2017-09-01
It is well known that p-type, neutral and n-type dopants affect the intrinsic point defect (vacancy V and self-interstitial I) behavior in single crystal Si. Through the interaction with V and/or I, (1) growing Si crystals become more V- or I-rich, (2) oxygen precipitation is enhanced or retarded, and (3) dopant diffusion is enhanced or retarded, depending on the type and concentration of dopant atoms. Since these interactions affect Si properties ranging from as-grown crystal quality to LSI performance, numerical simulations are used to predict and to control the behavior of both dopant atoms and intrinsic point defects. In most cases, the thermal equilibrium concentrations of dopant-point defect pairs are evaluated using the mass action law by taking into account only the binding energy of the closest pair. The impacts of dopant atoms on the formation of V and I more distant than the 1st neighbor, and on the change of formation entropy, are usually neglected. In this study, we have evaluated the thermal equilibrium concentrations of intrinsic point defects in heavily doped Si crystals. Density functional theory (DFT) calculations were performed to obtain the formation energy (Ef) of the uncharged V and I at all sites in a 64-atom supercell around a substitutional p-type (B, Ga, In, and Tl), neutral (C, Ge, and Sn) and n-type (P, As, and Sb) dopant atom. The formation (vibrational) entropies (Sf) of free I and V, and of I and V at the 1st-neighbor site of B, C, Sn, P and As atoms, were also calculated with the linear response method. The dependences of the thermal equilibrium concentrations of trapped and total intrinsic point defects (the sum of free I or V and I or V trapped by dopant atoms) on the concentrations of B, C, Sn, P and As in Si were obtained.
Furthermore, the present evaluations explain well the experimental results of the so-called "Voronkov criterion" in B- and C-doped Si, and also the observed dopant-dependent void sizes in P- and As-doped Si crystals. The expressions obtained in the present work are very useful for the numerical simulation of grown-in defect behavior, oxygen precipitation and dopant diffusion in heavily doped Si. DFT calculations also showed that the Coulomb interaction reaches approximately 30 Å from p (n)-type dopant atoms to I (V) in Si.
MULTI-RESOLUTION LAND CHARACTERISTICS FOR THE MID-ATLANTIC INTEGRATED ASSESSMENT (MAIA) STUDY AREA
This data set is a Geographic Information System (GIS) coverage of the land use and land cover for the United States Environmental Protection Agency (USEPA) Mid-Atlantic Integrated Assessment (MAIA) Project region. The coverage was produced using 1988, 1989, 1991,1992, and 1993...
Completion of the 2006 National Land Cover Database Update for the Conterminous United States
Under the organization of the Multi-Resolution Land Characteristics (MRLC) Consortium, the National Land Cover Database (NLCD) has been updated to characterize both land cover and land cover change from 2001 to 2006. An updated version of NLCD 2001 (Version 2.0) is also provided....
SAMPLE SELECTION OF MRLC'S NLCD LAND COVER DATA FOR THEMATIC ACCURACY ASSESSMENT
The Multi-Resolution Land Characteristics (MRLC) consortium was formed in the early 1990s to cost- effectively acquire Landsat TM satellite data for the conterminous United States. One of the MRLC's objectives was to develop national land-cover data (NLCD) for the conterminous Un...
Low-Latency Embedded Vision Processor (LLEVS)
2016-03-01
Report excerpt (table-of-contents fragments): Task 3, projected performance analysis of an FPGA-based vision processor; algorithm latency analysis; FPGA custom hardware for real-time multiresolution analysis. The data acquired through measurement, simulation and estimation provide the requisite platform for performance projections.
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
The wavelet transform, a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
Solving Large Problems with a Small Working Memory
ERIC Educational Resources Information Center
Pizlo, Zygmunt; Stefanov, Emil
2013-01-01
We describe an important elaboration of our multiscale/multiresolution model for solving the Traveling Salesman Problem (TSP). Our previous model emulated the non-uniform distribution of receptors on the human retina and the shifts of visual attention. This model produced near-optimal solutions of TSP in linear time by performing hierarchical…
USDA-ARS?s Scientific Manuscript database
In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...
Representing and computing regular languages on massively parallel networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M.I.; O'Sullivan, J.A.; Boysam, B.
1991-01-01
This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
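The Gibbs-sampling idea can be caricatured in a few lines: penalize violations of a regular-language rule in an energy function and sample sequences with Metropolis updates. The toy rule ("no two adjacent 1s") and all parameters below are hypothetical, and this is not the authors' parallel cellular automaton:

```python
import math
import random

# Hypothetical sketch (not the authors' DAP-510 implementation): Metropolis
# sampling of binary sequences under a Gibbs law whose potential counts
# violations of a toy regular-language rule, "no two adjacent 1s".

def violations(seq):
    """Energy: number of adjacent (1, 1) pairs, i.e. rule violations."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a == 1 and b == 1)

def gibbs_sample(n=20, beta=5.0, sweeps=500, seed=0):
    """Single-site Metropolis updates targeting exp(-beta * violations)."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(sweeps):
        i = rng.randrange(n)
        proposal = seq[:]
        proposal[i] ^= 1  # flip one site
        d_energy = violations(proposal) - violations(seq)
        if d_energy <= 0 or rng.random() < math.exp(-beta * d_energy):
            seq = proposal
    return seq
```

With a large beta the chain concentrates on rule-satisfying sequences, mirroring how the energy minima of the Gibbs representation encode the constrained language.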
NASA Astrophysics Data System (ADS)
Li, Guanchen; von Spakovsky, Michael R.; Shen, Fengyu; Lu, Kathy
2018-01-01
Oxygen reduction in a solid oxide fuel cell cathode involves a nonequilibrium process of coupled mass and heat diffusion and electrochemical and chemical reactions. These phenomena occur at multiple temporal and spatial scales, making the modeling, especially in the transient regime, very difficult. Nonetheless, multiscale models are needed to improve the understanding of oxygen reduction and guide cathode design. Of particular importance for long-term operation are microstructure degradation and chromium oxide poisoning, both of which degrade cathode performance. Existing methods are phenomenological or empirical in nature, and their application is limited to the continuum realm, with quantum effects not captured. In contrast, steepest-entropy-ascent quantum thermodynamics can be used to model nonequilibrium processes (even those far from equilibrium) at all scales. The nonequilibrium relaxation is characterized by entropy generation, which can unify coupled phenomena into one framework to model transient and steady behavior. The results reveal the effects on performance of the different timescales of the varied phenomena involved and their coupling. Results are included here for the effects of chromium oxide concentrations on cathode output, as is a parametric study of the effects of interconnect-three-phase-boundary length, oxygen mean free path, and adsorption site effectiveness. A qualitative comparison with experimental results is made.
NASA Astrophysics Data System (ADS)
Jeon, Wonju; Lee, Sang-Hee
2012-12-01
In our previous study, we defined the branch length similarity (BLS) entropy for a simple network consisting of a single node and numerous branches. As the first application of this entropy to characterizing shapes, the BLS entropy profiles of 20 battle tank shapes were calculated from simple networks created by connecting pixels on the boundary of each shape. The profiles successfully characterized the tank shapes through a comparison of their BLS entropy profiles. Following that application, this entropy was used to characterize human emotional facial expressions, such as happiness and sadness, and to measure the degree of complexity of termite tunnel networks. These applications indirectly indicate that the BLS entropy profile can be a useful tool for characterizing networks and shapes. However, the ability of the BLS entropy to characterize a shape depends on the image resolution, because the entropy is determined by the number of nodes on the boundary of the shape; higher resolution means more nodes. If the entropy is to be widely used in the scientific community, the effect of resolution on the entropy profile should be understood. In the present study, we mathematically investigated the BLS entropy profile of a shape with infinite resolution and numerically investigated the variation in the pattern of the entropy profile caused by changes in resolution in the case of finite resolution.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage that the common definition of flow entropy does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. A comparative analysis between the simple flow entropy and the new method is also conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem for WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
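For reference, the classical (diameter-insensitive) flow entropy is a Shannon entropy over flow fractions; the paper's diameter-sensitive weighting is specific to the paper and is not reproduced in this sketch:

```python
import math

# Minimal sketch of the classical flow-entropy measure: the Shannon entropy
# of the flow fractions q_i / Q. The paper extends this with a
# diameter-sensitive weighting, which is not reproduced here.

def flow_entropy(flows):
    """Shannon entropy (nats) of the flow fractions at a node or network."""
    Q = sum(flows)
    return -sum(q / Q * math.log(q / Q) for q in flows if q > 0)

# Uniform flows maximize the entropy, giving ln(n) for n equal pipe flows;
# a single supply path carries zero entropy (no redundancy).
assert abs(flow_entropy([5.0, 5.0, 5.0]) - math.log(3)) < 1e-12
assert flow_entropy([10.0]) == 0.0
```

The correlation studied in the paper rests on this intuition: higher entropy means flow is spread more evenly over alternative paths, so the failure of any one pipe matters less.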
Entropy generation of nanofluid flow in a microchannel heat sink
NASA Astrophysics Data System (ADS)
Manay, Eyuphan; Akyürek, Eda Feyza; Sahin, Bayram
2018-06-01
The present study aims to investigate the effects of the presence of nano-sized TiO2 particles in the base fluid on the entropy generation rate in a microchannel heat sink. Pure water was chosen as the base fluid, and TiO2 particles were suspended in it at five particle volume fractions: 0.25%, 0.5%, 1.0%, 1.5% and 2.0%. Under laminar, steady-state flow and constant heat flux boundary conditions, the thermal, frictional and total entropy generation rates and the entropy generation number ratios of the nanofluids were experimentally analyzed in microchannel flow for channel heights of 200 μm, 300 μm, 400 μm and 500 μm. It was observed that the frictional and total entropy generation rates increased while the thermal entropy generation rate decreased with increasing particle volume fraction. In microchannel flows, thermal entropy generation can be neglected because its rate, smaller than 1.10 × 10^-7, is a negligible share of the total entropy generation. Larger channel heights caused higher thermal entropy generation rates, with increasing channel height yielding an increase of 30% to 52% in thermal entropy generation. When the channel height decreased, an increase of 66%-98% in frictional entropy generation was obtained. Adding TiO2 nanoparticles to the base fluid caused thermal entropy generation to decrease by about 1.8%-32.4% and frictional entropy generation to increase by about 3.3%-21.6%.
NASA Astrophysics Data System (ADS)
Guo, Ran
2018-04-01
In this paper, we investigate the definition of the entropy in the Fokker–Planck equation under the generalized fluctuation–dissipation relation (FDR), which describes a Brownian particle moving in a complex medium with friction and multiplicative noise. The friction and the noise are related by the generalized FDR. We first define the entropy for such a system. According to this definition, we then calculate the entropy production and the entropy flux. Finally, we perform a numerical calculation and display the results in figures.
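A concrete special case helps fix ideas: for free diffusion (constant diffusion coefficient D, no drift), the Fokker–Planck solution is Gaussian with variance 2Dt, and the Gibbs entropy grows monotonically. This illustrative check is ours, not the paper's calculation:

```python
import math

# Illustrative sketch: in the free-diffusion limit of the Fokker-Planck
# equation, p(x, t) is Gaussian with variance 2*D*t, so the Gibbs entropy
# S(t) = 0.5 * ln(4*pi*e*D*t) increases monotonically -- the entropy
# production is positive while the distribution flattens.

def diffusion_entropy(D, t):
    """Gibbs entropy (nats) of the free-diffusion Gaussian at time t."""
    return 0.5 * math.log(4 * math.pi * math.e * D * t)

# Entropy increments over a fixed time step, evaluated at three epochs:
rates = [diffusion_entropy(1.0, t + 0.1) - diffusion_entropy(1.0, t)
         for t in (0.1, 1.0, 10.0)]
assert all(r > 0 for r in rates)        # entropy never decreases
assert rates[0] > rates[1] > rates[2]   # production slows as p flattens
```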
Single water entropy: hydrophobic crossover and application to drug binding.
Sasikala, Wilbee D; Mukherjee, Arnab
2014-09-11
Entropy of water plays an important role in both chemical and biological processes, e.g. the hydrophobic effect, molecular recognition, etc. Here we use a new approach to calculate the translational and rotational entropy of individual water molecules around different hydrophobic and charged solutes. We show that for small hydrophobic solutes, the translational and rotational entropies of each water molecule increase as a function of its distance from the solute, finally reaching a constant bulk value. As the size of the solute increases (0.746 nm), the behavior of the translational entropy is the opposite: water molecules closest to the solute have higher entropy, which decreases with distance from the solute. This indicates that there is a crossover in the translational entropy of water molecules around hydrophobic solutes from negative to positive values as the size of the solute is increased. The rotational entropy of water molecules around hydrophobic solutes of all sizes increases with distance from the solute, indicating the absence of a crossover in rotational entropy. This makes the crossover in the total entropy (translation + rotation) of a water molecule happen at a much larger size (>1.5 nm) for hydrophobic solutes. The translational entropy of a single water molecule scales logarithmically (Str(QH) = C + kB ln V), with the volume V obtained from the ellipsoid of inertia. We further discuss the origin of the higher entropy of water around water and show the possibility of recovering the entropy loss of some hypothetical solutes. The results obtained are helpful for understanding water entropy behavior in various hydrophobic and charged environments within biomolecules. Finally, we show how our approach can be used to calculate the entropy of individual water molecules in a protein cavity that may be replaced during ligand binding.
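The logarithmic scaling quoted above can be sketched directly; the constant C and the volumes used below are hypothetical placeholders, not values from the paper:

```python
import math

# Sketch of the logarithmic volume scaling quoted in the abstract,
# S_tr = C + kB * ln V, with V the volume of the ellipsoid of inertia
# sampled by a water molecule. C and the volumes are hypothetical.

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def translational_entropy(V, C=0.0):
    """Translational entropy per molecule under the quoted scaling law."""
    return C + K_B * math.log(V)

# Doubling the accessible volume raises the entropy by exactly kB * ln(2),
# independent of C -- the signature of the logarithmic law:
dS = translational_entropy(2.0e-30) - translational_entropy(1.0e-30)
assert abs(dS - K_B * math.log(2)) < 1e-35
```

This is why confined water (small V, e.g. in a protein cavity) has lower translational entropy than bulk water, the effect exploited in the drug-binding application.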
RNA Thermodynamic Structural Entropy
Garcia-Martin, Juan Antonio; Clote, Peter
2015-01-01
Conformational entropy for atomic-level, three dimensional biomolecules is known experimentally to play an important role in protein-ligand discrimination, yet reliable computation of entropy remains a difficult problem. Here we describe the first two accurate and efficient algorithms to compute the conformational entropy for RNA secondary structures, with respect to the Turner energy model, where free energy parameters are determined from UV absorption experiments. An algorithm to compute the derivational entropy for RNA secondary structures had previously been introduced, using stochastic context free grammars (SCFGs). However, the numerical value of derivational entropy depends heavily on the chosen context free grammar and on the training set used to estimate rule probabilities. Using data from the Rfam database, we determine that both of our thermodynamic methods, which agree in numerical value, are substantially faster than the SCFG method. Thermodynamic structural entropy is much smaller than derivational entropy, and the correlation between length-normalized thermodynamic entropy and derivational entropy is moderately weak to poor. In applications, we plot the structural entropy as a function of temperature for known thermoswitches, such as the repression of heat shock gene expression (ROSE) element, we determine that the correlation between hammerhead ribozyme cleavage activity and total free energy is improved by including an additional free energy term arising from conformational entropy, and we plot the structural entropy of windows of the HIV-1 genome. Our software RNAentropy can compute structural entropy for any user-specified temperature, and supports both the Turner’99 and Turner’04 energy parameters. It follows that RNAentropy is state-of-the-art software to compute RNA secondary structure conformational entropy. 
Source code is available at https://github.com/clotelab/RNAentropy/; a full web server is available at http://bioinformatics.bc.edu/clotelab/RNAentropy, including source code and ancillary programs. PMID:26555444
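The structural entropy computed over a Boltzmann ensemble of secondary structures can be sketched in miniature. The toy below uses a handful of made-up free energies (kcal/mol) rather than the Turner model or the RNAentropy algorithms, and simply evaluates the Shannon entropy of the Boltzmann distribution:

```python
import math

R = 0.0019872  # gas constant, kcal/(mol*K)

def structural_entropy(energies, T=310.15):
    """Shannon entropy -sum_i p_i ln p_i of the Boltzmann ensemble,
    with p_i = exp(-E_i / RT) / Z over a list of structure free energies."""
    RT = R * T
    weights = [math.exp(-E / RT) for E in energies]
    Z = sum(weights)
    probs = [w / Z for w in weights]
    return -sum(p * math.log(p) for p in probs)

# Degenerate energies maximize the entropy at ln(N); a dominant
# low-energy structure collapses the ensemble and lowers it.
flat = structural_entropy([-3.0, -3.0, -3.0, -3.0])
skew = structural_entropy([-6.0, -3.0, -2.0, -1.0])
```

Temperature enters through RT, which is how a plot of structural entropy versus temperature can expose a thermoswitch: the ensemble spreads or collapses as T changes.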
Relating different quantum generalizations of the conditional Rényi entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomamichel, Marco (School of Physics, The University of Sydney, Sydney 2006); Berta, Mario
2014-08-15
Recently a new quantum generalization of the Rényi divergence and the corresponding conditional Rényi entropies was proposed. Here, we report on a surprising relation between conditional Rényi entropies based on this new generalization and conditional Rényi entropies based on the quantum relative Rényi entropy that was used in previous literature. Our result generalizes the well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann entropy for tripartite pure states to Rényi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Rényi entropies and derive a new entropic uncertainty relation.
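The von Neumann special case of the duality, H(A|B) + H(A|C) = 0 for any tripartite pure state, is easy to verify numerically; the random-state construction below is an illustration of that base case, not of the paper's Rényi-order machinery:

```python
import numpy as np

def von_neumann(rho):
    """S(rho) = -Tr(rho log rho) in nats, dropping zero eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def conditional_entropy(rho_xy, rho_y):
    """H(X|Y) = S(XY) - S(Y)."""
    return von_neumann(rho_xy) - von_neumann(rho_y)

# Random pure state on three qubits A, B, C.
rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # (A,B,C,A',B',C')

rho_AB = np.trace(rho, axis1=2, axis2=5).reshape(4, 4)  # trace out C
rho_AC = np.trace(rho, axis1=1, axis2=4).reshape(4, 4)  # trace out B
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)
rho_C = np.trace(rho_AC.reshape(2, 2, 2, 2), axis1=0, axis2=2)

duality = conditional_entropy(rho_AB, rho_B) + conditional_entropy(rho_AC, rho_C)
```

The identity holds because, for a pure state on ABC, S(AB) = S(C) and S(AC) = S(B), so the two conditional entropies cancel exactly.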
Exact analytical thermodynamic expressions for a Brownian heat engine
NASA Astrophysics Data System (ADS)
Taye, Mesfin Asfaw
2015-09-01
The nonequilibrium thermodynamic features of a Brownian motor operating between two different heat baths are explored as a function of time t. Using the Gibbs entropy and Schnakenberg's microscopic stochastic approach, we find exact closed-form expressions for the free energy, the rate of entropy production, and the rate of entropy flow from the system to the outside. We show that when the system is out of equilibrium, it constantly produces entropy and at the same time extracts entropy from the system. Both the entropy production and extraction rates decrease in time and saturate to constant values. In the long-time limit, the rate of entropy production balances the rate of entropy extraction, and at equilibrium both rates become zero. Furthermore, many thermodynamic relations can be checked within the present model.
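The saturation behavior described above can be sketched with the Schnakenberg entropy production rate for a discrete-state model. The hypothetical three-state cycle below (rates are assumptions, not the paper's Brownian motor) has broken detailed balance, so the entropy production rate relaxes to a constant positive value rather than to zero:

```python
import numpy as np

# Hypothetical biased three-state cycle: clockwise rate 2.0, counter 0.5.
N = 3
W = np.zeros((N, N))  # W[i, j] is the transition rate j -> i
for j in range(N):
    W[(j + 1) % N, j] = 2.0  # clockwise jumps
    W[(j - 1) % N, j] = 0.5  # counter-clockwise jumps
np.fill_diagonal(W, -W.sum(axis=0))  # conserve probability

def ep_rate(p):
    """Schnakenberg entropy production rate:
    (1/2) * sum_{i!=j} (W_ij p_j - W_ji p_i) ln(W_ij p_j / (W_ji p_i))."""
    s = 0.0
    for i in range(N):
        for j in range(N):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                J = W[i, j] * p[j] - W[j, i] * p[i]
                s += 0.5 * J * np.log((W[i, j] * p[j]) / (W[j, i] * p[i]))
    return s

# Relax an asymmetric initial distribution with forward Euler.
p = np.array([0.8, 0.1, 0.1])
dt = 1e-3
rates = []
for _ in range(20000):
    rates.append(ep_rate(p))
    p = p + dt * (W @ p)
```

In the uniform steady state the rate saturates at 1.5 ln 4 for these rates, mirroring the abstract's observation that production and extraction rates decay to constants and balance in the long-time limit.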