A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems
Estep, Donald; Holst, Michael
2014-04-01
Technical report TR-14-33 under grant HDTRA1-09-1-0036; approved for public release, distribution is unlimited. Related publication: Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, submitted for publication.
Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes
NASA Astrophysics Data System (ADS)
Mitra, Sumit
With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize the operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time. Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: first, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation; second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of dynamic contrast-enhanced magnetic resonance imaging, and collaborative filtering exploiting age information. PMID:28450978
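As a hedged illustration of the model above, the following Python sketch decomposes a matrix into a sum of block-wise low rank components by proximal steps with block-wise singular value thresholding. The block sizes, scale weights taus, iteration count, and damped step are illustrative assumptions, not the authors' exact convex solver or parameter guidance.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def block_svt(X, block, tau):
    """Apply SVT independently to each non-overlapping block of size `block`."""
    Y = np.zeros_like(X)
    for i in range(0, X.shape[0], block):
        for j in range(0, X.shape[1], block):
            Y[i:i + block, j:j + block] = svt(X[i:i + block, j:j + block], tau)
    return Y

def multiscale_low_rank(X, blocks=(1, 4, 16, 64), taus=None, n_iter=100):
    """Decompose X into a sum of block-wise low rank components, one per
    scale, via proximal (Jacobi-style) steps on
    0.5*||X - sum_k C_k||_F^2 + sum_k tau_k * (block-wise nuclear norm)."""
    X = np.asarray(X, dtype=float)
    if taus is None:
        taus = [np.sqrt(b) for b in blocks]   # heuristic scale weighting
    step = 1.0 / len(blocks)                  # damped step for the joint update
    C = [np.zeros_like(X) for _ in blocks]
    for _ in range(n_iter):
        R = X - sum(C)                        # shared residual
        for k, (b, t) in enumerate(zip(blocks, taus)):
            C[k] = block_svt(C[k] + step * R, b, step * t)
    return C
```

Note that with block size 1 the block-wise nuclear norm reduces to an entry-wise l1 penalty, so the classic low rank + sparse decomposition is recovered as a special case of the multi-scale model.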
The Multiscale Robin Coupled Method for flows in porous media
NASA Astrophysics Data System (ADS)
Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.
2018-02-01
A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12] is identified, which also allows us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
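For orientation, Robin-type transmission conditions of the kind described above can be written in the following generic form for Darcy flow, with u_i = -K_i grad(p_i) in subdomain Omega_i, interface Gamma_ij, and unit normal nu_i pointing out of Omega_i. The signs, scaling, and placement of the Robin parameters beta_i are a hedged sketch rather than the paper's exact formulation:

```latex
% Hedged sketch of Robin transmission conditions on an interface \Gamma_{ij}
\beta_i\,\mathbf{u}_i\cdot\boldsymbol{\nu}_i + p_i
  = \beta_i\,\mathbf{u}_j\cdot\boldsymbol{\nu}_i + p_j
  \quad \text{on } \Gamma_{ij},
\qquad
\beta_j\,\mathbf{u}_j\cdot\boldsymbol{\nu}_j + p_j
  = \beta_j\,\mathbf{u}_i\cdot\boldsymbol{\nu}_j + p_i
  \quad \text{on } \Gamma_{ij}.
```

In this form, letting the Robin parameters tend to zero emphasizes pressure continuity, while letting them grow emphasizes flux continuity, which is consistent with the abstract's observation that varying the Robin parameter allows switching between MMMFEM-like and MHM-like behavior.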
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121
2012-09-15
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method preserves image sharpness very well while avoiding the occurrence of 'salt-and-pepper' noise and mosaic artifacts. Conclusions: Since the interview sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the interview sampling rate varies.
NASA Astrophysics Data System (ADS)
Laleian, A.; Valocchi, A. J.; Werth, C. J.
2017-12-01
Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region spanning the length of the transverse mixing zone. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and gives consistent results in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find that the accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves down-gradient, where transverse mixing is more fully developed. This decomposition also poses additional challenges with respect to mortar coupling. We explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
Multi-scale Methods in Quantum Field Theory
NASA Astrophysics Data System (ADS)
Polyzou, W. N.; Michlin, Tracie; Bulut, Fatih
2018-05-01
Daubechies wavelets are used to make an exact multi-scale decomposition of quantum fields. For reactions that involve a finite energy and take place in a finite volume, the number of relevant quantum mechanical degrees of freedom is finite. The wavelet decomposition has natural resolution and volume truncations that can be used to isolate the relevant degrees of freedom. The application of flow equation methods to construct effective theories that decouple coarse and fine scale degrees of freedom is examined.
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties endowed in geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large to small scales. In light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density-value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends the conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. Compared with the LSA implemented using the moving-averaging method, the novel WMD-based LSA better identified weak geochemical anomalies associated with mineralization in the covered area.
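A minimal Python sketch of the moving-average version of local singularity analysis, which the WMD method generalizes, is given below. The window sizes and the uniform kernel are illustrative assumptions; the paper's contribution is replacing this plain averaging with wavelet approximation coefficients as the generalized density.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_singularity(field, sizes=(1, 2, 4, 8)):
    """Estimate the local singularity index alpha at each pixel of a 2-D
    geochemical map from the scaling law mean_s(x) ~ s**(alpha - 2), where
    mean_s is the average concentration in a window of half-width s."""
    field = np.asarray(field, dtype=float)
    logs = np.log(np.array(sizes, dtype=float))
    means = np.stack([np.log(uniform_filter(field, size=2 * s + 1) + 1e-12)
                      for s in sizes])
    lx = logs - logs.mean()                      # centered log window sizes
    # Pixel-wise least-squares slope of log(mean) versus log(size).
    slope = np.tensordot(lx, means - means.mean(axis=0), axes=(0, 0)) / (lx @ lx)
    return slope + 2.0                           # alpha = slope + 2 in 2-D
```

Pixels with alpha well below 2 then flag local enrichment (singular anomalies), while alpha near 2 indicates non-singular background.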
Yasir, Muhammad Naveed; Koh, Bong-Hwan
2018-01-01
This paper presents the local mean decomposition (LMD) integrated with multi-scale permutation entropy (MPE), also known as LMD-MPE, to investigate rolling element bearing (REB) fault diagnosis from measured vibration signals. First, the LMD decomposed the vibration data or acceleration measurement into separate product functions that are composed of both amplitude and frequency modulation. MPE then calculated the statistical permutation entropy from the product functions to extract the nonlinear features to assess and classify the condition of the healthy and damaged REB system. Comparative experimental results of the conventional LMD-based multi-scale entropy and MPE are presented to verify the authenticity of the proposed technique. The study found that LMD-MPE's integrated approach provides reliable, damage-sensitive features when analyzing the bearing condition. The results on REB experimental datasets show that the proposed approach yields more robust outcomes than existing methods. PMID:29690526
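The MPE feature-extraction stage can be sketched in a few lines of Python; in the paper it is applied to each LMD product function rather than to the raw signal, and the embedding order, delay, and scale range below are illustrative assumptions.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])        # ordinal patterns
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(order))

def multiscale_permutation_entropy(x, order=3, delay=1, max_scale=10):
    """MPE: coarse-grain by non-overlapping averaging, then compute PE."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        m = len(x) // s
        coarse = x[:m * s].reshape(m, s).mean(axis=1)
        out.append(permutation_entropy(coarse, order, delay))
    return np.array(out)
```

The resulting entropy-versus-scale curve is the damage-sensitive feature vector fed to a classifier.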
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical for advancing predictions of larger-system behavior using an understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking studies among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error, and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take account not only of global semantics, such as correcting color hue and brightness imbalances, but also of the local content of the image, such as human faces and the sky in landscapes. In this paper we describe a new scheme for automatic image enhancement that considers both global semantics and local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from the outset. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the capabilities of the new approach by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
3-D discrete shearlet transform and video processing.
Negi, Pooran Singh; Labate, Demetrio
2012-06-01
In this paper, we introduce a digital implementation of the 3-D shearlet transform and illustrate its application to problems of video denoising and enhancement. The shearlet representation is a multiscale pyramid of well-localized waveforms defined at various locations and orientations, which was introduced to overcome the limitations of traditional multiscale systems in dealing with multidimensional data. While the shearlet approach shares the general philosophy of curvelets and surfacelets, it is based on a very different mathematical framework, which is derived from the theory of affine systems and uses shearing matrices rather than rotations. This allows a natural transition from the continuous setting to the digital setting and a more flexible mathematical structure. The 3-D digital shearlet transform algorithm presented in this paper consists of a cascade of a multiscale decomposition and a directional filtering stage. The filters employed in this decomposition are implemented as finite-length filters, and this ensures that the transform is local and numerically efficient. To illustrate its performance, the 3-D discrete shearlet transform is applied to problems of video denoising and enhancement, and compared against other state-of-the-art multiscale techniques, including curvelets and surfacelets.
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of the source images is first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means, with three different fusion rules at different scales. The three types of fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.
NASA Astrophysics Data System (ADS)
Adarsh, S.; Reddy, M. Janga
2017-07-01
In this paper, the Hilbert-Huang transform (HHT) approach is used for the multiscale characterization of the All India Summer Monsoon Rainfall (AISMR) time series and monsoon rainfall time series from five homogeneous regions in India. The study employs the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) for multiscale decomposition of monsoon rainfall in India and uses the Normalized Hilbert Transform and Direct Quadrature (NHT-DQ) scheme for the time-frequency characterization. The cross-correlation analysis between orthogonal modes of the All India monthly monsoon rainfall time series and those of five climate indices, namely the Quasi-Biennial Oscillation (QBO), El Niño Southern Oscillation (ENSO), Sunspot Number (SN), Atlantic Multi-Decadal Oscillation (AMO), and Equatorial Indian Ocean Oscillation (EQUINOO), in the time domain showed that the links of different climate indices with monsoon rainfall are expressed well only for a few low-frequency modes and for the trend component. Furthermore, this paper investigated the hydro-climatic teleconnection of ISMR at multiple time scales using the HHT-based running correlation analysis technique called time-dependent intrinsic correlation (TDIC). The results showed that both the strength and nature of the association between different climate indices and ISMR vary with time scale. Stemming from this finding, a methodology employing the Multivariate extension of EMD and Stepwise Linear Regression (MEMD-SLR) is proposed for prediction of monsoon rainfall in India. The proposed MEMD-SLR method clearly exhibited superior performance over the IMD operational forecast, M5 Model Tree (MT), and multiple linear regression methods in ISMR predictions and displayed excellent predictive skill during 1989-2012, including the four extreme events that occurred during this period.
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Gao, Yi
2017-03-01
This paper presents a fully automatic approach to grade intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works have shown their power in discriminating intermediate Gleason patterns, owing to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by the multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.
Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu
2014-07-01
The electronic structure and initial decomposition of the high explosive HMX under conditions of shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main cause of the insulator-to-metal transition in the HMX system. The metallization pressure (about 130 GPa) of condensed-phase HMX is predicted here for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway at extreme conditions, providing new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.
Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application
2016-02-26
Report documentation fragment; recoverable content: the objective of this project was to develop a general CFD framework for multifidelity simulations targeting multiscale problems as well as resilience. Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. Related publication: G. E. Karniadakis et al., "Resilient algorithms for reconstructing and simulating gappy flow fields in CFD", Fluid Dynamics Research, vol. 47, 051402, 2015.
Guo, Bin; Chen, Zhongsheng; Guo, Jinyun; Liu, Feng; Chen, Chuanfa; Liu, Kangli
2016-01-01
Changes in precipitation could have crucial influences on the regional water resources in arid regions such as Xinjiang. It is necessary to understand the intrinsic multi-scale variations of precipitation in different parts of Xinjiang in the context of climate change. In this study, based on precipitation data from 53 meteorological stations in Xinjiang during 1960-2012, we investigated the intrinsic multi-scale characteristics of precipitation variability using an adaptive method named ensemble empirical mode decomposition (EEMD). Obvious non-linear upward trends in precipitation were found in the north, south, east, and the entire Xinjiang. Precipitation in Xinjiang exhibited significant variability on inter-annual scales (quasi-2-year and quasi-6-year) and inter-decadal scales (quasi-12-year and quasi-23-year). Moreover, the 2-3-year quasi-periodic fluctuation was dominant in regional precipitation, and the inter-annual variation had a considerable effect on the regional-scale precipitation variation in Xinjiang. We also found distinctive spatial differences in the variation trends and turning points of precipitation in Xinjiang. The results of this study indicated that, compared to traditional decomposition methods, the EEMD method, without using any a priori determined basis functions, could effectively extract the reliable multi-scale fluctuations and reveal the intrinsic oscillation properties of climate elements. PMID:27007388
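The EEMD workflow this abstract relies on can be sketched as follows on synthetic data. The third-party PyEMD package (pip install EMD-signal) and its EEMD(trials=..., noise_width=...) parameters are assumptions (any EMD/EEMD implementation would serve), and the synthetic series merely stands in for the station records.

```python
import numpy as np
# Assumes the third-party package PyEMD; parameter names may differ by version.
from PyEMD import EEMD

rng = np.random.default_rng(0)
t = np.arange(1960, 2013, 1 / 12)            # synthetic monthly series, 1960-2012
precip = (10 * np.sin(2 * np.pi * t / 2.5)   # stand-in quasi-2-3-year cycle
          + 5 * np.sin(2 * np.pi * t / 12)   # stand-in inter-decadal cycle
          + 0.1 * (t - t[0])                 # weak upward trend proxy
          + rng.normal(0, 3, t.size))        # noise

eemd = EEMD(trials=100, noise_width=0.2)     # ensemble size, added-noise level
imfs = eemd.eemd(precip)                     # IMFs, ordered high to low frequency
residue = precip - imfs.sum(axis=0)          # slowly varying trend component
for k, imf in enumerate(imfs):               # rough mean period per IMF
    zc = np.count_nonzero(np.diff(np.sign(imf)) != 0)
    print(f"IMF{k + 1}: mean period ~ {2 * t.size / max(zc, 1) / 12:.1f} years")
```

Zero-crossing counts give a quick estimate of each mode's mean period, which is how quasi-periodicities such as the 2-3-year fluctuation are typically identified.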
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution places world populations under threat of fatal illnesses (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic selection strategy for feature scales on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to get morphological multi-scale features of the CHM by cutting the CHM from treetop to ground, analysing and refining the dominant multiple scales with differential horizontal profiles to get treetops, and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the problem of false detections on CHM side-surfaces produced by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
NASA Astrophysics Data System (ADS)
Li, Jiqing; Duan, Zhipeng; Huang, Jing
2018-06-01
With the aggravation of global climate change, the shortage of water resources in China is becoming more and more serious. Using reasonable methods to study changes in precipitation is very important for the planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed using the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast shifts in precipitation. The results show that the precipitation series exhibits periodicities of 2.6, 4.3, 14, and 21.7 years, and the variance contribution rate of each modal component shows that inter-annual variation dominates the precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease.
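For reference, the classic coarse-graining MSE that this paper augments with EMD-based detrending can be sketched as follows. Conventions differ across implementations (e.g. whether the tolerance r is tied to the original or the coarse-grained standard deviation); this sketch uses the per-scale deviation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy: -log of the conditional probability that sequences
    matching for m points (within tolerance r*std) also match for m+1."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matched_pairs(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        n = templates.shape[0]
        return ((d <= tol).sum() - n) / 2     # unordered pairs, excluding self
    b, a = matched_pairs(m), matched_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=20, m=2, r=0.15):
    """Classic MSE: coarse-grain by non-overlapping averaging, then SampEn."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return np.array(out)
```

The paper's proposal amounts to removing slow trends (via EMD) before this coarse-graining step, so that the entropy-versus-scale curve reflects intrinsic dynamics rather than nonstationarity.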
Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A
2015-07-01
Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L^2) image decomposition, Tadmor et al. (2004) start with extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise could be considered a fine scale structure. Thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control over speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration algorithm is also demonstrated for aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions.
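A hedged numerical sketch of the weighted TV flow idea follows. The explicit time stepping, the regularization eps, and the particular edge-stopping weight are illustrative assumptions rather than the paper's scheme, which analyzes the continuous flow with clinically motivated weights.

```python
import numpy as np

def weighted_tv_flow(u0, weight, n_steps=200, dt=0.05, eps=1e-2):
    """Explicit time stepping of u_t = w(x) * div( grad(u) / |grad(u)| ).
    eps regularizes the curvature term; where the weight is small (near
    edges) smoothing slows down, where it is ~1 (flat regions) noise is
    removed quickly. Explicit stepping needs a small dt for stability."""
    u = u0.astype(float).copy()
    for _ in range(n_steps):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += dt * weight * div
    return u

def edge_stopping_weight(u0, k=0.1):
    """One plausible image-dependent weight: small across strong gradients."""
    gy, gx = np.gradient(u0.astype(float))
    return 1.0 / (1.0 + (gx**2 + gy**2) / k**2)
```

Snapshots of u at increasing times form exactly the coarse-to-fine multiscale representation the abstract uses for denoising and registration.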
Multiscale techniques for parabolic equations.
Målqvist, Axel; Persson, Anna
2018-01-01
We use the local orthogonal decomposition technique introduced in Målqvist and Peterseim (Math Comput 83(290):2583-2603, 2014) to derive a generalized finite element method for linear and semilinear parabolic equations with spatial multiscale coefficients. We consider nonsmooth initial data and a backward Euler scheme for the temporal discretization. Optimal order convergence rate, depending only on the contrast, but not on the variations of the coefficients, is proven in the [Formula: see text]-norm. We present numerical examples, which confirm our theoretical findings.
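As a point of reference for the time discretization, a naive single-scale backward Euler scheme for the 1-D model problem u_t = (a(x) u_x)_x with a rapidly oscillating coefficient is sketched below. The localized orthogonal decomposition itself, which corrects the coarse spatial basis, is beyond a short sketch, and the mesh and coefficient here are assumptions.

```python
import numpy as np

def backward_euler_heat(a_mid, u0, T=1.0, n_steps=200):
    """Backward Euler for u_t = (a(x) u_x)_x on (0,1) with u = 0 at both
    ends. a_mid: diffusion coefficient at the n cell midpoints (may
    oscillate on a fine scale); u0: initial data at the n-1 interior nodes."""
    n = len(a_mid)                       # number of cells, mesh width h = 1/n
    h, dt = 1.0 / n, T / n_steps
    A = np.zeros((n - 1, n - 1))         # tridiagonal two-point flux stencil
    for i in range(n - 1):
        A[i, i] = (a_mid[i] + a_mid[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a_mid[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -a_mid[i + 1] / h**2
    M = np.eye(n - 1) + dt * A           # (I + dt*A) u^{k+1} = u^k
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(M, u)        # unconditionally stable in dt
    return u

# Rapidly oscillating coefficient: the regime LOD methods are built for.
n = 256
x_mid = (np.arange(n) + 0.5) / n
a_mid = 1.0 + 0.9 * np.sin(2 * np.pi * x_mid / 0.01)   # fine scale 0.01
x_int = np.arange(1, n) / n
u_final = backward_euler_heat(a_mid, np.sin(np.pi * x_int))
```

On a coarse mesh this naive scheme loses accuracy because the basis ignores the fine-scale oscillations; the LOD construction restores optimal convergence without resolving them globally.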
A multiscale decomposition approach to detect abnormal vasculature in the optic disc.
Agurto, Carla; Yu, Honggang; Murray, Victor; Pattichis, Marios S; Nemeth, Sheila; Barriga, Simon; Soliz, Peter
2015-07-01
This paper presents a multiscale method to detect neovascularization in the optic disc (NVD) using fundus images. Our method is applied to a manually selected region of interest (ROI) containing the optic disc. All the vessels in the ROI are segmented by adaptively combining contrast enhancement methods with a vessel segmentation technique. Textural features extracted using multiscale amplitude-modulation frequency-modulation, morphological granulometry, and fractal dimension are used. A linear SVM is used to perform the classification, which is tested by means of 10-fold cross-validation. The performance is evaluated using 300 images, achieving an AUC of 0.93 with a maximum accuracy of 88%.
Study on the relevance of some of the description methods for plateau-honed surfaces
NASA Astrophysics Data System (ADS)
Yousfi, M.; Mezghani, S.; Demirci, I.; El Mansori, M.
2014-01-01
Much work has been undertaken in recent years on the determination of a complete parametric description of plateau-honed surfaces, with the intention of making a link between the process conditions, the surface topography, and the required functional performance. Different advanced techniques (plateau/valley decomposition using the normalized Abbott-Firestone curve or morphological operators, multiscale decomposition using the continuous wavelet transform, etc.) have been proposed and applied in different studies. This paper re-examines the current state of developments and discusses the relevance of the different proposed parameters and characterization methods for plateau-honed surfaces by considering the control loop manufacturing-characterization-function. The relevance of appropriate characterization is demonstrated through two experimental studies. They consider the effects of the main plateau-honing process variables (abrasive grit size and abrasive indentation velocity in finish-honing, and plateau-honing stage duration and pressure) on cylinder liner surface textures and the hydrodynamic friction of the ring-pack system.
Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P
2016-04-13
An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in a notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales.
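The variance-decomposition bookkeeping described above can be sketched for the linear case as follows. The abstract's models are GLMs evaluated on explained deviance, so this R-squared-based version, with hypothetical predictor sets X_a and X_b for two spatial scales, is a simplified stand-in.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def variance_partition(X_a, X_b, y):
    """Partition explained variance between predictor sets measured at two
    spatial scales: pure-A, pure-B, and the shared component that reflects
    cross-scale correlation between the two sets."""
    r_ab = r_squared(np.column_stack([X_a, X_b]), y)
    r_a, r_b = r_squared(X_a, y), r_squared(X_b, y)
    pure_a = r_ab - r_b            # gained by adding scale A on top of B
    pure_b = r_ab - r_a            # gained by adding scale B on top of A
    shared = r_ab - pure_a - pure_b  # = r_a + r_b - r_ab
    return {"pure_A": pure_a, "pure_B": pure_b, "shared": shared, "total": r_ab}
```

A large shared component relative to the pure components is exactly the signature of cross-scale correlation the abstract warns about.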
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
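For contrast with MEMD, the fixed-basis DWT fusion baseline mentioned above can be sketched with PyWavelets in a few lines: average the coarsest approximation and keep the larger-magnitude detail coefficient at each scale and orientation. The wavelet, level, and fusion rules are standard textbook choices, not the paper's; the MEMD variant replaces these fixed wavelet scales with data-adaptive, mutually aligned IMFs.

```python
import numpy as np
import pywt  # PyWavelets; the DWT baseline the paper compares against

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    """Pixel-level fusion of two same-sized grayscale images: average the
    low-pass band, take the max-magnitude coefficient in each detail band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                     # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                  # details per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))    # (H, V, D) triplets
    return pywt.waverec2(fused, wavelet)
```

The mode-mixing and misalignment problems the abstract describes arise precisely because such fixed dyadic scales need not line up with the intrinsic scales of each input; MEMD sidesteps this by decomposing all inputs jointly.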
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
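The POD step underlying this analysis can be sketched via the SVD of mean-removed snapshots; the snapshot layout and mode count below are illustrative assumptions.

```python
import numpy as np

def pod_modes(snapshots, n_modes=5):
    """POD via SVD. snapshots: (n_times, n_pixels) matrix of temperature
    maps flattened to rows. Returns the leading spatial modes, their
    temporal coefficients, and the fraction of variance each captures."""
    X = snapshots - snapshots.mean(axis=0)        # remove the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / (s**2).sum()
    return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], energy[:n_modes]
```

Comparing the energy spectrum and temporal coefficients between active and quiet regions is one natural way to quantify the statistical differences the abstract reports.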
A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis
NASA Astrophysics Data System (ADS)
Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.
2016-12-01
Health condition analysis and diagnostics of rotating machinery require the capability to properly characterize the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on probability density functions of frequency spectra. The method is suitable for analyzing vibration signals that may be acquired periodically over the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
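A hedged sketch of the merging idea follows: adjacent IMFs are greedily combined when their normalized frequency spectra are close. The total-variation distance and threshold used here are illustrative; the paper defines its own dissimilarity criterion on spectral probability density functions.

```python
import numpy as np

def spectrum_pdf(x, n_fft=1024):
    """Amplitude spectrum of a mode, normalized to a probability density."""
    s = np.abs(np.fft.rfft(x, n=n_fft))
    return s / s.sum()

def merge_imfs(imfs, threshold=0.3):
    """Greedily merge adjacent IMFs whose spectral PDFs are similar,
    yielding a small set of Combined Mode Functions (CMFs)."""
    cmfs = [imfs[0].copy()]
    for imf in imfs[1:]:
        p, q = spectrum_pdf(cmfs[-1]), spectrum_pdf(imf)
        if 0.5 * np.abs(p - q).sum() < threshold:   # total-variation distance
            cmfs[-1] = cmfs[-1] + imf               # similar content: same CMF
        else:
            cmfs.append(imf.copy())                 # distinct scale: new CMF
    return cmfs
```

Summing adjacent IMFs loses no information (the reconstruction is unchanged) while collapsing an over-decomposition into a handful of interpretable modes.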
Liu, Quan; Chen, Yi-Feng; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2017-08-01
Electroencephalography (EEG) has been widely utilized to measure the depth of anaesthesia (DOA) during operations. However, EEG signals are usually contaminated by artifacts, which degrade the accuracy of the measured DOA. In this study, an effective and useful filtering algorithm based on multivariate empirical mode decomposition and multiscale entropy (MSE) is proposed to measure DOA. The mean entropy of MSE is used as an index to find artifact-free intrinsic mode functions. The effect of different levels of artifacts on the performance of the proposed filtering is analysed using simulated data. Furthermore, EEG signals from 21 patients are collected and analysed using sample entropy to calculate the complexity for monitoring DOA. The correlation coefficients between entropy and the bispectral index (BIS) are 0.14 ± 0.30 and 0.63 ± 0.09 before and after filtering, respectively. An artificial neural network (ANN) model is used for range mapping in order to correlate the measurements with BIS. The ANN results show a strong correlation coefficient (0.75 ± 0.08). The results in this paper verify that entropy values and BIS have a strong correlation for the purpose of DOA monitoring and that the proposed filtering method can effectively remove artifacts from EEG signals. The proposed method performs better than the commonly used wavelet denoising method. This study provides a fully adaptive and automated filter for EEG to measure DOA more accurately and thus reduce risks related to the maintenance of anaesthetic agents.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for the decomposition and fusion of non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly correlated with the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, salient target information, and rich detail information, making them more suitable for human visual characteristics or machine perception.
Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network
NASA Astrophysics Data System (ADS)
Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu
2018-03-01
As wind farms grow in scale, their inherent randomness and uncertainty inevitably affect the operation of the power grid, so accurate wind speed forecasting is critical for stable grid operation. Traditional forecasting methods typically do not take the frequency characteristics of wind speed into account and, owing to the limited generalization ability of their model structures, cannot reflect the nature of the changes in the wind speed signal. An AdaBoost neural network combined with a multi-resolution, multi-scale decomposition of the wind speed is therefore proposed to design the model structure and improve forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province demonstrates that the proposed strategy improves the robustness and accuracy of the forecasts.
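A compact sketch of this decompose-then-forecast pattern, assuming PyWavelets and scikit-learn; the wavelet family, lag depth, and ensemble size are illustrative choices, not the paper's:

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostRegressor

def mra_components(x, wavelet="db4", level=3):
    """Split x into additive per-scale components via the wavelet MRA:
    reconstruct each coefficient band alone; the components sum to x."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for k in range(len(coeffs)):
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[:len(x)])
    return comps

def forecast_next(series, lags=24):
    """One-step-ahead AdaBoost forecast from lagged values of one sub-band."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    model = AdaBoostRegressor(n_estimators=50).fit(X, series[lags:])
    return model.predict(series[-lags:][None, :])[0]

def wavelet_adaboost_forecast(wind_speed):
    # forecast each scale separately; additivity lets us sum the forecasts
    return sum(forecast_next(c)
               for c in mra_components(np.asarray(wind_speed, float)))
```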
Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo
2013-01-01
One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.
NASA Astrophysics Data System (ADS)
Jolivet, S.; Mezghani, S.; El Mansori, M.
2016-09-01
The replication of topography has generally been restricted to optimizing material processing technologies in terms of statistical, single-scale features such as roughness. By contrast, manufactured surface topography is highly complex, irregular, and multiscale. In this work, we have demonstrated the use of multiscale analysis on replicates of surface finish to assess the precise control of the finished replica. Five commercial resins used for surface replication were compared. The topography of five standard surfaces representative of common finishing processes was acquired both directly and by a replication technique. The surfaces were then characterized using the ISO 25178 standard and a multiscale decomposition based on a continuous wavelet transform, to compare the quality of roughness transfer at different scales. Additionally, the force modulation mode of an atomic force microscope was used to compare the stiffness properties of the resins. The results showed that less stiff resins are able to replicate the surface finish over a larger wavelength band. The method was then tested for non-destructive quality control of automotive gear tooth surfaces.
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of the time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution.
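The core regression trick, expanding each AR coefficient on a fixed multi-scale basis and solving a single least-squares problem, can be sketched as follows; a hand-rolled Haar-type basis stands in for the paper's wavelet family, and the model order and basis depth are illustrative (the paper adds FOLS/mutual-information term selection on top of this):

```python
import numpy as np

def haar_basis(T, max_level=3):
    """A constant plus square waves at dyadic scales: a simple multi-scale
    stand-in for the paper's wavelet basis functions."""
    basis = [np.ones(T)]
    for level in range(max_level):
        width = T // 2 ** (level + 1)
        for start in range(0, T - width, 2 * width):
            b = np.zeros(T)
            b[start:start + width] = 1.0
            b[start + width:start + 2 * width] = -1.0
            basis.append(b)
    return np.array(basis)                      # (n_basis, T)

def fit_tvar(y, order=2, max_level=3):
    """Least-squares fit of a TVAR model whose coefficients a_k(t) are
    expanded on the basis above; returns the (order, T) coefficient paths."""
    y = np.asarray(y, float)
    T = len(y)
    B = haar_basis(T, max_level)
    # regressors: y[t-k] * B_j(t) for every lag k and basis function j
    rows = [np.concatenate([y[t - k] * B[:, t] for k in range(1, order + 1)])
            for t in range(order, T)]
    c, *_ = np.linalg.lstsq(np.array(rows), y[order:], rcond=None)
    return c.reshape(order, len(B)) @ B         # a_k(t) for k = 1..order
```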
Multiscale synchrony behaviors of paired financial time series by 3D multi-continuum percolation
NASA Astrophysics Data System (ADS)
Wang, M.; Wang, J.; Wang, B. T.
2018-02-01
Multiscale synchrony behaviors and nonlinear dynamics of paired financial time series are investigated in an attempt to study the cross-correlation relationships between two stock markets. A random stock price model is developed from a new system, called the three-dimensional (3D) multi-continuum percolation system, which is utilized to imitate the formation mechanism of price dynamics and explain the nonlinear behaviors found in financial time series. We assume that price fluctuations are caused by the spread of investment information. A cluster of the 3D multi-continuum percolation system represents a cluster of investors who share the same investment attitude. In this paper, we focus on the paired return series, the paired volatility series, and the paired intrinsic mode functions obtained by empirical mode decomposition. A new cross recurrence quantification analysis, combined with multiscale cross-sample entropy, is put forward to investigate the multiscale synchrony of these paired series from the proposed model. The corresponding analysis is also carried out for two Chinese stock markets for comparison.
Adaptive multiscale processing for contrast enhancement
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.
1993-07-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
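A generic version of this modify-coefficients-then-reconstruct scheme, using PyWavelets' stationary (overcomplete) 2D wavelet transform as a stand-in for the paper's dyadic wavelet and φ-transforms; the power-law gain is an illustrative nonlinearity, not the authors' weight functions:

```python
import numpy as np
import pywt

def multiscale_enhance(image, wavelet="db2", level=3, gain=2.0):
    """Boost detail coefficients of an overcomplete (stationary) wavelet
    transform, then reconstruct. Image sides must be divisible by
    2**level for pywt.swt2 to apply."""
    coeffs = pywt.swt2(image.astype(float), wavelet, level=level)
    # sublinear exponent boosts weak edges relatively more than strong ones
    boost = lambda c: np.sign(c) * gain * np.abs(c) ** 0.8
    enhanced = [(cA, (boost(cH), boost(cV), boost(cD)))
                for cA, (cH, cV, cD) in coeffs]
    return pywt.iswt2(enhanced, wavelet)
```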
Fast Decentralized Averaging via Multi-scale Gossip
NASA Astrophysics Data System (ADS)
Tsianos, Konstantinos I.; Rabbat, Michael G.
We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip, which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has a communication cost of O(n log log n log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulation experiments verify the expected good performance on graphs of many thousands of nodes.
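The pairwise-averaging primitive that such algorithms build on is easy to state in code; the hierarchical decomposition itself is omitted, so this sketch shows only plain randomized gossip on an adjacency matrix:

```python
import numpy as np

def pairwise_gossip(adjacency, values, n_rounds=10_000, seed=0):
    """Randomized pairwise gossip: repeatedly average across a random edge.
    Multi-scale Gossip accelerates this primitive by recursing on a
    hierarchical decomposition of the graph, which is not shown here."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, float).copy()
    edges = np.argwhere(np.triu(adjacency, k=1))    # undirected edge list
    for _ in range(n_rounds):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])           # local averaging step
    return x                                        # approaches mean(values)
```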
NASA Astrophysics Data System (ADS)
Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.
2018-01-01
In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is either to impose an overburden pressure directly on the reservoir, thus treating it as a coupled problem domain, or to model flow on a huge domain with zero-permeability cells to mimic the no-flow boundary condition on the interface between the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and, further, does not mimic the true effect of the overburden on stress-sensitive reservoirs, whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method, and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersection algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical solution of Mandel's problem.
A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.
Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro
2016-01-01
Multiscale permutation entropy (MSPE) has become an interesting tool for exploring neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the six measures were compared in terms of their ability to track the dynamical changes in the EEG data and their performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among the MSPE measures. CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of the EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC). Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this respect. MA-TPE outperformed the other measures with a faster tracking speed of the loss of consciousness. MA-based multiscale permutation entropies have potential for on-line anesthesia EEG analysis owing to their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation entropies may fail to describe the characteristics of EEG at high decomposition scales.
NASA Astrophysics Data System (ADS)
Gu, Rongbao; Shao, Yanmin
2016-07-01
In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed, and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, it is found that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index over periods of less than one month, but not over longer periods. This characterizes how far ahead the singular value decomposition entropy predicts the stock market, extending Caraiani's result obtained in Caraiani (2014). On the other hand, the result also reveals an essential characteristic of the stock market as a chaotic dynamic system.
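The entropy itself reduces to a few lines; a hedged sketch is given below (the DCCA-based extension is not shown, and the window and embedding sizes are illustrative):

```python
import numpy as np

def svd_entropy(returns, window=20, n_lags=5):
    """Shannon entropy of the normalized singular spectrum of a
    delay-embedded return series, in the spirit of Caraiani (2014)."""
    x = np.asarray(returns, float)[-(window + n_lags):]
    M = np.array([x[i:i + n_lags] for i in range(window)])  # embedding matrix
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()                       # normalized singular spectrum
    return -np.sum(p * np.log(p))         # low entropy = few dominant modes
```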
NASA Astrophysics Data System (ADS)
Li, Changbo; Wang, Liangshu; Sun, Bin; Feng, Runhai; Wu, Yongjing
2015-09-01
In this paper, we introduce the method of Wavelet Multi-scale Decomposition (WMD) combined with Power Spectrum Analysis (PSA) for the separation of regional gravity and magnetic anomalies. The Songliao Basin is situated between the Siberian Plate and the North China Plate, and the main structural trend of its gravity and magnetic anomaly fields is NNE. The study area shows a significant feature of deep collage-type construction. According to the features of the gravity field, the region was divided into five sub-regions. The gravity and magnetic fields of the Songliao Basin were separated using a 4th-order WMD. The apparent depth of the anomalies in each order was determined by logarithmic PSA. Then, the shallow high-frequency anomalies were removed, and the 2nd-4th order wavelet detail anomalies were used to study the basin's major faults. Twenty-six faults within the basement were recognized. The 4th-order wavelet approximation anomalies were used for the inversion of the Moho discontinuity and the Curie isothermal surface.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Min; Wei, Jian-Gong; Peng, Zhen; Sheng, Xin-Qing
2012-02-01
The interpolative decomposition (ID) is combined with the multilevel fast multipole algorithm (MLFMA), denoted ID-MLFMA, to handle multiscale problems. The ID-MLFMA first generates ID levels by recursively dividing the boxes at the finest MLFMA level into smaller boxes. It is specifically shown that near-field interactions with respect to the MLFMA, in the form of the matrix-vector multiplication (MVM), are efficiently approximated at the ID levels. Meanwhile, computations on far-field interactions at the MLFMA levels remain unchanged. Only a small portion of the matrix entries is required to approximate the coupling among well-separated boxes at the ID levels, and these submatrices can be filled without computing the complete original coupling matrix. It follows that matrix filling in the ID-MLFMA becomes much less expensive. The memory consumed is thus greatly reduced, and the MVM is accelerated as well. Several factors that may influence the accuracy, efficiency and reliability of the proposed ID-MLFMA are investigated by numerical experiments. Complex targets are calculated to demonstrate the capability of the ID-MLFMA algorithm.
A multiscale method for a robust detection of the default mode network
NASA Astrophysics Data System (ADS)
Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo
2013-11-01
The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions increase with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at the scale level; and finally, a weighted contribution of each scale via the goodness-of-fit measure. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved at the single-subject level and at the group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or in subjects with low neuronal activity. Furthermore, the multiscale method could be extended to the detection of other resting state neuronal networks.
NASA Astrophysics Data System (ADS)
Cheng, Boyang; Jin, Longxu; Li, Guoning
2018-06-01
The fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by the NSST. An improved novel sum-modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are then used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from local-area singular value decomposition of each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and a time matrix is utilized to decide the iteration number adaptively. A series of images from diverse scenes is used for fusion experiments, and the fusion results are evaluated subjectively and objectively. The evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
Point cloud registration for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed method consists of three main parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on changes in the magnitude of multiscale curvatures obtained using principal component analysis. Then, a feature descriptor is computed for each key point; it consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are optimized using a random sample consensus algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm offers faster calculation speed, higher registration accuracy, and better anti-noise performance.
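The final SVD step is standard enough to show concretely (the Kabsch construction); this sketch assumes the correspondences have already been established and ordered:

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rigid transform (R, t) mapping matched source points
    onto target points via SVD; both inputs are (n, 3) arrays already in
    correspondence order."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # reflection guard
    R = Vt.T @ D @ U.T
    return R, ct - R @ cs                        # apply as R @ p + t
```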
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Information-Theoretical Quantifier of Brain Rhythm Based on Data-Driven Multiscale Representation
2015-01-01
This paper presents a data-driven multiscale entropy measure to reveal the scale-dependent information content of electroencephalogram (EEG) recordings. This work is motivated by previous observations on the nonlinear and nonstationary nature of EEG over multiple time scales. Here, a new framework of entropy measures that accounts for changing dynamics over multiple oscillatory scales is presented. First, to deal with nonstationarity over multiple scales, the EEG recording is decomposed using the empirical mode decomposition (EMD), which is known to be effective for extracting the constituent narrowband components without a predetermined basis. Calculating the Renyi entropy of the probability distributions of the intrinsic mode functions extracted by EMD then leads to a data-driven multiscale Renyi entropy. To validate the performance of the proposed entropy measure, actual EEG recordings from rats (n = 9) experiencing 7-min cardiac arrest followed by resuscitation were analyzed. Simulation and experimental results demonstrate that the use of the multiscale Renyi entropy leads to better discriminative capability for injury levels and improved correlations with the neurological deficit evaluation 72 hours after cardiac arrest, thus suggesting an effective diagnostic and prognostic tool. PMID:26380297
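In code, the measure reduces to one Renyi entropy per EMD mode; the sketch below assumes the PyEMD package, with the entropy order and histogram binning as illustrative choices:

```python
import numpy as np
from PyEMD import EMD  # assumed EMD implementation (pip install EMD-signal)

def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy of the amplitude distribution of one mode."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def multiscale_renyi(eeg):
    """One Renyi value per EMD mode: a data-driven multiscale entropy."""
    return np.array([renyi_entropy(imf)
                     for imf in EMD()(np.asarray(eeg, float))])
```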
2012-09-01
…on transformation field analysis [19], proper orthogonal decomposition [63], eigenstrains [23], and others [1, 29, 39] have brought significant… Commercial finite element software (Abaqus), along with the user material subroutine utility (UMAT), is employed to solve these problems. …Symmetric Coefficients; TFA: Transformation Field Analysis; UMAT: User Material Subroutine.
Enhanced visualization of abnormalities in digital-mammographic images
NASA Astrophysics Data System (ADS)
Young, Susan S.; Moore, William E.
2002-05-01
This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that gives it a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone-scaled image (I'). The LUT is designed to optimally map the code-value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors for enhancing the edge information corresponding to the parenchyma pattern. The output image is formed by passing I' and the enhanced Id through a reconstruction filter bank.
Agile Multi-Scale Decompositions for Automatic Image Registration
NASA Technical Reports Server (NTRS)
Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline
2016-01-01
In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, it imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features is computed. Compared to the previous prototype, this method introduces flexibility in the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is fully coded in C, making it more efficient and portable than the MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, giving a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), the Laplacian score (LS) and an improved support vector machine based binary tree (ISVM-BT). When a fault occurs in a rolling bearing, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, is utilized to quantify the complexity and self-similarity of the time series over a range of scales based on fuzzy entropy. In addition, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically perform fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize different categories and severities of rolling bearing faults.
Multi-scale structural community organisation of the human genome.
Boulos, Rasha E; Tremblay, Nicolas; Arneodo, Alain; Borgnat, Pierre; Audit, Benjamin
2017-04-11
Structural interaction frequency matrices between all genome loci are now experimentally achievable thanks to high-throughput chromosome conformation capture technologies. This gives rise to a new methodological challenge for computational biology, which consists in objectively extracting from these data the structural motifs characteristic of genome organisation. We deployed the fast multi-scale community mining algorithm based on spectral graph wavelets to characterise the networks of intra-chromosomal interactions in human cell lines. We observed that structural domains of all sizes up to chromosome length exist, and demonstrated that the set of structural communities forms a hierarchy of chromosome segments. Hence, at all scales, chromosome folding predominantly involves interactions between neighbouring sites rather than the formation of links between distant loci. Multi-scale structural decomposition of human chromosomes provides an original framework for questioning structural organisation and its relationship to functional regulation across the scales. By construction, the proposed methodology is independent of the precise assembly of the reference genome and is thus directly applicable to genomes whose assembly is not fully determined.
Interacting Multiscale Acoustic Vortices as Coherent Excitations in Dust Acoustic Wave Turbulence
NASA Astrophysics Data System (ADS)
Lin, Po-Cheng; I, Lin
2018-03-01
In this work, using three-dimensional intermittent dust acoustic wave turbulence in a dusty plasma as a platform and multidimensional empirical mode decomposition into different-scale modes in the 2 +1 D spatiotemporal space, we demonstrate the experimental observation of the interacting multiscale acoustic vortices, winding around wormlike amplitude hole filaments coinciding with defect filaments, as the basic coherent excitations for acoustic-type wave turbulence. For different decomposed modes, the self-similar rescaled stretched exponential lifetime histograms of amplitude hole filaments, and the self-similar power spectra of dust density fluctuations, indicate that similar dynamical rules are followed over a wide range of scales. In addition to the intermode acoustic vortex pair generation, propagation, or annihilation, the intra- and intermode interactions of acoustic vortices with the same or opposite helicity, their entanglement and synchronization, are found to be the key dynamical processes in acoustic wave turbulence, akin to the interacting multiscale vortices around wormlike cores observed in hydrodynamic turbulence.
Patched bimetallic surfaces are active catalysts for ammonia decomposition.
Guo, Wei; Vlachos, Dionisios G
2015-10-07
Ammonia decomposition is often used as an archetypical reaction for predicting new catalytic materials and understanding why some reactions are sensitive to a material's structure. Core-shell or surface-segregated bimetallic nanoparticles exhibit outstanding activity for many heterogeneously catalysed reactions, but the reasons remain elusive owing to the difficulties in experimentally characterizing active sites. Here, by performing multiscale simulations of ammonia decomposition on various nickel loadings on platinum (111), we show that the very high activity of core-shell structures requires patches of the guest metal to create and sustain dual active sites: nickel terraces catalyse N-H bond breaking and nickel edge sites drive atomic nitrogen association. The structure sensitivity on these active catalysts depends profoundly on reaction conditions due to kinetically competing relevant elementary reaction steps. We expose a remarkable difference in active sites between transient and steady-state studies and provide insights into optimal material design.
Toward a reaction rate model of condensed-phase RDX decomposition under high temperatures
NASA Astrophysics Data System (ADS)
Schweigert, Igor
2014-03-01
Shock ignition of energetic molecular solids is driven by microstructural heterogeneities, at which even moderate stresses can result in temperatures high enough to initiate material decomposition and the release of chemical energy. Mesoscale modeling of these ``hot spots'' requires a chemical reaction rate model that describes the energy release with sub-microsecond resolution and under a wide range of temperatures. No such model is available even for well-studied energetic materials such as RDX. In this presentation, I will describe an ongoing effort to develop a reaction rate model of condensed-phase RDX decomposition under high temperatures using first-principles molecular dynamics, transition-state theory, and reaction network analysis. This work was supported by the Naval Research Laboratory, by the Office of Naval Research, and by the DOD High Performance Computing Modernization Program Software Application Institute for Multiscale Reactive Modeling of Insensitive Munitions.
Oanca, Gabriel; Stare, Jernej; Mavri, Janez
2017-12-01
This work scrutinizes the kinetics of the decomposition of adrenaline catalyzed by the monoamine oxidase (MAO) A and B enzymes, a process controlling the levels of adrenaline in the central nervous system and other tissues. Experimental kinetic data for the MAO A and B catalyzed decomposition of adrenaline are reported only in the form of the maximum reaction rate. Therefore, we estimated the experimental free energy barriers from the kinetic data of closely related systems using a regression method, as in our previous study. Using multiscale simulation at the Empirical Valence Bond (EVB) level, we studied the chemical reactivity of the MAO A catalyzed decomposition of adrenaline and obtained an activation free energy of 17.3 ± 0.4 kcal/mol. The corresponding value for MAO B is 15.7 ± 0.7 kcal/mol. Both values are in good agreement with the estimated experimental barriers of 16.6 and 16.0 kcal/mol for MAO A and MAO B, respectively. The fact that we reproduced the kinetic data and the preferential catalytic effect of MAO B over MAO A gives additional support to the validity of the proposed hydride transfer mechanism. Furthermore, we demonstrate that adrenaline takes part in the reaction preferentially in its neutral rather than its protonated form, owing to the considerably higher barriers computed for the protonated adrenaline substrate. The results are discussed in the context of the chemical mechanism of the MAO enzymes and possible applications of multiscale simulation to rationalize the effects of MAO activity on adrenaline levels.
Ge, Ni-Na; Wei, Yong-Kai; Ji, Guang-Fu; Chen, Xiang-Rong; Zhao, Feng; Wei, Dong-Qing
2012-11-26
We have performed quantum-based multiscale simulations to study the initial chemical processes of condensed-phase octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under shock wave loading. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. The results show that the initial decomposition of shocked HMX is triggered by N-NO2 bond breaking under low-velocity impact (8 km/s). As the shock velocity increases (11 km/s), homolytic cleavage of the N-NO2 bond is suppressed under high pressure, and C-H bond dissociation becomes the primary pathway for HMX decomposition in its early stages. It is accompanied by the formation of a five-membered ring and hydrogen transfer from the CH2 group to the -NO2 group. Our simulations suggest that the initial chemical processes of shocked HMX depend on the impact velocity. These findings give new insights into the initial decomposition mechanism of HMX upon shock loading at the atomistic level and have important implications for the understanding and development of energetic materials.
EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis
NASA Astrophysics Data System (ADS)
Žvokelj, Matej; Zupan, Samo; Prebil, Ivan
2016-05-01
A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism for multivariate signal denoising and, in combination with Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method well suited to data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions of the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained in an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to extract it effectively, thereby representing a reliable bearing fault detection and diagnosis strategy.
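The scale-by-scale pairing of EEMD and ICA can be sketched as follows, assuming the PyEMD package and scikit-learn; truncating every channel to a common number of modes is a simplification of the paper's alignment of scales:

```python
import numpy as np
from PyEMD import EEMD                     # assumed (pip install EMD-signal)
from sklearn.decomposition import FastICA

def eemd_msica(signals, n_scales=6):
    """Per-channel EEMD, then ICA scale-by-scale across channels, so each
    time scale gets its own multivariate monitoring model. Assumes every
    channel yields at least n_scales IMFs."""
    eemd = EEMD(trials=50)
    imfs = [eemd.eemd(np.asarray(s, float))[:n_scales] for s in signals]
    sources = []
    for k in range(n_scales):
        X = np.vstack([ch[k] for ch in imfs]).T   # samples x channels, scale k
        ica = FastICA(n_components=X.shape[1], random_state=0)
        sources.append(ica.fit_transform(X))      # independent sources at scale k
    return sources   # feed per-scale sources to, e.g., envelope analysis
```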
Modeling Complex Phenomena Using Multiscale Time Sequences
2009-08-24
Statistical fractal measures based on Hurst and Holder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods are combined to characterize behavior at different scales and how these scales relate to each other.
NASA Technical Reports Server (NTRS)
Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.
2017-01-01
In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.
NASA Astrophysics Data System (ADS)
Facheris, L.; Tanelli, S.; Giuli, D.
A method is presented for analyzing storm motion through the application of a nowcasting technique based on radar echo tracking through multiscale correlation. The application of the correlation principle to weather radar image processing - the so-called TREC (Tracking Radar Echoes by Correlation) and derived algorithms - is described in [1] and in the references cited therein. The block matching approach exploited there is typical of video compression applications, whose purpose is to remove the temporal correlation between two subsequent frames of a sequence of images. In particular, the wavelet decomposition approach to motion estimation seems particularly suitable for weather radar maps. In fact, block matching is particularly efficient when the images have a sufficient level of contrast. Though this does not hold for original-resolution radar maps, sufficient contrast can easily be obtained by changing the resolution level by means of the wavelet decomposition. The technique first proposed in [2] (TREMC - Tracking of Radar Echoes by means of Multiscale Correlation) adopts a multiscale, multiresolution, partially overlapped block grid which adapts to the radar reflectivity pattern. Multiresolution decomposition is performed through 2D wavelet-based filtering. Correlation coefficients are calculated after preliminary screening of unreliable data (e.g., those affected by ground clutter or beam shielding), so as to avoid strong undesired motion estimation biases due to the presence of stationary features. Such features are detected by a prior analysis carried out as discussed in [2]. In this paper, motion fields obtained by analyzing precipitation events over the Arno river basin are compared with the related Doppler velocity fields in order to identify growth and decay areas and orographic effects. The data presented have been collected by the weather radar station POLAR 55C sited in Montagnana (Firenze, Italy), a polarimetric C-band system providing absolute and differential reflectivity maps, mean Doppler velocity and Doppler spread maps with a resolution of 125/250 m [3]. [1] Li L., Schmid W. and Joss J., "Nowcasting of motion and growth of precipitation with radar over a complex orography", Journal of Applied Meteorology, vol. 34, pp. 1286-1300, 1995. [2] L. Facheris, S. Tanelli, F. Argenti, D. Giuli, "Wavelet Applications to Multiparameter Weather Radar Analysis", in "Information Processing for Remote Sensing", C.H. Chen Ed., World Scientific Publishing Co., pp. 187-207, 1999. [3] Scarchilli G., Gorgucci E., Giuli D., Facheris L., Freni A. and Vezzani G., "Arno Project: Radar System and objectives", Proceedings 25th International Conference on Radar Meteorology, Paris, France, 24-28 June 1991, pp. 805-808.
On a sparse pressure-flow rate condensation of rigid circulation models
Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.
2015-01-01
Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. The development of next-generation stochastic multiscale models, whose parameters can be learned from the available clinical data under uncertainty, constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol' decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed-loop boundary conditions and of the abdominal aorta with open-loop boundary conditions. PMID:26671219
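The surrogate-fitting step is, at its simplest, a regression from sampled outlet pressures to flows; the sketch below uses plain polynomial least squares as a stand-in for the paper's relevance vector machine, with the degree as an illustrative choice:

```python
import numpy as np

def fit_outlet_surrogate(pressure, flow, degree=3):
    """Polynomial surrogate Q ~ f(P) for one outlet, fitted to sampled
    multiscale-model solutions; returns a callable usable as a boundary
    condition inside a lumped parameter network."""
    V = np.vander(np.asarray(pressure, float), degree + 1)  # design matrix
    coef, *_ = np.linalg.lstsq(V, np.asarray(flow, float), rcond=None)
    return lambda p: np.vander(np.atleast_1d(p).astype(float),
                               degree + 1) @ coef
```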
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-02-13
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of the large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
The Total Variation Regularized L1 Model for Multiscale Decomposition
2006-01-01
…L1 fidelity term, and presented impressive and successful applications of the TV-L1 model to impulsive noise removal and outlier identification. The model has been used to filter 1D signals [3], to remove impulsive (salt-and-pepper) noise [35], to extract textures from natural images [45], and to remove varying… Subsequent developments include the discovery of the usefulness of this model for removing impulsive noise [34, 35, 36] and Chan and Esedoglu's [17] further analysis of this model.
3D deblending of simultaneous source data based on 3D multi-scale shaping operator
NASA Astrophysics Data System (ADS)
Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Gong, Fei; Huang, Weilin
2018-04-01
We propose an iterative three-dimensional (3D) deblending scheme using a 3D multi-scale shaping operator to separate 3D simultaneous source data. The proposed scheme is based on the property that the signal is coherent, whereas the interference is incoherent, in certain domains, e.g., the common receiver domain and the common midpoint domain. In a two-dimensional (2D) blended record, the coherency difference between signal and interference exists in only one spatial direction. Compared with 2D deblending, 3D deblending can exploit more sparsity constraints to obtain better performance; e.g., in a 3D common receiver gather, the coherency difference exists in two spatial directions. Furthermore, owing to their different levels of coherency, signal and interference concentrate in curvelet domains of different scales. In both 2D and 3D blended records, most of the coherent signal is located in the coarse-scale curvelet domain, while most of the incoherent interference is located in the fine-scale curvelet domain. The scale difference is larger in 3D deblending; thus, we apply the multi-scale shaping scheme to further improve the 3D deblending performance. We evaluate the performance of 3D and 2D deblending with the multi-scale and global shaping operators, respectively. One synthetic example and one field-data example demonstrate the advantage of 3D deblending with the 3D multi-scale shaping operator.
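A 1D toy version of the shaping-regularized iteration x ← S(x + λ(d − Γx)) is sketched below, with a scale-weighted wavelet soft-threshold standing in for the curvelet-domain multi-scale shaping operator; the blending operator Γ is user-supplied, and its adjoint is omitted for brevity:

```python
import numpy as np
import pywt

def shaping_operator(x, wavelet="db4", level=4, frac=0.5):
    """Scale-weighted soft threshold: finer scales (assumed to hold the
    incoherent interference) are thresholded harder. A wavelet transform
    stands in for the curvelet transform used in the paper."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = [coeffs[0]]                                  # keep coarsest band
    for j, c in enumerate(coeffs[1:]):                 # coarse -> fine details
        t = frac * np.max(np.abs(c)) * (j + 1) / level
        out.append(pywt.threshold(c, t, mode="soft"))
    return pywt.waverec(out, wavelet)[:len(x)]

def deblend(blended, blend_op, n_iter=30, step=0.5):
    """Iterate x <- S(x + step * (d - Gamma(x))); blend_op applies Gamma."""
    x = np.zeros_like(blended, dtype=float)
    for _ in range(n_iter):
        x = shaping_operator(x + step * (blended - blend_op(x)))
    return x
```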
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to the complicated nonlinear, non-stationary and multi-scale characteristics of such series. To overcome this difficulty and improve prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, component prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noise in the hydrological time series. Then, an improved version of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, a radial basis function neural network (RBFNN) is adopted to predict the trend of each of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, hybrid models without denoising or decomposition, and hybrid models based on other methods such as wavelet analysis (WA). In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulty of forecasting. With its effective denoising and accurate decomposition, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
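A compressed sketch of the four-stage pipeline, assuming PyEMD and scikit-learn; dropping the first EMD mode is a crude denoising proxy, and KernelRidge with an RBF kernel and LinearRegression stand in for the paper's RBFNN and LNN:

```python
import numpy as np
from PyEMD import EMD, EEMD                 # assumed (pip install EMD-signal)
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression

def lagged(x, lags=12):
    """Lag matrix and aligned targets for one-step-ahead regression."""
    return (np.array([x[i:i + lags] for i in range(len(x) - lags)]),
            x[lags:])

def four_stage_forecast(series, lags=12):
    series = np.asarray(series, float)
    denoised = series - EMD()(series)[0]        # 1) drop the noisiest mode
    comps = EEMD(trials=30).eemd(denoised)      # 2) decompose into IMFs
    preds, fits = [], []
    for c in comps:                             # 3) predict each component
        X, y = lagged(c, lags)
        m = KernelRidge(kernel="rbf").fit(X, y)
        preds.append(m.predict(c[-lags:][None, :])[0])
        fits.append(m.predict(X))
    ens = LinearRegression().fit(np.array(fits).T,
                                 lagged(denoised, lags)[1])   # 4) ensemble
    return ens.predict(np.array(preds)[None, :])[0]
```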
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval-valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experimental results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
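Of the building blocks named above, multiscale permutation entropy is easy to make concrete: coarse-grain the signal at each scale, then compute the permutation entropy of the coarse-grained series. A self-contained sketch (function names are ours):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy of a 1D series with embedding order m."""
    counts = {}
    for i in range(len(x) - (m - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def multiscale_permutation_entropy(x, scales=range(1, 11), m=3):
    """Coarse-grain at each scale, then compute PE (one feature per scale)."""
    x = np.asarray(x, dtype=float)
    feats = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # non-overlapping means
        feats.append(permutation_entropy(coarse, m=m))
    return np.array(feats)
```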
Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F; Harger, Matthew; Torabifard, Hedieh; Cisneros, G Andrés; Schnieders, Michael J; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y; Ponder, Jay W; Piquemal, Jean-Philip
2018-01-28
We present Tinker-HP, a massively MPI parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipoles electrostatics. Tinker-HP is an evolution of the popular Tinker package code that conserves its simplicity of use and its reference double precision implementation for CPUs. Grounded on interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems of up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore offers the first high-performance scalable CPU computing environment for the development of next-generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to Quantum Mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The possibilities, performances and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of its new algorithms enabling stable long-timescale polarizable simulations. Overall, a several thousand-fold acceleration over a single-core computation is observed for the largest systems. The extension of the present CPU implementation of Tinker-HP to other computational platforms is discussed.
A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation
NASA Astrophysics Data System (ADS)
Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming
2018-03-01
This article presents a new type of active controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently for simulating different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and then statistical properties of the wind signals in different time scales are analyzed based on the empirical mode decomposition. In addition, the complexity of wind speed and direction time series is also investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griebel, M., E-mail: griebel@ins.uni-bonn.de; Rüttgers, A., E-mail: ruettgers@ins.uni-bonn.de
The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field method (BCF) has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high Deborah number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as strong vortex enhancement, streamline divergence and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
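The underlying recursion that the paper's hardware algorithms decompose is the standard integral image identity ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1), which then yields constant-time box sums from four lookups. A serial reference sketch in Python (the row-parallel hardware decomposition itself is not reproduced here):

```python
import numpy as np

def integral_image(img):
    """Serial recursion via a running row sum plus the row above."""
    h, w = img.shape
    ii = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += int(img[y, x])                  # running sum of this row
            ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over an inclusive rectangle in O(1) using four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# usage: box_sum(integral_image(img), 0, 0, 2, 2) == img[:3, :3].sum()
```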
NASA Astrophysics Data System (ADS)
Fu, Yu-Hang; Bai, Lin; Luo, Kai-Hong; Jin, Yong; Cheng, Yi
2017-04-01
In this work, we propose a general approach for modeling mass transfer and reaction of dilute solute(s) in incompressible three-phase flows by introducing a collision operator in the lattice Boltzmann (LB) method. An LB equation was used to simulate the solute dynamics among three different fluids, in which the newly expanded collision operator was used to depict the interface behavior of dilute solute(s). The multiscale analysis showed that the presented model can recover the macroscopic transport equations derived from the Maxwell-Stefan equation for dilute solutes in three-phase systems. Compared with the analytical equation of state of the solute and its dynamic behavior, these results are proven to constitute a generalized framework to simulate solute distributions in three-phase flows, including a compound soluble in one phase, a compound adsorbed on a single interface, a compound in two phases, and a solute soluble in all three phases. Moreover, numerical simulations of benchmark cases, such as phase decomposition, multilayered planar interfaces, and liquid lenses, were performed to test the stability and efficiency of the model. Finally, the multiphase mass transfer and reaction in Janus droplet transport in a straight microchannel were well reproduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara Gibson
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
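Both operators have direct dense-array analogues. A small numpy sketch, with the Kruskal operator shown for the three-way case via einsum (names are ours, not the report's notation):

```python
import numpy as np

def mode_n_product(X, U, n):
    """n-mode product X x_n U: multiply U (J x I_n) into mode n of tensor X."""
    Xn = np.tensordot(X, U, axes=([n], [1]))   # contracted mode moves to the end
    return np.moveaxis(Xn, -1, n)

def tucker_operator(G, factors):
    """[[G; U1, ..., UN]]: n-mode multiply every factor matrix into the core G."""
    X = G
    for n, U in enumerate(factors):
        X = mode_n_product(X, U, n)
    return X

def kruskal_operator(A, B, C):
    """Sum of outer products of corresponding columns (three-way case)."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)

# usage: tucker_operator(G, [U1, U2, U3]) with G (3,4,5), U1 (10,3),
# U2 (11,4), U3 (12,5) returns a (10, 11, 12) tensor
```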
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, where the modeling of coefficients is enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Thomas; Efendiev, Yalchin; Tchelepi, Hamdi
2016-05-24
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.
Multiscale analysis and computation for flows in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efendiev, Yalchin; Hou, T. Y.; Durlofsky, L. J.
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2017-12-01
Most existing soil carbon cycle models have modeled the moisture and temperature dependence of soil respiration using deterministic response functions. However, empirical data suggest abundant variability in both of these dependencies. Here we use the recently developed SUPECA (Synthesizing Unit and Equilibrium Chemistry Approximation) theory and a published dynamic energy budget based microbial model to investigate how soil carbon decomposition responds to changes in soil moisture and temperature under the influence of organo-mineral interactions. We found that both the temperature and moisture responses are hysteretic and cannot be represented by deterministic functions. We then evaluate how the multi-scale variability in temperature and moisture forcing affects soil carbon decomposition. Our results indicate that when the model is run in scenarios mimicking laboratory incubation experiments, the often-observed temperature and moisture response functions can be well reproduced. However, when such response functions are used for model extrapolation involving more transient variability in temperature and moisture forcing (as found in real ecosystems), the dynamic model that explicitly accounts for hysteresis in the temperature and moisture dependency produces significantly different estimates of soil carbon decomposition, suggesting there are large biases in models that do not resolve such hysteresis. We call for more studies on organo-mineral interactions to improve the modeling of such hysteresis.
Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine
2016-08-18
We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
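The onion decomposition can be sketched directly from the description above: run the usual k-core peeling, but record the pass (layer) at which each vertex is removed. A plain-Python sketch over an adjacency-dict graph:

```python
def onion_decomposition(adj):
    """Onion decomposition: record the peeling stage (layer) of every vertex.

    adj : dict mapping each vertex to a set of neighbours (undirected graph).
    Returns (coreness, layer) dicts; coreness matches the usual k-core number.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core, layer = {}, {}
    k, current_layer = 0, 0
    while remaining:
        k = max(k, min(deg[v] for v in remaining))   # next core threshold
        peel = [v for v in remaining if deg[v] <= k]
        while peel:                                  # each pass is one layer
            current_layer += 1
            for v in peel:
                core[v], layer[v] = k, current_layer
                remaining.remove(v)
            for v in peel:
                for u in adj[v]:
                    if u in remaining:
                        deg[u] -= 1
            peel = [v for v in remaining if deg[v] <= k]
    return core, layer

# usage: a triangle with a pendant node puts the pendant in layer 1 (core 1)
# and the triangle in layer 2 (core 2).
```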
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows, I: Basic Theory
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.
2003-01-01
The objective of this paper is to extend our recently developed highly parallelizable nonlinear stable high order schemes for complex multiscale hydrodynamic applications to the viscous MHD equations. These schemes employed multiresolution wavelets as adaptive numerical dissipation controls to limit the amount of, and to aid the selection and/or blending of, the appropriate types of dissipation to be used. The new scheme is formulated for both the conservative and non-conservative form of the MHD equations in curvilinear grids. The four advantages of the present approach over existing MHD schemes reported in the open literature are as follows. First, the scheme is constructed for long-time integrations of shock/turbulence/combustion MHD flows. Available schemes are too diffusive for long-time integrations and/or turbulence/combustion problems. Second, unlike existing schemes for the conservative MHD equations which suffer from ill-conditioned eigen-decompositions, the present scheme makes use of a well-conditioned eigen-decomposition obtained from a minor modification of the eigenvectors of the non-conservative MHD equations to solve the conservative form of the MHD equations. Third, this approach of using the non-conservative eigensystem when solving the conservative equations also works well in the context of standard shock-capturing schemes for the MHD equations. Fourth, a new approach to minimize the numerical error of the divergence-free magnetic condition for high order schemes is introduced. Numerical experiments with typical MHD model problems revealed the applicability of the newly developed schemes for the MHD equations.
Simplifying Differential Equations for Multiscale Feynman Integrals beyond Multiple Polylogarithms.
Adams, Luise; Chaubey, Ekta; Weinzierl, Stefan
2017-04-07
In this Letter we exploit factorization properties of Picard-Fuchs operators to decouple differential equations for multiscale Feynman integrals. The algorithm reduces the differential equations to blocks of the size of the order of the irreducible factors of the Picard-Fuchs operator. As a side product, our method can be used to easily convert the differential equations for Feynman integrals which evaluate to multiple polylogarithms to an ϵ form.
Dynamics of a neural system with a multiscale architecture
Breakspear, Michael; Stam, Cornelis J
2005-01-01
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
Components for Atomistic-to-Continuum Multiscale Modeling of Flow in Micro- and Nanofluidic Systems
Adalsteinsson, Helgi; Debusschere, Bert J.; Long, Kevin R.; ...
2008-01-01
Micro- and nanofluidics pose a series of significant challenges for science-based modeling. Key among those are the wide separation of length- and timescales between interface phenomena and bulk flow and the spatially heterogeneous solution properties near solid-liquid interfaces. It is not uncommon for characteristic scales in these systems to span nine orders of magnitude from the atomic motions in particle dynamics up to evolution of mass transport at the macroscale level, making explicit particle models intractable for all but the simplest systems. Recently, atomistic-to-continuum (A2C) multiscale simulations have gained a lot of interest as an approach to rigorously handle particle-level dynamics while also tracking evolution of large-scale macroscale behavior. While these methods are clearly not applicable to all classes of simulations, they are finding traction in systems in which tight-binding, and physically important, dynamics at system interfaces have complex effects on the slower-evolving large-scale evolution of the surrounding medium. These conditions allow decomposition of the simulation into discrete domains, either spatially or temporally. In this paper, we describe how features of domain decomposed simulation systems can be harnessed to yield flexible and efficient software for multiscale simulations of electric field-driven micro- and nanofluidics.
Multi-scale clustering of functional data with application to hydraulic gradients in wetlands
Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.
2011-01-01
A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series that are of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.
Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coon, Ethan; Porter, Mark L.; Kang, Qinjun
2012-06-18
In spatially and temporally localized instances, capturing sub-reservoir scale information is necessary. Capturing sub-reservoir scale information everywhere is neither necessary, nor computationally possible. The lattice Boltzmann method (LBM) is used for solving pore-scale systems: at the pore scale, LBM provides an extremely scalable, efficient way of solving Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition: by leveraging the interpolations implied by the pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). The EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression demonstrates the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
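The restricted-smoothing construction can be caricatured in a few lines: start from block indicator functions and apply damped Jacobi smoothing while renormalizing to preserve a partition of unity. The sketch below omits the local support restriction that gives MsRSB its name (and its efficiency), so it is illustrative only; all names are ours.

```python
import numpy as np

def msrsb_prolongation(A, partition, omega=2.0 / 3.0, n_iter=100):
    """Simplified restricted-smoothing basis construction (illustrative only).

    A         : (n, n) fine-scale matrix (e.g., a finite-volume discretization)
    partition : length-n integer array assigning each fine cell to a coarse block
    Real MsRSB additionally confines each basis function to a local support
    region around its block; that restriction is omitted here.
    """
    n = A.shape[0]
    m = int(partition.max()) + 1
    P = np.zeros((n, m))
    P[np.arange(n), partition] = 1.0                # start from block indicators
    Dinv = 1.0 / np.diag(A)
    for _ in range(n_iter):
        P = P - omega * (Dinv[:, None] * (A @ P))   # damped Jacobi smoothing
        P = P / P.sum(axis=1, keepdims=True)        # keep a partition of unity
    return P
```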
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
NASA Astrophysics Data System (ADS)
Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang
2017-04-01
Wheel bearings are essential mechanical components of trains, and fault detection of the wheel bearing is of great significance for avoiding economic loss and casualties. However, considering the operating conditions, detection and extraction of the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to solve it. The morphology AVG-Hat operator can not only greatly suppress the interference of the strong background noise, but also enhance the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtering signal processed by the multi-scale AVG-Hat MF. It provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighted coefficients of the different scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (e.g., outer race fault, inner race fault and rolling element fault). The results show that the proposed method improves the extraction of fault features effectively compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
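A hedged sketch of the filtering core, assuming the AVG-Hat operator averages the white and black top-hats (equivalently, half the closing-minus-opening difference) and that the multi-scale filter is a weighted sum over structuring-element lengths; the paper tunes those weights with PSO, while fixed uniform weights are used here.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def avg_hat(signal, size):
    """Average of white and black top-hats with a flat SE of given length."""
    opening = grey_opening(signal, size=size)
    closing = grey_closing(signal, size=size)
    white_hat = signal - opening
    black_hat = closing - signal
    return 0.5 * (white_hat + black_hat)     # equals (closing - opening) / 2

def multiscale_avg_hat(signal, sizes=(3, 5, 7, 9), weights=None):
    """Weighted sum over SE scales; the paper selects the weights via PSO."""
    signal = np.asarray(signal, dtype=float)
    if weights is None:
        weights = np.ones(len(sizes)) / len(sizes)
    return sum(w * avg_hat(signal, s) for w, s in zip(weights, sizes))
```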
Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering
NASA Technical Reports Server (NTRS)
Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)
2001-01-01
Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
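The processing chain wavelet scalogram -> SVD -> moment features can be sketched generically with PyWavelets' continuous wavelet transform; this is an illustration of the idea, not the paper's exact TSVD transformation, and all names are ours.

```python
import numpy as np
import pywt

def scalogram_svd_features(x, scales=None, wavelet="morl", rank=4):
    """Wavelet energy-density matrix -> SVD -> simple moment features."""
    scales = np.arange(1, 64) if scales is None else scales
    coef, _ = pywt.cwt(np.asarray(x, dtype=float), scales, wavelet)
    energy = np.abs(coef) ** 2                 # discrete energy density
    U, s, Vt = np.linalg.svd(energy, full_matrices=False)
    weights = s[:rank] / s[:rank].sum()        # normalized singular spectrum
    moments = []                               # time center-of-mass of each
    for v in Vt[:rank]:                        # leading right singular vector
        p = v**2 / np.sum(v**2)
        moments.append(float(np.sum(np.arange(len(v)) * p)))
    return weights, np.array(moments)
```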
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors to those of the full time series, i.e., these characteristics are rather robust across multiple time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled over a couple of days, and by the medium mode when returns are sampled over dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
Multi-scale study of the isotope effect in ISTTOK
NASA Astrophysics Data System (ADS)
Liu, B.; Silva, C.; Figueiredo, H.; Pedrosa, M. A.; van Milligen, B. Ph.; Pereira, T.; Losada, U.; Hidalgo, C.
2016-05-01
The isotope effect, namely the isotope dependence of plasma confinement, is still one of the principal scientific conundrums facing the magnetic fusion community. We have investigated the impact of isotope mass on multi-scale mechanisms, including the characterization of radial correlation lengths (L_r) and long-range correlations (LRC) of plasma fluctuations using a multi-array Langmuir probe system, in hydrogen (H) and deuterium (D) plasmas in the ISTTOK tokamak. We found that when changing the plasma composition from H-dominated to D-dominated, the LRC amplitude increased markedly (10-30%) and L_r increased slightly (~10%). The particle confinement also improved by about 50%. The changes of LRC and L_r are congruent with previous findings in the TEXTOR tokamak (Xu et al 2013 Phys. Rev. Lett. 110 265005). In addition, using biorthogonal decomposition, both geodesic acoustic modes and very low frequency (<5 kHz) coherent modes were found to contribute to the LRC.
A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains
NASA Astrophysics Data System (ADS)
Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.
2018-02-01
A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improving quality) while requiring less time to evaluate mammograms for most patients (lowering costs).
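The reconstruction-with-weights step can be sketched with PyWavelets: decompose, scale the detail coefficients with level-dependent gains, and reconstruct. Constant gains are used below, where the paper also considers linear and exponential weightings; parameter names are ours.

```python
import numpy as np
import pywt

def multiscale_enhance(img, wavelet="db2", level=3, gains=(4.0, 2.0, 1.0)):
    """Amplify detail coefficients with scale-dependent gains, then reconstruct.

    gains is given finest-first; it is reversed below to match wavedec2's
    coarsest-first ordering of detail levels.
    """
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    out = [coeffs[0]]                          # leave the coarse approximation
    gains_coarse_first = gains[::-1]
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        g = gains_coarse_first[lvl] if lvl < len(gains) else 1.0
        out.append((g * cH, g * cV, g * cD))
    return pywt.waverec2(out, wavelet)
```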
Change Detection of Remote Sensing Images by DT-CWT and MRF
NASA Astrophysics Data System (ADS)
Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.
2017-05-01
To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-tree Complex Wavelet Transform (DT-CWT) and a Markov Random Field (MRF) model. This method first performs multi-scale decomposition of the difference image with the DT-CWT and extracts the change characteristics in high-frequency regions using an MRF-based segmentation algorithm. Then, our method estimates the final maximum a posteriori (MAP) solution according to the iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the above segmentation results of each layer using the proposed fusion rule to obtain the mask of the final change detection result. Experimental results show that the proposed method achieves higher precision and robustness.
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, signal is coherent, whereas interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Due to the different sparsity, coefficients of signal and interference locate in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains the sparsity to separate the blended record. In the domains where signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domains where interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is suppressed evidently at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak enough to guarantee the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
He, Min; Yan, Pan; Yang, Zhi-Yu; Zhang, Zhi-Min; Yang, Tian-Biao; Hong, Liang
2018-03-15
Head Space/Solid Phase Micro-Extraction (HS-SPME) coupled with Gas Chromatography/Mass Spectrometry (GC/MS) was used to determine the volatile/heat-labile components in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes. Facing co-eluting peaks in k samples, a trilinear structure was reconstructed to obtain the second-order advantage. Handling the retention time (RT) shift with multi-channel detection signals for different samples is vital to maintaining the trilinear structure, thus a modified multiscale peak alignment (mMSPA) method is proposed in this paper. The peak position and peak width of the representative ion profile were first detected by mMSPA using the Continuous Wavelet Transform with the Haar wavelet as the mother wavelet (Haar CWT). Then, the raw shift was confirmed by Fast Fourier Transform (FFT) cross-correlation calculation. To obtain the optimal shift, Haar CWT was again used to detect subtle deviations, which were then incorporated into the calculation. Here, to ensure that there is no peak shape alteration, the alignment was performed in local domains of the data matrices, and all data points in the peak zone were moved via linear interpolation in the non-peak parts. Finally, the chemical components of interest in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes were analyzed by HS-SPME-GCMS and mMSPA-alternating trilinear decomposition (ATLD) resolution. As a result, the concentration variation between herbs and their pharmaceutical products can provide a scientific basis for establishing quality standards for traditional Chinese medicines.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g., SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by clock glitches. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period, making it suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
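The separability trick mentioned above (one 2-D convolution as two 1-D passes) is easy to show in software, even though the paper implements it in FPGA logic; a sketch with hypothetical parameter choices:

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    radius = int(3 * sigma) if radius is None else radius
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def separable_gaussian(img, sigma):
    """2-D Gaussian filtering as two 1-D convolutions (column pass, row pass)."""
    k = gaussian_kernel_1d(sigma)
    tmp = convolve1d(np.asarray(img, dtype=float), k, axis=0)  # vertical pass
    return convolve1d(tmp, k, axis=1)                          # horizontal pass

def multiscale_gaussian(img, sigmas=(1.0, 2.0, 4.0)):
    """One filtered output per sigma, as in the three-scale FPGA design."""
    return [separable_gaussian(img, s) for s in sigmas]
```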
Lu, Zhao; Sun, Jing; Butts, Kenneth
2016-02-03
A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
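The paper's closed-form projection operator kernel is not reproduced here; as a stand-in, the sketch below plugs the classical product-form translation-invariant wavelet kernel into kernel ridge regression to show how such a kernel is used for nonlinear identification. All names and constants are assumptions.

```python
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """k(x, y) = prod_j h((x_j - y_j) / a) with the classical mother wavelet
    h(u) = cos(1.75 u) * exp(-u^2 / 2) (not the paper's projection kernel)."""
    diff = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff**2 / 2), axis=-1)

def kernel_ridge_fit(X, y, lam=1e-3, a=1.0):
    """Fit kernel ridge regression; returns a predictor closure."""
    K = wavelet_kernel(X, X, a)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda X_new: wavelet_kernel(X_new, X, a) @ alpha
```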
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), a collimated scanning X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is well suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method can suppress noise and improve the correlation and signal-to-noise ratio, proving itself the better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014)
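The core idea of controlling noise amplification scale by scale can be illustrated with a plain wavelet-shrinkage sketch: coefficients below a scale-dependent significance threshold are treated as noise (they fall outside the "multiresolution support") and zeroed before reconstruction. This is a generic PyWavelets illustration of the multiresolution-support idea, not the MSME/EMSME algorithm itself; the wavelet family and the 3-sigma threshold are assumptions.

```python
import numpy as np
import pywt

def multiresolution_support_denoise(signal, wavelet='db4', levels=4, k=3.0):
    """Zero wavelet detail coefficients that are insignificant at each scale."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    # Robust noise estimate from the finest detail scale (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    denoised = [coeffs[0]]  # keep the coarse approximation untouched
    for d in coeffs[1:]:
        mask = np.abs(d) > k * sigma  # "multiresolution support" at this scale
        denoised.append(d * mask)
    return pywt.waverec(denoised, wavelet)[: len(signal)]

noisy = np.random.poisson(5.0, size=1024).astype(float)
clean = multiresolution_support_denoise(noisy)
```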
An Integrated Chemical Reactor-Heat Exchanger Based on Ammonium Carbamate (POSTPRINT)
2012-10-01
With the scrubber and exhaust operating, the test cell ammonia concentration remains below 5 ppm. To further reduce NH3 release into the test cell... material has a high decomposition enthalpy and exhibits decomposition over a wide range of temperatures. AC decomposition produces ammonia and carbon... installation due to toxic gas (ammonia) generation during operation. Therefore, the experiment is intended to be remotely operated. A secondary control
Influences of operational practices on municipal solid waste landfill storage capacity.
Li, Yu-Chao; Liu, Hai-Long; Cleall, Peter John; Ke, Han; Bian, Xue-Cheng
2013-03-01
The quantitative effects of three operational factors, namely initial compaction, decomposition conditions and leachate level, on municipal solid waste (MSW) landfill settlement and storage capacity are investigated in this article via consideration of a hypothetical case. The implemented model for calculating landfill compression displacement is able to consider decreases in compressibility induced by biological decomposition and the load dependence of decomposition compression for the MSW. According to the investigation, a significant increase in storage capacity can be achieved by intensive initial compaction, adjustment of decomposition conditions and lowering of leachate levels. The quantitative investigation presented aims to encourage landfill operators to improve management to enhance storage capacity. Furthermore, improving initial compaction and creating a preferential decomposition condition can also significantly reduce operational and post-closure settlements, respectively, which helps protect leachate and gas management infrastructure and monitoring equipment in modern landfills.
Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Simons, F. J.; Bozdag, E.
2014-12-01
We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be a key factor in mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct the large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map heterogeneities of greater complexity. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions considering body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model 100 m in depth is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
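A minimal sketch of the coarse-to-fine strategy: both observed and predicted seismograms are projected onto successively finer wavelet approximation spaces, and the waveform-difference misfit is evaluated scale by scale, starting from the coarsest. Illustrative only (NumPy + PyWavelets); the wavelet family and decomposition depth are exactly the kind of choices the abstract reports tuning, so the values below are assumptions.

```python
import numpy as np
import pywt

def scale_restricted(trace, wavelet='db6', total_levels=6, keep_levels=0):
    """Reconstruct a seismogram keeping the approximation and the
    `keep_levels` coarsest detail scales; finer details are zeroed."""
    coeffs = pywt.wavedec(trace, wavelet, level=total_levels)
    for i in range(1 + keep_levels, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)[: len(trace)]

def multiscale_misfits(observed, predicted, total_levels=6):
    """Waveform-difference misfit from coarse to fine representations."""
    return [
        0.5 * np.sum((scale_restricted(observed, keep_levels=k, total_levels=total_levels)
                      - scale_restricted(predicted, keep_levels=k, total_levels=total_levels)) ** 2)
        for k in range(total_levels + 1)
    ]

obs = np.random.randn(2048)
pred = obs + 0.1 * np.random.randn(2048)
print(multiscale_misfits(obs, pred))  # misfit grows as finer scales are admitted
```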
Multiscale Simulations of Magnetic Island Coalescence
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2010-01-01
We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partitioning, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a system of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.
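To make concrete what such a framework hides from the user, here is a serial toy version of the bookkeeping involved: a 1-D mesh split into subdomains with one-cell ghost layers that are refreshed before each update step. PyAMR's actual API is not given in this abstract, so everything below is a generic illustration, not PyAMR code.

```python
import numpy as np

def split_with_ghosts(u, nparts):
    """Partition a 1-D array into subdomains padded by one ghost cell per side."""
    chunks = np.array_split(u, nparts)
    return [np.concatenate(([0.0], c, [0.0])) for c in chunks]

def exchange_ghosts(parts):
    """Copy neighbour boundary values into ghost cells (reflective at the walls)."""
    for i, p in enumerate(parts):
        p[0] = parts[i - 1][-2] if i > 0 else p[1]
        p[-1] = parts[i + 1][1] if i < len(parts) - 1 else p[-2]

def diffuse_step(parts, alpha=0.25):
    """One explicit diffusion update on each subdomain's interior cells."""
    exchange_ghosts(parts)
    for p in parts:
        p[1:-1] += alpha * (p[2:] - 2 * p[1:-1] + p[:-2])

u = np.zeros(64); u[32] = 1.0
parts = split_with_ghosts(u, nparts=4)
for _ in range(100):
    diffuse_step(parts)
solution = np.concatenate([p[1:-1] for p in parts])
```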
Reactive Goal Decomposition Hierarchies for On-Board Autonomy
NASA Astrophysics Data System (ADS)
Hartmann, L.
2002-01-01
As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. 
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals taking into account existing faults, reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and inter-spacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the fpga implementation.
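A compact sketch of the data structure the paper describes: each goal carries an activation condition and an ordered list of decompositions, each decomposition a gating condition and a list of subgoals; execution walks the hierarchy, choosing the first decomposition whose gate is true and propagating failure upward. The class names, the dict-based state, and the boolean termination convention are illustrative choices, not the flight implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Decomposition:
    gate: Callable[[dict], bool]                            # gating condition on state
    subgoals: List[Union["Goal", Callable[[dict], bool]]]   # goals or primitive actions

@dataclass
class Goal:
    name: str
    activation: Callable[[dict], bool]                      # global condition for the goal
    decompositions: List[Decomposition] = field(default_factory=list)

    def execute(self, state: dict) -> bool:
        """Try the first decomposition whose gate holds; run subgoals in sequence."""
        if not self.activation(state):
            return False
        for d in self.decompositions:
            if d.gate(state):
                for sub in d.subgoals:
                    ok = sub.execute(state) if isinstance(sub, Goal) else sub(state)
                    if not ok:
                        return False  # failure propagates up the hierarchy
                return True
        return False

# Toy usage: warm up a heater (prerequisite) before taking a measurement.
warm = Goal("warm", lambda s: True,
            [Decomposition(lambda s: s["temp"] < 20,
                           [lambda s: s.update(temp=25) or True]),
             Decomposition(lambda s: True, [lambda s: True])])
measure = Goal("measure", lambda s: s["power"] > 0,
               [Decomposition(lambda s: True, [warm, lambda s: True])])
print(measure.execute({"temp": 5, "power": 1}))  # True
```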
A general approach to regularizing inverse problems with regional data using Slepian wavelets
NASA Astrophysics Data System (ADS)
Michel, Volker; Simons, Frederik J.
2017-12-01
Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth's surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are only regionally given. In particular, wavelet-based multiscale techniques can be used. An example of the latter case is elaborated theoretically and tested on two synthetic numerical examples.
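A minimal sketch of the kind of svd-based regularization that becomes available once a singular system is at hand: truncated-SVD inversion of a noisy, ill-conditioned linear problem. This is the standard textbook construction, not the paper's Slepian-localized operator; the smoothing-kernel forward operator and the truncation level are illustrative.

```python
import numpy as np

def tsvd_solve(A, y, k):
    """Regularize A x = y by keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Inverting tiny singular values amplifies noise; truncate them instead.
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# Ill-conditioned forward operator (Gaussian smoothing kernel) with noisy data.
n = 100
x_grid = np.linspace(0, 1, n)
A = np.exp(-100 * (x_grid[:, None] - x_grid[None, :]) ** 2)
x_true = np.sin(2 * np.pi * x_grid)
y = A @ x_true + 1e-3 * np.random.randn(n)
x_rec = tsvd_solve(A, y, k=20)  # k trades bias against noise amplification
```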
This paper examines the operational performance of the Community Multiscale Air Quality (CMAQ) model simulations for 2002 - 2006 using both 36-km and 12-km horizontal grid spacing, with a primary focus on the performance of the CMAQ model in predicting wet deposition of sulfate (...
The Models 3 / Community Multiscale Model for Air Quality (CMAQ) has been designed for one-atmosphere assessments for multiple pollutants including ozone (O3), particulate matter (PM10, PM2.5), and acid / nutrient deposition. In this paper we report initial results of our evalu...
2007-11-04
[Figure residue from Tadmor, Nezzar & Vese, "hierarchical decomposition of images": panels showing the original image f, the restoration uRO with residual vRO + 128 (rmse = 0.1066; RO parameters λ = 2000, h = 1, Δt = 0.1), and the hierarchical recovery as partial sums Σ uλi over increasing scales.]
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.
2010-03-01
Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through the segmentation of lesions. The algorithm separates non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.
Shan, Tzu-Ray; Wixom, Ryan R; Mattsson, Ann E; Thompson, Aidan P
2013-01-24
The dependence of the reaction initiation mechanism of pentaerythritol tetranitrate (PETN) on shock orientation and shock strength is investigated with molecular dynamics simulations using a reactive force field and the multiscale shock technique. In the simulations, a single crystal of PETN is shocked along the [110], [001], and [100] orientations with shock velocities in the range 3-10 km/s. Reactions occur with shock velocities of 6 km/s or stronger, and reactions initiate through the dissociation of nitro and nitrate groups from the PETN molecules. The most sensitive orientation is [110], while [100] is the most insensitive. For the [001] orientation, PETN decomposition via nitro group dissociation is the dominant reaction initiation mechanism, while for the [110] and [100] orientations the decomposition is via mixed nitro and nitrate group dissociation. For shock along the [001] orientation, we find that CO-NO2 bonds initially acquire more kinetic energy, facilitating nitro dissociation. For the other two orientations, C-ONO2 bonds acquire more kinetic energy, facilitating nitrate group dissociation.
Singular value decomposition based feature extraction technique for physiological signal analysis.
Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C
2012-06-01
Multiscale entropy (MSE) is one of the popular techniques for calculating and describing the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and the support vector machine (SVM) is adopted to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.
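One common way to turn a 1-D signal into an SVD feature vector is to build a time-delay (Hankel) matrix and use its leading singular values as features; whether this matches the paper's exact construction is an assumption, and the two synthetic classes below merely stand in for CHF vs. healthy recordings.

```python
import numpy as np
from sklearn.svm import SVC

def svd_features(signal, window=32, n_sv=10):
    """Leading singular values of the signal's time-delay (Hankel) matrix."""
    H = np.lib.stride_tricks.sliding_window_view(signal, window)
    s = np.linalg.svd(H, compute_uv=False)
    return s[:n_sv] / s.sum()  # normalized singular spectrum as features

# Toy two-class problem: noise-like vs. oscillatory signals (illustrative only).
rng = np.random.default_rng(0)
X = [svd_features(rng.standard_normal(512)) for _ in range(40)] + \
    [svd_features(np.sin(np.linspace(0, 60, 512)) + 0.3 * rng.standard_normal(512))
     for _ in range(40)]
y = [0] * 40 + [1] * 40
clf = SVC(kernel='rbf').fit(X[::2], y[::2])   # train on every other sample
print(clf.score(X[1::2], y[1::2]))            # test on the held-out half
```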
Wang, Jinjia; Liu, Yuan
2015-04-01
This paper presents a feature extraction method based on multivariate empirical mode decomposition (MEMD) combined with power spectrum features, aimed at the non-stationary electroencephalogram (EEG) or magnetoencephalogram (MEG) signals in brain-computer interface (BCI) systems. Firstly, we utilized the MEMD algorithm to decompose multichannel brain signals into a series of intrinsic mode functions (IMF), which are approximately stationary and multi-scale. Then we extracted power features from each IMF and reduced them to a lower dimension using principal component analysis (PCA). Finally, we classified the motor imagery tasks with a linear discriminant analysis classifier. The experimental verification showed that the correct recognition rates for the two-class and four-class tasks of BCI competition III and competition IV reached 92.0% and 46.2%, respectively, which were superior to those of the competition winners. The experiments proved that the proposed method is reasonably effective and stable, and provides a new way of performing feature extraction.
Asymmetric multiscale detrended fluctuation analysis of California electricity spot price
NASA Astrophysics Data System (ADS)
Fan, Qingju
2016-01-01
In this paper, we develop a new method called asymmetric multiscale detrended fluctuation analysis, which is an extension of asymmetric detrended fluctuation analysis (A-DFA) and can assess the asymmetric correlation properties of series over a variable scale range. We investigate the asymmetric correlations in the California 1999-2000 power market after filtering out some periodic trends by empirical mode decomposition (EMD). Our findings show the coexistence of symmetric and asymmetric correlations in the price series of 1999 and strong asymmetric correlations in 2000. Moreover, we detect subtle correlation properties of the upward and downward price series for most larger scale intervals in 2000. Meanwhile, the fluctuations of Δα(s) (asymmetry) and |Δα(s)| (absolute asymmetry) are more significant in 2000 than in 1999 for larger scale intervals, while they have similar characteristics for smaller scale intervals. We conclude that the strong asymmetry and the different correlation properties of the upward and downward price series for larger scale intervals in 2000 have important implications for the collapse of the California power market, and our findings shed new light on the underlying mechanisms of power prices.
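For reference, a compact NumPy implementation of standard detrended fluctuation analysis, the building block that A-DFA and its asymmetric multiscale extension generalize (the asymmetric variant additionally conditions the fluctuation function on upward vs. downward local trends, which is omitted here).

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: return F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Least-squares linear detrending within each segment (vectorized).
        coef = np.polyfit(t, segs.T, 1)
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    return np.array(F)

# Scaling exponent alpha from the log-log slope of F(s).
x = np.random.randn(10000)
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # ~0.5 for white noise
```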
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled over a couple of days, and by the medium mode when returns are sampled over dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested on five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
Ge, Ni-Na; Wei, Yong-Kai; Song, Zhen-Fei; Chen, Xiang-Rong; Ji, Guang-Fu; Zhao, Feng; Wei, Dong-Qing
2014-07-24
Molecular dynamics simulations in conjunction with the multiscale shock technique (MSST) are performed to study the initial chemical processes and the anisotropy of shock sensitivity of condensed-phase HMX under shock loadings applied along the a, b, and c lattice vectors. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. Our results show that there is a difference between lattice vector a (or c) and lattice vector b in the response to a shock wave velocity of 11 km/s, which is investigated through the reaction temperature and the relative sliding rate between adjacent slipping planes. The responses along lattice vectors a and c are similar to each other, with reaction temperatures of up to 7000 K, but quite different along lattice vector b, where the reaction temperature reaches only 4000 K. Compared with the relative sliding rates for shock wave propagation along lattice vectors a (18 Å/ps) and c (21 Å/ps), the relative sliding rate between adjacent slipping planes along lattice vector b is only 0.2 Å/ps. This small relative sliding rate between adjacent slipping planes causes the temperature and energy under shock loading to increase at a slower rate, which is the main reason for the lower sensitivity under shock wave compression along lattice vector b. In addition, C-H bond dissociation is the primary pathway for HMX decomposition in the early stages under high shock loading from various directions. Compared with the observations for shock velocities V(imp) = 10 and 11 km/s, the homolytic cleavage of the N-NO2 bond was clearly suppressed with increasing pressure.
He, Zheng-Hua; Chen, Jun; Ji, Guang-Fu; Liu, Li-Min; Zhu, Wen-Jun; Wu, Qiang
2015-08-20
Despite extensive efforts to study the decomposition mechanism of HMX under extreme conditions, an intrinsic understanding of the mechanical and chemical response processes inducing the initial chemical reaction has not yet been achieved. In this work, the microscopic dynamic response and initial decomposition of β-HMX with a (1 0 0) surface and molecular vacancies under shock conditions were explored by means of the self-consistent-charge density-functional tight-binding (SCC-DFTB) method in conjunction with the multiscale shock technique (MSST). The evolutions of various bond lengths and charge transfers were analyzed to explore and understand the initial reaction mechanism of HMX. Our results revealed that the C-N bond close to the major axis had less compression sensitivity and higher stretch activity. Charge was transferred mainly from the N-NO2 group along the minor axis and from H atoms to C atoms during the early compression process. The initial reaction of HMX started primarily with the fission of the molecular ring at the site of the C-N bond close to the major axis. Further breaking of the molecular ring enhanced intermolecular interactions and promoted the cleavage of C-H and N-NO2 bonds. More significantly, the dynamic response behavior clearly depended on the angle between the chemical bond and the shock direction.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
This paper presents a comparison of the operational performance of two Community Multiscale Air Quality (CMAQ) model v4.7 simulations that utilize input data from the 5th generation Mesoscale Model MM5 and the Weather Research and Forecasting (WRF) meteorological models.
Chiverton, John P; Ige, Olubisi; Barnett, Stephanie J; Parry, Tony
2017-11-01
This paper is concerned with the modeling and analysis of the orientation of, and distance between, steel fibers in X-ray micro-tomography data. The advantage of combining both orientation and separation in a model is that it provides a detailed understanding of how the steel fibers are arranged that is easy to compare across specimens. The developed models are designed to summarize the randomness of the orientation distribution of the steel fibers both locally and across an entire volume based on multiscale entropy. Theoretical modeling, simulation, and application to real imaging data are shown here. The theoretical modeling of multiscale entropy for orientation includes a proof showing the final form of the multiscale entropy taken over a linear range of scales. A series of image processing operations are also included to overcome inter-slice connectivity issues and to help derive the statistical descriptions of the orientation distributions of the steel fibers. The results demonstrate that multiscale entropy provides unique insights into both simulated and real imaging data of steel fiber reinforced concrete.
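For data summarized as a 1-D series, multiscale entropy is typically computed by coarse-graining the series at successive scales and evaluating sample entropy at each scale; a minimal NumPy version of this standard Costa-style construction is sketched below. It is an assumed generic formulation, not the paper's orientation-specific variant.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m+1 points (tolerance r = r_frac * std)."""
    r = r_frac * np.std(x)
    def _count(length):
        emb = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(emb)) / 2  # exclude self-matches
    B, A = _count(m), _count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return np.array(out)

mse_curve = multiscale_entropy(np.random.randn(2000))
```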
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
A novel fruit shape classification method based on multi-scale analysis
NASA Astrophysics Data System (ADS)
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns in automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description, and explore the relationship between an object's shape and the energy distribution of its boundary at multiple scales for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; and 3) energy decomposition by wavelets and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point was given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. In experiments, the method separated clearly normal citrus from seriously abnormal fruit with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional methods. The global result can meet the requirements of fruit grading.
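A sketch of the resampling and energy-decomposition steps: the boundary is resampled to 256 points, reduced to a 1-D centroid-distance signature, wavelet-decomposed, and the energy of each scale's coefficients forms the feature vector fed to the classifier. This is a generic PyWavelets illustration; the radial signature and the wavelet choice are assumptions rather than the paper's exact recipe.

```python
import numpy as np
import pywt

def boundary_signature(xy, n=256):
    """Resample a closed boundary to n points and return centroid distances."""
    t = np.linspace(0, 1, len(xy), endpoint=False)
    ti = np.linspace(0, 1, n, endpoint=False)
    resampled = np.column_stack([np.interp(ti, t, xy[:, k], period=1) for k in (0, 1)])
    c = resampled.mean(axis=0)
    return np.hypot(*(resampled - c).T)

def msed_features(signature, wavelet='db2', levels=5):
    """Per-scale energy of the wavelet coefficients, normalized to sum to 1."""
    coeffs = pywt.wavedec(signature, wavelet, level=levels)
    e = np.array([np.sum(c ** 2) for c in coeffs])
    return e / e.sum()

# Synthetic "citrus" boundary: a circle with a mild 5-lobed perturbation.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
citrus = np.column_stack([np.cos(theta), np.sin(theta)]) * (1 + 0.05 * np.cos(5 * theta))[:, None]
print(msed_features(boundary_signature(citrus)))
```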
Phase unwrapping in digital holography based on non-subsampled contourlet transform
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-01-01
In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, so reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms first perform denoising operations to obtain noise-free phases and then conduct phase unwrapping pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of the NSCT enhances the boundary between adjacent phase levels, and hence the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of a whole segment instead of individual pixels, avoiding unwrapping errors caused by local spikes. This algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.
Navigation Operations for the Magnetospheric Multiscale Mission
NASA Technical Reports Server (NTRS)
Long, Anne; Farahmand, Mitra; Carpenter, Russell
2015-01-01
The Magnetospheric Multiscale (MMS) mission employs four identical spinning spacecraft flying in highly elliptical Earth orbits. These spacecraft fly in a series of tetrahedral formations with separations of less than 10 km. MMS navigation operations use onboard navigation to satisfy the mission's definitive orbit and time determination requirements and, in addition, to minimize operations cost and complexity. The onboard navigation subsystem consists of the Navigator GPS receiver with Goddard Enhanced Onboard Navigation System (GEONS) software, and an Ultra-Stable Oscillator. The four MMS spacecraft are operated from a single Mission Operations Center, which includes a Flight Dynamics Operations Area (FDOA) that supports MMS navigation operations, as well as maneuver planning, conjunction assessment and attitude ground operations. The System Manager component of the FDOA automates routine operations processes. The GEONS Ground Support System component of the FDOA provides the tools needed to support MMS navigation operations. This paper provides an overview of the MMS mission and associated navigation requirements and constraints, and discusses MMS navigation operations and the associated MMS ground system components built to support navigation-related operations.
Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP
NASA Astrophysics Data System (ADS)
Shen, B.; Li, J. F.; Cheung, S.
2013-12-01
In late October 2012, Storm Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). With an estimated damage of $50 billion, Sandy became the second costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, as for Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how an advanced global model, supercomputing technology and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction as the first step toward the goal of understanding the relationship between extreme events, such as Sandy, and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We show remarkable simulations of Hurricane Sandy with the GMM, including a realistic 7-day track and intensity forecast and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of the high-resolution 4-D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have finished the parallel implementation of the ensemble empirical mode decomposition (PEEMD, Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. While the original EEMD has previously shown superior performance in decomposing nonlinear (local) and non-stationary data into different intrinsic modes that stay within natural filter period windows, the PEEMD achieves a speedup of over 100 times compared to the original EEMD. The advanced GMM, 4-D visualizations and the PEEMD method are being used to examine the multiscale processes of Sandy and the environmental flows that may contribute to the extended lead-time predictability of Hurricane Sandy. Figure 1: Evolution of Hurricane Sandy (2012) as revealed by the advanced visualization.
Nakasaki, Kiyohiko; Ohtaki, Akihito
2002-01-01
Using dog food as a model of the organic waste that comprises composting raw material, the degradation pattern of organic materials was examined by continuously measuring the quantity of CO2 evolved during the composting process in both batch and fed-batch operations. A simple numerical model was constructed on the basis of three assumptions for describing the organic matter decomposition in the batch operation. First, a certain quantity of carbon in the dog food was assumed to be recalcitrant to degradation in the composting reactor within the retention time allowed. Second, the decomposition rate of carbon was assumed to be proportional to the quantity of easily degradable carbon, that is, the total carbon remaining in the dog food minus the carbon recalcitrant to degradation. Third, a certain lag time was assumed to occur before the start of active decomposition of organic matter in the dog food, corresponding to the time required for microorganisms to proliferate and become active. It was then ascertained that the decomposition pattern for the organic matter in the dog food during the fed-batch operation could be predicted by the numerical model with the parameters obtained from the batch operation. This numerical model was modified so that the change in dry weight of the composting materials could be obtained. The modified model was found suitable for describing the organic matter decomposition pattern in an actual fed-batch composting operation of garbage obtained from a restaurant, with approximately 10 kg d(-1) loading for 60 d.
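The three assumptions translate directly into a first-order decay model with a recalcitrant carbon fraction and a lag time; the sketch below integrates it for a batch and a fed-batch schedule. All parameter values (rate constant, recalcitrant fraction, lag, feed rate) are invented for illustration, not the paper's fitted values.

```python
import numpy as np

def simulate_carbon(c0, feed, k=0.05, recalcitrant=0.3, lag=24.0, dt=1.0, t_end=1440.0):
    """First-order decay of easily degradable carbon with a lag phase.
    feed(t) returns the carbon loading rate at time t (0 for batch operation)."""
    steps = int(t_end / dt)
    c_total, co2 = c0, 0.0
    remaining = np.empty(steps)
    for i in range(steps):
        t = i * dt
        c_total += feed(t) * dt                      # fed-batch additions
        degradable = max(c_total * (1 - recalcitrant) - co2, 0.0)
        rate = k * degradable if t > lag else 0.0    # no decay before the lag time
        co2 += rate * dt                             # carbon evolved as CO2
        remaining[i] = c_total - co2                 # carbon still in the reactor
    return remaining

batch = simulate_carbon(c0=100.0, feed=lambda t: 0.0)
fed_batch = simulate_carbon(c0=0.0, feed=lambda t: 0.5)  # constant loading
```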
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
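A toy hybrid multi-scale model in the spirit described: discrete agents consume a nutrient whose concentration evolves on a continuum grid, with the two scales exchanging information each step (a simple operator-splitting choice). Entirely illustrative; the variables have no connection to the tuberculosis model's actual biology.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.full((50, 50), 1.0)                                   # continuum: nutrient field
agents = [tuple(p) for p in rng.integers(0, 50, size=(30, 2))]  # discrete scale: cells

def diffuse(c, d=0.2):
    """One explicit diffusion step with no-flux edges (continuum update)."""
    p = np.pad(c, 1, mode='edge')
    return c + d * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * c)

for step in range(200):
    grid = diffuse(grid)                                  # continuum scale
    next_agents = []
    for (i, j) in agents:                                 # agent scale
        grid[i, j] = max(grid[i, j] - 0.05, 0.0)          # local consumption couples scales
        di, dj = rng.integers(-1, 2, size=2)              # random walk on the grid
        next_agents.append((np.clip(i + di, 0, 49), np.clip(j + dj, 0, 49)))
        if grid[i, j] > 0.5 and rng.random() < 0.01:      # divide when resources allow
            next_agents.append((i, j))
    agents = next_agents

print(len(agents), grid.mean())
```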
NASA Astrophysics Data System (ADS)
Ellis, Matthew O. A.; Stamenova, Maria; Sanvito, Stefano
2017-12-01
There exists a significant challenge in developing efficient magnetic tunnel junctions with low write currents for nonvolatile memory devices. With the aim of analyzing potential materials for efficient current-operated magnetic junctions, we have developed a multi-scale methodology combining ab initio calculations of spin-transfer torque with large-scale time-dependent simulations using atomistic spin dynamics. In this work we introduce our multiscale approach, including a discussion on a number of possible schemes for mapping the ab initio spin torques into the spin dynamics. We demonstrate this methodology on a prototype Co/MgO/Co/Cu tunnel junction showing that the spin torques are primarily acting at the interface between the Co free layer and MgO. Using spin dynamics we then calculate the reversal switching times for the free layer and the critical voltages and currents required for such switching. Our work provides an efficient, accurate, and versatile framework for designing novel current-operated magnetic devices, where all the materials details are taken into account.
A multi-scale Q1/P0 approach to Lagrangian shock hydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shashkov, Mikhail; Love, Edward; Scovazzi, Guglielmo
A new multi-scale, stabilized method for Q1/P0 finite element computations of Lagrangian shock hydrodynamics is presented. Instabilities (of hourglass type) are controlled by a stabilizing operator derived using the variational multi-scale analysis paradigm. The resulting stabilizing term takes the form of a pressure correction. With respect to currently implemented hourglass control approaches, the novelty of the method resides in its residual-based character. The stabilizing residual has a definite physical meaning, since it embeds a discrete form of the Clausius-Duhem inequality. Effectively, the proposed stabilization samples and acts to counter the production of entropy due to numerical instabilities. The proposed technique is applicable to materials with no shear strength, for which there exists a caloric equation of state. The stabilization operator is incorporated into a mid-point, predictor/multi-corrector time integration algorithm, which conserves mass, momentum and total energy. Encouraging numerical results in the context of compressible gas dynamics confirm the potential of the method.
Application of optimized multiscale mathematical morphology for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao
2017-04-01
In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are suppressed. A unit scale is therefore proposed as the structuring element (SE) scale in IM. Following these definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
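A sketch of conventional multiscale morphology on a 1-D vibration signal: the average of grey-scale opening and closing is computed with flat structuring elements of growing length, and the per-scale results are combined. SciPy's grey-morphology routines are used; the opening/closing average and the scale set are generic choices, not the paper's ODIF operator.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morph_filter(x, size):
    """Average of opening and closing with a flat SE of the given length."""
    return 0.5 * (grey_opening(x, size=size) + grey_closing(x, size=size))

def multiscale_morphology(x, scales=(2, 3, 4, 5)):
    """Conventional multiscale morphology: average the filter over SE scales."""
    return np.mean([morph_filter(x, s) for s in scales], axis=0)

# Simulated bearing signal: periodic decaying impulses buried in noise.
t = np.arange(0, 1, 1e-4)
impulses = np.sin(2 * np.pi * 3000 * t) * np.exp(-80 * (t % 0.01))
signal = impulses + 0.5 * np.random.randn(t.size)
enhanced = multiscale_morphology(np.abs(signal))
```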
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere that is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST to the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is then proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask; the gaps then have to be interpolated, and an extension to inpainting is proposed. The method, applied on simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
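The VST idea in one dimension: the Anscombe transform approximately Gaussianizes and variance-stabilizes Poisson counts, after which any Gaussian denoiser (here, simple wavelet thresholding) applies, and an algebraic inverse maps back. This is the classical Anscombe recipe, far simpler than MS-VSTS on the sphere; the wavelet, level, threshold, and the standard asymptotic unbiased-inverse constant are assumptions.

```python
import numpy as np
import pywt

def anscombe(x):
    """Variance-stabilizing transform for Poisson data (variance -> ~1)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Asymptotically unbiased algebraic inverse."""
    return (y / 2.0) ** 2 - 1.0 / 8.0

def denoise_poisson(counts, wavelet='sym8', level=4, thresh=3.0):
    y = anscombe(counts.astype(float))
    coeffs = pywt.wavedec(y, wavelet, level=level)
    coeffs[1:] = [pywt.threshold(c, thresh, mode='hard') for c in coeffs[1:]]
    rec = pywt.waverec(coeffs, wavelet)[: len(counts)]
    return np.clip(inverse_anscombe(rec), 0, None)

rate = 0.5 + 0.4 * np.sin(np.linspace(0, 6 * np.pi, 4096))  # low-count regime
noisy = np.random.poisson(5 * rate)
estimate = denoise_poisson(noisy)
```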
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain a velocity field with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
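A sketch of the proper orthogonal decomposition step used to build parameter-independent reduced bases: snapshot solutions for sampled parameters are stacked as columns and the leading left singular vectors form the reduced basis. This is the generic reduced-order-modeling construction; the snapshot generator below is a stand-in, not a GMsFE solver.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Leading left singular vectors capturing the given energy fraction.
    snapshots: (n_dof, n_samples) matrix of solutions at training parameters."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# Stand-in snapshot generator: 1-D solutions for randomly sampled parameters.
x = np.linspace(0, 1, 200)
snaps = np.column_stack([np.sin(np.pi * x * k) / k
                         for k in np.random.uniform(1, 5, size=50)])
V = pod_basis(snaps)                      # reduced basis, n_dof x r
coeffs = V.T @ snaps[:, 0]                # project a solution onto the basis
reconstruction = V @ coeffs
print(V.shape, np.linalg.norm(snaps[:, 0] - reconstruction))
```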
NASA Astrophysics Data System (ADS)
Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.
2017-12-01
A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multi-scale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous. In most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in terms of the statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows quantifying the part of the discrepancies caused by mismatches in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve over land the same retrieval performance as over oceans.
Extraction of drainage networks from large terrain datasets using high throughput computing
NASA Astrophysics Data System (ADS)
Gong, Jianya; Xie, Jibo
2009-02-01
Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental to hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond gigabyte size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of regular one-dimensional (strip-wise) or two-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. An HTC environment is employed to test the proposed methods with real datasets.
A Multiscale pipeline for the search of string-induced CMB anisotropies
NASA Astrophysics Data System (ADS)
Vafaei Sadr, A.; Movahed, S. M. S.; Farhang, M.; Ringeval, C.; Bouchet, F. R.
2018-03-01
We propose a multiscale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the cosmic microwave background (CMB) anisotropies. Curvelet decomposition and the extended Canny algorithm are used to enhance string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a CS contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We use this algorithm on a hundred simulated Nambu-Goto CMB flat sky maps, covering approximately 10 per cent of the sky, and for different string tensions Gμ. On noiseless sky maps with an angular resolution of 0.9 arcmin, we show that our pipeline detects CSs with string tensions as low as Gμ ≳ 4.3 × 10^-10. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold rises to Gμ ≳ 1.2 × 10^-7.
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
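As a concrete illustration of the noise model involved (not the authors' NSCT-domain estimator), the two Poisson-Gaussian parameters can be estimated from the classical variance-versus-mean line, var ≈ a·mean + σ²; a minimal sketch assuming local statistics are a usable proxy for the noiseless signal:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_poisson_gaussian(img, win=7):
    """Fit var ≈ a*mu + b from local windows of a single noisy image."""
    img = img.astype(float)
    mu = uniform_filter(img, win)                    # local mean
    var = uniform_filter(img**2, win) - mu**2        # local variance
    a, b = np.polyfit(mu.ravel(), var.ravel(), 1)
    return a, max(b, 0.0)  # a: Poisson gain, b: Gaussian noise variance
```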
Low-carbon building assessment and multi-scale input-output analysis
NASA Astrophysics Data System (ADS)
Chen, G. Q.; Chen, H.; Chen, Z. M.; Zhang, Bo; Shao, L.; Guo, S.; Zhou, S. Y.; Jiang, M. M.
2011-01-01
This paper presents a low-carbon building evaluation framework consisting of detailed carbon emission accounting procedures for the building life cycle in nine stages: construction, fitment, outdoor facility construction, transportation, operation, waste treatment, property management, demolition, and disposal. The framework is supported by integrated carbon intensity databases based on multi-scale input-output analysis, and is essential for low-carbon planning, procurement and supply chain design, and logistics management.
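The input-output step behind such carbon intensity databases reduces, in its simplest single-scale form, to a Leontief calculation: total embodied intensities are e = f(I - A)^(-1), with A the technical coefficient matrix and f the direct emission intensities. A toy sketch with invented three-sector numbers:

```python
import numpy as np

A = np.array([[0.10, 0.05, 0.02],     # inter-sector input coefficients
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
f = np.array([0.9, 2.5, 0.4])         # direct tCO2 per unit sector output

embodied = f @ np.linalg.inv(np.eye(3) - A)   # total intensity per sector
demand = np.array([100.0, 40.0, 10.0])        # final demand of a life-cycle stage
print("life-cycle emissions:", embodied @ demand)
```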
Spatiotemporal attention operator using isotropic contrast and regional homogeneity
NASA Astrophysics Data System (ADS)
Palenichka, Roman; Lakhssassi, Ahmed; Zaremba, Marek
2011-04-01
A multiscale operator for spatiotemporal isotropic attention is proposed to reliably extract attention points during image sequence analysis. Its consecutive local maxima indicate attention points as the centers of image fragments of variable size with high intensity contrast, region homogeneity, regional shape saliency, and the presence of temporal change. The scale-adaptive estimation of temporal change (motion) and its aggregation with the regional shape saliency contribute to the accurate determination of attention points in image sequences. Multilocation descriptors of an image sequence are extracted at the attention points in the form of a set of multidimensional descriptor vectors. A fast recursive implementation is also proposed to make the operator's computational complexity independent of the spatial scale size, i.e., the window size of the spatial averaging filter. Experiments on the accuracy of attention-point detection have demonstrated the operator's consistency and its high potential for multiscale feature extraction from image sequences.
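One plausible reading of such a fast recursive implementation is a summed-area table, which makes the averaging filter O(1) per pixel regardless of the window (scale) size; a sketch, with border handling simplified by clamping:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image."""
    ii = np.pad(img.astype(float), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(y - r, 0, h), np.clip(y + r + 1, 0, h)
    x0, x1 = np.clip(x - r, 0, w), np.clip(x + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)                 # shrinks at the borders
    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return s / area
```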
Interscale energy transfer in the merger of wakes of a multiscale array of rectangular cylinders
NASA Astrophysics Data System (ADS)
Baj, Pawel; Buxton, Oliver R. H.
2017-11-01
The near wake of a flow past a multiscale array of bars is studied by means of particle image velocimetry (PIV). The aim of this research is to understand the nature of multiscale flows, where multiple coherent motions of nonuniform sizes and characteristic frequencies (i.e., the sheddings of particular bars in our considered case) interact with each other. The velocity fields acquired from the experiments are decomposed into their mean, a number of coherent fluctuations, and a stochastic part according to the triple decomposition technique introduced recently by Baj et al. [Phys. Fluids 27, 075104 (2015), 10.1063/1.4923744]. This nonstandard approach allows us to monitor the interactions between different coherent fluctuations representative of the sheddings of particular bars. Further, additional equations governing the kinetic energy of the recognized velocity components are derived to provide better insight into the dynamics of these interactions. Interestingly, apart from the coherent fluctuations associated with sheddings, some additional, secondary coherent fluctuations are also recognized. These seem to appear as a result of nonlinear triadic interactions between the primary shedding modes when two shedding structures of different characteristic frequencies are in close proximity to one another. The secondary coherent motions are almost exclusively supplied with energy by the primary coherent motions, whereas the latter are driven by the mean flow. It is also found that the coherent fluctuations play an important role in exciting the stochastic fluctuations, as the energy is not fed to the stochastic fluctuations directly from the mean flow but rather through the coherent modes.
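A stripped-down version of the decomposition idea, using plain phase averaging at a single known shedding frequency (the multi-frequency technique of Baj et al. is more involved; bin counts and names are illustrative):

```python
import numpy as np

def triple_decompose(u, phase, n_bins=32):
    """u: (n_snapshots, ...) velocity snapshots; phase: shedding phase of
    each snapshot in [0, 2*pi). Returns mean, coherent, stochastic parts."""
    U = u.mean(axis=0)                               # mean part
    bins = (phase / (2 * np.pi) * n_bins).astype(int) % n_bins
    coh = np.zeros_like(u)
    for b in range(n_bins):                          # phase (ensemble) average
        sel = bins == b
        if sel.any():
            coh[sel] = u[sel].mean(axis=0) - U
    stoch = u - U - coh                              # residual turbulence
    return U, coh, stoch
```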
More on boundary holographic Witten diagrams
NASA Astrophysics Data System (ADS)
Sato, Yoshiki
2018-01-01
In this paper we discuss geodesic Witten diagrams in general holographic conformal field theories with a boundary or defect. In boundary or defect conformal field theory, two-point functions are nontrivial and can be decomposed into conformal blocks in two distinct ways: the ambient channel decomposition and the boundary channel decomposition. In our previous work [A. Karch and Y. Sato, J. High Energy Phys. 09 (2017) 121, 10.1007/JHEP09(2017)121] we considered only two-point functions of identical operators. Here we generalize that work to the situation where the operators in the two-point function are different, and obtain the two distinct decompositions for two-point functions of different operators.
Assessing the effect of different treatments on decomposition rate of dairy manure.
Khalil, Tariq M; Higgins, Stewart S; Ndegwa, Pius M; Frear, Craig S; Stöckle, Claudio O
2016-11-01
Confined animal feeding operations (CAFOs) contribute to greenhouse gas emission, but the magnitude of these emissions as a function of operation size, infrastructure, and manure management is difficult to assess. Modeling is a viable option to estimate gaseous emissions and nutrient flows from CAFOs. These models use a decomposition rate constant for carbon mineralization. However, this constant is usually determined assuming a homogeneous mix of manure, ignoring the effects of emerging manure treatments. The aim of this study was to measure and compare the decomposition rate constants of dairy manure in single- and three-pool decomposition models, and to develop an empirical model based on the chemical composition of manure for prediction of a decomposition rate constant. Decomposition rate constants of manure before and after an anaerobic digester (AD), following coarse fiber separation, and after fine solids removal were determined under anaerobic conditions for single- and three-pool decomposition models. The decomposition rates of treated manure effluents differed significantly from untreated manure for both single- and three-pool decomposition models. In the single-pool decomposition model, AD effluent containing only suspended solids had a relatively high decomposition rate of 0.060 d^-1, while liquid with coarse fiber and fine solids removed had the lowest rate of 0.013 d^-1. In the three-pool decomposition model, the fast and slow decomposition rate constants (0.25 d^-1 and 0.016 d^-1, respectively) of untreated AD influent were also significantly different from those of treated manure fractions. A regression model to predict the decomposition rate of treated dairy manure fitted the observed data well (R^2 = 0.83).
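For reference, fitting the single-pool first-order model C(t) = C0·exp(-kt) is a one-line curve fit, and the three-pool model is just a sum of three such exponentials. A sketch with invented data:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 5, 10, 20, 40, 60], dtype=float)   # days
C = np.array([1.0, 0.74, 0.56, 0.31, 0.10, 0.04])   # fraction of C remaining

single = lambda t, k: np.exp(-k * t)                # single-pool model
(k,), _ = curve_fit(single, t, C, p0=[0.05])
print(f"decomposition rate constant k = {k:.3f} d^-1")

# Three-pool model: fast, slow, and recalcitrant fractions f1, f2, 1-f1-f2.
three_pool = lambda t, f1, f2, k1, k2, k3: (
    f1 * np.exp(-k1 * t) + f2 * np.exp(-k2 * t)
    + (1 - f1 - f2) * np.exp(-k3 * t))
```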
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like full waveform inversion, FWI) which is inspired by the theory of nonlocality of physical interactions. This method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage; there are also no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudoinverse concept. This experimental T-matrix is then used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. Singular-value decomposition representations are not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
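The nonlinear relation being iterated can be written schematically in discretized matrix form as T = V + V G0 T. A toy sketch of both directions of this fixed point (dimensions, damping, and the generic background Green's function G0 are illustrative assumptions, not the authors' scheme):

```python
import numpy as np

def t_from_v(V, G0, n_iter=200, damp=0.5):
    """Iterate T <- V + V G0 T (a damped Born-type series)."""
    T = V.copy()
    for _ in range(n_iter):
        T = (1 - damp) * T + damp * (V + V @ G0 @ T)
    return T

def v_from_t(T, G0):
    """Closed-form inversion of the same relation: V = T (I + G0 T)^-1."""
    n = T.shape[0]
    return T @ np.linalg.inv(np.eye(n) + G0 @ T)
```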
Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators
Bai, Xiangzhi
2015-01-01
The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and how to combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
Integration of infrared polarization and intensity images has become a new topic in infrared image understanding and interpretation. The abundant infrared details and targets from the infrared image and the salient edge and shape information from the polarization image should be preserved or even enhanced in the fused result. In this paper, a new fusion method is proposed for infrared polarization and intensity images based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and heavily reduce redundancy among different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are respectively extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using the spatial scale. Thirdly, the final image features are combined by simply adding all scales of feature images together, and a base image is calculated by applying mean-value weighting to the smoothed source images. Finally, the fusion image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method obtains better performance in preserving detail and edge information as well as improving image contrast.
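A hedged sketch of this style of morphological feature extraction and injection (the general recipe of the abstract, not the authors' exact improved toggle operator): bright and dark features are pulled out with openings and closings over growing structuring elements, combined across scales by a pointwise maximum, and added to a mean-value base image.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def multiscale_features(img, scales=(3, 7, 11)):
    """Bright/dark feature maps via openings/closings of growing size."""
    bright = np.zeros_like(img, dtype=float)
    dark = np.zeros_like(img, dtype=float)
    for s in scales:
        bright = np.maximum(bright, img - grey_opening(img, size=(s, s)))
        dark = np.maximum(dark, grey_closing(img, size=(s, s)) - img)
    return bright, dark

def fuse(intensity, polarization):
    base = 0.5 * (intensity + polarization)      # mean-value base image
    bi, di = multiscale_features(intensity)
    bp, dp = multiscale_features(polarization)
    # inject bright features positively and dark features negatively
    return base + np.maximum(bi, bp) - np.maximum(di, dp)
```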
Developing Higher-Order Materials Knowledge Systems
NASA Astrophysics Data System (ADS)
Fast, Anthony Nathan
2011-12-01
Advances in computational materials science and novel characterization techniques have allowed scientists to probe deeply into a diverse range of materials phenomena. These activities are producing enormous amounts of information regarding the roles of various hierarchical material features in the overall performance characteristics displayed by the material. Connecting the hierarchical information over disparate domains is at the crux of multiscale modeling. The inherent challenge of performing multiscale simulations is developing scale-bridging relationships to couple material information between well-separated length scales. Much progress has been made in the development of homogenization relationships, which replace heterogeneous material features with effective homogeneous descriptions. These relationships facilitate the flow of information from lower length scales to higher length scales. Meanwhile, most localization relationships that link the information from a higher length scale to a lower length scale are plagued by computationally intensive techniques which are not readily integrated into multiscale simulations. The challenge of executing fully coupled multiscale simulations is augmented by the need to incorporate the evolution of the material structure that may occur under conditions such as material processing. To address these challenges, a novel framework called the Materials Knowledge System (MKS) has been developed. This methodology efficiently extracts, stores, and recalls microstructure-property-processing localization relationships. The approach is built on the statistical continuum theories developed by Kroner that express the localization of the response field at the microscale using a series of highly complex convolution integrals, which have historically been evaluated analytically. The MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels to results from previously validated physics-based models. These tools have been validated for elastic strain localization in moderate-contrast dual-phase composites by direct comparisons with predictions from finite element models. The versatility of the approach is further demonstrated by its successful application to capturing the structure evolution during spinodal decomposition of a binary alloy. Lastly, some key features of the future application of the MKS approach are developed using the Portevin-Le Chatelier effect. These case studies show that the MKS approach is capable of accurately reproducing the results of physics-based models with a drastic reduction in computational requirements.
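The calibration idea admits a compact first-order sketch: with response field r and microstructure indicator m linked by a convolution r = a * m, the influence kernel a can be calibrated frequency by frequency via least squares over calibration pairs from the physics-based model. All names are illustrative.

```python
import numpy as np

def calibrate_kernel(ms, rs, eps=1e-12):
    """ms, rs: lists of 2-D microstructure / response fields from a
    validated physics-based model. Returns frequency-wise coefficients."""
    num = np.zeros(ms[0].shape, dtype=complex)
    den = np.zeros(ms[0].shape, dtype=float)
    for m, r in zip(ms, rs):
        M, R = np.fft.fft2(m), np.fft.fft2(r)
        num += R * np.conj(M)          # cross-spectrum accumulation
        den += np.abs(M) ** 2          # input power accumulation
    return num / (den + eps)           # least-squares influence coefficients

def predict(A, m):
    """Apply the calibrated kernel to a new microstructure."""
    return np.real(np.fft.ifft2(A * np.fft.fft2(m)))
```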
NASA Astrophysics Data System (ADS)
Mleczko, M.
2014-12-01
Polarimetric SAR data are not widely used in practice because they are not yet available operationally from satellites. Currently, two approaches can be distinguished in POL-In-SAR technology: alternating polarization imaging (Alt-POL) and fully polarimetric imaging (QuadPol). The first is a subset of the second and is more operational, while the second is experimental because classification of these data requires a polarimetric decomposition of the scattering matrix as a first stage. In the literature, the decomposition process is divided into two types: coherent and incoherent decomposition. In this paper, the decomposition methods have been tested using data from the high-resolution airborne F-SAR system. Results of the classification have been interpreted in the context of land cover mapping capabilities.
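The simplest coherent decomposition, the Pauli decomposition, illustrates the first-stage step mentioned above: the per-pixel scattering matrix is projected onto the Pauli basis, separating odd-bounce, even-bounce, and volume-like returns. A sketch assuming reciprocity:

```python
import numpy as np

def pauli_decompose(S_hh, S_hv, S_vv):
    """Per-pixel complex channels of [[S_hh, S_hv], [S_vh, S_vv]];
    reciprocity (S_hv == S_vh) assumed."""
    k1 = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface) scattering
    k2 = (S_hh - S_vv) / np.sqrt(2)   # even-bounce (double-bounce)
    k3 = np.sqrt(2) * S_hv            # 45-degree-oriented / volume scattering
    # |k1|^2, |k2|^2, |k3|^2 are commonly displayed as an RGB composite
    return np.abs(k1) ** 2, np.abs(k2) ** 2, np.abs(k3) ** 2
```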
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
We present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with the high dimensionality of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and with their poor signal-to-noise ratio (SNR), which limits accurate classification of fMRI time series according to their activation content. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare the different methods, several points of the receiver operating characteristic (ROC) curves are estimated using simulated data and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to the CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features.
3D shape decomposition and comparison for gallbladder modeling
NASA Astrophysics Data System (ADS)
Huang, Weimin; Zhou, Jiayin; Liu, Jiang; Zhang, Jing; Yang, Tao; Su, Yi; Law, Gim Han; Chui, Chee Kong; Chang, Stephen
2011-03-01
This paper presents an approach to gallbladder shape comparison using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and for model comparison and selection in image-guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation-based voxel learning and classification. To better extract the shape features, the surface mesh is further down-sampled by a decimation filter and smoothed by the Taubin algorithm, followed by an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance, the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomalies of a gallbladder. Features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some with normal and some with abnormal shapes. The experiments show that the decomposed shapes reveal important topological features.
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Vessel Segmentation in Retinal Images Using Multi-scale Line Operator and K-Means Clustering.
Saffarzadeh, Vahid Mohammadi; Osareh, Alireza; Shadgar, Bita
2014-04-01
Detecting blood vessels is a vital task in retinal image analysis. The task is more challenging in the presence of bright and dark lesions in retinal images. Here, a method is proposed to detect vessels in both normal and abnormal retinal fundus images based on their linear features. First, the negative impact of bright lesions is reduced by using K-means segmentation in a perceptive space. Then, a multi-scale line operator is utilized to detect vessels while ignoring some of the dark lesions, which have intensity structures different from the line-shaped vessels in the retina. The proposed algorithm is tested on the two publicly available STARE and DRIVE databases. The performance of the method is measured by calculating the area under the receiver operating characteristic curve and the segmentation accuracy. The proposed method achieves localization accuracies of 0.9483 and 0.9387 on STARE and DRIVE, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Zhen; Voth, Gregory A., E-mail: gavoth@uchicago.edu
It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom ones. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
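A minimal sketch of what a center-of-charge mapping can look like (the weighting by |q_i| and the neutral-group fallback are assumptions for illustration, not necessarily the authors' exact definition):

```python
import numpy as np

def center_of_charge(positions, charges):
    """positions: (n_atoms, 3); charges: (n_atoms,)."""
    w = np.abs(charges)
    if w.sum() == 0:                    # apolar, fully neutral group:
        w = np.ones_like(w)             # fall back to a geometric center
    return (positions * w[:, None]).sum(axis=0) / w.sum()

def map_to_cg(positions, charges, groups):
    """groups: list of atom-index arrays, one per CG site."""
    return np.array([center_of_charge(positions[g], charges[g])
                     for g in groups])
```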
Koda, Shin-ichi
2015-05-28
It has been shown by some existing studies that certain linear dynamical systems defined on a dendritic network are, in special cases, equivalent to those defined on a set of one-dimensional networks. This transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The results of this paper make it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.
1994-01-01
Applications of high-performance parallel computation are described for the analysis of complete jet engines, treated as a multidisciplinary coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction, and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; the effect of partitioning strategies, augmentation, and temporal solution procedures; the sensitivity of the response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and its mapping to a hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.
Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo
2011-01-01
Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce a peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment over a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived, and iteratively refined, from the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross-validation method. In the second part, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, by using partial measurement samples collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, including strong resistance to countermeasures and interception, imaging much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples, minimizing the representation error subject to a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
NASA Astrophysics Data System (ADS)
Agurto, C.; Barriga, S.; Murray, V.; Murillo, S.; Zamora, G.; Bauman, W.; Pattichis, M.; Soliz, P.
2011-03-01
In the United States and most of the western world, the leading causes of vision impairment and blindness are age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma. In the last decade, research in automatic detection of retinal lesions associated with eye diseases has produced several automatic systems for detection and screening of AMD, DR, and glaucoma. However, advanced, sight-threatening stages of DR and AMD can present with lesions not commonly addressed by current approaches to automatic screening. In this paper we present an automatic eye screening system based on multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions that addresses not only the early stages, but also advanced stages of retinal and optic nerve disease. Ten different experiments were performed in which abnormal features such as neovascularization, drusen, exudates, pigmentation abnormalities, geographic atrophy (GA), and glaucoma were classified. The algorithm achieved detection accuracies ranging from 0.77 to 0.98 in area under the ROC curve for a set of 810 images. When set to a specificity of 0.60, the sensitivity of the algorithm for the detection of abnormal features ranged between 0.88 and 1.00. Our system demonstrates that, given an appropriate training set, it is possible to use a single algorithm to detect a broad range of eye diseases.
Understanding perception of active noise control system through multichannel EEG analysis.
Bagha, Sangeeta; Tripathy, R K; Nanda, Pranati; Preetam, C; Das, Debi Prasad
2018-06-01
In this Letter, a method is proposed to investigate the effect of noise, with and without active noise control (ANC), on multichannel electroencephalogram (EEG) signals. The multichannel EEG signal is recorded during different listening conditions such as silence, music, noise, ANC with background noise, and ANC with both background noise and music. The multiscale analysis of the EEG signal of each channel is performed using the discrete wavelet transform. Multivariate multiscale matrices are formulated based on the sub-band signals of each EEG channel. Singular value decomposition is applied to the multivariate matrices of the multichannel EEG at significant scales. The singular value features at significant scales and an extreme learning machine classifier with three different activation functions are used for classification of the multichannel EEG signal. The experimental results demonstrate that, for the ANC with noise and ANC with noise and music classes, the proposed method has sensitivity values of 75.831% (p < 0.001) and 99.31% (p < 0.001), respectively. The method has an accuracy of 83.22% for the classification of EEG signals with music and ANC with music as stimuli. The important finding of this study is that with the introduction of ANC, music can be better perceived by the human brain.
NASA Astrophysics Data System (ADS)
Sato, Kazuhisa; Tashiro, Shunya; Matsunaga, Shuhei; Yamaguchi, Yohei; Kiguchi, Takanori; Konno, Toyohiko J.
2018-07-01
We have studied the three-dimensional (3D) structures and growth processes of the 14H-type long-period stacking order (LPSO) formed in Mg97Zn1Gd2 cast alloys by single tilt-axis electron tomography (ET) using high-angle annular dark-field scanning transmission electron microscopy. The evolution of the solute-enriched stacking faults (SFs) and the 14H LPSO upon ageing was visualised in 3D with high spatial resolution in multi-scale fields of view ranging from a few nanometres to 10 μm. Lateral growth of the solute-enriched SFs and the LPSO in the (0 0 0 1)Mg plane is notable compared to the out-of-plane growth in the [0 0 0 1]Mg direction. The 14H LPSO grows at the cost of decomposition of the (Mg, Zn)3Gd-type precipitates, accompanied by a change of in-plane edge angles from 30° to 60°. We have updated the Time-Temperature-Transformation diagram for precipitation in Mg97Zn1Gd2 alloys: the starting temperatures of both solute-enriched SF and LPSO formation shifted to shorter times than in the previous diagram.
Domain decomposition: A bridge between nature and parallel computers
NASA Technical Reports Server (NTRS)
Keyes, David E.
1992-01-01
Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.
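As a toy illustration of the idea, an overlapping additive Schwarz iteration for the 1-D Poisson problem -u'' = f with homogeneous Dirichlet data: each subdomain solve is independent, hence parallelizable; the coarse-level and Krylov components credited above for near-optimality are omitted for brevity.

```python
import numpy as np

n, overlap = 99, 6
h = 1.0 / (n + 1)
f = np.ones(n)                       # right-hand side
u = np.zeros(n)
doms = [np.arange(0, n // 2 + overlap),      # two overlapping subdomains
        np.arange(n // 2 - overlap, n)]

# standard second-difference discretization of -u''
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

for _ in range(50):                  # Schwarz sweeps
    r = f - A @ u                    # global residual
    du = np.zeros(n)
    for d in doms:                   # independent local solves
        du[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
    u += 0.5 * du                    # damped additive update
print("residual norm:", np.linalg.norm(f - A @ u))
```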
Multiple multicontrol unitary operations: Implementation and applications
NASA Astrophysics Data System (ADS)
Lin, Qing
2018-04-01
The efficient implementation of computational tasks is critical to quantum computations. In quantum circuits, multicontrol unitary operations are important components. Here, we present an extremely efficient and direct approach to multiple multicontrol unitary operations without decomposition to CNOT and single-photon gates. With the proposed approach, the necessary two-photon operations could be reduced from O(n^3) with the traditional decomposition approach to O(n), which will greatly relax the requirements and make large-scale quantum computation feasible. Moreover, we propose the potential application to the (n-k)-uniform hypergraph state.
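For reference, the operation in question, an n-control unitary C^n(U), has a simple matrix form: U acts on the target only when every control qubit is 1. A sketch building that matrix (the circuit costs discussed in the abstract are not modeled here):

```python
import numpy as np

def multicontrol(U, n_controls):
    """Matrix of C^n(U) on n_controls + 1 qubits (target is last)."""
    dim = 2 ** (n_controls + 1)
    M = np.eye(dim, dtype=complex)
    M[dim - 2:, dim - 2:] = U      # last 2x2 block: all controls equal 1
    return M

X = np.array([[0, 1], [1, 0]], dtype=complex)
# With two controls this reproduces the 3-qubit Toffoli gate,
# which swaps the basis states |110> and |111>.
print(np.allclose(multicontrol(X, 2),
                  np.eye(8)[:, [0, 1, 2, 3, 4, 5, 7, 6]]))
```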
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms, or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives, chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions of the given shape at different levels of detail. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
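A self-contained sketch of the screening step: first-order Sobol indices estimated with a pick-and-freeze (Saltelli-type) estimator; decision variables with small indices would be fixed before the multi-objective search. The test function and sample sizes are invented for illustration.

```python
import numpy as np

def sobol_first_order(model, n_vars, n=4096, seed=0):
    """First-order Sobol indices via the Saltelli/Jansen pick-and-freeze
    estimator; model maps an (n, n_vars) sample matrix to n outputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, n_vars)), rng.random((n, n_vars))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # freeze variable i from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Example: output depends strongly on x0, weakly on x1, not at all on x2.
model = lambda X: 5 * X[:, 0] + 0.5 * X[:, 1] + 0 * X[:, 2]
print(sobol_first_order(model, 3))
```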
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metric is accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wider context.
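For orientation, classical multiscale (sample) entropy, the baseline against which such nonadditive-entropy metrics are compared, can be sketched compactly; the parameters m = 2 and r = 0.15·std follow common practice, not necessarily this study.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) with Chebyshev distance; self-matches excluded."""
    r = 0.15 * np.std(x) if r is None else r
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return (np.sum(d <= r) - len(emb)) / 2   # unordered pairs within r
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    """Coarse-grain the series at each scale, then compute SampEn."""
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        cg = x[:n * tau].reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(cg))
    return out
```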
NASA Astrophysics Data System (ADS)
Janković, Bojan
2009-10-01
The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (E_a) value was approximately constant (E_a,int = 95.2 kJ mol^-1 and E_a,diff = 96.6 kJ mol^-1, respectively). The values of E_a calculated by both isoconversional methods are in good agreement with the value of E_a evaluated from the Arrhenius equation (94.3 kJ mol^-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(E_a)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
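Fitting the isothermal Weibull conversion function α(t) = 1 - exp(-(t/η)^β) to a thermogravimetric conversion curve is a small curve-fitting exercise; the synthetic data below are invented, with a shape parameter near the reported β ≈ 1.07.

```python
import numpy as np
from scipy.optimize import curve_fit

weibull = lambda t, beta, eta: 1.0 - np.exp(-(t / eta) ** beta)

t = np.linspace(1, 300, 60)                       # time, min
alpha = weibull(t, 1.07, 80.0) \
        + 0.01 * np.random.default_rng(1).normal(size=t.size)  # fake noise

(beta, eta), _ = curve_fit(weibull, t, alpha, p0=[1.0, 60.0])
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} min")
# Repeating the fit at several temperatures and regressing ln(1/eta)
# against 1/T would recover an Arrhenius-type activation energy.
```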
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods are currently emerging as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References: [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
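A bare-bones TT-SVD sketch shows how such a compact representation is obtained: a d-dimensional array is factored into a train of 3-way cores by successive truncated SVDs. Shapes, tolerances, and the test tensor are illustrative.

```python
import numpy as np

def tt_svd(tensor, rank_max=16, tol=1e-8):
    """Factor a d-way array into TT cores of shape (r_prev, n_k, r_k)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    M = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(rank_max, int(np.sum(s > tol * s[0])))   # truncation rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

X = np.fromfunction(lambda i, j, k, l: np.sin(i + j) * np.cos(k - l),
                    (8, 8, 8, 8))
print([c.shape for c in tt_svd(X)])   # small TT-ranks reveal compressibility
```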
Can fractal objects operate as efficient inline mixers?
NASA Astrophysics Data System (ADS)
Laizet, Sylvain; Vassilicos, John; Turbulence, Mixing; Flow Control Group Team
2011-11-01
Recently, Hurst & Vassilicos, PoF 2007, Seoud & Vassilicos, PoF 2007, and Mazellier & Vassilicos, PoF 2010 used different multiscale grids to generate turbulence in a wind tunnel and have shown that complex multiscale boundary/initial conditions can drastically influence the behaviour of a turbulent flow, but that the detailed specific nature of the multiscale geometry matters too. Multiscale (fractal) objects can be designed to be immersed in any fluid flow where there is a need to control and design the turbulence generated by the object. Different types of multiscale objects can be designed as different types of energy-efficient mixers with varying degrees of high turbulent intensity, small pressure drop, and downstream distance from the grid where the turbulence is most vigorous. Here, we present a 3D DNS study of the stirring and mixing of a passive scalar by turbulence generated with either a fractal square grid or a regular grid in the presence of a mean scalar gradient. The results show that: (1) there is a linear increase of the passive scalar variance for both grids, (2) the passive scalar variance is ten times bigger for the fractal grid, (3) the passive scalar flux is constant after the production region for both grids, (4) the passive scalar flux is enhanced by an order of magnitude for the fractal grid. We acknowledge support from EPSRC, UK.
Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji
2015-07-01
GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
Thermal Decomposition of RP-2 with Stabilizing Additives
2010-04-01
… was analyzed by gas chromatography. The increase in a suite of light decomposition products was used to monitor the extent of decomposition. … approximate initial pressure of 34.5 MPa (5000 psi). After each reaction, the thermally stressed liquid phase was analyzed by gas chromatography. … and operational specifications for these fluids and facilitate new applications. The thermophysical properties that are being measured include …
Nitrogen-rich heterocycles as reactivity retardants in shocked insensitive explosives.
Manaa, M Riad; Reed, Evan J; Fried, Laurence E; Goldman, Nir
2009-04-22
We report the first quantum-based multiscale simulations of the reactivity of shocked perfect crystals of the insensitive energetic material triaminotrinitrobenzene (TATB). Tracking chemical transformations of TATB subjected to overdriven shock speeds of 9 km/s for up to 0.43 ns and 10 km/s for up to 0.2 ns reveals high concentrations of nitrogen-rich heterocyclic clusters. Further reactivity of TATB toward the final decomposition products of fluid N2 and solid carbon is inhibited by the formation of these heterocycles. Our results thus suggest a new mechanism for carbon-rich explosive materials that precedes the slow diffusion-limited process of forming the bulk solid from carbon clusters, and they provide fundamental atomistic insight into the long reaction zone of shocked TATB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Torres, C.; Streppa, L.; Arneodo, A.
2016-01-18
Compared to active microrheology, where a known force or modulation is periodically imposed on a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that, when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.
Scaling properties of multiscale equilibration
NASA Astrophysics Data System (ADS)
Detmold, W.; Endres, M. G.
2018-04-01
We investigate the lattice spacing dependence of the equilibration time for a recently proposed multiscale thermalization algorithm for Markov chain Monte Carlo simulations. The algorithm uses a renormalization-group matched coarse lattice action and prolongation operation to rapidly thermalize decorrelated initial configurations for evolution using a corresponding target lattice action defined at a finer scale. Focusing on nontopological long-distance observables in pure SU(3) gauge theory, we provide quantitative evidence that the slow modes of the Markov process, which provide the dominant contribution to the rethermalization time, have a suppressed contribution toward the continuum limit, despite their associated timescales increasing. Based on these numerical investigations, we conjecture that the prolongation operation used herein will produce ensembles that are indistinguishable from the target fine-action distribution for a sufficiently fine coupling at a given level of statistical precision, thereby eliminating the cost of rethermalization.
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicolaas; Chiuta, Steven; Bessarabov, Dmitri G.
2018-05-01
The experimental evaluation of an autothermal microchannel reactor for H2 production from NH3 decomposition is described. The reactor design incorporates an autothermal approach, with added NH3 oxidation, for coupled heat supply to the endothermic decomposition reaction. An alternating catalytic plate arrangement is used to accomplish this thermal coupling in a cocurrent flow strategy. A detailed analysis of the transient operating regime associated with reactor start-up and of the steady-state results is presented. The effects of operating parameters on reactor performance are investigated, specifically the NH3 decomposition flow rate, NH3 oxidation flow rate, and fuel-oxygen equivalence ratio. Overall, the reactor exhibits a rapid response time during start-up; within 60 min, H2 production reaches approximately 95% of steady-state values. The recommended operating point for steady-state H2 production corresponds to an NH3 decomposition flow rate of 6 NL min-1, an NH3 oxidation flow rate of 4 NL min-1, and a fuel-oxygen equivalence ratio of 1.4. Under these flows, an NH3 conversion of 99.8% and an H2-equivalent fuel cell power output of 0.71 kWe are achieved. The reactor shows good heat utilization with a thermal efficiency of 75.9%. An efficient autothermal reactor design is therefore demonstrated, which may be upscaled to a multi-kW H2 production system for commercial implementation.
Chen, Xiaoling; Xie, Ping; Zhang, Yuanyuan; Chen, Yuling; Yang, Fangmei; Zhang, Litai; Li, Xiaoli
2018-01-01
Recently, functional corticomuscular coupling (FCMC) between the cortex and the contralateral muscle has been used to evaluate motor function after stroke. The motor-control system is a closed-loop system regulated by complex self-regulating and interactive mechanisms operating on multiple spatial and temporal scales, and multiscale analysis can capture this inherent complexity. However, previous FCMC studies in stroke patients mainly focused on the coupling strength at a single time scale, without considering changes in the inherently directional and multiscale properties of sensorimotor systems. In this paper, a multiscale causal model, multiscale transfer entropy, was used to quantify the functional connection between the electroencephalogram over the scalp and the electromyogram from the flexor digitorum superficialis (FDS), recorded simultaneously during a steady-state grip task in eight stroke patients and eight healthy controls. Our results showed that healthy controls exhibited higher coupling as the scale reached up to about 12, and that the FCMC in the descending direction was stronger at certain scales (1, 7, 12, and 14) than in the ascending direction. Further analysis showed that these multi-time-scale characteristics were concentrated in the beta1 band at scale 11 and the beta2 band at scales 9, 11, 13, and 15. Compared to controls, the multiscale properties of the FCMC in stroke patients were altered: the strengths in both directions were reduced, and the gaps between the descending and ascending directions disappeared across all scales. Further analysis in specific bands showed that the reduced FCMC mainly involved the alpha2 band at higher scales and the beta1 and beta2 bands across almost all scales. This multiscale study confirms that the FCMC between brain and muscles exhibits complex and directional characteristics, and that in stroke these characteristics of the functional connection are disrupted by the structural lesion in the brain, which may impair coordination, feedback, and information transmission in efferent control and afferent feedback. The study demonstrates for the first time the multiscale and directional characteristics of the FCMC in stroke patients, and provides a preliminary observation for application in clinical assessment following stroke. PMID:29765351
Multiscale Methods for Nuclear Reactor Analysis
NASA Astrophysics Data System (ADS)
Collins, Benjamin S.
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady-state and transient operation. In the research presented here, methods are developed to improve the local solution using high-order methods with boundary conditions from a low-order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques that use diffusion theory and pin power reconstruction (PPR). Two different multiscale methods were developed and analyzed: the post-refinement multiscale method and the embedded multiscale method. The post-refinement multiscale method uses the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is "post-refinement" and thus has no impact on the global solution. The embedded multiscale method allows the local solver to change the global solution to provide an improved global and local solution. The post-refinement multiscale method is assessed using three core designs. When the local solution has more energy groups, the fixed-source method has some difficulties near the interface; however, the albedo method works well for all cases. In order to remedy the issue with boundary condition errors for the fixed-source method, a buffer region is used to act as a filter, which decreases the sensitivity of the solution to the boundary condition. Both the albedo and fixed-source methods benefit from the use of a buffer region. Unlike the post-refinement method, the embedded multiscale method alters the global solution. The ability to change the global solution allows for refinement in areas where the errors in the few-group nodal diffusion are typically large. The embedded method is shown to improve the global solution when it is applied to a MOX/LEU assembly interface, the fuel/reflector interface, and assemblies where control rods are inserted. The embedded method also allows for multiple solution levels to be applied in a single calculation. The addition of intermediate levels to the solution improves the accuracy of the method. Both multiscale methods considered here have benefits and drawbacks, but both can provide improvements over the current PPR methodology.
Contribution of regional-scale fire events to ozone and PM2.5 ...
Two specific fires from 2011 are tracked for local- to regional-scale contributions to ozone (O3) and fine particulate matter (PM2.5) using a freely available regulatory modeling system that includes the BlueSky wildland fire emissions tool, the Sparse Matrix Operator Kernel Emissions (SMOKE) model, the Weather Research and Forecasting (WRF) meteorological model, and the Community Multiscale Air Quality (CMAQ) photochemical grid model. The modeling system was applied to track the contributions from a wildfire (Wallow) and a prescribed fire (Flint Hills) using both source sensitivity and source apportionment approaches. The model-estimated fire contributions to primary and secondary pollutants are comparable between the source sensitivity (brute-force zero-out) and source apportionment (Integrated Source Apportionment Method) approaches. Model-estimated O3 enhancement relative to CO is similar to values reported in the literature, indicating that the modeling system captures the range of O3 inhibition possible near fires and O3 production both near the fire and downwind. O3 and peroxyacetyl nitrate (PAN) are formed in the fire plume and transported downwind along with highly reactive VOC species such as formaldehyde and acetaldehyde, which are both emitted by the fire and rapidly produced in the fire plume by VOC oxidation reactions. PAN and aldehydes contribute to continued downwind O3 production. The transport and thermal decomposition of PAN to nitrogen oxides (NOX) enables O3 production in areas
Huang, Cheng-Wei; Sue, Pei-Der; Abbod, Maysam F; Jiang, Bernard C; Shieh, Jiann-Shing
2013-08-08
To assess improvements in human body balance, a low-cost and portable device for measuring center of pressure (COP), known as the center of pressure and complexity monitoring system (CPCMS), has been developed for data logging and analysis. To prove that the system can estimate the magnitudes of different sways in comparison with the commercial Advanced Mechanical Technology Incorporation (AMTI) system, four sway tests were developed (i.e., eyes open, eyes closed, eyes open with water pad, and eyes closed with water pad) to produce different sway displacements. First, static and dynamic tests were conducted to investigate the feasibility of the system. Then, correlation tests between the CPCMS and AMTI systems were carried out for the four sway tests; the results are within the acceptable range. Furthermore, multivariate empirical mode decomposition (MEMD) and enhanced multivariate multiscale entropy (MMSE) analysis methods were used to analyze the COP data reported by the CPCMS and compare it with the AMTI system. The improvements of the CPCMS are 35% to 70% (eyes-open tests) and 60% to 70% (eyes-closed tests), with and without the water pad; the AMTI system shows improvements of 40% to 80% (eyes open) and 65% to 75% (eyes closed). The results indicate that the CPCMS can achieve results similar to the commercial product, so it can be used to assess balance.
NASA Astrophysics Data System (ADS)
Pellouchoud, Lenson; Reed, Evan
2014-03-01
With continual improvements in ultrafast optical spectroscopy and new multi-scale methods for simulating chemistry for hundreds of picoseconds, the opportunity is beginning to exist to connect experiments with simulations on the same timescale. We compute the optical properties of the liquid-phase energetic material nitromethane (CH3NO2) for the first 100 picoseconds behind the front of a simulated shock at 6.5 km/s, close to the experimentally observed detonation shock speed. We utilize molecular dynamics trajectories computed using the multi-scale shock technique (MSST) for time-resolved optical spectrum calculations based on both linear-response time-dependent DFT (TDDFT) and the Kubo-Greenwood (KG) formula within Kohn-Sham DFT. We find that TDDFT predicts optical conductivities 25-35% lower than KG-based values and provides better agreement with the experimentally measured index of refraction of unreacted nitromethane. We investigate the influence of electronic temperature on the KG spectra and find no significant effect at optical wavelengths. With all methods, the spectra evolve non-monotonically in time as shock-induced chemistry takes place. We attribute the time-resolved absorption at optical wavelengths to time-dependent populations of molecular decomposition products, including NO, CNO, CNOH, H2O, and larger molecules. Supported by NASA Space Technology Research Fellowship (NSTRF) #NNX12AM48H.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolution of numerical atmospheric and oceanic circulation models has steadily increased over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
A general CFD framework for fault-resilient simulations based on multi-resolution information fusion
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-10-01
We develop a general CFD framework for multi-resolution simulations targeting multiscale problems as well as resilience in exascale simulations, where faulty processors may lead to simulated fields that are gappy in space-time. We combine approximation theory and domain decomposition together with statistical learning techniques, e.g. coKriging, to estimate boundary conditions and minimize communications by performing independent parallel runs. To demonstrate this new simulation approach, we consider two benchmark problems. First, we solve the heat equation (a) on a small number of spatial "patches" distributed across the domain, simulated by finite differences at fine resolution, and (b) on the entire domain simulated at very low resolution, thus fusing multi-resolution models to obtain the final answer. Second, we simulate the flow in a lid-driven cavity in an analogous fashion, by fusing finite difference solutions obtained with fine and low resolution assuming gappy data sets. We investigate the influence of various parameters for this framework, including the correlation kernel, the size of a buffer employed in estimating boundary conditions, the coarseness of the resolution of auxiliary data, and the communication frequency across different patches in fusing the information at different resolution levels. In addition to its robustness and resilience, the new framework can be employed to generalize previous multiscale approaches involving heterogeneous discretizations or even fundamentally different flow descriptions, e.g. in continuum-atomistic simulations.
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-01-01
In a number of environmental studies, relationships between natural processes are often assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists in applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity of the data series. Second, the approach acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insights into this relationship. To illustrate the proposed methodology, it is applied to study the relationship between weather and cardiovascular mortality in Montreal, Canada. The results shed new light on the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better predictive performance and provides more detail than classical models concerning the relationship.
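A minimal sketch of the EMD-regression idea, assuming the open-source PyEMD package for the decomposition; the signals and the coefficient read-out are invented for illustration and do not reproduce the paper's epidemiological models.

    import numpy as np
    from PyEMD import EMD   # assumed dependency providing empirical mode decomposition

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 500)
    x = np.sin(np.pi * t) + 0.3 * np.sin(10 * np.pi * t)             # multi-scale predictor
    y = 2.0 * np.sin(np.pi * t) + 0.1 * rng.standard_normal(t.size)  # response

    imfs = EMD().emd(x, t)              # rows are IMFs, from fine to coarse scales

    # Regress the response on the IMFs; each coefficient then measures the
    # predictor-response relationship at one time scale.
    A = np.vstack([imfs, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.round(coef, 2))            # a large weight flags an influential scale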
MEMD-enhanced multivariate fuzzy entropy for the evaluation of complexity in biomedical signals.
Azami, Hamed; Smith, Keith; Escudero, Javier
2016-08-01
Multivariate multiscale entropy (mvMSE) has been proposed as a combination of the coarse-graining process and multivariate sample entropy (mvSE) to quantify the irregularity of multivariate signals. However, both the coarse-graining process and mvSE may be unreliable for short signals. Although the coarse-graining process can be replaced with multivariate empirical mode decomposition (MEMD), the relative instability of mvSE for short signals remains a problem. Here, we address this issue by proposing multivariate fuzzy entropy (mvFE) with a new fuzzy membership function. Results on white Gaussian noise show that mvFE leads to more reliable and stable results than mvSE, especially for short signals. Accordingly, we propose MEMD-enhanced mvFE to quantify the complexity of signals. The characteristics of brain regions influenced by partial epilepsy are investigated using focal and non-focal electroencephalogram (EEG) time series. To this end, the proposed MEMD-enhanced mvFE and mvSE are employed to discriminate focal EEG signals from non-focal ones. The results demonstrate that the MEMD-enhanced mvFE values have a smaller coefficient of variation than those obtained by MEMD-enhanced mvSE, even for long signals. The results also show that MEMD-enhanced mvFE quantifies focal and non-focal signals better than multivariate multiscale permutation entropy.
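For concreteness, here is a univariate, single-channel sketch of fuzzy entropy with an exponential membership function, together with the coarse-graining step it is usually paired with. This is our simplification: the paper's method is multivariate and MEMD-enhanced, and the parameter values below are conventional defaults rather than the authors'.

    import numpy as np

    def coarse_grain(x, scale):
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def fuzzy_entropy(x, m=2, r=0.15, n=2):
        x = (x - x.mean()) / x.std()
        def phi(dim):
            emb = np.array([x[i:i + dim] for i in range(len(x) - dim)])
            emb -= emb.mean(axis=1, keepdims=True)       # remove the local baseline
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            mu = np.exp(-(d ** n) / r)                   # fuzzy membership degree
            np.fill_diagonal(mu, 0.0)                    # exclude self-matches
            return mu.sum() / (len(emb) * (len(emb) - 1))
        return np.log(phi(m)) - np.log(phi(m + 1))

    sig = np.random.default_rng(1).standard_normal(1000)
    print([round(fuzzy_entropy(coarse_grain(sig, s)), 3) for s in (1, 2, 4)])

The smooth membership function is what stabilizes the statistic for short records: unlike the hard tolerance of sample entropy, no pattern count jumps discontinuously as distances cross r.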
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes, whose contrast spans four or more orders of magnitude, on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low Dynamic-Range (LDR) devices with at most two orders of magnitude of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
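The following single-band sketch conveys the flavour of such a scheme: the image is split into base and detail layers, the base is tonally compressed, and the detail is amplified by a gain that depends non-linearly on the local detail energy. It is our simplification, not the authors' multi-band algorithm, and the constants are arbitrary.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compress_and_enhance(img, sigma=4.0, base_gamma=0.5, g0=0.05):
        """img: grayscale frame scaled to [0, 1]."""
        base = gaussian_filter(img, sigma)            # low-frequency layer
        detail = img - base                           # high-frequency layer
        energy = np.sqrt(gaussian_filter(detail ** 2, sigma)) + 1e-4
        gain = np.clip(g0 / energy, 1.0, 4.0)         # boost weak detail, cap strong
        return np.clip(base ** base_gamma + gain * detail, 0.0, 1.0)

Capping the gain is what keeps halos and noise amplification in check, while the gamma on the base layer performs the actual dynamic range compression.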
The research of selection model based on LOD in multi-scale display of electronic map
NASA Astrophysics Data System (ADS)
Zhang, Jinming; You, Xiong; Liu, Yingzhen
2008-10-01
This paper proposes a selection model based on LOD to aid the multi-scale display of electronic maps. The ratio of the display scale to the map scale is used as the LOD operator. Rules for setting the LOD operator, namely the categorization, classification, elementary, and spatial geometry character rules, are also formulated.
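A toy reading of such a ratio-based selection rule, with thresholds and feature classes invented for the example:

    def visible_layers(display_scale_denom, map_scale_denom):
        """Pick feature classes from the LOD operator (scale-denominator ratio)."""
        lod = map_scale_denom / display_scale_denom   # ratio of display to map scale
        if lod >= 1.0:       # zoomed in at least to the source scale: show everything
            return ["landmarks", "roads", "buildings", "annotations"]
        if lod >= 0.5:
            return ["landmarks", "roads", "buildings"]
        if lod >= 0.1:
            return ["landmarks", "roads"]
        return ["landmarks"]   # strongly zoomed out: keep only coarse features

    print(visible_layers(display_scale_denom=50000, map_scale_denom=10000))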
Application of singular value decomposition to structural dynamics systems with constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Pinson, L. D.
1985-01-01
Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
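A compact numpy sketch of the construction for generic linear constraints C x = 0 (our notation; the original applies this to structural dynamics systems with mass and stiffness matrices M and K):

    import numpy as np

    def reduce_by_constraints(M, K, C, tol=1e-12):
        """Eliminate dependent coordinates of M x'' + K x = 0 subject to C x = 0."""
        _, s, vt = np.linalg.svd(C)
        rank = int(np.sum(s > tol * s[0]))
        T = vt[rank:].T                      # columns span the null space of C
        return T.T @ M @ T, T.T @ K @ T, T   # reduced M, K; recover x = T @ q

The constrained natural frequencies then follow from the reduced generalized eigenproblem, e.g. via scipy.linalg.eigh(Kr, Mr). Because T comes from an SVD, its columns are orthonormal, which is the source of the accuracy advantage noted above.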
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
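A digital analogue of this optical pipeline is easy to state. The sketch below (our parameter choices: 8-bit image, 3x3 window, numpy/scipy only) reproduces a median filter as 255 binary filtering-plus-thresholding passes:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def median_by_threshold_decomposition(img):
        """img: uint8 array; returns its 3x3 median filtering."""
        out = np.zeros(img.shape, dtype=np.int32)
        for t in range(1, 256):
            binary = (img >= t).astype(float)        # threshold component
            counts = uniform_filter(binary, size=3)  # linear, space-invariant step
            out += counts > 0.5                      # point-wise majority vote
        return out.astype(np.uint8)

Summing the thresholded majority votes recovers the grey-level median (a 3x3 window holds 9 pixels, so a mean above 0.5 means at least 5 are on); up to boundary handling this matches scipy.ndimage.median_filter(img, size=3). Replacing the majority rule by "any pixel on" or "all pixels on" gives the maximum and minimum filters mentioned above.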
NASA Astrophysics Data System (ADS)
Privault, Nicolas
2016-05-01
We construct differential forms of all orders and a covariant derivative together with its adjoint on the probability space of a standard Poisson process, using derivation operators. In this framework we derive a de Rham-Hodge-Kodaira decomposition as well as Weitzenböck and Clark-Ocone formulas for random differential forms. As in the Wiener space setting, this construction provides two distinct approaches to the vanishing of harmonic differential forms.
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast-limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion, signal-to-noise ratio (SNR), and contrast improvement index (CII).
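A much-reduced, single-level sketch of such a pipeline, assuming scikit-image's CLAHE (equalize_adapthist) and grayscale morphology; the paper itself works on a full Laplacian Gaussian pyramid with per-level processing:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import exposure, morphology

    def enhance(img):
        """img: float mammogram scaled to [0, 1]."""
        low = gaussian_filter(img, 2.0)                  # low-pass sub-image
        high = img - low                                 # detail sub-image
        shifted = (high - high.min()) / (np.ptp(high) + 1e-9)
        high_eq = exposure.equalize_adapthist(shifted)   # CLAHE on the detail band
        high_eq -= high_eq.mean()                        # back to zero-mean detail
        # a white top-hat accentuates small bright structures in the low band
        low += morphology.white_tophat(low, morphology.disk(5))
        return np.clip(low + high_eq, 0.0, 1.0)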
A multiscale physical model for the transient analysis of PEM water electrolyzer anodes.
Oliveira, Luiz Fernando L; Laref, Slimane; Mayousse, Eric; Jallut, Christian; Franco, Alejandro A
2012-08-07
Polymer electrolyte membrane water electrolyzers (PEMWEs) are electrochemical devices that can be used for the production of hydrogen. In a PEMWE the anode is the most complex electrode to study, owing to the high overpotential of the oxygen evolution reaction (OER), which is not yet widely understood. A physical bottom-up multi-scale transient model describing the operation of a PEMWE anode is proposed here. This model includes a detailed description of the elementary OER kinetics in the anode, a description of the non-equilibrium behavior of the nanoscale catalyst-electrolyte interface, and a microstructure-resolved description of the transport of charges and O2 at the micro- and mesoscales along the whole anode. The impact of different catalyst materials on the performance of the PEMWE anode and the sensitivity to operating conditions are evaluated from numerical simulations, and the results are discussed in comparison with experimental data.
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
Program summary
Program title: SecDec 2.0
Catalogue identifier: AEIR_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 156829
No. of bytes in distributed program, including test data, etc.: 2137907
Distribution format: tar.gz
Programming language: Wolfram Mathematica, Perl, Fortran/C++.
Computer: From a single PC to a cluster, depending on the problem.
Operating system: Unix, Linux.
RAM: Depending on the complexity of the problem.
Classification: 4.4, 5, 11.1.
Catalogue identifier of previous version: AEIR_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
Does the new version supersede the previous version?: Yes
Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ε, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way.
Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization.
Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters.
Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0.
Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
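The essence of the sector decomposition step can be seen on a textbook two-dimensional example (ours, not taken from the SecDec distribution): the overlapping singularity at x = y = 0 is remapped so the dimensional-regularization pole appears as an explicit factor, leaving a finite integral suitable for Monte Carlo integration.

    I(\epsilon) = \int_0^1 \! \int_0^1 \frac{dx \, dy}{(x+y)^{2-\epsilon}}
                = 2 \int_0^1 dx \int_0^x dy \, (x+y)^{\epsilon-2}                     % sector x > y, by symmetry
                = 2 \int_0^1 dx \, x^{\epsilon-1} \int_0^1 dt \, (1+t)^{\epsilon-2}   % substitute y = x t
                = \frac{2}{\epsilon} \int_0^1 dt \, (1+t)^{\epsilon-2},

so the coefficient of the 1/ε pole is the finite t-integral, which is exactly the Laurent-coefficient structure the program produces and then integrates numerically.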
NASA Astrophysics Data System (ADS)
Chen, X.
2016-12-01
This study presents a multi-scale approach combining the Mode Decomposition and Variance Matching (MDVM) method with the basic process of the Point-by-Point Regression (PPR) method. Unlike the widely applied PPR method, the scanning radius for each grid box was recalculated considering the impact of topography (i.e., mean altitudes and fluctuations), so that appropriate proxy records were selected as candidates for reconstruction. This multi-scale methodology provides not only the reconstructed gridded temperature but also the corresponding uncertainties at four typical timescales; a further advantage is that the spatial distribution of the uncertainty at different scales can be quantified. To assess the necessity of scale separation in calibration, using proxy record locations over Eastern Asia, we perform two sets of pseudo-proxy experiments (PPEs) based on different ensembles of climate model simulations. One consists of 7 simulated results from 5 models (BCC-CSM1-1, CSIRO-MK3L-1-2, HadCM3, MPI-ESM-P, and Giss-E2-R) of the "past1000" simulation from the Coupled Model Intercomparison Project Phase 5. The other is based on the simulations of the Community Earth System Model Last Millennium Ensemble (CESM-LME). The pseudo-record network was obtained by adding white noise, with signal-to-noise ratio (SNR) increasing from 0.1 to 1.0, to the simulated true state; the locations mainly followed the PAGES-2k network in Asia. In total, 400 years (1601-2000) of simulation were used for calibration and 600 years (1001-1600) for verification. The reconstructed results were evaluated by three metrics: 1) root mean squared error (RMSE), 2) correlation, and 3) reduction of error (RE) score. The PPE verification results show that, in comparison with an ordinary linear calibration method (variance matching), the RMSE and RE score of PPR-MDVM are improved, especially for areas with sparse proxy records. Notably, in some periods with large volcanic activity, the RMSE of MDVM becomes larger than that of VM for higher-SNR cases. This suggests that volcanic eruptions might blur the intrinsic multi-scale variability of the climate system, in which case the MDVM method shows less advantage.
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow.
Volatility behavior of visibility graph EMD financial time series from Ising interacting system
NASA Astrophysics Data System (ADS)
Zhang, Bo; Wang, Jun; Fang, Wen
2015-08-01
A financial market dynamics model is developed and investigated using a stochastic Ising system, the Ising model being the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of the return time series and of the corresponding IMF series derived from the empirical mode decomposition (EMD) method, and real stock market indices are studied comparatively against the simulated data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulated series has power-law tails, and that the assortative network exhibits the mixing-pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established by the Ising system is reasonable.
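A minimal construction of the natural visibility graph used above (a direct O(n^2) check on an invented test series; in practice the resulting degree distribution would be inspected for the power-law tails mentioned in the abstract):

    import numpy as np

    def visibility_edges(y):
        """Natural visibility graph of a time series, as a list of node pairs."""
        edges = []
        for a in range(len(y)):
            for b in range(a + 1, len(y)):
                # a and b are linked if every sample between them lies strictly
                # below the straight line joining (a, y[a]) and (b, y[b])
                line = y[a] + (y[b] - y[a]) * (np.arange(a + 1, b) - a) / (b - a)
                if np.all(y[a + 1:b] < line):
                    edges.append((a, b))
        return edges

    series = np.random.default_rng(2).standard_normal(300)
    deg = np.zeros(len(series), dtype=int)
    for a, b in visibility_edges(series):
        deg[a] += 1
        deg[b] += 1
    print(np.bincount(deg))    # empirical degree distribution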
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
Design and Scheduling of Microgrids using Benders Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Ayyanar, Raja
2016-11-21
Laterals in a distribution feeder with relatively high PV generation compared to the load can be operated as microgrids to achieve reliability, power quality and economic benefits. However, renewable resources are intermittent and stochastic in nature. A novel approach for sizing and scheduling an energy storage system and a microturbine for reliable operation of microgrids is proposed. The size and schedule of the energy storage system and microturbine are determined using Benders decomposition, treating PV generation as a stochastic resource.
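The decomposition itself fits in a few lines on a toy sizing problem. The sketch below (invented numbers, scipy's linprog as the master solver, closed-form subproblems) shows the cut-generation loop, not the paper's actual microgrid model:

    import numpy as np
    from scipy.optimize import linprog

    demands = np.array([2.0, 5.0, 8.0])   # PV-shortfall scenarios (kWh)
    probs = np.full(3, 1.0 / 3.0)
    c_size, c_gen = 10.0, 50.0            # $/kWh storage, $/kWh backup generation

    cuts = [[] for _ in demands]          # per-scenario Benders optimality cuts
    x = 0.0                               # first-stage decision: storage size
    for it in range(20):
        # Subproblem per scenario: Q_s(x) = min c_gen*y  s.t. y >= d_s - x, y >= 0
        Q = c_gen * np.maximum(demands - x, 0.0)
        slope = np.where(demands > x, -c_gen, 0.0)           # subgradient (dual value)
        for s in range(len(demands)):
            cuts[s].append((slope[s], Q[s] - slope[s] * x))  # theta_s >= a*x + b
        # Master over (x, theta_1..theta_S): min c_size*x + sum_s p_s * theta_s
        A_ub, b_ub = [], []
        for s in range(len(demands)):
            for a, b in cuts[s]:
                row = [a] + [0.0] * len(demands)
                row[1 + s] = -1.0
                A_ub.append(row)           # encodes a*x - theta_s <= -b
                b_ub.append(-b)
        res = linprog([c_size] + list(probs), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, 10)] + [(0, None)] * len(demands))
        upper = c_size * x + probs @ Q     # true cost of the incumbent x
        if upper - res.fun < 1e-6:         # master value is a valid lower bound
            break
        x = res.x[0]
    print(f"storage size x = {x:.2f} kWh after {it + 1} iterations")

Each iteration the subproblems price the incumbent design and return cuts; the master re-optimizes the design against all cuts collected so far, and the loop stops when the upper and lower bounds meet.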
NASA Astrophysics Data System (ADS)
Villegas, J. C.; Salazar, J. F.; Arias, P. A.; León, J. D.
2017-12-01
Land cover transformation is currently one of the most important challenges in tropical South America. These transformations occur both because of climate-related ecological perturbations and in response to ongoing socio-economic processes. A fundamental difference between these two drivers is the spatial and temporal scale at which they operate. However, when considered in a larger context, both drivers affect the ability of ecosystems to provide fundamental services to society. In this work, we use a multi-scale approach to identify key mechanisms through which land cover transformation significantly affects ecological, hydrological and ecoclimatological dynamics, potentially leading to the loss of societally critical regulation services. We present a suite of examples spanning multiple spatial and temporal scales that illustrate the effects of land cover transformations on ecological, hydrological, biogeochemical and climatic functions in tropical South America. These examples highlight important management challenges related to global-change effects, as well as the need to consider the feedbacks and interactions between multi-scale processes.
NASA Astrophysics Data System (ADS)
Andrus, Matthew
Stroke is a leading cause of death and disability in the United States; however, there remains no rapid diagnostic test for differentiating between ischemic and hemorrhagic stroke within the three-hour treatment window. Here we describe the design of a multiscale microfluidic module with an embedded time-of-flight nanosensor for the clinical diagnosis of stroke. The nanosensor utilizes two synthetic pores in series, relying on resistive pulse sensing (RPS) to measure the passage of molecules through the time-of-flight tube. Once the nanosensor design was completed, a multiscale module to process patient samples and house the sensors was designed in a similar iterative process. This design utilizes pillar arrays, called "pixels", to immobilize oligonucleotides from patient samples so that ligase detection reactions (LDR) can be carried out. COMSOL simulations were performed to understand the operation and behavior of both the nanosensor and the modular chip once the designs were completed.
Multiscale Models for the Two-Stream Instability
NASA Astrophysics Data System (ADS)
Joseph, Ilon; Dimits, Andris; Banks, Jeffrey; Berger, Richard; Brunner, Stephan; Chapman, Thomas
2017-10-01
Interpenetrating streams of plasma found in many important scenarios in nature and in the laboratory can develop kinetic two-stream instabilities that exchange momentum and energy between the streams. A quasilinear model for the electrostatic two-stream instability is under development as a component of a multiscale model that couples fluid simulations to kinetic theory. Parameters of the model will be validated by comparison with full kinetic simulations using LOKI, and efficient strategies for the numerical solution of the quasilinear model and for coupling to the fluid model will be discussed. Extending the kinetic models into the collisional regime requires an efficient treatment of the collision operator. Useful reductions of the collision operator relative to the full multi-species Landau-Fokker-Planck operator are being explored. These are further motivated both by careful consideration of the parameter orderings relevant to two-stream scenarios and by the particular 2D+2V phase space used in the LOKI code. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 17-ERD-081.
Nucleon Momentum and Spin Decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Y. M.
We construct a gauge-invariant canonical momentum operator which satisfies the canonical commutation relation, resolving the old controversy over the canonical versus kinematic momentum of a charged particle in gauge theories. With this we show how to reduce the gauge-independent momentum and spin decompositions of composite particles to those of the constituents in QED and QCD, which had been thought to be impossible. Moreover, we show that there are two logically acceptable nucleon momentum and spin decompositions, depending on which gluons we identify as constituents of the nucleon.
Differential morphology and image processing.
Maragos, P
1996-01-01
Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
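The discrete distance transforms mentioned here are exactly such min-sum difference equations. A classic instance is the two-pass 3-4 chamfer recursion, sketched below with plain Python loops for clarity (our parameter choices):

    import numpy as np

    def chamfer_distance(foreground):
        """Approximate Euclidean distance from each pixel to a True pixel."""
        h, w = foreground.shape
        d = np.where(foreground, 0, 10 ** 6).astype(np.int64)
        for i in range(h):                 # forward pass over causal neighbours
            for j in range(w):
                if i > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                    if j < w - 1:
                        d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j - 1] + 3)
        for i in range(h - 1, -1, -1):     # backward pass, anti-causal neighbours
            for j in range(w - 1, -1, -1):
                if i < h - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
                    if j < w - 1:
                        d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i, j + 1] + 3)
        return d / 3.0                     # weights 3 and 4 approximate 1 and sqrt(2)

Each pass is a min-sum recursion of precisely the 2-D max/min-sum type analyzed in the paper; in slope-transform terms it acts as the bandpass slope filter described above.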
DOT National Transportation Integrated Search
2013-10-21
Today many intersections are operated based on data input from nonintrusive video detection systems. With those systems the video detectors can be easily deployed/modified for different application requirements. This research project is initiated to ...
Corsini, Chiara; Baker, Catriona; Kung, Ethan; Schievano, Silvia; Arbia, Gregory; Baretta, Alessia; Biglino, Giovanni; Migliavacca, Francesco; Dubini, Gabriele; Pennati, Giancarlo; Marsden, Alison; Vignon-Clementel, Irene; Taylor, Andrew; Hsia, Tain-Yen; Dorfman, Adam
2014-01-01
In patients with congenital heart disease and a single ventricle (SV), ventricular support of the circulation is inadequate, and staged palliative surgery (usually 3 stages) is needed for treatment. In the various palliative surgical stages individual differences in the circulation are important and patient-specific surgical planning is ideal. In this study, an integrated approach between clinicians and engineers has been developed, based on patient-specific multi-scale models, and is here applied to predict stage 2 surgical outcomes. This approach involves four distinct steps: (1) collection of pre-operative clinical data from a patient presenting for SV palliation, (2) construction of the pre-operative model, (3) creation of feasible virtual surgical options which couple a three-dimensional model of the surgical anatomy with a lumped parameter model (LPM) of the remainder of the circulation and (4) performance of post-operative simulations to aid clinical decision making. The pre-operative model is described, agreeing well with clinical flow tracings and mean pressures. Two surgical options (bi-directional Glenn and hemi-Fontan operations) are virtually performed and coupled to the pre-operative LPM, with the hemodynamics of both options reported. Results are validated against postoperative clinical data. Ultimately, this work represents the first patient-specific predictive modeling of stage 2 palliation using virtual surgery and closed-loop multi-scale modeling.
Entanglement branching operator
NASA Astrophysics Data System (ADS)
Harada, Kenji
2018-01-01
We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, a promising theoretical tool for many-body systems. We can optimize an entanglement branching operator by solving a minimization problem based on squeezing operators. Entanglement branching is a useful new operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method to capture a proper renormalization flow in tensor network space. This new method yields a new type of tensor network states. The second example is a many-body decomposition of a tensor using an entanglement branching operator, which enables perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.
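The elementary primitive underlying such decompositions is splitting one tensor into two across a chosen grouping of legs. A generic matricize-and-SVD sketch (the standard technique, not the authors' branching operator itself) reads:

    import numpy as np

    def split_tensor(T, left_legs, chi):
        """Split T into tensors A, B joined by a bond of dimension <= chi."""
        right_legs = [k for k in range(T.ndim) if k not in left_legs]
        M = np.transpose(T, list(left_legs) + right_legs)
        lshape = M.shape[:len(left_legs)]
        rshape = M.shape[len(left_legs):]
        u, s, vt = np.linalg.svd(M.reshape(int(np.prod(lshape)), -1),
                                 full_matrices=False)
        chi = min(chi, len(s))                  # truncate the bond dimension
        A = u[:, :chi].reshape(*lshape, chi)
        B = (s[:chi, None] * vt[:chi]).reshape(chi, *rshape)
        return A, B

    T = np.random.rand(2, 3, 4, 5)
    A, B = split_tensor(T, left_legs=(0, 2), chi=6)
    print(A.shape, B.shape)    # (2, 4, 6) (6, 3, 5)

An entanglement branching operator goes beyond this two-way split by redirecting a composite entanglement flow into more than two branches, which a single SVD cannot do.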
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trushkin, A. N.; Kochetov, I. V.
The kinetic model of toluene decomposition in nonequilibrium low-temperature plasma generated by a pulse-periodic discharge operating in a mixture of nitrogen and oxygen is developed. The results of numerical simulation of plasma-chemical conversion of toluene are presented; the main processes responsible for C6H5CH3 decomposition are identified; the contribution of each process to total removal of toluene is determined; and the intermediate and final products of C6H5CH3 decomposition are identified. It was shown that toluene in pure nitrogen is mostly decomposed in its reactions with metastable N2(A3Σu+) and N2(a'1Σu-) molecules. In the presence of oxygen, in the N2:O2 gas mixture, the largest contribution to C6H5CH3 removal is made by the hydroxyl radical OH, which is generated in this mixture exclusively due to plasma-chemical reactions between toluene and oxygen decomposition products. Numerical simulation showed the existence of an optimum oxygen concentration in the mixture, at which toluene removal is maximum at a fixed energy deposition.
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than that in a traditional optical imaging system. Because of the steep slope of a large-aperture optical component, interference fringes may be quite dense in interference imaging, and a deep phase gradient may cause a loss of phase information. Therefore, an efficient edge detection method is urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is increasingly suppressed as the scale grows. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. Firstly, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima along gradient directions. Because these maxima still contain noise, the adaptive threshold method is used to select among them: a point whose modulus exceeds the threshold value is taken as a boundary point. Finally, we apply erosion and dilation to the resulting image to obtain consecutive image boundaries.
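A minimal sketch of the pipeline described above, assuming a single-level 2D wavelet decomposition (via the pywt package), a simple per-tile adaptive threshold, and morphology from scipy; the biorthogonal wavelet, tile size, and threshold rule are illustrative stand-ins, not the authors' exact settings.

```python
import numpy as np
import pywt
from scipy import ndimage

def wavelet_edges(img, wavelet="bior3.3", tile=32):
    """Edges via wavelet detail coefficients, a per-tile adaptive
    threshold, and morphological dilation/erosion (closing)."""
    # Single-level 2D decomposition: approximation + (H, V, D) details.
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    modulus = np.hypot(cH, cV)  # gradient-like modulus of detail coefficients

    # Adaptive threshold: mean + k*std inside each tile (k is illustrative).
    edges = np.zeros_like(modulus, dtype=bool)
    for i in range(0, modulus.shape[0], tile):
        for j in range(0, modulus.shape[1], tile):
            block = modulus[i:i+tile, j:j+tile]
            thr = block.mean() + 2.0 * block.std()
            edges[i:i+tile, j:j+tile] = block > thr

    # Dilation followed by erosion links boundary points into contours.
    return ndimage.binary_erosion(ndimage.binary_dilation(edges))

if __name__ == "__main__":
    img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0  # toy test image
    print(wavelet_edges(img).sum(), "edge pixels")
```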
NASA Astrophysics Data System (ADS)
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
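The base/detail split at the heart of such multi-scale fusion schemes can be sketched as follows; here a plain Gaussian filter stands in for the rolling-guidance/Gaussian MSD, local variance stands in for the visual saliency map, and a max-absolute rule replaces the WLS-optimized detail fusion, so this is a structural illustration rather than the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(ir, vis, sigma=4.0):
    """Two-scale fusion: saliency-weighted base layers plus max-abs detail layers."""
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Crude saliency proxy (local variance); a stand-in for the paper's VSM.
    sal_ir = np.maximum(gaussian_filter(ir**2, sigma) - base_ir**2, 0)
    sal_vis = np.maximum(gaussian_filter(vis**2, sigma) - base_vis**2, 0)
    w = sal_ir / (sal_ir + sal_vis + 1e-12)       # per-pixel base-layer weight

    fused_base = w * base_ir + (1.0 - w) * base_vis
    # Max-absolute detail rule; a stand-in for the paper's WLS-optimized fusion.
    fused_det = np.where(np.abs(det_ir) > np.abs(det_vis), det_ir, det_vis)
    return fused_base + fused_det

rng = np.random.default_rng(0)
ir, vis = rng.random((64, 64)), rng.random((64, 64))
print(fuse(ir, vis).shape)
```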
Decomposition of the Multistatic Response Matrix and Target Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, D H
2008-02-14
Decomposition of the time-reversal operator for an array, or equivalently the singular value decomposition of the multistatic response matrix, has been used to improve imaging and localization of targets in complicated media. Typically, each singular value is associated with one scatterer, even though it has been shown in several cases that a single scatterer can generate several singular values. In this paper we review the analysis of the time-reversal operator (TRO), or equivalently the multistatic response matrix (MRM), of an array system and a small target. We begin with two-dimensional scattering from a small cylinder, then show the results for a small non-spherical target in three dimensions. We show that the number and magnitudes of the singular values contain information about target composition, shape, and orientation.
Study on Endurance and Performance of Impregnated Ruthenium Catalyst for Thruster System.
Kim, Jincheol; Kim, Taegyu
2018-02-01
Performance and endurance of a Ru catalyst were studied for a nitrous oxide monopropellant thruster system. The thermal decomposition of N2O requires a considerably high temperature, which makes it difficult to utilize as a thruster propellant, but the decomposition temperature can be reduced by using a catalyst that promotes the decomposition reaction of the propellant. However, the catalyst used in the thruster is frequently exposed to a high-temperature, high-pressure environment. Therefore, the change in the state of the catalyst over the course of thruster operation was analyzed. Characterization of the catalyst under thruster operating conditions was performed using FE-SEM and EDS. As a result, performance degradation occurred due to volatilization of the Ru catalyst and a reduction of the specific surface area accompanying the phase change of Al2O3.
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
Reliable structural information from multiscale decomposition with the Mellor-Brady filter
NASA Astrophysics Data System (ADS)
Szilágyi, Tünde; Brady, Michael
2009-08-01
Image-based medical diagnosis typically relies on the (poorly reproducible) subjective classification of textures in order to differentiate between diseased and healthy pathology. Clinicians claim that significant benefits would arise from quantitative measures to inform clinical decision making. The first step in generating such measures is to extract local image descriptors, from noise-corrupted and often spatially and temporally coarse-resolution medical signals, that are invariant to illumination, translation, scale and rotation of the features. The Dual-Tree Complex Wavelet Transform (DT-CWT) provides a wavelet multiresolution analysis (WMRA) tool with good properties, e.g. in 2D, but it has limited rotational selectivity and requires computationally intensive steering due to the inherently 1D operations performed. The monogenic signal, which is defined in n >= 2 dimensions via the Riesz transform, gives excellent orientation information without the need for steering. Recent work has suggested the Monogenic Riesz-Laplace wavelet transform as a possible tool for integrating these two concepts into a coherent mathematical framework. We have found that the proposed construction suffers from a lack of rotational invariance and is not optimal for retrieving local image descriptors. In this paper we show: 1. Local frequency and local phase from the monogenic signal are not equivalent, especially in the phase congruency model of a "feature", and so they are not interchangeable for medical image applications. 2. The accuracy of local phase computation may be improved by estimating the denoising parameters while maximizing a new measure of "featureness".
The computational complexity of elliptic curve integer sub-decomposition (ISD) method
NASA Astrophysics Data System (ADS)
Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza
2014-07-01
The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of the values k1 and k2, which are not bounded by ±C√n, yields new integers k11, k12, k21 and k22 that are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The proportion of scalars for which the multiplication succeeds increases under the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of the operations involved; these include elliptic curve operations and finite field operations.
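For context, a minimal sketch of the baseline that GLV/ISD-style decompositions accelerate: plain double-and-add scalar multiplication on a short-Weierstrass curve, costing roughly one doubling per bit of k. The tiny field and curve parameters below are illustrative only.

```python
# Affine short-Weierstrass arithmetic over F_p; None encodes the point at infinity.
P_MOD, A = 97, 2   # toy field and curve y^2 = x^3 + 2x + 3 (illustrative)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: one doubling per bit, one add per set bit."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)            # double
        if bit == "1":
            R = add(R, P)        # add
    return R

print(scalar_mult(13, (3, 6)))   # (3, 6) satisfies 6^2 = 3^3 + 2*3 + 3 mod 97
```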
Hierarchical Petascale Simulation Framework for Stress Corrosion Cracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vashishta, Priya
2014-12-01
Reaction Dynamics in Energetic Materials: Detonation is a prototype of mechanochemistry, in which mechanically and thermally induced chemical reactions far from equilibrium exhibit vastly different behaviors. It is also one of the hardest multiscale physics problems, in which diverse length and time scales play important roles. The CACS group has performed multimillion-atom reactive MD simulations to reveal a novel two-stage reaction mechanism during the detonation of cyclotrimethylenetrinitramine (RDX) crystal. Rapid production of N2 and H2O within ~10 ps is followed by delayed production of CO molecules within ~1 ns. They found that further decomposition towards the final products is inhibited by the formation of large metastable C- and O-rich clusters with fractal geometry. The CACS group has also simulated the oxidation dynamics of close-packed aggregates of aluminum nanoparticles passivated by oxide shells. Their simulation results suggest an unexpectedly active role of the oxide shell as a nanoreactor.
Multiscale Analysis of Rapidly Rotating Dynamo Simulations
NASA Astrophysics Data System (ADS)
Orvedahl, R.; Calkins, M. A.; Featherstone, N. A.
2017-12-01
The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results of simulating rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code Rayleigh to generate a suite of direct numerical simulations. Each simulation uses the Boussinesq approximation and is characterized by an Ekman number Ek = ν/(Ω L²) of 10⁻⁵. We vary the degree of convective forcing to obtain a range of convective Rossby numbers. The resulting flows and magnetic structures are analyzed using a Reynolds decomposition. We determine the relative importance of each term in the scale-separated governing equations and estimate the relevant spatial scales responsible for generating the mean magnetic field.
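A Reynolds decomposition of the kind used here splits each field into a mean and a fluctuating part; a minimal sketch on synthetic data, where the averaging axis (azimuthal) and the field itself are illustrative assumptions:

```python
import numpy as np

# Synthetic velocity field u(r, theta): mean shear plus random fluctuations.
rng = np.random.default_rng(0)
u = np.linspace(0, 1, 64)[:, None] + 0.1 * rng.standard_normal((64, 128))

u_mean = u.mean(axis=1, keepdims=True)   # azimuthal mean  <u>
u_prime = u - u_mean                     # fluctuation     u' = u - <u>

# The fluctuation's mean vanishes by construction; its variance measures the
# energy feeding the fluctuating (small-scale) terms of the governing equations.
print(np.allclose(u_prime.mean(axis=1), 0.0), u_prime.var())
```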
NASA Astrophysics Data System (ADS)
Florindo, João Batista
2018-04-01
This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach for modeling and, in particular, identifying complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point have been proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, and SSA is one of the most powerful techniques for this purpose. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.
Cotter, C. J.
2017-01-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow. PMID:28989316
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm based on multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is passed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals were recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm was evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, higher than that of the other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
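A minimal sketch of a wavelet-packet Shannon-entropy feature vector of the kind described above, using the pywt package; the wavelet, depth, and the background-subtraction step are placeholders for the authors' actual settings.

```python
import numpy as np
import pywt

def wp_shannon_entropy(signal, wavelet="db4", level=3):
    """Shannon entropy of each wavelet-packet node at the given level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = np.asarray(node.data, dtype=float)
        p = c**2 / (np.sum(c**2) + 1e-12)             # energy distribution in the node
        feats.append(-np.sum(p * np.log(p + 1e-12)))  # Shannon entropy of the node
    return np.array(feats)

sig = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
bg = 0.3 * np.random.randn(1024)                # stand-in background recording
features = wp_shannon_entropy(sig) - wp_shannon_entropy(bg)  # background subtraction
print(features.round(3))
```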
3D Gabor wavelet based vessel filtering of photoacoustic images.
Haq, Israr Ul; Nagoaka, Ryo; Makino, Takahiro; Tabata, Takuya; Saijo, Yoshifumi
2016-08-01
Filtering and segmentation of vasculature is an important issue in medical imaging. The visualization of vasculature is crucial for early diagnosis and therapy in numerous medical applications. This paper investigates the use of the Gabor wavelet to enhance vasculature while eliminating the noise due to the size, sensitivity and aperture of the detector in 3D Optical Resolution Photoacoustic Microscopy (OR-PAM). A detailed multi-scale analysis of wavelet filtering and a Hessian-based method is carried out to extract vessels of different sizes, since blood vessels usually vary within a range of radii. The proposed algorithm first enhances the vasculature in the image, and tubular structures are then classified by eigenvalue decomposition of the local Hessian matrix at each voxel. The algorithm is tested on non-invasive experiments and shows appreciable results in enhancing vasculature in photoacoustic images.
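The Hessian eigenvalue step can be sketched as below for a 2D slice (the 3D case adds one more derivative pair and a 3x3 eigenproblem per voxel); the Gaussian scale and the simple tubularity response are generic stand-ins, not the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigs_2d(img, sigma=2.0):
    """Per-pixel eigenvalues of the scale-normalized Hessian of a 2D image."""
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[Hxx, Hxy], [Hxy, Hyy]].
    tr, det = Hxx + Hyy, Hxx * Hyy - Hxy**2
    disc = np.sqrt(np.maximum((tr / 2)**2 - det, 0.0))
    return tr / 2 - disc, tr / 2 + disc   # lam1 <= lam2

img = np.zeros((64, 64)); img[30:34, :] = 1.0     # toy bright "vessel"
lam1, lam2 = hessian_eigs_2d(img)
# Bright tubular structure: one strongly negative eigenvalue across the tube.
vesselness = np.where(lam1 < 0, np.abs(lam1), 0.0)
print(vesselness.max())
```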
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
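The factor-once, solve-many pattern that makes the scheme efficient can be sketched with SciPy's sparse LU; the tridiagonal matrix below is a toy stand-in for the banded-sparse Newmark-Beta system matrix, and SciPy also provides the RCM reordering via scipy.sparse.csgraph.reverse_cuthill_mckee.

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

n, steps = 1000, 200
# Stand-in banded system matrix; constant over the whole simulation.
A = csc_matrix(diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)))

lu = splu(A)            # LU decomposition performed once, up front
x = np.zeros(n)
for _ in range(steps):  # time marching: only cheap triangular solves per step
    b = np.random.rand(n)      # stand-in right-hand side from the field update
    x = lu.solve(b)
print(x[:3])
```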
NASA Technical Reports Server (NTRS)
John, Bonnie; Vera, Alonso; Matessa, Michael; Freed, Michael; Remington, Roger
2002-01-01
CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization are now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. Among the methods available to model human users, the Goals, Operators, Methods, and Selection (GOMS) method [6, 21] has been the most widely used, providing accurate, often zero-parameter, predictions of the routine performance of skilled users in a wide range of procedural tasks [6, 13, 15, 27, 28]. GOMS is meant to model routine behavior. The user is assumed to have methods that apply sequences of operators to achieve a goal. Selection rules are applied when there is more than one method to achieve a goal. Many routine tasks lend themselves well to such decomposition. Decomposition produces a representation of the task as a set of nested goal states that include an initial state and a final state. The iterative decomposition into goals and nested subgoals can terminate in primitives of any desired granularity, the choice of level of detail depending on the predictions required. Although GOMS has proven useful in HCI, tools to support the construction of GOMS models have not yet come into general use.
NASA Astrophysics Data System (ADS)
Cortés–Vega, Luis A.
2017-12-01
In this paper, we consider modular multiplicative inverse operators (MMIOs) of the form J_{m+n}: (ℤ/(m+n)ℤ)* → ℤ/(m+n)ℤ, J_{m+n}(a) = a⁻¹. A general method to decompose J_{m+n}(·) over the group of units (ℤ/(m+n)ℤ)* is derived. As a result, an interesting decomposition law for these operators over (ℤ/(m+n)ℤ)* is established. Numerical examples illustrating the new results are given. This complements some recent results obtained by the author for MMIOs defined over groups of units of the form (ℤ/ϱℤ)* with ϱ = m × n > 2.
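For concreteness, a minimal sketch of the underlying operation J(a) = a⁻¹ mod m, computed with the extended Euclidean algorithm (or, equivalently, Python's built-in pow(a, -1, m)); the decomposition law itself is specific to the paper and not reproduced here.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """a^{-1} mod m for a in the group of units (Z/mZ)*."""
    g, x, _ = ext_gcd(a % m, m)
    if g != 1:
        raise ValueError(f"{a} is not a unit modulo {m}")
    return x % m

m = 97  # any modulus; the units are the residues coprime to m
for a in (3, 10, 96):
    inv = mod_inverse(a, m)
    assert inv == pow(a, -1, m) and (a * inv) % m == 1
    print(a, "->", inv)
```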
Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data
2009-05-01
operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of... report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational...
Jensen, Erik C.; Stockton, Amanda M.; Chiesl, Thomas N.; Kim, Jungkyu; Bera, Abhisek; Mathies, Richard A.
2013-01-01
A digitally programmable microfluidic Automaton consisting of a 2-dimensional array of pneumatically actuated microvalves is programmed to perform new multiscale mixing and sample processing operations. Large (µL-scale) volume processing operations are enabled by precise metering of multiple reagents within individual nL-scale valves followed by serial repetitive transfer to programmed locations in the array. A novel process exploiting new combining valve concepts is developed for continuous rapid and complete mixing of reagents in less than 800 ms. Mixing, transfer, storage, and rinsing operations are implemented combinatorially to achieve complex assay automation protocols. The practical utility of this technology is demonstrated by performing automated serial dilution for quantitative analysis as well as the first demonstration of on-chip fluorescent derivatization of biomarker targets (carboxylic acids) for microchip capillary electrophoresis on the Mars Organic Analyzer. A language is developed to describe how unit operations are combined to form a microfluidic program. Finally, this technology is used to develop a novel microfluidic 6-sample processor for combinatorial mixing of large sets (>26 unique combinations) of reagents. The digitally programmable microfluidic Automaton is a versatile programmable sample processor for a wide range of process volumes, for multiple samples, and for different types of analyses. PMID:23172232
Engineering Digestion: Multiscale Processes of Food Digestion.
Bornhorst, Gail M; Gouseti, Ourania; Wickham, Martin S J; Bakalis, Serafim
2016-03-01
Food digestion is a complex, multiscale process that has recently become of interest to the food industry due to the developing links between food and health or disease. Food digestion can be studied by using either in vitro or in vivo models, each having certain advantages or disadvantages. The recent interest in food digestion has resulted in a large number of studies in this area, yet few have provided an in-depth, quantitative description of digestion processes. To provide a framework for developing such quantitative comparisons, a summary is given here of the parallels between digestion processes and unit operations in the food and chemical industries. Characterization parameters and phenomena are suggested for each step of digestion. In addition to the quantitative characterization of digestion processes, the multiscale aspect of digestion must also be considered. In both food systems and the gastrointestinal tract, multiple length scales are involved in food breakdown, mixing, and absorption. These different length scales influence digestion processes independently as well as through interrelated mechanisms. To facilitate optimized development of functional food products, a multiscale, engineering approach may be taken to describe food digestion processes. A framework for this approach is described in this review, along with examples that demonstrate the importance of process characterization and of the multiple, interrelated length scales in the digestion process. © 2016 Institute of Food Technologists®
Cutrì, Elena; Meoli, Alessio; Dubini, Gabriele; Migliavacca, Francesco; Hsia, Tain-Yen; Pennati, Giancarlo
2017-09-01
Hypoplastic left heart syndrome is a complex congenital heart disease characterised by the underdevelopment of the left ventricle, normally treated with a three-stage surgical repair. In this study, a multiscale closed-loop cardio-circulatory model is created to reproduce the pre-operative condition of a patient suffering from this pathology, and virtual surgery is performed. Firstly, cardio-circulatory parameters are estimated using a fully closed-loop cardio-circulatory lumped parameter model. Secondly, a 3D standalone FEA model is built to obtain active and passive ventricular characteristics and the unloaded reference state. Lastly, the 3D model of the single ventricle is coupled to the lumped parameter model of the circulation, obtaining a multiscale closed-loop pre-operative model. Lacking any information on the fibre orientation, two cases were simulated: (i) fibres distributed as in the physiological right ventricle and (ii) fibres as in the physiological left ventricle. Once the pre-operative condition was satisfactorily simulated for the two cases, virtual surgery was performed. The post-operative results in the two cases showed similar hemodynamic behaviour but different local mechanics. This finding suggests that knowledge of the patient-specific fibre arrangement is important to correctly estimate the single ventricle's working condition and consequently can be valuable to support clinical decisions. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
NASA Astrophysics Data System (ADS)
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves propagating over highly variable, non-smooth topographies are considered. For this three-dimensional problem, a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations along the free surface validate the DtN formulation in the presence of a large-amplitude, rapidly varying topography. An alternative method, based on conformal mapping, is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
Methanol decomposition bottoming cycle for IC engines
NASA Technical Reports Server (NTRS)
Purohit, G.; Houseman, J.
1979-01-01
This paper presents the concept of methanol decomposition using engine exhaust heat, and examines its potential for use in the operation of passenger cars, diesel trucks, and diesel-electric locomotives. Energy economy improvements of 10-20% are calculated over the representative driving cycles without a net loss in power. Some reductions in exhaust emissions are also projected.
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Parallel processing for pitch splitting decomposition
NASA Astrophysics Data System (ADS)
Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris
2009-10-01
Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
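The "coloring" step mentioned above amounts to 2-coloring a conflict graph of polygons whose spacing falls below the double-patterning threshold; a minimal sketch with a plain BFS (the construction of the conflict graph from real layout geometry is elided):

```python
from collections import deque

def two_color(adj):
    """BFS 2-coloring of a conflict graph; returns colors, or None on an odd cycle."""
    colors = {}
    for start in adj:                      # handle disconnected components
        if start in colors:
            continue
        colors[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in colors:
                    colors[v] = colors[u] ^ 1   # opposite mask assignment
                    q.append(v)
                elif colors[v] == colors[u]:
                    return None                 # conflict: needs stitching/repair
    return colors

# Toy conflict graph: polygons 0-1-2 in a chain, polygon 3 isolated.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(two_color(adj))   # {0: 0, 1: 1, 2: 0, 3: 0}
```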
A Mathematical Motivation for Complex-Valued Convolutional Networks.
Tygert, Mark; Bruna, Joan; Chintala, Soumith; LeCun, Yann; Piantino, Serkan; Szlam, Arthur
2016-05-01
A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors, followed by (2) taking the absolute value of every entry of the resulting vectors, followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as data-driven multiscale windowed power spectra, data-driven multiscale windowed absolute spectra, data-driven multiwavelet absolute values, or (in their most general configuration) data-driven nonlinear multiwavelet packets. Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (e.g., logistic or tanh) nonlinearities, or max pooling, for example, do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.
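A minimal 1D sketch of the three operations named above (convolution with a complex-valued filter, entrywise absolute value, local averaging), using a windowed complex exponential as the filter so the output approximates one windowed power-spectrum channel; all sizes and the test signal are illustrative.

```python
import numpy as np

def complex_convnet_layer(x, freq=0.1, width=16, pool=4):
    """One layer: complex convolution -> |.| -> local averaging."""
    n = np.arange(width)
    window = np.hanning(width)
    filt = window * np.exp(2j * np.pi * freq * n)   # windowed complex exponential

    y = np.convolve(x, filt, mode="valid")          # (1) complex convolution
    y = np.abs(y)                                   # (2) entrywise absolute value
    # (3) local averaging (non-overlapping mean pooling)
    return y[: len(y) // pool * pool].reshape(-1, pool).mean(axis=1)

x = np.cos(2 * np.pi * 0.1 * np.arange(256)) + 0.1 * np.random.randn(256)
print(complex_convnet_layer(x)[:5])  # strong response: signal energy near freq=0.1
```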
PDF-based heterogeneous multiscale filtration model.
Gong, Jian; Rutland, Christopher J
2015-04-21
Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
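The core idea, filtration efficiency as a PDF-weighted sum over collector/pore sizes rather than a function of a single tuned mean size, can be sketched as follows; the lognormal pore-size PDF and the toy single-collector efficiency function are assumptions standing in for the model's calibrated forms.

```python
import numpy as np

d = np.linspace(5e-6, 60e-6, 500)                 # pore/collector diameters [m]

# Assumed lognormal pore-size PDF (mean and spread are illustrative).
mu, sigma = np.log(20e-6), 0.35
pdf = np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma**2)) / (d * sigma * np.sqrt(2 * np.pi))

# Toy single-collector efficiency: smaller pores capture particles better.
eta = 1.0 - np.exp(-(10e-6 / d) ** 2)

# HMF-style efficiency: PDF-weighted integral vs. classic single mean-size value.
E_hmf = np.trapz(eta * pdf, d)
E_mean = 1.0 - np.exp(-(10e-6 / 20e-6) ** 2)
print(f"PDF-weighted: {E_hmf:.3f}  mean-size only: {E_mean:.3f}")
```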
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multi-disciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to the integration of participating disciplines to describe a structural response at the scale of interest; multi-disciplinary refers to an open-ended framework for the various existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include, but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general-purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as the Engine Structures Technology Benefits Estimator (EST/BEST), or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal analysis, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces the engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
Magnetospheric Multiscale Instrument Suite Operations and Data System
NASA Technical Reports Server (NTRS)
Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.
2015-01-01
The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of approximately 100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and 'Scientist-in-the-Loop' (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.
EMISSIONS PROCESSING FOR THE ETA/ CMAQ AIR QUALITY FORECAST SYSTEM
NOAA and EPA have created an Air Quality Forecast (AQF) system. This AQF system links an adaptation of the EPA's Community Multiscale Air Quality Model with the 12 kilometer ETA model running operationally at NOAA's National Center for Environmental Prediction (NCEP). One of the...
USING CMAQ-AIM TO EVALUATE THE GAS-PARTICLE PARTITIONING TREATMENT IN CMAQ
The Community Multi-scale Air Quality model (CMAQ) aerosol component utilizes a modal representation, where the size distribution is represented as a sum of three lognormal modes. Though the aerosol treatment in CMAQ is quite advanced compared to other operational air quality mo...
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated with the example of hydrothermodynamics and atmospheric chemistry models [1,2]. Developing further the existing methods for constructing numerical schemes that possess the property of total approximation for the operators of multiscale process models, we have devised a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes one lower than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within a finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed; these solutions of the adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within a finite volume and along the coordinate lines of the grid in each direction on a time step. In each direction they have a tridiagonal structure and are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of the derivatives at the boundaries of a finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it requires neither artificial monotonization nor limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable to problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, V. 67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, P. 210-220.
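The tridiagonal systems produced in each direction are solved by the sweep method (also known as the Thomas algorithm); a minimal sketch of that solver, with a toy system standing in for the discrete-analytical scheme's matrices:

```python
import numpy as np

def thomas(a, b, c, d):
    """Sweep method for a tridiagonal system; a: sub-, b: main, c: super-diagonal.
    a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward sweep (elimination)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy convection-diffusion-like system and a consistency check.
n = 6
a, b, c, d = np.full(n, -1.0), np.full(n, 2.5), np.full(n, -1.0), np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ x, d))  # True: the sweep solves the system
```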
NASA Astrophysics Data System (ADS)
Sridhar, J.
2015-12-01
The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; K-means clustering was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the decomposition and classification code was written in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was performed by selecting regions of interest; post-classification, the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilizes the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than the Pauli decomposition; however, more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
An operational modal analysis method in frequency and spatial domain
NASA Astrophysics Data System (ADS)
Wang, Tong; Zhang, Lingmi; Tamura, Yukio
2005-12-01
A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectrum density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
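The computational core of CMIF-style methods, a singular value decomposition of the output cross-PSD matrix at every frequency line with singular-value peaks marking modes, can be sketched as follows; the two synthetic channels and the scipy.signal.csd settings are illustrative.

```python
import numpy as np
from scipy.signal import csd

fs, t = 256.0, np.arange(0, 60, 1 / 256.0)
# Two synthetic response channels with modes near 10 Hz and 25 Hz.
x = np.vstack([np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*25*t),
               0.7*np.sin(2*np.pi*10*t) - np.sin(2*np.pi*25*t)])
x += 0.2 * np.random.randn(*x.shape)

# Cross-PSD matrix G[f] (2x2 here), estimated channel pair by channel pair.
f, _ = csd(x[0], x[0], fs=fs, nperseg=512)
G = np.empty((len(f), 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=512)

# First singular value per frequency line: peaks indicate modal frequencies.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
print(f[np.argsort(s1)[-5:]])   # frequencies of the largest responses
```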
Results from Navigator GPS Flight Testing for the Magnetospheric MultiScale Mission
NASA Technical Reports Server (NTRS)
Lulich, Tyler D.; Bamford, William A.; Wintermitz, Luke M. B.; Price, Samuel R.
2012-01-01
The recent delivery of the first Goddard Space Flight Center (GSFC) Navigator Global Positioning System (GPS) receivers to the Magnetospheric MultiScale (MMS) mission spacecraft is a high water mark crowning a decade of research and development in high-altitude space-based GPS. Preceding MMS delivery, the engineering team had developed receivers to support multiple missions and mission studies, such as Low Earth Orbit (LEO) navigation for the Global Precipitation Mission (GPM), above the constellation navigation for the Geostationary Operational Environmental Satellite (GOES) proof-of-concept studies, cis-Lunar navigation with rapid re-acquisition during re-entry for the Orion Project and an orbital demonstration on the Space Shuttle during the Hubble Servicing Mission (HSM-4).
Cross-scale phenological data integration to benefit resource management and monitoring
Richardson, Andrew D.; Weltzin, Jake F.; Morisette, Jeffrey T.
2017-01-01
Climate change is presenting new challenges for natural resource managers charged with maintaining sustainable ecosystems and landscapes. Phenology, a branch of science dealing with seasonal natural phenomena (bird migration or plant flowering in response to weather changes, for example), bridges the gap between the biosphere and the climate system. Phenological processes operate across scales that span orders of magnitude, from leaf to globe and from days to seasons, making phenology ideally suited to multiscale, multiplatform data integration and the delivery of information at spatial and temporal scales suitable to inform resource management decisions. This workshop report describes a workshop held in June 2016 to investigate opportunities and challenges facing multi-scale, multi-platform integration of phenological data to support natural resource management decision-making.
THE EMISSION PROCESSING SYSTEM FOR THE ETA/CMAQ AIR QUALITY FORECAST SYSTEM
NOAA and EPA have created an Air Quality Forecast (AQF) system. This AQF system links an adaptation of the EPA's Community Multiscale Air Quality Model with the 12 kilometer ETA model running operationally at NOAA's National Center for Environmental Prediction (NCEP). One of th...
Presentation slides provide background on model evaluation techniques. Also included in the presentation is an operational evaluation of 2001 Community Multiscale Air Quality (CMAQ) annual simulation, and an evaluation of PM2.5 for the CMAQ air quality forecast (AQF) ...
Observations and Measurements of Planktonic Bioluminescence in and Around a Milky Sea
1988-03-01
multichannel analysers operating in the multiscaler mode. The details of both the onboard underway system and the LPTC systems have been published (Lapota... the Arabian Sea during the southwest monsoon. No nutrient data were collected during our study, yet phosphates, nitrates, and trace...
Validation of Operational Multiscale Environment Model With Grid Adaptivity (OMEGA).
1995-12-01
Center for the period of the Chernobyl Nuclear Accident. The physics of the model is tested using National Weather Service Medium Range Forecast data by...Climatology Center for the first three days following the release at the Chernobyl Nuclear Plant. A user-defined source term was developed to simulate
NASA Astrophysics Data System (ADS)
Wu, Qiujie; Tan, Liu; Xu, Sen; Liu, Dabin; Min, Li
2018-04-01
Numerous accidents involving emulsion explosives (EE) are attributed to uncontrolled thermal decomposition of ammonium nitrate emulsion (ANE, the intermediate of EE) and EE at large scale. In order to study the thermal decomposition characteristics of ANE and EE at different scales, a large-scale modified vented pipe test (MVPT) and two laboratory-scale tests, differential scanning calorimetry (DSC) and accelerating rate calorimetry (ARC), were applied in the present study. Both the scale effect and the water effect play an important role in the thermal stability of ANE and EE. The measured decomposition temperatures of ANE and EE in the MVPT are 146°C and 144°C, respectively, much lower than those in DSC and ARC. As the size of the same sample in DSC, ARC, and MVPT successively increases, the onset temperature decreases. In the same test, the measured onset temperature of ANE is higher than that of EE; the water content of the sample stabilizes it. The large-scale MVPT can provide information for real-life operations: large-scale operations carry more risk, and continuous overheating should be avoided.
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
A novel ECG data compression method based on adaptive Fourier decomposition
NASA Astrophysics Data System (ADS)
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition iterations, together with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
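The two figures of merit quoted above are easy to state precisely; a minimal sketch (synthetic signal, toy "compressor" that keeps the largest transform coefficients, crude bit accounting) showing how CR and PRD would be computed:

```python
import numpy as np

def cr_prd(x, x_rec, bits_orig, bits_comp):
    cr = bits_orig / bits_comp                                   # compression ratio
    prd = 100 * np.sqrt(np.sum((x - x_rec)**2) / np.sum(x**2))   # percent RMS difference
    return cr, prd

# Toy "compression": keep the K largest DFT coefficients of an ECG-like signal.
t = np.linspace(0, 2, 720)
x = np.sin(2*np.pi*1.2*t) + 0.25*np.sin(2*np.pi*12*t)
X = np.fft.rfft(x)
K = 20
keep = np.argsort(np.abs(X))[-K:]
Xc = np.zeros_like(X); Xc[keep] = X[keep]
x_rec = np.fft.irfft(Xc, n=len(x))

# Illustrative bit budget: 11 bits/sample original vs. (index + value) per kept coefficient.
cr, prd = cr_prd(x, x_rec, bits_orig=11*len(x), bits_comp=K*(10 + 32))
print(f"CR = {cr:.1f}, PRD = {prd:.2f}%")
```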
NASA Astrophysics Data System (ADS)
Horstemeyer, M. F.
This review of multiscale modeling covers a brief history of the various multiscale methodologies related to solid materials and the associated experimental influences, the influence of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. The ubiquitous research focusing on multiscale modeling has spanned different disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), different regions of the world (most continents), and different length scales (from atoms to autos).
NASA Astrophysics Data System (ADS)
Dekavalla, Maria; Argialas, Demetre
2017-07-01
The analysis of undersea topography and geomorphological features provides necessary information to related disciplines and many applications. The development of an automated knowledge-based classification approach for undersea topography and geomorphological features is challenging due to their multi-scale nature. The aim of the study is to develop and evaluate an automated knowledge-based OBIA approach to: i) decompose the global undersea topography into multi-scale regions of distinct morphometric properties, and ii) assign the derived regions to characteristic geomorphological features. First, the global undersea topography was decomposed through the SRTM30_PLUS bathymetry data into so-called morphometric objects of discrete morphometric properties and spatial scales, defined by data-driven methods (local variance graphs and nested means) and multi-scale analysis. The derived morphometric objects were combined with additional relative topographic position information computed with a self-adaptive pattern recognition method (geomorphons) and auxiliary data, and were assigned to characteristic undersea geomorphological feature classes through a knowledge base developed from standard definitions. The decomposition of the SRTM30_PLUS data into morphometric objects was considered successful with respect to the requirements of minimizing intra-object heterogeneity and maximizing inter-object heterogeneity, based on the near-zero values of Moran's I and the low values of the weighted variance index. The knowledge-based classification approach was tested for its transferability in six case studies of various tectonic settings and achieved the efficient extraction of 11 undersea geomorphological feature classes. The classification results for the six case studies were compared with the digital global seafloor geomorphic features map (GSFM). The 11 undersea feature classes and their producer's accuracies with respect to the relevant GSFM areas were Basin (95%), Continental Shelf (94.9%), Trough (88.4%), Plateau (78.9%), Continental Slope (76.4%), Trench (71.2%), Abyssal Hill (62.9%), Abyssal Plain (62.4%), Ridge (49.8%), Seamount (48.8%) and Continental Rise (25.4%). The knowledge-based OBIA classification approach was considered transferable, since the percentages of spatial and thematic agreement between most of the classified undersea feature classes and the GSFM exhibited low deviations across the six case studies.
Understanding electrical conduction in lithium ion batteries through multi-scale modeling
NASA Astrophysics Data System (ADS)
Pan, Jie
Silicon (Si) has been considered a promising negative electrode material for lithium-ion batteries (LIBs) because of its high theoretical capacity, low discharge voltage, and low cost. However, the utilization of Si electrodes has been hampered by problems such as slow ionic transport, large stress/strain generation, and an unstable solid electrolyte interphase (SEI). These problems severely influence the performance and cycle life of Si electrodes. In general, ionic conduction determines the rate performance of the electrode, while electron leakage through the SEI causes electrolyte decomposition and, thus, capacity loss. The goal of this thesis research is to design Si electrodes with high current efficiency and durability through a fundamental understanding of the ionic and electronic conduction in Si and its SEI. Multi-scale physical and chemical processes occur in the electrode during charging and discharging. This thesis therefore focuses on multi-scale modeling, including the development of new methods, to help understand these coupled physical and chemical processes. For example, we developed a new method based on ab initio molecular dynamics to study the effects of stress/strain on Li-ion transport in amorphous lithiated Si electrodes. This method not only quantitatively shows the effect of stress on ionic transport in amorphous materials, but also uncovers the underlying atomistic mechanisms. However, the origin of ionic conduction in the inorganic components of the SEI is different from that in the amorphous Si electrode. To tackle this problem, we developed a model that separates the problem into two scales: 1) the atomistic scale: defect physics and transport in individual SEI components with consideration of the environment, e.g., LiF in equilibrium with the Si electrode; 2) the mesoscopic scale: defect distribution near the heterogeneous interface based on a space-charge model. In addition, to help design better artificial SEIs, we further demonstrated a theoretical design of multicomponent SEIs that utilizes the synergetic effect found in the natural SEI. We show that the electrical conduction can be optimized by varying the grain size and volume fraction of the two phases in the artificial multicomponent SEI.
A general multiscale framework for the emergent effective elastodynamics of metamaterials
NASA Astrophysics Data System (ADS)
Sridhar, A.; Kouznetsova, V. G.; Geers, M. G. D.
2018-02-01
This paper presents a general multiscale framework for computing the emergent effective elastodynamics of heterogeneous materials, to be applied to the analysis of acoustic metamaterials and phononic crystals. The generality of the framework is exemplified by two key characteristics. First, the underlying formalism relies on the Floquet-Bloch theorem to derive a robust definition of scales and scale separation. Second, unlike most homogenization approaches that rely on a classical volume average, a generalized homogenization operator is defined with respect to a family of particular projection functions. This yields a generalized macro-scale continuum, instead of the classical Cauchy continuum, and makes it possible (in a micromorphic sense) to homogenize the rich dispersive behavior resulting from both Bragg scattering and local resonance. For an arbitrary unit cell, the homogenization projection functions are constructed using the Floquet-Bloch eigenvectors obtained in the desired frequency regime at select high-symmetry points, which effectively resolves the emergent phenomena dominating that regime. Furthermore, a generalized Hill-Mandel condition is proposed that ensures power consistency between the homogenized and full-scale models. A high-order spatio-temporal gradient expansion is used to localize the multiscale problem, leading to a series of recursive unit-cell problems that provide the appropriate micro-mechanical corrections. The developed multiscale method is validated against standard numerical Bloch analysis of the dispersion spectra of example unit cells encompassing multiple high-order branches generated by local resonance and/or Bragg scattering.
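As a point of reference for the scale definition used above, the Floquet-Bloch theorem states that waves in a periodic medium take the form of a plane wave modulated by a cell-periodic function; a standard statement (notation mine, not the paper's) is
\[ \mathbf{u}(\mathbf{x},t) = \tilde{\mathbf{u}}_{\mathbf{k}}(\mathbf{x})\, e^{i(\mathbf{k}\cdot\mathbf{x}-\omega t)}, \qquad \tilde{\mathbf{u}}_{\mathbf{k}}(\mathbf{x}+\mathbf{a}) = \tilde{\mathbf{u}}_{\mathbf{k}}(\mathbf{x}) \]
for any lattice vector a of the unit cell. The envelope carries the macro-scale wave while the periodic factor carries the sub-cell fluctuation, which is what makes the theorem a natural basis for defining scale separation.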
Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei
2013-07-01
Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization thus leads to different ICA decomposition results, so a single one-time decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs considerable computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis also indicated the better signal-reconstruction performance of ATGP-ICA relative to RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
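Since the abstract's key idea is replacing random initialization with deterministic ATGP targets, a minimal sketch of the target-generation step may help. The energy criterion below and the final seeding of the unmixing matrix are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def atgp(X, n_targets):
    """Simplified sketch of the automatic target generation process (ATGP):
    repeatedly select the sample with the largest energy in the subspace
    orthogonal to all previously selected targets. X is (n_samples, n_features)."""
    targets = []
    R = X
    for _ in range(n_targets):
        i = int(np.argmax(np.sum(R ** 2, axis=1)))      # most "extreme" remaining sample
        targets.append(X[i])
        U = np.stack(targets, axis=1)                    # (n_features, k) target matrix
        P = np.eye(X.shape[1]) - U @ np.linalg.pinv(U)   # orthogonal-complement projector
        R = X @ P                                        # residual after removing targets
    return np.stack(targets)
```

The returned targets are deterministic for a given data set, so they could seed the unmixing matrix (for instance scikit-learn FastICA's `w_init`, after reduction to a square matrix in the whitened space), which is what removes the run-to-run variability the abstract describes.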
Structural dynamics of the lac repressor-DNA complex revealed by a multiscale simulation.
Villa, Elizabeth; Balaeff, Alexander; Schulten, Klaus
2005-05-10
A multiscale simulation of a complex between the lac repressor protein (LacI) and a 107-bp-long DNA segment is reported. The complex between the repressor and two operator DNA segments is described by all-atom molecular dynamics; the simulated system comprises either 226,000 or 314,000 atoms. The DNA loop connecting the operators is modeled as a continuous elastic ribbon, described mathematically by the nonlinear Kirchhoff differential equations with boundary conditions obtained from the coordinates of the terminal base pairs of each operator. The forces stemming from the looped DNA are included in the molecular dynamics simulations; the loop structure and the forces are continuously recomputed, because the protein motions during the simulations shift the operators and hence the presumed termini of the loop. The simulations reveal the structural dynamics of the LacI-DNA complex in unprecedented detail. The multiple domains of LacI exhibit remarkable structural stability during the simulation, moving much like rigid bodies. LacI is shown to absorb the strain from the looped DNA mainly through its mobile DNA-binding head groups. Even with large fluctuating forces applied, the head groups tilt strongly and keep their grip on the operator DNA, while the remainder of the protein retains its V-shaped structure. A simulated opening of the cleft of LacI by 500-pN forces revealed the interactions responsible for locking LacI in the V-conformation.
Mode decomposition and Lagrangian structures of the flow dynamics in orbitally shaken bioreactors
NASA Astrophysics Data System (ADS)
Weheliye, Weheliye Hashi; Cagney, Neil; Rodriguez, Gregorio; Micheletti, Martina; Ducci, Andrea
2018-03-01
In this study, two mode decomposition techniques were applied and compared to assess the flow dynamics in an orbital shaken bioreactor (OSB) of cylindrical geometry and flat bottom: proper orthogonal decomposition and dynamic mode decomposition. Particle Image Velocimetry (PIV) experiments were carried out for different operating conditions including fluid height, h, and shaker rotational speed, N. A detailed flow analysis is provided for conditions when the fluid and vessel motions are in-phase (Fr = 0.23) and out-of-phase (Fr = 0.47). PIV measurements in vertical and horizontal planes were combined to reconstruct low order models of the full 3D flow and to determine its Finite-Time Lyapunov Exponent (FTLE) within OSBs. The combined results from the mode decomposition and the FTLE fields provide a useful insight into the flow dynamics and Lagrangian coherent structures in OSBs and offer a valuable tool to optimise bioprocess design in terms of mixing and cell suspension.
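Of the two techniques compared, proper orthogonal decomposition is the more compact to illustrate. A minimal snapshot-POD sketch via the SVD (NumPy, with synthetic stand-in data rather than the paper's PIV fields) is:

```python
import numpy as np

# snapshot matrix: one column per time step, one row per measurement point
rng = np.random.default_rng(1)
U_snap = rng.normal(size=(2000, 200))        # stand-in for stacked PIV velocity fields

U_mean = U_snap.mean(axis=1, keepdims=True)
X = U_snap - U_mean                          # fluctuating part

# economy SVD: columns of Phi are spatial POD modes, ranked by energy
Phi, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)                 # fraction of fluctuation energy per mode
coeffs = np.diag(s) @ Vt                     # temporal coefficients of each mode

# low-order model: keep the r most energetic modes
r = 5
X_lom = Phi[:, :r] @ coeffs[:r, :] + U_mean
```

The low-order reconstruction `X_lom` is the kind of reduced model the abstract combines with horizontal-plane measurements to rebuild the 3D flow.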
X-Ray Thomson Scattering Without the Chihara Decomposition
NASA Astrophysics Data System (ADS)
Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration
X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor, which is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be classified as either bound to ions or free; this assumption may not be accurate for materials in the warm dense regime. We present unambiguous first-principles calculations of the dynamic structure factor, independent of the Chihara decomposition, that can be used to benchmark these assumptions. Results are generated using finite-temperature real-time time-dependent density functional theory, applied for the first time under these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
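For context, the three-term Chihara decomposition referred to above is commonly written in a form like the following (exact notation and conventions vary between papers):
\[ S^{\mathrm{tot}}_{ee}(k,\omega) = \left| f_I(k) + q(k) \right|^2 S_{ii}(k,\omega) + Z_f\, S^{0}_{ee}(k,\omega) + Z_b \int \tilde S_{ce}(k,\omega-\omega')\, S_s(k,\omega')\, d\omega' \]
with an ion feature (bound plus screening electrons following the ion motion), a free-electron feature, and a bound-free term. The bound/free partition into Z_b and Z_f is precisely the assumption the first-principles calculations avoid.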
NASA Astrophysics Data System (ADS)
Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire
2017-02-01
In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with a high electronic noise. In order to improve the image quality, the Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.
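A sketch of the spatial Walsh-Hadamard coding idea may be useful here: instead of firing array elements one at a time, each transmission drives all N elements with one row of a Hadamard matrix, and the per-element responses are recovered by applying the inverse. The array size, signals, and noise below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.linalg import hadamard

N = 64                                   # number of array elements (power of 2)
H = hadamard(N)                          # +1/-1 Walsh-Hadamard matrix

rng = np.random.default_rng(0)
n_t = 512                                # time samples per reception
r_single = rng.normal(size=(N, n_t))     # stand-in for per-element echo responses
noise = 0.5 * rng.normal(size=(N, n_t))

# coded acquisition: transmission i weights the elements with row i of H
r_coded = H @ r_single + noise

# decode using H @ H.T = N * I; averaging over N coded shots raises the SNR
# of each recovered single-element response relative to firing one element
r_decoded = (H.T @ r_coded) / N
```

The decoded set `r_decoded` is what a TFM-style imaging step (with DORT filtering, in the paper) would then consume in place of raw single-element captures.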
Review on modeling of the anode solid electrolyte interphase (SEI) for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Wang, Aiping; Kadam, Sanket; Li, Hong; Shi, Siqi; Qi, Yue
2018-03-01
A passivation layer called the solid electrolyte interphase (SEI) is formed on electrode surfaces from the decomposition products of electrolytes. The SEI allows Li+ transport but blocks electrons, preventing further electrolyte decomposition and ensuring continued electrochemical reactions. The formation and growth mechanism of the nanometer-thick SEI films are yet to be completely understood, owing to their complex structure and the lack of reliable in situ experimental techniques. Significant advances in computational methods have made it possible to predictively model the fundamentals of the SEI. This review aims to give an overview of state-of-the-art modeling progress in the investigation of SEI films on anodes, ranging from electronic structure calculations to mesoscale modeling, covering the thermodynamics and kinetics of electrolyte reduction reactions, SEI formation, modification through electrolyte design, correlation of SEI properties with battery performance, and artificial SEI design. Multi-scale simulations are summarized and compared with each other as well as with experiments. Computational studies of the fundamental properties of the SEI, such as electron tunneling, Li-ion transport, and the chemical/mechanical stability of the bulk SEI and the electrode/SEI/electrolyte interfaces, are discussed. This review shows the potential of computational approaches in the deconvolution of SEI properties and the design of artificial SEIs. We believe that computational modeling can be integrated with experiments so that each complements the other, leading to a better understanding of the complex SEI and to the development of highly efficient batteries in the future.
Fast multi-scale feature fusion for ECG heartbeat classification
NASA Astrophysics Data System (ADS)
Ai, Danni; Yang, Jian; Wang, Zeyu; Fan, Jingfan; Ai, Changbin; Wang, Yongtian
2015-12-01
Electrocardiogram (ECG) monitoring records the electrical activity of the heart as signals of small amplitude and duration; as a result, hidden information present in ECG data is difficult to extract, yet this concealed information can be used to detect abnormalities. In our study, a fast feature-fusion method for ECG heartbeat classification based on multi-linear subspace learning is proposed. The method consists of four stages. First, baseline wander and high frequencies are removed and the heartbeats are segmented. Second, wavelet-packet decomposition, an extension of the wavelet transform, is conducted to extract features; it provides good time and frequency resolution simultaneously. Third, the decomposed coefficients are arranged as a two-way tensor, on which feature fusion is directly implemented with generalized N-dimensional ICA (GND-ICA). In this way, the co-relationships among different data are taken into account, the disadvantages of high dimensionality are avoided, and computation is reduced compared with linear subspace-learning methods such as PCA. Finally, a support vector machine (SVM) is used as the classifier for heartbeat classification. In this study, ECG records are obtained from the MIT-BIH arrhythmia database, and four main heartbeat classes are used to examine the proposed algorithm. Based on five measurements (sensitivity, positive predictivity, accuracy, average accuracy, and a t-test), we conclude that the GND-ICA-based strategy provides enhanced ECG heartbeat classification. Furthermore, large redundant features are eliminated and classification time is reduced.
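The wavelet-packet feature stage is straightforward to sketch with PyWavelets; the wavelet family, decomposition level, and segment length below are assumptions, not values taken from the paper:

```python
import numpy as np
import pywt

# stand-in for one segmented heartbeat after baseline/high-frequency removal
beat = np.random.default_rng(0).normal(size=256)

# full wavelet-packet tree to level 4: 2**4 = 16 frequency bands
wp = pywt.WaveletPacket(data=beat, wavelet='db4', mode='symmetric', maxlevel=4)
nodes = wp.get_level(4, order='natural')

# terminal-node coefficients, concatenated as the raw feature vector
features = np.concatenate([node.data for node in nodes])

# per-band energies are a common compact alternative feature set
band_energy = np.array([np.sum(node.data ** 2) for node in nodes])
```

Feature vectors of this kind, stacked across beats, form the tensor that GND-ICA then fuses.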
Spectral decomposition of nonlinear systems with memory
NASA Astrophysics Data System (ADS)
Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.
2016-02-01
We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
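For readers unfamiliar with it, the Mittag-Leffler function mentioned above is the one-parameter series
\[ E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}, \qquad 0 < \alpha \le 1, \]
so a mode relaxing as \( E_\alpha(-\lambda t^\alpha) \) decays exponentially-like at short times but develops a power-law tail at long times, and reduces to the memoryless exponential \( e^{-\lambda t} \) at \( \alpha = 1 \). This interpolation is what lets the decomposition distinguish operators with long-term memory from memoryless ones.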
The Roadmaker's algorithm for the discrete pulse transform.
Laurie, Dirk P
2011-02-01
The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined, via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer), has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges on the same nodes, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006. We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker's algorithm has optimal complexity: it runs in time O(m), where m is the number of arcs in the graph.
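To make the pit-filling/bump-razing idea concrete, here is a deliberately naive 1-D sketch that flattens extremal constant runs in order of increasing size and records the subtracted pulses. It illustrates the decomposition idea only; it does not reproduce the graph-merging machinery that gives the Roadmaker's algorithm its O(m) complexity:

```python
import numpy as np

def dpt_1d(signal):
    # Naive pulse decomposition sketch: repeatedly flatten the smallest
    # constant run that is a local extremum, recording each subtracted pulse.
    x = np.asarray(signal, dtype=float).copy()
    pulses = []                                  # (start, length, height) triples
    while True:
        runs, s = [], 0                          # maximal constant runs of x
        for i in range(1, len(x) + 1):
            if i == len(x) or x[i] != x[s]:
                runs.append((s, i - s))
                s = i
        if len(runs) == 1:
            break                                # signal is flat: done
        best = None                              # smallest pit or bump
        for s, n in runs:
            nbrs = [x[j] for j in (s - 1, s + n) if 0 <= j < len(x)]
            if all(v < x[s] for v in nbrs) or all(v > x[s] for v in nbrs):
                if best is None or n < best[1]:
                    best = (s, n)
        s, n = best
        nbrs = [x[j] for j in (s - 1, s + n) if 0 <= j < len(x)]
        target = min(nbrs, key=lambda v: abs(v - x[s]))   # nearest neighbour level
        pulses.append((s, n, x[s] - target))     # razed bump / filled pit
        x[s:s + n] = target
    return pulses, x

pulses, residual = dpt_1d([0, 0, 3, 3, 1, 4, 0])
# by construction, the subtracted pulses plus the flat residual sum back
# to the input, which is the defining property of the DPT
```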
Optical systolic solutions of linear algebraic equations
NASA Technical Reports Server (NTRS)
Neuman, C. P.; Casasent, D.
1984-01-01
The philosophy and data encoding possible in the systolic array optical processor (SAOP) are reviewed, and the multitude of linear algebraic operations achievable on this architecture is examined. These operations include linear algebraic algorithms such as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. The architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, the design of parallel algorithms and flexible architectures, the application of these architectures to computationally intensive physical problems, error-source modeling of optical processors, and the matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.
Investigation on an ammonia supply system for flue gas denitrification of low-speed marine diesel
Yuan, Han; Zhao, Jian; Mei, Ning
2017-01-01
Low-speed marine diesel flue gas denitrification is in great demand in the ship transport industry. This research proposes an ammonia supply system that can be used for flue gas denitrification of low-speed marine diesel engines. In the proposed system, ammonium bicarbonate is selected as the ammonia carrier, producing ammonia and carbon dioxide by thermal decomposition; the diesel engine exhaust heat is used as the heating source for ammonium bicarbonate decomposition and ammonia gas desorption. As the ammonium bicarbonate decomposition is critical to the proper operation of this system, experiments and simulations are carried out in this paper to characterize the performance of the thermal decomposition chamber. A visualization experiment is conducted to determine the single-tube heat transfer coefficient, flow and heat transfer are simulated for two chamber structures, and the decomposition of ammonium bicarbonate is simulated in ASPEN PLUS. The results show that the single-tube heat transfer coefficient is 1052 W m−2 °C−1 and that the fork-type structure of the thermal decomposition chamber achieves higher heat transfer than the row-type structure. With regard to the simulation of ammonium bicarbonate thermal decomposition, the ammonia production is significantly affected by the reaction temperature and the mass flow rate of the ammonium bicarbonate input. PMID:29308269
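The carrier chemistry underlying the system is the thermal decomposition of ammonium bicarbonate:
\[ \mathrm{NH_4HCO_3} \xrightarrow{\ \Delta\ } \mathrm{NH_3}\uparrow + \mathrm{CO_2}\uparrow + \mathrm{H_2O} \]
which is why exhaust heat alone suffices to generate the ammonia needed for denitrification, with carbon dioxide and water vapor as the by-products.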
The National Air Quality Forecast Capability (NAQFC) system, which links NOAA's North American Mesoscale (NAM) meteorological model with EPA's Community Multiscale Air Quality (CMAQ) model, provided operational ozone (O3) and experimental fine particulate matter (PM2...
Areas with close proximity to oil and natural gas operations in rural Utah have experienced winter ozone levels that exceed EPA’s National Ambient Air Quality Standards (NAAQS). Through a collaborative effort, EPA Region 8 – Air Program, ORD, and OAQPS used the Commun...
NASA Technical Reports Server (NTRS)
Queen, Steven Z.
2015-01-01
The Magnetospheric Multiscale (MMS) mission consists of four identically instrumented, spin-stabilized observatories, elliptically orbiting the Earth in a tetrahedron formation. For the operational success of the mission, on-board systems must be able to deliver high-precision orbital adjustment maneuvers. On MMS, this is accomplished using feedback from on-board star sensors in tandem with accelerometers whose measurements are dynamically corrected for errors associated with a spinning platform. In order to determine the required corrections to the measured acceleration, precise estimates of attitude, rate, and mass properties are necessary. To this end, both an on-board and a ground-based Multiplicative Extended Kalman Filter (MEKF) were formulated and implemented in order to estimate the dynamic and quasi-static properties of the spacecraft.
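The "multiplicative" in MEKF refers to how the attitude estimate is corrected: the filter estimates a small rotation-error vector and applies it as a quaternion product rather than an additive update. In one common convention (small-angle approximation; notation mine, not taken from the MMS design):
\[ \Delta\hat{\boldsymbol{\theta}} = K\left(y - h(\hat{x}^-)\right), \qquad \delta q(\Delta\hat{\boldsymbol{\theta}}) \approx \begin{bmatrix} 1 \\ \Delta\hat{\boldsymbol{\theta}}/2 \end{bmatrix}, \qquad \hat{q}^+ = \hat{q}^- \otimes \delta q(\Delta\hat{\boldsymbol{\theta}}) \]
where the attitude portion of the Kalman correction forms the error quaternion and the error state is reset to zero after the update. This keeps the quaternion normalized to first order while the covariance remains three-dimensional in the rotation error.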
Magnetospheric Multiscale (MMS)
2017-12-08
MMS Four Separate – View of all four spacecraft in the MMS cleanroom being prepared for stacking operations. Learn more about MMS at www.nasa.gov/mms Credit: NASA/Chris Gunn. The Magnetospheric Multiscale mission, or MMS, will study how the sun's and the Earth's magnetic fields connect and disconnect, an explosive process that can accelerate particles through space to nearly the speed of light. This process, called magnetic reconnection, can occur throughout space.
Towards Multiscale Interactions Between Tearing Modes and Microturbulence
NASA Astrophysics Data System (ADS)
Williams, Z. R.; Pueschel, M. J.; Terry, P. W.
2017-10-01
Work on the Madison Symmetric Torus Reversed-Field Pinch (RFP) has shown that large-scale tearing modes present in standard operation are highly detrimental to confinement. These tearing modes, even when reduced in improved-confinement regimes of operation, significantly affect zonal flow activity and play a large role in setting microturbulence-induced transport levels. Previous gyrokinetic work has shown that a small but finite tearing fluctuation amplitude is necessary to produce transport values in agreement with experimental observation. This has previously been implemented via an ad-hoc, constant-in-time A∥ perturbation. This work details self-consistent modeling of tearing fluctuations in the RFP using the
NASA Astrophysics Data System (ADS)
Edera, Paolo; Bergamini, Davide; Trappe, Véronique; Giavazzi, Fabio; Cerbino, Roberto
2017-12-01
Particle-tracking microrheology (PT-μr) exploits the thermal motion of embedded particles to probe the local mechanical properties of soft materials. Despite its appealing conceptual simplicity, PT-μr requires calibration procedures and operating assumptions that constitute a practical barrier to its wider application. Here we demonstrate differential dynamic microscopy microrheology (DDM-μr), a tracking-free approach based on the multiscale, temporal correlation study of the image intensity fluctuations that are observed in microscopy experiments as a consequence of the translational and rotational motion of the tracers. We show that the mechanical moduli of an arbitrary sample are determined correctly over a wide frequency range, provided that the standard DDM analysis is reinforced with an iterative, self-consistent procedure that fully exploits the multiscale information made available by DDM. Our approach to DDM-μr does not require any prior calibration, is in agreement with both traditional rheology and diffusing wave spectroscopy microrheology, and works in conditions where PT-μr fails, thus providing an operationally simple, calibration-free probe of soft materials.
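The quantity at the heart of DDM is the image structure function, which in the standard analysis takes the form (conventions vary; notation mine):
\[ D(q, \Delta t) = \left\langle \left| \hat{I}(\mathbf{q}, t+\Delta t) - \hat{I}(\mathbf{q}, t) \right|^2 \right\rangle = A(q)\left[ 1 - f(q, \Delta t) \right] + B(q) \]
where \(\hat{I}\) is the spatial Fourier transform of the image, \(B(q)\) the noise floor, and \(f(q,\Delta t)\) the intermediate scattering function; for dilute Brownian tracers, \(f(q,\Delta t) = e^{-D q^2 \Delta t}\). In microrheology the measured \(f\) is inverted for the tracers' mean-squared displacement, from which the mechanical moduli follow via a generalized Stokes-Einstein relation, which is where the iterative, self-consistent procedure of the abstract enters.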
High-temperature catalyst for catalytic combustion and decomposition
NASA Technical Reports Server (NTRS)
Mays, Jeffrey A. (Inventor); Lohner, Kevin A. (Inventor); Sevener, Kathleen M. (Inventor); Jensen, Jeff J. (Inventor)
2005-01-01
A robust, high-temperature mixed metal oxide catalyst for propellant decomposition, including high-concentration hydrogen peroxide, and for catalytic combustion, including methane-air mixtures. Applications include target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The catalyst system requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. Start-up transients of less than 1 second have been demonstrated with catalyst-bed and propellant temperatures as low as 50 degrees Fahrenheit. The catalyst system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
Nuclear driven water decomposition plant for hydrogen production
NASA Technical Reports Server (NTRS)
Parker, G. H.; Brecher, L. E.; Farbman, G. H.
1976-01-01
The conceptual design of a hydrogen production plant using a very-high-temperature nuclear reactor (VHTR) to energize a hybrid electrolytic-thermochemical system for water decomposition has been prepared. A graphite-moderated helium-cooled VHTR is used to produce 1850 F gas for electric power generation and 1600 F process heat for the water-decomposition process which uses sulfur compounds and promises performance superior to normal water electrolysis or other published thermochemical processes. The combined cycle operates at an overall thermal efficiency in excess of 45%, and the overall economics of hydrogen production by this plant have been evaluated predicated on a consistent set of economic ground rules. The conceptual design and evaluation efforts have indicated that development of this type of nuclear-driven water-decomposition plant will permit large-scale economic generation of hydrogen in the 1990s.
A multiscale modeling approach to inflammation: A case study in human endotoxemia
NASA Astrophysics Data System (ADS)
Scheff, Jeremy D.; Mavroudis, Panteleimon D.; Foteinou, Panagiota T.; An, Gary; Calvano, Steve E.; Doyle, John; Dick, Thomas E.; Lowry, Stephen F.; Vodovotz, Yoram; Androulakis, Ioannis P.
2013-07-01
Inflammation is a critical component of the body's response to injury. A dysregulated inflammatory response, in which either the injury is not repaired or the inflammatory response does not appropriately self-regulate and end, is associated with a wide range of inflammatory diseases such as sepsis. Clinical management of sepsis is a significant problem, but progress in this area has been slow. This may be due to the inherent nonlinearities and complexities of the interacting multiscale pathways that are activated in response to systemic inflammation, motivating the application of systems biology techniques to better understand the inflammatory response. Here, we review our past work on a multiscale modeling approach applied to human endotoxemia, a model of systemic inflammation. The approach consists of a system of compartmentalized differential equations operating at different time scales, coupled to a discrete model that links inflammatory mediators with changing patterns in the beating of the heart; such patterns have been correlated with the outcome and severity of inflammatory disease despite unclear mechanistic underpinnings. Working towards unraveling the relationship between inflammation and heart rate variability (HRV) may enable greater understanding of clinical observations as well as novel therapeutic targets.
Multiscale modelling for tokamak pedestals
NASA Astrophysics Data System (ADS)
Abel, I. G.
2018-04-01
Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics or an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius and the low plasma β (the ratio of thermal to magnetic pressure) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.
Multiscale Cloud System Modeling
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell W.
2009-01-01
The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global) they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved with scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met in an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, which has an emphasis on organized tropical convection and its global effects.
Design and control of a multi-DOF micromanipulator dedicated to multiscale micromanipulation
NASA Astrophysics Data System (ADS)
Yang, Yi-Ling; Wei, Yan-Ding; Lou, Jun-Qiang; Fu, Lei; Fang, Sheng
2017-11-01
This paper presents the design, implementation and control of a new piezoelectrically actuated compliant micromanipulator dedicated to multiscale, precise and reliable operations. To begin with, the manipulator is devised to provide multiple degrees of freedom and large workspace ranges. Two-stage amplification mechanisms (consisting of leverage and rocker mechanisms) and composite parallelogram mechanisms are combined to construct the lower microstage, while the structure of the upper dual-driven microgripper is based on the bridge-type mechanism and the unilateral parallelogram mechanism. Through finite-element analysis, the structural parameters of the micromanipulator are optimized and the structural interaction performance is examined. Moreover, a cooperative control strategy is proposed to achieve synchronous control of the motion trajectory, the gripper position and the contact force. Precision motion control in the presence of hysteresis and system disturbances is ensured by an adaptive sliding mode controller (SMC); in particular, an improved nonsymmetrical Bouc-Wen model and a fuzzy regulator are incorporated in the SMC. Several experimental investigations, performing transfer operations on a micro-object, validate the effectiveness of the developed micromanipulator. Experimental results demonstrate that the micromanipulator exhibits good characteristics and that precise, robust operation can be achieved using the cooperative controller.
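The Bouc-Wen model mentioned here describes hysteresis through an internal state z driven by the input displacement x; the classical symmetric form, before the paper's nonsymmetric modification, is usually written as
\[ \dot{z} = A\dot{x} - \beta\, |\dot{x}|\, |z|^{n-1} z - \gamma\, \dot{x}\, |z|^{n} \]
with shape parameters A, β, γ and n. The "improved nonsymmetrical" variant in the abstract presumably adds asymmetry terms to this equation; their exact form is not reproduced here.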
NASA Astrophysics Data System (ADS)
Leclercq, Sylvain; Lidbury, David; Van Dyck, Steven; Moinereau, Dominique; Alamo, Ana; Mazouzi, Abdou Al
2010-11-01
In nuclear power plants, materials may undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities that operate these reactors need to quantify the ageing and potential degradation of essential structures of the power plant to ensure safe and reliable operation. So far, the material databases needed to account for these degradations in the design and safe operation of installations rely mainly on long-term irradiation programs in test reactors as well as on mechanical or corrosion testing in specialized hot cells. Continuous progress in the physical understanding of the phenomena involved in irradiation damage, together with continuous progress in computer science, has now made possible the development of multi-scale numerical tools able to simulate the effects of irradiation on materials microstructure. A first step towards this goal was successfully reached through the development of the RPV-2 and Toughness Module numerical tools by the scientific community created around the FP6 PERFECT project. These tools make it possible to simulate irradiation effects on the constitutive behaviour of the reactor pressure vessel (RPV) low-alloy steel and on its failure properties. Building on the existing PERFECT Roadmap, the four-year Collaborative Project PERFORM 60 has as its main objective the development of multi-scale tools for predicting the combined effects of irradiation and corrosion on internals (austenitic stainless steels), as well as the improvement of existing tools for the RPV (bainitic steels). PERFORM 60 is based on two technical sub-projects: (i) RPV and (ii) internals. In addition, the Users' Group and Training sub-project shall allow representatives of constructors, utilities, research organizations… from Europe, the USA and Japan to receive the information and training needed to form their own appraisal of the limits and potential of the developed tools. An important effort will also be made to train young researchers in the field of materials degradation. PERFORM 60 officially started on March 1st, 2009, with 20 European organizations and universities involved in the nuclear field.
Task Decomposition Model for Dispatchers in Dynamic Scheduling of Demand Responsive Transit Systems
DOT National Transportation Integrated Search
2000-06-01
Since the passage of ADA, the demand for paratransit service is steadily increasing. Paratransit companies are relying on computer automation to streamline dispatch operations, increase productivity and reduce operator stress and error. Little resear...
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices of the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems efficiently on real networks, like maximal clique finding. In many real-world applications, networks change over time, so it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed, and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex-state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs at varying scales; for a graph of 16 million vertices, we observe relative throughputs reaching a million times those of the non-incremental algorithms.
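As a baseline for what the incremental algorithms improve on, the standard non-incremental computation of core numbers by peeling is short enough to sketch (adjacency-dict input is a hypothetical choice for illustration, not the paper's implementation):

```python
import heapq

def core_numbers(adj):
    # Standard peeling: repeatedly remove the minimum-degree vertex; the core
    # number of a vertex is the largest minimum degree seen when it is removed.
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, k = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                      # stale heap entry, skip
        k = max(k, d)
        core[v] = k
        removed.add(v)
        for u in adj[v]:                  # lower the degrees of the neighbours
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core

# triangle plus a pendant vertex: the triangle is the 2-core
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(core_numbers(adj))                  # {4: 1, 1: 2, 2: 2, 3: 2}
```

The incremental algorithms in the paper avoid rerunning this peeling on every edge insertion or deletion by confining updates to the small affected subgraph.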
Quantitative analysis of microbial biomass yield in aerobic bioreactor.
Watanabe, Osamu; Isoda, Satoru
2013-12-01
We have studied an integrated model of reaction-rate equations with thermal energy balance in an aerobic bioreactor for food waste decomposition, and showed that the integrated model is capable both of monitoring microbial activity in real time and of analyzing biodegradation kinetics and thermal-hydrodynamic properties. Concerning microbial metabolism, it is known that balancing catabolic reactions with anabolic reactions in terms of energy and electron flow yields stoichiometric metabolic reactions and enables the estimation of microbial biomass yield (the stoichiometric reaction model). We have therefore studied a method for estimating real-time microbial biomass yield in the bioreactor during food waste decomposition by combining the integrated model with the stoichiometric reaction model. As a result, it was found that the time course of microbial biomass yield during decomposition can be evaluated with the combined model from the operational data of the bioreactor (weight of input food waste and bed temperature). The combined model can be applied to manage food waste decomposition, not only controlling system operation to keep microbial activity stable, but also producing value-added products such as compost under optimum conditions. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, Hendrik F.
The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big-data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy as measured by existing and new metrics, which were themselves developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.
NASA Astrophysics Data System (ADS)
Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry
2015-04-01
Proper decomposition of a complex system into well-separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing adequate, and at the same time simplest, models both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods of decomposition of the Earth's climate system into well-separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linearly expanding vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of nonlinear modes falls off essentially more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode time scale. In this report we combine the two methods in such a way that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) multi-hundred-year globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different methods of decomposition and discuss the ability of nonlinear spatio-temporal modes to support the construction of adequate, and at the same time simplest ("optimal"), models of climate systems.
1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574.
2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877.
3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1).
4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41.
5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752.
6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627.
7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729.
8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm.
9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.
NASA Astrophysics Data System (ADS)
Abedi, S.; Mashhadian, M.; Noshadravan, A.
2015-12-01
Increasing the efficiency and sustainability of hydrocarbon recovery from organic-rich shales requires a fundamental understanding of the chemomechanical properties of these materials. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous, multi-scale structure of organic-rich shale. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Experiments such as coupled grid nano-indentation and energy-dispersive X-ray spectroscopy, together with micromechanical modeling attributing the role of organic maturity to the texture of the material, allow us to identify clay mechanical properties that are consistent across samples and independent of the maturity of the shale formation and total organic content. The results then inform a physically based multiscale model for organic-rich shales consisting of three levels, spanning from the elementary building blocks of organic-rich shale (e.g. clay minerals in clay-dominated formations) to the macroscopic inorganic/organic hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. We therefore take the model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties and mineralogy mass percent. To that end, we use the maximum entropy principle and random matrix theory to construct probabilistic descriptions of model inputs from the available information. Monte Carlo simulation is then carried out to propagate the uncertainty and construct probabilistic descriptions of properties at multiple length scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of predictions of essential subsurface parameters at the engineering scale.
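A toy version of the uncertainty-propagation step may clarify the workflow: sample uncertain inputs from assumed distributions and push them through a homogenization rule. The Voigt-Reuss-Hill average below stands in for the paper's three-level micromechanical model, and all distributions and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# assumed input distributions (illustrative, not the paper's calibrated ones)
f_clay = rng.beta(5, 3, n)                        # clay volume fraction
E_clay = rng.lognormal(np.log(25.0), 0.15, n)     # clay stiffness, GPa
E_org = rng.lognormal(np.log(8.0), 0.25, n)       # organic (kerogen) stiffness, GPa

# Voigt (uniform strain) and Reuss (uniform stress) bounds, then Hill average
E_voigt = f_clay * E_clay + (1 - f_clay) * E_org
E_reuss = 1.0 / (f_clay / E_clay + (1 - f_clay) / E_org)
E_hill = 0.5 * (E_voigt + E_reuss)

# probabilistic description of the macroscopic property
print(f"mean = {E_hill.mean():.2f} GPa, "
      f"95% interval = [{np.percentile(E_hill, 2.5):.2f}, "
      f"{np.percentile(E_hill, 97.5):.2f}] GPa")
```

The output interval is the kind of probabilistic description of an engineering-scale property that the abstract's Monte Carlo step produces.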
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.
2009-05-01
In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise-suppression algorithm based on wave-field separation, exploiting the scale differences between effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Building on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff-dimension constraints, we propose a new method for seismic random-noise attenuation. First, we apply the EMD algorithm to decompose seismic data adaptively and obtain a series of intrinsic mode functions (IMFs) of different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use threshold correlation filtering to separate the valid signal from the random noise. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. Implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category comprises the effective-wave components of larger scale; the second is the noise part of smaller scale; the third is the IMF components containing residual random noise. The third kind of IMF component is then processed by the Hausdorff dimension algorithm: an appropriate time-window size, initial step and increment are selected to calculate the Hausdorff instantaneous dimension of each component. The dimension of the random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 for each IMF component, in order to separate the residual noise. Using the IMF components after dimension filtering together with the effective-wave IMF components from the first selection, we reconstruct the signal and obtain the de-noised result.
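A rough sketch of the decompose-then-filter pipeline is below. The Higuchi estimator stands in for the paper's windowed Hausdorff instantaneous dimension, and the selection rule is an assumption: note that Higuchi assigns smooth oscillations a dimension near 1 and broadband noise a dimension near 2, whereas the abstract's measure places noise in [1.0, 1.05], so the threshold must be calibrated to whichever estimator is actually used:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def higuchi_fd(x, kmax=8):
    """Rough global fractal-dimension estimate by Higuchi's method
    (a stand-in for the paper's windowed Hausdorff dimension)."""
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # average curve length at lag k
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            d = np.sum(np.abs(np.diff(x[idx])))
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k))
        log_inv_k.append(np.log(1.0 / k))
        log_len.append(np.log(np.mean(lengths)))
    return np.polyfit(log_inv_k, log_len, 1)[0]  # slope = estimated dimension

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
trace = np.sin(2 * np.pi * 30 * t) + 0.8 * rng.normal(size=t.size)  # stand-in trace

imfs = EMD().emd(trace)                          # adaptive multi-scale decomposition
fd = np.array([higuchi_fd(imf) for imf in imfs])

# keep the low-dimension (smooth, signal-like) IMFs under Higuchi's convention;
# with the paper's estimator the labeling of the ranges is reversed
signal_imfs = imfs[fd < 1.5]
denoised = signal_imfs.sum(axis=0)
```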
Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images.
Salvi, Massimo; Molinari, Filippo
2018-06-20
Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for automated segmentation of nuclei were designed for a specific organ or tissue. The aim of this study was to develop and validate a fully multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software packages designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independent of the number of nuclei to be detected (in all cases more than 1000), indicating the efficiency of the proposed technique. To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performance in line with or better than that of state-of-the-art algorithms optimized for single tissues.
Effective surface and boundary conditions for heterogeneous surfaces with mixed boundary conditions
NASA Astrophysics Data System (ADS)
Guo, Jianwei; Veran-Tissoires, Stéphanie; Quintard, Michel
2016-01-01
To deal with multi-scale problems involving transport from a heterogeneous and rough surface characterized by a mixed boundary condition, an effective surface theory is developed, which replaces the original surface by a homogeneous and smooth surface with specific boundary conditions. A typical example corresponds to a laminar flow over a soluble salt medium which contains insoluble material. To develop the concept of effective surface, a multi-domain decomposition approach is applied. In this framework, velocity and concentration at micro-scale are estimated with an asymptotic expansion of deviation terms with respect to macro-scale velocity and concentration fields. Closure problems for the deviations are obtained and used to define the effective surface position and the related boundary conditions. The evolution of some effective properties and the impact of surface geometry, Péclet, Schmidt and Damköhler numbers are investigated. Finally, comparisons are made between the numerical results obtained with the effective models and those from direct numerical simulations with the original rough surface, for two kinds of configurations.
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
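The classical Anscombe transform that the MS-VST extends is
\[ \mathcal{A}(x) = 2\sqrt{x + \tfrac{3}{8}}, \]
which maps \( x \sim \mathrm{Poisson}(\lambda) \) to a variable that is approximately Gaussian with unit variance for moderately large \(\lambda\). The MS-VST applies a square-root transform of the same family to each filtered (wavelet, ridgelet or curvelet) band, with constants determined by the filter coefficients, so that the stabilization survives even in the very-low-count regime where the plain Anscombe transform breaks down.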
WAKES: Wavelet Adaptive Kinetic Evolution Solvers
NASA Astrophysics Data System (ADS)
Mardirian, Marine; Afeyan, Bedros; Larson, David
2016-10-01
We are developing a general capability to adaptively solve phase-space evolution equations, mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase-space density estimation with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit vs implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair-plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.
Abiotic mechanism for the formation of atmospheric nitrous oxide from ammonium nitrate.
Rubasinghege, Gayan; Spak, Scott N; Stanier, Charles O; Carmichael, Gregory R; Grassian, Vicki H
2011-04-01
Nitrous oxide (N2O) is an important greenhouse gas and a primary cause of stratospheric ozone destruction. Despite its importance, there remain missing sources in the N2O budget. Here we report the formation of atmospheric nitrous oxide from the decomposition of ammonium nitrate via an abiotic mechanism that is favorable in the presence of light, relative humidity and a surface. This source of N2O is not currently accounted for in the global N2O budget. Annual production of N2O from atmospheric aerosols and surface fertilizer application over the continental United States from this abiotic pathway is estimated from results of an annual chemical transport simulation with the Community Multiscale Air Quality model (CMAQ). This pathway is projected to produce 9.3(+0.7/-5.3) Gg N2O annually over North America. N2O production by this mechanism is expected globally from both megacities and agricultural areas and may become more important under future projected changes in anthropogenic emissions.
Multiscale Analysis of Rapidly Rotating Dynamo Simulations
NASA Astrophysics Data System (ADS)
Orvedahl, Ryan; Calkins, Michael; Featherstone, Nicholas
2017-11-01
The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results of simulating rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become increasingly abundant, so the same area can be covered by a large number of multi-temporal image sequences at different resolutions. The main fusion methods at present include HPF, the IHS transform, PCA, Brovey, the Mallat algorithm and the wavelet transform. The IHS transform introduces serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high-spatial-resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition, with its different sizes and directions, preserves details and edges very well, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform combined with HVS against fusion based on the wavelet transform combined with IHS. The results show that the former performs better. The correlation coefficient, the relative average spectral error index and other common indices are introduced to evaluate the quality of the fused images.
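As a rough illustration of wavelet-domain fusion, the sketch below (Python with PyWavelets) applies one common rule: keep the low-frequency approximation of the multispectral band for spectral fidelity and take the maximum-magnitude detail coefficients for spatial detail. This is a generic scheme assuming equal-size, co-registered inputs, not the paper's specific HVS- or IHS-combined method.

```python
import numpy as np
import pywt

def wavelet_fuse(pan, ms, wavelet="haar", level=2):
    """Fuse a high-resolution band (pan) with a co-registered band (ms).
    Rule: multispectral approximation (spectral fidelity) plus
    maximum-magnitude detail coefficients (spatial detail)."""
    cp = pywt.wavedec2(pan.astype(float), wavelet, level=level)
    cm = pywt.wavedec2(ms.astype(float), wavelet, level=level)
    fused = [cm[0]]  # low-frequency content from the multispectral band
    for dp, dm in zip(cp[1:], cm[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dp, dm)))
    return pywt.waverec2(fused, wavelet)
```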
Method for conducting electroless metal-plating processes
Petit, George S.; Wright, Ralph R.
1978-01-01
This invention is an improved method for conducting electroless metal-plating processes in a metal tank which is exposed to the plating bath. The invention solves a problem commonly encountered in such processes: how to determine when it is advisable to shut down the process in order to clean and/or re-passivate the tank. The new method comprises contacting the bath with a current-conducting, non-catalytic probe and, during plating operations, monitoring the gradually changing difference in electropotential between the probe and tank. It has been found that the value of this voltage is indicative of the extent to which nickel-bearing decomposition products accumulate on the tank. By utilizing the voltage to determine when shutdown for cleaning is advisable, the operator can avoid premature shutdown and at the same time avoid prolonging operations to the point that spontaneous decomposition occurs.
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong
2016-12-01
Low toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise with regard to applications such as controlling satellite attitude. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady-state operation were observed. The results indicate that the propellant and catalyst employed during this work, as well as the design and manufacture of the thruster, met performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and the performance of the thruster was demonstrated to improve at higher feed pressures and elevated preheating temperatures. A preheating temperature as low as 140 °C was determined to activate the catalytic decomposition and combustion processes more effectively than the other conditions examined. The data obtained in this study should be beneficial to future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.
Randomized interpolative decomposition of separated representations
NASA Astrophysics Data System (ADS)
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
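Since the paper reduces tensor ID to interpolative decompositions of matrices built by randomized projection, the matrix-level step can be sketched with SciPy's randomized ID routines. This is a generic illustration assuming a matrix with rapidly decaying singular values; it is not the CTD-ID algorithm itself.

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)
# Build a 200 x 50 matrix whose singular values decay like 0.5**j.
A = rng.standard_normal((200, 50)) @ np.diag(0.5 ** np.arange(50)) \
    @ rng.standard_normal((50, 50))

# Randomized interpolative decomposition to tolerance eps:
# A ~= B @ P, where B holds k selected columns of A.
k, idx, proj = sli.interp_decomp(A, 1e-6)
B = sli.reconstruct_skel_matrix(A, k, idx)
P = sli.reconstruct_interp_matrix(idx, proj)
print(k, np.linalg.norm(A - B @ P) / np.linalg.norm(A))
```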
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition. This was the primary reasoning behind attempting the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach taken is to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
A Multi-Scale Perspective of the Effects of Forest Fragmentation on Birds in Eastern Forests
Frank R. Thompson; Therese M. Donovan; Richard M. DeGraff; John Faaborg; Scott K. Robinson
2002-01-01
We propose a model that considers forest fragmentation within a spatial hierarchy that includes regional or biogeographic effects, landscape-level fragmentation effects, and local habitat effects. We hypothesize that effects operate "top down" in that larger scale effects provide constraints or context for smaller scale effects. Bird species' abundance...
Can a biologist fix a smartphone?-Just hack it!
Kamoun, Sophien
2017-05-08
Biological systems integrate multiscale processes and networks and are, therefore, viewed as difficult to dissect. However, because of the clear-cut separation between the software code (the information encoded in the genome sequence) and hardware (organism), genome editors can operate as software engineers to hack biological systems without any particularly deep understanding of the complexity of the systems.
The CMAQ modeling system has been used to simulate the air quality for North America and Europe for the entire year of 2006 as part of the Air Quality Model Evaluation International Initiative (AQMEII) and the operational model performance of O3, fine particulate matte...
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
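As a toy illustration of the divide-and-conquer idea (not the paper's backstep flow solver), the following NumPy sketch runs an overlapping alternating Schwarz iteration for a 1D Poisson problem, with each subdomain solved directly and boundary data exchanged through the overlap; the grid size and overlap width are arbitrary choices.

```python
import numpy as np

def poisson_solve(f, a, b, h):
    # Direct solve of -u'' = f on an interior grid, Dirichlet values a, b.
    n = len(f)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += a / h**2
    rhs[-1] += b / h**2
    return np.linalg.solve(A, rhs)

# Overlapping alternating Schwarz on [0,1] with two subdomains.
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
u = np.zeros(N)
l, r = 60, 40                      # subdomain 1: [0, x_l]; subdomain 2: [x_r, 1]
for it in range(20):
    u[1:l] = poisson_solve(f[1:l], 0.0, u[l], h)            # left solve
    u[r + 1:N - 1] = poisson_solve(f[r + 1:N - 1], u[r], 0.0, h)  # right solve
print(np.max(np.abs(u - np.sin(np.pi * x))))  # converges to discretization error
```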
M&S Decision/Role-Behavior Decompositions
2007-10-17
M&S Decision/Role-Behavior Decompositions. Wargaming and Analysis Workshop, Military Operations Research Society, 17 October 2007. Paul Works, Methods... Combat models and simulations (M&S) continue, in most cases, to model "effects-level" representations of SA, decisions, and behaviors.
Community Multiscale Air Quality Model
The U.S. EPA developed the Community Multiscale Air Quality (CMAQ) system to apply a “one atmosphere” multiscale and multi-pollutant modeling approach based mainly on the “first principles” description of the atmosphere. The multiscale capability is supported by the governing di...
Design and Assessment of an Associate Degree-Level Plant Operations Technical Education Program
NASA Astrophysics Data System (ADS)
Selwitz, Jason Lawrence
Research was undertaken to develop and evaluate an associate degree-level technical education program in Plant Operations oriented towards training students in applied science, technology, engineering, and mathematics (STEM) skills and knowledge relevant to a spectrum of processing industries. This work focuses on four aspects of the curriculum and course development and evaluation research. First, the context of, and impetus for, what was formerly called vocational education, now referred to as technical or workforce education, is provided. Second, the research that was undertaken to design and evaluate an associate degree-level STEM workforce education program is described. Third, the adaptation of a student self-assessment of learning gains instrument is reviewed, and an analysis of the resulting data using an adapted logic model is provided, to evaluate the extent to which instructional approaches, in two process control/improvement-focused courses, were effective in meeting course-level intended learning outcomes. Finally, eight integrative multiscale exercises were designed from two example process systems, wastewater treatment and fast pyrolysis. The integrative exercises are intended for use as tools to accelerate the formation of an operator-technician's multiscale vision of systems, unit operations, underlying processes, and fundamental reactions relevant to multiple industries. Community and technical colleges serve a vital function in STEM education by training workers for medium- and high-skilled technical careers and providing employers the labor necessary to operate and maintain thriving business ventures. Through development of the curricular, course, and assessment-related instruments and tools, this research helps ensure associate degree-level technical education programs can engage in a continual process of program evaluation and improvement.
Sensors, nano-electronics and photonics for the Army of 2030 and beyond
NASA Astrophysics Data System (ADS)
Perconti, Philip; Alberts, W. C. K.; Bajaj, Jagmohan; Schuster, Jonathan; Reed, Meredith
2016-02-01
The US Army's future operating concept will rely heavily on sensors, nano-electronics and photonics technologies to rapidly develop situational understanding in challenging and complex environments. Recent technology breakthroughs in integrated 3D multiscale semiconductor modeling (from atoms-to-sensors), combined with ARL's Open Campus business model for collaborative research provide a unique opportunity to accelerate the adoption of new technology for reduced size, weight, power, and cost of Army equipment. This paper presents recent research efforts on multi-scale modeling at the US Army Research Laboratory (ARL) and proposes the establishment of a modeling consortium or center for semiconductor materials modeling. ARL's proposed Center for Semiconductor Materials Modeling brings together government, academia, and industry in a collaborative fashion to continuously push semiconductor research forward for the mutual benefit of all Army partners.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Xiaojuan, E-mail: xjlian2005@gmail.com; Cartoixà, Xavier; Miranda, Enrique
2014-06-28
We depart from first-principles simulations of electron transport along paths of oxygen vacancies in HfO2 to reformulate the Quantum Point Contact (QPC) model in terms of a bundle of such vacancy paths. By doing this, the number of model parameters is reduced and a much clearer link between the microscopic structure of the conductive filament (CF) and its electrical properties can be provided. The new multi-scale QPC model is applied to two different HfO2-based devices operated in the unipolar and bipolar resistive switching (RS) modes. Extraction of the QPC model parameters from a statistically significant number of CFs allows revealing significant structural differences in the CF of these two types of devices and RS modes.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility to extend the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities such as cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions that are defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
From seconds to months: an overview of multi-scale dynamics of mobile telephone calls
NASA Astrophysics Data System (ADS)
Saramäki, Jari; Moro, Esteban
2015-06-01
Big Data on electronic records of social interactions allow approaching human behaviour and sociality from a quantitative point of view with unforeseen statistical power. Mobile telephone Call Detail Records (CDRs), automatically collected by telecom operators for billing purposes, have proven especially fruitful for understanding one-to-one communication patterns as well as the dynamics of social networks that are reflected in such patterns. We present an overview of empirical results on the multi-scale dynamics of social interactions and networks inferred from mobile telephone calls. We begin with the shortest timescales and fastest dynamics, such as the burstiness of call sequences between individuals, and "zoom out" towards longer temporal and larger structural scales, from temporal motifs formed by correlated calls between multiple individuals to the long-term dynamics of social groups. We conclude this overview with a future outlook.
Multiscale Multiphysics Developments for Accident Tolerant Fuel Concepts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, K. A.; Hales, J. D.; Yu, J.
2015-09-01
U3Si2 and iron-chromium-aluminum (Fe-Cr-Al) alloys are two of many proposed accident-tolerant fuel concepts for the fuel and cladding, respectively. The behavior of these materials under normal operating and accident reactor conditions is not well known. As part of the Department of Energy's Accident Tolerant Fuel High Impact Problem program, significant work has been conducted to investigate the U3Si2 and FeCrAl behavior under reactor conditions. This report presents the multiscale and multiphysics effort completed in fiscal year 2015. The report is split into four major categories: Density Functional Theory Developments, Molecular Dynamics Developments, Mesoscale Developments, and Engineering Scale Developments. The work shown here is a compilation of a collaborative effort between Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory and Anatech Corp.
Kim, Il Kwang; Lee, Soo Il
2016-05-01
The modal decomposition of tapping mode atomic force microscopy microcantilevers in liquid environments was studied experimentally. Microcantilevers with different lengths and stiffnesses and two sample surfaces with different elastic moduli were used in the experiment. The response modes of the microcantilevers were extracted as proper orthogonal modes through proper orthogonal decomposition. Smooth orthogonal decomposition was used to estimate the resonance frequency directly. The effects of the tapping setpoint and the elastic modulus of the sample under test were examined in terms of their multi-mode responses with proper orthogonal modes, proper orthogonal values, smooth orthogonal modes and smooth orthogonal values. Regardless of the stiffness of the microcantilever under test, the first mode was dominant in tapping mode atomic force microscopy under normal operating conditions. However, at lower tapping setpoints, the flexible microcantilever showed modal distortion and noise near the tip when tapping on a hard sample. The stiff microcantilever had a higher mode effect on a soft sample at lower tapping setpoints. Modal decomposition for tapping mode atomic force microscopy can thus be used to estimate the characteristics of samples in liquid environments.
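A minimal sketch of the POD step, assuming snapshot data arranged as time samples by measurement points: the proper orthogonal modes and values fall out of an SVD of the mean-centered snapshot matrix. The synthetic two-mode signal below stands in for the AFM deflection data, which are not reproduced here.

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal decomposition of a snapshot matrix.
    Rows = time samples, columns = measurement points along the cantilever."""
    X = snapshots - snapshots.mean(axis=0)       # remove the mean deflection
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    povs = s**2 / np.sum(s**2)                   # proper orthogonal values (energy fractions)
    return Vt, povs                              # rows of Vt are the POMs

# Synthetic two-mode vibration: 64 spatial points, 2000 time steps.
xg = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 2000)[:, None]
X = (np.sin(2 * np.pi * 40 * t) * np.sin(np.pi * xg)
     + 0.2 * np.sin(2 * np.pi * 250 * t) * np.sin(2 * np.pi * xg))
modes, povs = pod(X)
print(povs[:3])   # the first POV dominates, as in normal tapping operation
```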
Barndõk, Helen; Cortijo, Luis; Hermosilla, Daphne; Negro, Carlos; Blanco, Angeles
2014-09-15
This paper demonstrates the importance of operational parameters for the feasibility of ozone (O3) oxidation for the treatment of wastewaters containing 1,4-dioxane. Results show that the O3 process, which has formerly been considered insufficient as a sole treatment for such wastewaters, can be a viable treatment for the degradation of 1,4-dioxane under adequate operating conditions. The treatment of both a synthetic solution of 1,4-dioxane and industrial wastewaters containing 1,4-dioxane and 2-methyl-1,3-dioxolane (MDO) showed that about 90% of the chemical oxygen demand can be removed and almost total removal of 1,4-dioxane and MDO is reached by O3 at optimal process conditions. Data from on-line Fourier transform infrared spectroscopy provide good insight into the different decomposition routes that eventually determine the viability of degrading this toxic and hazardous compound in industrial waters. The degradation at pH>9 occurs faster, through the formation of ethylene glycol as a primary intermediate, whereas the decomposition in acidic conditions (pH<5.7) consists of the formation and slower degradation of ethylene glycol diformate. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chightai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Agarwal, Prachi; Kuriakose, Jean W.; Kazerooni, Ella A.
2013-03-01
Automatic tracking and segmentation of the coronary arterial tree is the basic step for computer-aided analysis of coronary disease. The goal of this study is to develop an automated method to identify the origins of the left coronary artery (LCA) and right coronary artery (RCA) as the seed points for the tracking of the coronary arterial trees. The heart region and the contrast-filled structures in the heart region are first extracted using morphological operations and EM estimation. To identify the ascending aorta, we developed a new multiscale aorta search (MAS) method in which the aorta is identified based on a priori knowledge of its circular shape. Because the shape of the ascending aorta in the cCTA axial view is roughly a circle but its size can vary over a wide range for different patients, multiscale circular-shape priors are used to search for the best matching circular object in each CT slice, guided by the Hausdorff distance (HD) as the matching indicator. The location of the aorta is identified by finding the minimum HD in the heart region over the set of multiscale circular priors. An adaptive region growing method is then used to extend the initially identified aorta down to the aortic valves. The origins at the aortic sinus are finally identified by a morphological gray-level top-hat operation applied to the region-grown aorta, with a morphological structuring element designed for coronary arteries. For the 40 test cases, the aorta was correctly identified in 38 cases (95%). The aorta was grown to the aortic root in 36 cases, and 36 LCA origins and 34 RCA origins were identified within 10 mm of the locations marked by radiologists.
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Regulski, Krzysztof
2016-08-01
We present a process of semantic meta-model development for data management in an adaptable multiscale modeling framework. The main problems in ontology design are discussed, and a solution achieved as a result of the research is presented. The main concepts concerning the application and data management background for multiscale modeling were derived from the AM3 approach—object-oriented Agile multiscale modeling methodology. The ontological description of multiscale models enables validation of semantic correctness of data interchange between submodels. We also present a possibility of using the ontological model as a supervisor in conjunction with a multiscale model controller and a knowledge base system. Multiscale modeling formal ontology (MMFO), designed for describing multiscale models' data and structures, is presented. A need for applying meta-ontology in the MMFO development process is discussed. Examples of MMFO application in describing thermo-mechanical treatment of metal alloys are discussed. Present and future applications of MMFO are described.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem
2017-01-01
Three different multiscale models, based on the method of cells (generalized and high fidelity) micromechanics models were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: Concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span across multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, "Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials," Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters and the interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
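The offline clustering step can be pictured with a short sketch. Assuming synthetic per-point strain-concentration data as a stand-in for the linear-elastic precomputation (the interaction tensor and the online Lippmann-Schwinger solve are omitted), k-means agglomerates material points into clusters whose averaged responses replace the full fields:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_points = 20000                        # voxels/integration points in the RVE
A = rng.standard_normal((n_points, 6))  # per-point strain-concentration data
                                        # (synthetic stand-in for precomputations)

k = 64                                  # clusters: thousands of DOFs -> tens of clusters
labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(A)

# Cluster-averaged quantities replace the full fields in the online stage.
vol_frac = np.bincount(labels, minlength=k) / n_points
A_avg = np.vstack([A[labels == c].mean(axis=0) for c in range(k)])
print(vol_frac[:5], A_avg.shape)
```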
NASA Astrophysics Data System (ADS)
Zhao, Wei; Marchand, Roger; Fu, Qiang
2017-12-01
Long-term reflectivity data collected by a millimeter cloud radar at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site are used to examine the diurnal cycle of clouds and precipitation and are compared with the diurnal cycle simulated by a Multiscale Modeling Framework (MMF) climate model. The study uses a set of atmospheric states that were created specifically for the SGP and for the purpose of investigating under what synoptic conditions models compare well with observations on a statistical basis (rather than using case studies or seasonal or longer time scale averaging). Differences in the annual mean diurnal cycle between observations and the MMF are decomposed into differences due to the relative frequency of states, the daily mean vertical profile of hydrometeor occurrence, and the (normalized) diurnal variation of hydrometeors in each state. Here the hydrometeors are classified as cloud or precipitation based solely on the reflectivity observed by a millimeter radar or generated by a radar simulator. The results show that the MMF does not capture the diurnal variation of low clouds well in any of the states but does a reasonable job capturing the diurnal variations of high clouds and precipitation in some states. In particular, the diurnal variations in states that occur during summer are reasonably captured by the MMF, while the diurnal variations in states that occur during the transition seasons (spring and fall) are not well captured. Overall, the errors in the annual composite are due primarily to errors in the daily mean of hydrometeor occurrence (rather than diurnal variations), but errors in the state frequency (that is, the distribution of weather states in the model) also play a significant role.
NASA Astrophysics Data System (ADS)
Wu, Y.; Shen, B. W.; Cheung, S.
2016-12-01
Recent advances in high-resolution global hurricane simulations and visualizations have collectively suggested the importance of both downscaling and upscaling processes in the formation and intensification of TCs. To reveal multiscale processes from a massive volume of global data for multiple years, a scalable Parallel Ensemble Empirical Mode Decomposition (PEEMD) method has been developed for the analysis. In this study, the PEEMD is applied to analyzing 10 years (2004-2013) of ERA-Interim global 0.75° resolution reanalysis data to explore the role of downscaling processes in tropical cyclogenesis associated with African Easterly Waves (AEWs). Using the PEEMD, raw data are decomposed into oscillatory Intrinsic Mode Functions (IMFs) that represent atmospheric systems of various length scales and a trend mode that represents a non-oscillatory large-scale environmental flow. Among the oscillatory modes, results suggest that the third oscillatory mode (IMF3) is statistically correlated with the TC/AEW-scale systems. Therefore, IMF3 and the trend mode are analyzed in detail. Our 10-year analysis shows that more than 50% of the AEW-associated hurricanes reveal an association of the storms' formation with significant downscaling shear transfer from the larger-scale trend mode to the smaller-scale IMF3. Future work will apply the PEEMD to the analysis of higher-resolution datasets to explore the role of the upscaling processes provided by the convection (or TC) in the development of the TC (or AEW). Figure caption: The tendency for horizontal wind shear for the total winds (black line), IMF3 (blue line), and trend mode (red line) and SLP (black dotted line) along the storm track of Helene (2006).
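For a serial picture of what the (parallel) EEMD step produces, here is a sketch using the open-source PyEMD package on a toy series; the package, its parameters, and the synthetic signal are assumptions of this illustration, not the PEEMD code used in the study.

```python
import numpy as np
from PyEMD import EEMD  # pip install EMD-signal; serial stand-in for the parallel PEEMD

t = np.linspace(0.0, 10.0, 2000)
# Toy "wind" series: slow trend + synoptic-scale wave + fast oscillation + noise.
s = (0.5 * t + 2.0 * np.sin(2 * np.pi * 0.5 * t)
     + 0.5 * np.sin(2 * np.pi * 5.0 * t)
     + 0.1 * np.random.default_rng(0).standard_normal(t.size))

eemd = EEMD(trials=100, noise_width=0.05)
imfs = eemd.eemd(s, t)   # rows ordered from the fastest IMF down; the last,
                         # non-oscillatory component approximates the trend
print(imfs.shape)
```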
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably the scaling behavior is a well-established fact in many geophysical systems. There are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably in weather forecast as well as climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with a distribution of variables in the subgrid scale in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally-constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode decomposition approach would also be the best framework for linking the traditional parameterizations with the scaling perspectives. However, by seeing the link more clearly, we also see the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition would immediately reveal the power-law nature of the spectrum. However, exploiting this knowledge in an operational parameterization would be a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, the scaling law is a very concise manner of characterizing many subgrid-scale variabilities in systems. We may even argue that the scaling law can provide almost complete subgrid-scale information with which to construct a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. This condition is called "closure" in the parameterization problem and is known to be a tough problem. We should also realize that studies of the scaling behavior tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can we specify the coefficients of all the decomposition modes by a scaling law perfectly when the first few leading modes are specified?
Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with a scaling behavior to a low dimension, say, under an appropriate mode decomposition procedure. However, the RNG is an analytical tool: it is extremely hard to apply it to real, complex geophysical systems. It appears that we still have a long way to go before we can begin to exploit the scaling law to construct operational subgrid parameterizations in an effective manner.
Unitary Operators on the Document Space.
ERIC Educational Resources Information Center
Hoenkamp, Eduard
2003-01-01
Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
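The rank-k SVD step behind LSI can be sketched in a few lines of NumPy: documents and queries are mapped into a k-dimensional concept space and compared there. This generic sketch, with a made-up term-document matrix, illustrates the dimension reduction the article builds on, not its unitary-operator or Haar-transform constructions.

```python
import numpy as np

# Tiny term-document matrix (rows: terms, columns: documents).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of latent "concepts"
docs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in concept space

q = np.array([1.0, 1.0, 0.0, 0.0, 0.0])  # query containing the first two terms
q_hat = q @ U[:, :k]                      # map the query into concept space
sims = docs @ q_hat / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q_hat))
print(sims)  # cosine similarity of each document to the query
```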
The Outer Loop bioreactor: a case study of settlement monitoring and solids decomposition.
Abichou, Tarek; Barlaz, Morton A; Green, Roger; Hater, Gary
2013-10-01
The Outer Loop landfill bioreactor (OLLB) located in Louisville, KY, USA has been in operation since 2000 and represents an opportunity to evaluate long-term bioreactor monitoring data at a full-scale operational landfill. Three types of landfill units were studied: a Control cell; a new landfill area that had a piping network installed as waste was being placed to support leachate recirculation (As-Built cell); and a conventional landfill that was modified to allow for liquid recirculation (Retrofit cell). The objective of this study is to summarize the results of settlement data and assess how these data relate to solids decomposition monitoring at the OLLB. The Retrofit cells started to settle as soon as liquids were introduced. The cumulative settlement during the 8 years of monitoring varied from 60 to 100 cm. These results suggest that liquid recirculation in the Retrofit cells caused a 5-8% reduction in the thickness of the waste column. The average long-term settlement in the As-Built and Control cells was about 37% and 19%, respectively. The modified compression index (Cα′) was 0.17 for the Control cells and 0.2-0.48 for the As-Built cells. While the As-Built cells exhibited greater settlement than the Control cells, the data do not support biodegradation as the only explanation. The increased settlement in the As-Built bioreactor cell appeared to be associated with liquid movement and not with biodegradation, because both chemical (biochemical methane potential) and physical (moisture content) indicators of decomposition were similar in the Control and As-Built cells. The solids data are consistent with the concept that bioreactor operations accelerate the rate of decomposition, but not necessarily the cumulative loss of anaerobically degradable solids. Copyright © 2013 Elsevier Ltd. All rights reserved.
Evidence for morphological composition in compound words using MEG.
Brooks, Teon L; Cid de Garcia, Daniela
2015-01-01
Psycholinguistic and electrophysiological studies of lexical processing show convergent evidence for morpheme-based lexical access for morphologically complex words that involves early decomposition into their constituent morphemes followed by some combinatorial operation. Considering that both semantically transparent (e.g., sailboat) and semantically opaque (e.g., bootleg) compounds undergo morphological decomposition during the earlier stages of lexical processing, subsequent combinatorial operations should account for the difference in the contribution of the constituent morphemes to the meaning of these different word types. In this study we use magnetoencephalography (MEG) to pinpoint the neural bases of this combinatorial stage in English compound word recognition. MEG data were acquired while participants performed a word naming task in which three word types, transparent compounds (e.g., roadside), opaque compounds (e.g., butterfly), and morphologically simple words (e.g., brothel) were contrasted in a partial-repetition priming paradigm where the word of interest was primed by one of its constituent morphemes. Analysis of onset latency revealed shorter latencies to name compound words than simplex words when primed, further supporting a stage of morphological decomposition in lexical access. An analysis of the associated MEG activity uncovered a region of interest implicated in morphological composition, the Left Anterior Temporal Lobe (LATL). Only transparent compounds showed increased activity in this area from 250 to 470 ms. Previous studies using sentences and phrases have highlighted the role of LATL in performing computations for basic combinatorial operations. Results are in tune with decomposition models for morpheme accessibility early in processing and suggest that semantics play a role in combining the meanings of morphemes when their composition is transparent to the overall word meaning.
Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines
NASA Astrophysics Data System (ADS)
Govindaraju, Parithi
A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.
An analysis of bipropellant neutralization for spacecraft refueling operations
NASA Technical Reports Server (NTRS)
Kauffman, David
1987-01-01
Refueling of satellites on orbit with storable propellants will involve venting part or all of the pressurant gas from the propellant tanks. This gas will be saturated with propellant vapor, and it may also carry significant amounts of entrained fine droplets of propellant. The two most commonly used bipropellants, monomethyl hydrazine (MMH) and nitrogen tetroxide (N2O4), are highly reactive and toxic. Various possible ways of neutralizing the vented propellants are examined. The amount of propellant vented in a typical refueling operation is shown to be in the range of 0.2 to 5% of the tank capacity. Four potential neutralization schemes are examined: chemical decomposition, chemical reaction, condensation and adsorption. Chemical decomposition to essentially inert materials is thermodynamically feasible for both MMH and N2O4; it would be the simplest and easiest neutralization method to implement. Chemical reaction would require more complex control. Condensation would require a refrigeration system and a very efficient phase separator. Adsorption is likely to be much heavier. A preliminary assessment of the four neutralization schemes is presented, along with suggested research and development plans.
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
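A hedged sketch of the two-map idea with scikit-learn regressors and synthetic data: one regressor learns the feature-to-damage map offline, another learns the conditions-to-damage-rate map, and the online stage extrapolates damage to a failure threshold. The constant future-rate extrapolation and all names and numbers are simplifying assumptions, not the patented procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Offline stage: learn the two maps from historical run-to-failure data.
feats = rng.uniform(0, 1, (500, 3))                  # sensor-derived features
damage = 0.6 * feats[:, 0] + 0.4 * feats[:, 1] ** 2  # ground-truth damage (synthetic)
conds = rng.uniform(0, 1, (500, 2))                  # operational conditions (load, temp)
rate = 0.01 + 0.05 * conds[:, 0] * conds[:, 1]       # damage accumulation rate (synthetic)

feat2dam = GradientBoostingRegressor().fit(feats, damage)   # feature -> damage map
cond2rate = GradientBoostingRegressor().fit(conds, rate)    # conditions -> rate map

# Online stage: estimate current damage, then extrapolate to the threshold.
d_now = feat2dam.predict(rng.uniform(0, 1, (1, 3)))[0]
r_future = cond2rate.predict(np.array([[0.7, 0.5]]))[0]  # assumed future profile
threshold = 1.0
rul = max(threshold - d_now, 0.0) / r_future             # cycles (or hours) to failure
print(f"damage={d_now:.3f}, rate={r_future:.4f}, RUL~{rul:.1f}")
```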
Experimental detection and focusing in shallow water by decomposition of the time reversal operator.
Prada, Claire; de Rosny, Julien; Clorennec, Dominique; Minonzio, Jean-Gabriel; Aubry, Alexandre; Fink, Mathias; Berniere, Lothar; Billand, Philippe; Hibral, Sidonie; Folegot, Thomas
2007-08-01
A rigid 24-element source-receiver array in the 10-15 kHz frequency band, connected to a programmable electronic system, was deployed in the Bay of Brest during spring 2005. In this 10- to 18-m-deep environment, backscattered data from submerged targets were recorded. Successful detection and focusing experiments in very shallow water using the decomposition of the time reversal operator (DORT method) are shown. The ability of the DORT method to separate the echo of a target from reverberation as well as the echo from two different targets at 250 m is shown. An example of active focusing within the waveguide using the first invariant of the time reversal operator is presented, showing the enhanced focusing capability. Furthermore, the localization of the scatterers in the water column is obtained using a range-dependent acoustic model.
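The core of the DORT method is an eigendecomposition of the time-reversal operator K^H K built from the measured inter-element response matrix K; each well-resolved scatterer contributes one dominant eigenvalue, and the corresponding eigenvector gives the transmit weights that focus on it. Below is a synthetic NumPy illustration under idealized point-scatterer assumptions, not the sea-trial processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 24  # transducers
# Synthetic inter-element response matrix K at one frequency for two point
# scatterers: K = sum_i c_i * g_i g_i^T, g_i being array-to-scatterer Green vectors.
g1 = np.exp(1j * 2 * np.pi * rng.uniform(size=N)) / np.sqrt(N)
g2 = np.exp(1j * 2 * np.pi * rng.uniform(size=N)) / np.sqrt(N)
K = 3.0 * np.outer(g1, g1) + 1.0 * np.outer(g2, g2)
K += 0.05 * (rng.standard_normal((N, N))
             + 1j * rng.standard_normal((N, N)))  # reverberation/noise floor

# Time-reversal operator T = K^H K; its dominant eigenvectors are the
# phase-conjugate excitations that refocus on each scatterer.
w, V = np.linalg.eigh(K.conj().T @ K)
order = np.argsort(w)[::-1]
print(np.round(w[order][:4], 3))     # two eigenvalues stand out above the noise
focus_on_strongest = V[:, order[0]]  # transmit weights focusing on scatterer 1
```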
Multiscale sensorless adaptive optics OCT angiography system for in vivo human retinal imaging.
Ju, Myeong Jin; Heisler, Morgan; Wahl, Daniel; Jian, Yifan; Sarunic, Marinko V
2017-11-01
We present a multiscale sensorless adaptive optics (SAO) OCT system capable of imaging retinal structure and vasculature with various fields-of-view (FOV) and resolutions. Using a single deformable mirror and exploiting the polarization properties of light, the SAO-OCT-A was implemented in a compact and easy-to-operate system. With the ability to adjust the beam diameter at the pupil, retinal imaging was demonstrated at two different numerical apertures with the same system. The general morphological structure and retinal vasculature could be observed with a lateral resolution of a few tens of micrometers with conventional OCT and OCT-A scanning protocols, with a 1.7-mm-diameter beam incident at the pupil and a large FOV (15 deg × 15 deg). Changing the system to a higher numerical aperture with a 5.0-mm-diameter beam incident at the pupil and the SAO aberration correction, the FOV was reduced to 3 deg × 3 deg for fine detailed imaging of morphological structure and microvasculature such as the photoreceptor mosaic and capillaries. Multiscale functional SAO-OCT imaging was performed on four healthy subjects, demonstrating its functionality and potential for clinical utility. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
This synthetic, multi-scale approach will generate a sequence of spatially explicit maps that will provide science guidance to support strategic decision-making regarding the spatially-distributed risk of, and possible adaptation to, the spread of invasive species at local to ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkiewicz, O. J.; Wiaderek, Kamila M.; Chupas, Peter J.
Dynamic properties and multiscale complexities governing electrochemical energy storage in batteries are most ideally interrogated under simulated operating conditions within an electrochemical cell. We assess how electrochemical reactivity can be impacted by experiment design, including the X-ray measurements themselves, or by common features or adaptations of electrochemical cells that enable X-ray measurements.
The CMAQ modeling system has been used to simulate the CONUS using 12-km by 12-km horizontal grid spacing for the entire year of 2006 as part of the Air Quality Model Evaluation International Initiative (AQMEII). The operational model performance for O3 and PM2.5...
Admissible Diffusion Wavelets and Their Applications in Space-Frequency Processing.
Hou, Tingbo; Qin, Hong
2013-01-01
As signal processing tools, diffusion wavelets and biorthogonal diffusion wavelets have been propelled by recent research in mathematics. They employ diffusion as a smoothing and scaling process to empower multiscale analysis. However, their applications in graphics and visualization are overshadowed by nonadmissible wavelets and their expensive computation. In this paper, our motivation is to broaden the application scope to space-frequency processing of shape geometry and scalar fields. We propose the admissible diffusion wavelets (ADW) on meshed surfaces and point clouds. The ADW are constructed in a bottom-up manner that starts from a local operator in a high frequency, and dilates by its dyadic powers to low frequencies. By relieving the orthogonality and enforcing normalization, the wavelets are locally supported and admissible, hence facilitating data analysis and geometry processing. We define the novel rapid reconstruction, which recovers the signal from multiple bands of high frequencies and a low-frequency base in full resolution. It enables operations localized in both space and frequency by manipulating wavelet coefficients through space-frequency filters. This paper aims to build a common theoretic foundation for a host of applications, including saliency visualization, multiscale feature extraction, spectral geometry processing, etc.
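The dyadic-powers machinery can be sketched compactly: square a diffusion operator repeatedly to obtain smoothings at scales 2^k, and form band-pass responses from differences of consecutive smoothings. This is a bare-bones illustration of the general diffusion-wavelet idea on a tiny graph, assuming a simple random-walk normalization; the ADW construction (bottom-up, with its admissibility and normalization conditions) is more involved.

```python
import numpy as np

def diffusion_scales(W, n_levels=4):
    """Dyadic powers T^(2^k) of a diffusion operator on a graph.
    W: symmetric adjacency/affinity matrix of a mesh or point cloud."""
    d = W.sum(axis=1)
    T = W / d[:, None]            # random-walk diffusion operator
    scales, Tk = [], T.copy()
    for k in range(n_levels):
        scales.append(Tk)         # smoothing at scale 2^k
        Tk = Tk @ Tk              # squaring: T^(2^k) -> T^(2^(k+1))
    return scales

# A band-pass "wavelet" response at scale k is the difference of two
# consecutive smoothings: psi_k f = (T^(2^k) - T^(2^(k+1))) f.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = diffusion_scales(W)
f = np.array([1.0, 0.0, 0.0, 0.0])   # impulse at one vertex
band = (S[1] - S[1] @ S[1]) @ f
print(band)
```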
Xavier, Joao B; De Kreuk, Merle K; Picioreanu, Cristian; Van Loosdrecht, Mark C M
2007-09-15
Aerobic granular sludge is a novel compact biological wastewater treatment technology for integrated removal of COD (chemical oxygen demand), nitrogen, and phosphate charges. We present here a multiscale model of aerobic granular sludge sequencing batch reactors (GSBR) describing the complex dynamics of populations and nutrient removal. The macro scale describes bulk concentrations and effluent composition in six solutes (oxygen, acetate, ammonium, nitrite, nitrate, and phosphate). A finer scale, the scale of one granule (1.1 mm of diameter), describes the two-dimensional spatial arrangement of four bacterial groups--heterotrophs, ammonium oxidizers, nitrite oxidizers, and phosphate accumulating organisms (PAO)--using individual based modeling (IbM) with species-specific kinetic models. The model for PAO includes three internal storage compounds: polyhydroxyalkanoates (PHA), poly phosphate, and glycogen. Simulations of long-term reactor operation show how the microbial population and activity depends on the operating conditions. Short-term dynamics of solute bulk concentrations are also generated with results comparable to experimental data from lab scale reactors. Our results suggest that N-removal in GSBR occurs mostly via alternating nitrification/denitrification rather than simultaneous nitrification/denitrification, supporting an alternative strategy to improve N-removal in this promising wastewater treatment process.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the low contrast and heavy noise pollution of microscope images, and to get better segmentation results, this paper presents a multi-scale gradient operator combined with multiple structuring elements for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths the noise of an image, while the multiple structuring elements retain more details of the sperm shapes. Then, we use the Otsu method to segment the modified gradient image, whose processed gray scale is strong in the sperms and weak in the background, converting it into a binary sperm image. As the obtained binary image contains impurities that are not similar to sperms in shape, we choose a form factor to filter out those objects whose form factor value is larger than the selected critical value, and retain those whose value is not. We can then obtain the final binary image of the segmented sperms. The experiments show this method's great advantage in the segmentation of micro-spermatozoa images.
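A sketch of this pipeline in Python (SciPy/scikit-image), assuming grayscale input: a morphological gradient averaged over several structuring-element sizes, Otsu thresholding, then a form-factor filter on connected components. The scale set, the critical form-factor value, and the exact multiscale-gradient variant are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import disk

def multiscale_gradient(img, scales=(1, 2, 3)):
    # Average of (dilation - erosion) over several structuring-element sizes;
    # larger elements smooth noise, smaller ones keep thin sperm tails.
    grads = []
    for s in scales:
        se = disk(s)
        g = (ndi.grey_dilation(img, footprint=se)
             - ndi.grey_erosion(img, footprint=se))
        grads.append(ndi.grey_erosion(g, footprint=disk(max(s - 1, 1))))
    return np.mean(grads, axis=0)

def segment(img, ff_critical=0.4):
    grad = multiscale_gradient(img.astype(float))
    binary = grad > threshold_otsu(grad)       # Otsu binarization of the gradient
    keep = np.zeros_like(binary)
    for r in regionprops(label(binary)):
        ff = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)  # form factor: 1 = circle
        if ff < ff_critical:                   # elongated, sperm-like objects pass
            keep[tuple(r.coords.T)] = True
    return keep
```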
In Situ Solid-Gas Reactivity of Nanoscaled Metal Borides from Molten Salt Synthesis.
Gouget, Guillaume; Debecker, Damien P; Kim, Ara; Olivieri, Giorgia; Gallet, Jean-Jacques; Bournel, Fabrice; Thomas, Cyril; Ersen, Ovidiu; Moldovan, Simona; Sanchez, Clément; Carenco, Sophie; Portehault, David
2017-08-07
Metal borides have mostly been studied as bulk materials. The nanoscale provides new opportunities to investigate the properties of these materials, e.g., nanoscale hardening and surface reactivity. Metal borides are often considered stable solids because of their covalent character, but little is known about their behavior under a reactive atmosphere, especially reductive gases. We use molten salt synthesis at 750 °C to provide cobalt monoboride (CoB) nanocrystals embedded in an amorphous layer of cobalt(II) and partially oxidized boron as a model platform to study morphological, chemical, and structural evolutions of the boride and the superficial layer exposed to argon, dihydrogen (H2), and a mixture of H2 and carbon dioxide (CO2) through a multiscale in situ approach: environmental transmission electron microscopy, synchrotron-based near-ambient-pressure X-ray photoelectron spectroscopy, and near-edge X-ray absorption spectroscopy. Although the material is stable under argon, H2 triggers the decomposition of CoB at 400 °C, leading to cobalt(0) nanoparticles. We then show that H2 activates CoB for the catalysis of CO2 methanation. A similar decomposition process is also observed on NiB nanocrystals under oxidizing conditions at 300 °C. Our work highlights the instability under reactive atmospheres of nanocrystalline cobalt and nickel borides obtained from molten salt synthesis. Therefore, we question the general stability of metal borides with distinct compositions under such conditions. These results shed light on the actual species in metal boride catalysis and provide the framework for future applications of metal borides in their stability domains.
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations, which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, the latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion, ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
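A toy sketch of the computational core: factor the discretized operator once per frequency and reuse the factorization across shots and adjoint solves, which is the role MUMPS plays in the paper. The simplified real-valued operator, sources, and gradient kernel below are illustrative stand-ins, not the paper's discretization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy Helmholtz-like operator A = -omega^2 diag(m) + L on an n x n grid,
# with L a 5-point Laplacian; a stand-in for the paper's discretization.
n = 50
L = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
omega, m = 2 * np.pi * 5.0, np.ones(n * n)
A = (-omega**2) * sp.diags(m) + L

lu = spla.splu(A.tocsc())            # factor once; MUMPS plays this role
sources = np.eye(n * n)[:, :8]       # 8 point sources -> 8 right-hand sides
u = lu.solve(sources)                # forward wavefields reuse the LU factors
residual = u                         # stand-in for predicted-minus-observed data
lam = lu.solve(residual, trans="T")  # back-propagated residuals, same factors
grad = (omega**2) * np.sum(u * lam, axis=1)  # illustrative gradient kernel
```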
NASA Astrophysics Data System (ADS)
Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi
2018-06-01
The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection due to the equipment and other external factors, and this affects further lithological identification and other logging interpretation. Therefore, de-noising is necessary for more accurate lithological identification. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which combines the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) at N different scales plus one residual. A self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing problem between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs, and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform, and the proposed method are applied to analysis of the simulated and the actual data. Results show diverse performance of these de-noising methods with regard to accuracy of lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
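A minimal sketch of the CEEMDAN-plus-wavelet idea using the PyEMD and PyWavelets packages; the scale choice k, the wavelet, and the universal threshold are stand-ins for the paper's adaptive selection.

```python
import numpy as np
import pywt
from PyEMD import CEEMDAN   # pip install EMD-signal

def denoise_log(signal, k=None, wavelet="db4"):
    imfs = CEEMDAN()(signal)          # IMFs ordered from high to low frequency
    if k is None:
        k = len(imfs) // 2            # stand-in for the adaptive scale selection
    # Wavelet-threshold the (k-1)th IMF to curb aliasing with noisier IMFs.
    coeffs = pywt.wavedec(imfs[k - 1], wavelet)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    denoised_imf = pywt.waverec(coeffs, wavelet)[: len(signal)]
    # Reconstruct: de-noised (k-1)th IMF + remaining low-frequency IMFs,
    # plus the residue (signal minus the IMF sum).
    residue = signal - imfs.sum(axis=0)
    return denoised_imf + imfs[k:].sum(axis=0) + residue
```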
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other options, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing a MatCalc thermodynamic simulator. The main issues investigated in this work are (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
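Amdahl's law, which the paper uses to evaluate speed-up, bounds the gain from parallelizing only the fine-scale fraction of the runtime; a one-line check (the fractions here are illustrative):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Upper bound on speed-up when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

# If, say, 95% of runtime is fine-scale sub-model evaluations run on 16 cores:
print(amdahl_speedup(0.95, 16))   # ~9.1x, well below the 16x ideal
```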
NASA Technical Reports Server (NTRS)
Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)
2004-01-01
A method for designing and assembling a high-performance catalyst bed gas generator for decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and in low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system and incorporates a robust, high-temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high-performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
Bae, Won-Gyu; Kim, Hong Nam; Kim, Doogon; Park, Suk-Hee; Jeong, Hoon Eui; Suh, Kahp-Yang
2014-02-01
Multiscale, hierarchically patterned surfaces, such as lotus leaves, butterfly wings, and the adhesion pads of gecko lizards, are abundant in nature, where microstructures are usually used to strengthen mechanical stability while nanostructures offer the main functionality, i.e., wettability, structural color, or dry adhesion. To emulate such hierarchical structures, multiscale, multilevel patterning has been extensively utilized over the last few decades for applications ranging from wetting control and structural colors to tissue scaffolds. In this review, we highlight recent advances in scalable multiscale patterning that bring about improved functions which can even surpass those found in nature, with particular focus on the analogy between natural and synthetic architectures in terms of the role of different length scales. This review is organized into four sections. First, the role and importance of multiscale, hierarchical structures is described with four representative examples. Second, recent achievements in multiscale patterning are introduced with their strengths and weaknesses. Third, four application areas--wetting control, dry adhesives, selectively filtrating membranes, and multiscale tissue scaffolds--are reviewed, stressing how and why multiscale structures must be incorporated to achieve their performance. Finally, we present future directions and challenges for scalable, multiscale patterned surfaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chun, Ho-Hwan; Jo, Wan-Kuen
2016-05-01
In this study, a N-, C-, and S-doped titania (NCS-TiO2) composite was prepared by combining the titanium precursor with a single dopant source, and the photocatalytic activity of this system for the decomposition of volatile organic compounds (VOCs) at indoor-concentration levels, under exposure to visible light, was examined. The NCS-TiO2 composite and the pure TiO2 photocatalyst, used as a reference, were characterized via X-ray diffraction, scanning electron microscopy, ultraviolet-visible diffuse reflectance spectroscopy, X-ray photoelectron spectroscopy, and Fourier transform infrared spectroscopy. The average efficiencies of benzene, toluene, ethyl benzene, and o-xylene decomposition using NCS-TiO2 were 70, 87, ~100, and ~100%, respectively, whereas the values obtained using the pure TiO2 powder were ~0, 18, 49, and 51%, respectively. These results suggested that, for the photocatalytic decomposition of toxic VOCs under visible-light exposure conditions, NCS-TiO2 was superior to the reference photocatalyst. The decomposition efficiencies of the target VOCs were inversely related to the initial concentration and relative humidity as well as to the air-flow rate. The decomposition efficiencies of the target chemicals achieved with a conventional lamp/NCS-TiO2 system were higher than those achieved with a light-emitting diode/NCS-TiO2 system. Overall, NCS-TiO2 can be used for the efficient decomposition of VOCs under visible-light exposure, if the operational conditions are optimized.
Pan, Kuan Lun; Chen, Mei Chung; Yu, Sheng Jen; Yan, Shaw Yi; Chang, Moo Been
2016-06-01
Direct decompositions of nitric oxide (NO) by La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4 are experimentally investigated, and the catalysts are tested with different operating parameters to evaluate their activities. Experimental results indicate that the physical and chemical properties of La0.7Ce0.3SrNiO4 are significantly improved by doping with Ba and partial substitution with Pr. NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 are 32% and 68%, respectively, at 400 °C with He as carrier gas. As the temperature is increased to 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, reach 100% with an inlet NO concentration of 1000 ppm while the space velocity is fixed at 8000 hr^-1. Effects of O2, H2O(g), and CO2 contents and space velocity on NO decomposition are also explored. The results indicate that NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, are slightly reduced as space velocity is increased from 8000 to 20,000 hr^-1 at 500 °C. In addition, the activities of both catalysts (La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4) for NO decomposition are slightly reduced in the presence of 5% O2, 5% CO2, or 5% H2O(g). In the durability test, with a space velocity of 8000 hr^-1 and an operating temperature of 600 °C, high N2 yield is maintained throughout the 60 hr test, revealing the long-term stability of Pr0.4Ba0.4Ce0.2SrNiO4 for NO decomposition. Overall, Pr0.4Ba0.4Ce0.2SrNiO4 shows good catalytic activity for NO decomposition. Nitric oxide (NO) not only causes adverse environmental effects such as acid rain, photochemical smog, and deterioration of visibility and water quality, but also harms the human lungs and respiratory system. Perovskite-type catalysts, including La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4, are applied for direct NO decomposition. The results show that NO decomposition can be enhanced as La0.7Ce0.3SrNiO4 is substituted with Ba and/or Pr. At 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 reach 100%, demonstrating high activity and good potential for direct NO decomposition. Effects of O2, H2O(g), and CO2 contents on catalytic activities are also evaluated and discussed.
Using multiscale texture and density features for near-term breast cancer risk analysis
Sun, Wenqing; Tseng, Tzu-Liang (Bill); Qian, Wei; Zhang, Jianying; Saltzstein, Edward C.; Zheng, Bin; Lure, Fleming; Yu, Hui; Zhou, Shi
2015-01-01
Purpose: To help improve efficacy of screening mammography by eventually establishing a new optimal personalized screening paradigm, the authors investigated the potential of using the quantitative multiscale texture and density feature analysis of digital mammograms to predict near-term breast cancer risk. Methods: The authors’ dataset includes digital mammograms acquired from 340 women. Among them, 141 were positive and 199 were negative/benign cases. The negative digital mammograms acquired from the “prior” screening examinations were used in the study. Based on the intensity value distributions, five subregions at different scales were extracted from each mammogram. Five groups of features, including density and texture features, were developed and calculated on every one of the subregions. Sequential forward floating selection was used to search for the effective combinations. Using the selected features, a support vector machine (SVM) was optimized using a tenfold cross-validation method to predict the risk of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) was used as the performance assessment index. Results: From a total number of 765 features computed from multiscale subregions, an optimal feature set of 12 features was selected. Applying this feature set, an SVM classifier yielded a performance of AUC = 0.729 ± 0.021. The positive predictive value was 0.657 (92 of 140) and the negative predictive value was 0.755 (151 of 200). Conclusions: The study results demonstrated a moderately high positive association between risk prediction scores generated by the quantitative multiscale mammographic image feature analysis and the actual risk of a woman having an image-detectable breast cancer in the next subsequent examinations. PMID:26127038
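A schematic of the classification stage with scikit-learn; the random features stand in for the 12 selected multiscale texture/density features, and (as an assumption) mlxtend's SequentialFeatureSelector could play the role of the sequential forward floating selection.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(340, 12))            # stand-in for the 12 selected features
y = (rng.random(340) < 0.41).astype(int)  # roughly 141 positive cases of 340

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
prob = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, prob))     # near 0.5 here; features are random
```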
NASA Astrophysics Data System (ADS)
Yoon, H.; Mook, W. M.; Dewers, T. A.
2017-12-01
Multiscale characteristics of textural and compositional (e.g., clay, cement, organics, etc.) heterogeneity profoundly influence the mechanical properties of shale. In particular, strongly anisotropic (i.e., laminated) heterogeneities are often observed to have a significant influence on hydrological and mechanical properties. In this work, we investigate a sample of the Cretaceous Mancos Shale to explore the importance of lamination, cements, organic content, and the spatial distribution of these characteristics. For compositional and structural characterization, the mineralogical distribution of a thin core sample polished by ion milling is analyzed using QEMSCAN® with MAPS MineralogyTM (developed by FEI Corporation). Based on mineralogy and organic matter distribution, multi-scale nanoindentation testing was performed to directly link compositional heterogeneity to mechanical properties. With FIB-SEM (3D) and high-magnification SEM (2D) images, key nanoindentation patterns are analyzed to evaluate elastic and plastic responses. Combined with MAPS Mineralogy data and fine-resolution BSE images, nanoindentation results are explained as a function of compositional and structural heterogeneity. Finite element modeling is used to quantitatively evaluate the link between the heterogeneity and mechanical behavior during nanoindentation. In addition, the spatial distribution of compositional heterogeneity, anisotropic bedding patterns, and mechanical anisotropy are employed as inputs for multiscale brittle fracture simulations using a phase field model. Comparison of experimental and numerical simulations reveals that proper incorporation of additional material information, such as bedding layer thickness and other geometrical attributes of the microstructures, may yield improvements in the numerical predictions of the mesoscale fracture patterns and hence the macroscopic effective toughness. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Husimi coordinates of multipartite separable states
NASA Astrophysics Data System (ADS)
Parfionov, Georges; Zapatrin, Romàn R.
2010-12-01
A parametrization of multipartite separable states in a finite-dimensional Hilbert space is suggested. It is proved to be a diffeomorphism between the set of zero-trace operators and the interior of the set of separable density operators. The result is applicable to any tensor product decomposition of the state space. An analytical criterion for separability of density operators is established in terms of the boundedness of a sequence of operators.
Model-Based Self-Tuning Multiscale Method for Combustion Control
NASA Technical Reports Server (NTRS)
Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.; Vrnak, Daniel R.
2006-01-01
A multi-scale representation of the combustor dynamics was used to create a self-tuning, scalable controller to suppress multiple instability modes in a liquid-fueled, aero-engine-derived combustor operating at engine-like conditions. Its self-tuning features, designed to handle uncertainties in the combustor dynamics and time delays, are essential for control performance and robustness. The controller was implemented to modulate a high-frequency fuel valve with feedback from dynamic pressure sensors. This scalable algorithm suppressed pressure oscillations of different instability modes by as much as 90 percent without the peak-splitting effect. The self-tuning logic guided the adjustment of controller parameters and converged quickly toward phase-lock for optimal suppression of the instabilities. The forced-response characteristics of the control model compare well with those of the test rig in both the frequency domain and the time domain.
NASA Technical Reports Server (NTRS)
Winternitz, Luke B.; Bamford, William A.; Price, Samuel R.
2017-01-01
As reported in a companion work, in its first phase, NASA's 2015 highly elliptic Magnetospheric Multiscale (MMS) mission set a record for the highest-altitude operational use of onboard GPS-based navigation, returning state estimates at 12 Earth radii. In early 2017 MMS transitioned to its second phase, which doubled the apogee distance to 25 Earth radii, approaching halfway to the Moon. This paper will present results for GPS observability and navigation performance achieved in MMS Phase 2. Additionally, it will provide simulation results predicting the performance of the MMS navigation system applied to a pair of concept missions at lunar distances. These studies will demonstrate how high-sensitivity GPS (or GNSS) receivers paired with onboard navigation software, as in the MMS navigation system, can extend the envelope of autonomous onboard GPS navigation far from the Earth.
Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Bukkuru, S.
2016-10-01
Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.
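A toy residence-time (Gillespie-type) KMC step for a single defect hopping on a cubic lattice, the kind of coarse model that consumes MD-derived statistics; the migration barrier, prefactor, and temperature are assumed placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
kB = 8.617e-5                       # Boltzmann constant, eV/K

def hop_rate(barrier_eV, T=600.0, nu0=1e13):
    """Arrhenius hop rate; barrier and prefactor would come from MD statistics."""
    return nu0 * np.exp(-barrier_eV / (kB * T))

moves = np.vstack([np.eye(3), -np.eye(3)])   # six nearest-neighbour hops
rates = np.full(6, hop_rate(0.8))            # assumed 0.8 eV migration barrier
pos, t = np.zeros(3), 0.0
for _ in range(100_000):
    total = rates.sum()
    t += -np.log(rng.random()) / total       # residence time of current state
    pos += moves[rng.choice(6, p=rates / total)]
print("diffusivity estimate:", (pos @ pos) / (6.0 * t), "lattice^2/s")
```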
The Magnetospheric Multiscale Constellation
NASA Technical Reports Server (NTRS)
Tooley, C. R.; Black, R. K.; Robertson, B. P.; Stone, J. M.; Pope, S. E.; Davis, G. T.
2015-01-01
The Magnetospheric Multiscale (MMS) mission is the fourth mission of the Solar Terrestrial Probe (STP) program of the National Aeronautics and Space Administration (NASA). The MMS mission was launched on March 12, 2015. The MMS mission consists of four identically instrumented spin-stabilized observatories which are flown in formation to perform the first definitive study of magnetic reconnection in space. The MMS mission was presented with numerous technical challenges, including the simultaneous construction and launch of four identical large spacecraft with 100 instruments total, stringent electromagnetic cleanliness requirements, closed-loop precision maneuvering and pointing of spinning flexible spacecraft, on-board GPS based orbit determination far above the GPS constellation, and a flight dynamics design that enables formation flying with separation distances as small as 10 km. This paper describes the overall mission design and presents an overview of the design, testing, and early on-orbit operation of the spacecraft systems and instrument suite.
Multi-Scale Surface Descriptors
Cipriano, Gregory; Phillips, George N.; Gleicher, Michael
2010-01-01
Local shape descriptors compactly characterize regions of a surface and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors statistically capture the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
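A sketch of the core computation under these assumptions: fit a quadric to the neighborhood in a local tangent frame and read off curvature-like quantities from its second-order coefficients; varying the neighborhood radius gives the multi-scale behavior. The function name is hypothetical.

```python
import numpy as np

def quadric_descriptor(points, center, normal):
    """Fit z ~ a x^2 + b xy + c y^2 + d x + e y + f in a local frame and
    return approximate principal curvatures (valid near a flat graph)."""
    n = normal / np.linalg.norm(normal)
    t1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t1) < 1e-8:            # normal parallel to x-axis
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    q = points - center
    x, y, z = q @ t1, q @ t2, q @ n
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    (a, b, c, d, e, f), *_ = np.linalg.lstsq(A, z, rcond=None)
    k1, k2 = np.linalg.eigvalsh([[2 * a, b], [b, 2 * c]])
    return k1, k2     # anisotropy and mean/Gaussian curvature derive from these
```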
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Fan, Yue
2002-01-01
By virtue of the technique of integration within an ordered product of operators and the Schmidt decomposition of the entangled state |η〉, we reduce the general projection calculation in the theory of quantum teleportation to as simple a form as possible and present a general formalism for teleporting quantum states of continuous variables. The project was supported by the National Natural Science Foundation of China and the Educational Ministry Foundation of China.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brezov, D. S.; Mladenova, C. D.; Mladenov, I. M., E-mail: mladenov@bio21.bas.bg
In this paper we obtain the Lie derivatives of the scalar parameters in the generalized Euler decomposition with respect to arbitrary axes under left and right deck transformations. This problem can be directly related to the representation of the angular momentum in quantum mechanics. As a particular example, we calculate the angular momentum and the corresponding quantum Hamiltonian in the standard Euler and Bryan representations. Similarly, in the hyperbolic case, the Laplace-Beltrami operator is retrieved for the Iwasawa decomposition. The case of two axes is considered as well.
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2014-12-01
Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and the highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using the two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. This comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.
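A simplified sketch of multi-scale change detection by Gaussian-pyramid differencing with per-scale thresholds; this stands in for, and does not reproduce, the paper's Lipschitz-regularity texture transform and 2D-CWT machinery.

```python
import numpy as np
from skimage.transform import pyramid_gaussian, resize

def multiscale_change(img_t0, img_t1, levels=4, k=3.0):
    """Flag pixels whose difference exceeds mean + k*std at every scale."""
    p0 = pyramid_gaussian(img_t0, max_layer=levels - 1, channel_axis=None)
    p1 = pyramid_gaussian(img_t1, max_layer=levels - 1, channel_axis=None)
    votes = []
    for a, b in zip(p0, p1):
        d = np.abs(a - b)
        votes.append(d > d.mean() + k * d.std())
    # Upsample the coarse-scale votes and require agreement across scales.
    full = [resize(v.astype(float), img_t0.shape) > 0.5 for v in votes]
    return np.logical_and.reduce(full)
```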
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
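A compact sketch of multiscale entropy: coarse-grain the interbeat series at increasing scale factors and compute sample entropy at each scale. The parameter choices m = 2 and r = 0.2 follow common convention, and the implementation is illustrative.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, float)
    tol, n = r * x.std(), len(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2.0   # exclude self-matches
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[: n * scale]).reshape(n, scale).mean(axis=1)

rr = np.random.default_rng(0).normal(0.8, 0.05, 2000)   # stand-in RR series
mse = [sample_entropy(coarse_grain(rr, s)) for s in range(1, 11)]
print(mse)   # entropy versus scale factor; white noise decays with scale
```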
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2011 CFR
2011-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
40 CFR 267.111 - What general standards must I meet when I stop operating the unit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... to protect human health and the environment, post-closure escape of hazardous waste, hazardous constituents, leachate, contaminated run-off, or hazardous waste decomposition products to the ground or... PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR OWNERS AND OPERATORS OF HAZARDOUS WASTE...
Atomistic Simulations of Chemical Reactivity of TATB Under Thermal and Shock Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manaa, M R; Reed, E J; Fried, L E
2009-09-23
The study of chemical transformations that occur at the reactive shock front of energetic materials provides important information for the development of predictive models at the grain and continuum scales. A major shortcoming of current high-explosives models is the lack of chemical kinetics data for the reacting explosive in the high pressure and temperature regimes. In the absence of experimental data, long-time-scale atomistic molecular dynamics simulations with reactive chemistry become a viable recourse to provide insight into the decomposition mechanism of explosives and to obtain effective reaction rate laws. These rates can then be incorporated into thermo-chemical-hydro codes (such as Cheetah linked to ALE3D) for accurate description of the grain- and macro-scale dynamics of reacting explosives. In this talk, I will present quantum simulations of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) crystals under thermal decomposition (high density and temperature) and shock compression conditions. This is the first time that condensed-phase quantum methods have been used to study the chemistry of insensitive high explosives. We used the quantum-based, self-consistent charge density functional tight binding method (SCC-DFTB) to calculate the interatomic forces for reliable predictions of chemical reactions, and to examine electronic properties at detonation conditions, for relatively long time scales on the order of several hundred picoseconds. For thermal decomposition of TATB, we conducted constant volume-temperature simulations, ranging from 0.35 to 2 nanoseconds, at ρ = 2.87 g/cm^3 and T = 3500, 3000, 2500, and 1500 K, and at ρ = 2.9 g/cm^3 and 2.72 g/cm^3 at T = 3000 K. We also simulated crystal TATB's reactivity under steady overdriven shock compression using the multi-scale shock technique. We conducted shock simulations with specified shock speeds of 8, 9, and 10 km/s for up to 0.43 ns duration, enabling us to track the reactivity of TATB well into the formation of several stable gas products, such as H2O, N2, and CO2. Although complex chemical transformations occur continuously in the dynamical, high-temperature, reactive environment of our simulations, a simple overall scheme for the decomposition of TATB emerges: water is the earliest decomposition product to form, followed by a polymerization (or condensation) process in which several remaining TATB fragments are joined together, initiating the early step in the formation of high-nitrogen clusters, along with stable products such as N2 and CO2. Remarkably, these clusters with high concentrations of carbon and nitrogen (and little oxygen) remain dynamically stable for the remaining period of the simulations. Our simulations thus reveal a hitherto unidentified region of high concentrations of nitrogen-rich heterocyclic clusters in reacting TATB, whose persistence impedes further reactivity towards the final products of fluid N2 and solid carbon. These simulations also predict significant populations of charged species such as NCO^-, H^+, OH^-, H3O^+, and O^2-, the first such observation in a reacting explosive. Finally, a reduced four-step global reaction mechanism with Arrhenius kinetic rates for the decomposition of TATB, along with comparative Cheetah decomposition kinetics at various temperatures, has been constructed and will be discussed.
Huang, Haiming; Xiao, Dean; Liu, Jiahui; Hou, Li; Ding, Li
2015-01-01
In the present study, struvite decomposition was performed by air stripping for ammonia release, and a novel integrated reactor was designed for the simultaneous removal and recovery of total ammonia-nitrogen (TAN) and total orthophosphate (PT) from swine wastewater by internal struvite recycling. Decomposition of struvite by air stripping was found to be feasible. Without supplementation with additional magnesium and phosphate sources, the removal ratio of TAN from synthetic wastewater was maintained at >80% by recycling the struvite decomposition product, formed under optimal conditions, six times. Continuous operation of the integrated reactor indicated that approximately 91% of TAN and 97% of PT in the swine wastewater could be removed and recovered by the proposed recycling process with the supplementation of bittern. Economic evaluation of the proposed system showed that the struvite precipitation cost can be reduced by approximately 54% by adopting the proposed recycling process, in comparison with no recycling. PMID:25960246
A Domain Decomposition Parallelization of the Fast Marching Method
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets is presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.
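For reference, a serial first-order Fast Marching sweep on a 2D grid, the building block the paper parallelizes by partitioning the grid into subdomains (with rollbacks when a neighbor supplies a smaller arrival time); this sketch is the textbook algorithm, not the paper's parallel code.

```python
import heapq
import numpy as np

def fast_march(speed, src):
    """First-order Fast Marching: arrival times T solving |grad T| = 1/speed."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    done = np.zeros((ny, nx), bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not done[a, b]:
                tx = min(T[max(a - 1, 0), b], T[min(a + 1, ny - 1), b])
                ty = min(T[a, max(b - 1, 0)], T[a, min(b + 1, nx - 1)])
                h = 1.0 / speed[a, b]
                if abs(tx - ty) >= h:        # one-sided update
                    t_new = min(tx, ty) + h
                else:                        # two-sided quadratic update
                    t_new = 0.5 * (tx + ty + np.sqrt(2 * h * h - (tx - ty) ** 2))
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(heap, (t_new, (a, b)))
    return T

T = fast_march(np.ones((64, 64)), (32, 32))   # distance field from the center
```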
Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.
Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance, enabling algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
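The MDS-plus-clustering step might look like this with scikit-learn; the random bus coordinates and the choice of six zones are toy stand-ins for the IEEE 118-bus electrical distance matrix.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

# D: symmetric matrix of electrical distances between buses; a toy stand-in
# built from random points replaces the network-derived distances here.
rng = np.random.default_rng(0)
pts = rng.normal(size=(118, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

# Embed buses into Euclidean coordinates that reproduce the distances,
# then cluster the embedded buses into voltage-control zones.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
zones = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(coords)
print(np.bincount(zones))   # bus count per zone
```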
Decomposition of fuzzy soft sets with finite value spaces.
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
Decomposition of gas-phase trichloroethene by the UV/TiO2 process in the presence of ozone.
Shen, Y S; Ku, Y
2002-01-01
The decomposition of gas-phase trichloroethene (TCE) in air streams by direct photolysis and by the UV/TiO2 and UV/O3 processes was studied. The experiments were carried out under various UV light intensities and wavelengths, ozone dosages, and initial concentrations of TCE to investigate and compare the removal efficiency of the pollutant. For the UV/TiO2 process, the individual contributions of direct photolysis and destruction by hydroxyl radicals were differentiated to discuss the quantum efficiency with 254 and 365 nm UV lamps. The removal of gaseous TCE by the UV/TiO2 process was found to be reduced in the presence of ozone, possibly because ozone molecules scavenge the hydroxyl radicals produced by the excitation of TiO2 under UV radiation, inhibiting the decomposition of TCE. A photoreactor design equation for the decomposition of gaseous TCE by the UV/TiO2 process in air streams was developed by combining the continuity equation of the pollutant with a surface catalysis reaction rate expression. With the proposed design scheme, the temporal distribution of TCE under various operating conditions in the UV/TiO2 process can be modeled well.
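The abstract does not reproduce the design equation itself; for a plug-flow photoreactor with a Langmuir-Hinshelwood surface rate, a plausible form (all symbols here are illustrative, not the paper's notation) is

```latex
u\,\frac{dC}{dz} \;=\; -\,a_v\,\frac{k\,K\,C}{1 + K\,C},
```

where u is the superficial gas velocity, C the TCE concentration at axial position z, a_v the illuminated catalyst area per reactor volume, k the UV-intensity-dependent rate constant, and K the adsorption equilibrium constant.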
Multi-scale analysis and characterization of the ITER pre-compression rings
NASA Astrophysics Data System (ADS)
Foussat, A.; Park, B.; Rajainmaki, H.
2014-01-01
The toroidal field (TF) system of the ITER Tokamak, composed of 18 D-shaped TF coils, experiences out-of-plane forces during an operating scenario caused by the interaction between the 68 kA operating TF current and the poloidal magnetic fields. In order to keep the induced static and cyclic stress range in the intercoil shear keys between coil cases within the ITER allowable limits [1], centripetal preload is introduced by means of S2 fiber-glass/epoxy composite pre-compression rings (PCRs). The PCRs consist of two sets of three rings, each 5 m in diameter and 337 × 288 mm in cross-section, installed at the top and bottom regions to apply a total resultant preload of 70 MN per TF coil, equivalent to about 400 MPa hoop stress. Recent developments of composites in the aerospace industry have accelerated the use of advanced composites as primary structural materials. The PCRs represent one of the most challenging composite applications: large, highly stressed structures operating at 4 K over a long service life. Efficient design of these pre-compression composite structures requires a detailed understanding of both the failure behavior of the structure and the fracture behavior of the material. Because of the inherent difficulty of carrying out a full-scale testing campaign, simulation tools are needed to predict the multiple complex failure mechanisms in pre-compression rings. A framework contract was placed by the ITER Organization with SENER Ingenieria y Sistemas SA to develop multi-scale models representative of the composite structure of the pre-compression rings based on experimental material data. The predictive modeling, based on ABAQUS FEM, provides the opportunity both to understand better how PCR composites behave under operating conditions and to support the supplier's development of materials with enhanced performance to withstand the machine design lifetime of 30,000 cycles. The multi-scale stress analysis has revealed a complete picture of the stress levels within the fiber and the matrix regarding the static and fatigue performance of the ring structure, including the presence of a delamination defect of critical size. The analysis results for the composite material demonstrate that the ring performance objectives are met under all loading and strength conditions.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smooths the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals with aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models within the multiscale framework. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
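The underestimation the authors describe is the classical attenuation bias; a small simulation (error variance and effect size assumed) shows the naive slope shrinking by the reliability ratio and the corrected slope recovering it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sd_u = 5000, 0.8                    # sample size and error SD (assumed)
x = rng.normal(0.0, 1.0, n)            # true finer-level predictor
w = x + rng.normal(0.0, sd_u, n)       # aggregated, error-contaminated predictor
y = 2.0 * x + rng.normal(0.0, 1.0, n)  # outcome driven by the true predictor

beta_naive = np.cov(w, y)[0, 1] / np.var(w)
reliability = np.var(x) / (np.var(x) + sd_u**2)   # lambda = var(x) / var(w)
print(beta_naive, beta_naive / reliability)       # ~1.22 (attenuated) vs ~2.0
```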
Griffith, D. Todd; Yoder, Nathanael C.; Resor, Brian; ...
2013-09-19
Offshore wind turbines are an attractive source for clean and renewable energy for reasons including their proximity to population centers and higher capacity factors. One obstacle to the more widespread installation of offshore wind turbines in the USA, however, is that recent projections of offshore operations and maintenance costs vary from two to five times the land-based costs. One way in which these costs could be reduced is through use of a structural health and prognostics management (SHPM) system as part of a condition-based maintenance paradigm with smart loads management. Our paper contributes to the development of such strategies by developing an initial roadmap for SHPM, with application to the blades. One of the key elements of the approach is a multiscale simulation approach developed to identify how the underlying physics of the system are affected by the presence of damage and how these changes manifest themselves in the operational response of a full turbine. A case study of a trailing edge disbond is analysed to demonstrate the multiscale sensitivity of damage approach and to show the potential life extension and increased energy capture that can be achieved using simple changes in the overall turbine control and loads management strategy. Finally, the integration of health monitoring information, economic considerations such as repair costs versus state of health, and a smart loads management methodology provides an initial roadmap for reducing operations and maintenance costs for offshore wind farms while increasing turbine availability and overall profit.
Andrew T. Hudak; Roger D. Ottmar; Robert E. Vihnanek; Clint S. Wright
2014-01-01
The RxCADRE research team collected multi-scale measurements of pre-, during-, and post-fire variables on operational prescribed fires conducted in 2008, 2011, and 2012 in longleaf pine ecosystems in the southeastern USA. Pre- and post-fire surface fuel loads were characterized in alternating pre- and post-fire clip plots systematically established within burn units....
Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.
Li, Xingyu; Plataniotis, Konstantinos N
2017-01-01
In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra using a matrix inverse operation directly, the introduced solution estimates stain spectra and stain depths via probabilistic reasoning individually. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimum decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our approach effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
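The algebraic core (Beer-Lambert optical density followed by NMF) can be sketched as below; this omits the paper's von Mises hue clustering and saturation weighting, and the helper name is hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_stains(rgb, n_stains=2):
    """Decompose an RGB pathology image into stain depths and stain spectra."""
    h, w, _ = rgb.shape
    # Beer-Lambert: optical density is linear in stain depth; +1 avoids log(0).
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / 256.0)
    model = NMF(n_components=n_stains, init="nndsvda", max_iter=500)
    depths = model.fit_transform(od)      # per-pixel stain depths (nonnegative)
    spectra = model.components_           # per-stain OD spectra (rows)
    return depths.reshape(h, w, n_stains), spectra
```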
NASA Technical Reports Server (NTRS)
Houseman, John (Inventor); Voecks, Gerald E. (Inventor)
1986-01-01
A flow-through catalytic reactor which selectively catalytically decomposes methanol into a soot-free, hydrogen-rich product gas, utilizing engine exhaust at temperatures of 200 to 650 °C to provide the heat for vaporizing and decomposing the methanol, is described. The reactor is combined with either a spark-ignited or compression-ignited internal combustion engine or a gas turbine to provide a combustion engine system. The system may be fueled entirely by the hydrogen-rich gas produced in the methanol decomposition reactor, or the system may be operated on mixed fuels for transient power gain and for cold start of the engine system. The reactor includes a decomposition zone formed by a plurality of elongated cylinders which contain a body of vapor-permeable methanol decomposition catalyst, preferably a shift catalyst such as copper-zinc.
NASA Astrophysics Data System (ADS)
Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias
2010-03-01
Thermal ablation induced by high-intensity focused ultrasound has produced promising clinical results for treating hepatocarcinoma and other liver tumors. However, skin burns have been reported, due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
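A numerical sketch of that projection with NumPy; the transfer matrix is a random stand-in, and the number of rib-dominated singular vectors is an assumed parameter that DORT would identify from the singular value spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = 64
# K: array transfer matrix at one frequency (receivers x transducers);
# here a random complex stand-in replaces measured data.
K = rng.normal(size=(n_tx, n_tx)) + 1j * rng.normal(size=(n_tx, n_tx))

# The leading right-singular vectors of K span emissions that focus on the
# strongest scatterers (the ribs); DORT identifies them from the spectrum.
U, s, Vh = np.linalg.svd(K)
n_ribs = 4                               # assumed rib-dominated subspace size
V_ribs = Vh[:n_ribs].conj().T            # (n_transducers, n_ribs) basis

# Project the nominal focusing law orthogonally to the rib subspace.
w_focus = np.exp(1j * rng.uniform(0, 2 * np.pi, n_tx))  # stand-in focusing law
w_safe = w_focus - V_ribs @ (V_ribs.conj().T @ w_focus)
print(np.abs(V_ribs.conj().T @ w_safe).max())           # ~0: no rib excitation
```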
NASA Astrophysics Data System (ADS)
Yuan, Jiao-Nan; Wei, Yong-Kai; Zhang, Xiu-Qing; Chen, Xiang-Rong; Ji, Guang-Fu; Kotni, Meena Kumari; Wei, Dong-Qing
2017-10-01
The shock response has a great influence on the design, synthesis, and application of energetic materials in both industrial and military areas. Therefore, the initial decomposition mechanism of bond scission at the atomistic level in condensed-phase α-RDX under shock loading has been studied using quantum molecular dynamics simulations in combination with a multi-scale shock technique. First, based on frontier molecular orbital theory, our calculated result shows that the N-NO2 bond is the weakest bond in the α-RDX molecule in the ground state, which may be the initial bond to break in pyrolysis. Second, the changes of bonds under shock loading are investigated through the changes of structures, dynamic bond lengths, and Laplacian bond orders during the simulation. Also, the variation of thermodynamic properties with time in α-RDX shocked at 10 km/s along the lattice vector a is presented for a timescale of up to 3.5 ps. By analyzing the detailed structural changes of RDX under shock loading, we find that the shocked RDX crystal undergoes a process of compression and rotation, which leads to initial rupture of the C-N bond. The time variation of dynamic bond lengths in a shocked RDX crystal is calculated, and the result indicates that the C-N bond ruptures more easily than other bonds. The Laplacian bond orders are used to predict molecular reactivity and stability. The values of the calculated bond orders show that the C-N bonds are more sensitive than other bonds under shock loading. In short, C-N bond scission has been validated as the initial decomposition step in an RDX crystal shocked at 10 km/s. Finally, a bond-length criterion has been used to identify individual molecules in the simulation. The distance thresholds up to which two particles are considered direct neighbors and assigned to the same cluster have been tested. The species and number densities of the initial decomposition products are collected from the trajectory.
Seidelmann, Katrin N; Scherer-Lorenzen, Michael; Niklaus, Pascal A
2016-01-01
Effects of tree species diversity on decomposition can operate via a multitude of mechanisms, including alterations of microclimate by the forest canopy. Studying such effects in natural settings is complicated by the fact that topography also affects microclimate and thus decomposition, so that effects of diversity are more difficult to isolate. Here, we quantified decomposition rates of standard litter in young subtropical forest stands, separating effects of canopy tree species richness and topography, and quantifying their direct and micro-climate-mediated components. Our litterbag study was carried out at two experimental sites of a biodiversity-ecosystem functioning field experiment in south-east China (BEF-China). The field sites display strong topographical heterogeneity and were planted with tree communities ranging from monocultures to mixtures of 24 native subtropical tree species. Litter bags filled with senescent leaves of three native tree species were placed from Nov. 2011 to Oct. 2012 on 134 plots along the tree species diversity gradient. Topographic features were measured for all and microclimate in a subset of plots. Stand species richness, topography and microclimate explained important fractions of the variations in litter decomposition rates, with diversity and topographic effects in part mediated by microclimatic changes. Tree stands were 2-3 years old, but nevertheless tree species diversity explained more variation (54.3%) in decomposition than topography (7.7%). Tree species richness slowed litter decomposition, an effect that slightly depended on litter species identity. A large part of the variance in decomposition was explained by tree species composition, with the presence of three tree species playing a significant role. Microclimate explained 31.4% of the variance in decomposition, and was related to lower soil moisture. Within this microclimate effect, species diversity (without composition) explained 8.9% and topography 34.4% of variance. Topography mainly affected diurnal temperature amplitudes by varying incident solar radiation.
Thermal stability and kinetics of decomposition of ammonium nitrate in the presence of pyrite.
Gunawan, Richard; Zhang, Dongke
2009-06-15
The interaction between ammonium nitrate based industrial explosives and pyrite-rich minerals in mining operations can lead to spontaneous explosion of the explosives. In an effort to provide a scientific basis for safe application of industrial explosives in reactive mining grounds containing pyrite, ammonium nitrate decomposition, with and without the presence of pyrite, was studied using a simultaneous Differential Scanning Calorimetry and Thermogravimetric Analyser (DSC-TGA) and a gas-sealed isothermal reactor. The activation energy and the pre-exponential factor of ammonium nitrate decomposition were determined to be 102.6 kJ mol⁻¹ and 4.55 × 10⁷ s⁻¹ without the presence of pyrite, and 101.8 kJ mol⁻¹ and 2.57 × 10⁹ s⁻¹ with the presence of pyrite. The kinetics of ammonium nitrate decomposition was then used to calculate the critical temperatures for ammonium nitrate decomposition with and without the presence of pyrite, based on the Frank-Kamenetskii model of thermal explosion. It was shown that the presence of pyrite reduces the temperature for, and accelerates the rate of, decomposition of ammonium nitrate. It was further shown that pyrite can significantly reduce the critical temperature of ammonium nitrate decomposition, causing undesired premature detonation of the explosives. The critical temperature also decreases with increasing diameter of the blast holes charged with the explosive. The concept of using the critical temperature as an indication of the thermal stability of the explosives to evaluate the risk of spontaneous explosion was verified in the gas-sealed isothermal reactor experiments.
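To illustrate how Arrhenius kinetics feed into a Frank-Kamenetskii criticality estimate, here is a minimal Python sketch. The activation energies and pre-exponential factors are the ones quoted in the abstract; the density, heat of reaction, thermal conductivity and blast-hole radii are placeholder values chosen only for illustration, not the study's measured data.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def fk_delta(T, E, A, r, rho, Q, lam):
    """Frank-Kamenetskii parameter for a cylindrical charge of radius r (m)
    held at ambient temperature T (K)."""
    return (E / (R * T**2)) * (r**2 * rho * Q * A / lam) * math.exp(-E / (R * T))

def critical_temperature(E, A, r, rho, Q, lam, delta_c=2.0, lo=300.0, hi=700.0):
    """Bisect for the ambient temperature at which delta reaches the critical
    value (delta_c = 2.0 for an infinite cylinder)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if fk_delta(mid, E, A, r, rho, Q, lam) < delta_c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Kinetics from the abstract; rho (kg m^-3), Q (J kg^-1) and lam (W m^-1 K^-1)
# are placeholder thermophysical values, as are the blast-hole radii.
rho, Q, lam = 1300.0, 1.5e6, 0.3
for label, E, A in [("no pyrite", 102.6e3, 4.55e7), ("with pyrite", 101.8e3, 2.57e9)]:
    for r in (0.05, 0.10, 0.15):
        print(f"{label:>11s}, r = {r:.2f} m: T_c ~ "
              f"{critical_temperature(E, A, r, rho, Q, lam):.0f} K")
```

Because the pre-exponential factor is larger with pyrite, the computed critical temperature drops, and it drops further as the charge radius grows, reproducing the two trends reported above.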
Towards a Multiscale Approach to Cybersecurity Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on the theoretical and algorithmic foundations of multiscale graphs, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
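The abstract does not spell out the authors' definitions, but the flavour of a multiscale distance metric can be sketched as follows: distances within a cluster are computed exactly, while distances between clusters are read off a coarse quotient graph of supernodes. The names here (the `part` cluster map, the cheapest-inter-cluster-edge coarsening) are illustrative assumptions, not the paper's construction.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Single-source shortest paths on a dict-of-dicts weighted graph."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def coarsen(adj, part):
    """Quotient graph: one supernode per cluster; edge weight is the
    cheapest inter-cluster edge."""
    coarse = defaultdict(dict)
    for u in adj:
        for v, w in adj[u].items():
            cu, cv = part[u], part[v]
            if cu != cv:
                coarse[cu][cv] = min(w, coarse[cu].get(cv, float("inf")))
    return coarse

def multiscale_distance(adj, part, s, t):
    """Exact distance inside a cluster; coarse supernode distance across
    clusters (intra-cluster legs are deliberately ignored at that scale)."""
    if part[s] == part[t]:
        return dijkstra(adj, s).get(t, float("inf"))
    return dijkstra(coarsen(adj, part), part[s]).get(part[t], float("inf"))

# Two clusters {a, b} and {c, d} joined by a single bridge edge b-c.
adj = {"a": {"b": 1.0}, "b": {"a": 1.0, "c": 5.0},
       "c": {"b": 5.0, "d": 1.0}, "d": {"c": 1.0}}
part = {"a": 0, "b": 0, "c": 1, "d": 1}
print(multiscale_distance(adj, part, "a", "b"))  # 1.0 (fine scale)
print(multiscale_distance(adj, part, "a", "d"))  # 5.0 (coarse scale)
```

The cross-cluster estimate trades accuracy for scalability, which is exactly the kind of compromise a multiscale analog of all-pairs shortest paths makes.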
Filters for Improvement of Multiscale Data from Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
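A minimal sketch of the spectral-filtering idea under additive white noise: estimate the flat noise floor from the highest-frequency quartile of the power spectrum and discard Fourier modes that do not rise above it. The cutoff heuristic and the threshold factor are assumptions for illustration; the paper's automatic selection method is more principled.

```python
import numpy as np

def spectral_filter(signal):
    """Low-pass a noisy series: keep only Fourier modes whose power rises
    above an estimated white-noise floor."""
    n = len(signal)
    coeffs = np.fft.rfft(signal)
    power = np.abs(coeffs) ** 2
    # Assume the top-frequency quartile is noise-dominated; 4x is a tunable margin.
    noise_floor = power[3 * len(power) // 4:].mean()
    keep = power > 4.0 * noise_floor
    keep[0] = True  # always retain the mean
    return np.fft.irfft(coeffs * keep, n=n)

# Synthetic demo: smooth "continuum data" plus additive white noise.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
truth = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = truth + 0.3 * np.random.default_rng(0).normal(size=t.size)
for name, y in [("noisy", noisy), ("filtered", spectral_filter(noisy))]:
    print(f"RMS error {name:>8s}: {np.sqrt(np.mean((y - truth) ** 2)):.4f}")
```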
The detection of flaws in austenitic welds using the decomposition of the time-reversal operator
NASA Astrophysics Data System (ADS)
Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin
2016-04-01
The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683
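As a rough illustration of flaw detection from singular values of array response matrices, the sketch below computes the leading singular value at each frequency and flags a flaw when it exceeds a baseline derived from flaw-free reference data. The three-sigma-style threshold is a generic stand-in; the paper derives a weld-specific lower threshold instead.

```python
import numpy as np

def leading_singular_values(response):
    """response: (n_freq, n_tx, n_rx) inter-element transfer matrices.
    Returns the largest singular value at each frequency."""
    return np.array([np.linalg.svd(m, compute_uv=False)[0] for m in response])

def detect_flaw(response, reference, k=3.0):
    """Flag a flaw when any leading singular value exceeds the flaw-free
    baseline mean by k standard deviations (generic illustrative threshold)."""
    s = leading_singular_values(response)
    s0 = leading_singular_values(reference)
    threshold = s0.mean() + k * s0.std()
    return bool((s > threshold).any()), threshold

# Synthetic demo: scattering background plus one rank-one "flaw" contribution.
rng = np.random.default_rng(0)
ref = rng.normal(size=(32, 16, 16)) + 1j * rng.normal(size=(32, 16, 16))
flaw = 5.0 * np.outer(rng.normal(size=16), rng.normal(size=16))
print(detect_flaw(ref + flaw, ref))  # -> (True, threshold)
```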
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Towards practical multiscale approach for analysis of reinforced concrete structures
NASA Astrophysics Data System (ADS)
Moyeda, Arturo; Fish, Jacob
2017-12-01
We present a novel multiscale approach for the analysis of reinforced concrete structural elements that overcomes two major hurdles to the practical utilization of multiscale technologies: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) the computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of the computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.
NASA Astrophysics Data System (ADS)
Schiff, C.; Gershman, D. J.; Avanov, L. A.; Giles, B. L.; Paterson, W. R.; Kriesler, S.; Barrie, A. C.; Rand, D. K.; Gliese, U.; Burch, J.
2017-12-01
Scientifically accurate measurements depend on careful calibration of in-flight instrumentation. We review two years of calibration results for the Fast Plasma Investigation (FPI) electron and ion spectrometers over the MMS fleet. We focus on the operating point calibration, by which the operating voltage on each of the 64 spectrometers is set to best balance gain, signal loss, and anode-to-anode cross talk. In addition, we use the calibration and housekeeping telemetry to infer the evolution of the charge extracted from the microchannel plates and, subsequently, to project the lifetime of the instrumentation.
Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics
Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.
2014-01-01
Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used genetically encoded voltage indicator imaging in mice, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerged. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314
NASA Astrophysics Data System (ADS)
Liu, Y.; Wu, W.; Zhang, Y.; Kucera, P. A.; Liu, Y.; Pan, L.
2012-12-01
Weather forecasting in the Middle East is challenging because of its complicated geography, including an extensive coastline, heterogeneous land surfaces, and a sparse regional observational network. Strong air-land-sea interactions form multi-scale weather regimes in the area, which require a numerical weather prediction model capable of properly representing multi-scale atmospheric flow with appropriate initial conditions. The WRF-based Real-Time Four Dimensional Data Assimilation (RTFDDA) system is one of the advanced multi-scale weather analysis and forecasting facilities developed at the Research Applications Laboratory (RAL) of NCAR. The forecasting system was carefully configured and applied to the Middle East. To overcome the limitation of the very sparsely available conventional observations in the region, we developed a hybrid data assimilation algorithm combining RTFDDA and WRF-3DVAR, which ingests remote sensing data from satellites and radar. This hybrid scheme blends Newtonian nudging FDDA and 3DVAR technology to effectively assimilate both conventional observations and remote sensing measurements, providing improved initial conditions for the forecasting system. For brevity, the forecasting system is called RTF3H (RTFDDA-3DVAR Hybrid). In this presentation, we discuss the hybrid data assimilation algorithm, its implementation, and its application to high-impact weather events in the area. Sensitivity studies are conducted to understand the strengths and limitations of this hybrid data assimilation algorithm.
Multiscale Thermo-Mechanical Design and Analysis of High Frequency and High Power Vacuum Electron Devices
NASA Astrophysics Data System (ADS)
Gamzina, Diana
2016-03-01
A methodology for performing thermo-mechanical design and analysis of high frequency and high average power vacuum electron devices is presented. This methodology results in a "first-pass" engineering design directly ready for manufacturing. The methodology includes establishment of thermal and mechanical boundary conditions, evaluation of convective film heat transfer coefficients, identification of material options, evaluation of temperature and stress field distributions, assessment of microscale effects on the stress state of the material, and fatigue analysis. The feature size of vacuum electron devices operating in the high frequency regime of 100 GHz to 1 THz is comparable to the microstructure of the materials employed for their fabrication. As a result, the thermo-mechanical performance of a device is affected by the local material microstructure. Such multiscale effects on the stress state are considered in the range of scales from about 10 microns up to a few millimeters. The design and analysis methodology is demonstrated on three separate microwave devices: a 95 GHz 10 kW cw sheet beam klystron, a 263 GHz 50 W long pulse wide-bandwidth sheet beam travelling wave tube, and a 346 GHz 1 W cw backward wave oscillator.
Structure-based multiscale approach for identification of interaction partners of PDZ domains.
Tiwari, Garima; Mohanty, Debasisa
2014-04-28
PDZ domains are peptide recognition modules which mediate specific protein-protein interactions and are known to have a complex specificity landscape. We have developed a novel structure-based multiscale approach which identifies crucial specificity determining residues (SDRs) of PDZ domains from explicit solvent molecular dynamics (MD) simulations on PDZ-peptide complexes, and uses these SDRs in combination with knowledge-based scoring functions for proteome-wide identification of their interaction partners. Multiple explicit solvent simulations ranging from 5 to 50 ns in duration have been carried out on 28 PDZ-peptide complexes with known binding affinities. MM/PBSA binding energy values calculated from these simulations show a correlation coefficient of 0.755 with the experimental binding affinities. On the basis of the SDRs of PDZ domains identified by MD simulations, we have developed a simple scoring scheme for evaluating binding energies of PDZ-peptide complexes using residue-based statistical pair potentials. This multiscale approach has been benchmarked on a mouse PDZ proteome array data set by calculating the binding energies for 217 different substrate peptides in the binding pockets of 64 different mouse PDZ domains. Receiver operating characteristic (ROC) curve analysis indicates that the area under the curve (AUC) for binder vs nonbinder classification by our structure-based method is 0.780. Our structure-based method does not require experimental PDZ-peptide binding data for training.
A multi-scale framework to link remotely sensed metrics with socioeconomic data
NASA Astrophysics Data System (ADS)
Watmough, Gary; Svenning, Jens-Christian; Palm, Cheryl; Sullivan, Clare; Danylo, Olha; McCallum, Ian
2017-04-01
There is increasing interest in the use of remotely sensed satellite data for estimating human poverty, as it can bridge data gaps that prevent fine scale monitoring of development goals across large areas. The ways in which metrics derived from satellite imagery are linked with socioeconomic data are crucial for accurate estimation of poverty. Yet, to date, the approaches in the literature linking satellite metrics with socioeconomic data are poorly characterized. Typically, studies use a single GIS construct, such as a circular buffer zone around a village or household, or an administrative boundary such as a district or census enumeration area. These polygons are then used to extract environmental data from satellite imagery, which are related to the socioeconomic data in statistical analyses. The use of a single polygon to link environmental and socioeconomic data is inappropriate in coupled human-natural systems, as processes operate over multiple scales. Human interactions with the environment occur at multiple levels, from individual (household) access to agricultural plots adjacent to homes, to communal access to common pool resources (CPR) such as forests at the village level. Here, we present a multi-scale framework that explicitly considers how people use the landscape. The framework is presented along with a case study example in Kenya. The multi-scale approach could enhance the modelling of human-environment interactions, which will have important consequences for monitoring the sustainable development goals for human livelihoods and biodiversity conservation.
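A minimal sketch of what multi-scale extraction looks like in practice: the same environmental raster is summarized within nested circular buffers around one household, one value per analysis scale. The raster, location, and radii below are hypothetical stand-ins, not the paper's Kenya data.

```python
import numpy as np

def multiscale_means(raster, row, col, radii_px):
    """Mean raster value inside circular buffers of several radii (pixels)
    around one household location -- one summary per analysis scale."""
    nr, nc = raster.shape
    rr, cc = np.ogrid[:nr, :nc]
    d2 = (rr - row) ** 2 + (cc - col) ** 2
    return {r: float(raster[d2 <= r * r].mean()) for r in radii_px}

# Hypothetical vegetation raster; the radii stand in for plot, village and
# commons scales of access to the landscape.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.1, 0.9, size=(200, 200))
print(multiscale_means(ndvi, row=100, col=120, radii_px=(3, 15, 50)))
```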
A concurrent multiscale micromorphic molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaofan, E-mail: shaofan@berkeley.edu; Tong, Qi
2015-04-21
In this work, we have derived a multiscale micromorphic molecular dynamics (MMMD) from first principles to extend the (Andersen)-Parrinello-Rahman molecular dynamics to the mesoscale and continuum scale. The multiscale micromorphic molecular dynamics is a concurrent three-scale dynamics that couples a fine scale molecular dynamics, a mesoscale micromorphic dynamics, and a macroscale nonlocal particle dynamics together. By choosing proper statistical closure conditions, we have shown that the original Andersen-Parrinello-Rahman molecular dynamics is the homogeneous and equilibrium case of the proposed multiscale micromorphic molecular dynamics. Specifically, we have shown that the Andersen-Parrinello-Rahman molecular dynamics can be rigorously formulated and justified from first principles, and that its general inhomogeneous case, i.e., the three-scale concurrent multiscale micromorphic molecular dynamics, can take into account macroscale continuum mechanics boundary conditions without the limitation of atomistic or periodic boundary conditions. The discovered multiscale structure and the corresponding multiscale dynamics reveal a seamless transition from the atomistic scale to the continuum scale and the intrinsic coupling mechanism among them based on a first-principles formulation.
Light-weight analyzer for odor recognition
Vass, Arpad A; Wise, Marcus B
2014-05-20
The invention provides a lightweight analyzer, e.g., detector, capable of locating clandestine graves. The detector utilizes the very specific and unique chemicals identified in the database of human decompositional odor. This detector, based on specific chemical compounds found relevant to human decomposition, is the next step forward in clandestine grave detection and will take the guess-work out of current methods using canines and ground-penetrating radar, which have historically been unreliable. The detector is self-contained, portable and built for field use. Both visual and auditory cues are provided to the operator.
Leung, Kevin; Budzien, Joanne L
2010-07-07
The decomposition of ethylene carbonate (EC) during the initial growth of solid-electrolyte interphase (SEI) films at the solvent-graphitic anode interface is critical to lithium ion battery operations. Ab initio molecular dynamics simulations of explicit liquid EC/graphite interfaces are conducted to study these electrochemical reactions. We show that carbon edge terminations are crucial at this stage, and that achievable experimental conditions can lead to surprisingly fast EC breakdown mechanisms, yielding decomposition products seen in experiments but not previously predicted.
Multiscale Materials Modeling Workshop Summary
DOT National Transportation Integrated Search
2013-12-01
This report summarizes a 2-day workshop held to share information on multiscale material modeling. The aim was to gain expert feedback on the state of the art and identify Exploratory Advanced Research (EAR) Program opportunities for multiscale mater...
NASA Astrophysics Data System (ADS)
Cho, Hyesung; Moon Kim, Sang; Sik Kang, Yun; Kim, Junsoo; Jang, Segeun; Kim, Minhyoung; Park, Hyunchul; Won Bang, Jung; Seo, Soonmin; Suh, Kahp-Yang; Sung, Yung-Eun; Choi, Mansoo
2015-09-01
The production of multiscale architectures is of significant interest in materials science, and the integration of those structures could provide a breakthrough for various applications. Here we report a simple yet versatile strategy that allows for the LEGO-like integration of microscale membranes by quantitatively controlling the oxygen inhibition effects of ultraviolet-curable materials, leading to multilevel multiscale architectures. The spatial control of oxygen concentration induces different curing contrasts in a resin, allowing selective imprinting and bonding at different sides of a membrane, which enables LEGO-like integration together with multiscale pattern formation. Utilizing the method, multilevel multiscale Nafion membranes are prepared and applied to a polymer electrolyte membrane fuel cell. Our multiscale membrane fuel cell demonstrates a significant enhancement of performance while ensuring mechanical robustness. The performance enhancement is caused by the combined effect of the decrease of membrane resistance and the increase of the electrochemically active surface area.
A complete categorization of multiscale models of infectious disease systems.
Garira, Winston
2017-12-01
Modelling of infectious disease systems has entered a new era in which disease modellers are increasingly turning to multiscale modelling to extend traditional modelling frameworks into new application areas and to achieve higher levels of detail and accuracy in characterizing infectious disease systems. In this paper we present a categorization framework for categorizing multiscale models of infectious disease systems. The categorization framework consists of five integration frameworks and five criteria. We use the categorization framework to give a complete categorization of host-level immuno-epidemiological models (HL-IEMs). The framework is also shown to be applicable in categorizing other types of multiscale models of infectious diseases beyond HL-IEMs, by modifying the initial framework presented in this study. Categorizing multiscale models of infectious disease systems in this way is useful in bringing some order to the discussion on the structure of these multiscale models.
Fiedler Trees for Multiscale Surface Analysis
2011-08-01
Space-time evolution of cataclasis in carbonate fault zones
NASA Astrophysics Data System (ADS)
Ferraro, Francesco; Grieco, Donato Stefano; Agosta, Fabrizio; Prosser, Giacomo
2018-05-01
The present contribution focuses on the micro-mechanisms associated with cataclasis of both calcite- and dolomite-rich fault rocks. This work combines field and laboratory data on carbonate fault cores currently exposed in central and southern Italy. By first deciphering the main fault rock textures, their spatial distribution, crosscutting relationships and multi-scale dimensional properties, the relative timing of Intragranular Extensional Fracturing (IEF), chipping, and localized shear is inferred. IEF was predominant within already fractured carbonates, forming coarse and angular rock fragments, and likely lasted longer within the dolomitic fault rocks. Chipping occurred in both lithologies and was activated by grain rolling, forming minute, sub-rounded survivor grains embedded in a powder-like carbonate matrix. The largest fault zones, which crosscut either limestones or dolostones, were subjected to localized shear and, eventually, to a flash temperature increase that caused thermal decomposition of calcite within narrow (cm-thick) slip zones. The results are organized in a synoptic panel including the main dimensional properties of survivor grains. Finally, a conceptual model of the time-dependent evolution of cataclastic deformation in carbonate rocks is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yongfeng
2016-09-01
U3Si2 and FeCrAl have been proposed as fuel and cladding concepts, respectively, for accident tolerant fuels with higher tolerance to accident scenarios compared to UO2. However, many key aspects of the physics and material properties governing their in-pile performance are yet to be explored. To accelerate understanding and reduce the cost of experimental studies, multiscale modeling and simulation are used to develop physics-based materials models to assist engineering scale fuel performance modeling. In this report, the lower-length-scale efforts in method and material model development supported by the Accident Tolerant Fuel (ATF) high-impact-problem (HIP) under the NEAMS program are summarized. Significant progress has been made on interatomic potentials, phase field models for phase decomposition and gas bubble formation, and thermal conductivity for U3Si2 fuel, and on precipitation in FeCrAl cladding. These accomplishments provide atomistic and mesoscale tools, improve the current understanding, and deliver engineering scale models for these two ATF concepts.
Atomistic potentials based energy flux integral criterion for dynamic adiabatic shear banding
NASA Astrophysics Data System (ADS)
Xu, Yun; Chen, Jun
2015-02-01
The energy flux integral criterion based on atomistic potentials within the framework of hyperelasticity-plasticity is proposed for dynamic adiabatic shear banding (ASB). A decomposition of the system Helmholtz energy reveals that the dynamic influence on the integral path dependence originates from the volumetric strain energy and part of the deviatoric strain energy, and the plastic influence only from the remaining part of the deviatoric strain energy. The concept of a critical shear banding energy is suggested for describing the initiation of ASB, which consists of the dynamic recrystallization (DRX) threshold energy and the thermal softening energy. The criterion directly relates energy flux to the basic physical processes that induce shear instability, such as dislocation nucleation and multiplication, without introducing ad-hoc parameters in empirical constitutive models. It reduces to the classical path-independent J-integral for quasi-static loading and elastic solids. The atomistic-to-continuum multiscale coupling method is used to simulate the initiation of ASB. Atomic configurations indicate that DRX-induced microstructural softening may be essential to dynamic shear localization and hence the initiation of ASB.
Micro-Macro Simulation of Viscoelastic Fluids in Three Dimensions
NASA Astrophysics Data System (ADS)
Rüttgers, Alexander; Griebel, Michael
2012-11-01
The development of the chemical industry resulted in various complex fluids that cannot be correctly described by classical fluid mechanics. For instance, this includes paint, engine oils with polymeric additives and toothpaste. We currently perform multiscale viscoelastic flow simulations for which we have coupled our three-dimensional Navier-Stokes solver NaSt3dGPF with the stochastic Brownian configuration field method on the micro-scale. In this method, we represent a viscoelastic fluid as a dumbbell system immersed in a three-dimensional Newtonian liquid which leads to a six-dimensional problem in space. The approach requires large computational resources and therefore depends on an efficient parallelisation strategy. Our flow solver is parallelised with a domain decomposition approach using MPI. It shows excellent scale-up results for up to 128 processors. In this talk, we present simulation results for viscoelastic fluids in square-square contractions due to their relevance for many engineering applications such as extrusion. Another aspect of the talk is the parallel implementation in NaSt3dGPF and the parallel scale-up and speed-up behaviour.
Ground roll attenuation by synchrosqueezed curvelet transform
NASA Astrophysics Data System (ADS)
Liu, Zhao; Chen, Yangkang; Ma, Jianwei
2018-04-01
Ground roll is a type of coherent noise in land seismic data that has low frequency, low velocity and high amplitude. It damages reflection events that contain important information about subsurface structures, hence the removal of ground roll is a crucial step in seismic data processing. A suitable transform is needed for removal of ground roll. Curvelet transform is an effective sparse transform that optimally represents seismic events. In addition, the curvelets can provide a multiscale and multidirectional decomposition of the input data in time-frequency and angular domain, which can help distinguish between ground roll and useful signals. In this paper, we apply synchrosqueezed curvelet transform (SSCT) for ground roll attenuation. The synchrosqueezing technique in SSCT is used to precisely reallocate the energy of local wave vectors in order to separate ground roll from the original data with higher resolution and higher fidelity. Examples of synthetic and field seismic data reveal that SSCT performs well in the suppression of aliased and non-aliased ground roll while preserving reflection waves, in comparison with high-pass filtering, wavelet and curvelet methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.
This study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for the global GMRES smoother) than the DD ILU smoother.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discuss the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440
Multistage reaction pathways in detonating high explosives
NASA Astrophysics Data System (ADS)
Li, Ying; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; CACS Collaboration; ALCF Team
2015-06-01
Atomistic mechanisms underlying the reaction time and intermediate reaction products of detonating high explosives far from equilibrium have been elusive. This is because detonation is one of the hardest multiscale physics problems, in which diverse length and time scales play important roles. Here, large spatiotemporal-scale reactive molecular dynamics simulations validated by quantum molecular dynamics simulations reveal a two-stage reaction mechanism during the detonation of cyclotrimethylenetrinitramine crystal. Rapid production of N2 and H2O within 10 ps is followed by delayed production of CO molecules beyond ns. We found that further decomposition towards the final products is inhibited by the formation of large metastable carbon- and oxygen-rich clusters with fractal geometry. In addition, we found distinct uni-molecular and intermolecular reaction pathways, respectively, for the rapid N2 and H2O productions. This work was supported by the Office of Naval Research Grant No. N000014-12-1-0555 and the Basic Research Program of Defense Threat Reduction Agency (DTRA) Grant No. HDTRA1-08-1-0036. All the simulations were performed at USC and Argonne LCF.
Multiscale Monte Carlo equilibration: Two-color QCD with two fermion flavors
Detmold, William; Endres, Michael G.
2016-12-02
In this study, we demonstrate the applicability of a recently proposed multiscale thermalization algorithm to two-color quantum chromodynamics (QCD) with two mass-degenerate fermion flavors. The algorithm involves refining an ensemble of gauge configurations that had been generated using a renormalization group (RG) matched coarse action, thereby producing a fine ensemble that is close to the thermalized distribution of a target fine action; the refined ensemble is subsequently rethermalized using conventional algorithms. Although the generalization of this algorithm from pure Yang-Mills theory to QCD with dynamical fermions is straightforward, we find that in the latter case the method is susceptible to numerical instabilities during the initial stages of rethermalization when using the hybrid Monte Carlo algorithm. We find that these instabilities arise from large fermion forces in the evolution, which are attributed to an accumulation of spurious near-zero modes of the Dirac operator. We propose a simple strategy for curing this problem, and demonstrate that rapid thermalization, as probed by a variety of gluonic and fermionic operators, is possible with this solution. We also study the sensitivity of rethermalization rates to the RG matching of the coarse and fine actions, and identify effective matching conditions based on a variety of measured scales.
NASA Astrophysics Data System (ADS)
Rincon, F.; Roudier, T.; Schekochihin, A. A.; Rieutord, M.
2017-03-01
The Sun provides us with the only spatially well-resolved astrophysical example of turbulent thermal convection. While various aspects of solar photospheric turbulence, such as granulation (one-Megameter horizontal scale), are well understood, the questions of the physical origin and dynamical organization of larger-scale flows, such as the 30-Megameter supergranulation and flows deep in the solar convection zone, remain largely open in spite of their importance for solar dynamics and magnetism. Here, we present a new critical global observational characterization of multiscale photospheric flows and subsequently formulate an anisotropic extension of the Bolgiano-Obukhov theory of hydrodynamic stratified turbulence that may explain several of their distinctive dynamical properties. Our combined analysis suggests that photospheric flows in the horizontal range of scales between supergranulation and granulation have a typical vertical correlation scale of 2.5 to 4 Megameters and operate in a strongly anisotropic, self-similar, nonlinear, buoyant dynamical regime. While the theory remains speculative at this stage, it lends itself to quantitative comparisons with future high-resolution acoustic tomography of subsurface layers and advanced numerical models. Such a validation exercise may also lead to new insights into the asymptotic dynamical regimes in which other, unresolved turbulent anisotropic astrophysical fluid systems supporting waves or instabilities operate.
An Apparatus for High-Pressure Thermogravimetry.
Thermograms are presented to show typical operation and performance of the apparatus, using aniline to demonstrate retardation of evaporation and copper sulfate pentahydrate to show relatively unchanged decomposition rates with increased pressure. (Author)
A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media
NASA Astrophysics Data System (ADS)
Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.
2017-12-01
Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow using a concurrent coupling approach. The workflow includes the at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study applies the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models of flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is packed heterogeneously, so that the mixing front geometry is more complex and not known a priori. To address those challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparison with single-scale simulations in terms of velocities, reactive concentrations and computing cost.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
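The flavor of adaptive snapshot selection can be illustrated with a much simplified single-pass sketch: each incoming snapshot is accepted into the POD basis only if its projection error onto the current basis exceeds a tolerance. For brevity this uses Gram-Schmidt enrichment rather than a true incremental SVD, and the acceptance test is a stand-in for the paper's ODE-solver-style error estimator.

```python
import numpy as np

class IncrementalPOD:
    """Single-pass POD basis builder: a snapshot is kept only when its
    projection error onto the current basis exceeds tol."""
    def __init__(self, tol=1e-2):
        self.tol = tol
        self.basis = None  # orthonormal columns

    def offer(self, snapshot):
        x = np.asarray(snapshot, dtype=float)
        if self.basis is None:
            self.basis = (x / np.linalg.norm(x)).reshape(-1, 1)
            return True
        residual = x - self.basis @ (self.basis.T @ x)
        if np.linalg.norm(residual) <= self.tol * np.linalg.norm(x):
            return False  # snapshot already well represented: skip it
        q = (residual / np.linalg.norm(residual)).reshape(-1, 1)
        self.basis = np.hstack([self.basis, q])
        return True

# Demo: 100 shifted sine snapshots live in a 2-D subspace, so only two
# snapshots are retained and the basis stops growing.
x = np.linspace(0.0, 1.0, 400)
pod = IncrementalPOD(tol=1e-2)
kept = sum(pod.offer(np.sin(2 * np.pi * (x - 0.001 * k))) for k in range(100))
print(f"kept {kept} of 100 snapshots; basis size {pod.basis.shape[1]}")
```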
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution, due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data into segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data set was given by both ventricles of the Visible Female data set at 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balance did not differ significantly across computational nodes even though the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account the computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall, due to the communication overhead. Nevertheless, the simulation times were reduced from 87 minutes on 512 nodes to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heartbeat.
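A toy version of weighted recursive bisection illustrates the load model described above: repeatedly split the heaviest part at the weighted median of its longest coordinate axis. The random geometry and the 10-vs-1 weights below are illustrative; the real implementation operates on the segmented anatomical grid and, as noted, does not model communication costs either.

```python
import numpy as np

def orb_partition(weights, coords, n_parts):
    """Recursive bisection sketch: split the heaviest part at the weighted
    median along its longest coordinate axis until n_parts parts exist."""
    parts = [np.arange(len(weights))]
    while len(parts) < n_parts:
        j = max(range(len(parts)), key=lambda k: weights[parts[k]].sum())
        idx = parts.pop(j)
        axis = int(np.argmax(coords[idx].max(axis=0) - coords[idx].min(axis=0)))
        order = idx[np.argsort(coords[idx, axis])]
        cum = np.cumsum(weights[order])
        cut = int(np.searchsorted(cum, cum[-1] / 2.0))
        parts.extend([order[:cut + 1], order[cut + 1:]])
    return parts

# Demo with the study's load model: tissue elements weight 10, others weight 1.
rng = np.random.default_rng(0)
coords = rng.random((1000, 3))
weights = np.where(rng.random(1000) < 0.3, 10.0, 1.0)
for i, p in enumerate(orb_partition(weights, coords, 4)):
    print(f"part {i}: {p.size} elements, weight {weights[p].sum():.0f}")
```

Note how the parts end up with near-equal weight but very different element counts, which is exactly the behavior reported for the tissue/non-tissue weighting.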
Alteration of Asian lacquer: in-depth insight using a physico-chemical multiscale approach.
Le Hô, Anne-Solenn; Duhamel, Chloé; Daher, Céline; Bellot-Gurlet, Ludovic; Paris, Céline; Regert, Martine; Sablier, Michel; André, Guilhem; Desroches, Jean-Paul; Dumas, Paul
2013-10-07
Oriental lacquer has been used in Asian countries for thousands of years as a durable and aesthetic coating material, prized for its adhesive, consolidating, protective and decorative properties. Although these objects are made from a material that is unusual in the West, Western museum collections host many lacquerwares. Curators, restorers and scientists are daily confronted with questions of their conservation and their alteration. The characterization of their conservation state is usually assessed through visual observation. However, deterioration often starts at the microscopic level and cannot be detected by simple visual inspection. Ageing and deterioration of artworks are often connected to physical, mechanical and chemical transformations. Thus, new insight into the alteration of lacquer involves the monitoring of macroscopic, microscopic and molecular modifications, which can be assessed from physico-chemical measurements. Non-invasive (microtopography and Scanning Electron Microscopy, SEM) and micro-invasive (infrared micro-spectroscopy using a synchrotron source, SR-μFTIR) investigations were performed to study the degradation processes of lacquers and evaluate their level of alteration. In particular, a spectral decomposition and fitting procedure was performed in the 1820-1520 cm⁻¹ region to follow the shift of the C=O and C=C band positions during lacquer ageing. The present work proves the potential of this physico-chemical approach in conservation studies of lacquers and in the quantification of the state of alteration. It evidences chemical alteration phenomena such as oxidation and decomposition of the lacquer polymeric network. It also demonstrates for the first time the degradation front of artificially aged lacquer and the chemical imaging of a more than 2000-year-old archaeological lacquer using SR-μFTIR.
Conservative tightly-coupled simulations of stochastic multiscale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2016-05-15
Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged ("implicit") coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration ("explicit") Picard coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
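A minimal sketch of Picard (fixed-point) coupling on this class of testbed problem: two overlapping 1-D diffusion subdomains, one carrying a white-noise source, exchange interface values until they agree before a time step is accepted. Grid sizes, noise amplitude, overlap width, and tolerances are all illustrative assumptions; the paper's Newton variant and conservative flux treatment are omitted.

```python
import numpy as np

def heat_step(u, dt, dx, bc_left, bc_right, source):
    """One implicit-Euler step of u_t = u_xx + source with Dirichlet ends."""
    n, r = len(u), dt / dx**2
    A = (1 + 2 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    b = u + dt * source
    b[0], b[-1] = bc_left, bc_right
    return np.linalg.solve(A, b)

# Global grid has 101 points; left solver owns 0..60, right solver owns 40..100.
nx, dx, dt = 101, 0.01, 5e-5
rng = np.random.default_rng(0)
uL, uR = np.zeros(61), np.zeros(61)
for step in range(200):
    noise = rng.normal(scale=50.0, size=61)  # white-noise source, right subdomain
    gL, gR = uR[20], uL[40]                  # interface guesses (global nodes 60, 40)
    for _ in range(50):                      # Picard loop: iterate to tight coupling
        newL = heat_step(uL, dt, dx, 0.0, gL, np.zeros(61))
        newR = heat_step(uR, dt, dx, gR, 0.0, noise)
        gL2, gR2 = newR[20], newL[40]        # re-read interface values
        if abs(gL2 - gL) + abs(gR2 - gR) < 1e-12:
            break
        gL, gR = gL2, gR2
    uL, uR = newL, newR
print("solution at the left interface node after 200 steps:", uL[-1])
```

Running the inner loop only once recovers the "explicit" single-iteration coupling the abstract contrasts with; iterating to tolerance is the fully converged "implicit" variant.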
Alternative transitions between existing representations in multi-scale maps
NASA Astrophysics Data System (ADS)
Dumont, Marion; Touya, Guillaume; Duchêne, Cécile
2018-05-01
Map users may have difficulty with multi-scale navigation tasks, as cartographic objects may have various representations across scales. We assume that adding intermediate representations could be one way to reduce the differences between existing representations and to ease the transitions across scales. We consider an existing multi-scale map covering the scale range from 1:25k to 1:100k. Based on hypotheses about the design of intermediate representations, we build custom multi-scale maps with alternative transitions. We will conduct a user evaluation in the near future to compare the efficiency of these alternative maps for multi-scale navigation. This paper discusses the hypotheses and the production process of these alternative maps.
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
NASA Astrophysics Data System (ADS)
Finn, Conor; Lizier, Joseph
2018-04-01
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and the ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity, enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
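The basic split underlying this framework can be sketched numerically: for each realisation, pointwise mutual information decomposes as i(s;t) = h(s) − h(s|t), where the specificity h(s) and the ambiguity h(s|t) are both non-negative. The sketch below computes that split from empirical counts for the two-bit-copy example; the redundancy lattices and partial information atoms themselves are beyond this illustration.

```python
import numpy as np
from collections import Counter

def specificity_ambiguity(pairs):
    """For each observed (source, target) pair, split pointwise mutual
    information i(s;t) = h(s) - h(s|t) into specificity h(s) and
    ambiguity h(s|t), both measured in bits."""
    n = len(pairs)
    p_s = Counter(s for s, _ in pairs)
    p_t = Counter(t for _, t in pairs)
    p_st = Counter(pairs)
    out = {}
    for (s, t), c in p_st.items():
        spec = -np.log2(p_s[s] / n)   # h(s): how specific the source event is
        amb = -np.log2(c / p_t[t])    # h(s|t): how ambiguous it remains given t
        out[(s, t)] = (spec, amb, spec - amb)
    return out

# Two-bit-copy example: the target is a perfect copy of both source bits,
# so every realisation has 2 bits of specificity and zero ambiguity.
pairs = [((s1, s2), (s1, s2)) for s1 in (0, 1) for s2 in (0, 1)]
for k, (spec, amb, pmi) in specificity_ambiguity(pairs).items():
    print(k, f"specificity={spec:.1f} bits, ambiguity={amb:.1f}, i={pmi:.1f}")
```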
Metagenomic analysis of antibiotic resistance genes (ARGs) during refuse decomposition.
Liu, Xi; Yang, Shu; Wang, Yangqing; Zhao, He-Ping; Song, Liyan
2018-04-12
Landfills are important reservoirs of residual antibiotics and antibiotic resistance genes (ARGs), but the mechanisms by which landfilling influences antibiotic resistance remain unclear. Although refuse decomposition plays a crucial role in landfill stabilization, its impact on antibiotic resistance has not been well characterized. To better understand this impact, we studied the dynamics of ARGs and the bacterial community composition during refuse decomposition in a bench-scale bioreactor after long-term operation (265 d), based on metagenomic analysis. The total abundance of ARGs increased from 431.0 ppm in the initial aerobic phase (AP) to 643.9 ppm in the later methanogenic phase (MP) during refuse decomposition, suggesting that the use of landfills for municipal solid waste (MSW) treatment may elevate the level of ARGs. A shift from drug-specific (bacitracin, tetracycline and sulfonamide) resistance to multidrug resistance was observed during refuse decomposition and was driven by a shift in potential bacterial hosts. The elevated abundance of Pseudomonas mainly contributed to the increasing abundance of multidrug ARGs (mexF and mexW). Accordingly, the percentage of ARGs encoding an efflux pump increased during refuse decomposition, suggesting that potential bacterial hosts developed this mechanism to adapt to the carbon and energy shortage once biodegradable substances were depleted. Overall, our findings indicate that the use of landfills for MSW treatment increases antibiotic resistance, and demonstrate the need for a comprehensive investigation of antibiotic resistance in landfills.
LLNL demonstration of liquid gun propellant destruction in a 0.1 gallon per minute scale reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cena, R.J.; Thorsness, C.B.; Coburn, T.T.
1994-06-01
The Lawrence Livermore National Laboratory (LLNL) has built and operated a pilot plant for processing oil shale using recirculating hot solids. This pilot plant was adapted in 1993 to demonstrate the feasibility of decomposing a liquid gun propellant (LGP), LP XM46, a mixture of 76% HAN (NH₃OHNO₃) and 24% TEAN ((HOCH₂CH₂)₃NHNO₃), diluted 1:3 in water. In the Livermore process, the LGP is thermally treated in a moving packed bed of ceramic spheres, where TEAN and HAN decompose, forming a suite of gases including methane, carbon monoxide, oxygen, nitrogen oxides, ammonia and molecular nitrogen. The ceramic spheres are circulated and heated, providing the energy required for thermal decomposition. The authors performed an extended one-day test of the solids recirculation system, with continuous injection of approximately 0.1 gal/min of LGP, diluted 1:3 in water, for a period of eight hours. The apparatus operated smoothly over the course of the eight-hour run, during which 144 kg of solution containing 36 kg of LGP was processed. Continuous on-line gas analysis was invaluable in tracking the progress of the experiment and quantifying the decomposition products. The reactor was operated in two modes: a “Pyrolysis” mode, where decomposition products were removed from the moving-bed reactor exit, passing through condensers to a flare, and a “Combustion” mode, where the products were oxidized in an air-lift pipe prior to exiting the system. In the “Pyrolysis” mode, driver gases were recycled, producing a small, concentrated stream of decomposition products. In the “Combustion” mode, the driver gases were not recycled, resulting in 40 times higher gas flow rates and correspondingly lower concentrations of nitrogen-bearing gases.
Nonlocal and Mixed-Locality Multiscale Finite Element Methods
Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.
2018-03-27
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges, including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via improved multi-scale spectral clustering applied to our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks while preserving the intrinsically established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale, consistent and common structural brain networks across subjects, and its reproducibility has been verified on multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare fiber integrity in schizophrenia patients and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
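A hedged sketch of the clustering step follows, with a random symmetric affinity matrix standing in for DICCCOL-based DTI connectivity; the paper's cross-scale and cross-subject consistency constraints are its actual contribution and are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n = 60                                   # landmark count (stand-in for DICCCOLs)
W = rng.random((n, n))
W = (W + W.T) / 2                        # symmetric affinity (stand-in for connectivity)
np.fill_diagonal(W, 0)

labels_by_scale = {}
for k in (4, 8, 16):                     # coarse-to-fine network scales
    sc = SpectralClustering(n_clusters=k, affinity="precomputed",
                            assign_labels="discretize", random_state=0)
    labels_by_scale[k] = sc.fit_predict(W)

print({k: np.bincount(v).tolist() for k, v in labels_by_scale.items()})
```

A multi-scale variant would additionally match clusters between successive values of k so that each fine sub-network nests inside a coarse one.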
On bipartite pure-state entanglement structure in terms of disentanglement
NASA Astrophysics Data System (ADS)
Herbut, Fedor
2006-12-01
Schrödinger's disentanglement [E. Schrödinger, Proc. Cambridge Philos. Soc. 31, 555 (1935)], i.e., remote state decomposition, as a physical way to study entanglement, is carried one step further with respect to previous work in investigating the qualitative side of entanglement in any bipartite state vector. Remote measurement (or, equivalently, remote orthogonal state decomposition) from previous work is generalized to remote linearly independent complete state decomposition both in the nonselective and the selective versions. The results are displayed in terms of commutative square diagrams, which show the power and beauty of the physical meaning of the (antiunitary) correlation operator inherent in the given bipartite state vector. This operator, together with the subsystem states (reduced density operators), constitutes the so-called correlated subsystem picture. It is the central part of the antilinear representation of a bipartite state vector, and it is a kind of core of its entanglement structure. The generalization of previously elaborated disentanglement expounded in this article is a synthesis of the antilinear representation of bipartite state vectors, which is reviewed, and the relevant results of [Cassinelli et al., J. Math. Anal. Appl. 210, 472 (1997)] in mathematical analysis, which are summed up. Linearly independent bases (finite or infinite) are shown to be almost as useful in some quantum mechanical studies as orthonormal ones. Finally, it is shown that linearly independent remote pure-state preparation carries the highest probability of occurrence. This singles out linearly independent remote influence from all possible ones.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
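The IRLS structure described above can be sketched compactly. In this illustrative Python fragment a dense normal-equations solve stands in for the paper's randomized generalized SVD, and the operator, data and parameter values are all assumptions.

```python
import numpy as np

def irls_tv(A, b, lam=0.1, n_iter=10, eps=1e-6):
    """IRLS for min ||Ax - b||^2 + lam^2 * TV(x), with the TV term handled
    through diagonal reweighting of a 1-D difference operator L."""
    m, n = A.shape
    L = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # forward differences
    x = np.zeros(n)
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((L @ x) ** 2 + eps**2)       # TV reweighting
        WL = L * w[:, None]
        x = np.linalg.solve(A.T @ A + lam**2 * (WL.T @ WL), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 50))
x_true = np.zeros(50); x_true[20:35] = 1.0             # blocky (sharp) model
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(irls_tv(A, b), 2)[15:40])               # edges stay sharp
```

In the paper's setting the inner solve is the expensive step, which is exactly what the randomized decomposition accelerates for large 3-D gravity models.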
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belarmino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
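A toy illustration of the central idea, not the authors' algorithm: once each equation is assigned a variable it computes, the submodel for a given output is the backward-reachable set of equations, and measured variables act as local inputs that cut the dependency chain. The five-equation tank-like model below is hypothetical.

```python
# Hypothetical causal model: equation -> variable it computes,
# and equation -> variables it consumes.
computes = {"e1": "h1", "e2": "q12", "e3": "h2", "e4": "q23", "e5": "h3"}
uses = {"e1": ["q12"], "e2": ["h1", "h2"], "e3": ["q12", "q23"],
        "e4": ["h2", "h3"], "e5": ["q23"]}
measured = {"h2"}                     # sensor readings replace these variables

def submodel(output):
    """Equations needed to compute `output`, treating measured
    variables as local inputs (backward reachability)."""
    producer = {v: e for e, v in computes.items()}
    needed, stack = set(), [output]
    while stack:
        v = stack.pop()
        if v in measured or v not in producer:
            continue                  # sensors and exogenous inputs cut the chain
        e = producer[v]
        if e not in needed:
            needed.add(e)
            stack.extend(uses[e])
    return needed

print(submodel("h3"))                 # {'e4', 'e5'}: h2 is supplied by its sensor
```

Each SHM task (estimation, isolation, prediction) then operates on its own small submodel rather than the monolithic global model, which is the scalability gain the abstract describes.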
Multi-Scale Measures of Rugosity, Slope and Aspect from Benthic Stereo Image Reconstructions
Friedman, Ariell; Pizarro, Oscar; Williams, Stefan B.; Johnson-Roberson, Matthew
2012-01-01
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and provides much less environmental impact compared to traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being a technique based on images, it is possible to use robotic platforms that can operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey that covers over . Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies. This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements. PMID:23251370
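A minimal sketch of the rugosity measure as described: fit the plane of best fit by PCA (here via an SVD), triangulate the projected vertices, and take the ratio of 3-D surface area to area projected onto the plane. The synthetic terrain and all parameters are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_area(p):                       # p: (3, 3) array of vertices
    return 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))

def rugosity(points):
    c = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(c, full_matrices=False)
    flat = c @ Vt[:2].T                     # coordinates in the PCA best-fit plane
    tri = Delaunay(flat)                    # triangulate in the plane
    flat3 = np.c_[flat, np.zeros(len(flat))]
    a3d = sum(triangle_area(points[s]) for s in tri.simplices)
    a2d = sum(triangle_area(flat3[s]) for s in tri.simplices)
    return a3d / a2d                        # >= 1; equals 1 for a perfect plane

rng = np.random.default_rng(0)
xy = rng.random((200, 2))
z = 0.1 * np.sin(8 * xy[:, 0])              # synthetic rough terrain
print(rugosity(np.c_[xy, z]))
```

Fitting the plane before projecting is what decouples rugosity from slope: a tilted but smooth patch still yields a ratio near 1.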
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
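An illustrative fragment of the mechanism: local statistics computed at several window scales, then used to modulate a globally parametrized smoother so that it adapts spatially. The window sizes, the choice of noise proxy and the blending rule are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

rng = np.random.default_rng(0)
img = rng.normal(0, 0.1, (128, 128))
img[32:96, 32:96] += 1.0                               # a bright square "signal"

stats = {}
for w in (3, 7, 15, 31):                               # pyramid of spatial scales
    mean = uniform_filter(img, size=w)
    var = uniform_filter(img**2, size=w) - mean**2
    stats[w] = {"mean": mean,
                "std": np.sqrt(np.clip(var, 0, None)),
                "median": median_filter(img, size=w)}

# Adaptive blend: smooth more where the local std (noise proxy) is high.
noise = stats[7]["std"]
alpha = noise / noise.max()
filtered = (1 - alpha) * img + alpha * stats[7]["mean"]
```

Inter-scale differences of these maps (e.g., stats[7]["std"] versus stats[31]["std"]) are what the abstract uses to detect the spatial scope of noise variation.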
Decomposition of forest products buried in landfills.
Wang, Xiaoming; Padgett, Jennifer M; Powell, John S; Barlaz, Morton A
2013-11-01
The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C+H) loss of up to 38%, while loss for the other wood types was 0-10% in most samples. The C+H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though the NP and CC decomposition measured in this study was higher than previously reported.
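For orientation, the two indices can be written as simple ratios. The lignin-normalized form of the HOD below is one common convention and may differ in detail from the published definition; all numbers are hypothetical.

```python
def hod(c_h_to_l_initial, c_h_to_l_final):
    """Holocellulose decomposition index (%), one common form: loss of
    cellulose+hemicellulose normalized by recalcitrant lignin, so that
    overall mass loss does not bias the estimate."""
    return 100.0 * (1.0 - c_h_to_l_final / c_h_to_l_initial)

def csf(organic_carbon_remaining_g, dry_mass_initial_g):
    """Carbon storage factor: g of biogenic OC stored per g dry material."""
    return organic_carbon_remaining_g / dry_mass_initial_g

print(hod(2.0, 1.2))    # e.g. 40% holocellulose decomposition (illustrative)
print(csf(0.38, 1.0))   # within the 0.34-0.47 range reported for wood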
NASA Astrophysics Data System (ADS)
Orr, R. M.; Sims, H. E.; Taylor, R. J.
2015-10-01
Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or 'finishing' processes that are used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residues treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of intermediate phases is of fundamental interest whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO2 product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles.
Kumar, Nitin; Radin, Maxwell D.; Wood, Brandon C.; ...
2015-04-13
A viable Li/O₂ battery will require the development of stable electrolytes that do not continuously decompose during cell operation. Recent experiments suggest that reactions occurring at the interface between the liquid electrolyte and the solid lithium peroxide (Li₂O₂) discharge phase are a major contributor to these instabilities. To clarify the mechanisms associated with these reactions, a variety of atomistic simulation techniques, including classical Monte Carlo, van der Waals-augmented density functional theory, ab initio molecular dynamics, and various solvation models, are used to study the initial decomposition of the common electrolyte solvent, dimethoxyethane (DME), on surfaces of Li₂O₂. Comparisons are made between the two predominant Li₂O₂ surface charge states by calculating decomposition pathways on peroxide-terminated (O₂²⁻) and superoxide-terminated (O₂¹⁻) facets. For both terminations, DME decomposition proceeds exothermically via a two-step process comprising hydrogen abstraction (H-abstraction) followed by nucleophilic attack. In the first step, the abstracted H dissociates a surface O₂ dimer and combines with a dissociated oxygen to form a hydroxide ion (OH⁻). The remaining surface oxygen then attacks the DME, resulting in a DME fragment that is strongly bound to the Li₂O₂ surface. DME decomposition is predicted to be more exothermic on the peroxide facet; nevertheless, the rate of DME decomposition is faster on the superoxide termination. The impact of solvation (explicit vs. implicit) and an applied electric field on the reaction energetics is investigated. Finally, our calculations suggest that surface-mediated electrolyte decomposition should outpace liquid-phase processes such as solvent auto-oxidation by dissolved O₂.
Decomposition of the polynomial kernel of arbitrary higher spin Dirac operators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eelbode, D., E-mail: David.Eelbode@ua.ac.be; Raeymaekers, T., E-mail: Tim.Raeymaekers@UGent.be; Van der Jeugt, J., E-mail: Joris.VanderJeugt@UGent.be
2015-10-15
In a series of recent papers, we have introduced higher spin Dirac operators, which are generalisations of the classical Dirac operator. Whereas the latter acts on spinor-valued functions, the former acts on functions taking values in arbitrary irreducible half-integer highest weight representations for the spin group. In this paper, we describe how the polynomial kernel spaces of such operators decompose in irreducible representations of the spin group. We will hereby make use of results from representation theory.
Safety and environmental aspects of organic coolants for fusion facilities
NASA Astrophysics Data System (ADS)
Natalizio, A.; Hollies, R. E.; Gierszewski, P.
1993-06-01
Organic coolants, such as OS-84, offer unique advantages for fusion reactor applications. These advantages are with respect to both reactor operation and safety. The key operational advantage is a coolant that can provide high temperature (350-400°C) at modest pressure (2-4 MPa). These temperatures are needed for conditioning the plasma-facing components and, in reactors, for achieving high thermodynamic conversion efficiencies (>40%). The key safety advantage of organic coolants is the low vapor pressure, which significantly reduces the containment pressurization transient (relative to water) following a loss of coolant event. Also, from an occupational dose viewpoint, organic coolants significantly reduce corrosion and erosion inside the cooling system and consequently reduce the quantity of activation products deposited in cooling system equipment. On the negative side, organic coolants undergo both pyrolytic and radiolytic decomposition, and are flammable. While the decomposition rate can be minimized by coolant system design (by reducing coolant inventories exposed to neutron flux and to high temperatures), decomposition products are formed and these degrade the coolant properties. Both heavy compounds and light gases are produced from the decomposition process, and both must be removed to maintain adequate coolant properties. As these hydrocarbons may become tritiated by permeation, or activated through impurities, their disposal could create an environmental concern. Because of this potential waste disposal problem, consideration has been given to the recycling of both the light and heavy products, thereby reducing the quantity of waste to be disposed. Preliminary assessments made for various fusion reactor designs, including ITER, suggest that it is feasible to use organic coolants for several applications. These applications range from first wall and blanket coolant (the most demanding with respect to decomposition), to shield and vacuum vessel cooling, to an intermediate cooling loop removing heat from a liquid metal loop and transferring it to a steam generator or heat exchanger.
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
NASA Astrophysics Data System (ADS)
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
2012-03-01
Simulation is a flexible tool for modeling airport operations, which has made the method a staple for airport systems analysts. Animation... be derived to define the characteristics of the airport terminal and describe the nature of the system's operation", which makes discrete... This system decomposition method, however, disregards the effects of network structure on performance measures. Real-life processes do not operate...
Novel Multiscale Modeling Tool Applied to Pseudomonas aeruginosa Biofilm Formation
Biggs, Matthew B.; Papin, Jason A.
2013-01-01
Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool. PMID:24147108
Zanin, Massimiliano; Chorbev, Ivan; Stres, Blaz; Stalidzans, Egils; Vera, Julio; Tieri, Paolo; Castiglione, Filippo; Groen, Derek; Zheng, Huiru; Baumbach, Jan; Schmid, Johannes A; Basilio, José; Klimek, Peter; Debeljak, Nataša; Rozman, Damjana; Schmidt, Harald H H W
2017-12-05
Systems medicine holds many promises, but has so far provided only a limited number of proofs of principle. To address this road block, possible barriers and challenges of translating systems medicine into clinical practice need to be identified and addressed. The members of the European Cooperation in Science and Technology (COST) Action CA15120 Open Multiscale Systems Medicine (OpenMultiMed) wish to engage the scientific community of systems medicine and multiscale modelling, data science and computing, to provide their feedback in a structured manner. This will result in follow-up white papers and open access resources to accelerate the clinical translation of systems medicine.
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
The 300 Area Integrated Field Research Challenge Quality Assurance Project Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fix, N. J.
Pacific Northwest National Laboratory and a group of expert collaborators are using the U.S. Department of Energy Hanford Site 300 Area uranium plume within the footprint of the 300-FF-5 groundwater operable unit as a site for an Integrated Field-Scale Subsurface Research Challenge (IFRC). The IFRC is entitled Multi-Scale Mass Transfer Processes Controlling Natural Attenuation and Engineered Remediation: An IFRC Focused on the Hanford Site 300 Area Uranium Plume Project. The theme is investigation of multi-scale mass transfer processes. A series of forefront science questions on mass transfer are posed for research that relate to the effect of spatial heterogeneities; the importance of scale; coupled interactions between biogeochemical, hydrologic, and mass transfer processes; and measurements/approaches needed to characterize and model a mass transfer-dominated system. This Quality Assurance Project Plan provides the quality assurance requirements and processes that will be followed by the 300 Area IFRC Project. This plan is designed to be used exclusively by project staff.
Fast Plasma Investigation for Magnetospheric Multiscale
NASA Technical Reports Server (NTRS)
Pollock, C.; Moore, T.; Coffey, V.; Dorelli J.; Giles, B.; Adrian, M.; Chandler, M.; Duncan, C.; Figueroa-Vinas, A.; Garcia, K.;
2016-01-01
The Fast Plasma Investigation (FPI) was developed for flight on the Magnetospheric Multiscale (MMS) mission to measure the differential directional flux of magnetospheric electrons and ions with unprecedented time resolution to resolve kinetic-scale plasma dynamics. This increased resolution has been accomplished by placing four dual 180-degree top hat spectrometers for electrons and four dual 180-degree top hat spectrometers for ions around the periphery of each of four MMS spacecraft. Using electrostatic field-of-view deflection, the eight spectrometers for each species together provide a 4π sr field of view with, at worst, 11.25-degree sample spacing. Energy/charge sampling is provided by swept electrostatic energy/charge selection over the range from 10 eV/q to 30,000 eV/q. The eight dual spectrometers on each spacecraft are controlled and interrogated by a single block-redundant Instrument Data Processing Unit, which in turn interfaces to the observatory's Instrument Suite Central Instrument Data Processor. This paper describes the design of FPI, its ground and in-flight calibration, its operational concept, and its data products.
Dalmazzone, Silvana; La Notte, Alessandra
2013-11-30
Extending the application of integrated environmental and economic accounts from the national to the local level of government serves several purposes. They can be used not only as an instrument for communicating on the state of the environment and reporting the results of policies, but also as an operational tool - for setting the objectives and designing policies - if made available to the local authorities who have responsibility over the administration of natural resources, land use and conservation policies. The aim of the paper is to test the feasibility of applying hybrid flow accounts at the intermediate and local government levels. As an illustration, NAMEA for air emissions and wastes is applied to a Region, a Province and a Municipality, thus covering the three nested levels of local government in Italy. The study identifies the main issues raised by multi-scale environmental accounting and provides an applied discussion of feasible solutions.
Wang, Qixuan; Oh, Ji Won; Lee, Hye-Lim; Dhar, Anukriti; Peng, Tao; Ramos, Raul; Guerrero-Juarez, Christian Fernando; Wang, Xiaojie; Zhao, Ran; Cao, Xiaoling; Le, Jonathan; Fuentes, Melisa A; Jocoy, Shelby C; Rossi, Antoni R; Vu, Brian; Pham, Kim; Wang, Xiaoyang; Mali, Nanda Maya; Park, Jung Min; Choi, June-Hyug; Lee, Hyunsu; Legrand, Julien M D; Kandyba, Eve; Kim, Jung Chul; Kim, Moonkyu; Foley, John; Yu, Zhengquan; Kobielak, Krzysztof; Andersen, Bogi; Khosrotehrani, Kiarash; Nie, Qing; Plikus, Maksim V
2017-07-11
The control principles behind robust cyclic regeneration of hair follicles (HFs) remain unclear. Using multi-scale modeling, we show that coupling inhibitors and activators with physical growth of HFs is sufficient to drive periodicity and excitability of hair regeneration. Model simulations and experimental data reveal that mouse skin behaves as a heterogeneous regenerative field, composed of anatomical domains where HFs have distinct cycling dynamics. Interactions between fast-cycling chin and ventral HFs and slow-cycling dorsal HFs produce bilaterally symmetric patterns. Ear skin behaves as a hyper-refractory domain with HFs in extended rest phase. Such hyper-refractivity relates to high levels of BMP ligands and WNT antagonists, in part expressed by ear-specific cartilage and muscle. Hair growth stops at the boundaries with hyper-refractory ears and anatomically discontinuous eyelids, generating wave-breaking effects. We posit that similar mechanisms for coupled regeneration with dominant activator, hyper-refractory, and wave-breaker regions can operate in other actively renewing organs.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2006-01-01
A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.
Wirth, Brian D.; Hu, Xunxiang; Kohnert, Aaron; ...
2015-03-02
Exposure of metallic structural materials to irradiation environments results in significant microstructural evolution, property changes, and performance degradation, which limits the extended operation of current generation light water reactors and restricts the design of advanced fission and fusion reactors. Further, it is well recognized that these irradiation effects are a classic example of inherently multiscale phenomena and that the mix of radiation-induced features formed and the corresponding property degradation depend on a wide range of material and irradiation variables. This inherently multiscale evolution emphasizes the importance of closely integrating models with high-resolution experimental characterization of the evolving radiation-damaged microstructure. Lastly, this article provides a review of recent models of the defect microstructure evolution in irradiated body-centered cubic materials, which provide good agreement with experimental measurements, and presents some outstanding challenges, which will require coordinated high-resolution characterization and modeling to resolve.
Optofluidic fabrication for 3D-shaped particles
NASA Astrophysics Data System (ADS)
Paulsen, Kevin S.; di Carlo, Dino; Chung, Aram J.
2015-04-01
Complex three-dimensional (3D)-shaped particles could play unique roles in biotechnology, structural mechanics and self-assembly. Current methods of fabricating 3D-shaped particles such as 3D printing, injection moulding or photolithography are limited by low resolution, low throughput or complicated and expensive procedures. Here, we present a novel method called optofluidic fabrication for the generation of complex 3D-shaped polymer particles based on two coupled processes: inertial flow shaping and ultraviolet (UV) light polymerization. Pillars within fluidic platforms are used to deterministically deform photosensitive precursor fluid streams. The channels are then illuminated with patterned UV light to polymerize the photosensitive fluid, creating particles with multi-scale 3D geometries. The fundamental advantages of optofluidic fabrication include high resolution, multi-scalability, dynamic tunability, simple operation and great potential for bulk fabrication with full automation. Through different combinations of pillar configurations, flow rates and UV light patterns, an infinite set of 3D-shaped particles is available, and a variety are demonstrated.
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
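A schematic 1-D analogue of the multiscale minor loop, assuming Gaussian scale kernels and a simple geometric scale-bias factor; it mirrors the structure of multiscale CLEAN (peak-finding over scales, scaled component subtraction) rather than the authors' optimized implementation.

```python
import numpy as np

def gauss(scale, width=65):
    x = np.arange(width) - width // 2
    if scale == 0:
        return (x == 0).astype(float)            # delta: point-source scale
    k = np.exp(-0.5 * (x / scale) ** 2)
    return k / k.max()                           # unit peak

def multiscale_clean(dirty, scales=(0, 2, 8), gain=0.1, n_iter=300, beta=0.6):
    residual, model = dirty.copy(), np.zeros_like(dirty)
    kernels = [gauss(s) for s in scales]
    for _ in range(n_iter):
        best = None
        for j, k in enumerate(kernels):
            sm = np.convolve(residual, k / k.sum(), mode="same")
            i = int(np.argmax(np.abs(sm)))
            score = beta**j * abs(sm[i])         # scale bias damps large scales
            if best is None or score > best[0]:
                best = (score, j, i, sm[i])
        _, j, i, peak = best
        k = kernels[j]; w = len(k)
        lo, hi = max(0, i - w // 2), min(len(dirty), i + w // 2 + 1)
        seg = k[w // 2 - (i - lo): w // 2 + (hi - i)]
        residual[lo:hi] -= gain * peak * seg     # subtract the scaled component
        model[lo:hi] += gain * peak * seg        # and accumulate it in the model
        if abs(peak) < 1e-3:
            break
    return model, residual

rng = np.random.default_rng(0)
x = np.arange(256)
sky = 0.5 * np.exp(-0.5 * ((x - 180) / 10.0) ** 2)   # extended emission
sky[60] = 1.0                                        # plus a point source
model, residual = multiscale_clean(sky + 0.01 * rng.standard_normal(256))
print("residual rms:", residual.std())
```

Scale-dependent masking, as described above, would additionally restrict where components may be placed at each scale so that cleaning can proceed below the noise.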
Cho, Hyesung; Moon Kim, Sang; Sik Kang, Yun; Kim, Junsoo; Jang, Segeun; Kim, Minhyoung; Park, Hyunchul; Won Bang, Jung; Seo, Soonmin; Suh, Kahp-Yang; Sung, Yung-Eun; Choi, Mansoo
2015-01-01
The production of multiscale architectures is of significant interest in materials science, and the integration of those structures could provide a breakthrough for various applications. Here we report a simple yet versatile strategy that allows for the LEGO-like integration of microscale membranes by quantitatively controlling the oxygen inhibition effects of ultraviolet-curable materials, leading to multilevel multiscale architectures. The spatial control of oxygen concentration induces different curing contrasts in a resin, allowing selective imprinting and bonding at different sides of a membrane, which enables LEGO-like integration together with multiscale pattern formation. Utilizing the method, multilevel multiscale Nafion membranes are prepared and applied to a polymer electrolyte membrane fuel cell. Our multiscale membrane fuel cell demonstrates significant enhancement of performance while ensuring mechanical robustness. The performance enhancement is caused by the combined effect of the decrease of membrane resistance and the increase of the electrochemically active surface area. PMID:26412619
Microphysics in Multi-scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2012-01-01
Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (a NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interaction processes are applied throughout this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the development of the microphysics and its performance within the multi-scale modeling system will be highlighted.
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketusky, E.; Subramanian, K.
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground liquid radioactive waste tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts, including: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOPs) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing dissolution equilibrium, and then decomposed to ≤ 100 parts per million (ppm) oxalate. Since AOP technology largely originated with the use of ultraviolet (UV) light as a primary catalyst, decomposition of the spent oxalic acid, well exposed to a medium-pressure mercury vapor light, was considered the benchmark. However, with multivalent metals already contained in the feed, and maintenance of the UV light a concern, testing was conducted to evaluate the impact of removing the UV light. Using current AOP terminology, the test without the UV light would likely be considered an ozone-based, dark, ferrioxalate-type decomposition process. Specifically, the testing investigated the impacts of the following: (1) the importance of the UV light on the decomposition rates when decomposing 1 wt% spent oxalic acid; (2) the impact of increasing the oxalic acid strength from 1 to 2.5 wt% on the decomposition rates; and (3) for the F-area testing, the advantage of increasing the spent oxalic acid flow rate from 40 L/min to 50 L/min during decomposition of the 2.5 wt% spent oxalic acid. The results showed that removal of the UV light (from the 1 wt% testing) slowed the decomposition rates in both the F and H testing. Specifically, for F-area Strike 1, the time increased from about 6 hours to 8 hours. In H-area, the impact was not as significant, with the time required for Strike 1 to be decomposed to less than 100 ppm increasing slightly, from 5.4 to 6.4 hours.
For the spent 2.5 wt% oxalic acid decomposition tests (all conducted without the UV light), the F-area decompositions required approximately 10 to 13 hours, while the corresponding H-area decomposition times ranged from 10 to 21 hours. For the 2.5 wt% F-area sludge, the increased availability of iron likely caused the increased decomposition rates relative to the 1 wt% oxalic acid tests. In addition, for the F-area testing, increasing the recirculation flow rate from 40 L/min to 50 L/min resulted in an increased decomposition rate, suggesting better use of the ozone.
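As a back-of-envelope check on such timings: if oxalate destruction were roughly first order, the rate constants implied by taking 1 wt% (about 10,000 ppm) oxalate down to 100 ppm in the quoted times follow directly from C(t) = C0 exp(-kt). The sketch below uses the hours quoted above purely for illustration; the report does not claim first-order kinetics.

```python
import math

def first_order_k(c0_ppm, c_ppm, hours):
    """Rate constant (1/h) implied by first-order decay from c0 to c."""
    return math.log(c0_ppm / c_ppm) / hours

for label, hours in [("F-area, with UV", 6.0), ("F-area, no UV", 8.0),
                     ("H-area, with UV", 5.4), ("H-area, no UV", 6.4)]:
    k = first_order_k(10_000, 100, hours)
    print(f"{label}: k ~ {k:.2f} 1/h")
```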
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the functionals of the variational principle for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral-identity analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result, we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Programs No. 4 of the Presidium RAS and No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects of SD RAS No. 8 and 35. Our studies are in line with the goals of COST Action ES1004.
References:
[1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330.
[2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341.
[3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
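The integrating-factor device for the production-destruction split can be shown on a scalar prototype. Assuming a chemistry-like system dc/dt = P - Dc with coefficients frozen over a step, multiplying by exp(Dt) and integrating yields an explicit, positivity-preserving update with no Jacobian inversion; the sketch below is illustrative, not the authors' scheme.

```python
import numpy as np

def integrating_factor_step(c, P, D, dt):
    """Exact solution of dc/dt = P - D*c over dt for frozen P >= 0, D > 0:
    c(t+dt) = c*exp(-D*dt) + (P/D)*(1 - exp(-D*dt)); positive by construction."""
    e = np.exp(-D * dt)
    return c * e + (P / D) * (1.0 - e)

c = np.array([1.0, 0.0])           # initial concentrations (illustrative)
P = np.array([0.0, 2.0])           # production rates
D = np.array([3.0, 1.0])           # destruction frequencies
for _ in range(100):
    c = integrating_factor_step(c, P, D, dt=0.1)
print(c)                           # approaches the equilibrium P/D = [0, 2]
```

For stiff chemistry this kind of update stays stable at large steps precisely because the destruction term is integrated analytically rather than linearized.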
Multiscale analysis of neural spike trains.
Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin
2014-01-30
This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
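A minimal sketch of the multiscale intensity idea: estimate a piecewise-constant Poisson intensity at several bin widths and compare how features appear or vanish across scales. The sinusoidal rate, window widths, and thinning simulation below are illustrative assumptions, not the paper's estimators (which add quasi-likelihood corrections, bootstrap confidence intervals, and cross-validated tuning).

```python
import numpy as np

def multiscale_intensity(spike_times, t_max, widths):
    """Piecewise-constant Poisson intensity estimates at several bin widths.

    For each window width w, the rate in a bin is (spike count) / w; under an
    inhomogeneous Poisson assumption this is the maximum-likelihood estimate
    of the intensity averaged over that bin.
    """
    estimates = {}
    for w in widths:
        edges = np.arange(0.0, t_max + w, w)
        counts, _ = np.histogram(spike_times, bins=edges)
        estimates[w] = counts / w          # spikes per unit time at scale w
    return estimates

# Synthetic spike train from a sinusoidally modulated rate (thinning method).
rng = np.random.default_rng(0)
t_max, lam_max = 10.0, 40.0
candidates = rng.uniform(0.0, t_max, rng.poisson(lam_max * t_max))
lam = lambda t: 20.0 * (1.0 + np.sin(2 * np.pi * t))   # target intensity
spikes = np.sort(candidates[rng.uniform(size=candidates.size)
                            < lam(candidates) / lam_max])

for w, est in multiscale_intensity(spikes, t_max, [1.0, 0.25, 0.05]).items():
    print(f"width {w:>5}: first bins {est[:4].round(1)}")
```

At the coarsest width the sinusoidal modulation averages out, while the finest width tracks it at the cost of noisier per-bin estimates; that trade-off is exactly what a multiscale analysis makes visible.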
Multiscale multichroic focal planes for measurements of the cosmic microwave background
NASA Astrophysics Data System (ADS)
Cukierman, Ari; Lee, Adrian T.; Raum, Christopher; Suzuki, Aritoki; Westbrook, Benjamin
2018-01-01
We report on the development of multiscale multichroic focal planes for measurements of the cosmic microwave background (CMB). A multichroic focal plane, i.e., one that consists of pixels that are simultaneously sensitive in multiple frequency bands, is an efficient architecture for increasing the sensitivity of an experiment as well as for disentangling the contamination due to galactic foregrounds, which is increasingly becoming the limiting factor in extracting cosmological information from CMB measurements. To achieve these goals, it is necessary to observe across a broad frequency range spanning roughly 30-350 GHz. For this purpose, the Berkeley CMB group has been developing multichroic pixels consisting of planar superconducting sinuous antennas coupled to extended hemispherical lenslets, which operate at sub-Kelvin temperatures. The sinuous antennas, microwave circuitry and the transition-edge-sensor (TES) bolometers to which they are coupled are integrated in a single lithographed wafer. We describe the design, fabrication, testing and performance of multichroic pixels with bandwidths of 3:1 and 4:1 across the entire frequency range of interest. Additionally, we report on a demonstration of multiscale pixels, i.e., pixels whose effective size changes as a function of frequency. This property keeps the beam width approximately constant across all frequencies, which in turn allows the sensitivity of the experiment to be optimal in every frequency band. We achieve this by creating phased arrays from neighboring lenslet-coupled sinuous antennas, where the size of each phased array is chosen independently for each frequency band. We describe the microwave circuitry in detail as well as the benefits of a multiscale architecture, e.g., mitigation of beam non-idealities, reduced readout requirements, etc. Finally, we discuss the design and fabrication of the detector modules and focal-plane structures including cryogenic readout components, which enable the integration of our devices in current and future CMB experiments.
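The multiscale design goal can be illustrated with a back-of-the-envelope diffraction calculation: a fixed aperture gives a beam width proportional to wavelength, so phasing together more lenslets at lower frequencies keeps the beam roughly constant. All numbers below (band centers, lenslet size, the naive 1.22 lambda/D beam formula) are hypothetical and only illustrate the scaling, not the instrument's actual optics.

```python
import numpy as np

# Diffraction-limited beam width scales as lambda / D, so keeping the beam
# constant across bands requires the effective aperture D to grow with
# wavelength, i.e. larger phased arrays at lower frequencies.
c = 3.0e8                                      # speed of light, m/s
bands_ghz = np.array([40, 90, 150, 220, 280])  # hypothetical band centers
wavelengths = c / (bands_ghz * 1e9)

d_pixel = 5e-3                                 # hypothetical lenslet size, m
single = np.degrees(1.22 * wavelengths / d_pixel)   # FWHM, one lenslet

# Choose an n x n array per band so D = n * d_pixel tracks the wavelength.
n = np.maximum(1, np.round(wavelengths / wavelengths[-1])).astype(int)
multi = np.degrees(1.22 * wavelengths / (n * d_pixel))

for f, s, m, k in zip(bands_ghz, single, multi, n):
    print(f"{f:3d} GHz: single-pixel beam {s:5.1f} deg, {k}x{k} array {m:5.1f} deg")
```

With the array size scaled to the wavelength, the computed beams cluster near a common width across all bands, whereas the single-pixel beams vary by almost an order of magnitude.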
NASA Technical Reports Server (NTRS)
Cubbage, J. M.; Mercer, C. E.
1977-01-01
Results from an investigation of the effects of the operation of a combined turbojet/scramjet propulsion system on the longitudinal aerodynamic characteristics of a 1/60-scale hypersonic airbreathing launch vehicle configuration are presented. Decomposition products of hydrogen peroxide were used for simulation of the propulsion system exhaust.
Ink-jet printing of silver metallization for photovoltaics
NASA Technical Reports Server (NTRS)
Vest, R. W.
1986-01-01
The status of the ink-jet printing program at Purdue University is described. The drop-on-demand printing system was modified to use metallo-organic decomposition (MOD) inks, and an IBM AT computer was integrated into the ink-jet printer system to provide operational functions and contact pattern configuration. The integration of the ink-jet printing system, the problems encountered, and the solutions derived are described in detail, and the status of ink-jet printing with a MOD ink is discussed. The ink contained silver neodecanoate and bismuth 2-ethylhexanoate dissolved in toluene; the decomposition products of the MOD ink were 99 wt% Ag and 1 wt% Bi.
Empirical mode decomposition for analyzing acoustical signals
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2005-01-01
The present invention discloses a computer-implemented signal analysis method based on the Hilbert-Huang Transform (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal is decomposed into its Intrinsic Mode Functions (IMFs). Once the invention decomposes the acoustic signal into its constituent components, operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
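A minimal sketch of the sifting procedure at the core of EMD is given below; it omits the stopping criteria, boundary treatment, and robustness checks that a real implementation (and the patented method) would require.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_sifts=10):
    """Extract one intrinsic mode function (IMF) by sifting: repeatedly
    subtract the mean of the upper and lower extrema envelopes."""
    h = x.copy()
    for _ in range(n_sifts):
        d = np.diff(h)
        maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema: h is a residual trend, not an IMF
        upper = CubicSpline(t[maxima], h[maxima])(t)  # envelope of maxima
        lower = CubicSpline(t[minima], h[minima])(t)  # envelope of minima
        h = h - 0.5 * (upper + lower)                 # remove envelope mean
    return h

def emd(x, t, n_imfs=4):
    """Decompose x into IMFs plus a residual (no convergence tests and no
    edge handling; production EMD implementations need both)."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(residual, t)
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
imfs, res = emd(x, t, n_imfs=2)  # IMF 1 ~ 50 Hz tone, IMF 2 ~ 5 Hz tone
```

Applying the Hilbert transform to each IMF then yields the instantaneous amplitudes and frequencies that make up the Hilbert spectrum.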
The decomposition of wood products in landfills in Sydney, Australia.
Ximenes, F A; Gardner, W D; Cowie, A L
2008-11-01
Three landfill sites in Sydney that had been closed for 19, 29 and 46 years and had been operated under different management systems were excavated. The mean moisture content of the wood samples ranged from 41.6% to 66.8%. The wood products recovered were identified to species, and their carbon, cellulose, hemicellulose and lignin concentrations were determined and compared to those of matched samples of the same species. No significant loss of dry mass was measured in wood products buried for 19 and 29 years, but where refuse had been buried for 46 years, the measured loss of carbon (as a percentage of dry biomass) was 8.7% for hardwoods and 9.1% for softwoods, equating to 18% and 17% of their original carbon content, respectively (consistent with wood being roughly half carbon by dry mass). The results indicate that published decomposition factors based on laboratory research significantly overestimate the decomposition of wood products in landfill.
NASA Astrophysics Data System (ADS)
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis, based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses highlights modes in the frequency-scale domain. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposed method is finally verified using numerical examples and a laboratory test.
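A minimal sketch of a Morlet continuous wavelet transform with ridge extraction is shown below. It uses a standard FFT-domain Morlet construction rather than the paper's regressive form and norm, and it transforms a toy two-mode signal instead of a power spectral density; the scale grid and the choice omega0 = 6 are illustrative.

```python
import numpy as np

def morlet_cwt(x, dt, scales, omega0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet, computed
    by FFT-based convolution (Torrence & Compo style normalization)."""
    n = len(x)
    w = 2 * np.pi * np.fft.fftfreq(n, dt)        # angular frequency grid
    X = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Analytic Morlet in the Fourier domain, one row per scale.
        psi_hat = (np.pi ** -0.25) * np.sqrt(2 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * w - omega0) ** 2) * (w > 0)
        out[i] = np.fft.ifft(X * np.conj(psi_hat))
    return out

dt = 0.01
t = np.arange(0, 20, dt)
x = np.sin(2 * np.pi * 2.0 * t) + 0.4 * np.sin(2 * np.pi * 6.5 * t)

f_grid = np.linspace(0.5, 10.0, 200)             # candidate frequencies, Hz
scales = 6.0 / (2 * np.pi * f_grid)              # Morlet scale-frequency map
W = morlet_cwt(x, dt, scales)

# Modes appear as local maxima of the CWT modulus across scales.
ridge = np.abs(W).max(axis=1)
peaks = np.where((ridge[1:-1] > ridge[:-2]) & (ridge[1:-1] > ridge[2:]))[0] + 1
print("identified frequencies (Hz):", f_grid[peaks])
```

The printed frequencies land near the 2.0 Hz and 6.5 Hz components, illustrating how local maxima of the modulus encode the modal frequencies that the paper's link formula exploits.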
Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education
ERIC Educational Resources Information Center
Schwalbe, Michelle Kristin
2010-01-01
This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…
A weak Galerkin generalized multiscale finite element method
Mu, Lin; Wang, Junping; Ye, Xiu
2016-03-31
In this study, we propose a general framework for the weak Galerkin generalized multiscale (WG-GMS) finite element method for elliptic problems with rapidly oscillating or high-contrast coefficients. The general WG-GMS method features high-order accuracy on general meshes and can work with multiscale bases derived from different numerical schemes. A special case is studied under this WG-GMS framework in which the multiscale basis functions are obtained by solving local problems with the weak Galerkin finite element method. Convergence analysis and numerical experiments are presented for this special case.
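For reference, below is a sketch of the discrete weak gradient that underlies weak Galerkin methods; this is the standard definition from the WG literature, and the paper's specific multiscale spaces are not reproduced here.

```latex
% Discrete weak gradient (standard weak Galerkin form). For a weak function
% v = \{v_0, v_b\} with interior value v_0 and edge value v_b on an element T,
% the weak gradient \nabla_w v \in [P_k(T)]^d is defined by integration by parts:
\[
(\nabla_w v,\, \mathbf{q})_T
  = -\,(v_0,\, \nabla\!\cdot\mathbf{q})_T
    + \langle v_b,\, \mathbf{q}\cdot\mathbf{n} \rangle_{\partial T},
\qquad \forall\, \mathbf{q} \in [P_k(T)]^d,
\]
% where \mathbf{n} is the outward unit normal on \partial T.
```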
Multi-Scale Scattering Transform in Music Similarity Measuring
NASA Astrophysics Data System (ADS)
Wang, Ruobai
The scattering transform is a Mel-frequency-spectrum-based method that is stable to time deformations and can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering combines scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping for music similarity measurement. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.
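A minimal sketch of first-order scattering-style features and a similarity score is given below; the Gabor-like filter shapes, the roughly third-octave spacing, and the Euclidean comparison are illustrative assumptions, and a full scattering transform would add second-order coefficients and proper wavelet normalization.

```python
import numpy as np

def scattering_features(x, sr, n_filters=24, window=2**12):
    """First-order scattering-style coefficients: modulus of a band-pass
    filter bank followed by local averaging. The averaging window sets the
    'scale'; combining several windows gives a multi-scale representation."""
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, 1.0 / sr)
    centers = sr / 2 * 2.0 ** (-np.arange(1, n_filters + 1) / 3.0)
    feats = []
    for fc in centers:
        H = np.exp(-0.5 * ((f - fc) / (fc / 2)) ** 2)  # Gabor-like band-pass
        u = np.abs(np.fft.irfft(X * H, n))             # modulus: demodulate
        m = (n // window) * window                     # trim to full frames
        feats.append(u[:m].reshape(-1, window).mean(axis=1))
    return np.stack(feats)                             # (filters, frames)

def similarity(a, b):
    """Smaller = more similar; compares time-averaged scattering features,
    which is what makes the score stable to small time deformations."""
    return np.linalg.norm(a.mean(axis=1) - b.mean(axis=1))

sr = 22050
t = np.arange(sr * 2) / sr
clip1 = np.sin(2 * np.pi * 440 * t)
clip2 = np.sin(2 * np.pi * 440 * (t + 0.01 * np.sin(2 * np.pi * t)))  # warped
print(similarity(scattering_features(clip1, sr), scattering_features(clip2, sr)))
```

Because the modulus-then-average pipeline discards phase, the warped clip scores as close to the original, which is the property that makes scattering competitive with dynamic time warping here.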
NASA Astrophysics Data System (ADS)
Scukins, A.; Nerukh, D.; Pavlov, E.; Karabasov, S.; Markesteijn, A.
2015-09-01
A multiscale Molecular Dynamics/Hydrodynamics implementation of the 2D Mercedes Benz (MB or BN2D) [1] water model is developed and investigated. The concept and the governing equations of multiscale coupling together with the results of the two-way coupling implementation are reported. The sensitivity of the multiscale model for obtaining macroscopic and microscopic parameters of the system, such as macroscopic density and velocity fluctuations, radial distribution and velocity autocorrelation functions of MB particles, is evaluated. Critical issues for extending the current model to large systems are discussed.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction because of the fixed convolutional kernels in their building modules. To restore image details at various scales, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and we build a shallow network under limited computational resources. The proposed network has two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum-competition strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.
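A minimal sketch of the competition idea, assuming a PyTorch-style formulation: parallel convolutions at several kernel sizes whose responses compete through an element-wise maximum. The layer widths, kernel set, and residual arrangement below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CompetitiveMultiScaleBlock(nn.Module):
    """Parallel convolutions with different kernel sizes; an element-wise
    maximum (maxout-style competition) selects, per pixel, the scale whose
    response is strongest."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        responses = torch.stack([b(x) for b in self.branches], dim=0)
        return responses.max(dim=0).values   # competition across scales

# Tiny super-resolution-style stack operating on the interpolated image.
net = nn.Sequential(
    CompetitiveMultiScaleBlock(1, 32),
    nn.ReLU(inplace=True),
    CompetitiveMultiScaleBlock(32, 32),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),          # residual added to the input
)
lowres_upsampled = torch.randn(1, 1, 64, 64)
highres_estimate = lowres_upsampled + net(lowres_upsampled)
```

The element-wise maximum is what implements the "maximum competitive strategy": at each spatial location the network keeps only the strongest response among the candidate receptive-field sizes.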
NASA Astrophysics Data System (ADS)
Liu, Daiming; Wang, Qingkang; Wang, Qing
2018-05-01
Surface texturing is of great significance for light trapping in solar cells. Herein, a multiscale texture, consisting of microscale pyramids and a nanoscale porous arrangement, was fabricated on crystalline Si by KOH etching and Ag-assisted HF etching and subsequently replicated onto glass with high fidelity by a UV nanoimprint method. Light trapping by the multiscale texture was studied through spectral (reflectance, haze ratio) characterization. The results reveal that the multiscale texture provides broadband reflection reduction, enhanced light scattering, and an additional self-cleaning behavior. Compared with a bare cell, the multiscale-textured micromorph cell achieves a 4% relative increase in power conversion efficiency. This surface-texturing route paves a promising way toward low-cost, large-scale, high-efficiency solar applications.
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave-equation modeling is vital for many applications in acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale, highly heterogeneous models is usually computationally expensive because the cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method that reduces the computational cost of time-domain acoustic-wave modeling by solving the wave equation on a coarse mesh based on multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems that are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. These basis functions are determined not only by the order of the Legendre polynomials but also by local medium properties, and can therefore convey fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method significantly reduces computation time while maintaining high accuracy for wave-equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
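For orientation, one common way such multiscale basis functions are constructed is sketched below; the paper's precise GLL-based construction may differ in its boundary data and local solvers.

```latex
% Sketch of a multiscale basis construction: each basis function solves a
% local elliptic problem on a coarse element K,
\[
-\nabla\cdot\bigl(\kappa(\mathbf{x})\,\nabla \phi_i^{K}\bigr) = 0
\quad \text{in } K,
\qquad
\phi_i^{K}\big|_{\partial K} = g_i ,
\]
% where \kappa(\mathbf{x}) is the fine-scale medium property and the boundary
% data g_i interpolate Lagrange polynomials associated with Gauss-Lobatto-
% Legendre (GLL) points on \partial K; raising the polynomial order of g_i
% yields higher-order coarse-scale accuracy.
```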