Quantifying complexity of financial short-term time series by composite multiscale entropy measure
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun
2015-05-01
It is important to study the complexity of financial time series, since the financial market is a complex, evolving dynamic system. Multiscale entropy (MSE) is a prevailing method for quantifying the complexity of a time series. Because its entropy estimates are less reliable for short-term time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are first reproduced in the present paper. It is then introduced, for the first time, to perform a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method reduces the deviation of entropy estimation and yields more stable and reliable results than the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from world financial markets is investigated, and some useful and interesting empirical results are obtained.
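As a rough illustration of the CMSE idea described above, the following Python sketch averages sample entropy over all coarse-graining start offsets at each scale. The function names and parameter defaults (m = 2, r = 0.15 × SD) are ours, not the paper's; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy SampEn(m, r) with Chebyshev distance."""
    n = len(x)
    def count_matches(mm):
        # embedding vectors of length mm
        emb = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            c += np.sum(d <= r)
        return c
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def cmse(x, scale, m=2, r=None):
    """Composite multiscale entropy: average SampEn over all
    coarse-graining start offsets at the given scale."""
    if r is None:
        r = 0.15 * np.std(x)
    vals = []
    for k in range(scale):
        n = (len(x) - k) // scale
        y = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(y, m, r))
    return float(np.mean(vals))
```

At scale τ the conventional MSE uses only the k = 0 coarse-grained series; averaging over all τ offsets is what reduces the variance of the estimate for short series.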
Use of Multiscale Entropy to Facilitate Artifact Detection in Electroencephalographic Signals
Mariani, Sara; Borges, Ana F. T.; Henriques, Teresa; Goldberger, Ary L.; Costa, Madalena D.
2016-01-01
Electroencephalographic (EEG) signals present a myriad of challenges to analysis, beginning with the detection of artifacts. Prior approaches to noise detection have utilized multiple techniques, including visual methods, independent component analysis and wavelets. However, no single method is broadly accepted, inviting alternative ways to address this problem. Here, we introduce a novel approach based on a statistical physics method, multiscale entropy (MSE) analysis, which quantifies the complexity of a signal. We postulate that noise-corrupted EEG signals have lower information content and, therefore, reduced complexity compared with their noise-free counterparts. We test the new method on an open-access database of EEG signals with and without added artifacts due to electrode motion.
Multivariate multiscale entropy of financial markets
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun
2017-11-01
In the current effort to quantify the dynamical properties of complex phenomena in the financial market system, multivariate financial time series have attracted wide attention. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity of multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the Chinese stock markets is quantified through this interdisciplinary application of the method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of global stock markets (Asia, Europe and America) is quantified by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or unsupervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented on parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions.
Conclusion: Local image statistics can be incorporated in filtering operations to equip them with adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
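A minimal sketch of the kind of multi-scale local-statistics pyramid this abstract describes, showing only means and standard deviations at several window sizes (the weighted medians, histograms, and the inter-scale decision rule are omitted, and all names here are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_pyramid(img, scales=(3, 7, 15)):
    """Local mean and standard deviation at several window sizes.
    Returns {scale: (mean, std)}; large inter-scale differences in
    the local std suggest spatially variant noise."""
    out = {}
    f = img.astype(float)
    for s in scales:
        mu = uniform_filter(f, size=s)
        mu2 = uniform_filter(f * f, size=s)
        # clamp tiny negative values caused by floating-point error
        sigma = np.sqrt(np.maximum(mu2 - mu * mu, 0.0))
        out[s] = (mu, sigma)
    return out
```

On an image whose noise level varies across the field, the local std maps differ between scales in the noisy regions, which is the cue the abstract's mechanism exploits.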
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121
2012-09-15
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method preserves image sharpness very well while avoiding the occurrence of 'salt-and-pepper' noise and mosaic artifacts.
Conclusions: Since the interview sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the interview sampling rate varies.
A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring
Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro
2016-01-01
Objective: Multiscale permutation entropy (MSPE) has become an interesting tool for exploring neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Methods: Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the six measures were compared in terms of tracking the dynamical changes in EEG data and their performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among MSPE measures. Results: CG-based MSPEs failed in on-line DoA monitoring with multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of the EEG recordings and significantly distinguish the awake, unconscious, and recovery of consciousness (RoC) states. Compared with single-scale SPE and RPE, MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this respect. MA-TPE outperformed the other measures with a faster tracking speed at loss of consciousness. Conclusions: MA-based multiscale permutation entropies have potential for on-line anesthesia EEG analysis given their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation entropies may fail to describe the characteristics of EEG at high decomposition scales.
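The two decomposition procedures compared in this record, coarse-graining (CG) and moving average (MA), can be sketched around a basic Bandt-Pompe permutation entropy as follows. This is a generic Python illustration under our own parameter choices, not the study's code:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized Shannon permutation entropy (Bandt-Pompe)."""
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        pat = tuple(np.argsort(x[i:i + order * delay:delay]))
        patterns[pat] = patterns.get(pat, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

def mspe(x, scale, order=3, mode="cg"):
    """Multiscale PE: 'cg' = coarse-graining (shortens the series by
    a factor of `scale`), 'ma' = moving average (keeps nearly the
    full series length, which helps short recordings)."""
    x = np.asarray(x, dtype=float)
    if scale == 1:
        y = x
    elif mode == "cg":
        n = len(x) // scale
        y = x[:n * scale].reshape(n, scale).mean(axis=1)
    else:
        y = np.convolve(x, np.ones(scale) / scale, mode="valid")
    return permutation_entropy(y, order)
```

The length difference between the two modes is the practical point: CG at scale 5 leaves a fifth of the samples, which is one plausible reason CG-based measures degrade in on-line multiscale analysis.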
Introduction and application of the multiscale coefficient of variation analysis.
Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh
2017-10-01
Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis, suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with a corresponding MATLAB implementation. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models spanning white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
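The paper provides a MATLAB implementation; as an assumption-laden Python approximation of the quantity described (non-overlapping windows and mean absolute distance between local and overall CV are our choices, which the published protocol may not share), one might write:

```python
import numpy as np

def mscv(x, window_sizes=(5, 10, 20)):
    """Multiscale coefficient of variation (sketch): for each window
    size, the mean absolute distance between local CV estimates and
    the overall CV of the whole series."""
    x = np.asarray(x, dtype=float)
    overall_cv = np.std(x) / np.mean(x)
    out = {}
    for w in window_sizes:
        n = len(x) // w
        chunks = x[:n * w].reshape(n, w)          # non-overlapping windows
        local_cv = chunks.std(axis=1) / chunks.mean(axis=1)
        out[w] = float(np.mean(np.abs(local_cv - overall_cv)))
    return out
```

A perfectly homogeneous series gives zero at every scale, while series whose local variability drifts relative to the global level produce nonzero, scale-dependent values.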
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
To improve the accuracy and robustness to noise of water-fat separation by unifying the multiscale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Multiscale Shannon entropy and its application in the stock market
NASA Astrophysics Data System (ADS)
Gu, Rongbao
2017-10-01
In this paper, we perform a multiscale entropy analysis of the Dow Jones Industrial Average Index using the Shannon entropy. The stock index exhibits multiscale entropy characteristics caused by noise in the market. The entropy is shown to have significant predictive ability for the stock index over both the long term and the short term, and empirical results verify that noise does exist in the market and can affect the stock price. This has important implications for market participants such as noise traders.
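A minimal sketch of a histogram-based multiscale Shannon entropy of the kind described, under assumed choices (fixed bin count, mean coarse-graining) that the paper may not share:

```python
import numpy as np

def shannon_entropy(x, bins=20):
    """Shannon entropy of a series estimated from its histogram (nats)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-np.sum(p * np.log(p)))

def multiscale_shannon(x, max_scale=5, bins=20):
    """Shannon entropy of the coarse-grained series at each scale."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        y = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-graining
        out.append(shannon_entropy(y, bins))
    return out
```

Applied to daily index returns, the resulting entropy profile across scales is the kind of multiscale characteristic the abstract attributes to market noise.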
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of life-threatening conditions (e.g., heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four influencing meteorological factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
NASA Astrophysics Data System (ADS)
Xu, Xuefang; Qiao, Zijian; Lei, Yaguo
2018-03-01
The presence of repetitive transients in vibration signals is a typical symptom of local faults in rotating machinery. The infogram was developed to extract repetitive transients from vibration signals based on Shannon entropy. Unfortunately, the Shannon entropy is maximized for random processes and is unable to quantify repetitive transients buried in heavy random noise. In addition, vibration signals always contain multiple intrinsic oscillatory modes due to interaction and coupling effects between machine components. Under these circumstances, high Shannon entropy values appear in several frequency bands, or a high value fails to appear in the optimal frequency band, and the infogram becomes difficult to interpret. It thus also becomes difficult to select, from the whole set of frequency bands, the optimal band for extracting the repetitive transients. To solve these problems, the multiscale fractional order entropy (MSFE) infogram is proposed in this paper. With the help of the MSFE infogram, the complexity and nonlinear signatures of vibration signals can be evaluated by quantifying spectral entropy over a range of scales in the fractional domain. Moreover, the similarity tolerance of the MSFE infogram is helpful for assessing the regularity of signals. A simulation and two experiments concerning a locomotive bearing and a wind turbine gear are used to validate the MSFE infogram. The results demonstrate that the MSFE infogram is more robust to heavy noise than the infogram and that high values appear only in the optimal frequency band for repetitive transient extraction.
2017-09-01
The objective of this project was to develop a multi-scale model, together with relevant supporting experimental data, to describe jet fuel-exacerbated noise-induced hearing loss (NIHL).
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2000-01-01
Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely and engines for new aircraft are progressively required to operate in more demanding technological and environmental requirements. Designs to effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods and probabilistic methods are necessary to quantify respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multidisciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multidisciplinary refers to open-ended for various existing and yet to be developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat-transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). 
The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal, various composite property simulators, and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the proposed paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and yields an order of magnitude improvement in reliability. Composite designs exist that increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.
MEMD-enhanced multivariate fuzzy entropy for the evaluation of complexity in biomedical signals.
Azami, Hamed; Smith, Keith; Escudero, Javier
2016-08-01
Multivariate multiscale entropy (mvMSE) has been proposed as a combination of the coarse-graining process and multivariate sample entropy (mvSE) to quantify the irregularity of multivariate signals. However, neither the coarse-graining process nor mvSE may be reliable for short signals. Although the coarse-graining process can be replaced with multivariate empirical mode decomposition (MEMD), the relative instability of mvSE for short signals remains a problem. Here, we address this issue by proposing multivariate fuzzy entropy (mvFE) with a new fuzzy membership function. Results using white Gaussian noise show that mvFE leads to more reliable and stable results than mvSE, especially for short signals. Accordingly, we propose MEMD-enhanced mvFE to quantify the complexity of signals. The characteristics of brain regions influenced by partial epilepsy are investigated using focal and non-focal electroencephalogram (EEG) time series. To this end, the proposed MEMD-enhanced mvFE and mvSE are employed to discriminate focal EEG signals from non-focal ones. The results demonstrate that the MEMD-enhanced mvFE values have a smaller coefficient of variation than those obtained by the MEMD-enhanced mvSE, even for long signals. The results also show that the MEMD-enhanced mvFE performs better at quantifying focal and non-focal signals than multivariate multiscale permutation entropy.
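The core idea of fuzzy entropy, replacing sample entropy's hard tolerance threshold with a smooth membership function, can be sketched as follows for the univariate case. The paper's specific new membership function and the multivariate/MEMD machinery are not reproduced here; this uses a common exponential membership as an illustration:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, p=2):
    """Fuzzy entropy: a sample-entropy-style statistic in which the
    hard tolerance test d <= r is replaced by the smooth membership
    exp(-(d/r)**p) of the Chebyshev distance d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(n - mm)])
        emb = emb - emb.mean(axis=1, keepdims=True)   # remove local baseline
        total, cnt = 0.0, 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            total += np.sum(np.exp(-(d / r) ** p))
            cnt += len(d)
        return total / cnt
    return float(np.log(phi(m) / phi(m + 1)))
```

Because every pair contributes a graded similarity rather than a 0/1 count, the statistic varies smoothly with the data, which is the intuition behind its greater stability on short signals.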
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) over the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters out the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is utilized to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both sets of results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
NASA Astrophysics Data System (ADS)
Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang
2017-04-01
Wheel bearings are essential mechanical components of trains, and fault detection of the wheel bearing is of great significance for effectively avoiding economic loss and casualties. However, under operating conditions, detecting and extracting the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphology filter (MF) is proposed to address it. The morphology AVG-Hat operator can not only greatly suppress the interference of strong background noise, but also enhance the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF. It provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighting coefficients of the different scale structuring elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (outer race fault, inner race fault and rolling element fault). The results show that the proposed method improves the extraction of fault features effectively compared with the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
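As a generic illustration of multi-scale morphological filtering of the kind described, not the paper's AVG-Hat operator or its PSO-tuned weights, one can average white and black top-hats over several flat structuring-element lengths to enhance short impulses against a slowly varying background:

```python
import numpy as np
from scipy.ndimage import white_tophat, black_tophat

def multiscale_hat(signal, scales=(3, 5, 7, 9), weights=None):
    """Multi-scale morphological 'hat' filter (sketch): weighted
    average of white and black top-hats over several flat SE lengths.
    Equal weights stand in for the paper's PSO-optimized ones."""
    signal = np.asarray(signal, dtype=float)
    if weights is None:
        weights = np.ones(len(scales)) / len(scales)
    out = np.zeros_like(signal)
    for w, s in zip(weights, scales):
        hat = 0.5 * (white_tophat(signal, size=s) + black_tophat(signal, size=s))
        out += w * hat
    return out
```

Fault impulses shorter than the structuring elements survive the top-hat, while the smooth shaft-rotation component is largely removed.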
Enhancement of IVR images by combining an ICA shrinkage filter with a multi-scale filter
NASA Astrophysics Data System (ADS)
Chen, Yen-Wei; Matsuo, Kiyotaka; Han, Xianhua; Shimizu, Atsumoto; Shibata, Koichi; Mishina, Yukio; Mukuta, Yoshihiro
2007-11-01
Interventional Radiology (IVR) is an important technique to visualize and diagnosis the vascular disease. In real medical application, a weak x-ray radiation source is used for imaging in order to reduce the radiation dose, resulting in a low contrast noisy image. It is important to develop a method to smooth out the noise while enhance the vascular structure. In this paper, we propose to combine an ICA Shrinkage filter with a multiscale filter for enhancement of IVR images. The ICA shrinkage filter is used for noise reduction and the multiscale filter is used for enhancement of vascular structure. Experimental results show that the quality of the image can be dramatically improved without any blurring in edge by the proposed method. Simultaneous noise reduction and vessel enhancement have been achieved.
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and that these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach that applies the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
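A minimal sketch of the scale-space idea this abstract builds on: smooth a mode shape at several Gaussian scales and accumulate absolute inter-scale differences, so that localized damage features stand out while broadband noise is averaged away. The actual method's cascade filter and data fusion rule are more elaborate; the names and scale choices below are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_damage_index(mode_shape, sigmas=(1, 2, 4, 8)):
    """Scale-space damage index (sketch): sum of absolute differences
    between successive Gaussian-smoothed representations of the mode
    shape. Local stiffness changes persist across scales and peak in
    the index; smooth global curvature largely cancels."""
    shape = np.asarray(mode_shape, dtype=float)
    reps = [gaussian_filter1d(shape, s) for s in sigmas]
    index = np.zeros_like(shape)
    for a, b in zip(reps[:-1], reps[1:]):
        index += np.abs(a - b)
    return index
```

On a simulated beam mode shape with a small local perturbation, the index peaks near the perturbed region.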
Poisson-Gaussian Noise Analysis and Estimation for Low-Dose X-ray Images in the NSCT Domain.
Lee, Sangyoon; Lee, Min Seok; Kang, Moon Gi
2018-03-29
The noise distribution of images obtained by X-ray sensors in low-dosage situations can be analyzed using the Poisson and Gaussian mixture model. Multiscale conversion is one of the most popular noise reduction methods used in recent years. Estimation of the noise distribution of each subband in the multiscale domain is the most important factor in performing noise reduction, with non-subsampled contourlet transform (NSCT) representing an effective method for scale and direction decomposition. In this study, we use artificially generated noise to analyze and estimate the Poisson-Gaussian noise of low-dose X-ray images in the NSCT domain. The noise distribution of the subband coefficients is analyzed using the noiseless low-band coefficients and the variance of the noisy subband coefficients. The noise-after-transform also follows a Poisson-Gaussian distribution, and the relationship between the noise parameters of the subband and the full-band image is identified. We then analyze noise of actual images to validate the theoretical analysis. Comparison of the proposed noise estimation method with an existing noise reduction method confirms that the proposed method outperforms traditional methods.
A multiscale filter for noise reduction of low-dose cone beam projections.
Yao, Weiguang; Farr, Jonathan B
2015-08-21
The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) of the fluence at a certain detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/(2σ_f^2)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f^2 is proven to be proportional to the noiseless fluence and modulated by local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
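The core idea, a filter scale that grows with the estimated fluence, can be sketched as follows. This is a minimal, hedged sketch: the constant `k_scale`, the scale bank, and the crude pre-smoothed fluence estimate are assumptions, and the paper's modulation by local structure strength (the linear fitting error) is omitted here.

```python
import numpy as np

def gaussian_kernel(sigma):
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian filtering along rows and columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 0, img, k, 'same')
    return np.apply_along_axis(np.convolve, 1, out, k, 'same')

def adaptive_denoise(proj, k_scale=0.02, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Choose a per-pixel scale with sigma_f^2 proportional to the (crudely
    pre-estimated) noiseless fluence, then sample from a filter bank."""
    fluence = smooth(proj, 3.0)                              # fluence estimate
    target = np.sqrt(k_scale * np.clip(fluence, 1e-6, None)) # sigma_f^2 ∝ fluence
    bank = np.stack([smooth(proj, s) for s in sigmas])
    idx = np.abs(target[None] - np.asarray(sigmas)[:, None, None]).argmin(axis=0)
    return np.take_along_axis(bank, idx[None], axis=0)[0]

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
noisy = rng.poisson(clean).astype(float)   # Poisson photon noise, std ~ 10
denoised = adaptive_denoise(noisy)
```

Away from the image borders (where the unnormalized `'same'` convolution attenuates values), the residual noise of the denoised flat field is substantially below the Poisson noise level.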
Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki
2015-09-10
In very high-resolution (VHR) push-broom-type satellite sensor data, destriping and denoising have remained chronic problems and have attracted major research efforts in the remote sensing field. Since estimating the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise; the proposed method shows significantly better enhancement than existing state-of-the-art methods in both qualitative and quantitative assessments.
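The Fourier half of the destriping idea can be demonstrated in isolation. This is a simplified, hedged sketch: it filters the full-band FFT rather than wavelet subbands, and the synthetic frame (along-track scene plus per-column gain offsets) is an assumption. The paper confines the Fourier filter to wavelet detail subbands precisely because zeroing the stripe frequencies full-band would also delete genuine cross-track scene content.

```python
import numpy as np

def destripe(img):
    """Remove purely column-wise components. For an image img[y, x], stripes
    constant along y concentrate on the ky = 0 row of the 2-D FFT; zeroing
    that row (except the DC term) removes them exactly."""
    F = np.fft.fft2(img)
    F[0, 1:] = 0.0
    return np.fft.ifft2(F).real

# Synthetic push-broom-like frame: scene varying along track (y) plus
# per-column stripe offsets, as produced by detector gain mismatch.
y = np.linspace(0.0, 3.0, 64)
scene = 10.0 * np.sin(y)[:, None] * np.ones((1, 64))
stripes = np.linspace(-2.0, 2.0, 64)[None, :]
corrected = destripe(scene + stripes)
```

Because the scene here has no cross-track variation, the correction is exact up to floating-point error: the stripes are removed and only their mean (here zero) survives in the DC term.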
Dynamical glucometry: Use of multiscale entropy analysis in diabetes
NASA Astrophysics Data System (ADS)
Costa, Madalena D.; Henriques, Teresa; Munshi, Medha N.; Segal, Alissa R.; Goldberger, Ary L.
2014-09-01
Diabetes mellitus (DM) is one of the world's most prevalent medical conditions. Contemporary management focuses on lowering mean blood glucose values toward a normal range, but largely ignores the dynamics of glucose fluctuations. We probed analyte time series obtained from continuous glucose monitor (CGM) sensors. We show that the fluctuations in CGM values sampled every 5 min are not uncorrelated noise. Next, using multiscale entropy analysis, we quantified the complexity of the temporal structure of the CGM time series from a group of elderly subjects with type 2 DM and age-matched controls. We further probed the structure of these CGM time series using detrended fluctuation analysis. Our findings indicate that the dynamics of glucose fluctuations from control subjects are more complex than those of subjects with type 2 DM over time scales ranging from about 5 min to 5 h. These findings support consideration of a new framework, dynamical glucometry, to guide mechanistic research and to help assess and compare therapeutic interventions, which should enhance complexity of glucose fluctuations and not just lower mean and variance of blood glucose levels.
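The multiscale entropy procedure used here (coarse-graining followed by sample entropy, with the tolerance fixed from the original series) can be sketched compactly. This is a hedged illustration on white noise, not the paper's CGM data; the parameters m = 2 and r = 0.15·SD are conventional choices.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau (the scale-tau series)."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): A and B count template pairs of length m+1 and m
    within Chebyshev distance r, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    def pair_count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=-1)
        return ((d <= r).sum() - len(t)) / 2.0
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r_frac=0.15):
    r = r_frac * np.std(x)  # tolerance fixed from the original (scale-1) series
    return [sample_entropy(coarse_grain(x, tau), m, r) for tau in scales]

rng = np.random.default_rng(0)
mse_white = multiscale_entropy(rng.standard_normal(1000))
```

Because r stays fixed while coarse-graining shrinks the standard deviation of uncorrelated noise, the entropy of white noise falls monotonically with scale; a signal whose entropy stays high across scales is judged more complex.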
NASA Astrophysics Data System (ADS)
Costa, M.; Priplata, A. A.; Lipsitz, L. A.; Wu, Z.; Huang, N. E.; Goldberger, A. L.; Peng, C.-K.
2007-03-01
Pathologic states are associated with a loss of dynamical complexity. Therefore, therapeutic interventions that increase physiologic complexity may enhance health status. Using multiscale entropy analysis, we show that the postural sway dynamics of healthy young and healthy elderly subjects are more complex than that of elderly subjects with a history of falls. Application of subsensory noise to the feet has been demonstrated to improve postural stability in the elderly. We next show that this therapy significantly increases the multiscale complexity of sway fluctuations in healthy elderly subjects. Quantification of changes in dynamical complexity of biologic variability may be the basis of a new approach to assessing risk and to predicting the efficacy of clinical interventions, including noise-based therapies.
Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir
2018-06-01
There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data, there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision, using a combination of electroencephalogram (EEG) and synthetic 1/f noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals.
A Multiscale pipeline for the search of string-induced CMB anisotropies
NASA Astrophysics Data System (ADS)
Vafaei Sadr, A.; Movahed, S. M. S.; Farhang, M.; Ringeval, C.; Bouchet, F. R.
2018-03-01
We propose a multiscale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the cosmic microwave background (CMB) anisotropies. Curvelet decomposition and an extended Canny algorithm are used to enhance the string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a CS contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We use this algorithm on a hundred simulated Nambu-Goto CMB flat sky maps, covering approximately 10 per cent of the sky, and for different string tensions Gμ. On noiseless sky maps with an angular resolution of 0.9 arcmin, we show that our pipeline detects CSs with string tension as low as Gμ ≳ 4.3 × 10^-10. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold would rise to Gμ ≳ 1.2 × 10^-7.
NASA Astrophysics Data System (ADS)
Chen, X.
2016-12-01
This study presents a multi-scale approach combining the Mode Decomposition and Variance Matching (MDVM) method with the basic process of the Point-by-Point Regression (PPR) method. Different from the widely applied PPR method, the scanning radius for each grid box was re-calculated considering the impact of topography (i.e. mean altitudes and fluctuations). Thus, appropriate proxy records were selected as candidates for reconstruction. The results of this multi-scale methodology provide not only the reconstructed gridded temperature but also the corresponding uncertainties at the four typical timescales. In addition, this method offers the further advantage that the spatial distribution of the uncertainty at different scales can be quantified. To interpret the necessity of scale separation in calibration, with proxy record locations over Eastern Asia, we perform two sets of pseudo-proxy experiments (PPEs) based on different ensembles of climate model simulations. One consists of seven simulation results from five models (BCC-CSM1-1, CSIRO-MK3L-1-2, HadCM3, MPI-ESM-P, and Giss-E2-R) of the "past1000" experiment from the Coupled Model Intercomparison Project Phase 5. The other is based on the simulations of the Community Earth System Model Last Millennium Ensemble (CESM-LME). The pseudo-record networks were obtained by adding white noise, with the signal-to-noise ratio (SNR) increasing from 0.1 to 1.0, to the simulated true state; the locations mainly followed the PAGES-2k network in Asia. In total, 400 years (1601-2000) of simulation were used for calibration and 600 years (1001-1600) for verification. The reconstructed results were evaluated by three metrics: 1) root mean squared error (RMSE), 2) correlation, and 3) the reduction of error (RE) score. The PPE verification results show that, in comparison with the ordinary linear calibration method (variance matching), the RMSE and RE score of PPR-MDVM are improved, especially for areas with sparse proxy records.
Notably, in some periods with large volcanic activity, the RMSE of MDVM becomes larger than that of VM for higher-SNR cases. It may be inferred that volcanic eruptions blur the intrinsic multi-scale variability of the climate system, so the MDVM method shows less advantage in that case.
Understanding perception of active noise control system through multichannel EEG analysis.
Bagha, Sangeeta; Tripathy, R K; Nanda, Pranati; Preetam, C; Das, Debi Prasad
2018-06-01
In this Letter, a method is proposed to investigate the effect of noise with and without active noise control (ANC) on multichannel electroencephalogram (EEG) signal. The multichannel EEG signal is recorded during different listening conditions such as silent, music, noise, ANC with background noise and ANC with both background noise and music. The multiscale analysis of EEG signal of each channel is performed using the discrete wavelet transform. The multivariate multiscale matrices are formulated based on the sub-band signals of each EEG channel. The singular value decomposition is applied to the multivariate matrices of multichannel EEG at significant scales. The singular value features at significant scales and the extreme learning machine classifier with three different activation functions are used for classification of multichannel EEG signal. The experimental results demonstrate that, for ANC with noise and ANC with noise and music classes, the proposed method has sensitivity values of 75.831% ( p < 0.001 ) and 99.31% ( p < 0.001 ), respectively. The method has an accuracy value of 83.22% for the classification of EEG signal with music and ANC with music as stimuli. The important finding of this study is that by the introduction of ANC, music can be better perceived by the human brain.
de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás
2017-12-01
The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles to equilibrium very quickly. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently: upon transfer of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as the travelling wave velocity.
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of the front, which cannot be accounted for by the coarse-grained model. Such fluctuations have non-trivial effects on the wave velocity. Beyond the development of a new hybrid method, we thus conclude that birth-rate fluctuations are central to a quantitatively accurate description of invasive phenomena such as tumour growth.
Multiscale Poincaré plots for visualizing the structure of heartbeat time series.
Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L
2016-02-09
Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
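Rather than drawing the plots, a short sketch can reproduce the stated area behaviour by summarizing each Poincaré cloud with the standard SD1/SD2 ellipse descriptors. This is a hedged illustration: the FFT-based 1/f generator, the scale set, and the ellipse-area summary are assumptions standing in for the paper's visual method.

```python
import numpy as np

rng = np.random.default_rng(1)

def coarse_grain(x, tau):
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def poincare_area(x):
    """Area of the SD1/SD2 ellipse fitted to the Poincaré cloud (x_n, x_{n+1})."""
    d = np.diff(x)
    sd1 = np.sqrt(np.var(d) / 2.0)                      # spread across the identity line
    sd2 = np.sqrt(2.0 * np.var(x) - np.var(d) / 2.0)    # spread along it
    return np.pi * sd1 * sd2

n = 4096
white = rng.standard_normal(n)
f = np.fft.rfftfreq(n); f[0] = 1.0
pink = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) / np.sqrt(f))  # ~1/f noise

white_ratio = poincare_area(white) / poincare_area(coarse_grain(white, 8))
pink_ratio = poincare_area(pink) / poincare_area(coarse_grain(pink, 8))
```

As the abstract describes, the white-noise phase-space area collapses under coarse-graining (variance drops roughly as 1/tau), while the 1/f area is comparatively conserved.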
NASA Astrophysics Data System (ADS)
Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei
2017-03-01
The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-negligible noise and diffraction rings surrounding particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.
Avti, Pramod K; Hu, Song; Favazza, Christopher; Mikos, Antonios G; Jansen, John A; Shroyer, Kenneth R; Wang, Lihong V; Sitharaman, Balaji
2012-01-01
In the present study, the efficacy of multi-scale photoacoustic microscopy (PAM) was investigated to detect, map, and quantify trace amounts [nanograms (ng) to micrograms (µg)] of SWCNTs in a variety of histological tissue specimens consisting of cancer and benign tissue biopsies (histological specimens from implanted tissue engineering scaffolds). Both optical-resolution (OR) and acoustic-resolution (AR) PAM were employed to detect, map and quantify the SWCNTs in a variety of histological specimens and compared with other optical techniques (bright-field optical microscopy, Raman microscopy, and near-infrared (NIR) fluorescence microscopy). Both OR- and AR-PAM allow the detection and quantification of SWCNTs in histological specimens with scalable spatial resolution and depth penetration. The noise-equivalent detection sensitivity to SWCNTs in the specimens was calculated to be as low as ∼7 pg. Image processing analysis further allowed the mapping, distribution, and quantification of the SWCNTs in the histological sections. The results demonstrate the potential of PAM as a promising imaging technique to detect, map, and quantify SWCNTs in histological specimens, and could complement the capabilities of current optical and electron microscopy techniques in the analysis of histological specimens containing SWCNTs.
Multiscale permutation entropy analysis of laser beam wandering in isotropic turbulence.
Olivares, Felipe; Zunino, Luciano; Gulich, Damián; Pérez, Darío G; Rosso, Osvaldo A
2017-10-01
We have experimentally quantified the temporal structural diversity from the coordinate fluctuations of a laser beam propagating through isotropic optical turbulence. The main focus here is on the characterization of the long-range correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. To fulfill this goal, a laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing the symbolic technique based on ordinal patterns to estimate the well-known permutation entropy. We show that the permutation entropy estimations at multiple time scales evidence an interplay between different dynamical behaviors. More specifically, a crossover between two different scaling regimes is observed. We confirm a transition from an integrated stochastic process contaminated with electronic noise to a fractional Brownian motion with a Hurst exponent H=5/6 as the sampling time increases. Besides, we are able to quantify, from the estimated entropy, the amount of electronic noise as a function of the turbulence strength. We have also demonstrated that these experimental observations are in very good agreement with numerical simulations of noisy fractional Brownian motions with a well-defined crossover between two different scaling regimes.
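The ordinal-pattern technique the authors apply can be sketched with the standard Bandt-Pompe estimator, using the embedding delay as the scale parameter. This is a hedged illustration: the order/delay values and the white-noise and ramp test signals are assumptions; the crossover the paper reports requires correlated, fBm-like data, which this sketch does not reproduce.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized Bandt-Pompe entropy of ordinal patterns of length `order`
    built from samples spaced `delay` apart (the multiscale parameter)."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([tuple(np.argsort(x[i:i + order * delay:delay]))
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
pe_white = [permutation_entropy(white, order=4, delay=d) for d in (1, 2, 4, 8)]
pe_ramp = permutation_entropy(np.arange(200.0))
```

For uncorrelated noise the normalized entropy stays near 1 at every delay; for a monotone signal a single ordinal pattern dominates and the entropy collapses toward 0, which is the contrast exploited when reading off scaling regimes from entropy-versus-delay curves.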
NASA Astrophysics Data System (ADS)
Shi, Wenzhong; Deng, Susu; Xu, Wenbing
2018-02-01
For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results of the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and the tangential curvature image generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with the morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent (< 5 years) landslides and approximately 35% of historical (> 10 years) landslides were extracted. 
This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should be filtered using a filtering strategy based on supplementary information provided by expert knowledge or other data sources.
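The hot-spot statistic at the core of this method can be sketched for a raster. This is a hedged, binary-weight variant of the Getis-Ord Gi* statistic on a synthetic array; the window size and the planted "scarp-like" cluster are assumptions, whereas the paper computes Gi* on profile and tangential curvature from LiDAR-derived DEMs and thresholds it at a significance level.

```python
import numpy as np

def gi_star(img, w=1):
    """Getis-Ord Gi* with binary weights on a (2w+1)^2 moving window:
    the standardized difference between each local sum and its expectation
    under the global mean and standard deviation."""
    n = img.size
    xbar, s = img.mean(), img.std()
    W = (2 * w + 1) ** 2                      # neighbours incl. the centre cell
    pad = np.pad(img, w, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2 * w + 1, 2 * w + 1))
    local_sum = win.sum(axis=(-1, -2))
    num = local_sum - xbar * W
    den = s * np.sqrt((n * W - W ** 2) / (n - 1))
    return num / den

rng = np.random.default_rng(0)
curv = 0.1 * rng.standard_normal((64, 64))    # stand-in for a curvature image
curv[10:13, 10:13] += 5.0                      # one cluster of similar high values
g = gi_star(curv)
```

Cells inside the planted cluster receive large positive Gi* values (clustering of similar high curvature), while isolated noise stays near zero, which is how the method separates coherent landslide morphology from DEM noise.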
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Analysis of crude oil markets with improved multiscale weighted permutation entropy
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Liu, Cheng
2018-03-01
Entropy measures have recently been extensively used to study the complexity properties of nonlinear systems. Weighted permutation entropy (WPE) can overcome the neglect of amplitude information in time series compared with PE, and shows a distinctive ability to extract complexity information from data having abrupt changes in magnitude. The improved (sometimes called composite) multi-scale (MS) method has the advantage of reducing errors and improving accuracy when used to evaluate multiscale entropy values of insufficiently long time series. In this paper, we combine the merits of WPE and the improved MS method to propose the improved multiscale weighted permutation entropy (IMWPE) method for complexity investigation of a time series. It is validated as effective on artificial data, white noise and 1/f noise, and on real market data for Brent and Daqing crude oil. Meanwhile, the complexity properties of crude oil markets are explored for return series, for volatility series with multiple exponents, and for the EEMD-produced intrinsic mode functions (IMFs) that represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed via the Hilbert transform applied to each IMF.
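The two ingredients being combined, variance-weighted ordinal patterns and the composite coarse-graining, can be sketched together. This is a hedged illustration mirroring only the white-noise validation step; the order, scales, and series length are assumptions, and the oil-market and EEMD analyses are not reproduced.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    """WPE: each ordinal pattern is weighted by the variance of its segment,
    so large-amplitude excursions are not ignored as in plain PE."""
    n = len(x) - (order - 1) * delay
    weights = {}
    for i in range(n):
        seg = x[i:i + order * delay:delay]
        pat = tuple(np.argsort(seg))
        weights[pat] = weights.get(pat, 0.0) + np.var(seg)
    p = np.array(list(weights.values()))
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

def improved_ms_wpe(x, tau, **kwargs):
    """Improved (composite) multiscale WPE: average the entropy over all tau
    offsets of the coarse-graining instead of using a single offset."""
    vals = []
    for k in range(tau):
        y = x[k:]
        n = len(y) // tau
        cg = y[:n * tau].reshape(n, tau).mean(axis=1)
        vals.append(weighted_permutation_entropy(cg, **kwargs))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
white = rng.standard_normal(3000)
imwpe_white = [improved_ms_wpe(white, tau) for tau in (1, 2, 4)]
```

Averaging over the tau coarse-graining offsets is what reduces the estimator variance for short series: each offset alone sees only len(x)/tau points, but their mean uses all of them.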
NASA Astrophysics Data System (ADS)
Zimoń, M. J.; Prosser, R.; Emerson, D. R.; Borg, M. K.; Bray, D. J.; Grinberg, L.; Reese, J. M.
2016-11-01
Filtering of particle-based simulation data can lead to reduced computational costs and enable more efficient information transfer in multi-scale modelling. This paper compares the effectiveness of various signal processing methods to reduce numerical noise and capture the structures of nano-flow systems. In addition, a novel combination of these algorithms is introduced, showing the potential of hybrid strategies to improve further the de-noising performance for time-dependent measurements. The methods were tested on velocity and density fields, obtained from simulations performed with molecular dynamics and dissipative particle dynamics. Comparisons between the algorithms are given in terms of performance, quality of the results and sensitivity to the choice of input parameters. The results provide useful insights on strategies for the analysis of particle-based data and the reduction of computational costs in obtaining ensemble solutions.
MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.
Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K
2015-04-01
Fluorescence microscopy images are contaminated by noise, and improving image quality by filtering without blurring vascular structures is an important step in automatic image analysis. The application of interest here is to automatically and accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for preserving microvasculature and removing noise in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.
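The cited method uses 3D structure tensors and coherency-enhancing flow; as a hedged illustration of the underlying edge-preserving principle only, here is a scalar 1D Perona-Malik diffusion sketch, in which smoothing is suppressed wherever the local gradient is large. All parameter values are illustrative.

```python
def perona_malik_1d(u, iters=50, kappa=0.1, dt=0.2):
    """1D anisotropic (Perona-Malik) diffusion sketch: the
    edge-stopping function c(g) = 1 / (1 + (g/kappa)^2) shuts down
    diffusion across strong gradients, so edges survive while noise
    in flat regions is diffused away. Explicit Euler scheme; stable
    here since dt * (c_e + c_w) <= 0.4 < 1."""
    u = list(u)
    for _ in range(iters):
        new = u[:]
        for i in range(1, len(u) - 1):
            ge = u[i + 1] - u[i]                      # east gradient
            gw = u[i - 1] - u[i]                      # west gradient
            ce = 1.0 / (1.0 + (ge / kappa) ** 2)      # edge-stopping weights
            cw = 1.0 / (1.0 + (gw / kappa) ** 2)
            new[i] = u[i] + dt * (ce * ge + cw * gw)
        u = new
    return u
```

On a noisy step signal, small alternating noise (gradient well below kappa) is smoothed away while the unit step (gradient far above kappa) is preserved almost untouched.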
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yazzie, K.E.; Williams, J.J.; Phillips, N.C.
2012-08-15
Sn-rich (Pb-free) alloys serve as electrical and mechanical interconnects in electronic packaging. It is critical to quantify the microstructures of Sn-rich alloys to obtain a fundamental understanding of their properties. In this work, the intermetallic precipitates in Sn-3.5Ag and Sn-0.7Cu, and globular lamellae in Sn-37Pb solder joints, were visualized and quantified using 3D X-ray synchrotron tomography and focused ion beam (FIB) tomography. 3D reconstructions were analyzed to extract statistics on particle size and spatial distribution. In the Sn-Pb alloy the interconnectivity of Sn-rich and Pb-rich constituents was quantified. It is shown that multiscale characterization using 3D X-ray and FIB tomography enabled the characterization of the complex morphology, distribution, and statistics of precipitates and contiguous phases over a range of length scales. Highlights: • Multiscale characterization by X-ray synchrotron and focused ion beam tomography. • Characterized microstructural features in several Sn-based alloys. • Quantified size, fraction, and clustering of microstructural features.
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
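The minor loop discussed above can be illustrated with a toy single-scale 1D CLEAN; the actual multiscale, multifrequency algorithm is far more involved. The gain, iteration cap and stopping threshold below are illustrative defaults.

```python
def clean_1d(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    """Toy single-scale 1D CLEAN minor loop: repeatedly find the
    residual peak, subtract a gain-scaled, shifted copy of the PSF,
    and accumulate the subtracted flux into the model."""
    res = list(dirty)
    model = [0.0] * len(dirty)
    c = len(psf) // 2                                # PSF centre index
    for _ in range(niter):
        p = max(range(len(res)), key=lambda i: abs(res[i]))
        if abs(res[p]) < threshold:                  # clean down to threshold
            break
        comp = gain * res[p]
        model[p] += comp
        for j, h in enumerate(psf):                  # subtract shifted PSF
            k = p + j - c
            if 0 <= k < len(res):
                res[k] -= comp * h
    return model, res
```

For a single point source the residual peak decays geometrically by (1 − gain) per iteration, and the model converges to the true flux.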
Mikos, Antonios G.; Jansen, John A.; Shroyer, Kenneth R.; Wang, Lihong V.; Sitharaman, Balaji
2012-01-01
Aims: In the present study, the efficacy of multi-scale photoacoustic microscopy (PAM) was investigated to detect, map, and quantify trace amounts [nanograms (ng) to micrograms (µg)] of SWCNTs in a variety of histological tissue specimens consisting of cancer and benign tissue biopsies (histological specimens from implanted tissue engineering scaffolds). Materials and Methods: Optical-resolution (OR) and acoustic-resolution (AR) PAM were employed to detect, map and quantify the SWCNTs in a variety of tissue histological specimens and compared with other optical techniques (bright-field optical microscopy, Raman microscopy, near-infrared (NIR) fluorescence microscopy). Results: Both optical-resolution and acoustic-resolution PAM allow the detection and quantification of SWCNTs in histological specimens with scalable spatial resolution and depth penetration. The noise-equivalent detection sensitivity to SWCNTs in the specimens was calculated to be as low as ∼7 pg. Image processing analysis further allowed the mapping, distribution, and quantification of the SWCNTs in the histological sections. Conclusions: The results demonstrate the potential of PAM as a promising imaging technique to detect, map, and quantify SWCNTs in histological specimens, and it could complement the capabilities of current optical and electron microscopy techniques in the analysis of histological specimens containing SWCNTs. PMID:22496892
Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected by aging and disease. Entropy based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and the underlying stimuli that are responsible for anomalous behavior. Single-scale traditional entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. While MSE works well for long signals, it may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE based features lead to higher classification accuracies than MSE based features. PMID:29771977
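As background for the MSE variants discussed above, here is a minimal sketch of coarse-graining, a simplified sample entropy, and the composite-MSE averaging over starting offsets (this illustrates the generic CMSE idea, not the proposed MNCSE, which substitutes a symbolic entropy estimate). The absolute tolerance r and all defaults are illustrative.

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy: -ln(A/B), where B and A count
    template pairs of length m and m+1 matching within Chebyshev
    tolerance r (treated here as an absolute tolerance)."""
    def count(mm):
        n = len(x) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    B, A = count(m), count(m + 1)
    return -log(A / B) if A > 0 and B > 0 else float('inf')

def coarse_grain(x, tau, offset=0):
    """Non-overlapping averages of length tau, starting at `offset`.
    The series shrinks by a factor tau -- the cause of the short-series
    unreliability noted above."""
    return [sum(x[i:i + tau]) / tau
            for i in range(offset, len(x) - tau + 1, tau)]

def composite_mse(x, tau, m=2, r=0.2):
    """Composite MSE: average SampEn over all tau coarse-graining
    offsets, instead of the single offset used by the original MSE."""
    vals = [sample_entropy(coarse_grain(x, tau, k), m, r) for k in range(tau)]
    return sum(vals) / tau
```

Averaging over the tau possible coarse-grained series uses all the data at each scale, which is what reduces the estimator variance for short records.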
Accurate feature detection and estimation using nonlinear and multiresolution analysis
NASA Astrophysics Data System (ADS)
Rudin, Leonid; Osher, Stanley
1994-11-01
A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.
A data-driven approach for denoising GNSS position time series
NASA Astrophysics Data System (ADS)
Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin
2017-12-01
Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to - 3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.
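Among the reference methods mentioned, stacking filtering is simple enough to sketch: the common mode error at each epoch is estimated as the cross-station mean residual and subtracted from every station. A minimal sketch, assuming equal-length, gap-free residual series (weighted stacking variants exist but are not shown):

```python
def stacking_filter(residuals):
    """Stacking filter for common mode error (CME): the CME at each
    epoch is estimated as the unweighted mean residual across all
    stations, then removed from every station's series.
    `residuals` is a list of per-station residual time series."""
    n_sta = len(residuals)
    n_ep = len(residuals[0])
    cme = [sum(residuals[s][t] for s in range(n_sta)) / n_sta
           for t in range(n_ep)]
    filtered = [[residuals[s][t] - cme[t] for t in range(n_ep)]
                for s in range(n_sta)]
    return filtered, cme
```

Any signal shared identically by all stations is removed exactly, while station-specific offsets survive (shifted by the network mean).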
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, W.; Zhu, W. D.; Smith, S. A.
While structural damage detection based on flexural vibration shapes, such as mode shapes and steady-state response shapes under harmonic excitation, has been well developed, little attention has been paid to detection based on longitudinal vibration shapes, which also contain damage information. This study formulates, for the first time, a slope vibration shape for damage detection in bars using longitudinal vibration shapes. To enhance noise robustness of the method, a slope vibration shape is transformed to a multiscale slope vibration shape in a multiscale domain using the wavelet transform, which has explicit physical implications, high damage sensitivity, and noise robustness. These advantages are demonstrated in numerical cases of damaged bars, and results show that multiscale slope vibration shapes can be used for identifying and locating damage in a noisy environment. A three-dimensional (3D) scanning laser vibrometer is used to measure the longitudinal steady-state response shape of an aluminum bar with damage due to reduced cross-sectional dimensions under harmonic excitation, and results show that the method can successfully identify and locate the damage. Slopes of longitudinal vibration shapes are shown to be suitable for damage detection in bars and have potential for applications in noisy environments.
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), which is a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is well adapted to the deconvolution task of HXMT for diffuse source detection, and the improved method can suppress noise and improve the correlation and signal-to-noise ratio, proving itself the better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014).
Filters for Improvement of Multiscale Data from Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Reynolds, Daniel R.
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
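A minimal sketch of the spectral-filtering idea: additive white noise spreads uniformly over all Fourier modes, so zeroing modes above a cutoff removes most of the noise while retaining a smooth signal. The direct O(N²) DFT below is purely illustrative, and the cutoff choice is exactly the quantity the paper's automatic method estimates.

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform (O(N^2), for illustration)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def spectral_filter(x, keep):
    """Sharp low-pass: zero every Fourier mode whose two-sided
    frequency index min(k, N-k) exceeds `keep`."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        if min(k, N - k) > keep:
            X[k] = 0.0
    return idft(X)
```

For a single-harmonic signal corrupted by a high-frequency component, the filter recovers the signal essentially to machine precision.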
The Total Variation Regularized L1 Model for Multiscale Decomposition
2006-01-01
The TV-L1 model replaces the quadratic fidelity term of the total variation (TV) model with an L1 fidelity term, with impressive and successful applications to impulsive noise removal and outlier identification. The model has been used to filter 1D signals [3], to remove impulsive (salt-and-pepper) noise [35], and to extract textures from natural images [45]. This work builds on the discovery of the model's usefulness for removing impulsive noise [34, 35, 36] and on Chan and Esedoglu's [17] further analysis of the model.
Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data
NASA Astrophysics Data System (ADS)
Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth
2012-03-01
The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, an order of magnitude more than EGRET found [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that forms a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal dependent: on the brightest parts of the image, such as the Galactic plane or the brightest sources, there are many photons per pixel, so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered as a realization of an inhomogeneous Poisson process.
This statistical noise makes source detection more difficult, so it is highly desirable to have an efficient denoising method for spherical Poisson data. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance stabilizing transform (VST), such as the Anscombe [4] and Fisz [10] transforms, applied respectively in the spatial [8] or in the wavelet domain [11]. The transform reforms the data so that the noise approximately becomes Gaussian with a constant variance. Standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale variance stabilizing transform (MS-VST). It combines a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse, and, as the noise is not sparse in any transform domain, it is easy to separate it from the signal. When the noise is Gaussian of known variance, it is easy to remove it by thresholding in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data.
Wavelets represent regular structures and isotropic singularities more efficiently, whereas ridgelets are designed to represent global lines in an image, and curvelets efficiently represent curvilinear contours. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme.
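The variance-stabilizing step can be sketched directly: the Anscombe transform maps Poisson counts so that their noise becomes approximately Gaussian with unit variance, whatever the underlying intensity. The Poisson sampler and the numerical check are illustrative.

```python
from math import exp, sqrt
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    L = exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def anscombe(k):
    """Anscombe VST: for Poisson counts with mean >~ 4, the transformed
    values have approximately unit-variance Gaussian noise, so standard
    Gaussian denoisers can be applied after the transform."""
    return 2.0 * sqrt(k + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (a common, slightly biased choice)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

The raw Poisson variance grows linearly with the intensity, while the variance after the transform stays close to 1 across intensities, which is the property the MS-VST exploits scale by scale.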
NASA Astrophysics Data System (ADS)
Hu, Bingbing; Li, Bing
2016-02-01
It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
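The stochastic resonance mechanism itself can be sketched with the classic overdamped bistable model (this is the generic SR system, not the paper's DTCWT-based multiscale noise tuning); all parameters below are illustrative.

```python
import random

def bistable_sr(signal, noise_std, a=1.0, b=1.0, dt=0.01, seed=1):
    """Euler-Maruyama integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t) + noise, the canonical stochastic
    resonance model: a weak input s(t) is amplified when the noise
    intensity matches the interwell hopping rate, driving x between
    the potential wells at +/- sqrt(a/b)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for s in signal:
        drift = a * x - b * x ** 3 + s
        x += dt * drift + noise_std * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

With a = b = 1 the wells sit at ±1, and the cubic restoring force keeps the trajectory bounded; "noise tuning" methods adjust the effective noise level (here, per wavelet scale) to maximize the output signal-to-noise ratio.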
NASA Astrophysics Data System (ADS)
Liu, Weixin; Jin, Ningde; Han, Yunfeng; Ma, Jing
2018-06-01
In the present study, a multi-scale entropy algorithm was used to characterise the complex flow phenomena of turbulent droplets in oil-water two-phase flow with high water-cut. First, we compared multi-scale weighted permutation entropy (MWPE), multi-scale approximate entropy (MAE), multi-scale sample entropy (MSE) and multi-scale complexity measure (MCM) for typical nonlinear systems. The results show that MWPE presents satisfactory variability with scale and good noise robustness. Accordingly, we conducted an experiment on vertical upward oil-water two-phase flow with high water-cut and collected the signals of a high-resolution microwave resonant sensor, from which two indexes, the entropy rate and the mean value of MWPE, were extracted. In addition, the effects of total flow rate and water-cut on these two indexes were analysed. Our research shows that MWPE is an effective method to uncover the dynamic instability of oil-water two-phase flow with high water-cut.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to study the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets with relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets with high asynchrony, the entropy values do not always decrease with increasing scale factor; sometimes they tend to increase. Both MSEBSS and MCSEBSS are thus able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
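The core change FMSE makes can be sketched as replacing the binary match function of sample entropy with a full-range similarity. The sigmoid form and `steepness` value below are illustrative stand-ins, not the paper's actual similarity function.

```python
from math import exp

def hard_similarity(d, r):
    """Original SampEn match function: 1 if the distance d between two
    subsequences is within tolerance r, else 0. A subsequence pair
    crossing the r boundary flips the count abruptly."""
    return 1.0 if d <= r else 0.0

def soft_similarity(d, r, steepness=5.0):
    """Full-range [0, 1] similarity in the spirit of FMSE: distances
    near r contribute fractionally instead of flipping abruptly, which
    stabilises entropy estimates on short series. Equals 0.5 exactly
    at d = r and decreases monotonically with d."""
    return 1.0 / (1.0 + exp(steepness * (d - r) / r))
```

Summing soft similarities in place of binary match counts yields match statistics that vary continuously with the data, removing the sudden entropy jumps described above.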
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-04-06
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series embraces small changes. This problem becomes especially severe when the length of time series is getting short. For solving such the problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE has a significant improvement in reliability and stability of measuring complexity of time series, especially when the length of time series is short, compared to MSE and composite multiscale entropy (CMSE). The proposed method FMSE is capable of improving the performance of time series analysis based topology and traffic congestion control techniques.
Rigoli, Lillian M.; Holman, Daniel; Spivey, Michael J.; Kello, Christopher T.
2014-01-01
When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as those arising from timing functions vs. those arising from autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior. PMID:25309389
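The "many processes over many timescales" account being tested above can itself be sketched: superposing first-order (AR(1)) relaxation processes with log-spaced time constants produces an approximately 1/f spectrum over the band the time constants span. The weighting and the specific time constants are illustrative choices.

```python
import random

def multi_timescale_noise(n, taus, seed=0):
    """Superpose AR(1) relaxation processes with time constants `taus`.
    Each component has a Lorentzian spectrum; with the 1/sqrt(tau)
    weighting the components contribute roughly equal power per
    octave, so their sum approximates 1/f noise over the covered
    band (a classic construction, sketched here, not the paper's)."""
    rng = random.Random(seed)
    states = [0.0] * len(taus)
    out = []
    for _ in range(n):
        for i, tau in enumerate(taus):
            a = 1.0 - 1.0 / tau            # AR(1) coefficient for this timescale
            states[i] = a * states[i] + rng.gauss(0.0, 1.0)
        out.append(sum(s / (t ** 0.5) for s, t in zip(states, taus)))
    return out
```

The point of the study above is that this additive account, however easy to simulate, does not explain the observed coupling between 1/f noises from different physiological subsystems.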
Reducing the uncertainty in the fidelity of seismic imaging results
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Zou, Z.
2017-12-01
A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect of uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifacts. Key challenges to achieving the fidelity of seismic imaging include: (1) Difficulty in distinguishing signal from artifact and noise; (2) Limitations in signal-to-noise ratio and seismic illumination; and (3) The multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they map in opposite directions between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach to seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called the imaging space, where the output reflection images of the subsurface are formed based on an imaging condition.
A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.
Robert S. Arkle; David S. Pilliod; Steven E. Hanser; Matthew L. Brooks; Jeanne C. Chambers; James B. Grace; Kevin C. Knutson; David A. Pyke; Justin L. Welty; Troy A. Wirth
2014-01-01
A recurrent challenge in the conservation of wide-ranging, imperiled species is understanding which habitats to protect and whether we are capable of restoring degraded landscapes. For Greater Sage-grouse (Centrocercus urophasianus), a species of conservation concern in the western United States, we approached this problem by developing multi-scale empirical models of...
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of the coefficients is enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with the minimum attainable L2 error.
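The Poisson-Skellam connection comes from the unnormalized Haar transform: a detail coefficient is the difference of two independent Poisson sums, which is Skellam-distributed. The minimal sketch below uses a plain soft threshold as a stand-in; the paper's minimum-risk Skellam shrinkage operator is not reproduced here.

```python
def haar_level(x):
    """One level of the unnormalized Haar transform: pairwise sums
    (approximation) and differences (detail). For Poisson counts the
    detail coefficients follow a Skellam distribution."""
    approx = [x[2 * i] + x[2 * i + 1] for i in range(len(x) // 2)]
    detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
    return approx, detail

def soft_threshold(detail, t):
    # Generic shrinkage: pull each coefficient toward zero by t.
    # (Stand-in only; the paper derives an operator tailored to
    # Skellam statistics rather than this L1-style rule.)
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in detail]
```

Denoising then amounts to shrinking the detail coefficients at each level and inverting the transform; only the shrinkage rule differs between methods.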
Quantifying noise in optical tweezers by allan variance.
Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B
2009-07-20
Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single molecule assays. Various initiatives have been taken to reduce or eliminate noise but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
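The overlapping Allan variance itself is straightforward to compute: average the signal in windows of m samples and take half the mean squared difference between window means separated by m. A minimal pure-Python sketch (the function name is mine; a unit sampling interval is assumed):

```python
def allan_variance(x, m):
    """Overlapping Allan variance at averaging window m samples."""
    n = len(x)
    # Means of every (overlapping) window of m consecutive samples.
    means = [sum(x[i:i + m]) / m for i in range(n - m + 1)]
    # Differences between window means one window-length apart.
    diffs = [means[i + m] - means[i] for i in range(len(means) - m)]
    return sum(d * d for d in diffs) / (2 * len(diffs))
```

Plotting this against m on log-log axes is the usual diagnostic: for white noise it falls as 1/m, while drift makes it rise again at large m, which is how the optimal measurement time is read off.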
Application of optimized multiscale mathematical morphology for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao
2017-04-01
In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator has been designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are inhibited. Therefore, a unit scale is adopted as the structuring element (SE) scale in IM. According to the above definitions, the IM method is implemented on the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features under a strong noise background.
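In 1-D, the conventional multiscale morphology that OMM builds on can be sketched with flat structuring elements: erosion and dilation are running minima and maxima over a window whose length sets the scale, and their difference peaks at impulsive features. The function names below are mine, and the paper's ODIF operator is a refinement not reproduced here.

```python
def erode(x, k):
    # Running minimum over a flat structuring element of length 2*k + 1.
    n = len(x)
    return [min(x[max(0, i - k):i + k + 1]) for i in range(n)]

def dilate(x, k):
    # Running maximum over the same flat structuring element.
    n = len(x)
    return [max(x[max(0, i - k):i + k + 1]) for i in range(n)]

def morph_gradient(x, k):
    # Dilation minus erosion at scale k; flat regions map to zero,
    # impulsive features to large values.
    return [d - e for d, e in zip(dilate(x, k), erode(x, k))]
```

Running `morph_gradient` over several values of k and combining the results is the essence of a multiscale morphological analysis of a vibration signal.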
Structural Damage Detection Using Slopes of Longitudinal Vibration Shapes
Xu, W.; Zhu, W. D.; Smith, S. A.; ...
2016-03-18
While structural damage detection based on flexural vibration shapes, such as mode shapes and steady-state response shapes under harmonic excitation, has been well developed, little attention is paid to detection based on longitudinal vibration shapes, which also contain damage information. This study originally formulates a slope vibration shape for damage detection in bars using longitudinal vibration shapes. To enhance noise robustness of the method, a slope vibration shape is transformed to a multiscale slope vibration shape in a multiscale domain using the wavelet transform, which has explicit physical implication, high damage sensitivity, and noise robustness. These advantages are demonstrated in numerical cases of damaged bars, and results show that multiscale slope vibration shapes can be used for identifying and locating damage in a noisy environment. A three-dimensional (3D) scanning laser vibrometer is used to measure the longitudinal steady-state response shape of an aluminum bar with damage due to reduced cross-sectional dimensions under harmonic excitation, and results show that the method can successfully identify and locate the damage. Slopes of longitudinal vibration shapes are shown to be suitable for damage detection in bars and have potential for applications in noisy environments.
An infrared small target detection method based on multiscale local homogeneity measure
NASA Astrophysics Data System (ADS)
Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen
2018-05-01
Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of IR small target detection systems. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the significance of small targets. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied to small target segmentation. Experimental results on three different scenarios indicate that MLHM performs well under strong noise interference.
Supporting statement for community study of human response to aircraft noise
NASA Technical Reports Server (NTRS)
Dempsey, T. K.; Deloach, R.; Stephens, D. G.
1980-01-01
A study plan for quantifying the relationship between human annoyance and the noise level of individual aircraft events is presented. The validity of various noise descriptors or noise metrics for quantifying aircraft noise levels is assessed.
Poisson noise removal with pyramidal multi-scale transforms
NASA Astrophysics Data System (ADS)
Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M.
2013-09-01
In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VST). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform which is the core of the first generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, that we apply to the denoising of a simulated cosmological volume.
Hemakom, Apit; Goverdovsky, Valentin; Looney, David; Mandic, Danilo P
2016-04-13
An extension to multivariate empirical mode decomposition (MEMD), termed adaptive-projection intrinsically transformed MEMD (APIT-MEMD), is proposed to cater for power imbalances and inter-channel correlations in real-world multichannel data. It is shown that the APIT-MEMD exhibits similar or better performance than MEMD for a large number of projection vectors, whereas it outperforms MEMD for the critical case of a small number of projection vectors within the sifting algorithm. We also employ the noise-assisted APIT-MEMD within our proposed intrinsic multiscale analysis framework and illustrate the advantages of such an approach in notoriously noise-dominated cooperative brain-computer interface (BCI) based on the steady-state visual evoked potentials and the P300 responses. Finally, we show that for a joint cognitive BCI task, the proposed intrinsic multiscale analysis framework improves system performance in terms of the information transfer rate. © 2016 The Author(s).
Towards a Multiscale Approach to Cybersecurity Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay
2013-11-12
We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.
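One way to make the multiscale shortest-path idea concrete: map each node to a block, connect two blocks by their cheapest crossing edge, and run an ordinary all-pairs shortest-path algorithm on the coarse graph. The min-weight contraction and all names below are my assumptions for illustration; the paper's exact multiscale definitions differ in detail.

```python
INF = float("inf")

def floyd_warshall(n, edges):
    # All-pairs shortest paths on an undirected weighted graph
    # with nodes 0..n-1 and edges given as (u, v, weight) triples.
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def coarsen(edges, block):
    """Quotient graph for a node->block map: the weight between two
    blocks is the cheapest edge crossing between them."""
    best = {}
    for u, v, w in edges:
        a, b = block[u], block[v]
        if a != b:
            key = (min(a, b), max(a, b))
            best[key] = min(best.get(key, INF), w)
    return [(a, b, w) for (a, b), w in best.items()]
```

Under this construction, coarse distances lower-bound the corresponding fine distances, so the coarse graph gives a cheap optimistic estimate of how far an attacker is from a sensitive machine.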
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capture; they are embedded in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is normally required as a reference, in which case Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure quality. However, these methods are not possible if there is no reference image. In our approach, a discrete wavelet transformation is applied to the blurred image, decomposing it into an approximation image and three detail sub-images, namely horizontal, vertical, and diagonal images. We then focus on measuring noise in the detail images and blur in the approximation image to assess image quality. We compute a noise mean and noise ratio from the detail images, and a blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric provides an assessment of both the noise and blur content. These values are weighted based on a linear regression against full-reference values. From these statistics, we can assess image quality without needing a reference image. We then test the validity of the obtained weights by R2 analysis as well as by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen
2014-04-01
In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
NASA Astrophysics Data System (ADS)
Azami, Hamed; Escudero, Javier
2017-01-01
Multiscale entropy (MSE) is an appealing tool to characterize the complexity of time series over multiple temporal scales. Recent developments in the field have tried to extend the MSE technique in different ways. Building on these trends, we propose the so-called refined composite multivariate multiscale fuzzy entropy (RCmvMFE) whose coarse-graining step uses variance (RCmvMFEσ2) or mean (RCmvMFEμ). We investigate the behavior of these multivariate methods on multichannel white Gaussian and 1/f noise signals, and two publicly available biomedical recordings. Our simulations demonstrate that RCmvMFEσ2 and RCmvMFEμ lead to more stable results and are less sensitive to the signals' length in comparison with the other existing multivariate multiscale entropy-based methods. The classification results also show that using both the variance and mean in the coarse-graining step offers complexity profiles with complementary information for biomedical signal analysis. We also made freely available all the Matlab codes used in this paper.
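The coarse-graining step that these variants modify is simple to state: split the series into non-overlapping blocks of length equal to the scale factor and summarize each block by its mean (classic MSE) or by its variance (the σ² variants). A minimal sketch, with names of my choosing:

```python
def coarse_grain(x, scale, stat="mean"):
    """Coarse-grain a series at the given scale using either the block
    mean (classic MSE) or the block (population) variance."""
    out = []
    for i in range(0, len(x) - scale + 1, scale):
        block = x[i:i + scale]
        mu = sum(block) / scale
        if stat == "mean":
            out.append(mu)
        else:
            # Variance within the block: tracks the dynamics of spread.
            out.append(sum((v - mu) ** 2 for v in block) / scale)
    return out
```

The entropy algorithm itself is unchanged; it is simply applied to the mean-coarse-grained or variance-coarse-grained series, which is why the two profiles carry complementary information.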
Adaptive multiscale processing for contrast enhancement
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.
1993-07-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency, results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) Dyadic wavelet transform (2) (phi) -transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide a local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Refined generalized multiscale entropy analysis for physiological signals
NASA Astrophysics Data System (ADS)
Liu, Yunxiao; Lin, Youfang; Wang, Jing; Shang, Pengjian
2018-01-01
Multiscale entropy analysis has become a prevalent complexity measure and has been successfully applied in various fields. However, it only takes into account the mean (first moment) in the coarse-graining procedure. Generalized multiscale entropy (MSEn), which uses higher moments to coarse-grain a time series, was therefore proposed, and MSEσ2 has been implemented. However, MSEσ2 sometimes yields an imprecise or undefined estimate of entropy, and the statistical reliability of sample entropy estimation decreases as the scale factor increases. For this reason, we developed a refined model, RMSEσ2, to improve MSEσ2. Simulations on both white noise and 1/f noise show that RMSEσ2 provides higher entropy reliability and reduces the occurrence of undefined entropy, making it especially suitable for short time series. Besides, we discuss the effect on RMSEσ2 analysis of outliers, data loss and other signal processing concerns. We apply the proposed model to evaluate the complexity of heartbeat interval time series derived from healthy young and elderly subjects, patients with congestive heart failure and patients with atrial fibrillation, respectively, and compare it to several popular complexity metrics. The results demonstrate that RMSEσ2-measured complexity (a) decreases with aging and disease, and (b) gives significant discrimination between different physiological/pathological states, which may facilitate clinical application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, Yi; Zhang, Xinyuan; Wang, Ningli, E-mail: wningli@vip.163.com, E-mail: puj@upmc.edu
2014-09-15
Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy deals with the variety of retinal vessels in width, intensity, resolution, and orientation; the progressive fusion combines consequent images while avoiding a sudden fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.
Refined multiscale fuzzy entropy based on standard deviation for biomedical signal analysis.
Azami, Hamed; Fernández, Alberto; Escudero, Javier
2017-11-01
Multiscale entropy (MSE) has been a prevalent algorithm to quantify the complexity of biomedical time series. Recent developments in the field have tried to alleviate the problem of undefined MSE values for short signals. Moreover, there has been a recent interest in using other statistical moments than the mean, i.e., variance, in the coarse-graining step of the MSE. Building on these trends, here we introduce the so-called refined composite multiscale fuzzy entropy based on the standard deviation (RCMFEσ) and mean (RCMFEμ) to quantify the dynamical properties of spread and mean, respectively, over multiple time scales. We demonstrate the dependency of the RCMFEσ and RCMFEμ, in comparison with other multiscale approaches, on several straightforward signal processing concepts using a set of synthetic signals. The results evidenced that the RCMFEσ and RCMFEμ values are more stable and reliable than the classical multiscale entropy ones. We also inspect the ability of using the standard deviation as well as the mean in the coarse-graining process using magnetoencephalograms in Alzheimer's disease and publicly available electroencephalograms recorded from focal and non-focal areas in epilepsy. Our results indicated that when the RCMFEμ cannot distinguish different types of dynamics of a particular time series at some scale factors, the RCMFEσ may do so, and vice versa. The results showed that RCMFEσ-based features lead to higher classification accuracies in comparison with the RCMFEμ-based ones. We also made freely available all the Matlab codes used in this study at http://dx.doi.org/10.7488/ds/1477.
Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A
2011-10-01
Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.
Edge enhancement and noise suppression for infrared image based on feature analysis
NASA Astrophysics Data System (ADS)
Jiang, Meng
2018-06-01
Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To realize this, we propose a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform as a multi-scale geometric analysis (MGA) tool. Secondly, after analyzing the defects of the traditional thresholding technique for noise suppression, we propose a novel feature extraction method that distinguishes image structures from noise well and use it to improve traditional thresholding. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are obtained to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.
Multiscale entropy analysis of human gait dynamics
NASA Astrophysics Data System (ADS)
Costa, M.; Peng, C.-K.; Goldberger, Ary L.; Hausdorff, Jeffrey M.
2003-12-01
We compare the complexity of human gait time series from healthy subjects under different conditions. Using the recently developed multiscale entropy algorithm, which provides a way to measure complexity over a range of scales, we observe that normal spontaneous walking has the highest complexity when compared to slow and fast walking and also to walking paced by a metronome. These findings have implications for modeling locomotor control and for quantifying gait dynamics in physiologic and pathologic states.
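The MSE procedure used above (Costa et al.) coarse-grains the series at each scale and computes the sample entropy of the result, with the tolerance r fixed from the original series' standard deviation. A compact pure-Python sketch of that outline (function names mine; O(n²) template matching, fine for short series):

```python
import math

def sample_entropy(x, m, r):
    # Hard-threshold SampEn: -log of the ratio of (m+1)-length to
    # m-length template matches within Chebyshev tolerance r.
    n = len(x)
    def count(k):
        tpl = [x[i:i + k] for i in range(n - k + 1)]
        c = 0
        for i in range(len(tpl) - 1):
            for j in range(i + 1, len(tpl)):
                if max(abs(p - q) for p, q in zip(tpl[i], tpl[j])) <= r:
                    c += 1
        return c
    a, b = count(m + 1), count(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def mse_curve(x, max_scale=4, m=2, r_frac=0.2):
    """SampEn of the mean-coarse-grained series at scales 1..max_scale,
    with r fixed from the original series as in the Costa procedure."""
    n = len(x)
    mu = sum(x) / n
    r = r_frac * (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    curve = []
    for s in range(1, max_scale + 1):
        cg = [sum(x[i:i + s]) / s for i in range(0, n - s + 1, s)]
        curve.append(sample_entropy(cg, m, r))
    return curve
```

For white noise the curve decreases monotonically with scale, whereas genuinely complex signals (such as healthy gait or heartbeat series) sustain high entropy across scales, which is the basis of the comparisons in this study.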
Rippled Quasiperpendicular Shock Observed by the Magnetospheric Multiscale Spacecraft.
Johlander, A; Schwartz, S J; Vaivads, A; Khotyaintsev, Yu V; Gingell, I; Peng, I B; Markidis, S; Lindqvist, P-A; Ergun, R E; Marklund, G T; Plaschke, F; Magnes, W; Strangeway, R J; Russell, C T; Wei, H; Torbert, R B; Paterson, W R; Gershman, D J; Dorelli, J C; Avanov, L A; Lavraud, B; Saito, Y; Giles, B L; Pollock, C J; Burch, J L
2016-10-14
Collisionless shock nonstationarity arising from microscale physics influences shock structure and particle acceleration mechanisms. Nonstationarity has been difficult to quantify due to the small spatial and temporal scales. We use the closely spaced (subgyroscale), high-time-resolution measurements from one rapid crossing of Earth's quasiperpendicular bow shock by the Magnetospheric Multiscale (MMS) spacecraft to compare competing nonstationarity processes. Using MMS's high-cadence kinetic plasma measurements, we show that the shock exhibits nonstationarity in the form of ripples.
Rippled Quasiperpendicular Shock Observed by the Magnetospheric Multiscale Spacecraft
NASA Technical Reports Server (NTRS)
Johlander, A.; Schwartz, S. J.; Vaivads, A.; Khotyaintsev, Yu. V.; Gingell, I.; Peng, I. B.; Markidis, S.; Lindqvist, P.-A.; Ergun, R. E.; Marklund, G. T.;
2016-01-01
Collisionless shock nonstationarity arising from microscale physics influences shock structure and particle acceleration mechanisms. Nonstationarity has been difficult to quantify due to the small spatial and temporal scales. We use the closely spaced (subgyroscale), high-time-resolution measurements from one rapid crossing of Earth's quasiperpendicular bow shock by the Magnetospheric Multiscale (MMS) spacecraft to compare competing nonstationarity processes. Using MMS's high-cadence kinetic plasma measurements, we show that the shock exhibits nonstationarity in the form of ripples.
First-passage times for pattern formation in nonlocal partial differential equations
NASA Astrophysics Data System (ADS)
Cáceres, Manuel O.; Fuentes, Miguel A.
2015-10-01
We describe the lifetimes associated with the stochastic evolution from an unstable uniform state to a patterned one when the time evolution of the field is controlled by a nonlocal Fisher equation. A small noise is added to the evolution equation to define the lifetimes and to calculate the mean first-passage time of the stochastic field through a given threshold value, before the patterned steady state is reached. In order to obtain analytical results we introduce a stochastic multiscale perturbation expansion. This multiscale expansion can also be used to tackle multiplicative stochastic partial differential equations. A critical slowing down is predicted for the marginal case when the Fourier phase of the unstable initial condition is null. We carry out Monte Carlo simulations to show the agreement with our theoretical predictions. Analytic results for the bifurcation point and asymptotic analysis of traveling wave-front solutions are included to get insight into the noise-induced transition phenomena mediated by invading fronts.
First-passage times for pattern formation in nonlocal partial differential equations.
Cáceres, Manuel O; Fuentes, Miguel A
2015-10-01
We describe the lifetimes associated with the stochastic evolution from an unstable uniform state to a patterned one when the time evolution of the field is controlled by a nonlocal Fisher equation. A small noise is added to the evolution equation to define the lifetimes and to calculate the mean first-passage time of the stochastic field through a given threshold value, before the patterned steady state is reached. In order to obtain analytical results we introduce a stochastic multiscale perturbation expansion. This multiscale expansion can also be used to tackle multiplicative stochastic partial differential equations. A critical slowing down is predicted for the marginal case when the Fourier phase of the unstable initial condition is null. We carry out Monte Carlo simulations to show the agreement with our theoretical predictions. Analytic results for the bifurcation point and asymptotic analysis of traveling wave-front solutions are included to get insight into the noise-induced transition phenomena mediated by invading fronts.
NASA Astrophysics Data System (ADS)
Gottwald, Georg; Melbourne, Ian
2013-04-01
Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results on the convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore, we present rigorous results for chaotic slow-fast maps, which occur as numerical discretizations of continuous-time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, in the case of maps, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type. It is shown that the limit system of a numerical discretisation differs from that of the associated continuous-time system. This has important consequences when interpreting the statistics of long-time simulations of multi-scale systems: they may be very different from those of the original continuous-time system which we set out to study.
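A minimal caricature of the slow-fast setting discussed above can be written down directly: below, the fast variable follows the chaotic logistic map and drives mean-zero kicks of size sqrt(eps) in the slow variable, the regime in which diffusive limits arise. The specific map, observable, and parameters are our illustrative assumptions, not the systems analyzed in the paper.

```python
import math

def slow_fast_map(eps=0.01, n=10000, x0=0.0, y0=0.2):
    """Toy slow-fast skew product: y evolves under the chaotic logistic
    map, and the slow variable x accumulates mean-zero kicks of size
    sqrt(eps).  Over O(1/eps) steps x behaves diffusively."""
    x, y = x0, y0
    xs = [x]
    for _ in range(n):
        x += math.sqrt(eps) * (y - 0.5)  # mean-zero observable of the fast map
        y = 4.0 * y * (1.0 - y)          # logistic map: chaotic and mixing
        xs.append(x)
    return xs

trajectory = slow_fast_map()
```

Whether the accumulated kicks converge to an Ito, Stratonovich, or neither-type integral is exactly the subtlety the abstract highlights; the sketch only reproduces the scaling, not the limit theorem.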
NASA Astrophysics Data System (ADS)
Cesar, Roberto Marcondes; Costa, Luciano da Fontoura
1997-05-01
The estimation of the curvature of experimentally obtained curves is an important issue in many applications of image analysis including biophysics, biology, particle physics, and high energy physics. However, the accurate calculation of the curvature of digital contours has proven to be a difficult endeavor, mainly because of the noise and distortions that are always present in sampled signals. Errors ranging from 1% to 1000% have been reported with respect to the application of standard techniques in the estimation of the curvature of circular contours [M. Worring and A. W. M. Smeulders, CVGIP: Im. Understanding, 58, 366 (1993)]. This article explains how diagrams of multiscale bending energy can be easily obtained from curvegrams and used as a robust general feature for morphometric characterization of neural cells. The bending energy is an interesting global feature for shape characterization that expresses the amount of energy needed to transform the specific shape under analysis into its lowest energy state (i.e., a circle). The curvegram, which can be accurately obtained by using digital signal processing techniques (more specifically through the Fourier transform and its inverse, as described in this work), provides multiscale representation of the curvature of digital contours. The estimation of the bending energy from the curvegram is introduced and exemplified with respect to a series of neural cells. The masked high curvature effect is reported and its implications to shape analysis are discussed. It is also discussed and illustrated that, by normalizing the multiscale bending energy with respect to a standard circle of unitary perimeter, this feature becomes an effective means for expressing shape complexity in a way that is invariant to rotation, translation, and scaling, and that is robust to noise and other artifacts implied by image acquisition.
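As a concrete illustration of the bending-energy idea, the sketch below estimates curvature by finite differences on a sampled closed contour and averages its square; for a circle of radius r this recovers approximately 1/r^2, the lowest-energy reference mentioned above. Note that the paper computes curvature via the Fourier-based curvegram; this finite-difference version is only a simplified stand-in.

```python
import math

def bending_energy(xs, ys):
    """Discrete bending energy of a closed contour: mean squared
    curvature, with curvature estimated by central finite differences
    (the paper uses a Fourier-based curvegram instead)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        xp, x0, xn = xs[i - 1], xs[i], xs[(i + 1) % n]
        yp, y0, yn = ys[i - 1], ys[i], ys[(i + 1) % n]
        dx, dy = (xn - xp) / 2.0, (yn - yp) / 2.0          # first derivatives
        ddx, ddy = xn - 2 * x0 + xp, yn - 2 * y0 + yp      # second derivatives
        k = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
        total += k * k
    return total / n

# a circle of radius r has curvature 1/r everywhere, so energy ~ 1/r^2
n, r = 200, 2.0
theta = [2 * math.pi * i / n for i in range(n)]
energy = bending_energy([r * math.cos(t) for t in theta],
                        [r * math.sin(t) for t in theta])
```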
Nonlocal variational model and filter algorithm to remove multiplicative noise
NASA Astrophysics Data System (ADS)
Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi
2010-07-01
The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by Gaussian noise. Building on the NL method and striving to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise; by combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method: it is superior in preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.
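The core NL means idea can be sketched in a few lines for a 1-D signal: each sample is replaced by a weighted average of samples whose surrounding patches look similar, with weights decaying exponentially in patch distance. This is the generic Gaussian-noise form, not the conditional-mean-square-error multiplicative-noise variant the paper derives; patch size, search window, and bandwidth below are illustrative.

```python
import math

def nl_means_1d(signal, patch=1, search=5, h=0.5):
    """Toy 1-D non-local means: average samples weighted by similarity
    of the patches centred on them (bandwidth h controls selectivity)."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d2 = 0.0
            for k in range(-patch, patch + 1):
                pi = min(max(i + k, 0), n - 1)   # clamp at the borders
                pj = min(max(j + k, 0), n - 1)
                d2 += (signal[pi] - signal[pj]) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 1.1, 0.9, 1.05]
denoised = nl_means_1d(noisy)
```

Because each output is a convex combination of input samples, the filter never overshoots the data range, which is one reason it preserves small structures well.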
Iterative Self-Dual Reconstruction on Radar Image Recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Charles; Medeiros, Fatima; Ushizima, Daniela
2010-05-21
Imaging systems such as ultrasound, sonar, laser and synthetic aperture radar (SAR) are subject to speckle noise during image acquisition. Before analyzing these images, it is often necessary to remove the speckle noise using filters. We combine properties of two mathematical morphology filters with speckle statistics to propose a signal-dependent noise filter for multiplicative noise. We describe a multiscale scheme that preserves sharp edges while smoothing homogeneous areas, by combining local statistics with two mathematical morphology filters: the alternating sequential and the self-dual reconstruction algorithms. The experimental results show that the proposed approach is less sensitive to varying window sizes when applied to simulated and real SAR images in comparison with standard filters.
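The alternating sequential filter mentioned above is easy to illustrate in 1-D: one stage applies an opening (erosion then dilation) followed by a closing (dilation then erosion), removing bright and dark impulses narrower than the structuring window while preserving wider plateaus. This is a generic morphological sketch; the paper's filter additionally combines self-dual reconstruction with local speckle statistics, which is not shown here.

```python
def erode(x, r=1):
    """Greyscale erosion: running minimum over a window of radius r."""
    n = len(x)
    return [min(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(x, r=1):
    """Greyscale dilation: running maximum over a window of radius r."""
    n = len(x)
    return [max(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def alternating_sequential(x, r=1):
    """One stage of an alternating sequential filter: opening
    (erode, dilate) followed by closing (dilate, erode)."""
    opened = dilate(erode(x, r), r)
    return erode(dilate(opened, r), r)
```

A single-sample spike is removed while a wide plateau survives, which is the "smooth homogeneous areas, keep sharp edges" behaviour the abstract describes.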
NASA Astrophysics Data System (ADS)
Perrier, E. M. A.; Bird, N. R. A.; Rieutord, T. B.
2010-04-01
Quantifying the connectivity of pore networks is a key issue not only for modelling fluid flow and solute transport in porous media but also for assessing the ability of soil ecosystems to filter bacteria, viruses and any type of living microorganisms, as well as inert particles, which pose a contamination risk. Straining is the main mechanical component of filtration processes: it is due to size effects, when a given soil retains a conveyed entity larger than the pores through which it is attempting to pass. We postulate that the range of sizes of entities which can be trapped inside soils has to be associated with the large range of scales involved in natural soil structures, and that information on the pore size distribution has to be complemented by information on a Critical Filtration Size (CFS) delimiting the transition between percolating and non-percolating regimes in multiscale pore networks. We show that the mass fractal dimensions which are classically used in soil science to quantify scaling laws in observed pore size distributions can also be used to build 3-D multiscale models of pore networks exhibiting such a critical transition. We extend to the 3-D case a new theoretical approach recently developed to address the connectivity of 2-D fractal networks (Bird and Perrier, 2009). Theoretical arguments based on renormalisation functions provide insight into multi-scale connectivity and a first estimation of CFS. Numerical experiments on 3-D prefractal media confirm the qualitative theory. These results open the way towards a new methodology to estimate soil filtration efficiency from the construction of soil structural models to be calibrated on available multiscale data.
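The construction described above can be caricatured in 2-D: a randomized prefractal mask is generated by recursively declaring blocks solid, and a flood fill tests whether the pore phase percolates from one side to the other. The 2-D setting, the site-based construction, and all parameters are simplifying assumptions relative to the 3-D networks and renormalisation analysis of the abstract.

```python
import random
from collections import deque

def prefractal_medium(levels=3, b=3, p_solid=0.3, seed=2):
    """Randomized 2-D prefractal: at each level, each still-porous block
    is declared solid with probability p_solid, otherwise subdivided.
    Returns a (b**levels) x (b**levels) grid, True = pore."""
    rng = random.Random(seed)
    size = b ** levels
    grid = [[True] * size for _ in range(size)]
    block = size
    for _ in range(levels):
        block //= b
        for by in range(0, size, block):
            for bx in range(0, size, block):
                # top-left cell is representative: blocks are uniform
                if grid[by][bx] and rng.random() < p_solid:
                    for y in range(by, by + block):
                        for x in range(bx, bx + block):
                            grid[y][x] = False
    return grid

def percolates(grid):
    """Breadth-first flood fill: does the pore phase connect the left
    and right edges of the grid?"""
    size = len(grid)
    seen = [[False] * size for _ in range(size)]
    q = deque()
    for y in range(size):
        if grid[y][0]:
            seen[y][0] = True
            q.append((y, 0))
    while q:
        y, x = q.popleft()
        if x == size - 1:
            return True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < size and 0 <= nx < size
                    and grid[ny][nx] and not seen[ny][nx]):
                seen[ny][nx] = True
                q.append((ny, nx))
    return False
```

Sweeping p_solid (or the subdivision rule) locates the percolating/non-percolating transition that the critical filtration size formalizes.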
Infrared small target detection based on multiscale center-surround contrast measure
NASA Astrophysics Data System (ADS)
Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei
2018-04-01
Infrared (IR) small target detection plays a critical role in infrared search and track (IRST) systems. Although it has been studied for years, difficulties remain in cluttered environments. Guided by the principle by which humans discriminate small targets from a natural scene, namely that there is a signature of discontinuity between an object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, an entropy-based window selection technique is used to determine the maximum neighboring window size. Then, we construct a novel multiscale center-surround contrast measure to calculate the saliency map. Compared with the original image, the MCSCM map contains less background clutter and residual noise. Subsequently, a simple threshold is used to segment the target. Experimental results show that our method achieves better performance.
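The center-surround principle behind MCSCM can be sketched as follows: at each pixel, the mean of a small center window is compared with the mean of a larger surrounding window, and the maximum response over scales forms the saliency map. The window radii and the plain box means are illustrative assumptions; the published measure also includes the entropy-based window selection described above.

```python
def center_surround_contrast(img, scales=(1, 2)):
    """Toy multiscale center-surround contrast: centre-window mean minus
    surround-window mean, maximised over scales (clamped at zero, since
    a small bright target gives a positive response)."""
    h, w = len(img), len(img[0])

    def box_mean(y, x, r):
        vals = [img[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))]
        return sum(vals) / len(vals)

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = 0.0
            for s in scales:
                best = max(best, box_mean(y, x, s) - box_mean(y, x, 3 * s))
            out[y][x] = best
    return out
```

A single bright pixel in a dark scene produces a strong response at its location and none elsewhere, after which a simple threshold segments the target.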
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
The multi-scale 2-D Gaussian filter is widely used in feature extraction (e.g., SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, its computational complexity remains an issue for real-time image processing systems. To address this problem, we propose an FPGA-based framework for multi-scale 2-D Gaussian filtering. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in, first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by clock glitches. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
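The multiplier saving from separating the 2-D Gaussian into two 1-D passes is easy to demonstrate in software: a row convolution followed by a column convolution needs 2(2r+1) products per pixel instead of (2r+1)^2. The sketch below uses clamped borders and illustrative kernel parameters; it mirrors the arithmetic, not the FPGA pipeline.

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """Normalised 1-D Gaussian taps."""
    taps = [math.exp(-(i * i) / (2 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    s = sum(taps)
    return [t / s for t in taps]

def conv_rows(img, k):
    """1-D convolution along each row, clamping at the borders."""
    r = len(k) // 2
    h, w = len(img), len(img[0])
    return [[sum(k[j + r] * img[y][min(max(x + j, 0), w - 1)]
                 for j in range(-r, r + 1)) for x in range(w)]
            for y in range(h)]

def gaussian_2d_separable(img, sigma=1.0, radius=2):
    """Separable 2-D Gaussian: row pass, then column pass (done as a
    row pass on the transpose)."""
    k = gaussian_kernel_1d(sigma, radius)
    tmp = conv_rows(img, k)
    tmp_t = [list(row) for row in zip(*tmp)]
    out_t = conv_rows(tmp_t, k)
    return [list(row) for row in zip(*out_t)]
```

Because the kernel is normalised, a constant image passes through unchanged, a quick sanity check on any filter implementation.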
NASA Astrophysics Data System (ADS)
Yin, Yi; Shang, Pengjian
2013-12-01
We use multiscale detrended fluctuation analysis (MSDFA) and multiscale detrended cross-correlation analysis (MSDCCA) to investigate auto-correlation (AC) and cross-correlation (CC) in the US and Chinese stock markets during 1997-2012. The results show that US and Chinese stock indices differ in terms of their multiscale AC structures. Stock indices in the same region also differ with regard to their multiscale AC structures. We analyze AC and CC behaviors among indices for the same region to determine similarity among six stock indices and divide them into four groups accordingly. We choose S&P500, NQCI, HSI, and the Shanghai Composite Index as representative samples for simplicity. MSDFA and MSDCCA results and average MSDFA spectra for local scaling exponents (LSEs) for individual series are presented. We find that the MSDCCA spectrum for LSE CC between two time series generally tends to be greater than the average MSDFA LSE spectrum for individual series. We obtain detailed multiscale structures and relations for CC between the four representatives. MSDFA and MSDCCA with secant rolling windows of different sizes are then applied to reanalyze the AC and CC. Vertical and horizontal comparisons of different window sizes are made. The MSDFA and MSDCCA results for the original window size are confirmed and some new interesting characteristics and conclusions regarding multiscale correlation structures are obtained.
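For readers unfamiliar with DFA, the monoscale building block of MSDFA can be sketched compactly: integrate the demeaned series, detrend it linearly in windows of length s, and record the RMS residual F(s); scaling exponents are then slopes of log F(s) against log s. The scales and toy input below are illustrative assumptions.

```python
import random

def dfa(x, scales=(4, 8, 16)):
    """Minimal detrended fluctuation analysis: returns the RMS
    fluctuation F(s) of the linearly detrended profile for each scale."""
    n = len(x)
    mean = sum(x) / n
    profile, running = [], 0.0
    for v in x:                      # integrated (profile) series
        running += v - mean
        profile.append(running)
    fluct = []
    for scale in scales:
        rms_sum, count = 0.0, 0
        for start in range(0, n - scale + 1, scale):
            seg = profile[start:start + scale]
            t = list(range(scale))
            tm, sm = sum(t) / scale, sum(seg) / scale
            denom = sum((ti - tm) ** 2 for ti in t)
            slope = sum((ti - tm) * (si - sm)
                        for ti, si in zip(t, seg)) / denom
            resid = [si - (sm + slope * (ti - tm))
                     for ti, si in zip(t, seg)]
            rms_sum += sum(r * r for r in resid)
            count += scale
        fluct.append((rms_sum / count) ** 0.5)
    return fluct

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(512)]
fluct = dfa(white)
```

For uncorrelated noise F(s) grows roughly like s^0.5, so the fluctuation values should increase with scale; MSDFA tracks how the local slope varies across scales, and MSDCCA replaces the residual variance with a detrended covariance between two series.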
Landscape variation of seasonal pool plant communities in forests of northern Minnesota, USA
Brian Palik; Dwight Streblow; Leanne Egeland; Richard Buech
2007-01-01
Seasonal forest pools are abundant in the northern Great Lakes forest landscape, but the range of variation in their plant communities and the relationship of this variation to multi-scale landscape features remain poorly quantified. We examined seasonal pools in forests of northern Minnesota USA with the objective of quantifying the range of variation in plant...
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological ‘zoom-in’ simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring
NASA Astrophysics Data System (ADS)
Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin
2015-08-01
In research on optical synthetic aperture imaging systems, phase congruency is a central problem, and it is necessary to detect the sub-aperture phase. The edges in a sub-aperture system are more complex than those in a traditional optical imaging system. Moreover, owing to the steep slopes of large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. An efficient edge detection method is therefore urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, so the transform has a certain noise suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained, and the cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima along the gradient directions. Because the result still contains noise, the adaptive threshold method is used to select among the modulus maxima: points exceeding the threshold value are taken as boundary points. Finally, we apply erosion and dilation to the resulting image to obtain continuous image boundaries.
Stacked competitive networks for noise reduction in low-dose CT
Du, Wenchao; Chen, Hu; Wu, Zhihong; Sun, Huaiqiang; Liao, Peixi
2017-01-01
Since absorption of X-ray radiation can induce cancer, genetic damage, and other diseases in patients, researchers usually attempt to reduce the radiation dose. However, reducing the radiation dose associated with CT scans unavoidably increases the severity of noise and artifacts, which can seriously affect diagnostic confidence. Given the outstanding performance of deep neural networks in image processing, in this paper we propose a Stacked Competitive Network (SCN) approach to noise reduction, which stacks several successive Competitive Blocks (CBs). The carefully handcrafted design of the competitive blocks was inspired by the idea of multi-scale processing and of improving the network's capacity. Qualitative and quantitative evaluations demonstrate the competitive performance of the proposed method in noise suppression, structural preservation, and lesion detection. PMID:29267360
NASA Astrophysics Data System (ADS)
Lao, Zhiqiang; Zheng, Xin
2011-03-01
This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems, aimed at improving sensitivity in detecting architectural distortion and spiculated masses. This approach addresses the difficulty of predetermining the neighborhood size for feature extraction when characterizing lesions demonstrating spiculated mass/architectural distortion, which may appear in different sizes. The quantification is based on recognizing tissue spiculation and distortion patterns using a multiscale first-order phase portrait model in a texture orientation field generated by a Gabor filter bank. A feature map is generated from the multiscale quantification for each mammogram, and two features are then extracted from the feature map. These two features are combined with other mass features to provide enhanced discriminative ability in detecting lesions demonstrating spiculated mass and architectural distortion. The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying the method to over 500 cancer cases and over 1000 normal cases.
NASA Technical Reports Server (NTRS)
Hadden, C. M.; Klimek-McDonald, D. R.; Pineda, E. J.; King, J. A.; Reichanadter, A. M.; Miskioglu, I.; Gowtham, S.; Odegard, G. M.
2015-01-01
Because of the relatively high specific mechanical properties of carbon fiber/epoxy composite materials, they are often used as structural components in aerospace applications. Graphene nanoplatelets (GNPs) can be added to the epoxy matrix to improve the overall mechanical properties of the composite. The resulting GNP/carbon fiber/epoxy hybrid composites have been studied using multiscale modeling to determine the influence of GNP volume fraction, epoxy crosslink density, and GNP dispersion on the mechanical performance. The hierarchical multiscale modeling approach developed herein includes Molecular Dynamics (MD) and micromechanical modeling, and it is validated with experimental testing of the same hybrid composite material system. The results indicate that the multiscale modeling approach is accurate and provides physical insight into the composite mechanical behavior. Also, the results quantify the substantial impact of GNP volume fraction and dispersion on the transverse mechanical properties of the hybrid composite while the effect on the axial properties is shown to be insignificant.
Feature Visibility Limits in the Non-Linear Enhancement of Turbid Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.
2003-01-01
The advancement of non-linear processing methods for generic automatic clarification of turbid imagery has led us from extensions of entirely passive multiscale Retinex processing to a new framework of active measurement and control of the enhancement process called the Visual Servo. In the process of testing this new non-linear computational scheme, we have identified that feature visibility limits in the post-enhancement image now simplify to a single signal-to-noise figure of merit: a feature is visible if the feature-background signal difference is greater than the RMS noise level. In other words, a signal-to-noise limit of approximately unity constitutes a lower limit on feature visibility.
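The visibility criterion identified above reduces to a one-line check; the helper below simply encodes it (the function and parameter names are ours):

```python
def feature_visible(feature_level, background_level, rms_noise):
    """A feature is visible when the feature-background signal
    difference exceeds the RMS noise level, i.e. a signal-to-noise
    limit of approximately unity."""
    return abs(feature_level - background_level) > rms_noise
```

In a post-enhancement image this single figure of merit replaces the more elaborate visibility models needed before non-linear processing.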
Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A
2015-07-01
Images consist of structures of varying scales: large-scale structures such as flat regions, and small-scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L^2) image decomposition of Tadmor et al. (2004), one starts by extracting coarse-scale structures from a given image and successively extracts finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise can be considered a fine-scale structure; thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation (TV) flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by an appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control over speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of the energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improves performance in the registration of noisy cardiac MRI images, compared with other methods such as bilateral or Gaussian filtering.
A clinical application of the multiscale registration algorithm is also demonstrated for aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions. Copyright © 2015. Published by Elsevier B.V.
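The total-variation machinery underlying this work can be illustrated with a plain (unweighted) 1-D gradient descent on a smoothed TV energy; the weighted flow of the paper replaces the constant regularization parameter with an image-dependent weight. Step size, smoothing epsilon, and the test signal below are illustrative assumptions.

```python
def total_variation(x):
    """Discrete total variation: sum of absolute neighbour differences."""
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def tv_denoise_1d(signal, lam=0.3, step=0.02, iters=500, eps=1e-3):
    """Explicit gradient descent on the smoothed 1-D ROF-type energy
    0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]   # data-fidelity term
        for i in range(n - 1):                        # smoothed-TV term
            d = u[i + 1] - u[i]
            g = lam * d / (d * d + eps) ** 0.5
            grad[i] -= g
            grad[i + 1] += g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

noisy_step = [0.0, 0.1, -0.1, 0.0, 1.0, 0.9, 1.1, 1.0]
smoothed = tv_denoise_1d(noisy_step)
```

On a noisy step signal the flow flattens the small oscillations while largely retaining the jump, which is the edge-preserving behaviour the abstract exploits at multiple scales.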
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means of providing early warning of rock or coal dynamic disasters, and its first step is microseismic event detection, although low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper applies permutation entropy and a support vector machine to detect low-SNR microseismic events. First, a signal feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least-squares support vector machine is built by performing a multi-scale permutation entropy calculation on the collected vibration signals and constructing a feature vector set of the signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which features high classification accuracy and fast real-time performance, can meet the requirements of online, real-time extraction of microseismic events.
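Multi-scale permutation entropy itself is straightforward to sketch: coarse-grain the series at each scale, then measure the Shannon entropy of ordinal (rank-order) patterns, normalised to [0, 1]. Embedding order, delay, and the toy signals are illustrative; the paper pairs such features with a least-squares SVM, which is not shown here.

```python
import math
import random
from itertools import permutations

def coarse_grain(x, scale):
    """Non-overlapping moving average used in multi-scale analyses."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of the ordinal patterns of x."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = [x[i + j * delay] for j in range(order)]
        counts[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total)
             for c in counts.values() if c > 0)
    return h / math.log(math.factorial(order))

rng = random.Random(3)
noise = [rng.gauss(0.0, 1.0) for _ in range(300)]
pe_trend = permutation_entropy(list(range(30)))        # one ordinal pattern
pe_noise = permutation_entropy(coarse_grain(noise, 2)) # scale-2 MPE of noise
```

A monotone trend yields entropy 0 while noise approaches 1, which is the kind of separation between events and background that the feature vectors in the paper exploit.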
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2012-10-01
A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends the MS-VSTS to spherical two-plus-one-dimensional (2D-1D) data, where the first two dimensions are longitude and latitude and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma-rays over a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).
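The variance-stabilizing step at the heart of MS-VSTS can be illustrated with the classical Anscombe transform, which maps Poisson counts to approximately unit-variance data before ordinary Gaussian denoising machinery is applied. The Poisson sampler and parameters below are illustrative, and this sketch ignores the spherical multiscale machinery entirely.

```python
import math
import random

def anscombe(k):
    """Anscombe variance-stabilising transform: for Poisson counts k
    with moderately large mean, 2*sqrt(k + 3/8) has variance close to 1."""
    return 2.0 * math.sqrt(k + 0.375)

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)

def poisson(lam):
    """Knuth's Poisson sampler, adequate for small/moderate lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

counts = [poisson(20.0) for _ in range(5000)]
raw_var = sample_variance(counts)                         # ~ lam
vst_var = sample_variance([anscombe(c) for c in counts])  # ~ 1
```

After stabilisation, a single noise threshold works across bright and faint regions, which is what lets wavelet-style denoisers be applied band by band.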
NASA Astrophysics Data System (ADS)
Laleian, A.; Valocchi, A. J.; Werth, C. J.
2017-12-01
Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region extending the transverse mixing zone length. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find the multiscale model with this decomposition has a reduced run time and consistent result in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves downgradient where transverse mixing is more fully developed. 
Also, this decomposition poses additional challenges with respect to mortar coupling. We explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo
2016-06-01
Effective speckle reduction in ultrasound B-mode imaging is important for enhancing the image quality and improving the accuracy in image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales while simultaneously suppressing speckle development via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, e.g., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06 in the phantom study, respectively. Reconstructed B-mode images that were developed using the five speckle reduction methods were reviewed by three radiologists for evaluation based on each radiologist's diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR methods in terms of conspicuity, margin sharpness, artificiality, and contrast, p<0.0001. For the kidney and thyroid images, the FESR method showed similar improvement over other methods. However, the FESR method did not show statistically significant improvement compared with the OBNLM method in margin sharpness for the kidney and thyroid images. 
These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance both of conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
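The variance-partitioning logic described above can be sketched numerically: fit models with the variables from each scale separately and jointly, then read the pure and shared components off the R² values. This is a generic illustration, not the authors' code; the synthetic "habitat" variables and their names are invented for the example.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def decompose(Xa, Xb, y):
    """Partition explained variance into pure scale-A, pure scale-B, and the
    shared component that reflects cross-scale correlation."""
    r_a = r_squared(Xa, y)
    r_b = r_squared(Xb, y)
    r_ab = r_squared(np.column_stack([Xa, Xb]), y)
    pure_a = r_ab - r_b          # variance only scale A explains
    pure_b = r_ab - r_a          # variance only scale B explains
    shared = r_a + r_b - r_ab    # variance both scales claim
    return pure_a, pure_b, shared

rng = np.random.default_rng(5)
n = 2000
common = rng.standard_normal(n)                # drives the cross-scale correlation
xa = common + 0.5 * rng.standard_normal(n)     # hypothetical 'local-scale' variable
xb = common + 0.5 * rng.standard_normal(n)     # hypothetical 'landscape-scale' variable
y = xa + xb + rng.standard_normal(n)
pure_a, pure_b, shared = decompose(xa[:, None], xb[:, None], y)
# The shared component is large even though r(xa, xb) is well below 1.
print(pure_a, pure_b, shared)
```

As in the study, moderately correlated predictors at two scales can leave most of the explained variance in the shared component, which neither scale can claim alone.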
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.
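As background for the entropy measures discussed in this and the surrounding records, a minimal univariate sketch of coarse-graining plus sample entropy (the core of conventional MSE) is given below. It is a simplified illustration, not the multivariate estimator the authors evaluate; the defaults m = 2 and r = 0.2 are simply the values commonly quoted in the literature.

```python
import numpy as np

def coarse_grain(x, scale):
    """Standard MSE coarse-graining: average non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy -ln(A/B): B counts template matches of length m,
    A counts matches of length m+1, within tolerance r * std(x) (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            total += int(np.sum(d <= tol))
        return total
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r=0.2):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

At large scales the coarse-grained series becomes short, which is exactly the estimation-reliability problem that the composite and refined-composite variants in the neighboring records address.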
Refined composite multiscale weighted-permutation entropy of financial time series
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian
2018-04-01
For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimates fluctuate strongly under slight shifts of the data locations and vary significantly with the length of the time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Compared with other methods on both synthetic data and financial time series, the RCMWPE method not only retains the advantages of MWPE but also shows lower sensitivity to data locations, greater stability, and much weaker dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
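The weighted-permutation core that MWPE and RCMWPE build on can be sketched as follows. This minimal version handles a single scale only; the composite and refined-composite refinements (averaging over scale-shifted coarse-grainings) are omitted, and the variance weighting shown is one common choice, not necessarily the authors' exact formulation.

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, m=3, tau=1):
    """Single-scale weighted permutation entropy, normalized to [0, 1].
    Each ordinal pattern is weighted by the variance of its window, which is
    how amplitude information enters the measure."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        w = x[i:i + (m - 1) * tau + 1:tau]
        pattern = tuple(np.argsort(w))        # ordinal (permutation) pattern
        weights[pattern] = weights.get(pattern, 0.0) + np.var(w)
    p = np.array(list(weights.values()))
    p = p / p.sum()
    # Shannon entropy of the weighted pattern distribution, normalized by log(m!)
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))
```

White noise yields values near 1 (all patterns equally weighted), while a monotone series collapses to a single pattern and yields 0.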
Poisson denoising on the sphere
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.
2009-08-01
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST on the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the isotropic undecimated wavelet transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on hybrid steepest descent (HSD). The method is tested on simulated Fermi data.
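The variance-stabilization idea is easy to demonstrate with the classical Anscombe transform, the archetype of the VST family (the MS-VSTS transform itself operates in the wavelet domain and differs in detail). For Poisson counts with a not-too-small mean, 2*sqrt(x + 3/8) has approximately unit variance regardless of the mean:

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe VST: for Poisson counts with a not-too-small mean,
    2*sqrt(x + 3/8) is approximately Gaussian with variance 1."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(1)
for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200_000)
    # Raw variance equals the mean (grows with lam); stabilized variance stays near 1.
    print(lam, counts.var().round(2), anscombe(counts).var().round(3))
```

After stabilization, standard Gaussian thresholding of the coefficients becomes applicable, which is what the hypothesis-testing step in the abstract exploits.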
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Steve; Holland, Marika
The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST on the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is then proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask. The gaps have to be interpolated: an extension to inpainting is then proposed. The method, applied on simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
Measurement of hearing aid internal noise
Lewis, James D.; Goodman, Shawn S.; Bentler, Ruth A.
2010-01-01
Hearing aid equivalent input noise (EIN) measures assume the primary source of internal noise to be located prior to amplification and to be constant regardless of input level. EIN will underestimate internal noise in the case that noise is generated following amplification. The present study investigated the internal noise levels of six hearing aids (HAs). Concurrent with HA processing of a speech-like stimulus with both adaptive features (acoustic feedback cancellation, digital noise reduction, microphone directionality) enabled and disabled, internal noise was quantified for various stimulus levels as the variance across repeated trials. Changes in noise level as a function of stimulus level demonstrated that (1) generation of internal noise is not isolated to the microphone, (2) noise may be dependent on input level, and (3) certain adaptive features may contribute to internal noise. Quantifying internal noise as the variance of the output measures allows for noise to be measured under real-world processing conditions, accounts for all sources of noise, and is predictive of internal noise audibility. PMID:20370034
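The across-trial variance measure used in this study can be sketched directly: with an identical stimulus on every trial, the deterministic response cancels in the across-trial variance, leaving only the internally generated noise. The sine stimulus and noise level below are invented for illustration.

```python
import numpy as np

def internal_noise_power(trials):
    """Estimate device-generated noise as the variance across repeated trials.
    `trials` has shape (n_trials, n_samples); the stimulus-driven component is
    identical across rows, so the across-trial variance isolates internal noise."""
    trials = np.asarray(trials, dtype=float)
    return trials.var(axis=0, ddof=1).mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
stimulus = np.sin(2 * np.pi * 5 * t)         # same deterministic input on every trial
noise_sd = 0.1                               # hypothetical internal-noise level
trials = stimulus + noise_sd * rng.standard_normal((50, t.size))
print(internal_noise_power(trials))          # ≈ noise_sd**2 = 0.01
```

Because the estimate is formed at the output, it captures noise injected anywhere in the processing chain, which is the advantage over equivalent-input-noise measures noted in the abstract.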
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
Multiscale analysis of information dynamics for linear multivariate processes.
Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele
2016-08-01
In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
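The key observation, that the change of scale is a moving-average filter followed by downsampling, can be illustrated on a simple AR(1) process. This sketch shows only the rescaling step, not the state-space machinery the paper develops.

```python
import numpy as np

def rescale(x, scale):
    """Multiscale change of scale: an averaging filter of length `scale`
    (the MA filtering step) followed by downsampling by `scale`."""
    kernel = np.ones(scale) / scale
    filtered = np.convolve(x, kernel, mode='valid')   # MA(scale-1) filtering
    return filtered[::scale]                          # downsampling

# AR(1) process: x[t] = a * x[t-1] + e[t]
rng = np.random.default_rng(2)
a, n = 0.9, 100_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + e[t]

lag1 = lambda z: np.corrcoef(z[:-1], z[1:])[0, 1]
y = rescale(x, 4)
# Rescaling changes the correlation structure: the result is no longer AR(1).
print(lag1(x), lag1(y))
```

The rescaled series is ARMA rather than AR, which is why naive AR modeling of filtered and downsampled data misestimates multiscale information measures.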
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheibe, Timothy D.; Murphy, Ellyn M.; Chen, Xingyuan
2015-01-01
One of the most significant challenges facing hydrogeologic modelers is the disparity between those spatial and temporal scales at which fundamental flow, transport and reaction processes can best be understood and quantified (e.g., microscopic to pore scales, seconds to days) and those at which practical model predictions are needed (e.g., plume to aquifer scales, years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this paper, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flow chart (Multiscale Analysis Platform or MAP), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme.
As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and may become a viable alternative to conventional single-scale models in the near future.
Scheibe, Timothy D; Murphy, Ellyn M; Chen, Xingyuan; Rice, Amy K; Carroll, Kenneth C; Palmer, Bruce J; Tartakovsky, Alexandre M; Battiato, Ilenia; Wood, Brian D
2015-01-01
One of the most significant challenges faced by hydrogeologic modelers is the disparity between the spatial and temporal scales at which fundamental flow, transport, and reaction processes can best be understood and quantified (e.g., microscopic to pore scales and seconds to days) and at which practical model predictions are needed (e.g., plume to aquifer scales and years to centuries). While the multiscale nature of hydrogeologic problems is widely recognized, technological limitations in computation and characterization restrict most practical modeling efforts to fairly coarse representations of heterogeneous properties and processes. For some modern problems, the necessary level of simplification is such that model parameters may lose physical meaning and model predictive ability is questionable for any conditions other than those to which the model was calibrated. Recently, there has been broad interest across a wide range of scientific and engineering disciplines in simulation approaches that more rigorously account for the multiscale nature of systems of interest. In this article, we review a number of such approaches and propose a classification scheme for defining different types of multiscale simulation methods and those classes of problems to which they are most applicable. Our classification scheme is presented in terms of a flowchart (Multiscale Analysis Platform), and defines several different motifs of multiscale simulation. Within each motif, the member methods are reviewed and example applications are discussed. We focus attention on hybrid multiscale methods, in which two or more models with different physics described at fundamentally different scales are directly coupled within a single simulation. Very recently these methods have begun to be applied to groundwater flow and transport simulations, and we discuss these applications in the context of our classification scheme. 
As computational and characterization capabilities continue to improve, we envision that hybrid multiscale modeling will become more common and also a viable alternative to conventional single-scale models in the near future. © 2014, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
Understanding the amplitudes of noise correlation measurements
Tsai, Victor C.
2011-01-01
Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.
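The travel-time side of noise cross-correlation is easy to sketch: two records of the same noise wavefield, offset by a fixed delay, produce a cross-correlation whose peak sits at that delay. The framework in the paper concerns amplitudes and attenuation, which this toy example does not address; the sampling rate and delay are invented values.

```python
import numpy as np

def cross_correlation_lag(a, b, fs):
    """Lag (seconds) at which the cross-correlation of a and b peaks.
    A positive lag means b[n] best matches a[n + lag * fs]."""
    cc = np.correlate(a, b, mode='full')
    lag_samples = np.argmax(cc) - (len(b) - 1)
    return lag_samples / fs

# Two 'stations' recording the same noise wavefield with a fixed 50-sample offset,
# standing in for a station-station travel time of 0.5 s at fs = 100 Hz.
rng = np.random.default_rng(3)
fs = 100.0
noise = rng.standard_normal(5_000)
sta_a = noise
sta_b = noise[50:]            # station B sees the field 50 samples earlier
print(cross_correlation_lag(sta_a, sta_b, fs))
```

The peak position gives the travel time; what the ray-theoretical framework in the abstract adds is a prediction of the peak's amplitude given the source distribution and attenuation.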
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. Statistical methods are an important component of effective vessel segmentation; however, several limitations degrade segmentation performance, i.e., dependence on the image modality, uneven contrast media, bias field, and overlapping intensity distributions of the object and background. In addition, the mixture models of the statistical methods are constructed relying on the characteristics of the image histograms. Thus, it is challenging for traditional methods to remain applicable to vessel segmentation across multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method mainly consists of three parts. Firstly, a multi-scale filtering algorithm was applied to the original images to enhance vessels and suppress noise. As a result, the filtered data achieved a new statistical characteristic. Secondly, a mixture model formed by three probability distributions (two exponential distributions and one Gaussian distribution) was built to fit the histogram curve of the filtered data, where the expectation maximization (EM) algorithm was used for parameter estimation. Finally, a three-dimensional (3D) Markov random field (MRF) was employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noise levels were devised. Meanwhile, four clinical angiographic data sets from different human organs were used to qualitatively validate the method.
To further test the performance, comparison tests between the proposed method and the traditional ones were conducted on two different brain magnetic resonance angiography (MRA) data sets. The results on the phantoms were satisfying, e.g., the noise was greatly suppressed, the percentages of misclassified voxels (the segmentation error ratios) were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to the opinions of clinical vascular specialists, the vessels in the various data sets were extracted with high accuracy, since complete vessel trees were extracted while fewer non-vessel structures and background regions were falsely classified as vessels. In the comparison experiments, the proposed method showed its superiority in accuracy and robustness for extracting vascular structures from multi-modality angiographic images with complicated background noise. The experimental results demonstrated that the proposed method is applicable to various angiographic data. The main reason is that the constructed mixture probability model can uniformly classify vessel objects in the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: firstly, it can extract vessels from angiograms of poor quality, since the multi-scale filtering algorithm improves vessel intensity under conditions such as uneven contrast media and bias field; secondly, it performs well for extracting vessels in multi-modality angiographic images despite various types of signal noise; and thirdly, it achieves better accuracy and robustness than the traditional methods. Together, these results suggest that the proposed method has significant potential for clinical application.
Shock waves simulated using the dual domain material point method combined with molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Duan Z.; Dhakal, Tilak Raj
In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as phase interfaces or crack tips, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore molecular dynamics simulations for material points can be performed independently in parallel. The dual domain material point method is chosen for this multiscale method because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and also because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of this method, we compare the results of a shock wave propagation in a cerium crystal calculated using direct molecular dynamics simulation with the results from this combined multiscale calculation.
Hybrid stochastic simplifications for multiscale gene networks.
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-09-07
Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
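For contrast with the hybrid simplifications discussed above, the exact pure-jump simulation they aim to accelerate can be sketched for the simplest gene-expression model: a birth-death process simulated with Gillespie's algorithm. The rate constants below are arbitrary illustration values.

```python
import numpy as np

def gillespie_birth_death(k_prod, k_deg, x0, t_max, rng):
    """Exact stochastic simulation (Gillespie) of a birth-death process:
    production at constant rate k_prod, degradation at rate k_deg * x.
    Every discrete jump is simulated, which is the cost hybrid schemes reduce."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        a1, a2 = k_prod, k_deg * x          # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)      # exponential waiting time to next jump
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

rng = np.random.default_rng(4)
times, states = gillespie_birth_death(k_prod=50.0, k_deg=1.0, x0=0,
                                      t_max=200.0, rng=rng)
# The stationary mean of this birth-death process is k_prod / k_deg = 50.
print(states[times > 20].mean())
```

Each unit of simulated time here costs on the order of k_prod + k_deg * x jumps; replacing the high-propensity reactions with a continuous (diffusion) approximation is precisely the partial Kramers-Moyal step described in the abstract.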
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costigan, Keeley Rochelle; Sauer, Jeremy A.; Travis, Bryan J.
2016-07-18
This slide presentation covers an affordable artificial neural network and mini-sensor system to locate and quantify methane leaks on a well pad, and an ARPA-E project schematic for monitoring methane leaks.
Median Robust Extended Local Binary Pattern for Texture Classification.
Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti
2016-03-01
Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance: it is robust to gray-scale variations, rotation changes, and noise, yet has low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
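The median-comparison idea at the heart of MRELBP can be illustrated with a drastically simplified, single-radius variant: compute LBP codes on a median-filtered image instead of raw intensities. The full descriptor's sampling scheme and multiscale extension are omitted here.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter, interior pixels only (no padding)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    stack = np.stack([img[dy:h - 2 + dy, dx:w - 2 + dx]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

def median_lbp(image):
    """8-bit LBP codes computed on the median-filtered image rather than raw
    intensities: the median comparison suppresses impulsive noise, while the
    binary comparisons keep the codes invariant to monotonic gray-scale changes."""
    med = median3x3(image)
    h, w = med.shape
    center = med[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = med[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes
```

A histogram of these codes over an image region would serve as the texture feature; the gray-scale invariance follows because a monotone intensity change leaves every median comparison unchanged.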
Real-Time Nonlocal Means-Based Despeckling.
Breivik, Lars Hofsoy; Snare, Sten Roar; Steen, Erik Normann; Solberg, Anne H Schistad
2017-06-01
In this paper, we propose a multiscale nonlocal means-based despeckling method for medical ultrasound. The multiscale approach leads to large computational savings and improves despeckling results over single-scale iterative approaches. We present two variants of the method. The first, denoted multiscale nonlocal means (MNLM), yields uniform robust filtering of speckle both in structured and homogeneous regions. The second, denoted unnormalized MNLM (UMNLM), is more conservative in regions of structure assuring minimal disruption of salient image details. Due to the popularity of anisotropic diffusion-based methods in the despeckling literature, we review the connection between anisotropic diffusion and iterative variants of NLM. These iterative variants in turn relate to our multiscale variant. As part of our evaluation, we conduct a simulation study making use of ground truth phantoms generated from clinical B-mode ultrasound images. We evaluate our method against a set of popular methods from the despeckling literature on both fine and coarse speckle noise. In terms of computational efficiency, our method outperforms the other considered methods. Quantitatively on simulations and on a tissue-mimicking phantom, our method is found to be competitive with the state-of-the-art. On clinical B-mode images, our method is found to effectively smooth speckle while preserving low-contrast and highly localized salient image detail.
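As background for the NLM family this paper builds on, here is a minimal single-pixel nonlocal means estimate. This is plain NLM, not the multiscale MNLM/UMNLM variants of the paper; names and parameter values are ours.

```python
import numpy as np

def nlm_pixel(img, y, x, patch=3, search=7, h=0.15):
    """Single-pixel nonlocal means estimate (minimal single-scale sketch).

    Weights decay with the mean squared distance between the patch around
    (y, x) and every patch inside a search window; `h` controls the decay.
    """
    p = patch // 2
    s = search // 2
    ref = img[y - p:y + p + 1, x - p:x + p + 1]
    num, den = 0.0, 0.0
    for yy in range(y - s, y + s + 1):
        for xx in range(x - s, x + s + 1):
            cand = img[yy - p:yy + p + 1, xx - p:xx + p + 1]
            d2 = np.mean((ref - cand) ** 2)
            w = np.exp(-d2 / (h * h))   # similar patches get large weight
            num += w * img[yy, xx]
            den += w
    return num / den

rng = np.random.default_rng(1)
clean = np.ones((21, 21)) * 0.5
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
est = nlm_pixel(noisy, 10, 10)   # estimate should move toward 0.5
```

Iterating such filtering and running it across a pyramid of scales is, roughly, what connects this baseline to the multiscale variants and to the anisotropic-diffusion view reviewed in the paper.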
Chokhandre, Snehal; Colbrunn, Robb; Bennetts, Craig; Erdemir, Ahmet
2015-01-01
Understanding of tibiofemoral joint mechanics at multiple spatial scales is essential for developing effective preventive measures and treatments for both pathology and injury management. Currently, there is a distinct lack of specimen-specific biomechanical data at multiple spatial scales, e.g., joint, tissue, and cell scales. Comprehensive multiscale data may improve the understanding of the relationship between biomechanical and anatomical markers across various scales. Furthermore, specimen-specific multiscale data for the tibiofemoral joint may assist development and validation of specimen-specific computational models that may be useful for more thorough analyses of the biomechanical behavior of the joint. This study describes an aggregation of procedures for acquisition of multiscale anatomical and biomechanical data for the tibiofemoral joint. Magnetic resonance imaging was used to acquire anatomical morphology at the joint scale. A robotic testing system was used to quantify joint level biomechanical response under various loading scenarios. Tissue level material properties were obtained from the same specimen for the femoral and tibial articular cartilage, medial and lateral menisci, anterior and posterior cruciate ligaments, and medial and lateral collateral ligaments. Histology data were also obtained for all tissue types to measure specimen-specific cell scale information, e.g., cellular distribution. This study is the first of its kind to establish a comprehensive multiscale data set for a musculoskeletal joint and the presented data collection approach can be used as a general template to guide acquisition of specimen-specific comprehensive multiscale data for musculoskeletal joints. PMID:26381404
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
NASA Astrophysics Data System (ADS)
He, Shaobo; Banerjee, Santo
2018-07-01
A fractional-order SIR epidemic model is proposed under the influence of both parametric seasonality and external noise. The original integer-order SIR model is stable; introducing seasonality and a noise force changes the behavior of the system. It is shown that the system has rich dynamical behaviors under different system parameters, fractional derivative orders, and degrees of seasonality and noise. Complexity of the stochastic model is investigated using multi-scale fuzzy entropy. Finally, a hard-limiter-controlled system is designed, and simulation results show that the ratio of infected individuals can converge to a small enough target ρ, meaning the epidemic outbreak can be brought under control by the implementation of effective medical and health measures.
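To make the seasonal-plus-noise setup concrete, here is an integer-order Euler-Maruyama sketch of an SIR model with sinusoidal transmission and multiplicative external noise. The fractional derivative, the fuzzy-entropy analysis, and all parameter values are beyond this illustration and are our own choices.

```python
import numpy as np

def seasonal_sir(beta0=0.4, delta=0.3, gamma=0.1, noise=0.02,
                 t_end=200.0, dt=0.01, seed=0):
    """Euler-Maruyama sketch of an SIR model with seasonal transmission
    beta(t) = beta0 * (1 + delta * sin(2*pi*t/50)) and external noise
    acting on the infection term (integer-order illustration only).
    """
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    s, i, r = 0.99, 0.01, 0.0
    traj = np.empty((n, 3))
    for k in range(n):
        t = k * dt
        beta = beta0 * (1.0 + delta * np.sin(2 * np.pi * t / 50.0))
        dw = rng.standard_normal() * np.sqrt(dt)
        ds = -beta * s * i * dt - noise * s * i * dw
        di = (beta * s * i - gamma * i) * dt + noise * s * i * dw
        dr = gamma * i * dt
        s, i, r = s + ds, i + di, r + dr
        traj[k] = (s, i, r)
    return traj

traj = seasonal_sir()
```

Because the noise enters with opposite signs in the S and I equations, the population fractions still sum to one at every step, which is a convenient sanity check on the scheme.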
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background: Ultrasound images are usually affected by speckle noise, a type of random multiplicative noise; reducing speckle and improving visual quality are therefore vital to obtaining a better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N-dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising, and the levels are combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR, and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise while preserving structure; it is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
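The multiresolution scaffold the method rests on — a Gaussian pyramid whose band-pass (Laplacian) residuals are processed per level and recombined — can be sketched as follows. The per-level GND-PCA denoiser from the paper is replaced here by the identity, so this only demonstrates the pyramid round trip; names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramids(img, levels=3, sigma=1.0):
    """Gaussian pyramid plus Laplacian (band-pass) residuals.

    Sketch of the multiresolution scaffold; the per-level GND-PCA
    denoising stage of the paper would operate on each element of
    `laps` before reconstruction.
    """
    gauss = [img]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(gauss[-1], sigma)
        gauss.append(smoothed[::2, ::2])          # downsample by 2
    laps = []
    for lo, hi in zip(gauss[1:], gauss[:-1]):
        up = np.kron(lo, np.ones((2, 2)))[:hi.shape[0], :hi.shape[1]]
        laps.append(hi - up)                      # detail lost at this level
    return gauss, laps

def reconstruct(gauss, laps):
    img = gauss[-1]
    for lap in reversed(laps):
        up = np.kron(img, np.ones((2, 2)))[:lap.shape[0], :lap.shape[1]]
        img = up + lap
    return img

rng = np.random.default_rng(2)
img = rng.random((16, 16))
g, l = build_pyramids(img)
rec = reconstruct(g, l)   # exact round trip with identity denoiser
```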
Navy Training Lands Sustainability: Initiation Decision Report
2004-06-01
Excerpts from the report's acronym list and text: ... Health Promotion and Preventative Medicine; CLF: Combat Logistics Force; CMAQ: Community Multiscale Air Quality; CNEL: Community Noise Equivalent Level; ... Environmental Consequences of Underwater Sound; EDQW: (DoD) Environmental Data Quality Workgroup; EEZ: Exclusive Economic Zone; EFHA: Essential Fish Habitat ... The ... Conservation Management Act established a 200-mile fishery conservation zone, which is now known as the Exclusive Economic Zone (EEZ), and established ...
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can...effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small scale...details while restoring larger scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at...
Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng
2016-06-24
Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consists of three stages: background drift correction, peak detection, and peak filtration. Background drift correction is implemented using a moving-window strategy. The new peak detection method is a variant of the approach used by the well-known MassSpecWavelet: chromatographic peaks are found as local maxima under various smoothing window scales, so peaks can be detected through the ridge lines of maxima across these window scales, and signals that increase or decrease monotonically around the peak position are treated as part of the peak. Instrumental noise is estimated after peak elimination, and a peak filtration step removes peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets: essential oil samples for quality control obtained from gas chromatography, and tobacco plant samples for metabolic profiling obtained from gas chromatography coupled with mass spectrometry. The results confirmed the validity of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
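A rough sketch of the multi-scale smoothing idea: keep only local maxima that persist across Gaussian smoothing scales, then filter survivors by signal-to-noise ratio. The matching rule, scale values, and noise estimator below are our simplifications, not the paper's exact procedure (which also corrects background drift).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_peaks(signal, sigmas=(1, 2, 4), snr_min=3.0):
    """Sketch of multi-scale Gaussian smoothing peak picking.

    A candidate must be a local maximum of the smoothed signal at the
    finest scale and have a matching maximum (within 2*sigma samples) at
    every coarser scale; survivors are filtered by an S/N >= snr_min rule,
    with noise estimated from the residual after light smoothing.
    """
    candidates = None
    for sigma in sigmas:
        smooth = gaussian_filter1d(signal, sigma)
        is_max = np.r_[False, (smooth[1:-1] > smooth[:-2]) &
                              (smooth[1:-1] >= smooth[2:]), False]
        idx = np.flatnonzero(is_max)
        if candidates is None:
            candidates = set(idx)
        else:
            tol = 2 * sigma
            candidates = {i for i in candidates
                          if np.any(np.abs(idx - i) <= tol)}
    noise = np.std(signal - gaussian_filter1d(signal, 1))
    return sorted(i for i in candidates
                  if signal[i] / max(noise, 1e-12) >= snr_min)

x = np.linspace(0, 10, 500)
sig = np.exp(-((x - 3) ** 2) / 0.02) + 0.8 * np.exp(-((x - 7) ** 2) / 0.02)
rng = np.random.default_rng(3)
peaks = detect_peaks(sig + 0.01 * rng.standard_normal(x.size))
```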
A speeded-up saliency region-based contrast detection method for small targets
NASA Astrophysics Data System (ADS)
Li, Zhengjie; Zhang, Haiying; Bai, Jiaojiao; Zhou, Zhongjun; Zheng, Huihuang
2018-04-01
To cope with the rapid development of real applications for infrared small targets, researchers have pursued increasingly robust detection methods. At present, the contrast-measure-based method has become a promising research branch. Following this framework, in this paper a speeded-up contrast measure scheme is proposed based on saliency detection and density clustering. First, the saliency region is segmented by a saliency detection method, and the multi-scale contrast calculation is carried out on it instead of traversing the whole image. Second, the spatial "integrity" of the target is exploited to distinguish the target from isolated noise by density clustering. Finally, the targets are detected by a self-adaptive threshold. Compared with the time-consuming MPCM (Multiscale Patch Contrast Map), the time cost of the speeded-up version is within a few seconds. In addition, due to the use of clustering segmentation, false alarms caused by heavy noise can be restrained to a lower level. The experiments show that our method achieves a satisfactory false-alarm suppression ratio (FASR) and real-time performance compared with state-of-the-art algorithms, in both cloudy-sky and sea-sky backgrounds.
Segmentation of white rat sperm image
NASA Astrophysics Data System (ADS)
Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan
2011-11-01
The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the microscope image's low contrast and heavy noise pollution, and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with a multi-structuring element for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths image noise, while the multi-structuring element retains more shape details of the sperm. We then use the Otsu method to segment the modified gradient image, whose processed gray levels are strong on the sperm and weak in the background, converting it into a binary sperm image. Because the obtained binary image contains impurities whose shapes differ from sperm, we compute a form factor and remove objects whose form-factor value is larger than a selected critical value, retaining the rest. This yields the final binary image of the segmented sperm. The experiment shows this method's great advantage in the segmentation of micro-spermatozoa images.
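The gradient-then-Otsu pipeline can be sketched with SciPy morphology. The multi-structuring-element refinement and the form-factor filtering are omitted, and all sizes are illustrative; averaging morphological gradients over several structuring-element sizes stands in for the paper's multi-scale gradient operator.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, gaussian_filter

def multiscale_gradient(img, scales=(3, 5, 7)):
    """Average of morphological gradients (dilation minus erosion) at
    several structuring-element sizes; larger elements smooth noise,
    smaller ones keep shape detail. Simplified sketch of the operator."""
    grads = [grey_dilation(img, size=s) - grey_erosion(img, size=s)
             for s in scales]
    return np.mean(grads, axis=0)

def otsu_threshold(img, bins=256):
    """Otsu's threshold: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    mu = np.cumsum(p * centers)
    best_t, best_var = centers[0], -1.0
    for k in range(bins - 1):
        w1 = 1.0 - w0[k]
        if w0[k] < 1e-9 or w1 < 1e-9:
            continue
        m0 = mu[k] / w0[k]
        m1 = (mu[-1] - mu[k]) / w1
        var = w0[k] * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic image: bright blob on dark background with mild noise
rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
img += 0.05 * rng.standard_normal(img.shape)
grad = multiscale_gradient(gaussian_filter(img, 1.0))
binary = grad > otsu_threshold(grad)   # edges of the blob light up
```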
Multiscale peak detection in wavelet space.
Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng
2015-12-07
Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD) by taking full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra, which is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than MassSpecWavelet and MALDIquant methods. Superior results in Raman spectra suggest that MSPD seems to be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
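MSPD itself ships as a Python package; absent that, the CWT-ridge idea it extends can be tried with SciPy's built-in `find_peaks_cwt`, which is a different and simpler implementation than MSPD (it uses only ridge lines, not valleys or zero-crossings). Signal shape and parameters below are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# Synthetic spectrum: three peaks of different widths plus noise
x = np.arange(1000)
signal = (10 * np.exp(-0.5 * ((x - 200) / 8) ** 2)
          + 6 * np.exp(-0.5 * ((x - 500) / 15) ** 2)
          + 8 * np.exp(-0.5 * ((x - 800) / 5) ** 2))
rng = np.random.default_rng(5)
noisy = signal + 0.3 * rng.standard_normal(x.size)

# A peak is kept only if a ridge of CWT maxima persists across widths,
# which implicitly suppresses both random noise and slow baseline.
peak_idx = find_peaks_cwt(noisy, widths=np.arange(4, 30))
```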
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) [1] does not quantify noise in a way that matches the perception of the human eye, two alternative methods have been investigated which may quantify noise perception in a physiological manner:
- the visual noise measurement model proposed by Hung et al. [2] (described in the informative annex of ISO 15739:2002 [1]), which simulates the process of human vision using the opponent color space and contrast sensitivity functions, and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value;
- the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al. [3], which simulates human vision in approximately the same way as Hung et al. [2] but then performs an image comparison based on CIEDE2000.
With a psychophysical experiment based on the just noticeable difference (JND), threshold images were defined with which the two approaches were tested. The assumption is that if a method is valid, the different threshold images should receive the same "noise value". The visual noise measurement model yields similar visual noise values for all threshold images, so the method is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches, the S-CIELab model can also be used on images with spatial content. The S-CIELab model likewise yields similar colour-difference values for the set of threshold images, but with a limitation: for images that contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
Multiscale recurrence analysis of spatio-temporal data
NASA Astrophysics Data System (ADS)
Riedl, M.; Marwan, N.; Kurths, J.
2015-12-01
The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales that can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The complicated examples in particular show the usefulness of the multi-scale consideration, which takes spatial patterns of different scales and different rhythms into account. This mapogram approach thus promises new insights into problems of climatology, ecology, and medicine.
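A minimal sketch of the pipeline: compute pairwise dissimilarity between spatial snapshots, threshold at a quantile, and read off the recurrence matrix. For brevity, a plain Euclidean distance between frames stands in for the mapogram comparison; a histogram-style mapogram summary would slot into the same place. The traveling-wave example mirrors the paper's first illustration, with parameters of our choosing.

```python
import numpy as np

def recurrence_plot(frames, eps_quantile=0.1):
    """Recurrence matrix from pairwise distances between spatial snapshots.

    `frames` has shape (time, space); two instants recur if the distance
    between their snapshots falls below the eps_quantile quantile of all
    pairwise distances (direct comparison stands in for the mapogram).
    """
    t = frames.shape[0]
    flat = frames.reshape(t, -1)
    d = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    eps = np.quantile(d[np.triu_indices(t, 1)], eps_quantile)
    return (d <= eps).astype(int)

# Traveling wave with additive noise; phase advances pi/10 per frame,
# so the pattern recurs every 20 frames.
rng = np.random.default_rng(6)
space = np.linspace(0, 2 * np.pi, 64)
frames = np.array([0.5 + 0.4 * np.sin(space - (np.pi / 10) * k)
                   + 0.02 * rng.standard_normal(64) for k in range(100)])
rp = recurrence_plot(frames)   # diagonal lines at lag 20 appear
```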
Weighted multiscale Rényi permutation entropy of nonlinear time series
NASA Astrophysics Data System (ADS)
Chen, Shijian; Shang, Pengjian; Wu, Yue
2018-04-01
In this paper, based on Rényi permutation entropy (RPE), which has recently been suggested as a relative measure of complexity in nonlinear systems, we propose multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) to quantify the complexity of nonlinear time series over multiple time scales. First, we apply MRPE and WMRPE to synthetic data, compare the modified methods with RPE, and discuss the influence of parameter changes. We also explain the necessity of considering not only multiple scales but also weighting, by taking the amplitude into account. The MRPE and WMRPE methods are then applied to the closing prices of financial stock markets from different areas. By observing the WMRPE curves and analyzing common statistics, the stock markets are divided into four groups: (1) DJI, S&P500, and HSI; (2) NASDAQ and FTSE100; (3) DAX40 and CAC40; and (4) ShangZheng and ShenCheng. Results show that the standard deviations of the weighted methods are smaller, indicating that WMRPE yields more robust results. Moreover, WMRPE provides abundant dynamical properties of complex systems and helps reveal their intrinsic mechanisms.
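The two ingredients, ordinal patterns with optional amplitude weighting and coarse-graining for the multiscale step, can be sketched compactly. The normalization, the variance-based weight, and the parameter choices (m = 3, alpha = 2) are our assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def renyi_permutation_entropy(x, m=3, alpha=2.0, weighted=False):
    """Sketch of (weighted) Rényi permutation entropy of order m.

    Each length-m window maps to its ordinal pattern; pattern
    probabilities (optionally weighted by window variance, the 'W' in
    WMRPE) enter the Rényi entropy, normalized by log(m!)."""
    n = len(x) - m + 1
    patterns = {p: 0.0 for p in permutations(range(m))}
    for i in range(n):
        w = x[i:i + m]
        key = tuple(np.argsort(w))
        patterns[key] += np.var(w) if weighted else 1.0
    p = np.array(list(patterns.values()))
    p = p / p.sum()
    p = p[p > 0]
    if abs(alpha - 1.0) < 1e-12:
        h = -np.sum(p * np.log(p))                # Shannon limit
    else:
        h = np.log(np.sum(p ** alpha)) / (1.0 - alpha)
    return h / log(factorial(m))                  # normalized to [0, 1]

def coarse_grain(x, scale):
    """Non-overlapping window means, the usual multiscale step."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(7)
noise = rng.standard_normal(5000)
sine = np.sin(np.linspace(0, 20 * np.pi, 5000))
mrpe_noise = [renyi_permutation_entropy(coarse_grain(noise, s)) for s in (1, 2, 4)]
mrpe_sine = [renyi_permutation_entropy(coarse_grain(sine, s)) for s in (1, 2, 4)]
```

White noise stays near the maximum entropy at every scale, while the smooth deterministic sine is dominated by monotone patterns and scores low, which is the qualitative separation the method exploits.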
Information-Theoretical Quantifier of Brain Rhythm Based on Data-Driven Multiscale Representation
2015-01-01
This paper presents a data-driven multiscale entropy measure to reveal the scale-dependent information quantity of electroencephalogram (EEG) recordings. This work is motivated by previous observations on the nonlinear and nonstationary nature of EEG over multiple time scales. Here, a new framework of entropy measures considering changing dynamics over multiple oscillatory scales is presented. First, to deal with nonstationarity over multiple scales, the EEG recording is decomposed by applying empirical mode decomposition (EMD), which is known to be effective for extracting the constituent narrowband components without a predetermined basis. Subsequent calculation of the Rényi entropy of the probability distributions of the intrinsic mode functions extracted by EMD leads to a data-driven multiscale Rényi entropy. To validate the performance of the proposed entropy measure, actual EEG recordings from rats (n = 9) experiencing 7 min of cardiac arrest followed by resuscitation were analyzed. Simulation and experimental results demonstrate that the multiscale Rényi entropy yields better discrimination of injury levels and improved correlation with the neurological deficit evaluation 72 hours after cardiac arrest, suggesting an effective diagnostic and prognostic tool. PMID:26380297
Freezing does not alter multiscale tendon mechanics and damage mechanisms in tension.
Lee, Andrea H; Elliott, Dawn M
2017-12-01
It is common in biomechanics to use previously frozen tissues, where it is assumed that the freeze-thaw process does not cause consequential mechanical or structural changes. We have recently quantified multiscale tendon mechanics and damage mechanisms using previously frozen tissue, where damage was defined as an irreversible change in the microstructure that alters the macroscopic mechanical parameters. Because freezing has been shown to alter tendon microstructures, the objective of this study was to determine if freezing alters tendon multiscale mechanics and damage mechanisms. Multiscale testing using a protocol that was designed to evaluate tendon damage (tensile stress-relaxation followed by unloaded recovery) was performed on fresh and previously frozen rat tail tendon fascicles. At both the fascicle and fibril levels, there was no difference between the fresh and frozen groups for any of the parameters, suggesting that there is no effect of freezing on tendon mechanics. After unloading, the microscale fibril strain fully recovered, and interfibrillar sliding only partially recovered, suggesting that the tendon damage is localized to the interfibrillar structures and that mechanisms of damage are the same in both fresh and previously frozen tendons. © 2017 New York Academy of Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
An algorithm for pavement crack detection based on multiscale space
NASA Astrophysics Data System (ADS)
Liu, Xiang-long; Li, Qing-quan
2006-10-01
Conventional human-visual and manual field pavement crack detection methods are costly, time-consuming, dangerous, labor-intensive, and subjective. They have various drawbacks: a high degree of variability in measurement results, inability to provide meaningful quantitative information, frequent inconsistencies in crack details over space and across evaluations, and long measurement cycles. With the development of public transportation and the growth of material flow systems, conventional methods can no longer meet demand, so automatic pavement-state data gathering and analysis systems have become the focus of the field's attention. Developments in computer technology, digital image acquisition, image processing, and multi-sensor technology have made such systems possible, but the complexity of image processing has made data processing and analysis the bottleneck of the whole system. Accordingly, a robust and highly efficient parallel pavement crack detection algorithm based on multi-scale space is proposed in this paper. The proposed method rests on two facts: (1) crack pixels in pavement images are darker than their surroundings and continuous; (2) the threshold values of gray-level pavement images are strongly related to the mean and standard deviation of the pixel-gray intensities. The multi-scale space method is used to improve data processing speed and minimize the effect of image noise. Experimental results demonstrate remarkable advantages: (1) the algorithm correctly discovers tiny cracks, even in very noisy pavement images; (2) its efficiency and accuracy are superior; (3) its application-dependent nature can simplify the design of the entire system.
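Fact (2) above suggests a mean/std threshold rule for dark crack pixels. A single-scale toy version might look like the following; the factor k, the synthetic image, and the function name are all illustrative, and the paper's multi-scale space machinery is not reproduced here.

```python
import numpy as np

def crack_mask(img, k=1.5):
    """Flag pixels darker than mean - k*std as crack candidates.

    Crack pixels are darker than the background (fact 1), and the useful
    threshold tracks the image mean and standard deviation (fact 2).
    Single-scale simplification; k is an illustrative choice.
    """
    t = img.mean() - k * img.std()
    return img < t

rng = np.random.default_rng(8)
pavement = 0.6 + 0.05 * rng.standard_normal((64, 64))
pavement[30:34, :] -= 0.4          # a dark horizontal crack
mask = crack_mask(pavement)
```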
Hybrid stochastic simplifications for multiscale gene networks
Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu
2009-01-01
Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
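For contrast with the hybrid schemes, the exact pure-jump baseline is the Gillespie algorithm, whose cost is one iteration per reaction event — precisely the scaling the paper's simplifications attack. Here is a sketch for a toy two-species transcription-translation network; the rate constants and species are illustrative, not from the paper.

```python
import numpy as np

def gillespie_ssa(x0, stoich, rates, t_end, rng):
    """Exact stochastic simulation (Gillespie) of a jump Markov process.

    Every reaction is a discrete jump, so the cost grows with the jump
    count; fast reactions dominate the budget, motivating hybrid
    (jump + continuous) simplifications.
    """
    x = np.array(x0, dtype=float)
    t, njumps = 0.0, 0
    while True:
        a = np.array([r(x) for r in rates])   # propensities
        a0 = a.sum()
        if a0 <= 0:
            break
        dt = rng.exponential(1.0 / a0)
        if t + dt > t_end:
            break
        t += dt
        j = rng.choice(len(a), p=a / a0)      # pick reaction
        x += stoich[j]
        njumps += 1
    return x, njumps

# reactions: mRNA birth/death, protein birth (prop. to mRNA)/death
stoich = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
k_tx, g_m, k_tl, g_p = 10.0, 1.0, 5.0, 0.1   # illustrative constants
rates = [lambda x: k_tx,
         lambda x: g_m * x[0],
         lambda x: k_tl * x[0],
         lambda x: g_p * x[1]]
rng = np.random.default_rng(9)
state, jumps = gillespie_ssa([0, 0], stoich, rates, t_end=50.0, rng=rng)
```

In a hybrid simplification, the abundant protein species would be promoted to a continuous (diffusion or ODE) variable while the low-copy mRNA keeps its jumps, eliminating most of the iterations above.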
Multi-Scale Multi-Domain Model | Transportation Research | NREL
Framework for NREL's Multi-Scale Multi-Domain (MSMD) model. The MSMD model quantifies the impacts of the electrical/thermal pathway. Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems.
A multi-scale modelling procedure to quantify hydrological impacts of upland land management
NASA Astrophysics Data System (ADS)
Wheater, H. S.; Jackson, B.; Bulygina, N.; Ballard, C.; McIntyre, N.; Marshall, M.; Frogbrook, Z.; Solloway, I.; Reynolds, B.
2008-12-01
Recent UK floods have focused attention on the effects of agricultural intensification on flood risk. However, quantification of these effects raises important methodological issues. Catchment-scale data have proved inadequate to support analysis of impacts of land management change, due to climate variability, uncertainty in input and output data, spatial heterogeneity in land use and lack of data to quantify historical changes in management practices. Manipulation experiments to quantify the impacts of land management change have necessarily been limited and small scale, and in the UK mainly focused on the lowlands and arable agriculture. There is a need to develop methods to extrapolate from small scale observations to predict catchment-scale response, and to quantify impacts for upland areas. With assistance from a cooperative of Welsh farmers, a multi-scale experimental programme has been established at Pontbren, in mid-Wales, an area of intensive sheep production. The data have been used to support development of a multi-scale modelling methodology to assess impacts of agricultural intensification and the potential for mitigation of flood risk through land use management. Data are available from replicated experimental plots under different land management treatments, from instrumented field and hillslope sites, including tree shelter belts, and from first and second order catchments. Measurements include climate variables, soil water states and hydraulic properties at multiple depths and locations, tree interception, overland flow and drainflow, groundwater levels, and streamflow from multiple locations. Fine resolution physics-based models have been developed to represent soil and runoff processes, conditioned using experimental data. The detailed models are used to calibrate simpler 'meta- models' to represent individual hydrological elements, which are then combined in a semi-distributed catchment-scale model. 
The methodology is illustrated using field- and catchment-scale simulations to demonstrate the response of improved and unimproved grassland and the potential effects of land management interventions, including farm ponds, tree shelter belts, and buffer strips. It is concluded that the developed methodology has the potential to represent and quantify catchment-scale effects of upland management; continuing research is extending the work to a wider range of upland environments and land-use types, with the aim of providing generic simulation tools that can be used for strategic policy guidance.
NASA Astrophysics Data System (ADS)
Lin, Aijing; Shang, Pengjian
2016-04-01
Considering the diverse application of multifractal techniques in natural scientific disciplines, this work underscores the versatility of the multiscale multifractal detrended fluctuation analysis (MMA) method to investigate artificial and real-world data sets. The modified MMA method based on the cumulative distribution function is proposed with the objective of quantifying the scaling exponent and multifractality of nonstationary time series. It is demonstrated that our approach can provide a more stable and faithful description of multifractal properties over a comprehensive range of scales, rather than at a fixed window length and slide length. Our analyses based on the CDF-MMA method reveal significant differences in the multifractal characteristics of the temporal dynamics between US and Chinese stock markets, suggesting that these two stock markets might be regulated by very different mechanisms. The CDF-MMA method is important for evidencing the stable and fine structure of multiscale and multifractal scaling behaviors and can be useful to deepen and broaden our understanding of scaling exponents and multifractal characteristics.
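The fluctuation function at the heart of MMA-type analyses can be sketched compactly. Below is a minimal single-q detrended fluctuation function; the CDF-based modification, the moving-window Hurst surface h(q, s), and the full multifractal spectrum described above are omitted, and the linear detrending order is an assumption:

```python
import numpy as np

def dfa_fluctuation(x, scales, q=2.0):
    """Detrended fluctuation function F_q(s): the core quantity that
    MFDFA-style methods (including MMA) evaluate over many window sizes."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    F = []
    for s in scales:
        n_seg = len(profile) // s
        sq_res = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            sq_res.append(np.mean((seg - trend) ** 2))     # F^2 per segment
        sq_res = np.asarray(sq_res)
        F.append(np.mean(sq_res ** (q / 2.0)) ** (1.0 / q))
    return np.asarray(F)
```

For white noise the log-log slope of F against s is close to 0.5; a scaling exponent is read off by fitting that slope over a range of scales.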
Adaptive threshold shearlet transform for surface microseismic data denoising
NASA Astrophysics Data System (ADS)
Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan
2018-06-01
Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, due to the different distributions of valid signals and random noise, shearlet coefficients can be shrunk by a threshold. Therefore, the threshold is vital in suppressing random noise. Conventional threshold denoising algorithms usually use the same threshold to process all coefficients, which causes inefficient noise suppression or loss of valid signals. To solve the above problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate the fundamental threshold for each directional subband. In each directional subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. The experimental denoising results on synthetic records and field data illustrate that the proposed method exhibits better performance in suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
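Shearlet transforms require a dedicated library, but the adaptive-shrinkage idea itself can be illustrated on a generic 2-D coefficient array. In this sketch, the universal threshold sigma*sqrt(2 ln N) plays the role of the fundamental threshold, and a neighborhood-energy adjustment factor (our own choice, not necessarily the paper's formula) lowers it where neighboring coefficients suggest signal:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_soft_threshold(coeffs, noise_sigma, win=3):
    """Neighborhood-adaptive soft thresholding of one subband of transform
    coefficients: keep the fundamental threshold where the local energy of
    neighboring coefficients is low (likely noise), scale it down where the
    local energy is high (likely signal)."""
    c = np.asarray(coeffs, dtype=float)
    base = noise_sigma * np.sqrt(2.0 * np.log(c.size))   # fundamental threshold
    local_energy = uniform_filter(c ** 2, size=win)      # neighborhood mean of c^2
    # adjustment factor in (0, 1]: strong neighborhoods lower the threshold
    factor = noise_sigma ** 2 / (noise_sigma ** 2 + local_energy)
    thr = base * factor
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)  # soft shrinkage
```

In a full ATST-style pipeline this shrinkage would be applied per directional subband, between the forward and inverse shearlet transforms.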
Maneuver Recovery Analysis for the Magnetospheric Multiscale Mission
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, Russell; Volle, Michael; Lee, Taesul; Long, Anne
2007-01-01
The use of spacecraft formations creates new and more demanding requirements for orbit determination accuracy. In addition to absolute navigation requirements, there are typically relative navigation requirements that are based on the size or shape of the formation. The difficulty in meeting these requirements is related to the relative dynamics of the spacecraft orbits and the frequency of the formation maintenance maneuvers. This paper examines the effects of bi-weekly formation maintenance maneuvers on the absolute and relative orbit determination accuracy for the four-spacecraft Magnetospheric Multiscale (MMS) formation. Results are presented from high fidelity simulations that include the effects of realistic orbit determination errors in the maneuver planning process. Solutions are determined using a high accuracy extended Kalman filter designed for onboard navigation. Three different solutions are examined, considering the effects of process noise and measurement rate on the solutions.
Removal of bone in CT angiography by multiscale matched mask bone elimination.
Gratama van Andel, H A F; Venema, H W; Streekstra, G J; van Straten, M; Majoie, C B L M; den Heeten, G J; Grimbergen, C A
2007-10-01
For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR), bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method, and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.
The impact of climate change on surface-level ozone is examined through a multiscale modeling effort that linked global and regional climate models to drive air quality model simulations. Results are quantified in terms of the relative response factor (RRFE), which estimates the ...
NASA Astrophysics Data System (ADS)
Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale
2013-04-01
A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. 
We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.
Multiscale Feature Analysis of Salivary Gland Branching Morphogenesis
Baydil, Banu; Daley, William P.; Larsen, Melinda; Yener, Bülent
2012-01-01
Pattern formation in developing tissues involves dynamic spatio-temporal changes in cellular organization and subsequent evolution of functional adult structures. Branching morphogenesis is a developmental mechanism by which patterns are generated in many developing organs, which is controlled by underlying molecular pathways. Understanding the relationship between molecular signaling, cellular behavior and resulting morphological change requires quantification and categorization of the cellular behavior. In this study, tissue-level and cellular changes in the developing salivary gland in response to disruption of ROCK-mediated signaling are modeled by building cell-graphs to compute mathematical features capturing structural properties at multiple scales. These features were used to generate multiscale cell-graph signatures of untreated and ROCK-signaling-disrupted salivary gland organ explants. From confocal images of mouse submandibular salivary gland organ explants in which epithelial and mesenchymal nuclei were marked, a multiscale feature set capturing global structural properties, local structural properties, spectral properties, and morphological properties of the tissues was derived. Six feature selection algorithms and multiway modeling of the data were used to identify distinct subsets of cell-graph features that can uniquely classify and differentiate between different cell populations. Multiscale cell-graph analysis was most effective in classification of the tissue state. Cellular and tissue organization, as defined by a multiscale subset of cell-graph features, are both quantitatively distinct in epithelial and mesenchymal cell types, both in the presence and absence of ROCK inhibitors. Whereas tensor analysis demonstrates that epithelial tissue was affected the most by inhibition of ROCK signaling, significant multiscale changes in mesenchymal tissue organization were identified with this analysis that were not identified in previous biological studies.
We here show how to define and calculate a multiscale feature set as an effective computational approach to identify and quantify changes at multiple biological scales and to distinguish between different states in developing tissues. PMID:22403724
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging due to the coexistence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR.
• Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.
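The least-square HDMR building block can be sketched on a single random element; the adaptive multi-element decomposition and the MsFEM coupling described above are omitted, and the Legendre basis and truncation order below are assumptions:

```python
import numpy as np

def hdmr_fit(X, y, order=3):
    """First-order least-square HDMR: f(x) ~ f0 + sum_i f_i(x_i), with each
    component function expanded in Legendre polynomials on [-1, 1] and all
    coefficients determined by one global least-squares solve."""
    n, d = X.shape
    cols = [np.ones(n)]                                   # constant term f0
    for i in range(d):
        for k in range(1, order + 1):
            cols.append(np.polynomial.legendre.Legendre.basis(k)(X[:, i]))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def model(Xq):
        out = np.full(Xq.shape[0], coef[0])
        idx = 1
        for i in range(d):
            for k in range(1, order + 1):
                out += coef[idx] * np.polynomial.legendre.Legendre.basis(k)(Xq[:, i])
                idx += 1
        return out

    return model
```

For a function with weak variable interactions, this first-order surrogate is already accurate; the multi-element variant fits such a surrogate per subdomain and patches them together.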
Noise impact on wildlife: An environmental impact assessment
NASA Technical Reports Server (NTRS)
Bender, A.
1977-01-01
Various biological effects of noise on animals are discussed and a systematic approach for an impact assessment is developed. Further research is suggested to fully quantify noise impact on the species and its ecosystem.
NASA Astrophysics Data System (ADS)
Bonetti, Rita M.; Reinfelds, Ivars V.; Butler, Gavin L.; Walsh, Chris T.; Broderick, Tony J.; Chisholm, Laurie A.
2016-05-01
Natural barriers such as waterfalls, cascades, rapids and riffles limit the dispersal and in-stream range of migratory fish, yet little is known of the interplay between these gradient dependent landforms, their hydraulic characteristics and flow rates that facilitate fish passage. The resurgence of dam construction in numerous river basins world-wide provides impetus to the development of robust techniques for assessment of the effects of downstream flow regime changes on natural fish passage barriers and associated consequences as to the length of rivers available to migratory species. This paper outlines a multi-scale technique for quantifying the relative magnitude of natural fish passage barriers in river systems and flow rates that facilitate passage by fish. First, a GIS-based approach is used to quantify channel gradients for the length of river or reach under investigation from a high resolution DEM, setting the magnitude of identified passage barriers in a longer context (tens to hundreds of km). Second, LiDAR, topographic and bathymetric survey-based hydrodynamic modelling is used to assess flow rates that can be regarded as facilitating passage across specific barriers identified by the river to reach scale gradient analysis. Examples of multi-scale approaches to fish passage assessment for flood-flow and low-flow passage issues are provided from the Clarence and Shoalhaven Rivers, NSW, Australia. In these river systems, passive acoustic telemetry data on actual movements and migrations by Australian bass (Macquaria novemaculeata) provide a means of validating modelled assessments of flow rates associated with successful fish passage across natural barriers. 
Analysis of actual fish movements across passage barriers in these river systems indicates that two-dimensional hydraulic modelling can usefully quantify flow rates associated with the facilitation of fish passage across natural barriers by a majority of individual fishes, for use in management decisions regarding environmental or instream flows.
A Multiscale Vision Model applied to analyze EIT images of the solar corona
NASA Astrophysics Data System (ADS)
Portier-Fozzani, F.; Vandame, B.; Bijaoui, A.; Maucherat, A. J.; EIT Team
2001-07-01
The large dynamic range provided by the SOHO/EIT CCD (1:5000) is needed to observe the wide range of EUV coronal structures, from coronal holes up to flares. Histograms show that often a wide dynamic range is present in each image. Extracting hidden structures at the background level requires specific techniques such as the use of the Multiscale Vision Model (MVM, Bijaoui et al., 1998). This method, based on wavelet transformations, optimizes the detection of objects of various sizes, however complex they may be. Bijaoui et al. built the Multiscale Vision Model to extract small dynamical structures from noise, mainly for studying galaxies. In this paper, we describe requirements for the use of this method with SOHO/EIT images (calibration, size of the image, dynamics of the subimage, etc.). Two different areas were studied, revealing hidden structures: (1) classical coronal mass ejection (CME) formation and (2) a complex group of active regions with its evolution. The aim of this paper is to define carefully the constraints for this new method of imaging the solar corona with SOHO/EIT. Physical analysis derived from multi-wavelength observations will later complete these first results.
Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong
2015-01-01
Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of change decision in utilizing multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes have happened), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of classification robust to registration noise and multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on a sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by superpixel-level cosparse representation with a robust discriminative dictionary and a conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748
NASA Astrophysics Data System (ADS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José; Liu, Qinya; Zhou, Bing
2017-05-01
We carry out full waveform inversion (FWI) in the time domain based on an alternative frequency-band selection strategy that allows us to implement the method with success. This strategy aims at decomposing the seismic data within partially overlapped frequency intervals by carrying out a concatenated treatment of the wavelet, to largely avoid redundant frequency information and to adapt to wavelength or wavenumber coverage. A pertinent numerical test proves the effectiveness of this strategy. Based on this strategy, we comparatively analyze the effects of update parameters for the nonlinear conjugate gradient (CG) method and of step-length formulas on the multiscale FWI through several numerical tests. The investigation of up to eight versions of the nonlinear CG method, with and without Gaussian white noise, makes clear that the HS (Hestenes and Stiefel in J Res Natl Bur Stand Sect 5:409-436, 1952), CD (Fletcher in Practical methods of optimization vol. 1: unconstrained optimization, Wiley, New York, 1987), and PRP (Polak and Ribière in Revue Francaise Informat Recherche Opertionelle, 3e Année 16:35-43, 1969; Polyak in USSR Comput Math Math Phys 9:94-112, 1969) versions are the more efficient among the eight, while the DY (Dai and Yuan in SIAM J Optim 10:177-182, 1999) version always yields an inaccurate result, because it overestimates the deeper parts of the model. The application of FWI algorithms using distinct step-length formulas, such as the direct method (Direct), the parabolic search method (Search), and the two-point quadratic interpolation method (Interp), proves that the Interp is more efficient for noise-free data, while the Direct is more efficient for Gaussian white noise data. In contrast, the Search is less efficient because of its slow convergence. In general, the three step-length formulas are robust, or partly insensitive, to Gaussian white noise and the complexity of the model.
When the initial velocity model deviates far from the real model or the data are contaminated by noise, the objective function values of the Direct and Interp are oscillating at the beginning of the inversion, whereas that of the Search decreases consistently.
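A minimal nonlinear CG loop with the PRP update parameter illustrates the kind of algorithmic choices compared in the study. Armijo backtracking stands in here for the Direct/Search/Interp step-length formulas, and the non-negative PRP+ restart safeguard is a common stabilization, not necessarily the exact variant used:

```python
import numpy as np

def cg_prp(f, grad, x0, iters=100):
    """Nonlinear conjugate gradient with the Polak-Ribiere-Polyak (PRP)
    beta formula, one of the eight CG update parameters compared in FWI."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # Armijo backtracking line search (stand-in for a step-length formula)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < 1e-12:                  # converged
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))     # PRP+ formula
        d = -g_new + beta * d
        if g_new @ d >= 0.0:                               # safeguard: restart
            d = -g_new
        g = g_new
    return x
```

In FWI the gradient would come from an adjoint-state computation over the wavefield; here any differentiable misfit function can be plugged in.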
Chen, Xiaoling; Xie, Ping; Zhang, Yuanyuan; Chen, Yuling; Yang, Fangmei; Zhang, Litai; Li, Xiaoli
2018-01-01
Recently, functional corticomuscular coupling (FCMC) between the cortex and the contralateral muscle has been used to evaluate motor function after stroke. As we know, the motor-control system is a closed-loop system that is regulated by complex self-regulating and interactive mechanisms which operate on multiple spatial and temporal scales. Multiscale analysis can represent this inherent complexity. However, previous studies of FCMC in stroke patients mainly focused on the coupling strength at a single time scale, without considering the changes of the inherently directional and multiscale properties of sensorimotor systems. In this paper, a multiscale causal model, named multiscale transfer entropy, was used to quantify the functional connection between the electroencephalogram over the scalp and the electromyogram from the flexor digitorum superficialis (FDS) recorded simultaneously during a steady-state grip task in eight stroke patients and eight healthy controls. Our results showed that healthy controls exhibited higher coupling when the scale reached up to about 12, and the FCMC in the descending direction was stronger at certain scales (1, 7, 12, and 14) than that in the ascending direction. Further analysis showed these multi-time-scale characteristics mainly focused on the beta1 band at scale 11 and the beta2 band at scales 9, 11, 13, and 15. Compared to controls, the multiscale properties of the FCMC for stroke were changed: the strengths in both directions were reduced, and the gaps between the descending and ascending directions disappeared over all scales. Further analysis in specific bands showed that the reduced FCMC mainly focused on the alpha2 band at higher scales, and on the beta1 and beta2 bands across almost the entire range of scales.
This multiscale study confirms that the FCMC between the brain and muscles exhibits complex and directional characteristics, and that these characteristics of the functional connection are destroyed in stroke by the structural lesion in the brain, which might disrupt coordination, feedback, and information transmission in efferent control and afferent feedback. The study demonstrates for the first time the multiscale and directional characteristics of the FCMC for stroke patients, and provides a preliminary observation for application in clinical assessment following stroke. PMID:29765351
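The "multiscale" part of multiscale transfer entropy rests on the standard coarse-graining step, which replaces the signal at scale s by averages over non-overlapping windows of length s; the transfer-entropy estimator itself is not reproduced here:

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-grained series at a given scale: means over consecutive,
    non-overlapping windows of length `scale` (trailing samples that do
    not fill a window are dropped)."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)
```

A multiscale coupling measure is then obtained by computing the chosen estimator (transfer entropy, sample entropy, coherence, etc.) on `coarse_grain(x, s)` for each scale s.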
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has been also proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
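Both coordinates of this representation space take only a few lines of code. The sketch below uses the standard definitions, strict local extrema for turning points and half the mean squared successive difference over the variance for the Abbe value, which may differ in detail from the authors' normalization:

```python
import numpy as np

def turning_points(x):
    """Count interior samples that are strict local maxima or minima."""
    x = np.asarray(x, dtype=float)
    d1 = x[1:-1] - x[:-2]   # difference with left neighbor
    d2 = x[1:-1] - x[2:]    # difference with right neighbor
    return int(np.sum((d1 > 0) & (d2 > 0)) + np.sum((d1 < 0) & (d2 < 0)))

def abbe_value(x):
    """Abbe value: half the mean squared successive difference over the
    variance; close to 1 for white noise, near 0 for smooth or trending
    series."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)
```

Plotting `turning_points(x)` against `abbe_value(x)` for a collection of series yields the bidimensional representation space; the multiscale generalization evaluates both quantities on coarse-grained copies of each series.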
Multiscale modeling of sickle anemia blood flow by Dissipative Particle Dynamics
NASA Astrophysics Data System (ADS)
Lei, Huan; Caswell, Bruce; Karniadakis, George
2011-11-01
A multi-scale model for the sickle red blood cell is developed based on Dissipative Particle Dynamics (DPD). Different cell morphologies (sickle, granular, elongated shapes) typically observed in vitro and in vivo are constructed, and the deviations from the biconcave shape are quantified by the asphericity and elliptical shape factors. The rheology of sickle blood is studied in both shear and pipe flow systems. The flow resistance obtained from both systems exhibits a larger value than that of healthy blood flow due to the abnormal cell properties. However, the vaso-occlusion phenomenon, reported in a recent microfluidic experiment, is not observed in the pipe flow system unless the adhesive interactions between sickle blood cells and the endothelium are properly introduced into the model.
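The DPD solver itself is far beyond a sketch, but the asphericity factor used to quantify deviation from the biconcave shape can be computed from the gyration tensor of the cell's particle positions. The standard gyration-tensor definition below is an assumption about the exact form used:

```python
import numpy as np

def asphericity(points):
    """Asphericity shape factor from the eigenvalues of the gyration tensor:
    0 for a perfectly spherical point cloud, approaching 1 for strongly
    elongated (rod- or sickle-like) shapes."""
    p = np.asarray(points, dtype=float)
    p = p - p.mean(axis=0)                       # center of mass at origin
    G = p.T @ p / len(p)                         # 3x3 gyration tensor
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(G))
    s = l1 + l2 + l3
    return ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2) / (2.0 * s ** 2)
```

In a DPD membrane model, `points` would be the coordinates of the particles discretizing one cell, evaluated per snapshot to track shape changes under flow.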
Quantifying the Hierarchical Order in Self-Aligned Carbon Nanotubes from Atomic to Micrometer Scale.
Meshot, Eric R; Zwissler, Darwin W; Bui, Ngoc; Kuykendall, Tevye R; Wang, Cheng; Hexemer, Alexander; Wu, Kuang Jen J; Fornasiero, Francesco
2017-06-27
Fundamental understanding of structure-property relationships in hierarchically organized nanostructures is crucial for the development of new functionality, yet quantifying structure across multiple length scales is challenging. In this work, we used nondestructive X-ray scattering to quantitatively map the multiscale structure of hierarchically self-organized carbon nanotube (CNT) "forests" across 4 orders of magnitude in length scale, from 2.0 Å to 1.5 μm. Fully resolved structural features include the graphitic honeycomb lattice and interlayer walls (atomic), CNT diameter (nano), as well as the greater CNT ensemble (meso) and large corrugations (micro). Correlating orientational order across hierarchical levels revealed a cascading decrease as we probed finer structural feature sizes with enhanced sensitivity to small-scale disorder. Furthermore, we established qualitative relationships for single-, few-, and multiwall CNT forest characteristics, showing that multiscale orientational order is directly correlated with number density spanning 10⁹-10¹² cm⁻², yet order is inversely proportional to CNT diameter, number of walls, and atomic defects. Lastly, we captured and quantified ultralow-q meridional scattering features and built a phenomenological model of the large-scale CNT forest morphology, which predicted and confirmed that these features arise due to microscale corrugations along the vertical forest direction. Providing detailed structural information at multiple length scales is important for design and synthesis of CNT materials as well as other hierarchically organized nanostructures.
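Orientational order from azimuthal scattering profiles is commonly summarized by the Hermans orientation parameter S = (3<cos² φ> - 1)/2; whether the authors use exactly this form is an assumption, but it illustrates how a single order number is extracted from an azimuthal intensity profile I(φ):

```python
import numpy as np

def hermans_parameter(phi, intensity):
    """Hermans orientation parameter from an azimuthal intensity profile
    I(phi), phi measured from the reference (vertical forest) axis:
    S = 1 for perfect alignment, 0 for an isotropic distribution."""
    phi = np.asarray(phi, dtype=float)
    inten = np.asarray(intensity, dtype=float)
    w = inten * np.sin(phi)                   # solid-angle weighting
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)
    return 0.5 * (3.0 * cos2 - 1.0)
```

Evaluating S at each scattering feature (lattice peak, tube form factor, mesoscale peak) gives the per-level orientational order whose cascade across length scales is described above.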
Impact of scale on morphological spatial pattern of forest
Katarzyna Ostapowicz; Peter Vogt; Kurt H. Riitters; Jacek Kozak; Christine Estreguil
2008-01-01
Assessing and monitoring landscape pattern structure from multi-scale land-cover maps can utilize morphological spatial pattern analysis (MSPA), only if various influences of scale are known and taken into account. This paper lays part of the foundation for applying MSPA analysis in landscape monitoring by quantifying scale effects on six classes of spatial patterns...
Brett G. Dickson; Barry R. Noon; Curtis H. Flather; Stephanie Jentsch; William M. Block
2009-01-01
Landscape-scale disturbance events, including ecological restoration and fuel reduction activities, can modify habitat and affect relationships between species and their environment. To reduce the risk of uncharacteristic stand-replacing fires in the southwestern United States, land managers are implementing restoration and fuels treatments (e.g., mechanical thinning,...
Person-independent facial expression analysis by fusing multiscale cell features
NASA Astrophysics Data System (ADS)
Zhou, Lubing; Wang, Han
2013-03-01
Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska directed emotional faces databases show the superiority of the proposed method.
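The cell-averaged, LBP-style encoding underlying MC-LIIP can be sketched as follows; the exact LIIP code, multiscale fusion, and region-histogram construction of the paper are replaced here by a plain 8-neighbor comparison on cell means:

```python
import numpy as np

def cell_codes(img, cell=2):
    """LBP-style codes on cell averages: each interior cell receives an
    8-bit decimal code marking which of its 8 neighboring cells has a mean
    intensity at least as high as its own (a stand-in for the paper's
    exact LIIP encoding)."""
    h, w = img.shape
    ch, cw = h // cell, w // cell
    # per-cell mean intensities (noise-resistant compared to raw pixels)
    means = img[:ch * cell, :cw * cell].reshape(ch, cell, cw, cell).mean(axis=(1, 3))
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((ch - 2, cw - 2), dtype=int)
    center = means[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = means[1 + dy:ch - 1 + dy, 1 + dx:cw - 1 + dx]
        codes += (nb >= center).astype(int) << bit       # set bit per neighbor
    return codes
```

A descriptor in the spirit of MC-LIIP would histogram these codes per facial region, repeat for several cell sizes, and concatenate the histograms before classification.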
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
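The decompose-enhance-recombine logic can be sketched with a same-resolution band decomposition; the paper's Laplacian-of-Gaussian pyramid with downsampling, CLAHE, and morphological processing are replaced here by plain Gaussian blurs and simple per-band gains:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_bands(img, levels=3, sigma=2.0):
    """Same-resolution Laplacian-style decomposition: each band holds the
    detail removed by one more blur; the last entry is the residual
    low-pass image."""
    bands, current = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        bands.append(current - low)      # high-frequency detail at this scale
        current = low
    bands.append(current)                # residual low-pass
    return bands

def reconstruct(bands, gains=None):
    """Sum the bands back; per-band gains > 1 amplify detail (the step the
    paper instead performs with CLAHE and morphology)."""
    gains = gains if gains is not None else [1.0] * len(bands)
    return sum(g * b for g, b in zip(gains, bands))
```

With unit gains the sum telescopes back to the original image exactly, which makes the decomposition easy to verify before any per-band enhancement is applied.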
Shrink-induced silica multiscale structures for enhanced fluorescence from DNA microarrays.
Sharma, Himanshu; Wood, Jennifer B; Lin, Sophia; Corn, Robert M; Khine, Michelle
2014-09-23
We describe a manufacturable and scalable method for fabrication of multiscale wrinkled silica (SiO2) structures on shrink-wrap film to enhance fluorescence signals in DNA fluorescence microarrays. We are able to enhance the fluorescence signal of hybridized DNA by more than 120-fold relative to a planar glass slide. Notably, our substrate has improved detection sensitivity (280 pM) relative to a planar glass slide (11 nM). Furthermore, this is accompanied by a 30-45 times improvement in the signal-to-noise ratio (SNR). Unlike metal-enhanced fluorescence (MEF) based enhancements, this is a far-field and uniform effect based on surface concentration and photophysical effects from the nano- to microscale SiO2 structures. Notably, the photophysical effects contribute an almost 2.5-fold enhancement over the concentration effects alone. Therefore, this simple and robust method offers an efficient technique to enhance the detection capabilities of fluorescence-based DNA microarrays. PMID:25191785
NASA Astrophysics Data System (ADS)
Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi
2018-06-01
The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection, owing to the equipment and other external factors, and this affects further lithological identification and other logging interpretation. Therefore, to achieve more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which integrates the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) at N different scales and one residual. Moreover, a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency-aliasing problem between adjacent IMFs, a wavelet-transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to both simulated and actual data. Results show the diverse performance of these de-noising methods with regard to the accuracy of lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
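The wavelet-threshold step of the hybrid method can be illustrated in isolation. The sketch below is a one-level Haar soft-threshold denoiser in plain numpy; it is a stand-in for the (k-1)th-IMF wavelet de-noising described above, not the authors' CEEMDAN pipeline, and the threshold value is illustrative.

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation (low-pass) band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail (high-pass) band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)            # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_denoise(noisy, thresh=0.3)
# shrinking the detail band should bring the signal closer to the clean one
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

A smooth signal puts almost nothing into the detail band, so thresholding there removes mostly noise; a full pipeline would repeat this over several decomposition levels.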
Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.
Bishop, Steven M; Ercole, Ari
2018-01-01
The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from [Formula: see text] to [Formula: see text]. The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
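A toy version of the local-maxima-scalogram idea behind the (modified) Scholkmann detector can be written in a few lines: a sample counts as a peak if it exceeds its neighbours at every scale considered. This simplified sketch omits the noise handling and the single tuning parameter of the real algorithm, and the O(nL) loops are written for clarity, not speed.

```python
import numpy as np

def multiscale_peaks(x, max_scale=None):
    """Simplified local-maxima-scalogram peak detector (in the spirit of AMPD)."""
    n = len(x)
    L = max_scale or n // 4
    counts = np.zeros(n, dtype=int)
    for k in range(1, L + 1):                  # one row of the scalogram per scale
        for i in range(k, n - k):
            if x[i] > x[i - k] and x[i] > x[i + k]:
                counts[i] += 1
    return np.flatnonzero(counts == L)         # maxima at every scale considered

t = np.linspace(0, 1, 200, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)                  # five clean peaks
peaks = multiscale_peaks(x, max_scale=10)
assert len(peaks) == 5
```

Requiring a maximum at every scale is what makes the detector robust to ripples narrower than the smallest scale; real physiological data would also need the noise-aware row statistics of the published algorithm.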
Multiscale multifractal detrended-fluctuation analysis of two-dimensional surfaces
NASA Astrophysics Data System (ADS)
Wang, Fang; Fan, Qingju; Stanley, H. Eugene
2016-04-01
Two-dimensional (2D) multifractal detrended fluctuation analysis (MF-DFA) has been used to study monofractality and multifractality on 2D surfaces, but when it is used to calculate the generalized Hurst exponent in a fixed time scale, the presence of crossovers can bias the outcome. To solve this problem, multiscale multifractal analysis (MMA) was recently employed in the one-dimensional case. MMA produces a Hurst surface h(q, s) that provides a spectrum of local scaling exponents at different scale ranges such that the positions of the crossovers can be located. We apply this MMA method to a 2D surface and identify factors that influence the results. We generate several synthesized surfaces and find that crossovers are consistently present, which means that their fractal properties differ at different scales. We apply MMA to the surfaces, and the results allow us to observe these differences and accurately estimate the generalized Hurst exponents. We then study eight natural texture images and two real-world images and find (i) that the moving window length (WL) and the slide length (SL) are the key parameters in the MMA method, that the WL more strongly influences the Hurst surface than the SL, and that the combination of WL = 4 and SL = 4 is optimal for a 2D image; (ii) that the robustness of h(2, s) to four common noises is high at large scales but variable at small scales; and (iii) that the long-term correlations in the images weaken as the intensity of Gaussian noise and salt-and-pepper noise is increased. Our findings greatly improve the performance of the MMA method on 2D surfaces.
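The fluctuation analysis underlying MF-DFA can be sketched for the one-dimensional, q = 2 case: integrate the series, detrend it in windows of size s, and read the scaling exponent off the log-log slope of F(s). The following is a minimal numpy version of that building block, not the MMA Hurst-surface computation itself.

```python
import numpy as np

def dfa_hurst(x, scales):
    """One-dimensional DFA (q = 2); returns the scaling exponent."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)        # local linear detrending
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    # slope of log F(s) vs log s estimates the (q = 2) scaling exponent
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(2)
h = dfa_hurst(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
assert 0.4 < h < 0.6    # white noise scales with exponent ~ 0.5
```

MMA generalizes this by fitting the slope over many sliding scale windows s, which is exactly what exposes crossovers that a single global fit hides.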
NASA Technical Reports Server (NTRS)
Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.
2014-01-01
A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations, motivating models that yield accurate yet tractable results.
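The length-dependent Weibull sampling described above can be illustrated directly. In the weakest-link form P_f(s; L) = 1 - exp(-(L/L0)(s/sigma0)^m), inverting the CDF gives a sampler; all parameter values below are hypothetical, not the SCS-6 fiber data.

```python
import numpy as np

def sample_fiber_strengths(n, m=10.0, sigma0=3000.0, L=25.0, L0=12.5, rng=None):
    """Draw strengths from a two-parameter Weibull law with gauge-length scaling:
    P_f(s; L) = 1 - exp(-(L/L0) * (s/sigma0)**m)   (weakest-link scaling).
    Inverse-CDF sampling: s = sigma0 * (-(L0/L) * ln(1 - u))**(1/m)."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    return sigma0 * (-(L0 / L) * np.log1p(-u)) ** (1.0 / m)

long_fibers = sample_fiber_strengths(20000, L=25.0)
short_fibers = sample_fiber_strengths(20000, L=12.5)
# longer fibers sample more flaws, so their mean strength is lower
assert long_fibers.mean() < short_fibers.mean()
```

The mean strength scales as (L0/L)^(1/m), which is why a length-aware Weibull law matters when fibers of different effective gauge lengths fail inside one RUC.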
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kadoch, Benjamin; Bos, Wouter
2017-11-01
The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow-dependent and geometry-dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence, with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow disentangling geometry-dependent and flow-independent features. Finally, we also analyze trajectories of football players, who are, in general, not randomly spaced on a field.
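The basic quantity, the directional change between two subsequent displacement increments at a given time lag, is easy to compute. Below is a sketch for a synthetic 2-D random walk; for an uncorrelated walk the mean angle is pi/2, a natural memoryless baseline.

```python
import numpy as np

def directional_change(pos, lag):
    """Angles between displacement increments separated by `lag` steps,
    for a trajectory `pos` of shape (n, 2)."""
    inc = pos[lag:] - pos[:-lag]                  # displacement over the time lag
    a, b = inc[:-lag], inc[lag:]                  # two subsequent increments
    cos_t = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal((20000, 2)), axis=0)  # Brownian-like path
angles = directional_change(walk, lag=1)
# increments of an uncorrelated walk are isotropic: mean angle ~ pi/2
assert abs(angles.mean() - np.pi / 2) < 0.05
```

Sweeping `lag` over decades and plotting the mean angle against it reproduces the multiscale statistic the abstract describes; deviations from pi/2 then encode persistence or confinement.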
Speckle reduction in optical coherence tomography images based on wave atoms
Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping
2014-01-01
Optical coherence tomography (OCT) is an emerging noninvasive imaging technique, which is based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on the wave atoms transform is proposed for speckle reduction in OCT images. The wave atoms transform is a new multiscale geometric analysis tool that offers sparser expansion and better representation for images containing oscillatory patterns and textures than other traditional transforms, such as wavelet and curvelet transforms. Cycle-spinning-based technology is introduced to avoid visual artifacts, such as the Gibbs-like phenomenon, and to develop a translation-invariant wave atoms denoising scheme. The degree of speckle suppression in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove the speckle noise and improve the OCT image quality. The signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are also compared with the wavelet and curvelet thresholding techniques. PMID:24825507
NASA Astrophysics Data System (ADS)
Liang, Xiao; Zang, Yali; Dong, Di; Zhang, Liwen; Fang, Mengjie; Yang, Xin; Arranz, Alicia; Ripoll, Jorge; Hui, Hui; Tian, Jie
2016-10-01
Stripe artifacts, caused by high-absorption or high-scattering structures in the illumination light path, are a common drawback in both unidirectional and multidirectional light sheet fluorescence microscopy (LSFM), significantly deteriorating image quality. To circumvent this problem, we present an effective multidirectional stripe remover (MDSR) method based on nonsubsampled contourlet transform (NSCT), which can be used for both unidirectional and multidirectional LSFM. In MDSR, a fast Fourier transform (FFT) filter is designed in the NSCT domain to shrink the stripe components and eliminate the noise. Benefiting from the properties of being multiscale and multidirectional, MDSR succeeds in eliminating stripe artifacts in both unidirectional and multidirectional LSFM. To validate the method, MDSR has been tested on images from a custom-made unidirectional LSFM system and a commercial multidirectional LSFM system, clearly demonstrating that MDSR effectively removes most of the stripe artifacts. Moreover, we performed a comparative experiment with the variational stationary noise remover and the wavelet-FFT methods and quantitatively analyzed the results with a peak signal-to-noise ratio, showing an improved noise removal when using the MDSR method.
2016-05-23
(i) the lack of a general model for heterogeneous granular media under compaction and (ii) the lack of a reliable multiscale discrete-to-continuum framework for...dynamics. These include a continuum-discrete model of heat dissipation/diffusion and a continuum-discrete model of compaction of a granular material with...
Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.
Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin
2018-02-01
Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of the impact impulses associated with the defects is crucial for precise detection of bearing defects. However, transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH with the optimal SE scale is applied to extract the impulse components from the bearing vibration signal. Finally, fault types of the bearing are confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulated analysis and bearing vibration data from a laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings.
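The morphological core of such filters is the top-hat transform: subtracting a morphological opening removes the slowly varying background and keeps narrow impulses. The sketch below is a generic 1-D flat-structuring-element version, not the AMCMFH method itself; the FEF-based scale selection is replaced by a fixed SE size.

```python
import numpy as np

def erode(x, size):
    """Flat-SE erosion: sliding minimum over a window of half-width `size`."""
    return np.array([x[max(0, i - size):i + size + 1].min() for i in range(len(x))])

def dilate(x, size):
    """Flat-SE dilation: sliding maximum over the same window."""
    return np.array([x[max(0, i - size):i + size + 1].max() for i in range(len(x))])

def tophat(x, size):
    """White top-hat: signal minus its morphological opening (dilate after erode).
    Keeps narrow positive impulses, suppresses slowly varying background."""
    opening = dilate(erode(x, size), size)
    return x - opening

t = np.arange(500)
background = np.sin(2 * np.pi * t / 250)        # slow harmonic interference
x = background.copy()
x[100::100] += 2.0                              # periodic transient impulses
impulses = tophat(x, size=3)
assert impulses.max() > 1.5                     # impulses survive
```

Varying `size` over a range of scales and scoring each output (the role the FEF plays in the paper) would turn this fixed-scale filter into a multiscale one.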
Multi-Scale Stochastic Resonance Spectrogram for fault diagnosis of rolling element bearings
NASA Astrophysics Data System (ADS)
He, Qingbo; Wu, Enhao; Pan, Yuanyuan
2018-04-01
It is not easy to identify an incipient defect of a rolling element bearing by analyzing the vibration data because of the disturbance of background noise. The weak and unrecognizable transient fault signal of a mechanical system can be enhanced by the stochastic resonance (SR) technique, which utilizes the noise in the system. However, it is challenging for the SR technique to identify sensitive fault information in non-stationary signals. This paper proposes a new method called the multi-scale SR spectrogram (MSSRS) for bearing defect diagnosis. The new method considers the non-stationary property of defective bearing vibration signals and treats every scale of the time-frequency distribution (TFD) as a modulation system. The SR technique is then utilized on each modulation system at each frequency in the TFD. The SR results are sensitive to the defect information because the energy of the transient vibration is distributed in a limited frequency band in the TFD. Collecting the spectra of the SR outputs at all frequency scales then generates the MSSRS. The proposed MSSRS deals well with non-stationary transient signals and can highlight the defect-induced frequency component corresponding to the impulse information. Experimental results with practical defective bearing vibration data have shown that the proposed method outperforms former SR methods and exhibits a good application prospect in rolling element bearing fault diagnosis.
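The classical SR building block is the overdamped bistable oscillator driven by a sub-threshold periodic signal plus noise. Below is a minimal Euler-Maruyama sketch of that textbook model, not the MSSRS construction; all parameter values are illustrative.

```python
import numpy as np

def bistable_sr(amp, noise, n_steps=200_000, dt=0.01, freq=0.05, seed=4):
    """Euler-Maruyama integration of the overdamped bistable SR model
    dx/dt = x - x**3 + amp*sin(2*pi*freq*t) + sqrt(2*noise)*xi(t)."""
    rng = np.random.default_rng(seed)
    kicks = np.sqrt(2.0 * noise * dt) * rng.standard_normal(n_steps)
    drive = amp * np.sin(2 * np.pi * freq * np.arange(n_steps) * dt)
    x, out = 1.0, np.empty(n_steps)
    for i in range(n_steps):
        x += (x - x ** 3 + drive[i]) * dt + kicks[i]
        out[i] = x
    return out

quiet = bistable_sr(amp=0.3, noise=0.0)    # forcing alone is below the barrier
noisy = bistable_sr(amp=0.3, noise=0.12)   # added noise enables well hopping
assert np.all(quiet > 0)
assert noisy.min() < -0.5 and noisy.max() > 0.5
```

With amp below the critical tilt (2/(3*sqrt(3)) for this potential) the deterministic system stays trapped in one well; the noise-assisted hopping between wells is the resonance effect that the MSSRS exploits at every frequency scale of the TFD.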
Significant characteristics of social response to noise and vibration
NASA Technical Reports Server (NTRS)
Nishinomiya, G.
1979-01-01
Several surveys made since 1971 to investigate annoyance resulting from noise and vibration from various sources were studied in order to quantify the relation between annoyance response to noise or vibration and properties of the respondent, including factors such as noise exposure. Samples collected by the social surveys and physical measurements were analyzed by multi-dimensional analysis.
On a sparse pressure-flow rate condensation of rigid circulation models
Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.
2015-01-01
Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier–Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol’ decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed-loop boundary conditions and the abdominal aorta with open-loop boundary conditions. PMID:26671219
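The condensation idea, replacing an expensive model's outlet pressure-flow behavior with a cheap polynomial map fitted to a handful of solutions, can be sketched with ordinary least squares standing in for relevance vector machine regression. The "model" below is a toy analytic function, not a 3D hemodynamic solver; names and parameter ranges are invented for illustration.

```python
import numpy as np

def poly_surrogate(params, response, degree=3):
    """Least-squares polynomial surrogate q(p) for a scalar outlet quantity
    (a dense stand-in for the paper's sparse relevance-vector regression)."""
    V = np.vander(params, degree + 1)            # polynomial design matrix
    coef, *_ = np.linalg.lstsq(V, response, rcond=None)
    return lambda p: np.polyval(coef, p)         # highest-order coeff first

# toy "multiscale model": flow rate as a smooth function of one resistance-like parameter
p_train = np.linspace(0.5, 2.0, 20)
q_train = 1.0 / p_train + 0.05 * p_train ** 2
model = poly_surrogate(p_train, q_train, degree=3)
p_test = 1.3
assert abs(model(p_test) - (1.0 / p_test + 0.05 * p_test ** 2)) < 0.05
```

Once fitted, such a surrogate can sit inside a lumped circulation network and be evaluated millions of times at negligible cost, which is what makes uncertainty quantification tractable.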
NASA Technical Reports Server (NTRS)
Saether, Erik; Hochhalter, Jacob D.; Glaessgen, Edward H.; Mishin, Yuri
2014-01-01
A multiscale modeling methodology is developed for structurally-graded material microstructures. Molecular dynamic (MD) simulations are performed at the nanoscale to determine fundamental failure mechanisms and quantify material constitutive parameters. These parameters are used to calibrate material processes at the mesoscale using discrete dislocation dynamics (DD). Different grain boundary interactions with dislocations are analyzed using DD to predict grain-size dependent stress-strain behavior. These relationships are mapped into crystal plasticity (CP) parameters to develop a computationally efficient finite element-based DD/CP model for continuum-level simulations and complete the multiscale analysis by predicting the behavior of macroscopic physical specimens. The present analysis is focused on simulating the behavior of a graded microstructure in which grain sizes are on the order of nanometers in the exterior region and transition to larger, multi-micron size in the interior domain. This microstructural configuration has been shown to offer improved mechanical properties over homogeneous coarse-grained materials by increasing yield stress while maintaining ductility. Various mesoscopic polycrystal models of structurally-graded microstructures are generated, analyzed and used as a benchmark for comparison between the multiscale DD/CP model and DD predictions. A final series of simulations utilizes the DD/CP analysis method exclusively to study macroscopic models that cannot be analyzed by MD or DD methods alone due to the model size.
NASA Astrophysics Data System (ADS)
Li, Guang-Xing; Burkert, Andreas
2018-02-01
The interplay between gravity, turbulence and the magnetic field determines the evolution of the molecular interstellar medium (ISM) and the formation of stars. In spite of growing interest, the importance of the magnetic field over multiple scales remains poorly understood. We derive the magnetic energy spectrum - a measure that constrains the multiscale distribution of the magnetic energy - and compare it with the gravitational energy spectrum derived in Li & Burkert. In our formalism, the gravitational energy spectrum is purely determined by the surface density probability density function (PDF), and the magnetic energy spectrum is determined by both the surface density PDF and the magnetic field-density relation. If regions have density PDFs close to P(Σ) ˜ Σ^(-2) and a universal magnetic field-density relation B ˜ ρ^(1/2), we expect a multiscale near equipartition between gravity and the magnetic fields. This equipartition is found to be true in NGC 6334, where estimates of magnetic fields over multiple scales (from 0.1 pc to a few parsec) are available. However, the current observations are still limited in sample size. In the future, it is necessary to obtain multiscale measurements of magnetic fields from different clouds with different surface density PDFs and apply our formalism to further study the gravity-magnetic field interplay.
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls risk elderly subjects with a history of tripping falls were analyzed. The MTC signal from each subject was decomposed to eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between young and healthy elderly group. Results also suggest that the Beta between scales 1 to 2 are effective for recognizing falls risk gait patterns. Results have implication for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to be undertaken to avoid injurious falls.
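The scale-variance computation is straightforward with a Haar discrete wavelet transform: decompose, take the variance of the detail coefficients at each scale, and estimate the exponent beta from the slope of the log-variance progression across scales. Below is a numpy sketch on white noise (where the progression is flat), not the MTC gait data.

```python
import numpy as np

def wavelet_scale_variances(x, n_scales=6):
    """Variance of Haar detail coefficients at successive dyadic scales."""
    variances = []
    a = np.asarray(x, dtype=float)
    for _ in range(n_scales):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail signal at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation, passed down
        variances.append(d.var())
    return np.array(variances)

rng = np.random.default_rng(6)
v = wavelet_scale_variances(rng.standard_normal(2 ** 12))
scales = np.arange(1, len(v) + 1)
beta = np.polyfit(scales, np.log2(v), 1)[0]    # slope of the variance progression
assert abs(beta) < 0.3    # white noise: variance is flat across scales
```

Correlated signals such as MTC series would give a nonzero slope, and restricting the fit to a scale sub-range (e.g. scales 1 to 2, as in the abstract) yields the scale-localized exponents used for screening.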
NASA Astrophysics Data System (ADS)
Sweeney, C.; Kort, E. A.; Rella, C.; Conley, S. A.; Karion, A.; Lauvaux, T.; Frankenberg, C.
2015-12-01
Along with a boom in oil and natural gas production in the US, there has been a substantial effort to understand the true environmental impact of these operations on air and water quality, as well as net radiation balance. This multi-institution effort, funded by both governmental and non-governmental agencies, has provided a case study for identification and verification of emissions using a multi-scale, top-down approach. This approach leverages a combination of remote sensing to identify areas that need specific focus and airborne in-situ measurements to quantify both regional and large- to mid-size single-point emitters. Ground-based networks of mobile and stationary measurements provide the bottom tier of measurements from which process-level information can be gathered to better understand the specific sources and temporal distribution of the emitters. The motivation for this type of approach is largely driven by recent work in the Barnett Shale region in Texas as well as the San Juan Basin in New Mexico and Colorado; these studies suggest that relatively few single-point emitters dominate the regional emissions of CH4.
Multiscale Symbolic Phase Transfer Entropy in Financial Time Series Classification
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
We address the challenge of classifying financial time series via a newly proposed multiscale symbolic phase transfer entropy (MSPTE). Using the MSPTE method, we succeed in quantifying the strength and direction of information flow between financial systems and in classifying financial time series: the stock indices from Europe, America and China during the period from 2006 to 2016, and the stocks of the banking, aviation and pharmaceutical industries during the period from 2007 to 2016. The MSPTE analysis shows that the value of symbolic phase transfer entropy (SPTE) among stocks decreases with increasing scale factor. It is demonstrated that the MSPTE method can well divide stocks into groups by area and industry. In addition, it can be concluded that the MSPTE analysis quantifies the similarity among the stock markets. The SPTE between two stocks from the same area is far less than the SPTE between stocks from different areas. The results also indicate that four stocks from America and Europe have a relatively high degree of similarity and that the stocks of the banking and pharmaceutical industries have higher similarity for CA. It is worth mentioning that the pharmaceutical industry has a weaker particular market mechanism than the banking and aviation industries.
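A minimal cousin of SPTE can be implemented with up/down symbols in place of phase symbols: estimate T(X→Y) = Σ p(y_{t+1}, y_t, x_t) log[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)]. This sketch uses binary symbols and history length 1; the paper's phase-based symbolization and multiscale coarse-graining are omitted, and the synthetic coupled pair below is purely illustrative.

```python
import numpy as np
from collections import Counter

def symbolize(x):
    """Binary up/down symbols from first differences."""
    return (np.diff(x) > 0).astype(int)

def transfer_entropy(sx, sy):
    """TE(X -> Y) for symbol sequences, history length 1, in bits."""
    triples = Counter(zip(sy[1:], sy[:-1], sx[:-1]))
    pairs_yy = Counter(zip(sy[1:], sy[:-1]))
    pairs_yx = Counter(zip(sy[:-1], sx[:-1]))
    singles = Counter(sy[:-1])
    n = len(sy) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_both = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_y = pairs_yy[(y1, y0)] / singles[y0]   # p(y1 | y0)
        te += p_joint * np.log2(p_cond_both / p_cond_y)
    return te

rng = np.random.default_rng(5)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(5000)   # y follows x with a one-step lag
sx, sy = symbolize(x), symbolize(y)
te_xy, te_yx = transfer_entropy(sx, sy), transfer_entropy(sy, sx)
assert te_xy > te_yx    # information flows from the driver to the follower
```

The asymmetry te_xy >> te_yx is what identifies the direction of information flow; the multiscale variant would repeat the estimate on coarse-grained copies of the series.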
Beyond Darcy's law: The role of phase topology and ganglion dynamics for two-fluid flow
Armstrong, Ryan T.; McClure, James E.; Berrill, Mark A.; ...
2016-10-27
Relative permeability quantifies the ease with which immiscible phases flow through porous rock and is one of the most well known constitutive relationships for petroleum engineers. However, it exhibits troubling dependencies on experimental conditions and is not a unique function of phase saturation, as commonly accepted in industry practices. The problem lies in the multi-scale nature of the problem, where underlying disequilibrium processes create anomalous macroscopic behavior. Here we show that relative permeability rate dependencies are explained by ganglion-dynamic flow. We utilize fast X-ray micro-tomography and pore-scale simulations to identify unique flow regimes during the fractional flow of immiscible phases and quantify the contribution of ganglion flux to the overall flux of the non-wetting phase. We anticipate our approach to be the starting point for the development of sophisticated multi-scale flow models that directly link pore-scale parameters to macro-scale behavior. Such models will have a major impact on how we recover hydrocarbons from the subsurface, store sequestered CO2 in geological formations, and remove non-aqueous environmental hazards from the vadose zone.
Acoustic Prediction State of the Art Assessment
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2007-01-01
The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction, both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System-level results are shown for both aircraft and engines. Component-level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.
Noise Annoyance in Urban Children: A Cross-Sectional Population-Based Study.
Grelat, Natacha; Houot, Hélène; Pujol, Sophie; Levain, Jean-Pierre; Defrance, Jérôme; Mariet, Anne-Sophie; Mauny, Frédéric
2016-10-28
Acoustical and non-acoustical factors influencing noise annoyance in adults have been well-documented in recent years; however, similar knowledge is lacking in children. The aim of this study was to quantify the annoyance caused by chronic ambient noise at home in children and to assess the relationship between these children's noise annoyance level and individual and contextual factors in the surrounding urban area. A cross sectional population-based study was conducted including 517 children attending primary school in a European city. Noise annoyance was measured using a self-report questionnaire adapted for children. Six noise exposure level indicators were built at different locations at increasing distances from the child's bedroom window using a validated strategic noise map. Multilevel logistic models were constructed to investigate factors associated with noise annoyance in children. Noise indicators in front of the child's bedroom (p ≤ 0.01), family residential satisfaction (p ≤ 0.03) and socioeconomic characteristics of the individuals and their neighbourhood (p ≤ 0.05) remained associated with child annoyance. These findings illustrate the complex relationships between our environment, how we may perceive it, social factors and health. Better understanding of these relationships will undoubtedly allow us to more effectively quantify the actual effect of noise on human health. PMID:27801858
Multiscale-Driven approach to detecting change in Synthetic Aperture Radar (SAR) imagery
NASA Astrophysics Data System (ADS)
Gens, R.; Hogenson, K.; Ajadi, O. A.; Meyer, F. J.; Myers, A.; Logan, T. A.; Arnoult, K., Jr.
2017-12-01
Detecting changes between Synthetic Aperture Radar (SAR) images can be a useful but challenging exercise. SAR with its all-weather capabilities can be an important resource in identifying and estimating the expanse of events such as flooding, river ice breakup, earthquake damage, oil spills, and forest growth, as it can overcome shortcomings of optical methods related to cloud cover. However, detecting change in SAR imagery can be impeded by many factors including speckle, complex scattering responses, low temporal sampling, and difficulty delineating boundaries. In this presentation we use a change detection method based on a multiscale-driven approach. By using information at different resolution levels, we attempt to obtain more accurate change detection maps in both heterogeneous and homogeneous regions. Integrated within the processing flow are processes that 1) improve classification performance by combining Expectation-Maximization algorithms with mathematical morphology, 2) achieve high accuracy in preserving boundaries using measurement level fusion techniques, and 3) combine modern non-local filtering and 2D-discrete stationary wavelet transform to provide robustness against noise. This multiscale-driven approach to change detection has recently been incorporated into the Alaska Satellite Facility (ASF) Hybrid Pluggable Processing Pipeline (HyP3) using radiometrically terrain corrected SAR images. Examples primarily from natural hazards are presented to illustrate the capabilities and limitations of the change detection method.
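The full HyP3 processing flow (Expectation-Maximization classification, morphology, non-local filtering, stationary wavelets) is beyond a short excerpt, but its core idea, comparing two SAR acquisitions through a log-ratio operator and fusing the result across resolution levels, can be sketched in a few lines. Everything below (the box-filter scales, the threshold, the function names) is illustrative, not the ASF implementation:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with a (2k+1)x(2k+1) window via a summed-area table."""
    pad = np.pad(img, k, mode="edge")
    n = 2 * k + 1
    I = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = I[n:, n:] - I[:-n, n:] - I[n:, :-n] + I[:-n, :-n]
    return s / n ** 2

def multiscale_change_map(img1, img2, scales=(1, 2, 4), thresh=1.0):
    """Log-ratio change detection fused across smoothing scales.
    The log-ratio tempers multiplicative speckle; averaging the maps
    obtained at several scales trades noise robustness in homogeneous
    regions against boundary detail in heterogeneous ones."""
    lr = np.abs(np.log((img1 + 1e-6) / (img2 + 1e-6)))
    maps = [box_blur(lr, k) for k in scales]
    fused = np.mean(maps, axis=0)
    return fused > thresh
```

A pixel is flagged as changed only if the evidence survives averaging over all tested neighborhood sizes, which is the multiscale-driven idea in miniature.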
Multiscale moment-based technique for object matching and recognition
NASA Astrophysics Data System (ADS)
Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang
2000-03-01
A new method is proposed to extract features from an object for matching and recognition. The proposed features combine local and global characteristics: local characteristics from the 1-D signature function defined at each pixel on the object boundary, and global characteristics from the moments generated from the signature function. The boundary of the object is first extracted; the signature function is then generated by computing the angle between two lines from every point on the boundary as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively using moments. The moments of the signature function are thus global characteristics of a local feature set. Using moments as the eventual features instead of the signature function itself reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments for more accurate matching. The multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. The method is proposed to match and recognize objects under simple transformations such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
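The signature-plus-moments idea can be sketched as follows. The abstract does not fix the exact angle construction, so the turning-angle definition and the chord offset `m` below are assumptions; because the signature is built purely from angles, rotating or uniformly scaling the boundary leaves it, and therefore the moment descriptors, unchanged:

```python
import numpy as np

def signature(boundary, m=5):
    """Angle signature: at each boundary sample, the angle between the
    chords to the samples m steps behind and m steps ahead (a common,
    assumed construction; position/scale/rotation invariant)."""
    n = len(boundary)
    sig = np.empty(n)
    for i in range(n):
        a = boundary[(i - m) % n] - boundary[i]
        b = boundary[(i + m) % n] - boundary[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        sig[i] = np.arccos(np.clip(cosang, -1.0, 1.0))
    return sig

def central_moments(sig, orders=(2, 3, 4)):
    """Global descriptors of the local signature: its central moments."""
    mu = sig.mean()
    return np.array([np.mean((sig - mu) ** p) for p in orders])
```

Matching then reduces to comparing short moment vectors instead of whole signature curves, which is where the speed advantage claimed in the abstract comes from.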
Noise Exposure Questionnaire (NEQ): A Tool for Quantifying Annual Noise Exposure
Johnson, Tiffany A.; Cooper, Susan; Stamper, Greta C.; Chertoff, Mark
2017-01-01
Background Exposure to both occupational and non-occupational noise is recognized as a risk factor for noise-induced hearing loss (NIHL). Although audiologists routinely inquire regarding history of noise exposure, there are limited tools available for quantifying this history or for identifying those individuals who are at highest risk for NIHL. Identifying those at highest risk would allow hearing conservation activities to be focused on those individuals. Purpose To develop a detailed, task-based questionnaire for quantifying an individual’s annual noise exposure arising from both occupational and non-occupational sources (aim 1) and to develop a short screening tool that could be used to identify individuals at high risk of NIHL (aim 2). Research Design Review of relevant literature for questionnaire development followed by a cross-sectional descriptive and correlational investigation of the newly developed questionnaire and screening tool. Study Sample One hundred fourteen college freshmen completed the detailed questionnaire for estimating annual noise exposure (aim 1) and answered the potential screening questions (aim 2). An additional 59 adults participated in data collection where the accuracy of the screening tool was evaluated (aim 2). Data Collection and Analysis In study aim 1, all subjects completed the detailed questionnaire and the potential screening questions. Descriptive statistics were used to quantify subject participation in various noisy activities and their associated annual noise exposure estimates. In study aim 2, linear regression techniques were used to identify screening questions that could be used to predict a subject’s estimated annual noise exposure. Clinical decision theory was then used to assess the accuracy with which the screening tool predicted high and low risk of NIHL in a new group of subjects. 
Results Responses on the detailed questionnaire indicated that our sample of college freshmen reported high rates of participation in a variety of occupational and non-occupational activities associated with high sound levels. Although participation rates were high, annual noise exposure estimates were below highest-risk levels for many subjects because the frequency of participation in these activities was low in many cases. These data illustrate how the Noise Exposure Questionnaire (NEQ) could be used to provide detailed and specific information regarding an individual’s exposure to noise. The results of aim 2 suggest that the screening tool, the 1-Minute Noise Screen, can be used to identify those subjects with high- and low-risk noise exposure, allowing more in-depth assessment of noise exposure history to be targeted at those most at risk. Conclusions The NEQ can be used to estimate an individual’s annual noise exposure and the 1-Minute Noise Screen can be used to identify those subjects at highest risk of NIHL. These tools allow audiologists to focus hearing conservation efforts on those individuals who are most in need of those services. PMID:28054909
Evaluating metrics of local topographic position for multiscale geomorphometric analysis
NASA Astrophysics Data System (ADS)
Newman, D. R.; Lindsay, J. B.; Cockburn, J. M. H.
2018-07-01
The field of geomorphometry has increasingly moved towards the use of multiscale analytical techniques, due to the availability of fine-resolution digital elevation models (DEMs) and the inherent scale-dependency of many DEM-derived attributes such as local topographic position (LTP). LTP is useful for landform and soils mapping and numerous other environmental applications. Multiple LTP metrics have been proposed and applied in the literature; however, elevation percentile (EP) is notable for its robustness to elevation error and applicability to non-Gaussian local elevation distributions, both of which are common characteristics of DEM data sets. Multiscale LTP analysis involves the estimation of spatial patterns using a range of neighborhood sizes, traditionally achieved by applying spatial filtering techniques with varying kernel sizes. While EP can be demonstrated to provide accurate estimates of LTP, the computationally intensive method of its calculation makes it unsuited to multiscale LTP analysis, particularly at large neighborhood sizes or with fine-resolution DEMs. This research assessed the suitability of three LTP metrics for multiscale terrain characterization by quantifying their computational efficiency and by comparing their ability to approximate EP spatial patterns under varying topographic conditions. The tested LTP metrics included: deviation from mean elevation (DEV), percent elevation range (PER), and the novel relative topographic position (RTP) index. The results demonstrated that DEV, calculated using the integral image technique, offers fast and scale-invariant computation. DEV spatial patterns were strongly correlated with EP (r2 range of 0.699 to 0.967) under all tested topographic conditions. RTP was also a strong predictor of EP (r2 range of 0.594 to 0.917). PER was the weakest predictor of EP (r2 range of 0.031 to 0.801) without offering a substantial improvement in computational efficiency over RTP. 
PER was therefore determined to be unsuitable for most multiscale applications. It was concluded that the scale-invariant property offered by the integral image used by the DEV method counters the minor losses in robustness compared to EP, making DEV the optimal LTP metric for multiscale applications.
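The scale-invariant DEV computation can be sketched with summed-area (integral) tables; the standardization by the local standard deviation and the small epsilon guard are common choices assumed here, not details quoted from the paper:

```python
import numpy as np

def window_sums(img, k):
    """Sum over a (2k+1)^2 window at every pixel via a summed-area table;
    the cost is independent of k, which is the scale-invariance property."""
    pad = np.pad(img, k, mode="edge")
    n = 2 * k + 1
    I = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return I[n:, n:] - I[:-n, n:] - I[n:, :-n] + I[:-n, :-n]

def dev(dem, k):
    """Deviation from mean elevation: (z - local mean) / local std.
    Two integral images (of z and z^2) give mean and variance per window."""
    n = (2 * k + 1) ** 2
    s1 = window_sums(dem, k)
    s2 = window_sums(dem ** 2, k)
    mean = s1 / n
    var = np.maximum(s2 / n - mean ** 2, 0.0)
    return (dem - mean) / np.sqrt(var + 1e-12)
```

Sweeping `k` over a range of neighborhood sizes yields the multiscale LTP stack at a total cost linear in the number of scales, regardless of window size.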
Shih, Andrew J; Purvis, Jeremy; Radhakrishnan, Ravi
2008-12-01
The complexity in intracellular signaling mechanisms relevant for the conquest of many diseases resides at different levels of organization, with scales ranging from the subatomic realm relevant to catalytic functions of enzymes to the mesoscopic realm relevant to the cooperative association of molecular assemblies and membrane processes. Consequently, the challenge of representing and quantifying functional or dysfunctional modules within the networks remains, due to the current limitations in our understanding of mesoscopic biology, i.e., how the components assemble into functional molecular ensembles. A multiscale approach is necessary to treat a hierarchy of interactions ranging from molecular (nm, ns) to signaling (μm, ms) length and time scales, which necessitates the development and application of specialized modeling tools. Complementary to multiscale experimentation (encompassing structural biology, mechanistic enzymology, cell biology, and single molecule studies), multiscale modeling offers a powerful and quantitative alternative for the study of functional intracellular signaling modules. Here, we describe the application of a multiscale approach to signaling mediated by the ErbB1 receptor, which constitutes a network hub for the cell's proliferative, migratory, and survival programs. Through our multiscale model, we mechanistically describe how point mutations in the ErbB1 receptor can profoundly alter signaling characteristics leading to the onset of oncogenic transformations. Specifically, we describe how the point mutations induce cascading fragility mechanisms at the molecular scale as well as at the scale of the signaling network to preferentially activate the survival factor Akt. We provide a quantitative explanation for how the hallmark of preferential Akt activation in cell lines harboring the constitutively active mutant ErbB1 receptors causes these cell lines to be addicted to ErbB1-mediated generation of survival signals.
Consequently, inhibition of ErbB1 activity leads to a remarkable therapeutic response in the addicted cell lines.
Quantifying urban river-aquifer fluid exchange processes: a multi-scale problem.
Ellis, Paul A; Mackay, Rae; Rivett, Michael O
2007-04-01
Groundwater-river exchanges in an urban setting have been investigated through long term field monitoring and detailed modelling of a 7 km reach of the Tame river as it traverses the unconfined Triassic Sandstone aquifer that lies beneath the City of Birmingham, UK. Field investigations and numerical modelling have been completed at a range of spatial and temporal scales from the metre to the kilometre scale and from event (hourly) to multi-annual time scales. The objective has been to quantify the spatial and temporal flow distributions governing mixing processes at the aquifer-river interface that can affect the chemical activity in the hyporheic zone of this urbanised river. The hyporheic zone is defined to be the zone of physical mixing of river and aquifer water. The results highlight the multi-scale controls that govern the fluid exchange distributions that influence the thickness of the mixing zone between urban rivers and groundwater and the patterns of groundwater flow through the bed of the river. The morphologies of the urban river bed and the adjacent river bank sediments are found to be particularly influential in developing the mixing zone at the interface between river and groundwater. Pressure transients in the river are also found to exert an influence on velocity distribution in the bed material. Areas of significant mixing do not appear to be related to the areas of greatest groundwater discharge and therefore this relationship requires further investigation to quantify the actual remedial capacity of the physical hyporheic zone.
Large Deviations for Nonlocal Stochastic Neural Fields
2014-01-01
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
Fuzzy entropy thresholding and multi-scale morphological approach for microscopic image enhancement
NASA Astrophysics Data System (ADS)
Zhou, Jiancan; Li, Yuexiang; Shen, Linlin
2017-07-01
Microscopic images provide much useful information for modern diagnosis and biological research. However, due to unstable lighting conditions during image capture, the generated cell images suffer from two main problems: high noise levels and low contrast. In this paper, a simple but efficient enhancement framework is proposed to address these problems. The framework removes image noise using a hybrid method based on the wavelet transform and fuzzy entropy, and enhances image contrast with an adaptive morphological approach. Experiments on a real cell dataset were conducted to assess the performance of the proposed framework. The results demonstrate that the framework increases cell tracking accuracy to an average of 74.49%, outperforming the benchmark algorithm (46.18%).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gratama van Andel, H. A. F.; Venema, H. W.; Streekstra, G. J.
For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR), bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing, and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method, and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.
A Process for Assessing NASA's Capability in Aircraft Noise Prediction Technology
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2008-01-01
An acoustic assessment is being conducted by NASA that has been designed to assess the current state of the art in NASA's capability to predict aircraft-related noise and to establish baselines for gauging future progress in the field. The process for determining NASA's current capabilities includes quantifying the differences between noise predictions and measurements of noise from experimental tests. The computed noise predictions are being obtained from semi-empirical, analytical, statistical, and numerical codes. In addition, errors and uncertainties are being identified and quantified, both in the predictions and in the measured data, to further enhance the credibility of the assessment. Since the assessment project has not been fully completed, this paper presents preliminary results, based on the contributions of many researchers, and shows a select sample of the types of results obtained regarding the prediction of aircraft noise at both the system and component levels. The system-level results are for engines and aircraft. The component-level results are for fan broadband noise, for jet noise from a variety of nozzles, and for airframe noise from flaps and landing gear parts. There are also sample results for sound attenuation in lined ducts with flow and the behavior of acoustic lining in ducts.
Aviation activities represent an important and unique mode of transportation, but also impact air quality. In this study, we aim to quantify the impact of aircraft on air quality, focusing on aviation-attributable PM2.5 at scales ranging from local (a few kilometers) to continent...
Hatch, Leila T; Clark, Christopher W; Van Parijs, Sofie M; Frankel, Adam S; Ponirakis, Dimitri W
2012-12-01
The effects of chronic exposure to increasing levels of human-induced underwater noise on marine animal populations reliant on sound for communication are poorly understood. We sought to further develop methods of quantifying the effects of communication masking associated with human-induced sound on contact-calling North Atlantic right whales (Eubalaena glacialis) in an ecologically relevant area (~10,000 km²) and time period (peak feeding time). We used an array of temporary, bottom-mounted, autonomous acoustic recorders in the Stellwagen Bank National Marine Sanctuary to monitor ambient noise levels, measure levels of sound associated with vessels, and detect and locate calling whales. We related wind speed, as recorded by regional oceanographic buoys, to ambient noise levels. We used vessel-tracking data from the Automatic Identification System to quantify acoustic signatures of large commercial vessels. On the basis of these integrated sound fields, median signal excess (the difference between the signal-to-noise ratio and the assumed recognition differential) for contact-calling right whales was negative (-1 dB) under current ambient noise levels and was further reduced (-2 dB) by the addition of noise from ships. Compared with potential communication space available under historically lower noise conditions, calling right whales may have lost, on average, 63-67% of their communication space. One or more of the 89 calling whales in the study area was exposed to noise levels ≥120 dB re 1 μPa by ships for 20% of the month, and a maximum of 11 whales were exposed to noise at or above this level during a single 10-min period. These results highlight the limitations of exposure-threshold (i.e., dose-response) metrics for assessing chronic anthropogenic noise effects on communication opportunities.
Our methods can be used to integrate chronic and wide-ranging noise effects in emerging ocean-planning forums that seek to improve management of cumulative effects of noise on marine species and their habitats. ©2012 Society for Conservation Biology.
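The signal-excess bookkeeping behind the communication-space estimate can be illustrated with a toy sonar-equation model. The source level, transmission-loss coefficient and recognition differential below are placeholders, not figures from the study, and the circular-area idealization ignores the measured sound fields the authors actually integrated:

```python
def comm_radius(source_level, noise_level, rd=10.0, tl_coeff=15.0):
    """Largest range r (in metres) at which the signal excess
        SE(r) = SL - tl_coeff * log10(r) - NL - RD
    is still non-negative. tl_coeff = 15 (intermediate spreading) and
    RD = 10 dB are placeholder values."""
    return 10 ** ((source_level - noise_level - rd) / tl_coeff)

def space_lost(sl, nl_quiet, nl_now):
    """Fraction of the (idealized circular) communication area lost when
    ambient noise rises from nl_quiet to nl_now (all levels in dB)."""
    r_then = comm_radius(sl, nl_quiet)
    r_now = comm_radius(sl, nl_now)
    return 1.0 - (r_now / r_then) ** 2
```

Under this idealization the fraction of area lost depends only on the size of the noise increase, not on the caller's source level, which is why masking is naturally expressed as a proportional loss of communication space.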
Airport noise impact reduction through operations
NASA Technical Reports Server (NTRS)
Deloach, R.
1981-01-01
The airport-noise levels and annoyance model (ALAMO), developed at NASA Langley Research Center, comprises a system of computer programs capable of quantifying airport community noise impact in terms of noise level, population distribution, and human subjective response to noise. ALAMO can be used to compare the noise impact of an airport's current operating scenario with the noise impact that would result from a proposed change in airport operations. The relative effectiveness of a number of noise-impact reduction alternatives is assessed for a major midwest airport. Significant reductions in noise impact are predicted for certain noise abatement strategies, while others are shown to provide relatively little noise relief.
OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform
NASA Astrophysics Data System (ADS)
Nan, F.; Xu, Y.
2017-12-01
OBS (Ocean Bottom Seismometer) data denoising is an important step in OBS data processing and inversion, since clearer seismic phases are needed for further velocity structure analysis. Traditional methods for OBS data denoising include band-pass filtering, Wiener filtering and deconvolution (Liu, 2015), most of which are based on the Fourier transform (FT). Recently, multiscale transform methods such as the wavelet transform (WT) and the curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can all represent a signal sparsely and separate noise in the transform domain, but they suit different cases. The FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multiscale, but has poor orientation selectivity and cannot handle discontinuities along curves well. The CvT is a multiscale directional transform that represents curves with only a small number of coefficients; it provides an optimal sparse representation of objects with singularities along smooth curves, which makes it suitable for seismic data processing. Since different seismic phases in OBS data appear as discontinuous curves in the time domain, we propose to analyze the OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method suppresses the noise well and gives sparse results in the curvelet domain. Figure 1 compares the curvelet denoising method with the wavelet method at the same number of iterations and the same threshold on a synthetic example: a) original data; b) noisy data; c) data denoised with the CvT; d) data denoised with the WT. The CvT eliminates the noise well and gives a better result than the WT. We further applied the CvT denoising method to OBS data processing. Figure 2a is a common receiver gather collected in the Bohai Sea, China. The whole profile is 120 km long with 987 shots. The horizontal axis is shot number; the vertical axis is travel time reduced by 6 km/s. We used our method to process the data and obtained the denoised profile shown in Figure 2b. After denoising, most of the high-frequency noise was suppressed and the seismic phases were clearer.
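The transform-shrink-invert idea can be shown self-contained with a one-level orthonormal Haar wavelet standing in for the curvelet transform (a faithful CvT needs a dedicated library such as CurveLab), and a single pass of soft thresholding standing in for the authors' modified iterative thresholding; both substitutions are simplifications, not the paper's method:

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2-D Haar transform (even-sized input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2 (the transform is orthonormal)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2] = (a + d) / np.sqrt(2); x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft thresholding: the proximal operator of the L1 penalty."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, t):
    """Shrink detail coefficients, keep the coarse approximation."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

Because noise spreads its energy thinly across many transform coefficients while coherent phases concentrate in a few large ones, shrinking the small coefficients removes mostly noise; the curvelet domain sharpens this separation further for curve-like seismic phases.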
Noise considerations for remote detection of life signs with microwave Doppler radar.
Nguyen, Dung; Yamada, Shuhei; Park, Byung-Kwon; Lubecke, Victor; Boric-Lubecke, Olga; Host-Madsen, Anders
2007-01-01
This paper describes and quantifies three main sources of baseband noise affecting physiological signals in a direct-conversion microwave Doppler radar for life signs detection: thermal noise, residual phase noise, and flicker noise. To increase the SNR of physiological signals at baseband, the noise floor, in which flicker noise is the dominant factor, needs to be minimized. This paper shows that, given the noise characteristics of our Doppler radar, flicker noise canceling techniques may drastically reduce the power requirement for heart rate signal detection, by as much as a factor of 100.
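The three contributions can be put on a common footing by integrating each spectral density over the signal band. The sketch below uses the textbook thermal-noise term and a 1/f model whose 1 Hz intercept is purely hypothetical; residual phase noise is omitted because its level depends on range-correlation details not given in the abstract:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise(bw_hz, temp_k=290.0, nf_db=10.0):
    """Thermal noise power (W) over the baseband bandwidth, including an
    assumed receiver noise figure."""
    return K_B * temp_k * bw_hz * 10 ** (nf_db / 10)

def flicker_noise(k_1hz_w, f_lo, f_hi):
    """Flicker (1/f) noise power (W): integrating K/f over [f_lo, f_hi]
    gives K * ln(f_hi / f_lo). K, the 1 Hz intercept, is hypothetical."""
    return k_1hz_w * math.log(f_hi / f_lo)

# Cardiopulmonary signals occupy very low frequencies (band assumed here),
# which is exactly where the 1/f term dominates the floor.
band = (0.1, 3.0)
p_thermal = thermal_noise(band[1] - band[0])
p_flicker = flicker_noise(1e-15, *band)
floor_with_flicker = p_thermal + p_flicker
floor_cancelled = p_thermal  # idealized: flicker fully removed
```

With any plausible intercept the 1/f term swamps the thermal term in a sub-hertz band, which is why the abstract singles out flicker cancellation as the lever on transmit-power requirements.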
Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology
Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted
2014-01-01
The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active-contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we use the refined object identification produced by the first variant to perform the standard MIA with exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966
Burden of disease caused by local transport in Warsaw, Poland
Tainio, Marko
2015-01-01
Transport is a major source of air pollution, noise, injuries and physical activity in the urban environment. The quantification of the health risks and benefits arising from these factors would provide useful information for the planning of cost-effective mitigation actions. In this study we quantified the burden of disease caused by local transport in the city of Warsaw, Poland. The disability-adjusted life-years (DALYs) were estimated for transport related air pollution (particulate matter (PM), nitrogen oxides (NOx), sulfur dioxide (SO2), benzo[a]pyrene (BaP), cadmium, lead and nickel), noise, injuries and physical activity. Exposure to these factors was based on local and international data, and the exposure-response functions (ERFs) were based on published reviews and recommendations. The uncertainties were quantified and propagated with the Monte Carlo method. Local transport generated air pollution, noise and injuries were estimated to cause approximately 58,000 DALYs in the study area. From this burden 44% was due to air pollution and 46% due to noise. Transport related physical activity was estimated to cause a health benefit of 17,000 DALYs. Main quantified uncertainties were related to disability weight for the annoyance (due to noise) and to the ERFs for fine particulate matter (PM2.5) air pollution and walking. The results indicate that the health burden of transport could be mitigated by reducing motorized transport, which causes air pollution and noise, and by encouraging walking and cycling in the study area. PMID:26516622
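The Monte Carlo uncertainty propagation can be sketched for a single pathway (PM2.5 air pollution). All numbers below — the relative-risk distribution, the exposure level, the baseline DALY pool — are invented placeholders, not the Warsaw inputs, and real burden-of-disease work sums many such pathways:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo draws

# Hypothetical uncertain exposure-response: relative risk per 10 ug/m3,
# log-normal around 1.06 (placeholder, not the study's ERF).
pm25_rr = rng.lognormal(mean=np.log(1.06), sigma=0.02, size=N)
exposure = 25.0          # population-weighted PM2.5, ug/m3 (assumed)
baseline_daly = 40_000   # DALYs in the baseline cause pool (assumed)

# Propagate the RR uncertainty through the population attributable fraction
rr_at_exposure = pm25_rr ** (exposure / 10.0)
paf = (rr_at_exposure - 1.0) / rr_at_exposure
daly = paf * baseline_daly

median = np.median(daly)
lo, hi = np.percentile(daly, [2.5, 97.5])  # 95% uncertainty interval
```

Because every uncertain input is sampled rather than fixed, the output is a full DALY distribution, which is how the study can report which inputs (disability weight for annoyance, ERFs for PM2.5 and walking) dominate the overall uncertainty.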
Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.
Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk
2015-11-02
Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing.
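Stochastic reconstruction with rescaled correlation functions starts from measuring a correlation function at each scale. Its workhorse, the two-point probability function S2, can be computed for a binary phase map via FFT autocorrelation; this is only the measurement step (periodic boundaries assumed), not the rescaling and fusion machinery of the paper:

```python
import numpy as np

def s2(phase):
    """Two-point probability S2(dx, dy) of a binary (0/1) image: the
    probability that two points separated by (dx, dy) both fall in the
    phase, computed with periodic boundaries via the Wiener-Khinchin
    (FFT autocorrelation) identity."""
    f = np.fft.fft2(phase)
    return np.fft.ifft2(f * np.conj(f)).real / phase.size

def s2_radial(phase, rmax):
    """S2 along the x-axis for lags 0..rmax, a common 1-D summary used
    when matching reconstructions to reference images."""
    return s2(phase)[0, :rmax + 1]
```

S2 at lag zero equals the phase's volume fraction and decays toward the volume fraction squared at large lags; matching these curves (after rescaling lags between resolutions) is what ties the macro, micro and nano images together in the fusion.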
Dudley, Joel T; Listgarten, Jennifer; Stegle, Oliver; Brenner, Steven E; Parts, Leopold
2015-01-01
Advances in molecular profiling and sensor technologies are expanding the scope of personalized medicine beyond genotypes, providing new opportunities for developing richer and more dynamic multi-scale models of individual health. Recent studies demonstrate the value of scoring high-dimensional microbiome, immune, and metabolic traits from individuals to inform personalized medicine. Efforts to integrate multiple dimensions of clinical and molecular data towards predictive multi-scale models of individual health and wellness are already underway. Improved methods for mining and discovery of clinical phenotypes from electronic medical records and technological developments in wearable sensor technologies present new opportunities for mapping and exploring the critical yet poorly characterized "phenome" and "envirome" dimensions of personalized medicine. There are ambitious new projects underway to collect multi-scale molecular, sensor, clinical, behavioral, and environmental data streams from large population cohorts longitudinally to enable more comprehensive and dynamic models of individual biology and personalized health. Personalized medicine stands to benefit from inclusion of rich new sources and dimensions of data. However, realizing these improvements in care relies upon novel informatics methodologies, tools, and systems to make full use of these data to advance both the science and translational applications of personalized medicine.
Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock
Gerke, Kirill M.; Karsanina, Marina V.; Mallants, Dirk
2015-01-01
Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing. PMID:26522938
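The merging step rests on correlation functions measured at each resolution and rescaled to a common grid. As a loose illustration (not the authors' code), the sketch below computes the two-point probability function S2 of a toy binary image and linearly resamples it to a finer grid; the image and the rescaling factor are invented.

```python
# Sketch of the two-point correlation function S2(r) used in stochastic
# reconstruction, and its rescaling to a common voxel size when merging
# images of different resolution. Illustrative only; the actual pipeline
# (e.g. annealing-based reconstruction) is far more involved.

def s2(image, r):
    """Probability that two pixels separated by lag r (along rows) both
    belong to phase 1, for a 2D binary image given as a list of lists."""
    hits = total = 0
    for row in image:
        for x in range(len(row) - r):
            total += 1
            hits += row[x] and row[x + r]
    return hits / total

def rescale_s2(s2_values, factor):
    """Resample S2 measured on a coarse grid onto a grid `factor` times
    finer by linear interpolation (a 'rescaled correlation function')."""
    out = []
    n = len(s2_values)
    for i in range((n - 1) * factor + 1):
        t = i / factor
        j = min(int(t), n - 2)
        frac = t - j
        out.append(s2_values[j] * (1 - frac) + s2_values[j + 1] * frac)
    return out

img = [[1, 1, 0, 0], [1, 0, 0, 1], [0, 0, 1, 1]]
print(s2(img, 0))   # -> 0.5, the volume fraction of phase 1
print(s2(img, 1))
print(rescale_s2([1.0, 0.5, 0.0], 2))
```

At lag 0, S2 reduces to the phase-1 volume fraction, which is why matching S2 across scales also matches porosity.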
Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance
NASA Astrophysics Data System (ADS)
Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.
2008-04-01
This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or improvements on existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can modify a recorded aircraft sound with jury tests makes it possible to quantify the human perception of aircraft noise.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dsilva, Carmeline J., E-mail: cdsilva@princeton.edu; Talmon, Ronen, E-mail: ronen.talmon@yale.edu; Coifman, Ronald R., E-mail: coifman@math.yale.edu
2013-11-14
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Nonlinear intrinsic variables and state reconstruction in multiscale simulations
NASA Astrophysics Data System (ADS)
Dsilva, Carmeline J.; Talmon, Ronen; Rabin, Neta; Coifman, Ronald R.; Kevrekidis, Ioannis G.
2013-11-01
Finding informative low-dimensional descriptions of high-dimensional simulation data (like the ones arising in molecular dynamics or kinetic Monte Carlo simulations of physical and chemical processes) is crucial to understanding physical phenomena, and can also dramatically assist in accelerating the simulations themselves. In this paper, we discuss and illustrate the use of nonlinear intrinsic variables (NIV) in the mining of high-dimensional multiscale simulation data. In particular, we focus on the way NIV allows us to functionally merge different simulation ensembles, and different partial observations of these ensembles, as well as to infer variables not explicitly measured. The approach relies on certain simple features of the underlying process variability to filter out measurement noise and systematically recover a unique reference coordinate frame. We illustrate the approach through two distinct sets of atomistic simulations: a stochastic simulation of an enzyme reaction network exhibiting both fast and slow time scales, and a molecular dynamics simulation of alanine dipeptide in explicit water.
Change Detection of Remote Sensing Images by Dt-Cwt and Mrf
NASA Astrophysics Data System (ADS)
Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.
2017-05-01
To address the significant loss of high-frequency information during noise reduction, and the assumption of pixel independence, in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-tree Complex Wavelet Transform (DT-CWT) and the Markov Random Field (MRF) model. This method first performs multi-scale decomposition of the difference image by the DT-CWT and extracts the change characteristics in high-frequency regions by using an MRF-based segmentation algorithm. Then our method estimates the final maximum a posteriori (MAP) solution according to the segmentation algorithm of iterated conditional modes (ICM) based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the segmentation results of each layer by using the proposed fusion rule to obtain the mask of the final change detection result. The experimental results show that the proposed method achieves higher precision and robust performance.
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burley, Casey L.; Guo, Yueping
2016-01-01
Aircraft system noise predictions have been performed for NASA modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions evolved over the period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.
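The 10,000-simulation Monte Carlo procedure can be pictured with a toy cumulative-level model. Everything below (component names, nominal levels, uncertainty magnitudes) is invented for illustration; only the mechanics mirror the described process: random draws, decibel summation, and reading the 2.5th/97.5th percentiles for a 95% band.

```python
import math
import random

random.seed(0)

components = {          # nominal dB level, assumed 1-sigma spread (invented)
    "fan":      (78.0, 1.5),
    "jet":      (74.0, 1.0),
    "airframe": (72.0, 0.8),
}

def total_level(levels):
    """Energy (decibel) sum of the component levels."""
    return 10 * math.log10(sum(10 ** (lvl / 10) for lvl in levels))

samples = sorted(
    total_level(random.gauss(mu, sigma) for mu, sigma in components.values())
    for _ in range(10_000)
)

# 95% confidence band: 2.5th and 97.5th percentiles of the simulated totals.
lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
print(f"95% band: {lo:.2f} .. {hi:.2f} dB (width {hi - lo:.2f})")
```

The band width, not the absolute level, is the analogue of the 12.2 and 9.6 EPNL dB uncertainty figures quoted above.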
A new multi-scale method to reveal hierarchical modular structures in biological networks.
Jiao, Qing-Ju; Huang, Yan; Shen, Hong-Bin
2016-11-15
Biological networks are effective tools for studying molecular interactions. Modular structure, in which genes or proteins may tend to be associated with functional modules or protein complexes, is a remarkable feature of biological networks. Mining modular structure from biological networks enables us to focus on a set of potentially important nodes, which provides a reliable guide to future biological experiments. The first fundamental challenge in mining modular structure from biological networks is that the quality of the observed network data is usually low owing to noise and incompleteness in the obtained networks. The second problem that poses a challenge to existing approaches to the mining of modular structure is that the organization of both functional modules and protein complexes in networks is far more complicated than was ever thought. For instance, the sizes of different modules vary considerably from each other and they often form multi-scale hierarchical structures. To solve these problems, we propose a new multi-scale protocol for mining modular structure (named ISIMB) driven by a node similarity metric, which works in an iteratively converged space to reduce the effects of the low data quality of the observed network data. The multi-scale node similarity metric couples both the local and the global topology of the network with a resolution regulator. By varying this resolution regulator to give different weightings to the local and global terms in the metric, the ISIMB method is able to fit the shape of modules and to detect them on different scales. Experiments on protein-protein interaction and genetic interaction networks show that our method can not only mine functional modules and protein complexes successfully, but can also predict functional modules from specific to general and reveal the hierarchical organization of protein complexes.
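The resolution-regulated mix of local and global topology can be sketched generically. The metric below (Jaccard neighborhood overlap as the local term, inverse shortest-path distance as the global term, regulator `lam`) is a hypothetical stand-in, not the actual ISIMB similarity; the toy graph is invented.

```python
from collections import deque

# Hedged sketch of a resolution-tunable node similarity: `lam` -> 1 favors
# tight local modules, `lam` -> 0 sees coarser, global structure.

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src by breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def similarity(adj, u, v, lam):
    local = len(adj[u] & adj[v]) / len(adj[u] | adj[v])   # Jaccard overlap
    d = bfs_dist(adj, u).get(v)
    global_term = 0.0 if d is None else 1.0 / d
    return lam * local + (1 - lam) * global_term

adj = {                      # toy interaction graph (invented)
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}

print(similarity(adj, "a", "b", lam=1.0))   # share neighbor c
print(similarity(adj, "a", "e", lam=1.0))   # no common neighbors -> 0.0
print(similarity(adj, "a", "e", lam=0.0))   # distant but connected
```

Sweeping `lam` changes which node pairs look similar, which is the mechanism that lets a multi-scale protocol detect modules of very different sizes.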
NASA Astrophysics Data System (ADS)
AL-Milaji, Karam N.
Examples of superhydrophobic surfaces found in nature such as the self-cleaning property of the lotus leaf and the water-walking ability of the water strider have led to an extensive investigation in this area over the past few decades. When a water droplet rests on a textured surface, it may either form a liquid-solid-vapor composite interface by which the liquid droplet partially sits on air pockets or it may wet the surface in which the water replaces the trapped air depending on the surface roughness and the surface chemistry. Super water repellent surfaces have numerous applications in our daily life such as drag reduction, anti-icing, anti-fogging, energy conservation, noise reduction, and self-cleaning. In fact, the same concept could be applied in designing and producing surfaces that repel organic contaminations (e.g. low surface tension liquids). However, superoleophobic surfaces are more challenging to fabricate than superhydrophobic surfaces since the combination of multiscale roughness with re-entrant or overhang structure and surface chemistry must be provided. In this study, simple, cost-effective and potentially scalable techniques, i.e., airbrush and electrospray, were employed to fabricate superhydrophobic and superoleophobic coatings with random and patterned multiscale surface roughness. Different types of silicon dioxide were utilized in this work in order to study and characterize the effect of surface morphology and surface roughness on surface wettability. The experimental findings indicated that super liquid repellent surfaces with high apparent contact angles and extremely low sliding angles were successfully fabricated by combining re-entrant structure, multiscale surface roughness, and low surface energy obtained from chemically treating the fabricated surfaces.
In addition to that, the experimental observations regarding producing textured surfaces in mask-assisted electrospray were further validated by simulating the actual working conditions and geometries using COMSOL Multiphysics.
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement-measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy. PMID:29414893
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-02-07
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement-measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual quality and accuracy.
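The first step of AMDE-SR, choosing the maximum-entropy frame as the reference image, reduces to computing the Shannon entropy of each image's gray-level histogram. A minimal sketch with invented toy frames:

```python
import math

# Select the reference image as the frame with maximum information entropy,
# following the first step described in the abstract. Frames are toy data.

def shannon_entropy(pixels):
    """Entropy (bits) of the gray-level histogram of an image."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

frames = {
    "t0": [0, 0, 0, 0, 1, 1, 1, 1],   # two gray levels  -> 1 bit
    "t1": [0, 1, 2, 3, 0, 1, 2, 3],   # four gray levels -> 2 bits
    "t2": [5, 5, 5, 5, 5, 5, 5, 5],   # flat image       -> 0 bits
}

reference = max(frames, key=lambda k: shannon_entropy(frames[k]))
print(reference)   # -> t1, the most informative frame
```

A flat (low-information) frame scores zero, so it can never be chosen as the reference, which is the point of the entropy criterion.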
Multiscale geometric modeling of macromolecules I: Cartesian representation
NASA Astrophysics Data System (ADS)
Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2014-01-01
This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. 
Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the polarized curvature, for the prediction of protein binding sites.
2016-07-01
characteristics and to examine the sensitivity of using such techniques for evaluating microstructure. In addition to the GUI tool, a manual describing its use has... Evaluating Local Primary Dendrite Arm Spacing Characterization Techniques Using Synthetic Directionally Solidified Dendritic Microstructures, Metallurgical and... driven approach for quantifying materials uncertainty in creep deformation and failure of aerospace materials, Multi-scale Structural Mechanics and
NASA Astrophysics Data System (ADS)
van der Linden, Joost H.; Narsilio, Guillermo A.; Tordesillas, Antoinette
2016-08-01
We present a data-driven framework to study the relationship between fluid flow at the macroscale and the internal pore structure, across the micro- and mesoscales, in porous, granular media. Sphere packings with varying particle size distribution and confining pressure are generated using the discrete element method. For each sample, a finite element analysis of the fluid flow is performed to compute the permeability. We construct a pore network and a particle contact network to quantify the connectivity of the pores and particles across the mesoscopic spatial scales. Machine learning techniques for feature selection are employed to identify sets of microstructural properties and multiscale complex network features that optimally characterize permeability. We find a linear correlation (in log-log scale) between permeability and the average closeness centrality of the weighted pore network. With the pore network links weighted by the local conductance, the average closeness centrality represents a multiscale measure of efficiency of flow through the pore network in terms of the mean geodesic distance (or shortest path) between all pore bodies in the pore network. Specifically, this study objectively quantifies a hypothesized link between high permeability and efficient shortest paths that thread through relatively large pore bodies connected to each other by high conductance pore throats, embodying connectivity and pore structure.
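The key quantity here, the average closeness centrality of the conductance-weighted pore network, can be sketched with a shortest-path computation. The 4-pore network and conductance values below are invented; link distances are taken as inverse conductance so that high-conductance throats give short paths, matching the abstract's interpretation of efficient flow routes.

```python
import heapq

# Average closeness centrality of a small conductance-weighted pore network.
# Closeness of a node = (n - 1) / sum of shortest-path distances to all others.

def dijkstra(graph, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def closeness(graph, node):
    dist = dijkstra(graph, node)
    others = [d for n, d in dist.items() if n != node]
    return len(others) / sum(others)

# Toy pore throats with conductances; distance = 1 / conductance.
conductance = {("a", "b"): 2.0, ("b", "c"): 1.0, ("c", "d"): 4.0, ("a", "d"): 0.5}
graph = {n: [] for pair in conductance for n in pair}
for (u, v), g in conductance.items():
    graph[u].append((v, 1.0 / g))
    graph[v].append((u, 1.0 / g))

avg_closeness = sum(closeness(graph, n) for n in graph) / len(graph)
print(avg_closeness)
```

Raising a throat's conductance shortens the geodesics that pass through it and raises the average closeness, which is the direction of the permeability correlation reported above.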
A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series
NASA Astrophysics Data System (ADS)
Ghanbarzadeh, Mitra; Aminghafari, Mina
2016-02-01
Singular spectral analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the results are very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of parameters and the fact that it takes into account the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
Some physical and psychological aspects of noise attenuation by vegetation
Donald E. Aylor
1977-01-01
The physical mechanisms governing sound attenuation by foliage, stems, and ground are reviewed. Reflection of sound energy is found to be the primary mechanism. In addition, new experimental results are discussed that help to quantify the psychological effect of a plant barrier on perceived noise level. Listeners judged the loudness of noise transmitted through hemlock...
Validation of FHWA's traffic noise model (TNM) : phase 1
DOT National Transportation Integrated Search
2002-08-01
The Volpe Center Acoustics Facility, in support of the Federal Highway Administration (FHWA) and the California Department of Transportation (Caltrans), has been conducting a study to quantify and assess the accuracy of FHWA's Traffic Noise Model (TNM).
Development of multiscale complexity and multifractality of fetal heart rate variability.
Gierałtowski, Jan; Hoyer, Dirk; Tetschke, Florian; Nowack, Samuel; Schneider, Uwe; Zebrowski, Jan
2013-11-01
During fetal development a complex system grows and coordination over multiple time scales is formed towards an integrated behavior of the organism. Since essential cardiovascular and associated coordination is mediated by the autonomic nervous system (ANS) and the ANS activity is reflected in recordable heart rate patterns, multiscale heart rate analysis is a tool predestined for the diagnosis of prenatal maturation. Analyses over multiple time scales require sufficiently long data sets, while the recordings of fetal heart rate as well as the behavioral states studied are themselves short. Care must be taken that the analysis methods used are appropriate for short data lengths. We investigated multiscale entropy and multifractal scaling exponents from 30 minute recordings of 27 normal fetuses, aged between 23 and 38 weeks of gestational age (WGA), during the quiet state. In multiscale entropy, we found complexity lower than that of non-correlated white noise over all 20 coarse graining time scales investigated. A significant maturation-related complexity increase was most strongly expressed at scale 2, using both sample entropy and generalized mutual information as complexity estimates. Multiscale multifractal analysis (MMA), in which the Hurst surface h(q,s) is calculated, where q is the multifractal parameter and s is the scale, was applied to the fetal heart rate data. MMA is a method derived from detrended fluctuation analysis (DFA). We modified the base algorithm of MMA to be applicable for short time series analysis using overlapping data windows and a reduction of the scale range. We looked for such q and s for which the Hurst exponent h(q,s) is most correlated with gestational age. We used this value of the Hurst exponent to predict the gestational age based only on fetal heart rate variability properties. Comparison with the true age of the fetus gave satisfying results (error 2.17±3.29 weeks; p<0.001; R²=0.52).
In addition, we found that the normally used DFA scale range is non-optimal for fetal age evaluation. We conclude that 30 min recordings are appropriate and sufficient for assessing fetal age by multiscale entropy and multiscale multifractal analysis. The predominant prognostic role of scale 2 heart beats for MSE and scale 39 heart beats (at q=-0.7) for MMA can be explored neither by single-scale complexity measures nor by standard detrended fluctuation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
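Both multiscale entropy ingredients named here, coarse-graining at scale tau and sample entropy of the coarse-grained series, can be sketched compactly. This is a generic textbook version (m=2, fixed tolerance r), not the authors' implementation; the strictly alternating toy series is invented.

```python
import math

def coarse_grain(x, tau):
    """Non-overlapping window averages of length tau (MSE coarse-graining)."""
    return [sum(x[i:i + tau]) / tau for i in range(0, len(x) - tau + 1, tau)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the ratio of (m+1)-template matches to m-matches."""
    def matches(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    a, b = matches(m + 1), matches(m)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

x = [0, 1] * 6                      # strictly alternating toy series
print(coarse_grain(x, 2))           # -> [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(sample_entropy(x))            # small value: the series is highly regular
```

Note how scale-2 coarse-graining collapses the alternation entirely; this loss of structure at particular scales is exactly what an MSE curve tracks across tau.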
Intrapartum fetal heart rate classification from trajectory in Sparse SVM feature space.
Spilka, J; Frecon, J; Leonarduzzi, R; Pustelnik, N; Abry, P; Doret, M
2015-01-01
Intrapartum fetal heart rate (FHR) constitutes a prominent source of information for the assessment of fetal reactions to stress events during delivery. Yet, early detection of fetal acidosis remains a challenging signal processing task. The originality of the present contribution is three-fold: multiscale representations and wavelet leader based multifractal analysis are used to quantify FHR variability; supervised classification is achieved by means of Sparse-SVM, which aims jointly to achieve optimal detection performance and to select relevant features in a multivariate setting; trajectories in the feature space, accounting for the evolution of features over time as labor progresses, are involved in the construction of indices quantifying fetal health. The classification performance permitted by this combination of tools is quantified on a large intrapartum FHR database (≃ 1250 subjects) collected at a French academic public hospital.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify signal-to-noise ratio (SNR) value of the scanning electron microscope (SEM) images is proposed. This technique is known as autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation function of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with the three existing techniques, which are nearest neighbourhood, first-order linear interpolation and nearest neighbourhood combined with first-order linear interpolation. It is shown that ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
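The abstract does not give the ACLDR equations, but its two named ingredients, an autocorrelation function and the Levinson-Durbin recursion, are standard. The sketch below fits an AR model from the sample autocorrelation and uses the recursion's final prediction-error variance as a noise-power estimate for an SNR figure; treat it as a generic illustration on a synthetic AR(1) signal, not the authors' estimator.

```python
import math
import random

def autocorr(x, maxlag):
    """Biased sample autocorrelation r[0..maxlag]."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    return [sum(xc[i] * xc[i + k] for i in range(n - k)) / n
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations by Levinson-Durbin recursion.
    Returns (AR coefficients a[1..order], final prediction-error variance)."""
    a = [0.0] * (order + 1)
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        kappa = acc / err                     # reflection coefficient
        new_a = a[:]
        new_a[k] = kappa
        for j in range(1, k):
            new_a[j] = a[j] - kappa * a[k - j]
        a, err = new_a, err * (1 - kappa ** 2)
    return a[1:], err

# Synthetic correlated "image line": AR(1) signal with unit innovation noise.
random.seed(1)
x = [0.0]
for _ in range(2000):
    x.append(0.85 * x[-1] + random.gauss(0, 1))

r = autocorr(x, 1)
coeffs, noise_var = levinson_durbin(r, 1)
snr_db = 10 * math.log10(max(r[0] - noise_var, 1e-12) / noise_var)
print(coeffs[0], noise_var, round(snr_db, 1))
```

The recursion recovers the known AR coefficient (0.85) from the autocorrelation alone, and the prediction-error variance approaches the true innovation power, which is what makes it usable as a noise estimate.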
Hemakom, Apit; Powezka, Katarzyna; Goverdovsky, Valentin; Jaffer, Usman; Mandic, Danilo P
2017-12-01
A highly localized data-association measure, termed intrinsic synchrosqueezing transform (ISC), is proposed for the analysis of coupled nonlinear and non-stationary multivariate signals. This is achieved based on a combination of noise-assisted multivariate empirical mode decomposition and short-time Fourier transform-based univariate and multivariate synchrosqueezing transforms. It is shown that the ISC outperforms six other combinations of algorithms in estimating degrees of synchrony in synthetic linear and nonlinear bivariate signals. Its advantage is further illustrated in the precise identification of the synchronized respiratory and heart rate variability frequencies among a subset of bass singers of a professional choir, where it distinctly exhibits better performance than the continuous wavelet transform-based ISC. We also introduce an extension to the intrinsic phase synchrony (IPS) measure, referred to as nested intrinsic phase synchrony (N-IPS), for the empirical quantification of physically meaningful and straightforward-to-interpret trends in phase synchrony. The N-IPS is employed to reveal physically meaningful variations in the levels of cooperation in choir singing and performing a surgical procedure. Both the proposed techniques successfully reveal degrees of synchronization of the physiological signals in two different aspects: (i) precise localization of synchrony in time and frequency (ISC), and (ii) large-scale analysis for the empirical quantification of physically meaningful trends in synchrony (N-IPS).
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Qiao, Zijian; Xu, Xuefang; Lin, Jing; Niu, Shantao
2017-09-01
Most traditional overdamped monostable, bistable and even tristable stochastic resonance (SR) methods have three shortcomings in weak characteristic extraction: (1) their potential structures, characterized by a single stable-state type, are insufficient to match the complicated and diverse mechanical vibration signals; (2) they are vulnerable to interference from multiscale noise and largely depend on highpass filters whose parameters are selected subjectively, possibly resulting in false detection; and (3) their rescaling factors are generally fixed as constants, thereby ignoring the synergistic effect among vibration signals, potential structures and rescaling factors. These three shortcomings have limited the enhancement ability of SR. To explore the SR potential, this paper initially investigates the SR in a multistable system by calculating its output spectral amplification, further analyzes its output frequency response numerically, then examines the effect of both damping and rescaling factors on output responses and finally presents a promising underdamped SR method with stable-state matching for incipient bearing fault diagnosis. This method has three advantages: (1) the diversity of stable-state types in a multistable potential makes it easy to match various vibration signals; (2) the underdamped multistable SR, equivalent to a moving nonlinear bandpass filter that is dependent on the rescaling factors, is able to suppress the multiscale noise; and (3) the synergistic effect among vibration signals, potential structures and rescaling and damping factors is achieved using quantum genetic algorithms whose fitness functions are a new weighted signal-to-noise ratio (WSNR) instead of SNR. Therefore, the proposed method is expected to possess good enhancement ability. Simulated and experimental data of rolling element bearings demonstrate its effectiveness.
The comparison results show that the proposed method is able to obtain higher amplitude at target frequency and larger output WSNR, and performs better than traditional SR methods.
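The classical overdamped bistable SR mechanism that this method generalizes can be shown in a few lines. The sketch below (quartic double-well drift x - x^3, sub-threshold periodic forcing, all parameters invented) demonstrates the core effect: without noise the weak drive cannot carry the state over the barrier, while moderate noise enables the inter-well switching that amplifies the weak signal. The paper's underdamped multistable system with optimized rescaling is considerably richer.

```python
import math
import random

# Euler-Maruyama simulation of dx/dt = x - x^3 + A*cos(w*t) + noise.
# A = 0.3 is below the static switching threshold 2/(3*sqrt(3)) ~ 0.385,
# so deterministic dynamics stay trapped in one well.

def simulate(noise_std, steps=20000, dt=0.01, amp=0.3, omega=0.5, seed=42):
    rng = random.Random(seed)
    x, trajectory = 1.0, []
    for i in range(steps):
        t = i * dt
        drift = x - x ** 3 + amp * math.cos(omega * t)
        x += drift * dt + noise_std * math.sqrt(dt) * rng.gauss(0, 1)
        trajectory.append(x)
    return trajectory

quiet = simulate(noise_std=0.0)
noisy = simulate(noise_std=0.6)

print(min(quiet) > 0)                 # True: trapped in the right well
print(min(noisy) < 0 < max(noisy))    # noise enables barrier crossings
```

The non-monotonic dependence of the output on `noise_std` (too little: no switching; too much: the signal drowns) is the resonance that the paper's rescaling and damping factors are tuned to exploit.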
Parks, Susan E; Groch, Karina; Flores, Paulo; Sousa-Lima, Renata; Urazghildiiev, Ildar R
2016-01-01
This study investigates the role of behavioral plasticity in the variation of sound production of southern right whales (Eubalaena australis) in response to changes in the ambient background noise conditions. Data were collected from southern right whales in Brazilian waters in October and November 2011. The goal of this study was to quantify differences in right whale vocalizations recorded in low background noise as a control, fish chorus noise, and vessel noise. Variation in call parameters were detected among the three background noise conditions and have implications for future studies of noise effects on whale sound production.
Optical Johnson noise thermometry
NASA Technical Reports Server (NTRS)
Shepard, R. L.; Blalock, T. V.; Maxey, L. C.; Roberts, M. J.; Simpson, M. L.
1989-01-01
A concept is being explored that an optical analog of the electrical Johnson noise may be used to measure temperature independently of emissivity. The concept is that a laser beam may be modulated on reflection from a hot surface by interaction of the laser photons with the thermally agitated conduction electrons or the lattice phonons, thereby adding noise to the reflected laser beam. If the reflectance noise can be detected and quantified in a background of other noise in the optical and signal processing systems, the reflectance noise may provide a noncontact measurement of the absolute surface temperature and may be independent of the surface's emissivity.
Noise correlations in cosmic microwave background experiments
NASA Technical Reports Server (NTRS)
Dodelson, Scott; Kosowsky, Arthur; Myers, Steven T.
1995-01-01
Many analyses of microwave background experiments neglect the correlation of noise in different frequency or polarization channels. We show that these correlations, should they be present, can lead to severe misinterpretation of an experiment. In particular, correlated noise arising from either electronics or atmosphere may mimic a cosmic signal. We quantify how the likelihood function for a given experiment varies with noise correlation, using both simple analytic models and actual data. For a typical microwave background anisotropy experiment, noise correlations at the level of 1% of the overall noise can seriously reduce the significance of a given detection.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu
2017-07-01
Health condition identification of planetary gearboxes is crucial to reduce the downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearboxes. MMSDE is proposed to quantify the regularity of time series, which can assess the dynamical characteristics over a range of scales. MMSDE has obvious advantages in the detection of dynamical changes and computation efficiency. Then, the mRMR approach is introduced to refine the fault features. Lastly, the obtained new features are fed into the least squares support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
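MMSDE's symbolization details are specific to the paper, but the multiscale backbone it shares with MSE-family measures is the standard coarse-graining step: averaging consecutive, non-overlapping windows whose length equals the scale factor. A minimal sketch:

```python
import numpy as np

def coarse_grain(x, tau):
    """Standard multiscale coarse-graining: average consecutive,
    non-overlapping windows of length tau (the scale factor).
    The entropy measure is then computed on each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // tau) * tau          # drop the incomplete trailing window
    return x[:n].reshape(-1, tau).mean(axis=1)

x = np.arange(12.0)                    # 0, 1, ..., 11
coarse_grain(x, 3)                     # → [1., 4., 7., 10.]
```

At scale 1 the series is unchanged; larger scales progressively smooth fast dynamics, which is what lets multiscale measures separate short-range noise from slower structure.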
Bright breathers in nonlinear left-handed metamaterial lattices
NASA Astrophysics Data System (ADS)
Koukouloyannis, V.; Kevrekidis, P. G.; Veldes, G. P.; Frantzeskakis, D. J.; DiMarzio, D.; Lan, X.; Radisic, V.
2018-02-01
In the present work, we examine a prototypical model for the formation of bright breathers in nonlinear left-handed metamaterial lattices. Utilizing the paradigm of nonlinear transmission lines, we build a relevant lattice and develop a quasi-continuum multiscale approximation that enables us to appreciate both the underlying linear dispersion relation and the potential for bifurcation of nonlinear states. We focus here, more specifically, on bright discrete breathers which bifurcate from the lower edge of the linear dispersion relation at wavenumber k=π. Guided by the multiscale analysis, we calculate numerically both the stable inter-site centered and the unstable site-centered members of the relevant family. We quantify the associated stability via Floquet analysis and the Peierls-Nabarro barrier of the energy difference between these branches. Finally, we explore the dynamical implications of these findings towards the potential mobility or lack thereof (pinning) of such breather solutions.
Kendrick, Paul; von Hünerbein, Sabine; Cox, Trevor J
2016-07-01
Microphone wind noise can corrupt outdoor recordings even when wind shields are used. When monitoring wind turbine noise, microphone wind noise is almost inevitable because measurements cannot be made in still conditions. The effect of microphone wind noise on two amplitude modulation (AM) metrics is quantified in a simulation, showing that even at low wind speeds of 2.5 m/s errors of over 4 dBA can result. As microphone wind noise is intermittent, a wind noise detection algorithm is used to automatically find uncorrupted sections of the recording, and so recover the true AM metrics to within ±2/±0.5 dBA.
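Published AM metrics for wind-turbine noise differ in detail; the simplified sketch below is an illustrative assumption, not the metric used in the study. It rates AM depth as the percentile spread of short-term RMS levels, which is enough to see how intermittent wind-noise bursts would inflate the reading and why gating out corrupted sections helps.

```python
import numpy as np

def am_depth_db(x, fs, block_s=0.1):
    """Crude AM metric: 100 ms RMS levels in dB, then the 5th-to-95th
    percentile spread of the level time series. (Published AM metrics
    differ in detail; this is an illustrative simplification.)"""
    block = int(fs * block_s)
    n = (len(x) // block) * block
    rms = np.sqrt(np.mean(x[:n].reshape(-1, block) ** 2, axis=1))
    levels = 20.0 * np.log10(rms + 1e-12)
    return np.percentile(levels, 95) - np.percentile(levels, 5)

# A 1 Hz amplitude-modulated tone shows clear AM; a steady tone shows none.
fs = 8000
t = np.arange(10 * fs) / fs
steady = np.sin(2 * np.pi * 500 * t)
modulated = (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)) * steady
depth_mod = am_depth_db(modulated, fs)
depth_steady = am_depth_db(steady, fs)
```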
Gandhi, Diksha; Crotty, Dominic J; Stevens, Grant M; Schmidt, Taly Gilat
2015-11-01
This technical note quantifies the dose and image quality performance of a clinically available organ-dose-based tube current modulation (ODM) technique, using experimental and simulation phantom studies. The investigated ODM implementation reduces the tube current for the anterior source positions, without increasing current for posterior positions, although such an approach was also evaluated for comparison. Axial CT scans at 120 kV were performed on head and chest phantoms on an ODM-equipped scanner (Optima CT660, GE Healthcare, Chalfont St. Giles, England). Dosimeters quantified dose to breast, lung, heart, spine, eye lens, and brain regions for ODM and 3D-modulation (SmartmA) settings. Monte Carlo simulations, validated with experimental data, were performed on 28 voxelized head phantoms and 10 chest phantoms to quantify organ dose and noise standard deviation. The dose and noise effects of increasing the posterior tube current were also investigated. ODM reduced the dose for all experimental dosimeters with respect to SmartmA, with average dose reductions across dosimeters of 31% (breast), 21% (lung), 24% (heart), 6% (spine), 19% (eye lens), and 11% (brain), with similar results for the simulation validation study. In the phantom library study, the average dose reduction across all phantoms was 34% (breast), 20% (lung), 8% (spine), 20% (eye lens), and 8% (brain). ODM increased the noise standard deviation in reconstructed images by 6%-20%, with generally greater noise increases in anterior regions. Increasing the posterior tube current provided similar dose reduction as ODM for breast and eye lens, increased dose to the spine, with noise effects ranging from 2% noise reduction to 16% noise increase. At noise equal to SmartmA, ODM increased the estimated effective dose by 4% and 8% for chest and head scans, respectively. Increasing the posterior tube current further increased the effective dose by 15% (chest) and 18% (head) relative to SmartmA. 
ODM reduced dose in all experimental and simulation studies over a range of phantoms, while increasing noise. The results suggest a net dose/noise benefit for breast and eye lens for all studied phantoms, negligible lung dose effects for two phantoms, increased lung dose and/or noise for eight phantoms, and increased dose and/or noise for brain and spine for all studied phantoms compared to the reference protocol.
Geostatistical estimation of signal-to-noise ratios for spectral vegetation indices
Ji, Lei; Zhang, Li; Rover, Jennifer R.; Wylie, Bruce K.; Chen, Xuexia
2014-01-01
In the past 40 years, many spectral vegetation indices have been developed to quantify vegetation biophysical parameters. An ideal vegetation index should contain the maximum level of signal related to specific biophysical characteristics and the minimum level of noise such as background soil influences and atmospheric effects. However, accurate quantification of signal and noise in a vegetation index remains a challenge, because it requires a large number of field measurements or laboratory experiments. In this study, we applied a geostatistical method to estimate signal-to-noise ratio (S/N) for spectral vegetation indices. Based on the sample semivariogram of vegetation index images, we used the standardized noise to quantify the noise component of vegetation indices. In a case study in the grasslands and shrublands of the western United States, we demonstrated the geostatistical method for evaluating S/N for a series of soil-adjusted vegetation indices derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. The soil-adjusted vegetation indices were found to have higher S/N values than the traditional normalized difference vegetation index (NDVI) and simple ratio (SR) in the sparsely vegetated areas. This study shows that the proposed geostatistical analysis can constitute an efficient technique for estimating signal and noise components in vegetation indices.
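The paper's "standardized noise" statistic is its own definition; a common geostatistical proxy, sketched here under that assumption, treats the semivariogram's nugget as noise variance and the sill-minus-nugget difference as signal variance.

```python
import numpy as np

def semivariogram(z, max_lag):
    """Empirical 1-D semivariogram: gamma(h) = 0.5 * mean((z[i+h] - z[i])^2)."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

def snr_from_variogram(z, max_lag=20):
    """Nugget ~ gamma near lag 0 (here approximated by gamma(1)); sill ~ the
    plateau. S/N = (sill - nugget) / nugget. A simplified proxy, not the
    paper's exact 'standardized noise' statistic."""
    g = semivariogram(z, max_lag)
    nugget, sill = g[0], g[-5:].mean()
    return (sill - nugget) / nugget

# Smooth signal plus white noise: more noise should lower the estimated S/N.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
signal = np.sin(t)
low_noise = snr_from_variogram(signal + 0.05 * rng.normal(size=t.size))
high_noise = snr_from_variogram(signal + 0.5 * rng.normal(size=t.size))
```

The intuition: spatially uncorrelated noise contributes equally at every lag (the nugget), while spatially coherent vegetation signal builds the variogram's rise toward the sill.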
2016-08-31
crack initiation and SCG mechanisms (initiation and growth versus resistance). 2. Final summary: Here, we present a hierarchical form of multiscale... 1. Prismatic faults in -Ti: a combined quantum mechanics/molecular mechanics study; 2. Nano-indentation and slip transfer (critical in understanding crack initiation); 3. An extended finite element framework (XFEM) to study SCG mechanisms; 4. Atomistic methods to develop a grain and twin boundary database.
Frequency domain analysis of noise in simple gene circuits
NASA Astrophysics Data System (ADS)
Cox, Chris D.; McCollum, James M.; Austin, Derek W.; Allen, Michael S.; Dar, Roy D.; Simpson, Michael L.
2006-06-01
Recent advances in single cell methods have spurred progress in quantifying and analyzing stochastic fluctuations, or noise, in genetic networks. Many of these studies have focused on identifying the sources of noise and quantifying its magnitude, and at the same time, paying less attention to the frequency content of the noise. We have developed a frequency domain approach to extract the information contained in the frequency content of the noise. In this article we review our work in this area and extend it to explicitly consider sources of extrinsic and intrinsic noise. First we review applications of the frequency domain approach to several simple circuits, including a constitutively expressed gene, a gene regulated by transitions in its operator state, and a negatively autoregulated gene. We then review our recent experimental study, in which time-lapse microscopy was used to measure noise in the expression of green fluorescent protein in individual cells. The results demonstrate how changes in rate constants within the gene circuit are reflected in the spectral content of the noise in a manner consistent with the predictions derived through frequency domain analysis. The experimental results confirm our earlier theoretical prediction that negative autoregulation not only reduces the magnitude of the noise but shifts its content out to higher frequency. Finally, we develop a frequency domain model of gene expression that explicitly accounts for extrinsic noise at the transcriptional and translational levels. We apply the model to interpret a shift in the autocorrelation function of green fluorescent protein induced by perturbations of the translational process as a shift in the frequency spectrum of extrinsic noise and a decrease in its weighting relative to intrinsic noise.
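The frequency-domain view described above rests on the power spectral density of fluctuations about the mean expression level: kinetic rate constants set the corner frequency of a Lorentzian-like spectrum. A minimal periodogram sketch on a synthetic first-order (Ornstein-Uhlenbeck) trace, with assumed parameters:

```python
import numpy as np

def noise_spectrum(trace, dt):
    """Periodogram of fluctuations about the mean: the frequency content of
    the noise, not just its magnitude."""
    x = trace - trace.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2 * dt / len(x)
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs, psd

# Synthetic Ornstein-Uhlenbeck trace: the relaxation rate gamma sets the
# corner frequency below which the spectrum flattens (Lorentzian shape).
rng = np.random.default_rng(2)
gamma, dt, n = 0.1, 1.0, 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] * (1 - gamma * dt) + rng.normal() * np.sqrt(dt)
freqs, psd = noise_spectrum(x, dt)
# Low-frequency power should exceed high-frequency power for this process.
```

A shift of spectral weight to higher frequencies, as reported for negative autoregulation, would show up here as a larger corner frequency (larger effective gamma).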
Noise Exposure in TKA Surgery; Oscillating Tip Saw Systems vs Oscillating Blade Saw Systems.
Peters, Michiel P; Feczko, Peter Z; Tsang, Karel; van Rietbergen, Bert; Arts, Jacobus J; Emans, Peter J
2016-12-01
Historically it has been suggested that noise-induced hearing loss (NIHL) affects approximately 50% of the orthopedic surgery personnel. This noise may be partially caused by the use of powered saw systems that are used to make the bone cuts. The first goal was to quantify and compare the noise emission of these different saw systems during total knee arthroplasty (TKA) surgery. A second goal was to estimate the occupational NIHL risk for the orthopedic surgery personnel in TKA surgery by quantifying the total daily noise emission spectrum during TKA surgery and to compare this to the Dutch Occupational Health Organization guidelines. A conventional sagittal oscillating blade system with a full oscillating blade and 2 newer oscillating tip saw systems (handpiece and blade) were compared. Noise level measurements during TKA surgery were performed during cutting and hammering; additionally, surgery noise profiles were made. The noise level was significantly lower for the oscillating tip saw systems compared to the conventional saw system, but all were in a range that can cause NIHL. The conventional system handpiece produced a considerably higher noise level than the oscillating tip handpiece. NIHL is an underestimated problem in orthopedic surgery. Solutions for decreasing the risk of hearing loss should be considered. Oscillating tip saw systems have reduced noise emission in comparison with the conventional saw system. The use of these newer systems might be a first step in decreasing hearing loss among orthopedic surgery personnel. Copyright © 2016 Elsevier Inc. All rights reserved.
Wang, Yuan; Wang, Minghuai; Zhang, Renyi; Ghan, Steven J.; Lin, Yun; Hu, Jiaxi; Pan, Bowen; Levy, Misti; Jiang, Jonathan H.; Molina, Mario J.
2014-01-01
Atmospheric aerosols affect weather and global general circulation by modifying cloud and precipitation processes, but the magnitude of cloud adjustment by aerosols remains poorly quantified and represents the largest uncertainty in estimated forcing of climate change. Here we assess the effects of anthropogenic aerosols on the Pacific storm track, using a multiscale global aerosol–climate model (GCM). Simulations of two aerosol scenarios corresponding to the present day and preindustrial conditions reveal long-range transport of anthropogenic aerosols across the north Pacific and large resulting changes in the aerosol optical depth, cloud droplet number concentration, and cloud and ice water paths. Shortwave and longwave cloud radiative forcing at the top of atmosphere are changed by −2.5 and +1.3 W m−2, respectively, by emission changes from preindustrial to present day, and an increased cloud top height indicates invigorated midlatitude cyclones. The overall increased precipitation and poleward heat transport reflect intensification of the Pacific storm track by anthropogenic aerosols. Hence, this work provides, for the first time to the authors’ knowledge, a global perspective of the effects of Asian pollution outflows from GCMs. Furthermore, our results suggest that the multiscale modeling framework is essential in producing the aerosol invigoration effect of deep convective clouds on a global scale. PMID:24733923
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
Scattering transform and LSPTSVM based fault diagnosis of rotating machinery
NASA Astrophysics Data System (ADS)
Ma, Shangjun; Cheng, Bo; Shang, Zhaowei; Liu, Geng
2018-05-01
This paper proposes an algorithm for fault diagnosis of rotating machinery to overcome the shortcomings of classical techniques, which are noise-sensitive in feature extraction and time-consuming to train. Based on the scattering transform and the least squares recursive projection twin support vector machine (LSPTSVM), the method has the advantages of high efficiency and insensitivity to noise. Using the energy of the scattering coefficients in each sub-band, the features of the vibration signals are obtained. Then, an LSPTSVM classifier is used for fault diagnosis. The new method is compared with other common methods including the proximal support vector machine, the standard support vector machine and multi-scale theory by using fault data for two systems, a motor bearing and a gear box. The results show that the new method proposed in this study is more effective for fault diagnosis of rotating machinery.
A generalized weight-based particle-in-cell simulation scheme
NASA Astrophysics Data System (ADS)
Lee, W. W.; Jenkins, T. G.; Ethier, S.
2011-03-01
A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage for such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
A Novel Defect Inspection Method for Semiconductor Wafer Based on Magneto-Optic Imaging
NASA Astrophysics Data System (ADS)
Pan, Z.; Chen, L.; Li, W.; Zhang, G.; Wu, P.
2013-03-01
Defects in semiconductor wafers may be generated during the manufacturing processes. A novel defect inspection method for semiconductor wafers is presented in this paper. The method is based on magneto-optic imaging, which involves inducing eddy current into the wafer under test and detecting the magnetic flux associated with the eddy current distribution in the wafer by exploiting the Faraday rotation effect. The generated magneto-optic image may contain noise that degrades the overall image quality; therefore, to remove this unwanted noise, an image enhancement approach using multi-scale wavelets is presented, along with an image segmentation approach based on the integration of a watershed algorithm and a clustering strategy. The experimental results show that many types of wafer defects, such as holes and scratches, can be detected by the proposed method.
Beyond Frangi: an improved multiscale vesselness filter
NASA Astrophysics Data System (ADS)
Jerman, Tim; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga
2015-03-01
Vascular diseases are among the top three causes of death in the developed countries. Effective diagnosis of vascular pathologies from angiographic images is therefore very important and usually relies on segmentation and visualization of vascular structures. To enhance the vascular structures prior to their segmentation and visualization, and to suppress non-vascular structures and image noise, the filters enhancing vascular structures are used extensively. Even though several enhancement filters are widely used, the responses of these filters are typically not uniform between vessels of different radii and, compared to the response in the central part of vessels, their response is lower at vessels' edges and bifurcations, and at vascular pathologies such as aneurysms. In this paper, we propose a novel enhancement filter based on the ratio of multiscale Hessian eigenvalues, which yields a close-to-uniform response in all vascular structures and accurately enhances the border between the vascular structures and the background. The proposed and four state-of-the-art enhancement filters were evaluated and compared on a 3D synthetic image containing tubular structures and a clinical dataset of 15 cerebral 3D digitally subtracted angiograms with manual expert segmentations. The evaluation was based on quantitative metrics of segmentation performance, computed as area under the precision-recall curve, signal-to-noise ratio of the vessel enhancement and the response uniformity within vascular structures. The proposed filter achieved the best scores in all three metrics and thus has a high potential to further improve the performance of existing or encourage the development of more advanced methods for segmentation and visualization of vascular structures.
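The core computation behind Hessian-based vesselness filters is shared across the family (what the paper refines is how the eigenvalues are combined). The sketch below is a 2-D toy ridge detector, not the paper's exact eigenvalue ratio; the eigenvalue combination and scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues_2d(img, sigma):
    """Scale-normalized Hessian eigenvalues at scale sigma (2-D analogue;
    filters like Frangi's operate on 3-D Hessians)."""
    hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    tr = (hxx + hyy) / 2.0
    disc = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy**2)
    return tr - disc, tr + disc        # lam1 <= lam2

def vesselness(img, sigmas=(1, 2, 3)):
    """Toy bright-ridge response: strongly negative lam1 across the ridge,
    |lam2| near zero along it. Maximum over scales, as in multiscale
    Hessian filters."""
    resp = np.zeros_like(img, dtype=float)
    for s in sigmas:
        lam1, lam2 = hessian_eigenvalues_2d(img.astype(float), s)
        v = np.where(lam1 < 0, -lam1 - np.abs(lam2), 0.0)
        resp = np.maximum(resp, np.clip(v, 0.0, None))
    return resp

# Synthetic bright horizontal line on a dark background.
img = np.zeros((64, 64))
img[32, 10:54] = 1.0
r = vesselness(gaussian_filter(img, 1))
# The response should peak on the line, not on the flat background.
```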
In Flight Calibration of the Magnetospheric Multiscale Mission Fast Plasma Investigation
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Gershman, Daniel J.; Gliese, Ulrik; Dorelli, John C.; Avanov, Levon A.; Rager, Amy C.; Schiff, Conrad; Pollock, Craig J.
2015-01-01
The Fast Plasma Investigation (FPI) on the Magnetospheric Multiscale mission (MMS) combines data from eight spectrometers, each with four deflection states, into a single map of the sky. Any systematic discontinuity, artifact, noise source, etc. present in this map may be incorrectly interpreted as legitimate data and incorrect conclusions reached. For this reason it is desirable to have all spectrometers return the same output for a given input, and for this output to be low in noise sources or other errors. While many missions use statistical analyses of data to calibrate instruments in flight, this process is insufficient with FPI for two reasons: (1) only a small fraction of high-resolution data is downloaded to the ground due to bandwidth limitations, and (2) the data that is downloaded is, by definition, scientifically interesting and therefore not ideal for calibration. FPI uses a suite of new tools to calibrate in flight. A new method for detection system ground calibration has been developed involving sweeping the detection threshold to fully define the pulse height distribution. This method has now been extended for use in flight as a means to calibrate the MCP voltage and threshold (together forming the operating point) of the Dual Electron Spectrometers (DES) and Dual Ion Spectrometers (DIS). A method of comparing higher energy data (which has low fractional voltage error) to lower energy data (which has a higher fractional voltage error) will be used to calibrate the high voltage outputs. Finally, a comparison of pitch angle distributions will be used to find remaining discrepancies among sensors.
The costs of chronic noise exposure for terrestrial organisms.
Barber, Jesse R; Crooks, Kevin R; Fristrup, Kurt M
2010-03-01
Growth in transportation networks, resource extraction, motorized recreation and urban development is responsible for chronic noise exposure in most terrestrial areas, including remote wilderness sites. Increased noise levels reduce the distance and area over which acoustic signals can be perceived by animals. Here, we review a broad range of findings that indicate the potential severity of this threat to diverse taxa, and recent studies that document substantial changes in foraging and anti-predator behavior, reproductive success, density and community structure in response to noise. Effective management of protected areas must include noise assessment, and research is needed to further quantify the ecological consequences of chronic noise exposure in terrestrial environments.
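The reduction in "listening area" noted above follows directly from spherical spreading: under SNR-limited detection, each 3 dB of added background noise halves the area over which a signal remains audible. A minimal sketch of that standard calculation (spherical spreading at 20 dB per distance decade is the assumed propagation model):

```python
def listening_area_fraction(noise_increase_db, spreading_db_per_decade=20.0):
    """Remaining fraction of the original listening area after background
    noise rises by noise_increase_db, assuming spherical spreading and
    detection limited by signal-to-noise ratio."""
    radius_fraction = 10.0 ** (-noise_increase_db / spreading_db_per_decade)
    return radius_fraction ** 2          # area scales with radius squared

# The classic rule of thumb: a 3 dB noise increase halves the listening area.
print(round(listening_area_fraction(3.0), 2))  # → 0.5
```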
Quantifying the role of noise on droplet decisions in bifurcating microchannels
NASA Astrophysics Data System (ADS)
Norouzi Darabad, Masoud; Vaughn, Mark; Vanapalli, Siva
2017-11-01
While many aspects of path selection of droplets flowing through a bifurcating microchannel have been studied, there are still unaddressed issues in predicting and controlling droplet traffic. One of the more important is understanding the origin of aperiodic patterns. As a new tool to investigate this phenomenon we propose monitoring the continuous time response of pressure fluctuations at different locations. Then we use time-series analysis to investigate the dynamics of the system. We suggest that natural system noise is the cause of irregularity in the traffic patterns. Using a mathematical model, we investigate the effect of noise on droplet decisions at the junction. Noise can be derived from different sources including droplet size variation, droplet spacing, and pump-induced velocity fluctuation. By analyzing different situations we explain system behavior. We also investigate the "memory" of a microfluidic system in terms of the resistance to perturbations that quantifies the allowable deviation in operating condition before the system changes state.
Fractal Branching in Vascular Trees and Networks by VESsel GENeration Analysis (VESGEN)
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia A.
2016-01-01
Vascular patterning offers an informative multi-scale, fractal readout of regulatory signaling by complex molecular pathways. Understanding such molecular crosstalk is important for physiological, pathological and therapeutic research in Space Biology and Astronaut countermeasures. When mapped out and quantified by NASA's innovative VESsel GENeration Analysis (VESGEN) software, remodeling vascular patterns become useful biomarkers that advance our understanding of the response of biology and human health to challenges such as microgravity and radiation in space environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beskardes, G. D.; Weiss, Chester J.; Everett, M. E.
2016-11-30
Electromagnetic responses reflect the interaction between applied electromagnetic fields and heterogeneous geoelectrical structures. Quantifying the relationship between multi-scale electrical properties and the observed electromagnetic response is therefore important for meaningful geologic interpretation. We present examples of near-surface electromagnetic responses whose spatial fluctuations appear on all length scales, are repeatable and fractally distributed, suggesting that the spatial fluctuations may be considered as “geologic noise”.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Claire; Bloomer, Breaunnah E.; Provis, John L.
2012-05-16
With the ever increasing demands for technologically advanced structural materials, together with emerging environmental consciousness due to climate change, geopolymer cement is fast becoming a viable alternative to traditional cements due to proven mechanical engineering characteristics and the reduction in CO2 emitted (approximately 80% less CO2 emitted compared to ordinary Portland cement). Nevertheless, much remains unknown regarding the kinetics of the molecular changes responsible for nanostructural evolution during the geopolymerization process. Here, in-situ total scattering measurements in the form of X-ray pair distribution function (PDF) analysis are used to quantify the extent of reaction of metakaolin/slag alkali-activated geopolymer binders, including the effects of various activators (alkali hydroxide/silicate) on the kinetics of the geopolymerization reaction. Restricting quantification of the kinetics to the initial ten hours of reaction does not enable elucidation of the true extent of the reaction, but using X-ray PDF data obtained after 128 days of reaction enables more accurate determination of the initial extent of reaction. The synergies between the in-situ X-ray PDF data and simulations conducted by multiscale density functional theory-based coarse-grained Monte Carlo analysis are outlined, particularly with regard to the potential for the X-ray data to provide a time scale for kinetic analysis of the extent of reaction obtained from the multiscale simulation methodology.
A multiscale analysis of coral reef topographic complexity using lidar-derived bathymetry
Zawada, D.G.; Brock, J.C.
2009-01-01
Coral reefs represent one of the most irregular substrates in the marine environment. This roughness or topographic complexity is an important structural characteristic of reef habitats that affects a number of ecological and environmental attributes, including species diversity and water circulation. Little is known about the range of topographic complexity exhibited within a reef or between different reef systems. The objective of this study was to quantify topographic complexity for a 5-km x 5-km reefscape along the northern Florida Keys reef tract, over spatial scales ranging from meters to hundreds of meters. The underlying dataset was a 1-m spatial resolution, digital elevation model constructed from lidar measurements. Topographic complexity was quantified using a fractal algorithm, which provided a multi-scale characterization of reef roughness. The computed fractal dimensions (D) are a measure of substrate irregularity and are bounded between values of 2 and 3. Spatial patterns in D were positively correlated with known reef zonation in the area. Landward regions of the study site contain relatively smooth (D ≈ 2.35) flat-topped patch reefs, which give way to rougher (D ≈ 2.5), deep, knoll-shaped patch reefs. The seaward boundary contains a mixture of substrate features, including discontinuous shelf-edge reefs, and exhibits a corresponding range of roughness values (2.28 ≤ D ≤ 2.61). © 2009 Coastal Education and Research Foundation.
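A fractal dimension D between 2 and 3 for a gridded surface can be estimated in several ways; the cube-counting sketch below is one common variant (the study's exact algorithm is not specified here, so the box sizes and counting rule are illustrative assumptions).

```python
import numpy as np

def surface_fractal_dimension(z, sizes=(2, 4, 8, 16)):
    """Cube-counting estimate of a surface's fractal dimension D (2 <= D <= 3).
    For each box size s, count the vertical boxes of height s needed to cover
    the relief in each s-by-s cell; D is the slope of log N versus log(1/s)."""
    z = np.asarray(z, dtype=float)
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, z.shape[0] - s + 1, s):
            for j in range(0, z.shape[1] - s + 1, s):
                cell = z[i:i + s, j:j + s]
                n += int(np.ceil((cell.max() - cell.min()) / s)) + 1
        counts.append(n)
    logs = np.log(1.0 / np.array(sizes))
    D, _ = np.polyfit(logs, np.log(counts), 1)
    return D

rng = np.random.default_rng(3)
smooth = np.add.outer(np.linspace(0, 8, 64), np.linspace(0, 8, 64))  # a plane
rough = smooth + 8 * rng.random((64, 64))                            # noisy relief
d_smooth = surface_fractal_dimension(smooth)   # a plane scores near D = 2
d_rough = surface_fractal_dimension(rough)     # roughness pushes D toward 3
```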
NASA Astrophysics Data System (ADS)
Borghesani, P.; Antoni, J.
2017-06-01
Second-order cyclostationary (CS2) analysis has become popular in the field of machine diagnostics and a series of digital signal processing techniques have been developed to extract CS2 components from the background noise. Among those techniques, squared envelope spectrum (SES) and cyclic modulation spectrum (CMS) have gained popularity thanks to their high computational efficiency and simple implementation. The effectiveness of CMS and SES has been previously quantified based on the hypothesis of Gaussian background noise and has led to statistical tests for the presence of CS2 peaks in squared envelope spectra and cyclic modulation spectra. However a recently established link of CMS with SES and of SES with kurtosis has exposed a potential weakness of those indicators in the case of highly leptokurtic background noise. This case is often present in practice when the machine is subjected to highly impulsive phenomena, either due to harsh operating conditions or to electric noise generated by power electronics and captured by the sensor. This study investigates and quantifies for the first time the effect of leptokurtic noise on the capabilities of SES and CMS, by analysing three progressively harsh situations: high kurtosis, infinite kurtosis and alpha-stable background noise (for which even first and second-order moments are not defined). Then the resilience of a recently proposed family of CS2 indicators, based on the log-envelope, is verified analytically, numerically and experimentally in the case of highly leptokurtic noise.
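The squared envelope spectrum itself is standard: square the magnitude of the analytic signal and look for discrete peaks at the cyclic (modulation) frequencies. The sketch below shows SES and the log-envelope variant whose resilience the paper examines; the exact indicator definitions are simplified here, and the test signal parameters are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """SES: FFT of the squared magnitude of the analytic signal. CS2
    components (e.g. bearing fault modulations) appear as discrete peaks."""
    env2 = np.abs(hilbert(x)) ** 2
    env2 = env2 - env2.mean()
    return np.fft.rfftfreq(len(x), 1 / fs), np.abs(np.fft.rfft(env2)) / len(x)

def log_envelope_spectrum(x, fs):
    """Log-envelope variant: the log compresses heavy-tailed (leptokurtic)
    noise spikes, which is the robustness property studied in the paper."""
    lenv = np.log(np.abs(hilbert(x)) ** 2 + 1e-12)
    lenv = lenv - lenv.mean()
    return np.fft.rfftfreq(len(x), 1 / fs), np.abs(np.fft.rfft(lenv)) / len(x)

# Amplitude-modulated carrier: a CS2 signal with modulation at 8 Hz,
# plus heavy-tailed (Student-t) noise standing in for impulsive interference.
fs, n = 1000, 10000
t = np.arange(n) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 120 * t)
x = x + 0.2 * np.random.default_rng(4).standard_t(df=3, size=n)
freqs, ses = squared_envelope_spectrum(x, fs)
lfreqs, lses = log_envelope_spectrum(x, fs)
# Both spectra should show their dominant peak at the 8 Hz modulation rate.
```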
A multiscale method for a robust detection of the default mode network
NASA Astrophysics Data System (ADS)
Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo
2013-11-01
The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions are increased with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at scale level; and finally, a weighted contribution of each scale by the Goodness of Fit measurement. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved at the single-subject level and at the group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or applied to subjects with low neuronal activity. Furthermore, the multiscale method could be extended for the detection of other resting-state neuronal networks.
Yang, Yu-Jiao; Wang, Shuai; Zhang, Biao; Shen, Hong-Bin
2018-06-25
As a relatively new technology to solve the three-dimensional (3D) structure of a protein or protein complex, single-particle reconstruction (SPR) of cryogenic electron microscopy (cryo-EM) images shows much superiority and is in a rapidly developing stage. Resolution measurement in SPR, which evaluates the quality of a reconstructed 3D density map, plays a critical role in promoting methodology development of SPR and structural biology. Because there is no benchmark map in the generation of a new structure, how to realize the resolution estimation of a new map is still an open problem. Existing approaches try to generate a hypothetical benchmark map by reconstructing two 3D models from two halves of the original 2D images for cross-reference, which may result in a premature estimation with a half-data model. In this paper, we report a new self-reference-based resolution estimation protocol, called SRes, that requires only a single reconstructed 3D map. The core idea of SRes is to perform a multiscale spectral analysis (MSSA) on the map through multiple size-variable masks segmenting the map. The MSSA-derived multiscale spectral signal-to-noise ratios (mSSNRs) reveal that their corresponding estimated resolutions will show a cliff jump phenomenon, indicating a significant change in the SSNR properties. The critical point on the cliff borderline is demonstrated to be the right estimator for the resolution of the map.
Extended AIC model based on high order moments and its application in the financial market
NASA Astrophysics Data System (ADS)
Mao, Xuegeng; Shang, Pengjian
2018-07-01
In this paper, an extended method of the traditional Akaike Information Criterion (AIC) is proposed to detect the volatility of time series by combining it with higher-order moments, such as skewness and kurtosis. Since measures considering higher-order moments are powerful in many aspects, the properties of asymmetry and flatness can be observed. Furthermore, in order to reduce the effect of noise and other incoherent features, we combine the extended AIC algorithm with multiscale wavelet analysis, in which the newly extended AIC algorithm is applied to wavelet coefficients at several scales and the time series are reconstructed by the wavelet transform. After that, we create AIC planes to derive the relationship among AIC values using variance, skewness and kurtosis respectively. When we test this technique on the financial market, the aim is to analyze the trend and volatility of the closing prices of stock indices and classify them. We also adopt multiscale analysis to measure the complexity of time series over a range of scales. Empirical results show that singularities of time series in the stock market can be detected via the extended AIC algorithm.
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
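The two-sided projection at the heart of two-directional two-dimensional PCA can be sketched in a few lines of numpy; this is an illustrative toy (function and variable names are ours), and the paper's SW2D2PCA additionally builds the input matrices from stationary wavelet coefficients of the EMG channels:

```python
import numpy as np

def two_directional_2dpca(mats, k_row=2, k_col=2):
    """(2D)^2 PCA sketch: reduce a stack of m x n matrices from both sides,
    without flattening them into one-dimensional vectors."""
    A = np.asarray(mats, dtype=float)
    C = A - A.mean(axis=0)
    # Column-direction scatter: sum_i C_i^T C_i (n x n)
    Gcol = np.einsum('kij,kil->jl', C, C)
    # Row-direction scatter: sum_i C_i C_i^T (m x m)
    Grow = np.einsum('kij,klj->il', C, C)
    _, Vcol = np.linalg.eigh(Gcol)      # eigh returns ascending eigenvalues
    _, Vrow = np.linalg.eigh(Grow)
    X = Vcol[:, -k_col:]                # top right-projection directions
    Z = Vrow[:, -k_row:]                # top left-projection directions
    return [Z.T @ M @ X for M in A]     # k_row x k_col feature matrices
```

Each sample is thus reduced to a small k_row × k_col feature matrix, which is what mitigates the dimensionality and small-sample-size issues of vectorized PCA.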
Multiscale Modeling of Diffusion in a Crowded Environment.
Meinecke, Lina
2017-11-01
We present a multiscale approach to model diffusion in a crowded environment and its effect on the reaction rates. Diffusion in biological systems is often modeled by a discrete space jump process in order to capture the inherent noise of biological systems, which becomes important in the low copy number regime. To model diffusion in the crowded cell environment efficiently, we compute the jump rates in this mesoscopic model from local first exit times, which account for the microscopic positions of the crowding molecules, while the diffusing molecules jump on a coarser Cartesian grid. We then extract a macroscopic description from the resulting jump rates, where the excluded volume effect is modeled by a diffusion equation with space-dependent diffusion coefficient. The crowding molecules can be of arbitrary shape and size, and numerical experiments demonstrate that those factors together with the size of the diffusing molecule play a crucial role on the magnitude of the decrease in diffusive motion. When correcting the reaction rates for the altered diffusion we can show that molecular crowding either enhances or inhibits chemical reactions depending on local fluctuations of the obstacle density.
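The macroscopic level of such a model, a diffusion equation with a space-dependent coefficient, can be sketched as an explicit, mass-conserving finite-volume update; this is an illustrative one-dimensional toy under our own discretization choices, not the paper's mesoscopic jump-rate construction:

```python
import numpy as np

def diffuse_step(u, D, dx, dt):
    """One explicit, mass-conserving step of du/dt = d/dx(D(x) du/dx)
    on a 1D grid with zero-flux boundaries. D(x) is lowered wherever
    crowding obstacles reduce the accessible volume."""
    Dh = 0.5 * (D[:-1] + D[1:])        # diffusivity at cell interfaces
    flux = -Dh * np.diff(u) / dx       # Fickian flux between neighbors
    dudt = np.zeros_like(u)
    dudt[:-1] -= flux / dx             # flux leaving each cell to the right
    dudt[1:] += flux / dx              # same flux entering the next cell
    return u + dt * dudt
```

Stability of the explicit step requires dt ≤ dx²/(2 max D); because every interface flux is added to one cell and subtracted from its neighbor, total mass is conserved exactly.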
Fault detection method for railway wheel flat using an adaptive multiscale morphological filter
NASA Astrophysics Data System (ADS)
Li, Yifan; Zuo, Ming J.; Lin, Jianhui; Liu, Jianxin
2017-02-01
This study explores the capacity of morphological analysis for railway wheel flat fault detection. A dynamic model of vehicle systems with 56 degrees of freedom was set up along with a wheel flat model to calculate the dynamic responses of the axle box. The vehicle axle box vibration signal is complicated because it not only contains the information of wheel defects, but also includes track condition information. Thus, how to effectively extract the influential features of wheels from strong background noise is a key issue for railway wheel fault detection. In this paper, an algorithm for adaptive multiscale morphological filtering (AMMF) was proposed, and its effect was evaluated on a simulated signal. This algorithm was then employed to study the axle box vibration caused by wheel flats, as well as the influence of track irregularity and vehicle running speed on diagnosis results. Finally, the effectiveness of the proposed method was verified by bench testing. Research results demonstrate that the AMMF extracts the influential characteristics of axle box vibration signals effectively and can diagnose wheel flat faults in real time.
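One common (non-adaptive) form of multiscale morphological filtering averages opening-closing and closing-opening combinations over several structuring-element lengths; the sketch below uses fixed scales and illustrative names, and omits the adaptive scale selection that distinguishes the paper's AMMF:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def multiscale_morph_filter(x, scales=(3, 5, 7, 9)):
    """Average the open-close and close-open filters of a 1D signal over
    several flat structuring-element lengths (samples). Opening removes
    positive impulses, closing removes negative ones; combining both at
    multiple scales suppresses impulsive background noise."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for s in scales:
        oc = grey_closing(grey_opening(x, size=s), size=s)
        co = grey_opening(grey_closing(x, size=s), size=s)
        out += 0.5 * (oc + co)
    return out / len(scales)
```

The structuring-element lengths would in practice be matched to the expected width of the flat-induced impacts at the sampled wheel speed.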
Guan, Shane; Vignola, Joseph; Judge, John; Turo, Diego
2015-12-01
Offshore oil and gas exploration using seismic airguns generates intense underwater pulses that could cause marine mammal hearing impairment and/or behavioral disturbances. However, few studies have investigated the resulting multipath propagation and reverberation from airgun pulses. This research uses continuous acoustic recordings collected in the Arctic during a low-level open-water shallow marine seismic survey, to measure noise levels between airgun pulses. Two methods were used to quantify noise levels during these inter-pulse intervals. The first, based on calculating the root-mean-square sound pressure level in various sub-intervals, is referred to as the increment computation method, and the second, which employs the Hilbert transform to calculate instantaneous acoustic amplitudes, is referred to as the Hilbert transform method. Analyses using both methods yield similar results, showing that the inter-pulse sound field exceeds ambient noise levels by as much as 9 dB during relatively quiet conditions. Inter-pulse noise levels are also related to the source distance, probably due to the higher reverberant conditions of the very shallow water environment. These methods can be used to quantify acoustic environment impacts from anthropogenic transient noises (e.g., seismic pulses, impact pile driving, and sonar pings) and to address potential acoustic masking affecting marine mammals.
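The two level metrics can be sketched as follows, assuming a calibrated pressure series in pascals and the usual 1 µPa underwater reference; the window length and function names are illustrative, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import hilbert

def rms_levels_db(p, fs, win_s=0.5, p_ref=1e-6):
    """Increment-computation style: rms sound pressure level (dB re p_ref)
    in consecutive sub-intervals of length win_s seconds."""
    n = int(win_s * fs)
    nwin = len(p) // n
    segs = np.asarray(p[:nwin * n], dtype=float).reshape(nwin, n)
    return 20 * np.log10(np.sqrt(np.mean(segs ** 2, axis=1)) / p_ref)

def hilbert_level_db(p, p_ref=1e-6):
    """Hilbert-transform style: instantaneous level from the magnitude
    of the analytic signal (the envelope)."""
    return 20 * np.log10(np.abs(hilbert(p)) / p_ref)
```

For a stationary tone both give the same picture (envelope level sits 3 dB above the rms level); on inter-pulse intervals the Hilbert method resolves the reverberant decay sample by sample, while the rms method trades time resolution for variance.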
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, Diksha; Schmidt, Taly Gilat, E-mail: taly.gilat-schmidt@marquette.edu; Crotty, Dominic J.
Purpose: This technical note quantifies the dose and image quality performance of a clinically available organ-dose-based tube current modulation (ODM) technique, using experimental and simulation phantom studies. The investigated ODM implementation reduces the tube current for the anterior source positions, without increasing current for posterior positions, although such an approach was also evaluated for comparison. Methods: Axial CT scans at 120 kV were performed on head and chest phantoms on an ODM-equipped scanner (Optima CT660, GE Healthcare, Chalfont St. Giles, England). Dosimeters quantified dose to breast, lung, heart, spine, eye lens, and brain regions for ODM and 3D-modulation (SmartmA) settings. Monte Carlo simulations, validated with experimental data, were performed on 28 voxelized head phantoms and 10 chest phantoms to quantify organ dose and noise standard deviation. The dose and noise effects of increasing the posterior tube current were also investigated. Results: ODM reduced the dose for all experimental dosimeters with respect to SmartmA, with average dose reductions across dosimeters of 31% (breast), 21% (lung), 24% (heart), 6% (spine), 19% (eye lens), and 11% (brain), with similar results for the simulation validation study. In the phantom library study, the average dose reduction across all phantoms was 34% (breast), 20% (lung), 8% (spine), 20% (eye lens), and 8% (brain). ODM increased the noise standard deviation in reconstructed images by 6%–20%, with generally greater noise increases in anterior regions. Increasing the posterior tube current provided similar dose reduction as ODM for breast and eye lens, increased dose to the spine, with noise effects ranging from 2% noise reduction to 16% noise increase. At noise equal to SmartmA, ODM increased the estimated effective dose by 4% and 8% for chest and head scans, respectively.
Increasing the posterior tube current further increased the effective dose by 15% (chest) and 18% (head) relative to SmartmA. Conclusions: ODM reduced dose in all experimental and simulation studies over a range of phantoms, while increasing noise. The results suggest a net dose/noise benefit for breast and eye lens for all studied phantoms, negligible lung dose effects for two phantoms, increased lung dose and/or noise for eight phantoms, and increased dose and/or noise for brain and spine for all studied phantoms compared to the reference protocol.
A laboratory study of subjective annoyance response to sonic booms and aircraft flyovers
NASA Technical Reports Server (NTRS)
Leatherwood, Jack D.; Sullivan, Brenda M.
1994-01-01
Three experiments were conducted to determine subjective equivalence of aircraft subsonic flyover noise and sonic booms. Two of the experiments were conducted in a loudspeaker-driven sonic boom simulator, and the third in a large room containing conventional loudspeakers. The sound generation system of the boom simulator had a frequency response extending to very low frequencies (about 1 Hz) whereas the large room loudspeakers were limited to about 20 Hz. Subjective equivalence between booms and flyovers was quantified in terms of the difference between the noise level of a boom and that of a flyover when the two were judged equally annoying. Noise levels were quantified in terms of the following noise descriptors: Perceived Level (PL), Perceived Noise Level (PNL), C-weighted sound exposure level (SELC), and A-weighted sound exposure level (SELA). Results from the present study were compared, where possible, to similar results obtained in other studies. Results showed that noise level differences depended upon the descriptor used, specific boom and aircraft noise events being compared and, except for the PNL descriptor, varied between the simulator and large room. Comparison of noise level differences obtained in the present study with those of other studies indicated good agreement across studies only for the PNL and SELA descriptors. Comparison of the present results with assessments of community response to high-energy impulsive sounds made by Working Group 84 of the National Research Council's Committee on Hearing, Bioacoustics, and Biomechanics (CHABA) showed good agreement when boom/flyover noise level differences were based on SELA. However, noise level differences obtained by CHABA using SELA for aircraft flyovers and SELC for booms were not in agreement with results obtained in the present study.
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm based on wave-field separation, which exploits the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use a threshold correlation filtering process to separate the valid signal and random noise effectively. Compared with the traditional EMD method, the results show that the new method of seismic random noise attenuation has a better suppression effect. The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category is the effective wave composition at larger scales; the second category is the noise part at smaller scales; the third category comprises the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, and an appropriate time window size, initial step and increment are selected to calculate the Hausdorff instantaneous dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0.
On the basis of the previous steps, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension value is less than or equal to 1.05 from each IMF component, to separate the residual noise. Using the IMF components after dimension filtering together with the effective-wave IMF components retained in the first selection for reconstruction, we obtain the de-noised result.
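A dimension-thresholding step of this kind can be illustrated with the Higuchi fractal-dimension estimator, which is easy to implement from scratch; it is only a stand-in for the Hausdorff-type dimension used above, and it ranks signals the other way (white noise near 2, smooth oscillations near 1), so the threshold direction would be reversed:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal-dimension estimate of a 1D signal: average
    normalized curve lengths L(k) over strides k, then fit the slope of
    log L(k) against log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # curve length at stride k and offset m, with Higuchi's
            # normalization factor (n - 1) / (num_steps * k) and a final 1/k
            lengths.append(np.abs(np.diff(x[idx])).sum() * (n - 1)
                           / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    return np.polyfit(np.log(1.0 / ks), np.log(lk), 1)[0]
```

Components whose estimated dimension falls on the noise side of the chosen threshold would be filtered before reconstructing the section.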
Yu, Xiuling; Lu, Shenggao
2016-12-01
Technogenic magnetic particles (TMPs) are carriers of heavy metals and organic contaminants derived from anthropogenic activities. However, little information on the relationship between heavy metals and TMP carrier phases at the micrometer scale is available. This study determined the distribution and association of heavy metals and magnetic phases in TMPs in three contaminated soils at the micrometer scale using micro-X-ray fluorescence (μ-XRF) and micro-X-ray absorption near-edge structure (μ-XANES) spectroscopy. Multiscale correlations of heavy metals in TMPs were elucidated using wavelet transform analysis. μ-XRF mapping showed that Fe was enriched and closely correlated with Co, Cr, and Pb in TMPs from steel industrial areas. Fluorescence mapping and wavelet analysis showed that ferroalloy was a major magnetic signature and heavy metal carrier in TMPs, because most heavy metals were highly associated with ferroalloy at all size scales. Multiscale analysis revealed that heavy metals in the TMPs were from multiple sources. Iron K-edge μ-XANES spectra revealed that metallic iron, ferroalloy, and magnetite were the main iron magnetic phases in the TMPs. The relative percentage of these magnetic phases depended on their emission sources. Heatmap analysis revealed that Co, Pb, Cu, Cr, and Ni were mainly derived from ferroalloy particles, while As was derived from both ferroalloy and metallic iron phases. Our results indicated the scale-dependent correlations of magnetic phases and heavy metals in TMPs. The combination of synchrotron-based X-ray microprobe techniques and multiscale analysis provides a powerful tool for identifying the magnetic phases from different sources and quantifying the association of iron phases and heavy metals at the micrometer scale. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sensory Metrics of Neuromechanical Trust.
Softky, William; Benford, Criscillia
2017-09-01
Today digital sources supply a historically unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet addiction disorder and Internet gaming disorder. Because both natural and digital sensorimotor data share common mathematical descriptions, one can quantify our informational sensorimotor needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Such metrics describe in neutral terms the informational diet human brains require to self-calibrate, allowing individuals to maintain trusting relationships. With these metrics, we define the trust humans experience using the mathematical language of computational models, that is, as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. Our hypersimplified mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction allows equally broad-brush descriptions of failure modes and solutions. For example, we model addiction in general as the result of homeostatic regulation gone awry in novel environments (sign reversal) and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. 
We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and that individuals can improve sensory and sociosensory resolution through deliberate sensory reintegration practices. We conclude that we humans are the victims of our own success, our hands so skilled they fill the world with captivating things, our eyes so innocent they follow eagerly.
The Spatial Distribution of Resolved Young Stars in Blue Compact Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Murphy, K.; Crone, M. M.
2002-12-01
We present the first results from a survey of the distribution of resolved young stars in Blue Compact Dwarf Galaxies. In order to identify the dominant physical processes driving star formation in these puzzling galaxies, we use a multi-scale cluster-finding algorithm to quantify the characteristic scales and properties of star-forming regions, from sizes smaller than 10 pc up to the size of each entire galaxy. This project was partially funded by the Lubin Chair at Skidmore College.
Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation
Wang, Yan; Swiler, Laura
2017-09-07
The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.
Multiscale Analysis of a Collapsible Respiratory Airway
NASA Astrophysics Data System (ADS)
Ghadiali, Samir; Bell, E. David; Swarts, J. Douglas
2006-11-01
The Eustachian tube (ET) is a collapsible respiratory airway that connects the nasopharynx with the middle ear (ME). The ET normally exists in a collapsed state and must be periodically opened to maintain a healthy and sterile ME. Although the inability to open the ET (i.e. ET dysfunction) is the primary etiology responsible for several common ME diseases (i.e. Otitis Media), the mechanisms responsible for ET dysfunction are not well established. To investigate these mechanisms, we developed a multi-scale model of airflow in the ET and correlated model results with experimental data obtained in healthy and diseased subjects. The computational models utilized finite-element methods to simulate fluid-structure interactions and molecular dynamics techniques to quantify the adhesive properties of mucus glycoproteins. Results indicate that airflow in the ET is highly sensitive to both the dynamics of muscle contraction and molecular adhesion forces within the ET lumen. In addition, correlation of model results with experimental data obtained in diseased subjects was used to identify the biomechanical mechanisms responsible for ET dysfunction.
Das, Debanjan; Shiladitya, Kumar; Biswas, Karabi; Dutta, Pranab Kumar; Parekh, Aditya; Mandal, Mahitosh; Das, Soumen
2015-12-01
The paper presents a study to differentiate normal and cancerous cells using the label-free bioimpedance signal measured by electric cell-substrate impedance sensing. The real-time-measured bioimpedance data of human breast cancer cells and human epithelial normal cells exhibit fluctuations of the impedance value due to cellular micromotions, which result from the dynamic structural rearrangement of membrane protrusions under nonagitated conditions. Here, a wavelet-based multiscale quantitative analysis technique has been applied to analyze the fluctuations in bioimpedance. The study demonstrates a method to classify cancerous and normal cells from the signature of their impedance fluctuations. The fluctuations associated with cellular micromotion are quantified in terms of cellular energy, cellular power dissipation, and cellular moments. The cellular energy and power dissipation are found to be higher for cancerous cells, consistent with the higher micromotion of cancer cells. This initial study suggests that the proposed wavelet-based quantitative technique promises to be an effective method to analyze real-time bioimpedance signals for distinguishing cancer and normal cells.
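The idea of a per-scale fluctuation energy can be illustrated with an orthonormal Haar wavelet cascade; this is a minimal stand-in for the paper's wavelet analysis (the names and the choice of Haar are ours), showing how a fluctuation signal is split into energies at dyadic scales:

```python
import numpy as np

def haar_scale_energies(x, levels=4):
    """Decompose a 1D fluctuation signal with an orthonormal Haar cascade
    and return the detail (fluctuation) energy at each dyadic scale plus
    the final approximation. Orthonormality means the energies sum back
    to the signal's total energy (Parseval)."""
    a = np.asarray(x, dtype=float)
    a = a[:2 ** int(np.log2(len(a)))]            # truncate to a power of two
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # coarser approximation
        energies.append(float(np.sum(d ** 2)))
    return energies, a
```

A cell type with stronger micromotion would show larger detail energies, which is the kind of signature the classification relies on.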
Phase unwrapping in digital holography based on non-subsampled contourlet transform
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-01-01
In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, thus reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms implement denoising operations first to obtain noise-free phases and then conduct phase unwrapping pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of NSCT enhances the boundary between adjacent phase levels, and hence the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of a whole segment instead of individual pixels to avoid unwrapping errors caused by local spikes. This algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.
Helicopter Flight Procedures for Community Noise Reduction
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2017-01-01
A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
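The DCCA coefficient that the DPXA coefficients generalize can be sketched as follows; the DPXA step would additionally regress out the common driver (e.g. the U.S. dollar index) from both series before detrending, which this minimal sketch omits:

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """Detrended cross-correlation coefficient rho_DCCA at box size s:
    integrate both series, detrend each box with a linear fit, and form
    the ratio of the detrended covariance to the detrended variances."""
    X = np.cumsum(np.asarray(x, float) - np.mean(x))   # profiles
    Y = np.cumsum(np.asarray(y, float) - np.mean(y))
    n = (len(X) // s) * s
    t = np.arange(s)
    fxy = fxx = fyy = 0.0
    for i in range(0, n, s):
        xw, yw = X[i:i + s], Y[i:i + s]
        dx = xw - np.polyval(np.polyfit(t, xw, 1), t)  # local detrending
        dy = yw - np.polyval(np.polyfit(t, yw, 1), t)
        fxy += np.mean(dx * dy)
        fxx += np.mean(dx * dx)
        fyy += np.mean(dy * dy)
    return fxy / np.sqrt(fxx * fyy)
```

Like an ordinary correlation coefficient it lies in [-1, 1], but it is computed scale by scale, which is what makes a multiscale comparison of intrinsic cross correlations possible.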
Iwabuchi, Sadahiro; Kakazu, Yasuhiro; Koh, Jin-Young; Harata, N Charles
2014-02-15
Images in biomedical imaging research are often affected by non-specific background noise. This poses a serious problem when the noise overlaps with specific signals to be quantified, e.g. for their number and intensity. A simple and effective means of removing background noise is to prepare a filtered image that closely reflects background noise and to subtract it from the original unfiltered image. This approach is in common use, but its effectiveness in identifying and quantifying synaptic puncta has not been characterized in detail. We report on our assessment of the effectiveness of isolating punctate signals from diffusely distributed background noise using one variant of this approach, "Difference of Gaussian(s) (DoG)" which is based on a Gaussian filter. We evaluated immunocytochemically stained, cultured mouse hippocampal neurons as an example, and provided the rationale for choosing specific parameter values for individual steps in detecting glutamatergic nerve terminals. The intensity and width of the detected puncta were proportional to those obtained by manual fitting of two-dimensional Gaussian functions to the local information in the original image. DoG was compared with the rolling-ball method, using biological data and numerical simulations. Both methods removed background noise, but differed slightly with respect to their efficiency in discriminating neighboring peaks, as well as their susceptibility to high-frequency noise and variability in object size. DoG will be useful in detecting punctate signals, once its characteristics are examined quantitatively by experimenters. Copyright © 2013 Elsevier B.V. All rights reserved.
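A minimal sketch of DoG background subtraction follows; the two sigma values are illustrative placeholders and would in practice be tuned to the expected punctum width, as the study discusses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma_small=1.0, sigma_large=10.0):
    """Difference of Gaussians: the wide blur estimates the diffusely
    distributed background, and subtracting it from the lightly smoothed
    image preserves punctate signals while flattening slow variations."""
    img = np.asarray(img, dtype=float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
```

Because a broad Gaussian blur reproduces any slowly varying background almost exactly, the subtraction cancels it, while a punctum narrower than sigma_large survives.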
The properties of the anti-tumor model with coupling non-Gaussian noise and Gaussian colored noise
NASA Astrophysics Data System (ADS)
Guo, Qin; Sun, Zhongkui; Xu, Wei
2016-05-01
The anti-tumor model with correlation between multiplicative non-Gaussian noise and additive Gaussian-colored noise has been investigated in this paper. The behaviors of the stationary probability distribution demonstrate that the multiplicative non-Gaussian noise plays a dual role in the development of tumor and an appropriate additive Gaussian colored noise can lead to a minimum of the mean value of tumor cell population. The mean first passage time is calculated to quantify the effects of noises on the transition time of tumors between the stable states. An increase in both the non-Gaussian noise intensity and the departure from the Gaussian noise can accelerate the transition from the disease state to the healthy state. On the contrary, an increase in cross-correlated degree will slow down the transition. Moreover, the correlation time can enhance the stability of the disease state.
Supersonic jet noise - Its generation, prediction and effects on people and structures
NASA Technical Reports Server (NTRS)
Preisser, J. S.; Golub, R. A.; Seiner, J. M.; Powell, C. A.
1990-01-01
This paper presents the results of a study aimed at quantifying the effects of jet source noise reduction, increases in aircraft lift, and reduced aircraft thrust on the take-off noise associated with supersonic civil transports. Supersonic jet noise sources are first described, and their frequency and directivity dependence are defined. The study utilizes NASA's Aircraft Noise Prediction Program in a parametric study to weigh the relative benefits of several approaches to low noise. The baseline aircraft concept used in these predictions is the AST-205-1 powered by GE21/J11-B14A scaled engines. Noise assessment is presented in terms of effective perceived noise levels at the FAA's centerline and sideline measuring locations for current subsonic aircraft, and in terms of audiologically perceived sound of people and other indirect effects. The results show that significant noise benefit can be achieved through proper understanding and utilization of all available approaches.
Multiscale systems biology of trauma-induced coagulopathy.
Tsiklidis, Evan; Sims, Carrie; Sinno, Talid; Diamond, Scott L
2018-07-01
Trauma with hypovolemic shock is an extreme pathological state that challenges the body to maintain blood pressure and oxygenation in the face of hemorrhagic blood loss. In conjunction with surgical actions and transfusion therapy, survival requires the patient's blood to maintain hemostasis to stop bleeding. The physics of the problem are multiscale: (a) the systemic circulation sets the global blood pressure in response to blood loss and resuscitation therapy, (b) local tissue perfusion is altered by localized vasoregulatory mechanisms and bleeding, and (c) altered blood and vessel biology resulting from the trauma as well as local hemodynamics control the assembly of clotting components at the site of injury. Building upon ongoing modeling efforts to simulate arterial or venous thrombosis in a diseased vasculature, computer simulation of trauma-induced coagulopathy is an emerging approach to understand patient risk and predict response. Despite uncertainties in quantifying the patient's dynamic injury burden, multiscale systems biology may help link blood biochemistry at the molecular level to multiorgan responses in the bleeding patient. As an important goal of systems modeling, establishing early metrics of a patient's high-dimensional trajectory may help guide transfusion therapy or warn of subsequent later stage bleeding or thrombotic risks. This article is categorized under: Analytical and Computational Methods > Computational Methods Biological Mechanisms > Regulatory Biology Models of Systems Properties and Processes > Mechanistic Models. © 2018 Wiley Periodicals, Inc.
Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows
NASA Astrophysics Data System (ADS)
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.
2017-12-01
Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden of requiring such high-resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; da Silva, Carlos Alberto Aguiar; Valencia, Jose Fernando; Murta, Luiz Otavio; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto
2016-07-01
The analysis of heart rate variability (HRV) by nonlinear methods has been gaining increasing interest due to their ability to quantify the complexity of cardiovascular regulation. In this study, multiscale entropy (MSE) and refined MSE (RMSE) were applied to track the complexity of HRV as a function of time scale in three pathological conscious animal models: rats with heart failure (HF), spontaneously hypertensive rats (SHR), and rats with sinoaortic denervation (SAD). Results showed that HF did not change HRV complexity, although there was a tendency toward decreased entropy in HF animals. On the other hand, the SHR group was characterized by reduced complexity at long time scales, whereas SAD animals exhibited smaller short- and long-term irregularity. We propose that short time scales (1 to 4), accounting for fast oscillations, are more related to vagal and respiratory control, whereas long time scales (5 to 20), accounting for slow oscillations, are more related to sympathetic control. The increased sympathetic modulation is probably the main reason for the lower entropy observed at large scales in both the SHR and SAD groups, acting as a negative factor for cardiovascular complexity. This study highlights the contribution of multiscale complexity analysis of HRV to understanding the physiological mechanisms involved in cardiovascular regulation. Copyright © 2016 the American Physiological Society.
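For readers unfamiliar with the underlying computation, here is a compact sketch of conventional MSE: coarse-graining followed by sample entropy, with the matching tolerance fixed from the original series as in the standard algorithm. The parameter values and the LCG noise surrogate are illustrative assumptions:

```python
import math

def coarse_grain(x, scale):
    """MSE step 1: average non-overlapping windows of length `scale`."""
    return [sum(x[i*scale:(i+1)*scale]) / scale for i in range(len(x) // scale)]

def sample_entropy(x, m, tol):
    """SampEn = -ln(A/B): B counts pairs of m-point templates matching
    within `tol`; A counts (m+1)-point matches."""
    def count(mm):
        c = 0
        for i in range(len(x) - mm):
            for j in range(i + 1, len(x) - mm):
                if all(abs(x[i+k] - x[j+k]) <= tol for k in range(mm)):
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def mse(x, max_scale=4, m=2, r=0.2):
    # Tolerance is fixed from the original (scale-1) series, as in the
    # conventional MSE algorithm, rather than recomputed per scale.
    mu = sum(x) / len(x)
    tol = r * (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5
    return [sample_entropy(coarse_grain(x, s), m, tol)
            for s in range(1, max_scale + 1)]

# Deterministic LCG surrogate for uncorrelated ("white") noise:
seed, noise = 1, []
for _ in range(400):
    seed = (1103515245 * seed + 12345) % 2**31
    noise.append(seed / 2**31 - 0.5)

entropies = mse(noise)
print(entropies)  # for white noise, entropy falls as the scale increases
```

The declining entropy-versus-scale curve for white noise is the classic MSE signature that distinguishes uncorrelated noise from genuinely complex signals.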
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale large scale features of the objects are preserved while small scale features, like object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling the SA optimization to find a global optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
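The coarse-to-fine temperature idea above can be sketched with a scalar toy problem: a high starting temperature lets the optimizer escape local minima (global search at coarse scales), while a near-zero start makes it behave almost deterministically and locally (fine scales). Everything below — the double-well energy, cooling schedule, and step size — is an illustrative assumption, not the paper's deformable-model energy:

```python
import math, random

def anneal(energy, x0, t_start, t_end=1e-3, steps=2000, step_size=0.5, seed=0):
    """Simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    cool = (t_end / t_start) ** (1.0 / steps)
    t = t_start
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        de = energy(cand) - e
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = cand, e + de
        t *= cool
    return x, e

# Double-well energy: local minimum near x=+2, deeper minimum near x=-2.
energy = lambda x: 0.1 * (x**2 - 4) ** 2 + 0.5 * x

x_hot, _ = anneal(energy, x0=2.0, t_start=5.0)    # coarse scale: can escape
x_cold, _ = anneal(energy, x0=2.0, t_start=1e-3)  # fine scale: stays local
print(x_hot, x_cold)  # hot start usually crosses the barrier; cold start stays near +2
```

This mirrors the paper's scheme: high temperatures at coarse scales allow crossing energy barriers, while near-zero temperatures at fine scales enforce increasingly local, deterministic refinement.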
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is extendable to any pair of detection algorithms that have different false alarm sources.
Performance prediction of a synchronization link for distributed aerospace wireless systems.
Wang, Wen-Qin; Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate for oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.
ERIC Educational Resources Information Center
George, Erwin L. J.; Goverts, S. Theo; Festen, Joost M.; Houtgast, Tammo
2010-01-01
Purpose: The Speech Transmission Index (STI; Houtgast, Steeneken, & Plomp, 1980; Steeneken & Houtgast, 1980) is commonly used to quantify the adverse effects of reverberation and stationary noise on speech intelligibility for normal-hearing listeners. Duquesnoy and Plomp (1980) showed that the STI can be applied for presbycusic listeners, relating…
Executive Summary of Systems Analysis to Develop Future Civil Aircraft Noise Reduction Alternatives.
1982-05-01
options in the unit cost analysis. These requirements were that the selected land use options should be applicable nationwide, be amenable to public ... administration, provide benefits in terms of direct reduction of noise exposure, and result in costs and benefits that could be readily quantifiable for use
Peng, Xian; Yuan, Han; Chen, Wufan; Ding, Lei
2017-01-01
Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, which requires an elaborate stimulus sequence design to attenuate the impact of noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but is less effective in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical electroencephalogram (EEG) recordings, as well as on five real CLAD EEG recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies, and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence, and provides a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the specific application of the CLAD paradigm with desired sequences in the clinic. PMID:28414803
Investigation of ultra low-dose scans in the context of quantum-counting clinical CT
NASA Astrophysics Data System (ADS)
Weidinger, T.; Buzug, T. M.; Flohr, T.; Fung, G. S. K.; Kappler, S.; Stierstorfer, K.; Tsui, B. M. W.
2012-03-01
In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronics noise, when scanning strongly attenuating obese patients or with an ultra-low X-ray dose. Unlike CT systems based on energy integrating detectors, a system with a quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly affects the spectral resolution of these detectors. Therefore, it does not contribute to the image noise in spectrally non-resolved CT images. This promises improved image quality due to image noise reduction in scans obtained from clinical CT examinations with lowest X-ray tube currents or obese patients. To quantify the benefits of quantum counting detectors in clinical CT we have carried out an extensive simulation study of the complete scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling of the X-ray source, beam attenuation in the patient, and calculation of the detector response. Moreover, in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based, theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens Definition Flash scanner with a conventional energy integrating CT detector. We demonstrate and quantify the improvement from image noise reduction achievable with quantum counting techniques in CT examinations with ultra-low X-ray dose and strong attenuation.
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. This mechanism thus ensures proper enhancement by automated estimation of the major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performance of our methods in terms of nuclear detection, segmentation, and tracking.
A detailed analysis of a sub-sequence of 101 3D images from one embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (~98% mean F-measure) irrespective of large variations in filter parameters and noise levels. PMID:25020042
Multi-Scale Structure and Earthquake Properties in the San Jacinto Fault Zone Area
NASA Astrophysics Data System (ADS)
Ben-Zion, Y.
2014-12-01
I review multi-scale multi-signal seismological results on structure and earthquake properties within and around the San Jacinto Fault Zone (SJFZ) in southern California. The results are based on data of the southern California and ANZA networks covering scales from a few km to over 100 km, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a dense rectangular array with >1100 vertical-component nodes separated by 10-30 m centered on the fault. The structural studies utilize earthquake data to image the seismogenic sections and ambient noise to image the shallower structures. The earthquake studies use waveform inversions and additional time domain and spectral methods. We observe pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios around the fault, and clear velocity contrasts across various sections. The damage zones and velocity contrasts produce fault zone trapped and head waves at various locations, along with time delays, anisotropy and other signals. The damage zones follow a flower-shape with depth; in places with velocity contrast they are offset to the stiffer side at depth as expected for bimaterial ruptures with persistent propagation direction. Analysis of PGV and PGA indicates clear persistent directivity at given fault sections and overall motion amplification within several km around the fault. Clear temporal changes of velocities, probably involving primarily the shallow material, are observed in response to seasonal, earthquake and other loadings. Full source tensor properties of M>4 earthquakes in the complex trifurcation area include statistically-robust small isotropic component, likely reflecting dynamic generation of rock damage in the source volumes. 
The dense fault zone instruments record seismic "noise" at frequencies >200 Hz that can be used for imaging and monitoring the shallow material with high space and time details, and numerous minute local earthquakes that contribute to the high frequency "noise". Updated results will be presented in the meeting. *The studies have been done in collaboration with Frank Vernon, Amir Allam, Dimitri Zigone, Zach Ross, Gregor Hillers, Ittai Kurzon, Michel Campillo, Philippe Roux, Lupei Zhu, Dan Hollis, Mitchell Barklage and others.
Strongly enhanced 1/f-noise level in κ-(BEDT-TTF)2X salts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandenburg, J.; Muller, J.; Wirth, S.
2010-01-01
Fluctuation spectroscopy has been used as an investigative tool to understand the scattering mechanisms of carriers and their low-frequency dynamics in quasi-two-dimensional organic conductors κ-(BEDT-TTF)2X. We report on the very high noise level in these systems as determined from Hooge's empirical law, used to quantify 1/f-type noise in solids. The value of the Hooge parameter α_H, i.e. the normalized noise level, of 10^5-10^7 is several orders of magnitude higher than the values of α_H ~ 10^-2-10^-3 typically found in homogeneous metals and semiconductors.
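Hooge's empirical law referenced above relates the normalized voltage-noise power spectral density to the number of charge carriers: S_V(f)/V^2 = α_H/(N_c·f). A minimal sketch of extracting the Hooge parameter from measured quantities (the numbers are illustrative assumptions, not values from the paper):

```python
def hooge_parameter(S_V, f, V, N_c):
    """Hooge's empirical law: S_V(f)/V^2 = alpha_H / (N_c * f),
    rearranged to alpha_H = S_V(f) * N_c * f / V^2."""
    return S_V * N_c * f / V**2

# Illustrative numbers for a homogeneous conductor: 1e12 carriers,
# 1 V bias, and a measured S_V = 2e-15 V^2/Hz at f = 1 Hz.
alpha = hooge_parameter(S_V=2e-15, f=1.0, V=1.0, N_c=1e12)
print(alpha)  # 2e-3, within the 10^-2-10^-3 range typical of homogeneous metals
```

Values of α_H many orders of magnitude above this range, as reported for the κ-(BEDT-TTF)2X salts, signal anomalously strong low-frequency fluctuations.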
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, S.; Brock, K. K.; Daly, M. J.
2009-10-15
Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm^3).
The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.
Conservative tightly-coupled simulations of stochastic multiscale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2016-05-15
Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged ("implicit") coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration ("explicit") Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
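The contrast between Picard (fixed-point) and Newton coupling can be sketched on a toy coupled problem. This is not the paper's stochastic diffusion solver: the 2x2 system, tolerances, and finite-difference Jacobian are illustrative assumptions, but the iteration-count comparison captures the qualitative point:

```python
import math

def picard(g, u0, tol=1e-10, max_iter=500):
    """Fixed-point (Picard) iteration u <- g(u); converges linearly."""
    u, it = u0, 0
    while it < max_iter:
        v = g(u)
        it += 1
        if max(abs(a - b) for a, b in zip(u, v)) < tol:
            return v, it
        u = v
    return u, it

def newton(g, u0, tol=1e-10, max_iter=100, h=1e-7):
    """Newton's method on the residual F(u) = u - g(u), with a
    finite-difference Jacobian; the 2x2 linear solve is done by hand."""
    u, it = list(u0), 0
    while it < max_iter:
        f = [u[i] - g(u)[i] for i in range(2)]
        it += 1
        if max(abs(v) for v in f) < tol:
            return u, it
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            up = list(u); up[j] += h
            fp = [up[i] - g(up)[i] for i in range(2)]
            for i in range(2):
                J[i][j] = (fp[i] - f[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        du = [(-f[0] * J[1][1] + f[1] * J[0][1]) / det,
              (-f[1] * J[0][0] + f[0] * J[1][0]) / det]
        u = [u[0] + du[0], u[1] + du[1]]
    return u, it

# Toy coupled system standing in for two interface-coupled solvers:
g = lambda u: [0.8 * math.cos(u[1]), 0.8 * math.cos(u[0])]
_, n_picard = picard(g, [0.0, 0.0])
_, n_newton = newton(g, [0.0, 0.0])
print(n_picard, n_newton)  # Newton converges in far fewer iterations
```

Picard's linear convergence degrades as the coupling (here, the contraction factor) strengthens, whereas Newton's quadratic convergence stays cheap — the same trade-off the abstract reports at high noise strength.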
Kiviniemi, Vesa; Remes, Jukka; Starck, Tuomo; Nikkinen, Juha; Haapea, Marianne; Silven, Olli; Tervonen, Osmo
2009-01-01
Temporal blood oxygen level dependent (BOLD) contrast signals in functional MRI during rest may be characterized by power spectral distribution (PSD) trends of the form 1/f(alpha). Trends with 1/f characteristics comprise fractal properties with repeating oscillation patterns in multiple time scales. Estimates of the fractal properties enable the quantification of phenomena that may otherwise be difficult to measure, such as transient, non-linear changes. In this study it was hypothesized that the fractal metrics of 1/f BOLD signal trends can map changes related to dynamic, multi-scale alterations in cerebral blood flow (CBF) after a transient hyperventilation challenge. Twenty-three normal adults were imaged in a resting-state before and after hyperventilation. Different variables (1/f trend constant alpha, fractal dimension D(f), and Hurst exponent H) characterizing the trends were measured from BOLD signals. The results show that fractal metrics of the BOLD signal follow the fractional Gaussian noise model, even during the dynamic CBF change that follows hyperventilation. The most dominant effect on the fractal metrics was detected in grey matter, in line with previous hyperventilation vaso-reactivity studies. The alpha was also able to differentiate blood vessels from grey matter changes. D(f) was most sensitive to grey matter. H correlated with default mode network areas before hyperventilation, but this pattern vanished after hyperventilation due to a global increase in H. In the future, resting-state fMRI combined with fractal metrics of the BOLD signal may be used for analyzing multi-scale alterations of cerebral blood flow.
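The Hurst exponent H used in this study can be estimated in several standard ways. The sketch below uses the aggregated-variance method, a generic estimator chosen for brevity and not necessarily the one used in the paper: for fractional Gaussian noise, the variance of block means scales as m**(2H - 2), so the slope of a log-log fit yields H, and white noise should return H close to 0.5.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent of a zero-mean, fGn-like series via the
    aggregated-variance method: Var(block means of size m) ~ m**(2H - 2)."""
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope, intercept = np.polyfit(log_m, log_var, 1)  # slope = 2H - 2
    return slope / 2.0 + 1.0

rng = np.random.default_rng(1)
white = rng.normal(size=20000)          # white noise is fGn with H = 0.5
print("estimated H:", hurst_aggvar(white))
```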
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galan, Roberto F.; Urban, Nathaniel N.; Center for the Neural Basis of Cognition, Mellon Institute, Pittsburgh, Pennsylvania 15213
We have investigated the effect of the phase response curve on the dynamics of oscillators driven by noise in two limit cases that are especially relevant for neuroscience. Using the finite element method to solve the Fokker-Planck equation we have studied (i) the impact of noise on the regularity of the oscillations quantified as the coefficient of variation, (ii) stochastic synchronization of two uncoupled phase oscillators driven by correlated noise, and (iii) their cross-correlation function. We show that, in general, the limit of type II oscillators is more robust to noise and more efficient at synchronizing by correlated noise than type I.
Tip-path-plane angle effects on rotor blade-vortex interaction noise levels and directivity
NASA Technical Reports Server (NTRS)
Burley, Casey L.; Martin, Ruth M.
1988-01-01
Acoustic data of a scale model BO-105 main rotor acquired in a large aeroacoustic wind tunnel are presented to investigate the parametric effects of rotor operating conditions on blade-vortex interaction (BVI) impulsive noise. Contours of a BVI noise metric are employed to quantify the effects of rotor advance ratio and tip-path-plane angle on BVI noise directivity and amplitude. Acoustic time history data are presented to illustrate the variations in impulsive characteristics. The directionality, noise levels and impulsive content of both advancing and retreating side BVI are shown to vary significantly with tip-path-plane angle and advance ratio over the range of low and moderate flight speeds considered.
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications of high-content screening where it is required to quantify multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms of soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called the Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high-content neuron image processing. A Matlab code is released open-source and freely available to the scientific community.
A temporal and spatial analysis of anthropogenic noise sources affecting SNMR
NASA Astrophysics Data System (ADS)
Dalgaard, E.; Christiansen, P.; Larsen, J. J.; Auken, E.
2014-11-01
One of the biggest challenges when using the surface nuclear magnetic resonance (SNMR) method in urban areas is a relatively low signal level compared to a high level of background noise. To understand the temporal and spatial behavior of anthropogenic noise sources like powerlines and electric fences, we have developed a multichannel instrument, noiseCollector (nC), which measures the full noise spectrum up to 10 kHz. Combined with advanced signal processing, we can interpret the noise as seen by a SNMR instrument and also obtain insight into the more fundamental behavior of the noise. By quantifying the different noise sources, the stack size required to reach a specified acceptable noise level for a SNMR sounding can be determined. Two common noise sources, electromagnetic fields stemming from powerlines and electric fences, are analyzed and show a 1/r^2 dependency, in agreement with theoretical relations. A typical noise map, obtained with the nC instrument prior to a SNMR field campaign, clearly shows the location of noise sources, and thus we can efficiently determine the optimal location for the SNMR sounding from a noise perspective.
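A 1/r^2 falloff like the one reported here is typically checked with a log-log regression: the fitted slope is the power-law exponent. The sketch below does this on synthetic amplitude-versus-distance data; the distances and amplitudes are illustrative stand-ins, not nC measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # distances to the source, m
# Noisy 1/r^2 law: multiplicative log-normal scatter around a power law.
amp = 5e3 / r**2 * np.exp(rng.normal(scale=0.05, size=r.size))

# The power-law exponent is the slope of the log-log regression.
slope, intercept = np.polyfit(np.log(r), np.log(amp), 1)
print("fitted exponent:", slope)   # close to -2 for a 1/r^2 falloff
```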
Gregg, Chelsea L; Recknagel, Andrew K; Butcher, Jonathan T
2015-01-01
Tissue morphogenesis and embryonic development are dynamic events challenging to quantify, especially considering the intricate events that happen simultaneously in different locations and time. Micro- and more recently nano-computed tomography (micro/nanoCT) has been used for the past 15 years to characterize large 3D fields of tortuous geometries at high spatial resolution. We and others have advanced micro/nanoCT imaging strategies for quantifying tissue- and organ-level fate changes throughout morphogenesis. Exogenous soft tissue contrast media enables visualization of vascular lumens and tissues via extravasation. Furthermore, the emergence of antigen-specific tissue contrast enables direct quantitative visualization of protein and mRNA expression. Micro-CT X-ray doses appear to be non-embryotoxic, enabling longitudinal imaging studies in live embryos. In this chapter we present established soft tissue contrast protocols for obtaining high-quality micro/nanoCT images and the image processing techniques useful for quantifying anatomical and physiological information from the data sets.
Quantifying climatic controls on river network topology across scales
NASA Astrophysics Data System (ADS)
Ranjbar Moshfeghi, S.; Hooshyar, M.; Wang, D.; Singh, A.
2017-12-01
The branching structure of river networks is an important topologic and geomorphologic feature that depends on several factors (e.g., climate, tectonics). However, the mechanisms that cause these drainage patterns in river networks are poorly understood. In this study, we investigate the effects of varying climatic forcing on river network topology and geomorphology. For this, we select 20 catchments across the United States with different long-term climatic conditions quantified by the climate aridity index (AI), defined here as the ratio of mean annual potential evaporation (Ep) to precipitation (P), capturing variation in runoff and vegetation cover. The river networks of these catchments are extracted, using a curvature-based method, from high-resolution (1 m) digital elevation models, and several metrics such as drainage density, branching angle, and width functions are computed. We also use a multiscale-entropy-based approach to quantify the topologic irregularity and structural richness of these river networks. Our results reveal systematic impacts of climate forcing on the structure of river networks.
Brown, Judith A.; Bishop, Joseph E.
2016-07-20
An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.
Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature
NASA Astrophysics Data System (ADS)
Kwon, Minseok
While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still easily be degraded by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was developed through physiological and psychological investigations of ASA. The CASA system comprised a Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications.
The model modifications comprise the introduction of a higher Q factor, a middle ear filter more analogous to the human auditory system, regulation of the time constant update for filters in the signal/control path, and level-independent frequency glides with fixed frequency modulation. First, we examined performance improvements in keyword recognition using the proposed methods in quiet and noise-corrupted environments. The results argue that multi-scale integration should be used along with CE in order to avoid ambiguous continuity in unvoiced segments. Moreover, the inclusion of all the modifications was observed to guarantee noise-type-independent robustness, particularly under severe interference. Next, the CASA system with the auditory model was implemented in a single/dual-channel ASR using the reference TIMIT corpus so as to obtain more general results. The hidden Markov model toolkit (HTK) was used for phone recognition in various environmental conditions. In a single-channel ASR, the results argue that unmasked acoustic features (unmasked GFCC) should be combined with target estimates from the mask to compensate for missing information. Observations from a dual-channel ASR show that the combined GFCC guarantees the highest performance regardless of interference within speech. Moreover, the consistent improvement of noise robustness by GFCC (unmasked or combined) shows the validity of our proposed CASA implementation in a dual-microphone system. In conclusion, the proposed framework proves the robustness of the acoustic features under various background interferences via both direct distance evaluation and statistical assessment. In addition, the introduction of a dual-microphone system using this framework shows the potential for effective implementation of auditory model-based CASA in ASR.
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
Hybrid Wing Body Configuration Scaling Study
NASA Technical Reports Server (NTRS)
Nickol, Craig L.
2012-01-01
The Hybrid Wing Body (HWB) configuration is a subsonic transport aircraft concept with the potential to simultaneously reduce fuel burn, noise and emissions compared to conventional concepts. Initial studies focused on very large applications with capacities for up to 800 passengers. More recent studies have focused on the large, twin-aisle class with passenger capacities in the 300-450 range. Efficiently scaling this concept down to the single-aisle or smaller size is challenging due to geometric constraints, potentially reducing the desirability of this concept for applications in the 100-200 passenger capacity range or less. In order to quantify this scaling challenge, five advanced conventional (tube-and-wing layout) concepts were developed, along with equivalent (payload/range/technology) HWB concepts, and their fuel burn performance compared. The comparison showed that the HWB concepts have fuel burn advantages over advanced tube-and-wing concepts in the larger payload/range classes (roughly 767-sized and larger). Although noise performance was not quantified in this study, the HWB concept has distinct noise advantages over the conventional tube-and-wing configuration due to the inherent noise shielding features of the HWB. NASA's Environmentally Responsible Aviation (ERA) project will continue to investigate advanced configurations, such as the HWB, due to their potential to simultaneously reduce fuel burn, noise and emissions.
An evaluation of helicopter noise and vibration ride qualities criteria
NASA Technical Reports Server (NTRS)
Hammond, C. E.; Hollenbaugh, D. D.; Clevenson, S. A.; Leatherwood, J. D.
1981-01-01
Two methods of quantifying helicopter ride quality are discussed: absorbed power, for vibration only, and the NASA ride comfort model, for both noise and vibration. Noise and vibration measurements were obtained on five operational US Army helicopters. The data were converted to both absorbed power and DISCs (discomfort units used in the NASA model) for specific helicopter flight conditions. Both models indicate considerable variation in ride quality between the five helicopters and between flight conditions within each helicopter.
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
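The MS-VST extends the classical Anscombe transform to filtered data; the plain Anscombe transform itself is easy to demonstrate. A(x) = 2*sqrt(x + 3/8) maps Poisson counts to values with approximately unit variance once the intensity is moderate. The sketch below is a generic numerical check of this stabilization, not an implementation of the paper's MS-VST.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: for Poisson counts with
    moderate intensity, A(x) is approximately Gaussian with unit variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(3)
for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200000)
    print(f"lambda={lam:6.1f}  raw var={counts.var():8.2f}  "
          f"stabilized var={anscombe(counts).var():.3f}")
```

The raw variance grows with lambda (it equals lambda for Poisson data), while the stabilized variance stays near 1, which is what lets Gaussian-based thresholding be applied after the transform.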
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu
Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach Number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data; i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of interaction of shocked particle laden flows.
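DKG and MBKG are specialized variants, but the underlying Kriging idea, a Gaussian-process surrogate fitted to a handful of expensive simulations, can be sketched generically. The code below is an assumption-laden stand-in: an RBF-kernel Gaussian-process interpolant of a cheap test function plays the role of the mesoscale drag computations, and noise_var acts as the nugget term one would enlarge for noisy training data.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential (RBF) covariance between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise_var=1e-6, length=0.3):
    """Simple Kriging (GP regression) surrogate: posterior mean at x_test.
    noise_var is the nugget; raise it when the training data are noisy."""
    K = rbf(x_train, x_train, length) + noise_var * np.eye(len(x_train))
    k_star = rbf(x_test, x_train, length)
    alpha = np.linalg.solve(K, y_train)
    return k_star @ alpha

# Surrogate for an expensive 'mesoscale' response; here a cheap stand-in.
f = lambda x: np.sin(2.0 * np.pi * x)
x_train = np.linspace(0.0, 1.0, 12)       # sparse 'simulation' inputs
y_train = f(x_train)                      # noise-free training responses
x_test = np.linspace(0.0, 1.0, 101)
y_pred = gp_predict(x_train, y_train, x_test)
print("max surrogate error:", np.max(np.abs(y_pred - f(x_test))))
```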
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
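The optimization formulation described above can be written down directly. The sketch below is a hedged illustration, not the SPIRAL algorithm itself: it forms the penalized negative Poisson log-likelihood with an l1 term and runs a naive projected (sub)gradient loop on synthetic data with more unknowns than observations, only to show the objective decreasing under the nonnegativity constraint. The sensing matrix and problem sizes are invented for the demo.

```python
import numpy as np

def poisson_nll_l1(f, y, A, tau):
    """Penalized negative Poisson log-likelihood of the SPIRAL type:
    1^T (A f) - y^T log(A f) + tau * ||f||_1, for nonnegative intensity f."""
    Af = A @ f
    return float(np.sum(Af) - np.sum(y * np.log(Af)) + tau * np.sum(np.abs(f)))

rng = np.random.default_rng(4)
m, n = 30, 60                                   # fewer observations than unknowns
A = rng.uniform(0.1, 1.0, size=(m, n))          # hypothetical positive sensing matrix
f_true = np.zeros(n)
f_true[rng.choice(n, 5, replace=False)] = 10.0  # sparse nonnegative truth
y = rng.poisson(A @ f_true)                     # Poisson observations

tau = 1.0
f = np.ones(n)                                  # strictly positive start
obj_start = poisson_nll_l1(f, y, A, tau)
for _ in range(300):
    grad = A.sum(axis=0) - A.T @ (y / (A @ f)) + tau  # (sub)gradient for f > 0
    f = np.maximum(f - 1e-3 * grad, 1e-8)             # projected gradient step
obj_end = poisson_nll_l1(f, y, A, tau)
print(f"objective: {obj_start:.1f} -> {obj_end:.1f}")
```

SPIRAL replaces this naive loop with separable quadratic approximations and specialized penalties (l1, total variation, partition-based multiscale), which is what makes it practical at scale.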
A note on convergence of solutions of total variation regularized linear inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar
2018-05-01
In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring, and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase of resolution, remote sensing images have the characteristics of increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from building areas down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better performance in building extraction precision, accuracy and completeness.
Benefits of Swept and Leaned Stators for Fan Noise Reduction
NASA Technical Reports Server (NTRS)
Woodward, Richard P.; Elliott, David M.; Hughes, Christopher E.; Berton, Jeffrey J.
1998-01-01
An advanced high bypass ratio fan model was tested in the NASA Lewis Research Center 9 x 15-Foot Low Speed Wind Tunnel. The primary focus of this test was to quantify the acoustic benefits and aerodynamic performance of sweep and lean in stator vane design. Three stator sets were used for this test series. A conventional radial stator was tested at two rotor-stator axial spacings. Additional stator sets incorporating sweep + lean, and sweep only were also tested. The hub axial location for the swept + lean, and sweep only stators corresponded to the location of the radial stator at the upstream rotor-stator spacing, while the tip axial location of these modified stators corresponded to the radial stator axial position at the downstream position. The acoustic results show significant reductions in both rotor-stator interaction noise and broadband noise beyond what could be achieved through increased axial spacing of the conventional, radial stator. Theoretical application of these results to acoustically quantify a fictitious 2-engine aircraft and flight path suggested that about 3 Effective Perceived Noise (EPN) dB could be achieved through incorporation of these modified stators. This reduction would represent a significant portion of the 6 EPNdB noise goal of the current NASA Advanced Subsonic Technology (AST) initiative relative to that of 1992 technology levels. A secondary result of this fan test was to demonstrate the ability of an acoustic barrier wall to block aft-radiated fan noise in the wind tunnel, thus revealing the acoustic structure of the residual inlet-radiated noise. This technology should prove valuable toward better understanding inlet liner design, or wherever it is desirable to eliminate aft-radiated noise from the fan acoustic signature.
A Hybrid Multiscale Framework for Subsurface Flow and Transport Simulations
Scheibe, Timothy D.; Yang, Xiaofan; Chen, Xingyuan; ...
2015-06-01
Extensive research efforts have been invested in reducing model errors to improve the predictive ability of biogeochemical earth and environmental system simulators, with applications ranging from contaminant transport and remediation to impacts of biogeochemical elemental cycling (e.g., carbon and nitrogen) on local ecosystems and regional to global climate. While the bulk of this research has focused on improving model parameterizations in the face of observational limitations, the more challenging type of model error/uncertainty to identify and quantify is model structural error, which arises from incorrect mathematical representations of (or failure to consider) important physical, chemical, or biological processes, properties, or system states in model formulations. While improved process understanding can be achieved through scientific study, such understanding is usually developed at small scales. Process-based numerical models are typically designed for a particular characteristic length and time scale. For application-relevant scales, it is generally necessary to introduce approximations and empirical parameterizations to describe complex systems or processes. This single-scale approach has been the best available to date because of limited understanding of process coupling combined with practical limitations on system characterization and computation. While computational power is increasing significantly and our understanding of biological and environmental processes at fundamental scales is accelerating, using this information to advance our knowledge of the larger system behavior requires the development of multiscale simulators. Accordingly there has been much recent interest in novel multiscale methods in which microscale and macroscale models are explicitly coupled in a single hybrid multiscale simulation.
A limited number of hybrid multiscale simulations have been developed for biogeochemical earth systems, but they mostly utilize application-specific and sometimes ad-hoc approaches for model coupling. We are developing a generalized approach to hierarchical model coupling designed for high-performance computational systems, based on the Swift computing workflow framework. In this presentation we will describe the generalized approach and provide two use cases: 1) simulation of a mixing-controlled biogeochemical reaction coupling pore- and continuum-scale models, and 2) simulation of biogeochemical impacts of groundwater – river water interactions coupling fine- and coarse-grid model representations. This generalized framework can be customized for use with any pair of linked models (microscale and macroscale) with minimal intrusiveness to the at-scale simulators. It combines a set of python scripts with the Swift workflow environment to execute a complex multiscale simulation utilizing an approach similar to the well-known Heterogeneous Multiscale Method. User customization is facilitated through user-provided input and output file templates and processing function scripts, and execution within a high-performance computing environment is handled by Swift, such that minimal to no user modification of at-scale codes is required.
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
NASA Astrophysics Data System (ADS)
Park, Ju Hyuk; Yang, Sei Hyun; Lee, Hyeong Rae; Yu, Cheng Bin; Pak, Seong Yeol; Oh, Chi Sung; Kang, Yeon June; Youn, Jae Ryoun
2017-06-01
Sound absorption of a polyurethane (PU) foam was predicted for various geometries to fabricate the optimum microstructure of a sound absorbing foam. Multiscale numerical analysis for sound absorption was carried out by solving flow problems in a representative unit cell (RUC) and the pressure acoustics equation using the Johnson-Champoux-Allard (JCA) model. From the numerical analysis, the theoretical optimum cell diameter for low frequency sound absorption was found to be in the vicinity of 400 μm under the condition of a 2 cm-80 K (thickness of 2 cm and density of 80 kg/m3) foam. An ultrasonic foaming method was employed to modulate the microcellular structure of the PU foam. Only mechanical activation, without any other treatment, was employed to manipulate the internal structure of the PU foam. The mean cell diameter of the PU foam decreased gradually as the amplitude of the ultrasonic waves increased. It was empirically found that the reduction of mean cell diameter induced by the ultrasonic waves enhances acoustic damping efficiency in low frequency ranges. Moreover, further analyses were performed with several acoustic evaluation factors: root mean square (RMS) values, noise reduction coefficients (NRC), and 1/3 octave band spectrograms.
Binegativity of two qubits under noise
NASA Astrophysics Data System (ADS)
Sazim, Sk; Awasthi, Natasha
2018-07-01
Recently, it was argued that the binegativity might be a good quantifier of entanglement for two-qubit states. Like the concurrence and the negativity, the binegativity is also an analytically computable quantifier for all two-qubit states. Based on numerical evidence, it was conjectured that it is a PPT (positive partial transposition) monotone and thus fulfills the criterion to be a good measure of entanglement. In this work, we investigate its behavior under noisy channels and find that the binegativity decreases monotonically with increasing noise. We also find that the binegativity is closely connected to the negativity and has a closed analytical form for arbitrary two-qubit states. Our study supports the conjecture that the binegativity is a monotone.
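The closely related negativity mentioned in the abstract can be computed for any two-qubit state from the partial transpose. A minimal numpy sketch (the depolarizing-noise mixture below is an illustrative choice, not the specific channels studied in the paper) shows the monotone decay under increasing noise:

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose of a 4x4 two-qubit density matrix over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)            # r[i, j, k, l] = <ij| rho |kl>
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def negativity(rho):
    """N(rho) = |sum of negative eigenvalues of rho^{T_B}|."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    return float(-eigs[eigs < 0].sum())

# Bell state |Phi+> mixed with white noise: rho_p = (1-p)|Phi+><Phi+| + p*I/4
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
bell = np.outer(phi, phi)
values = [negativity((1 - p) * bell + p * np.eye(4) / 4) for p in (0.0, 0.2, 0.4, 0.6)]
print(round(values[0], 6))  # 0.5 for the maximally entangled state
```

For this family the negativity is (2 - 3p)/4 until it hits zero at p = 2/3, so the list `values` decreases monotonically, mirroring the behavior reported for the binegativity.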
NASA Technical Reports Server (NTRS)
Clevenson, S. A.; Leatherwood, J. D.
1979-01-01
The effects of helicopter interior noise on passenger annoyance were studied. Both reverie and listening situations were studied as well as the relative effectiveness of several descriptors (i.e., overall sound pressure level, A-weighted sound pressure level, and speech interference level) for quantifying annoyance response for these situations. The noise stimuli were based upon recordings of the interior noise of a civil helicopter research aircraft. These noises were presented at levels ranging from approximately 68 to 86 dB(A) with various gear clash tones selectively attenuated to give a range of spectra. Results indicated that annoyance during a listening condition is generally higher than annoyance during a reverie condition for corresponding interior noise environments. Attenuation of the planetary gear clash tone results in increases in listening performance but has negligible effect upon annoyance for a given noise level. The noise descriptor most effective for estimating annoyance response under conditions of reverie and listening situations is shown to be the A-weighted sound pressure level.
Combined Effects of High-Speed Railway Noise and Ground Vibrations on Annoyance.
Yokoshima, Shigenori; Morihara, Takashi; Sato, Tetsumi; Yano, Takashi
2017-07-27
The Shinkansen super-express railway system in Japan has greatly increased its capacity and has expanded nationwide. However, many inhabitants in areas along the railways have been disturbed by noise and ground vibration from the trains. Additionally, the Shinkansen railway emits a higher level of ground vibration than conventional railways at the same noise level. These findings imply that building vibrations affect living environments as significantly as the associated noise. Therefore, it is imperative to quantify the effects of noise and vibration exposures on each annoyance under simultaneous exposure. We performed a secondary analysis using individual datasets of exposure and community response associated with Shinkansen railway noise and vibration. The data consisted of six socio-acoustic surveys, which were conducted separately over the last 20 years in Japan. Applying a logistic regression analysis to the datasets, we confirmed the combined effects of vibration/noise exposure on noise/vibration annoyance. Moreover, we proposed a representative relationship between noise and vibration exposures, and the prevalence of each annoyance associated with the Shinkansen railway.
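The logistic-regression step can be sketched on synthetic data. Everything below — exposure ranges, true coefficients, and the construction of "highly annoyed" responses — is an assumption for illustration, not taken from the survey datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
noise = rng.uniform(40, 80, n)   # hypothetical railway noise exposure, dB
vib = rng.uniform(45, 75, n)     # hypothetical ground-vibration exposure level, dB

# Synthetic "highly annoyed" responses: both exposures raise the odds
logit_true = -20.0 + 0.20 * noise + 0.10 * vib
annoyed = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

# Full-batch gradient ascent on the mean log-likelihood, standardized predictors
X = np.column_stack([np.ones(n),
                     (noise - noise.mean()) / noise.std(),
                     (vib - vib.mean()) / vib.std()])
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (annoyed - p) / n

# Both fitted (standardized) coefficients come out positive: each exposure
# independently increases the odds of annoyance, as in a combined-effects model.
```

In practice one would also test an interaction term (noise × vibration) to probe whether the combined effect exceeds the sum of the separate effects.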
Physical and subjective studies of aircraft interior noise and vibration
NASA Technical Reports Server (NTRS)
Stephens, D. G.; Leatherwood, J. D.
1979-01-01
Measurements to define and quantify the interior noise and vibration stimuli of aircraft are reviewed as well as field and simulation studies to determine the subjective response to such stimuli, and theoretical and experimental studies to predict and control the interior environment. In addition, ride quality criteria/standards for noise, vibration, and combinations of these stimuli are discussed in relation to the helicopter cabin environment. Data on passenger response are presented to illustrate the effects of interior noise and vibration on speech intelligibility and comfort of crew and passengers. The interactive effects of noise with multifrequency and multiaxis vibration are illustrated by data from LaRC ride quality simulator. Constant comfort contours for various combinations of noise and vibration are presented and the incorporation of these results into a user-oriented model are discussed. With respect to aircraft interior noise and vibration control, ongoing studies to define the near-field noise, the transmission of noise through the structure, and the effectiveness of control treatments are described.
Multi-scale functional mapping of tidal marsh vegetation for restoration monitoring
NASA Astrophysics Data System (ADS)
Tuxen Bettman, Karin
2007-12-01
Nearly half of the world's natural wetlands have been destroyed or degraded, and in recent years, there have been significant endeavors to restore wetland habitat throughout the world. Detailed mapping of restoring wetlands can offer valuable information about changes in vegetation and geomorphology, which can inform the restoration process and ultimately help to improve chances of restoration success. I studied six tidal marshes in the San Francisco Estuary, CA, US, between 2003 and 2004 in order to develop techniques for mapping tidal marshes at multiple scales by incorporating specific restoration objectives for improved longer term monitoring. I explored a "pixel-based" remote sensing image analysis method for mapping vegetation in restored and natural tidal marshes, describing the benefits and limitations of this type of approach (Chapter 2). I also performed a multi-scale analysis of vegetation pattern metrics for a recently restored tidal marsh in order to target the metrics that are consistent across scales and will be robust measures of marsh vegetation change (Chapter 3). Finally, I performed an "object-based" image analysis using the same remotely sensed imagery, which maps vegetation type and specific wetland functions at multiple scales (Chapter 4). The combined results of my work highlight important trends and management implications for monitoring wetland restoration using remote sensing, and will better enable restoration ecologists to use remote sensing for tidal marsh monitoring. Several findings important for tidal marsh restoration monitoring were made. Overall results showed that pixel-based methods are effective at quantifying landscape changes in composition and diversity in recently restored marshes, but are limited in their use for quantifying smaller, more fine-scale changes. 
While pattern metrics can highlight small but important changes in vegetation composition and configuration across years, scientists should exercise caution when using metrics in their studies or to validate restoration management decisions, and multi-scale analyses should be performed before metrics are used in restoration science for important management decisions. Lastly, restoration objectives, ecosystem function, and scale can each be integrated into monitoring techniques using remote sensing for improved restoration monitoring.
Li, Qing; Qiao, Fengxiang; Yu, Lei; Shi, Junqing
2018-06-01
Vehicle interior noise is concentrated at dominant frequencies below 500 Hz and around 800 Hz, which fall into the bands that may impair hearing. Recent studies demonstrated that freeway commuters are chronically exposed to vehicle interior noise, bearing the risk of hearing impairment. The interior noise evaluation process is mostly conducted in a laboratory environment. The test results and the developed noise models may underestimate or ignore the noise effects from dynamic traffic and road conditions and configuration. However, the interior noise is highly associated with vehicle maneuvering. Vehicle maneuvering on a freeway weaving segment is more complex because of its nature of conflicting areas. This research is intended to explore the risk of interior noise exposure on freeway weaving segments for freeway commuters and to improve interior noise estimation by constructing a decision tree learning-based noise exposure dose (NED) model, considering weaving segment designs and engine operation. On-road driving tests were conducted on 12 subjects on State Highway 288 in Houston, Texas. On-Board Diagnostics (OBD) II, a smartphone-based roughness app, and a digital sound meter were used to collect vehicle maneuvering and engine information, International Roughness Index, and interior noise levels, respectively. Eleven variables were obtainable from the driving tests, including the length and type of a weaving segment, serving as predictors. The importance of the predictors was estimated by their out-of-bag-permuted predictor delta errors. The hazardous exposure level of the interior noise on weaving segments was quantified in terms of hazard quotient, NED, and daily noise exposure level, respectively. Results showed that the risk of hearing impairment on the freeway is acceptable; the interior noise level is most sensitive to pavement roughness and is subject to freeway configuration and traffic conditions.
The constructed NED model shows high predictive power (R = 0.93, normalized root-mean-square error [NRMSE] < 6.7%). Vehicle interior noise is usually ignored by the public, and its modeling and evaluation are generally conducted in a laboratory environment, regardless of the interior noise effects from dynamic traffic, road conditions, and road configuration. This study quantified the interior noise exposure dose on freeway weaving segments, which provides freeway commuters with a sense of interior noise exposure risk. In addition, a bagged decision tree-based interior noise exposure dose model was constructed, considering vehicle maneuvering, vehicle engine operational information, pavement roughness, and weaving segment configuration. The constructed model could significantly improve interior noise estimation for road engineers and vehicle manufacturers.
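A bagged decision-tree regressor with built-in importances can be sketched with scikit-learn (a random forest is bagging over decision trees, and its out-of-bag score parallels the out-of-bag error used above). The synthetic relationship below — interior noise driven mainly by pavement roughness — is an assumption for illustration, not the measured NED model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
roughness = rng.uniform(1, 6, n)      # hypothetical IRI, m/km
speed = rng.uniform(60, 110, n)       # vehicle speed, km/h
seg_len = rng.uniform(0.2, 1.0, n)    # weaving-segment length, km (irrelevant by design)

# Synthetic interior noise level: dominated by roughness, weakly affected by speed
noise_dba = 55 + 4.0 * roughness + 0.08 * speed + rng.normal(0, 1.0, n)

X = np.column_stack([roughness, speed, seg_len])
model = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0)
model.fit(X, noise_dba)

print(model.feature_importances_.argmax())  # 0: roughness is the most sensitive input
```

The out-of-bag R² (`model.oob_score_`) gives an honest accuracy estimate without a held-out set, which is the appeal of bagged trees for modestly sized on-road datasets.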
NASA Astrophysics Data System (ADS)
Tong, S.; Alessio, A. M.; Kinahan, P. E.
2010-03-01
The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. 
In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in contrast recovery at matched ensemble noise levels and 20% improvement of quantitation SNR in unfiltered data. In addition, the relations between different metrics are studied. A linear correlation is observed between background variability and ensemble noise for all different combinations of reconstruction methods and parameters, suggesting that background variability is a reasonable surrogate for ensemble noise when multiple realizations of scans are not available.
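The pixel-to-pixel versus realization-to-realization distinction drawn above can be sketched with numpy on simulated background data (the background mean, noise level, and ROI layout below are illustrative, not values from the phantom study):

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 independent noise realizations of a uniform 64x64 background region
scans = rng.normal(100.0, 5.0, size=(50, 64, 64))

# Image roughness: pixel-to-pixel variation within a single image, averaged over scans
roughness = scans.std(axis=(1, 2)).mean()

# Ensemble noise: per-pixel variation across realizations, averaged over pixels
ensemble = scans.std(axis=0).mean()

# Background variability: region-to-region variation of ROI means within one scan
roi_means = scans[0].reshape(8, 8, 8, 8).mean(axis=(1, 3))  # 64 ROIs of 8x8 pixels
variability = roi_means.std()
```

For uncorrelated noise the first two metrics nearly coincide; the study's point is that PSF modeling correlates neighboring pixels and so lowers image roughness while leaving ensemble noise and background variability essentially unchanged.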
Spatial separation benefit for unaided and aided listening
Ahlstrom, Jayne B.; Horwitz, Amy R.; Dubno, Judy R.
2013-01-01
Consonant recognition in noise was measured at a fixed signal-to-noise ratio as a function of low-pass-cutoff frequency and noise location in older adults fit with bilateral hearing aids. To quantify age-related differences, spatial benefit was assessed in younger and older adults with normal hearing. Spatial benefit was similar for all groups suggesting that older adults used interaural difference cues to improve speech recognition in noise equivalently to younger adults. Although amplification was sufficient to increase high-frequency audibility with spatial separation, hearing-aid benefit was minimal, suggesting that factors beyond simple audibility may be responsible for limited hearing-aid benefit. PMID:24121648
Low-frequency noise effect on terahertz tomography using thermal detectors.
Guillet, J P; Recur, B; Balacey, H; Bou Sleiman, J; Darracq, F; Lewis, D; Mounaix, P
2015-08-01
In this paper, the impact of low-frequency noise on terahertz computed tomography (THz-CT) is analyzed for several measurement configurations and pyroelectric detectors. First, we acquire real noise data from a continuous millimeter-wave tomographic scanner in order to determine its impact on reconstructed images. Second, noise characteristics are quantified according to two distinct acquisition methods by (i) extrapolating from experimental acquisitions a sinogram for different noise backgrounds and (ii) reconstructing the corresponding spatial distributions in a slice using a CT reconstruction algorithm. Then we describe the low-frequency noise fingerprint and its influence on reconstructed images. From these observations, we demonstrate that some experimental choices can dramatically affect the 3D rendering of reconstructions. We therefore propose experimental methodologies that optimize the quality and accuracy of the 3D reconstructions with respect to the low-frequency noise characteristics observed during acquisitions.
Noise spectra in balanced optical detectors based on transimpedance amplifiers.
Masalov, A V; Kuzhamuratov, A; Lvovsky, A I
2017-11-01
We present a thorough theoretical analysis and experimental study of the shot and electronic noise spectra of a balanced optical detector based on an operational amplifier connected in a transimpedance scheme. We identify and quantify the primary parameters responsible for the limitations of the circuit, in particular, the bandwidth and shot-to-electronic noise clearance. We find that the shot noise spectrum can be made consistent with the second-order Butterworth filter, while the electronic noise grows in proportion to the square of the frequency. Good agreement between theory and experiment is observed; however, the capacitances of the operational amplifier input and the photodiodes appear significantly higher than those specified in the manufacturers' datasheets. This observation is confirmed by independent tests.
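The reported spectral shapes are easy to sketch: a second-order Butterworth magnitude-squared response for the shot noise, against a quadratically rising electronic-noise spectrum. The floor level and slope below are assumed for illustration, not the measured circuit values:

```python
import numpy as np

f = np.linspace(0.0, 3.0, 301)          # frequency in units of the cutoff f_c

# Second-order Butterworth magnitude-squared response: |H(f)|^2 = 1/(1 + (f/f_c)^4)
shot_psd = 1.0 / (1.0 + f**4)

# Electronic noise rising as the square of frequency above an assumed flat floor
elec_psd = 0.01 * (1.0 + f**2)

# Shot-to-electronic clearance in dB, degrading with frequency
clearance_db = 10 * np.log10(shot_psd / elec_psd)

i_fc = np.argmin(np.abs(f - 1.0))
print(round(10 * np.log10(shot_psd[i_fc]), 1))  # -3.0 dB at the cutoff
```

The crossing point of the two curves marks the usable bandwidth of the detector for shot-noise-limited measurements.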
Noise spectra in balanced optical detectors based on transimpedance amplifiers
NASA Astrophysics Data System (ADS)
Masalov, A. V.; Kuzhamuratov, A.; Lvovsky, A. I.
2017-11-01
We present a thorough theoretical analysis and experimental study of the shot and electronic noise spectra of a balanced optical detector based on an operational amplifier connected in a transimpedance scheme. We identify and quantify the primary parameters responsible for the limitations of the circuit, in particular, the bandwidth and shot-to-electronic noise clearance. We find that the shot noise spectrum can be made consistent with the second-order Butterworth filter, while the electronic noise grows in proportion to the square of the frequency. Good agreement between theory and experiment is observed; however, the capacitances of the operational amplifier input and the photodiodes appear significantly higher than those specified in the manufacturers' datasheets. This observation is confirmed by independent tests.
NASA Technical Reports Server (NTRS)
Hayden, R. E.; Wilby, J. F.
1984-01-01
NASA is investigating the feasibility of modifying the 4- by 7-meter wind tunnel at the Langley Research Center to make it suitable for a variety of aeroacoustic testing applications, most notably model helicopter rotors. The amount of noise reduction required to meet NASA's goal for test section background noise was determined, the predominant sources and paths causing the background noise were quantified, and trade-off studies between schemes to reduce fan noise at the source and schemes to attenuate the sound generated in the circuit between the sources and the test section were carried out. An extensive database on circuit sources and paths is also presented.
Music-based magnetic resonance fingerprinting to improve patient comfort during MRI examinations.
Ma, Dan; Pierre, Eric Y; Jiang, Yun; Schluchter, Mark D; Setsompop, Kawin; Gulani, Vikas; Griswold, Mark A
2016-06-01
Unpleasant acoustic noise is a drawback of almost every MRI scan. Instead of reducing acoustic noise to improve patient comfort, we propose a technique for mitigating the noise problem by producing musical sounds directly from the switching magnetic fields while simultaneously quantifying multiple important tissue properties. MP3 music files were converted to arbitrary encoding gradients, which were then used with varying flip angles and repetition times in a two- and three-dimensional magnetic resonance fingerprinting (MRF) examination. This new acquisition method, named MRF-Music, was used to quantify T1 , T2 , and proton density maps simultaneously while providing pleasing sounds to the patients. MRF-Music scans improved patient comfort significantly during MRI examinations. The T1 and T2 values measured from phantom are in good agreement with those from the standard spin echo measurements. T1 and T2 values from the brain scan are also close to previously reported values. MRF-Music sequence provides significant improvement in patient comfort compared with the MRF scan and other fast imaging techniques such as echo planar imaging and turbo spin echo scans. It is also a fast and accurate quantitative method that quantifies multiple relaxation parameters simultaneously. Magn Reson Med 75:2303-2314, 2016. © 2015 Wiley Periodicals, Inc.
Music-Based Magnetic Resonance Fingerprinting to Improve Patient Comfort During MRI Exams
Ma, Dan; Pierre, Eric Y.; Jiang, Yun; Schluchter, Mark D.; Setsompop, Kawin; Gulani, Vikas; Griswold, Mark A.
2015-01-01
Purpose The unpleasant acoustic noise is an important drawback of almost every magnetic resonance imaging scan. Instead of reducing the acoustic noise to improve patient comfort, a method is proposed to mitigate the noise problem by producing musical sounds directly from the switching magnetic fields while simultaneously quantifying multiple important tissue properties. Theory and Methods MP3 music files were converted to arbitrary encoding gradients, which were then used with varying flip angles and TRs in both 2D and 3D MRF exams. This new acquisition method, named MRF-Music, was used to quantify T1, T2, and proton density maps simultaneously while providing pleasing sounds to the patients. Results The MRF-Music scans were shown to significantly improve patient comfort during the MRI scans. The T1 and T2 values measured from phantom are in good agreement with those from the standard spin echo measurements. T1 and T2 values from the brain scan are also close to previously reported values. Conclusions The MRF-Music sequence provides significant improvement in patient comfort compared with the MRF scan and other fast imaging techniques such as EPI and TSE scans. It is also a fast and accurate quantitative method that quantifies multiple relaxation parameters simultaneously. PMID:26178439
Towards a supported common NEAMS software stack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormac Garvey
2012-04-01
The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational resources needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.
Hingerl, Ferdinand F.; Yang, Feifei; Pini, Ronny; ...
2016-02-02
In this paper we present the results of an extensive multiscale characterization of the flow properties and structural and capillary heterogeneities of the Heletz sandstone. We performed petrographic, porosity and capillary pressure measurements on several subsamples. We quantified mm-scale heterogeneity in saturation distributions in a rock core during multi-phase flow using conventional X-ray CT scanning. Core-flooding experiments were conducted under reservoirs conditions (9 MPa, 50 °C) to obtain primary drainage and secondary imbibition relative permeabilities and residual trapping was analyzed and quantified. We provide parameters for relative permeability, capillary pressure and trapping models for further modeling studies. A synchrotron-based microtomographymore » study complements our cm- to mm-scale investigation by providing links between the micromorphology and mm-scale saturation heterogeneities.« less
Shin, Hyunjin; Mutlu, Miray; Koomen, John M.; Markey, Mia K.
2007-01-01
Noise in mass spectrometry can interfere with identification of the biochemical substances in the sample. For example, the electric motors and circuits inside the mass spectrometer or in nearby equipment generate random noise that may distort the true shape of mass spectra. This paper presents a stochastic signal processing approach to analyzing noise from electrical noise sources (i.e., noise from instrumentation) in MALDI TOF mass spectrometry. Noise from instrumentation was hypothesized to be a mixture of thermal noise, 1/f noise, and electric or magnetic interference in the instrument. Parametric power spectral density estimation was conducted to derive the power distribution of the instrumentation noise with respect to frequency. As expected, the experimental results show that noise from instrumentation contains 1/f noise and prominent periodic components in addition to thermal noise. These periodic components imply that the mass spectrometers used in this study may not be completely shielded from internal or external electrical noise sources. However, according to a simulation study of human plasma mass spectra, noise from instrumentation does not seem to affect mass spectra significantly. In conclusion, the analysis of instrumentation noise using stochastic signal processing presented here provides an intuitive perspective on how to quantify noise in mass spectrometry through spectral modeling. PMID:19455245
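The spectral idea — checking whether an estimated PSD falls off as 1/f — can be sketched with a plain periodogram and a log-log slope fit on synthesized pink noise. The synthesis method and the fitting band below are assumptions of the sketch, not the parametric estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
freqs = np.fft.rfftfreq(n, d=1.0)

# Synthesize 1/f ("pink") noise by shaping a white spectrum with 1/sqrt(f)
spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spec[1:] /= np.sqrt(freqs[1:])
spec[0] = 0.0
x = np.fft.irfft(spec, n)

# Periodogram PSD estimate and a log-log slope fit: expect a slope near -1
psd = np.abs(np.fft.rfft(x)) ** 2 / n
band = (freqs > 1e-3) & (freqs < 0.1)
slope = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)[0]
```

Narrow peaks rising above this 1/f trend in a measured PSD would be the "prominent periodic components" attributed to interference, and a flat high-frequency tail would be the thermal-noise floor.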
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999), and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the northwest.
Toward a global multi-scale heliophysics observatory
NASA Astrophysics Data System (ADS)
Semeter, J. L.
2017-12-01
We live within the only known stellar-planetary system that supports life. What we learn about this system is not only relevant to human society and its expanding reach beyond Earth's surface, but also to our understanding of the origins and evolution of life in the universe. Heliophysics is focused on solar-terrestrial interactions mediated by the magnetic and plasma environment surrounding the planet. A defining feature of energy flow through this environment is interaction across physical scales. A solar disturbance aimed at Earth can excite geospace variability on scales ranging from thousands of kilometers (e.g., global convection, region 1 and 2 currents, electrojet intensifications) to tens of meters (e.g., equatorial spread-F, dispersive Alfven waves, plasma instabilities). Most "geospace observatory" concepts are focused on a single modality (e.g., HF/UHF radar, magnetometer, optical) providing a limited parameter set over a particular spatiotemporal resolution. Data assimilation methods have been developed to couple heterogeneous and distributed observations, but resolution has typically been prescribed a priori and according to physical assumptions. This paper develops a conceptual framework for the next generation multi-scale heliophysics observatory, capable of revealing and quantifying the complete spectrum of cross-scale interactions occurring globally within the geospace system. The envisioned concept leverages existing assets, enlists citizen scientists, and exploits low-cost access to the geospace environment. Examples are presented where distributed multi-scale observations have resulted in substantial new insight into the inner workings of our stellar-planetary system.
The Structure of Borders in a Small World
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970
The structure of borders in a small world.
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-11-18
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity.
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data, and time-dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD-informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised, and the location and mechanism of the primary sources are determined; inflow-turbulence noise is found to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions compared with the analytic flow solution, and shows that, for inflow-turbulence noise sources, blade-generated turbulence dominates the atmospheric inflow turbulence.
Noise induced aperiodic rotations of particles trapped by a non-conservative force
NASA Astrophysics Data System (ADS)
Ortega-Piwonka, Ignacio; Angstmann, Christopher N.; Henry, Bruce I.; Reece, Peter J.
2018-04-01
We describe a mechanism whereby random noise can play a constructive role in the manifestation of a pattern, aperiodic rotations, that would otherwise be damped by internal dynamics. The mechanism is described physically in a theoretical model of overdamped particle motion in two dimensions with symmetric damping and a non-conservative force field driven by noise. Cyclic motion only occurs as a result of stochastic noise in this system. However, the persistence of the cyclic motion is quantified by parameters associated with the non-conservative forcing. Unlike stochastic resonance or coherence resonance, where noise can play a constructive role in amplifying a signal that is otherwise below the threshold for detection, in the mechanism considered here, the signal that is detected does not exist without the noise. Moreover, the system described here is a linear system.
Noise exposure in convertible automobiles.
Mikulec, A A; Lukens, S B; Jackson, L E; Deyoung, M N
2011-02-01
To quantify the noise exposure received while driving a convertible automobile with the top open, compared with the top closed. Five different convertible automobiles were driven, with the top both closed and open, and noise levels were measured. The cars were tested at speeds of 88.5, 104.6 and 120.7 km/h. When driving with the convertible top open, the mean noise exposure ranged from 85.3 dB at 88.5 km/h to 89.9 dB at 120.7 km/h. At the tested speeds, noise exposure increased by an average of 12.4-14.6 dB after opening the convertible top. Driving convertible automobiles with the top open at speeds exceeding 88.5 km/h may result in noise exposure levels exceeding recommended limits, especially over prolonged periods.
Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems
Shao, Huaizong
2013-01-01
For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, the transmitter and receiver are placed on separate platforms that use independent master oscillators, so there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With the mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828
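Oscillator frequency instability of the kind a synchronization link must compensate is conventionally quantified with the Allan variance. The sketch below is an illustrative Python implementation of the non-overlapping Allan variance from fractional-frequency samples; it is not the paper's model, and the sampling setup is an assumption for demonstration only.

```python
import numpy as np

def allan_variance(freq, tau_m):
    """Non-overlapping Allan variance of fractional-frequency samples
    at averaging factor tau_m (in samples): half the mean squared
    difference between successive tau_m-sample frequency averages."""
    n = len(freq) // tau_m
    means = freq[:n * tau_m].reshape(n, tau_m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)
```

For white frequency noise the Allan variance falls off as 1/tau, which gives a quick sanity check of the implementation.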
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
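The weighted-averaging core of the method can be illustrated with a classic single-frame non-local means sketch. This is not the authors' algorithm: raw patch intensities stand in for their SCM-derived NMI features, a single frame stands in for the three-frame scheme, and the patch, search-window and smoothing parameters are assumptions.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Generic non-local means: each pixel becomes a weighted average of
    pixels in a search window, with weights from the Euclidean distance
    between the surrounding patches."""
    pad_p, pad_s = patch // 2, search // 2
    padded = np.pad(img, pad_p + pad_s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad_p + pad_s, j + pad_p + pad_s
            ref = padded[ci - pad_p:ci + pad_p + 1, cj - pad_p:cj + pad_p + 1]
            wsum = vsum = 0.0
            for di in range(-pad_s, pad_s + 1):
                for dj in range(-pad_s, pad_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad_p:ni + pad_p + 1,
                                  nj - pad_p:nj + pad_p + 1]
                    # similarity weight from patch distance
                    w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    vsum += w * padded[ni, nj]
            out[i, j] = vsum / wsum
    return out
```

The paper's contribution is precisely in replacing the raw-patch distance above with rotation- and scale-invariant NMI features computed from SCM pulse outputs across frames.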
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
Assessment of noise environment during construction of a major bridge and associated approach road.
Roy, T K; Mukhopadhyay, A R; Ghosh, S K; Majumder, G
2011-10-01
In this paper a methodology to quantify the noise environment during construction of a major bridge and the upgrading of approach road connectivity is provided. Noise levels were monitored at eleven sites, classified into three categories: commercial, residential and silence zones. The study measured the ambient noise levels at all eleven sites during both day and night, considering both working and non-working days. The mean noise level during night time was found to be higher than during day time for commercial, residential and silence zones alike. The likely causes of the higher night-time noise have been explored, and appropriate remedial measures have been suggested to reduce the noise levels. In addition, the noise levels in the three zones have been compared, wherever statistically feasible, with the respective zonal standards; statistically significant differences were found in all cases. The underlying causes and remedies have been provided.
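A comparison of measured site means against a zonal standard can be sketched as a one-sample t test. This is an illustrative stand-in (the paper does not specify its exact test), and the readings and standard below are invented for demonstration.

```python
import math

def one_sample_t(readings, standard):
    """One-sample t statistic for testing whether a site's mean noise
    level differs from its zonal standard."""
    n = len(readings)
    mean = sum(readings) / n
    # unbiased sample variance (n - 1 denominator)
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)
    t = (mean - standard) / math.sqrt(var / n)
    return mean, t
```

The resulting t statistic would be compared against the critical value for n - 1 degrees of freedom at the chosen significance level.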
Chenxi, Li; Chen, Yanni; Li, Youjun; Wang, Jue; Liu, Tian
2016-06-01
The multiscale entropy (MSE) is a novel method for quantifying the intrinsic dynamical complexity of physiological systems over several scales. To evaluate this method as a promising way to explore the neural mechanisms in ADHD, we calculated the MSE of EEG activity during a designed task. EEG data were collected from 13 outpatient boys with a confirmed diagnosis of ADHD and 13 age- and gender-matched normal control children while they performed the multi-source interference task (MSIT). We estimated the MSE by calculating the sample entropy values of the delta, theta, alpha and beta frequency bands over twenty time scales using the coarse-graining procedure. The results showed increased complexity of EEG data in the delta and theta frequency bands and decreased complexity in the alpha frequency band in ADHD children. The findings of this study reveal aberrant neural connectivity in children with ADHD during an interference task and suggest that the MSE method may provide a new index to identify and understand the neural mechanisms of ADHD. Copyright © 2016 Elsevier Inc. All rights reserved.
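The coarse-graining and sample-entropy steps of the MSE procedure can be illustrated with a minimal Python sketch. This is not the authors' code; the embedding dimension m = 2 and tolerance r = 0.15 times the standard deviation of the original series are common defaults assumed here, and the tolerance is fixed at scale 1 as is standard practice.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1 within tolerance r."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=20, m=2):
    r = 0.15 * np.std(x)  # tolerance fixed at scale 1
    return [sample_entropy(coarse_grain(x, tau), m, r)
            for tau in range(1, max_scale + 1)]
```

For white noise the entropy decreases with scale, which is the signature the composite-MSE literature uses as a benchmark.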
Quantifying the deformation of the red blood cell skeleton in shear flow
NASA Astrophysics Data System (ADS)
Peng, Zhangli; Zhu, Qiang
2012-02-01
To quantitatively predict the response of red blood cell (RBC) membrane in shear flow, we carried out multiphysics simulations by coupling a three-level multiscale approach of RBC membranes with a Boundary Element Method (BEM) for surrounding flows. Our multiscale approach includes a model of spectrins with the domain unfolding feature, a molecular-based model of the junctional complex with detailed protein connectivity and a whole cell Finite Element Method (FEM) model with the bilayer-skeleton friction derived from measured transmembrane protein diffusivity based on the Einstein-Stokes relation. Applying this approach, we investigated the bilayer-skeleton slip and skeleton deformation of healthy RBCs and RBCs with hereditary spherocytosis anemia during tank-treading motion. Compared with healthy cells, cells with hereditary spherocytosis anemia sustain much larger skeleton-bilayer slip and area deformation of the skeleton due to deficiency of transmembrane proteins. This leads to extremely low skeleton density and large bilayer-skeleton interaction force, both of which may cause bilayer loss. This finding suggests a possible mechanism of the development of hereditary spherocytosis anemia.
Multiscale Analysis of Head Impacts in Contact Sports
NASA Astrophysics Data System (ADS)
Guttag, Mark; Sett, Subham; Franck, Jennifer; McNamara, Kyle; Bar-Kochba, Eyal; Crisco, Joseph; Blume, Janet; Franck, Christian
2012-02-01
Traumatic brain injury (TBI) is one of the world's major causes of death and disability. To aid companies in designing safer and improved protective gear and to aid the medical community in producing improved quantitative TBI diagnosis and assessment tools, a multiscale finite element model of the human brain, head and neck is being developed. Recorded impact data from football and hockey helmets instrumented with accelerometers are compared to simulated impact data in the laboratory. Using data from these carefully constructed laboratory experiments, we can quantify impact location, magnitude, and linear and angular accelerations of the head. The resultant forces and accelerations are applied to a fully meshed head-form created from MRI data by Simpleware. With appropriate material properties for each region of the head-form, the Abaqus finite element model can determine the stresses, strains, and deformations in the brain. Simultaneously, an in-vitro cellular TBI criterion is being developed to be incorporated into Abaqus models for the brain. The cell-based injury criterion functions the same way that damage criteria for metals and other materials are used to predict failure in structural materials.
Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2015-01-01
Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
Multiscale estimation of excess mass from gravity data
NASA Astrophysics Data System (ADS)
Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni
2014-06-01
We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established allowing the selection of the optimal highest altitude of the vertical profile data and truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as the Gauss' method: (i) we need just a 1-D inversion to obtain our estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is also estimated, besides the excess mass; (iv) the method is very robust versus noise; (v) the profile may be chosen in such a way as to minimize the effects from interfering anomalies or side effects due to a limited area extension. The multiscale estimation of excess mass method can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
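The leading (monopole) term of the multipole expansion along a vertical profile already yields both quantities the abstract mentions, mass and centre depth, via a linear fit. The sketch below keeps only that leading term, so it is a simplified illustration under an isolated-point-source assumption, not the paper's truncated multipole system.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def point_mass_estimate(z, g):
    """Estimate excess mass M and centre-of-mass coordinate z0 from
    vertical-profile gravity data using the monopole term only:
    g(z) = G*M / (z - z0)**2 for observation heights z above the source.
    Linearized via g**-0.5 = (z - z0) / sqrt(G*M)."""
    y = g ** -0.5
    a, b = np.polyfit(z, y, 1)   # fit y = a*z + b
    M = 1.0 / (G * a ** 2)       # slope a = 1/sqrt(G*M)
    z0 = -b / a                  # intercept b = -z0/sqrt(G*M)
    return M, z0
```

On noise-free synthetic data the linearization recovers both parameters exactly; the paper's multipole system generalizes this to non-point sources and quantifies robustness to noise.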
NASA Astrophysics Data System (ADS)
Gao, Zhiyun; Holtze, Colin; Sonka, Milan; Hoffman, Eric; Saha, Punam K.
2010-03-01
Distinguishing pulmonary arterial and venous (A/V) trees via in vivo imaging is a critical first step in the quantification of vascular geometry for purposes of determining, for instance, pulmonary hypertension, detection of pulmonary emboli and more. A multi-scale topo-morphologic opening algorithm has recently been introduced by us separating A/V trees in pulmonary multiple-detector X-ray computed tomography (MDCT) images without contrast. The method starts with two sets of seeds, one for each of the A/V trees, and combines fuzzy distance transform, fuzzy connectivity, and morphologic reconstruction, leading to multi-scale opening of two mutually fused structures while preserving their continuity. The method locally determines the optimum morphological scale separating the two structures. Here, a validation study is reported examining the accuracy of the method using mathematically generated phantoms with different levels of fuzziness, overlap, scale, resolution, noise, and geometric coupling, and MDCT images of pulmonary vessel casting of pigs. After exsanguinating the animal, a vessel cast was generated using rapid-hardening methyl methacrylate compound with additional contrast by 10 cc of Ethiodol in the arterial side, which was scanned in a MDCT scanner at 0.5 mm slice thickness and 0.47 mm in-plane resolution. True segmentations of A/V trees were computed from these images by thresholding. Subsequently, effects of distinguishing A/V contrasts were eliminated and the resulting images were used for A/V separation by our method. Experimental results show that 92%-98% accuracy is achieved using only one seed for each object in phantoms, while 94.4% accuracy is achieved in MDCT cast images using ten seeds for each of the A/V trees.
NASA Astrophysics Data System (ADS)
Adarsh, S.; Reddy, M. Janga
2017-07-01
In this paper, the Hilbert-Huang transform (HHT) approach is used for the multiscale characterization of the All India Summer Monsoon Rainfall (AISMR) time series and monsoon rainfall time series from five homogeneous regions in India. The study employs the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) for multiscale decomposition of monsoon rainfall in India and uses the Normalized Hilbert Transform and Direct Quadrature (NHT-DQ) scheme for the time-frequency characterization. The cross-correlation analysis between orthogonal modes of the All India monthly monsoon rainfall time series and those of five climate indices, namely the Quasi Biennial Oscillation (QBO), El Niño Southern Oscillation (ENSO), Sunspot Number (SN), Atlantic Multi Decadal Oscillation (AMO), and Equatorial Indian Ocean Oscillation (EQUINOO), in the time domain showed that the links of the different climate indices with monsoon rainfall are expressed well only for a few low-frequency modes and for the trend component. Furthermore, this paper investigated the hydro-climatic teleconnection of ISMR at multiple time scales using the HHT-based running correlation analysis technique called time-dependent intrinsic correlation (TDIC). The results showed that both the strength and nature of the association between different climate indices and ISMR vary with time scale. Stemming from this finding, a methodology employing a Multivariate extension of EMD and Stepwise Linear Regression (MEMD-SLR) is proposed for prediction of monsoon rainfall in India. The proposed MEMD-SLR method clearly exhibited superior performance over the IMD operational forecast, M5 Model Tree (MT), and multiple linear regression methods in ISMR predictions and displayed excellent predictive skill during 1989-2012, including the four extreme events that occurred during this period.
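The idea behind a scale-dependent, time-varying association can be conveyed with a much simpler fixed-window running correlation. The sketch below is an illustrative stand-in for TDIC (which adapts the window to the instantaneous period of each mode), not the HHT-based method itself; the signals are synthetic.

```python
import numpy as np

def running_correlation(x, y, window):
    """Sliding-window Pearson correlation between two series:
    a fixed-window simplification of time-dependent correlation."""
    out = []
    for i in range(len(x) - window + 1):
        out.append(np.corrcoef(x[i:i + window], y[i:i + window])[0, 1])
    return np.array(out)
```

A series whose coupling to a driver flips sign mid-record shows up immediately: the running correlation moves from near +1 to near -1, while a single whole-record correlation would average the two regimes away.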
Modelling strategies to predict the multi-scale effects of rural land management change
NASA Astrophysics Data System (ADS)
Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.; Marshall, M.; Reynolds, B.; Wheater, H. S.
2011-12-01
Changes to the rural landscape due to agricultural land management are ubiquitous, yet predicting the multi-scale effects of land management change on hydrological response remains an important scientific challenge. Much empirical research has been of little generic value due to inadequate design and funding of monitoring programmes, while the modelling issues challenge the capability of data-based, conceptual and physics-based modelling approaches. In this paper we report on a major UK research programme, motivated by a national need to quantify effects of agricultural intensification on flood risk. Working with a consortium of farmers in upland Wales, a multi-scale experimental programme (from experimental plots to 2nd order catchments) was developed to address issues of upland agricultural intensification. This provided data support for a multi-scale modelling programme, in which highly detailed physics-based models were conditioned on the experimental data and used to explore effects of potential field-scale interventions. A meta-modelling strategy was developed to represent detailed modelling in a computationally-efficient manner for catchment-scale simulation; this allowed catchment-scale quantification of potential management options. For more general application to data-sparse areas, alternative approaches were needed. Physics-based models were developed for a range of upland management problems, including the restoration of drained peatlands, afforestation, and changing grazing practices. Their performance was explored using literature and surrogate data; although subject to high levels of uncertainty, important insights were obtained, of practical relevance to management decisions. In parallel, regionalised conceptual modelling was used to explore the potential of indices of catchment response, conditioned on readily-available catchment characteristics, to represent ungauged catchments subject to land management change. 
Although based in part on speculative relationships, significant predictive power was derived from this approach. Finally, using a formal Bayesian procedure, these different sources of information (small-scale physical properties, regionalised signatures of flow and available flow measurements) were combined with local flow data in a catchment-scale conceptual model application.
NASA Astrophysics Data System (ADS)
Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.
2017-12-01
A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multi-scale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially-variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous. In most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in terms of statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows quantifying the part of the discrepancies caused by mismatches in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve over land the same retrieval performance as over oceans.
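The "effective resolution" idea, the finest scale at which two decomposed fields still agree, can be sketched in one dimension with a Haar wavelet decomposition. This is an illustrative simplification, not the paper's wavelet scheme: the Haar basis, the 1-D setting and the 0.8 correlation threshold are all assumptions.

```python
import numpy as np

def haar_details(x):
    """One-dimensional Haar wavelet decomposition: return the detail
    coefficients at each dyadic scale, finest first."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) >= 2:
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2))  # detail at this scale
        x = (even + odd) / np.sqrt(2)              # approximation, next scale
    return details

def effective_resolution(a, b, threshold=0.8):
    """Finest dyadic scale (in samples) at which the two fields' detail
    coefficients correlate above the threshold; None if they never do."""
    for level, (da, db) in enumerate(zip(haar_details(a), haar_details(b))):
        if np.corrcoef(da, db)[0, 1] >= threshold:
            return 2 ** (level + 1)
    return None
```

A retrieval corrupted by fine-scale noise decorrelates from the reference at the finest scales but still matches at coarser ones, so its effective resolution is coarser than the grid spacing.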
Tailoring non-equilibrium atmospheric pressure plasmas for healthcare technologies
NASA Astrophysics Data System (ADS)
Gans, Timo
2012-10-01
Non-equilibrium plasmas operated at ambient atmospheric pressure are very efficient sources for energy transport through reactive neutral particles (radicals and metastables), charged particles (ions and electrons), UV radiation, and electro-magnetic fields. This includes the unique opportunity to deliver short-lived highly reactive species such as atomic oxygen and atomic nitrogen. Reactive oxygen and nitrogen species can initiate a wide range of reactions in biochemical systems, both therapeutic and toxic. The toxicological implications are not clear, e.g. potential risks through DNA damage. It is anticipated that interactions with biological systems will be governed through synergies between two or more species. Suitably optimized plasma sources are improbable through empirical investigations alone. Quantifying the power dissipation and energy transport mechanisms through the different interfaces from the plasma regime to ambient air, towards the liquid interface, and the associated impact on the biological system through a new regime of liquid chemistry initiated by the synergy of delivering multiple energy-carrying species, is crucial. The major challenge in overcoming the obstacles of quantifying energy transport and controlling power dissipation has been the severe lack of suitable plasma sources and diagnostic techniques. Diagnostics and simulations of this plasma regime are very challenging; the highly pronounced collision-dominated plasma dynamics at very small dimensions requires extraordinarily high resolution, simultaneously in space (microns) and time (picoseconds). Numerical simulations are equally challenging due to the inherent multi-scale character, with very rapid electron collisions at one extreme and the transport of chemically stable species characterizing completely different domains at the other. This presentation will discuss our recent progress in actively combining advanced optical diagnostics and multi-scale computer simulations.
Active and Passive Hydrologic Tomographic Surveys: A Revolution in Hydrology (Invited)
NASA Astrophysics Data System (ADS)
Yeh, T. J.
2013-12-01
Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities) for many reasons, including non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collecting the same type of, but non-redundant, data sets for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a piece of non-redundant data) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the feat of current science and technology.
Rivera, Ana Leonor; Toledo-Roy, Juan C.; Ellis, Jason; Angelova, Maia
2017-01-01
Circadian rhythms become less dominant and less regular with chronic-degenerative disease, such that to accurately assess these pathological conditions it is important to quantify not only periodic characteristics but also more irregular aspects of the corresponding time series. Novel data-adaptive techniques, such as singular spectrum analysis (SSA), allow for the decomposition of experimental time series, in a model-free way, into a trend, quasiperiodic components and noise fluctuations. We compared SSA with the traditional techniques of cosinor analysis and intradaily variability using 1-week continuous actigraphy data in young adults with acute insomnia and healthy age-matched controls. The findings suggest a small but significant delay in circadian components in the subjects with acute insomnia, i.e. a larger acrophase, and alterations in the day-to-day variability of acrophase and amplitude. The power of the ultradian components follows a fractal 1/f power law for controls, whereas for those with acute insomnia this power law breaks down because of an increased variability at the 90min time scale, reminiscent of Kleitman’s basic rest-activity (BRAC) cycles. This suggests that for healthy sleepers attention and activity can be sustained at whatever time scale required by circumstances, whereas for those with acute insomnia this capacity may be impaired and these individuals need to rest or switch activities in order to stay focused. Traditional methods of circadian rhythm analysis are unable to detect the more subtle effects of day-to-day variability and ultradian rhythm fragmentation at the specific 90min time scale. PMID:28753669
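The model-free decomposition into trend, quasiperiodic components and noise can be illustrated with a bare-bones SSA sketch: embed the series in a trajectory matrix, take its SVD, and Hankelize each rank-one term back into a series. This is illustrative only; the window length and the grouping of components into trend versus ultradian rhythms are analysis choices the paper makes from the data, assumed here.

```python
import numpy as np

def ssa_components(x, L):
    """Basic singular spectrum analysis: returns one additive component
    per singular triple, obtained by diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    # L x K trajectory matrix of lagged windows
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average over anti-diagonals to recover a series
        comp = np.array([np.mean(Xk[::-1].diagonal(i - (L - 1)))
                         for i in range(N)])
        comps.append(comp)
    return np.array(comps)
```

Because the decomposition is exact, the components sum back to the original series, and for a noise-free trend-plus-oscillation signal the first few components capture essentially everything.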
Computer enhancement of radiographs
NASA Technical Reports Server (NTRS)
Dekaney, A.; Keane, J.; Desautels, J.
1973-01-01
Examination of three relevant noise processes and the image degradation associated with Marshall Space Flight Center's (MSFC) X-ray/scanning system was conducted for application to computer enhancement of radiographs using MSFC's digital filtering techniques. Graininess of type M, R single coat and R double coat X-ray films was quantified as a function of density level using root-mean-square (RMS) granularity. Quantum mottle (including film grain) was quantified as a function of the above film types, exposure level, specimen material and thickness, and film density using RMS granularity and power spectral density (PSD). For various neutral-density levels the scanning device used in digital conversion of radiographs was examined for noise characteristics, which were quantified by RMS granularity and PSD. Image degradation of the entire pre-enhancement system (MG-150 X-ray device, film, and optronics scanner) was measured using edge targets to generate modulation transfer functions (MTF). The four parameters were examined as a function of scanning aperture sizes of approximately 12.5, 25 and 50 microns.
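RMS granularity, as used above, is essentially the standard deviation of film-density readings taken through a scanning aperture of a given size. The sketch below is a simplified 1-D illustration of that definition (not MSFC's procedure; the aperture is modeled as a simple boxcar average over samples).

```python
import numpy as np

def rms_granularity(density, aperture):
    """RMS granularity: standard deviation of density readings after
    averaging over a scanning aperture of the given width in samples.
    Larger apertures average out grain, lowering the RMS value."""
    n = len(density) // aperture
    scanned = density[:n * aperture].reshape(n, aperture).mean(axis=1)
    return scanned.std()
```

This captures the aperture dependence reported in the study: for uncorrelated grain, quadrupling the aperture area roughly halves the measured RMS granularity.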
Scale-dependent behavior of scale equations.
Kim, Pilwon
2009-09-01
We propose a new mathematical framework to formulate the scale structures of general systems. Stack equations characterize a system in terms of accumulative scales; their behavior at each scale level is determined independently, without reference to other levels. Most standard geometries in mathematics can be reformulated as such stack equations. By incorporating interaction between scales, we generalize stack equations into scale equations. Scale equations can accommodate various behaviors at different scale levels within one integrated solution. In contrast to standard geometries, such solutions often reveal eccentric scale-dependent figures, providing a clue to understanding the multiscale nature of the real world. In particular, it is suggested that Gaussian noise stems from nonlinear scale interactions.
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessments of various ride quality metrics including the NASA ride comfort model, and an examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew discomfort in the combined noise and vibration environment was the NASA discomfort index.
Separating Decision and Encoding Noise in Signal Detection Tasks
Cabrera, Carlos Alexander; Lu, Zhong-Lin; Dosher, Barbara Anne
2015-01-01
In this paper we develop an extension to the Signal Detection Theory (SDT) framework to separately estimate internal noise arising from representational and decision processes. Our approach constrains SDT models with decision noise by combining a multi-pass external noise paradigm with confidence rating responses. In a simulation study we present evidence that representation and decision noise can be separately estimated over a range of representative underlying representational and decision noise level configurations. These results also hold across a number of decision rules and show resilience to rule misspecification. The new theoretical framework is applied to a visual detection confidence-rating task with three and five response categories. This study complements and extends the recent efforts of researchers (Benjamin, Diaz, & Wee, 2009; Mueller & Weidemann, 2008; Rosner & Kochanski, 2009; Kellen, Klauer, & Singmann, 2012) to separate and quantify underlying sources of response variability in signal detection tasks. PMID:26120907
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge to accurately locate and quantify the pass-by noise radiated by running vehicles. In the present work, a system based on a microphone array is developed for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.
Combined Effects of High-Speed Railway Noise and Ground Vibrations on Annoyance
Yokoshima, Shigenori; Morihara, Takashi; Sato, Tetsumi; Yano, Takashi
2017-01-01
The Shinkansen super-express railway system in Japan has greatly increased its capacity and has expanded nationwide. However, many inhabitants in areas along the railways have been disturbed by noise and ground vibration from the trains. Additionally, the Shinkansen railway emits a higher level of ground vibration than conventional railways at the same noise level. These findings imply that building vibrations affect living environments as significantly as the associated noise. Therefore, it is imperative to quantify the effects of noise and vibration exposures on each annoyance under simultaneous exposure. We performed a secondary analysis using individual datasets of exposure and community response associated with Shinkansen railway noise and vibration. The data consisted of six socio-acoustic surveys, which were conducted separately over the last 20 years in Japan. Applying a logistic regression analysis to the datasets, we confirmed the combined effects of vibration/noise exposure on noise/vibration annoyance. Moreover, we proposed a representative relationship between noise and vibration exposures, and the prevalence of each annoyance associated with the Shinkansen railway. PMID:28749452
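The dose-response modeling described above can be sketched with a plain logistic regression fit by gradient descent. The exposure values and annoyance labels below are invented for illustration; the actual survey datasets are not reproduced here:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(annoyed) = sigmoid(b0 + b1*noise + b2*vibration) by plain
    stochastic gradient descent (toy stand-in for a stats package)."""
    w = [0.0] * (len(X[0]) + 1)          # intercept + one weight per exposure
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical scaled exposures: (noise, vibration); 1 = reported annoyance
X = [(0.0, 0.0), (0.2, 0.1), (0.9, 0.8), (1.0, 0.9), (0.1, 0.0), (0.8, 1.0)]
y = [0, 0, 1, 1, 0, 1]
w = fit_logistic(X, y)
```

The fitted coefficients play the role of the combined noise/vibration effects the study estimates from its pooled survey data.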
Multiscale Simulation Framework for Coupled Fluid Flow and Mechanical Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Thomas; Efendiev, Yalchin; Tchelepi, Hamdi
2016-05-24
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics.
Multiscale analysis and computation for flows in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efendiev, Yalchin; Hou, T. Y.; Durlofsky, L. J.
Our work in this project is aimed at making fundamental advances in multiscale methods for flow and transport in highly heterogeneous porous media. The main thrust of this research is to develop a systematic multiscale analysis and efficient coarse-scale models that can capture global effects and extend existing multiscale approaches to problems with additional physics and uncertainties. A key emphasis is on problems without an apparent scale separation. Multiscale solution methods are currently under active investigation for the simulation of subsurface flow in heterogeneous formations. These procedures capture the effects of fine-scale permeability variations through the calculation of specialized coarse-scale basis functions. Most of the multiscale techniques presented to date employ localization approximations in the calculation of these basis functions. For some highly correlated (e.g., channelized) formations, however, global effects are important and these may need to be incorporated into the multiscale basis functions. Other challenging issues facing multiscale simulations are the extension of existing multiscale techniques to problems with additional physics, such as compressibility, capillary effects, etc. In our project, we explore the improvement of multiscale methods through the incorporation of additional (single-phase flow) information and the development of a general multiscale framework for flows in the presence of uncertainties, compressible flow and heterogeneous transport, and geomechanics. We have considered (1) adaptive local-global multiscale methods, (2) multiscale methods for the transport equation, (3) operator-based multiscale methods and solvers, (4) multiscale methods in the presence of uncertainties and applications, (5) multiscale finite element methods for high contrast porous media and their generalizations, and (6) multiscale methods for geomechanics. Below, we present a brief overview of each of these contributions.
Chen, Yun; Yang, Hui
2013-01-01
Heart rate variability (HRV) analysis has emerged as an important research topic to evaluate autonomic cardiac function. However, traditional time- and frequency-domain analyses characterize and quantify only linear and stationary phenomena. In the present investigation, we made a comparative analysis of three alternative approaches (i.e., wavelet multifractal analysis, Lyapunov exponents and multiscale entropy analysis) for quantifying nonlinear dynamics in heart rate time series. Note that these extracted nonlinear features provide information about nonlinear scaling behaviors and the complexity of cardiac systems. To evaluate the performance, we used 24-hour HRV recordings from 54 healthy subjects and 29 heart failure patients, available in PhysioNet. The three nonlinear methods are evaluated not only individually but also in combination using three classification algorithms, i.e., linear discriminant analysis, quadratic discriminant analysis and k-nearest neighbors. Experimental results show that the three nonlinear methods capture nonlinear dynamics from different perspectives and that the combined feature set achieves the best performance, i.e., sensitivity 97.7% and specificity 91.5%. Collectively, nonlinear HRV features show promise for identifying disorders of autonomic cardiovascular function.
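Multiscale entropy, one of the three nonlinear features above, rests on two pieces: coarse-graining the series at each scale and computing the sample entropy of the coarse-grained series. A compact sketch (simplified template counting, suitable only for short toy series):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the probability that sequences
    close for m points remain close for m+1 points (simplified count)."""
    def matches(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float('inf')

def coarse_grain(x, scale):
    """MSE coarse-graining: means of non-overlapping windows of length `scale`."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]
```

The MSE curve is then `sample_entropy(coarse_grain(x, s))` plotted against scale `s`; in practice `r` is taken as a fraction of the series' standard deviation.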
Hu, Xiao; Blemker, Silvia S
2015-08-01
Duchenne muscular dystrophy (DMD) is a genetic disease that occurs due to the deficiency of the dystrophin protein. Although dystrophin is deficient in all muscles, it is unclear why degeneration progresses differently across muscles in DMD. We hypothesized that each muscle undergoes a different degree of eccentric contraction during gait, which could contribute to the selective degeneration in lower limb muscle, as indicated by various amounts of fatty infiltration. By comparing eccentric contractions quantified from a previous multibody dynamic musculoskeletal gait simulation and fat fractions quantified in a recent imaging study, our preliminary analyses show a strong correlation between eccentric contractions during gait and lower limb muscle fat fractions, supporting our hypothesis. This knowledge is critical for developing safe exercise regimens for the DMD population. This study also provides supportive evidence for using multiscale modeling and simulation of the musculoskeletal system in future DMD research. © 2015 Wiley Periodicals, Inc.
Crystal Plasticity Model of Reactor Pressure Vessel Embrittlement in GRIZZLY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Pritam; Biner, Suleyman Bulent; Zhang, Yongfeng
2015-07-01
The integrity of reactor pressure vessels (RPVs) is of utmost importance to ensure safe operation of nuclear reactors under extended lifetime. Microstructure-scale models at various length and time scales, coupled concurrently or through homogenization methods, can play a crucial role in understanding and quantifying irradiation-induced defect production, growth and their influence on mechanical behavior of RPV steels. A multi-scale approach, involving atomistic, meso- and engineering-scale models, is currently being pursued within the GRIZZLY project to understand and quantify irradiation-induced embrittlement of RPV steels. Within this framework, a dislocation-density based crystal plasticity model has been developed in GRIZZLY that captures the effect of irradiation-induced defects on the flow stress behavior and is presented in this report. The present formulation accounts for the interaction between self-interstitial loops and matrix dislocations. The model predictions have been validated with experiments and dislocation dynamics simulation.
Kirilina, Evgeniya; Yu, Na; Jelzow, Alexander; Wabnitz, Heidrun; Jacobs, Arthur M; Tachtsidis, Ilias
2013-01-01
Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between physiological noise originating from forehead skin haemodynamics and cerebral signals is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis (WCA). Depth selectivity was achieved by analyzing moments of photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. WCA was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of the facial SBF, time-locked to the task-evoked fNIRS signals. All processes affect oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.
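The regressor-based de-noising step can be illustrated with ordinary least squares on a single physiological regressor; the study used a set of regressors across wavelet time scales, so this is only the core operation:

```python
def regress_out(signal, regressor):
    """Remove the best linear fit of one physiological regressor
    (e.g. a respiration or Mayer-wave waveform) from an fNIRS channel."""
    n = len(signal)
    ms = sum(signal) / n
    mr = sum(regressor) / n
    cov = sum((s - ms) * (r - mr) for s, r in zip(signal, regressor))
    var = sum((r - mr) ** 2 for r in regressor)
    beta = cov / var                       # least-squares slope
    return [s - beta * (r - mr) for s, r in zip(signal, regressor)]

# Toy check: a channel that is pure regressor plus a constant offset
reg = [0.0, 1.0, 2.0, 3.0, 4.0]
sig = [2.0 * r + 1.0 for r in reg]
cleaned = regress_out(sig, reg)            # all physiological variance removed
```

With multiple regressors one would solve the multivariate least-squares problem instead of applying this channel by channel.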
NASA Astrophysics Data System (ADS)
Lian, Huan; Soulopoulos, Nikolaos; Hardalupas, Yannis
2017-09-01
The experimental evaluation of the topological characteristics of the turbulent flow in a `box' of homogeneous and isotropic turbulence (HIT) with zero mean velocity is presented. This requires an initial evaluation of the effect of signal noise on the measurement of velocity invariants. The joint probability distribution functions (pdfs) of experimentally evaluated, noise contaminated, velocity invariants have a different shape than the corresponding noise-free joint pdfs obtained from the DNS data of the Johns Hopkins University (JHU) open resource HIT database. A noise model, based on Gaussian and impulsive salt-and-pepper noise, is established and added artificially to the DNS velocity vector field of the JHU database. Digital filtering methods, based on median and Wiener filters, are chosen to eliminate the modeled noise source, and their capacity to restore the joint pdfs of velocity invariants to those of the noise-free DNS data is examined. The remaining errors after filtering are quantified by evaluating the global mean velocity, turbulent kinetic energy and global turbulent homogeneity, assessed through the behavior of the ratio of the standard deviations of the velocity fluctuations in two directions, the energy spectrum of the velocity fluctuations and the eigenvalues of the rate-of-strain tensor. A data-filtering method, based on median-filtered velocity with different median filter window sizes, is applied to the 2D time-resolved particle image velocimetry (TR-PIV) velocity measurements, and the clustering of zero-velocity points of the turbulent field is quantified using the radial distribution function (RDF) and Voronoï analysis. It was found that a median filter with a window size of 3 × 3 vector spacings is an effective and efficient approach to eliminate the experimental noise from PIV velocity images to a satisfactory level and extract the statistical two-dimensional topological turbulent flow patterns.
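The 3 × 3 median filter found to be most effective above can be sketched directly; here on a toy 2D field standing in for one PIV velocity component, contaminated with a single impulsive (salt-and-pepper) outlier:

```python
from statistics import median

def median_filter_3x3(field):
    """Median-filter a 2D scalar field (e.g. one PIV velocity component)
    with a 3x3 window; boundary points keep their original values."""
    rows, cols = len(field), len(field[0])
    out = [row[:] for row in field]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [field[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = median(window)
    return out

# A zero field with one impulsive outlier at the center
field = [[0.0] * 3 for _ in range(3)]
field[1][1] = 99.0
smoothed = median_filter_3x3(field)   # outlier replaced by the local median
```

Unlike a mean filter, the median rejects isolated spikes without smearing them into neighboring vectors, which is why it suits impulsive PIV noise.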
NASA Technical Reports Server (NTRS)
Norum, T. D.
1978-01-01
A 2.54 cm (1.00 in.) nozzle supplied with nitrogen was mounted above an automobile and driven over an asphalt roadway past stationary microphones in an attempt to quantify the effects of vehicle motion on jet mixing noise. The nozzle was then tested in the Langley anechoic noise facility with a large free jet simulating the relative motion. The results are compared for these two methods of investigating forward speed effects on jet mixing noise. The vehicle results indicate a noise decrease with forward speed throughout the Doppler-shifted static spectrum. This decrease across the entire frequency range was also apparent in the free-jet results. The similarity of the results indicates that the effects of flight on jet mixing noise can be predicted by simulation of forward speed with a free jet. Overall sound pressure levels were found to decrease with forward speed at all observation angles for both methods of testing.
NASA Astrophysics Data System (ADS)
Hati, Archita; Nelson, Craig W.; Pappas, David P.; Howe, David A.
2017-11-01
The cross-spectrum noise measurement technique enables enhanced resolution of spectral measurements. However, it has disadvantages, namely increased complexity, the inability to make real-time measurements, and bias due to the "cross-spectral collapse" (CSC) effect. CSC can occur when the spectral density of a random process under investigation approaches the thermal noise of the power splitter. This effect can severely bias results due to a differential measurement between the investigated noise and the anti-correlated (phase-inverted) noise of the power splitter. In this paper, we report an accurate measurement of the phase noise of a thermally limited electronic oscillator operating at room temperature (300 K) without significant CSC bias. We mitigated the problem by cooling the power splitter to liquid helium temperature (4 K). We quantify errors of greater than 1 dB that occur when the thermal noise of the oscillator at room temperature is measured with the power splitter at temperatures above 77 K.
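The noise-rejection principle behind the cross-spectrum technique can be shown in a time-domain analogue: averaging the product of two channels that share a common signal but carry independent instrument noise converges to the common signal's power, while each channel's own noise averages away. A sketch with synthetic data (all parameters invented for illustration):

```python
import random

random.seed(1)

def cross_power_estimate(x1, x2):
    """Zero-lag cross product averaged over samples: converges to the
    power of the component common to both channels, while independent
    (uncorrelated) channel noise averages toward zero."""
    return sum(a * b for a, b in zip(x1, x2)) / len(x1)

n = 20000
signal = [random.gauss(0, 1.0) for _ in range(n)]    # common process, power 1
x1 = [s + random.gauss(0, 2.0) for s in signal]      # channel 1 + its own noise
x2 = [s + random.gauss(0, 2.0) for s in signal]      # channel 2 + its own noise

cross = cross_power_estimate(x1, x2)    # near 1: instrument noise rejected
single = cross_power_estimate(x1, x1)   # near 5: signal power plus noise power
```

The CSC bias discussed above arises when the "common" component includes anti-correlated splitter noise, so the cross estimate is pulled below the true level rather than toward it.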
Underwater noise pollution in a coastal tropical environment.
Bittencourt, L; Carvalho, R R; Lailson-Brito, J; Azevedo, A F
2014-06-15
Underwater noise pollution has become a major concern in marine habitats. Guanabara Bay, southeastern Brazil, is an impacted area of economic importance with constant vessel traffic. One hundred acoustic recording sessions took place over ten locations. Sound sources operating within a 1 km radius of each location were quantified during recordings. The highest mean sound pressure level near the surface was 111.56±9.0 dB re 1 μPa at the frequency band of 187 Hz. Above 15 kHz, the highest mean sound pressure level was 76.21±8.3 dB re 1 μPa at the frequency of 15.89 kHz. Noise levels correlated with the number of operating vessels, and vessel traffic composition influenced noise profiles. Shipping locations had the highest noise levels, while small-vessel locations had the lowest. Guanabara Bay showed noise pollution similar to that of other impacted coastal regions, which is related to shipping and vessel traffic. Copyright © 2014 Elsevier Ltd. All rights reserved.
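Sound pressure levels in dB re 1 μPa, as reported above, follow from the RMS pressure and the underwater reference pressure of 1 μPa:

```python
import math

def spl_db_re_1upa(pressures_pa):
    """Sound pressure level in dB re 1 micropascal from a pressure time
    series in pascals (1 uPa is the underwater reference, unlike the
    20 uPa convention used in air)."""
    n = len(pressures_pa)
    p_rms = math.sqrt(sum(p * p for p in pressures_pa) / n)
    return 20.0 * math.log10(p_rms / 1e-6)

# A tone with 1 Pa RMS amplitude corresponds to 120 dB re 1 uPa
```

In practice the level is computed per frequency band after filtering, which is how band-specific values such as the 187 Hz figure above are obtained.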
Phellan, Renzo; Forkert, Nils D
2017-11-01
Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: Multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientations responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy for patients with an AVM, because vessels may deviate from a tubular shape in such cases. Vessel enhancement algorithms can help to improve the accuracy of the segmentation of the vascular system. However, their contribution to accuracy has to be evaluated as it depends on the specific application, and in some cases it can lead to a reduction of the overall accuracy. No specific filter was suitable for all tested scenarios. © 2017 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao
Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volume enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART) that was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined multiscale Hessian response to enhance MCs by shape and bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values and the number of MCs in the cluster, cluster shape, and cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs. For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement in the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.
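The contrast-to-noise ratio used above to rank MC candidates is, in its common form, the object/background mean difference divided by the background standard deviation (the paper's exact definition may differ in detail):

```python
import math

def cnr(object_pixels, background_pixels):
    """Contrast-to-noise ratio of a candidate object:
    (mean object intensity - mean background) / background std."""
    mo = sum(object_pixels) / len(object_pixels)
    mb = sum(background_pixels) / len(background_pixels)
    var_b = sum((p - mb) ** 2 for p in background_pixels) / len(background_pixels)
    return (mo - mb) / math.sqrt(var_b)

# Toy example: bright candidate over a noisy background
value = cnr([10.0, 10.0], [0.0, 2.0, 0.0, 2.0])
```

A reconstruction step that raises MC intensity relative to background noise raises this ratio, which is how the MSBF regularization improves detectability.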
Characterization of a hypersonic quiet wind tunnel nozzle
NASA Astrophysics Data System (ADS)
Sweeney, Cameron J.
The Boeing/AFOSR Mach-6 Quiet Tunnel at Purdue University has been able to achieve low-disturbance flows at high Reynolds numbers for approximately ten years. The flow in the nozzle was last characterized in 2010. However, researchers have noted that the performance of the nozzle has changed in the intervening years. Understanding the tunnel characteristics is critical for the hypersonic boundary-layer transition research performed at the facility, and any change in performance could have significant effects on research performed at the facility. Pitot probe measurements were made using Kulite and PCB pressure transducers to quantify the performance changes since characterization was last performed. Aspects of the nozzle that were investigated include the radial uniformity of the flow, the effects that time and stagnation pressure have on the flow, and the Reynolds number limits of low-disturbance flows. Measurements showed that freestream noise levels are consistently around 0.01% to 0.02% for the majority of the quiet flow core, with quiet flow now achievable for Reynolds numbers up to Re = 13.0×10^6/m. Additionally, while pitot probes are a widely used measurement technique for quantifying freestream disturbances, they are not without drawbacks. In order to provide a more complete methodology for freestream noise measurement, other researchers have started experimenting with alternate geometries, such as cones. Using a newly designed 30° half-angle cone model, measurements were performed to quantify the freestream noise in the BAM6QT and compare the performance with another hypersonic wind tunnel. Also, measurements were made with three newly designed pitot sleeves to study the effects of probe geometry on freestream noise measurements. The results were compared to recent DNS calculations.
Suppression and enhancement of transcriptional noise by DNA looping
NASA Astrophysics Data System (ADS)
Vilar, Jose M. G.; Saiz, Leonor
2014-06-01
DNA looping has been observed to enhance and suppress transcriptional noise but it is uncertain which of these two opposite effects is to be expected for given conditions. Here, we derive analytical expressions for the main quantifiers of transcriptional noise in terms of the molecular parameters and elucidate the role of DNA looping. Our results rationalize paradoxical experimental observations and provide the first quantitative explanation of landmark individual-cell measurements at the single molecule level on the classical lac operon genetic system [Choi, L. Cai, K. Frieda, and X. S. Xie, Science 322, 442 (2008), 10.1126/science.1161427].
Intensity transform and Wiener filter in measurement of blood flow in arteriography
NASA Astrophysics Data System (ADS)
Nunes, Polyana F.; Franco, Marcelo L. N.; Filho, João. B. D.; Patrocínio, Ana C.
2015-03-01
Arteriography makes it possible to detect anomalies in blood vessels and diseases such as stroke, stenosis, and bleeding, and it is especially important in the diagnosis of encephalic death in comatose individuals. Encephalic death can be diagnosed only when there is complete interruption of all brain functions, and hence of blood flow. During the examination, interference from environmental factors, poor equipment maintenance, patient movement, and other sources can directly increase the noise in angiography images. Digital image processing techniques are therefore needed to minimize this noise and improve the pixel count. This paper proposes the use of a median filter and intensity-transform enhancement based on the sigmoid function together with the Wiener filter to obtain less noisy images. Two filtering techniques were implemented to remove image noise: one using the median filter, and the other using the Wiener filter combined with the sigmoid function. Across 14 quantified tests, including 7 encephalic-death and 7 other cases, the technique that achieved the most satisfactory pixel counts, while also presenting the least noise, was the Wiener filter with the sigmoid function, in this case used with a cutoff of 0.03.
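The two ingredients of the second technique can be sketched as follows: a sigmoid intensity transform centered at the reported 0.03 cutoff (the gain value is an assumed parameter), and a minimal global Wiener-style shrinkage standing in for the full adaptive Wiener filter:

```python
import math

def sigmoid_transform(pixels, cutoff=0.03, gain=10.0):
    """Sigmoid intensity transform on [0,1] pixel values; `cutoff` centers
    the contrast curve (0.03 as reported) and `gain` is an assumed slope."""
    return [1.0 / (1.0 + math.exp(-gain * (p - cutoff))) for p in pixels]

def wiener_shrink(pixels, noise_var):
    """Minimal global Wiener shrinkage: attenuate deviations from the mean
    by the estimated signal-to-(signal+noise) variance ratio. The real
    filter applies this locally over a sliding window."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    gain = max(0.0, var - noise_var) / var if var > 0 else 0.0
    return [mean + gain * (p - mean) for p in pixels]

centered = sigmoid_transform([0.03])        # cutoff maps to mid-gray 0.5
flattened = wiener_shrink([0.0, 1.0], 10.0) # noise dominates: output flattens
```

When the assumed noise variance exceeds the local variance, the shrinkage gain clamps to zero and the output collapses to the local mean, which is the noise-suppressing regime.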
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, and a ringing metric to quantify the ringing artifacts introduced in images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and agrees with visual observation of ringing artifacts in the restored images. PMID:20186290
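The KLE by the method of snapshots reduces the eigenproblem to the small snapshot Gram matrix; reconstructing each section from the largest-eigenvalue basis vector removes most of the uncorrelated noise, as described above. A sketch with a synthetic 1D "section" (power iteration stands in for a full eigensolver; all sizes and noise levels are illustrative):

```python
import math, random

random.seed(7)

def kle_leading_mode(snapshots, iters=200):
    """Karhunen-Loeve by the method of snapshots: build the small
    snapshot Gram matrix, find its leading eigenvector by power
    iteration, and assemble the dominant spatial mode."""
    k = len(snapshots)
    gram = [[sum(a * b for a, b in zip(si, sj)) / k for sj in snapshots]
            for si in snapshots]
    v = [1.0] * k
    for _ in range(iters):                     # power iteration on Gram matrix
        w = [sum(gram[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    mode = [sum(v[i] * snapshots[i][p] for i in range(k))
            for p in range(len(snapshots[0]))]
    norm = math.sqrt(sum(x * x for x in mode))
    return [x / norm for x in mode]

def project(snapshot, mode):
    """Reconstruct one section from the largest-eigenvalue mode only."""
    c = sum(a * b for a, b in zip(snapshot, mode))
    return [c * m for m in mode]

clean = [math.sin(2 * math.pi * p / 32) for p in range(32)]   # true structure
snaps = [[c + random.gauss(0, 0.3) for c in clean] for _ in range(10)]
mode = kle_leading_mode(snaps)
denoised = project(snaps[0], mode)
```

Because the Gram matrix is only k × k for k snapshots, this is far cheaper than an eigendecomposition over the pixel space, which is what makes the prefilter fast.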
NASA Astrophysics Data System (ADS)
Herega, Alexander; Sukhanov, Volodymyr; Vyrovoy, Valery
2016-11-01
A multiplicative measure and a method for estimating the ordering of the nearest neighborhood in the multiscale "site" percolation problem are considered. The report also shows the possibility of quantifying the relative degree of order of two nearest neighborhoods, based on an algorithm proposed by one of the authors. Moreover, a model of the oscillatory component of the interaction of inner boundaries of different scales is proposed. In the context of our report, the concepts of lacunarity and effective dimension (introduced by B. Mandelbrot) are discussed as effective tools of mathematical modeling.
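Lacunarity, mentioned above as a modeling tool, is commonly computed with the gliding-box algorithm: Λ(r) = ⟨M²⟩/⟨M⟩², where M is the occupied mass inside a box of side r slid over the pattern. A sketch for binary grids:

```python
def lacunarity(grid, box):
    """Gliding-box lacunarity of a binary grid: second moment of the
    box mass over the squared first moment, Lambda(r) = <M^2>/<M>^2."""
    rows, cols = len(grid), len(grid[0])
    masses = []
    for i in range(rows - box + 1):
        for j in range(cols - box + 1):
            masses.append(sum(grid[i + di][j + dj]
                              for di in range(box) for dj in range(box)))
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)

uniform = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]     # translationally invariant
gappy = [[1, 0, 0], [0, 0, 0], [0, 0, 1]]       # clustered occupied sites
```

A translationally uniform pattern gives Λ = 1; values above 1 indicate gaps and clustering, which is the "texture of holes" that distinguishes patterns with equal fractal dimension.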
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
NASA Astrophysics Data System (ADS)
Capone, Cristiano; Mattia, Maurizio
2017-01-01
Neural field models are powerful tools to investigate the richness of spatiotemporal activity patterns like waves and bumps, emerging from the cerebral cortex. Understanding how spontaneous and evoked activity is related to the structure of underlying networks is of central interest to unfold how information is processed by these systems. Here we focus on the interplay between local properties like input-output gain function and recurrent synaptic self-excitation of cortical modules, and nonlocal intermodular synaptic couplings that together define a multiscale neural field. In this framework, we work out analytic expressions for the wave speed and the stochastic diffusion of propagating fronts, uncovering the existence of an optimal balance between local and nonlocal connectivity that minimizes the fluctuations of the activation front propagation. Incorporating an activity-dependent adaptation of local excitability further highlights the independent roles that local and nonlocal connectivity play in modulating the speed of propagation of the activation and silencing wavefronts, respectively. Spatial inhomogeneities in local excitability give rise to a novel hysteresis phenomenon, whereby waves traveling in opposite directions through the same location propagate at different speeds. Taken together, these results provide insights into the multiscale organization of brain slow waves measured during deep sleep and anesthesia.
Annoyance caused by aircraft en route noise
NASA Astrophysics Data System (ADS)
McCurdy, David A.
1992-03-01
A laboratory experiment was conducted to quantify the annoyance response of people on the ground to en route noise generated by aircraft at cruise conditions. The en route noises were ground-level recordings of eight advanced turboprop aircraft flyovers and six conventional turbofan flyovers. The eight advanced turboprop en route noises represented the NASA Propfan Test Assessment aircraft operating at different combinations of altitude, aircraft Mach number, and propeller tip speed. The conventional turbofan en route noises represented six different commercial airliners. The overall durations of the en route noises varied from approximately 40 to 160 sec. In the experiment, 32 subjects judged the annoyance of the en route noises as well as recordings of the takeoff and landing noises of each of 5 conventional turboprop and 5 conventional turbofan aircraft. Each of the noises was presented at three sound pressure levels to the subjects in an anechoic listening room. Analysis of the judgments found small differences in annoyance between three combinations of aircraft type and operation. Current tone and duration corrections did not significantly improve en route annoyance prediction. The optimum duration-correction magnitude for en route noise was approximately 1 dB per doubling of effective duration.
Annoyance caused by aircraft en route noise
NASA Technical Reports Server (NTRS)
Mccurdy, David A.
1992-01-01
A laboratory experiment was conducted to quantify the annoyance response of people on the ground to en route noise generated by aircraft at cruise conditions. The en route noises were ground-level recordings of eight advanced turboprop aircraft flyovers and six conventional turbofan flyovers. The eight advanced turboprop en route noises represented the NASA Propfan Test Assessment aircraft operating at different combinations of altitude, aircraft Mach number, and propeller tip speed. The conventional turbofan en route noises represented six different commercial airliners. The overall durations of the en route noises varied from approximately 40 to 160 sec. In the experiment, 32 subjects judged the annoyance of the en route noises as well as recordings of the takeoff and landing noises of each of 5 conventional turboprop and 5 conventional turbofan aircraft. Each of the noises was presented at three sound pressure levels to the subjects in an anechoic listening room. Analysis of the judgments found small differences in annoyance between three combinations of aircraft type and operation. Current tone and duration corrections did not significantly improve en route annoyance prediction. The optimum duration-correction magnitude for en route noise was approximately 1 dB per doubling of effective duration.
Evaluation of ride quality prediction methods for operational military helicopters
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.
Quantifying fluctuations of resting state networks using arterial spin labeling perfusion MRI
Varma, Gopal; Scheidegger, Rachel; Alsop, David C
2015-01-01
Blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) has been widely used to investigate spontaneous low-frequency signal fluctuations across brain resting state networks. However, BOLD only provides relative measures of signal fluctuations. Arterial Spin Labeling (ASL) MRI holds great potential for quantitative measurements of resting state network fluctuations. This study systematically quantified signal fluctuations of the large-scale resting state networks using ASL data from 20 healthy volunteers by separating them from global signal fluctuations and fluctuations caused by residual noise. Global ASL signal fluctuation was 7.59% ± 1.47% relative to the ASL baseline perfusion. Fluctuations of seven detected resting state networks varied from 2.96% ± 0.93% to 6.71% ± 2.35%. Fluctuations of networks and residual noise were 6.05% ± 1.18% and 6.78% ± 1.16% using 4-mm resolution ASL data smoothed with a 6-mm Gaussian kernel. However, network fluctuations were reduced by 7.77% ± 1.56% while residual noise fluctuation was markedly reduced by 39.75% ± 2.90% when a 12-mm smoothing kernel was applied to the ASL data. Therefore, global and network fluctuations are the dominant structured noise sources in ASL data. Quantitative measurements of resting state networks may enable improved noise reduction and provide insights into the function of healthy and diseased brain. PMID:26661226
Quantifying fluctuations of resting state networks using arterial spin labeling perfusion MRI.
Dai, Weiying; Varma, Gopal; Scheidegger, Rachel; Alsop, David C
2016-03-01
Blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) has been widely used to investigate spontaneous low-frequency signal fluctuations across brain resting state networks. However, BOLD only provides relative measures of signal fluctuations. Arterial Spin Labeling (ASL) MRI holds great potential for quantitative measurements of resting state network fluctuations. This study systematically quantified signal fluctuations of the large-scale resting state networks using ASL data from 20 healthy volunteers by separating them from global signal fluctuations and fluctuations caused by residual noise. Global ASL signal fluctuation was 7.59% ± 1.47% relative to the ASL baseline perfusion. Fluctuations of seven detected resting state networks varied from 2.96% ± 0.93% to 6.71% ± 2.35%. Fluctuations of networks and residual noise were 6.05% ± 1.18% and 6.78% ± 1.16% using 4-mm resolution ASL data smoothed with a 6-mm Gaussian kernel. However, network fluctuations were reduced by 7.77% ± 1.56% while residual noise fluctuation was markedly reduced by 39.75% ± 2.90% when a 12-mm smoothing kernel was applied to the ASL data. Therefore, global and network fluctuations are the dominant structured noise sources in ASL data. Quantitative measurements of resting state networks may enable improved noise reduction and provide insights into the function of healthy and diseased brain. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Allec, N.; Abbaszadeh, S.; Scott, C. C.; Lewin, J. M.; Karim, K. S.
2012-12-01
In contrast-enhanced mammography (CEM), the dual-energy dual-exposure technique, which can leverage existing conventional mammography infrastructure, relies on acquiring the low- and high-energy images using two separate exposures. The finite time between image acquisition leads to motion artifacts in the combined image. Motion artifacts can lead to greater anatomical noise in the combined image due to increased mismatch of the background tissue in the images to be combined, however the impact has not yet been quantified. In this study we investigate a method to include motion artifacts in the dual-energy noise and performance analysis. The motion artifacts are included via an extended cascaded systems model. To validate the model, noise power spectra of a previous dual-energy clinical study are compared to that of the model. The ideal observer detectability is used to quantify the effect of motion artifacts on tumor detectability. It was found that the detectability can be significantly degraded when motion is present (e.g., detectability of 2.5 mm radius tumor decreased by approximately a factor of 2 for translation motion on the order of 1000 μm). The method presented may be used for a more comprehensive theoretical noise and performance analysis and fairer theoretical performance comparison between dual-exposure techniques, where motion artifacts are present, and single-exposure techniques, where low- and high-energy images are acquired simultaneously and motion artifacts are absent.
Allec, N; Abbaszadeh, S; Scott, C C; Lewin, J M; Karim, K S
2012-12-21
In contrast-enhanced mammography (CEM), the dual-energy dual-exposure technique, which can leverage existing conventional mammography infrastructure, relies on acquiring the low- and high-energy images using two separate exposures. The finite time between image acquisition leads to motion artifacts in the combined image. Motion artifacts can lead to greater anatomical noise in the combined image due to increased mismatch of the background tissue in the images to be combined, however the impact has not yet been quantified. In this study we investigate a method to include motion artifacts in the dual-energy noise and performance analysis. The motion artifacts are included via an extended cascaded systems model. To validate the model, noise power spectra of a previous dual-energy clinical study are compared to that of the model. The ideal observer detectability is used to quantify the effect of motion artifacts on tumor detectability. It was found that the detectability can be significantly degraded when motion is present (e.g., detectability of 2.5 mm radius tumor decreased by approximately a factor of 2 for translation motion on the order of 1000 μm). The method presented may be used for a more comprehensive theoretical noise and performance analysis and fairer theoretical performance comparison between dual-exposure techniques, where motion artifacts are present, and single-exposure techniques, where low- and high-energy images are acquired simultaneously and motion artifacts are absent.
Seismoelectric data processing for surface surveys of shallow targets
Haines, S.S.; Guitton, A.; Biondi, B.
2007-01-01
The utility of the seismoelectric method relies on the development of methods to extract the signal of interest from background and source-generated coherent noise that may be several orders of magnitude stronger. We compare data processing approaches to develop a sequence of preprocessing and signal/noise separation and to quantify the noise level from which we can extract signal events. Our preferred sequence begins with the removal of power line harmonic noise and the use of frequency filters to minimize random and source-generated noise. Mapping to the linear Radon domain with an inverse process incorporating a sparseness constraint provides good separation of signal from noise, though it is ineffective on noise that shows the same dip as the signal. Similarly, the seismoelectric signal and noise do not separate cleanly in the Fourier domain, so f-k filtering cannot remove all of the source-generated noise and it also disrupts signal amplitude patterns. We find that prediction-error filters provide the most effective method to separate signal and noise, while also preserving amplitude information, assuming that adequate pattern models can be determined for the signal and noise. These Radon-domain and prediction-error-filter methods successfully separate signal from <33 dB stronger noise in our test data. © 2007 Society of Exploration Geophysicists.
Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.
Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian
2010-03-01
In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also made using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to, or no more than 3 dB below, the level of the urban noises.
Wainwright, Haruko M; Seki, Akiyuki; Chen, Jinsong; Saito, Kimiaki
2017-02-01
This paper presents a multiscale data integration method to estimate the spatial distribution of air dose rates in the regional scale around the Fukushima Daiichi Nuclear Power Plant. We integrate various types of datasets, such as ground-based walk and car surveys, and airborne surveys, all of which have different scales, resolutions, spatial coverage, and accuracy. This method is based on geostatistics to represent spatial heterogeneous structures, and also on Bayesian hierarchical models to integrate multiscale, multi-type datasets in a consistent manner. The Bayesian method allows us to quantify the uncertainty in the estimates, and to provide the confidence intervals that are critical for robust decision-making. Although this approach is primarily data-driven, it has great flexibility to include mechanistic models for representing radiation transport or other complex correlations. We demonstrate our approach using three types of datasets collected at the same time over Fukushima City in Japan: (1) coarse-resolution airborne surveys covering the entire area, (2) car surveys along major roads, and (3) walk surveys in multiple neighborhoods. Results show that the method can successfully integrate three types of datasets and create an integrated map (including the confidence intervals) of air dose rates over the domain in high resolution. Moreover, this study provides us with various insights into the characteristics of each dataset, as well as radiocaesium distribution. In particular, the urban areas show high heterogeneity in the contaminant distribution due to human activities, as well as large discrepancies among the different surveys arising from that heterogeneity. Copyright © 2016 Elsevier Ltd. All rights reserved.
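The full Bayesian hierarchical geostatistical model is beyond a short sketch, but its core idea of weighting surveys by their accuracy can be illustrated at a single location with a precision-weighted (inverse-variance) combination. Everything below, including the independent-Gaussian-error assumption and the interface, is our simplification, not the authors' model:

```python
import numpy as np

def fuse(estimates, variances):
    """Toy precision-weighted fusion of co-located dose-rate estimates
    from surveys of differing accuracy (e.g. airborne, car, walk).
    Returns the posterior mean and variance under independent Gaussian
    errors; a far simpler stand-in for a Bayesian hierarchical model."""
    w = 1.0 / np.asarray(variances, dtype=float)   # precisions
    mean = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sum(w)
```

For example, fusing a coarse airborne estimate of 1.0 (variance 0.04) with a walk-survey estimate of 1.2 (variance 0.01) pulls the result toward the more accurate survey while shrinking the combined variance below either input.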
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
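The modularity score that the maximization algorithm optimizes can be computed directly from the adjacency matrix. The numpy sketch below (our own illustration, not the authors' pipeline) shows the quantity whose behavior the study measures; targeted node removal then amounts to deleting a node's row and column before detection:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition of an undirected graph.
    A is the (symmetric) adjacency matrix; labels[i] is the community
    of node i."""
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                    # node degrees
    two_m = k.sum()                      # twice the edge count
    same = labels[:, None] == labels[None, :]
    return np.sum((A - np.outer(k, k) / two_m) * same) / two_m
```

On a graph of two triangles joined by a bridge edge, the correct two-community partition scores Q = 5/14 ≈ 0.357, while the trivial one-community partition scores 0.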
Refined Composite Multiscale Dispersion Entropy and its Application to Biomedical Signals.
Azami, Hamed; Rostaghi, Mostafa; Abasolo, Daniel; Escudero, Javier
2017-12-01
We propose a novel complexity measure to overcome the deficiencies of the widespread and powerful multiscale entropy (MSE), including that MSE values may be undefined for short signals and that MSE is too slow for real-time applications. We introduce multiscale dispersion entropy (MDE) as a very fast and powerful method to quantify the complexity of signals. MDE is based on our recently developed dispersion entropy (DisEn), which has a computation cost of O(N), compared with O(N²) for the sample entropy used in MSE. We also propose the refined composite MDE (RCMDE) to improve the stability of MDE. We evaluate MDE, RCMDE, and refined composite MSE (RCMSE) on synthetic signals and three biomedical datasets. The MDE, RCMDE, and RCMSE methods show similar results, although the MDE and RCMDE are faster, lead to more stable results, and discriminate different types of physiological signals better than MSE and RCMSE. For noisy short and long time series, MDE and RCMDE are noticeably more stable than MSE and RCMSE, respectively. For short signals, MDE and RCMDE, unlike MSE and RCMSE, do not lead to undefined values. The proposed MDE and RCMDE are significantly faster than MSE and RCMSE, especially for long signals, and lead to larger differences between physiological conditions known to alter the complexity of the physiological recordings. MDE and RCMDE are expected to be useful for the analysis of physiological signals thanks to their ability to distinguish different types of dynamics. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/1982.
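The coarse-graining that distinguishes the composite variants from plain MSE is easy to state in code: at scale τ, plain MSE uses one coarse-grained series, while the composite versions average the entropy over all τ starting offsets. The sketch below uses a plain sample entropy for brevity (the paper's dispersion entropy is a different, faster estimator); names and defaults are ours:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Plain SampEn; r is the tolerance as a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def pairs(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= tol) - len(t)) / 2   # exclude self-matches
    B, A = pairs(m), pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def composite_mse(x, scale):
    """Composite MSE at one scale: average SampEn over all `scale`
    coarse-graining offsets instead of the single offset used by MSE."""
    x = np.asarray(x, dtype=float)
    vals = []
    for k in range(scale):
        n = (len(x) - k) // scale
        cg = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(cg))
    return float(np.mean(vals))
```

Averaging over offsets uses all the data at every scale, which is why the composite estimates are more stable for short series.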
Antonelli, Cristian; Mecozzi, Antonio; Shtaif, Mark; Winzer, Peter J
2015-02-09
Mode-dependent loss (MDL) is a major factor limiting the achievable information rate in multiple-input multiple-output space-division multiplexed systems. In this paper we show that its impact on system performance, which we quantify in terms of the capacity reduction relative to a reference MDL-free system, may depend strongly on the operation of the inline optical amplifiers. This dependency is particularly strong in low mode-count systems. In addition, we discuss ways in which the signal-to-noise ratio of the MDL-free reference system can be defined and quantify the differences in the predicted capacity loss. Finally, we stress the importance of correctly accounting for the effect of MDL on the accumulation of amplification noise.
Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.
2010-01-01
In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.
Measurements of atmospheric turbulence effects on tail rotor acoustics
NASA Technical Reports Server (NTRS)
Hagen, Martin J.; Yamauchi, Gloria K.; Signor, David B.; Mosher, Marianne
1994-01-01
Results from an outdoor hover test of a full-scale Lynx tail rotor are presented. The investigation was designed to further the understanding of the acoustics of an isolated tail rotor hovering out-of-ground effect in atmospheric turbulence, without the effects of the main rotor wake or other helicopter components. Measurements include simultaneous rotor performance, noise, inflow, and far-field atmospheric turbulence. Results with grid-generated inflow turbulence are also presented. The effects of atmospheric turbulence ingestion on rotor noise are quantified. In contradiction to current theories, increasing rotor inflow and rotor thrust were found to increase turbulence ingestion noise. This is the final report of Task 13A--Helicopter Tail Rotor Noise, of the NASA/United Kingdom Defense Research Agency cooperative Aeronautics Research Program.
Improved surface-wave retrieval from ambient seismic noise by multi-dimensional deconvolution
NASA Astrophysics Data System (ADS)
Wapenaar, Kees; Ruigrok, Elmer; van der Neut, Joost; Draganov, Deyan
2011-01-01
The methodology of surface-wave retrieval from ambient seismic noise by crosscorrelation relies on the assumption that the noise field is equipartitioned. Deviations from equipartitioning degrade the accuracy of the retrieved surface-wave Green's function. A point-spread function, derived from the same ambient noise field, quantifies the smearing in space and time of the virtual source of the Green's function. By multidimensionally deconvolving the retrieved Green's function with the point-spread function, the virtual source becomes better focused in space and time, and hence the accuracy of the retrieved surface-wave Green's function may improve significantly. We illustrate this with a numerical example and discuss the advantages and limitations of this new methodology.
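In one dimension the deconvolution step reduces to a regularized spectral division of the retrieved response by the point-spread function. The toy numpy sketch below works under that simplification (the actual method operates multidimensionally over receiver arrays), with a water-level term to stabilize the division; all names and the `eps` parameter are ours:

```python
import numpy as np

def deconvolve_by_psf(retrieved, psf, eps=1e-3):
    """Toy 1-D analogue of multidimensional deconvolution: divide the
    retrieved response by the point-spread function in the frequency
    domain, with water-level regularization eps (relative to the PSF's
    spectral peak)."""
    n = len(retrieved)
    C = np.fft.fft(retrieved, n)
    P = np.fft.fft(psf, n)
    wl = eps * np.max(np.abs(P)) ** 2
    G = C * np.conj(P) / (np.abs(P) ** 2 + wl)
    return np.real(np.fft.ifft(G))
```

Blurring a two-spike "Green's function" with a Gaussian PSF and deconvolving recovers sharp arrivals at the original times, which is the refocusing of the virtual source described above.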
Quantifying Errors in Jet Noise Research Due to Microphone Support Reflection
NASA Technical Reports Server (NTRS)
Nallasamy, Nambi; Bridges, James
2002-01-01
The reflection coefficient of a microphone support structure used in jet noise testing is documented through tests performed in the anechoic AeroAcoustic Propulsion Laboratory. The tests involve the acquisition of acoustic data from a microphone mounted in the support structure while noise is generated from a known broadband source. The ratio of reflected signal amplitude to the original signal amplitude is determined by performing an auto-correlation function on the data. The documentation of the reflection coefficients is one component of the validation of jet noise data acquired using the given microphone support structure. Finally, two forms of acoustic material were applied to the microphone support structure to determine their effectiveness in reducing reflections which give rise to bias errors in the microphone measurements.
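For a broadband signal plus a single delayed reflection, x[n] = s[n] + r·s[n−d], the normalized autocorrelation peaks at lag d with height ≈ r/(1+r²) ≈ r for small r, which is presumably how the amplitude ratio was extracted. A hypothetical sketch (the `min_lag` guard, which skips the zero-lag energy peak, is our addition):

```python
import numpy as np

def reflection_coefficient(x, min_lag=10):
    """Estimate the reflected-to-direct amplitude ratio of a broadband
    recording from its normalized autocorrelation. Returns the peak
    value (approximately the reflection coefficient for weak echoes)
    and the lag at which it occurs."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..N-1
    ac = ac / ac[0]                                     # normalize by energy
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return ac[lag], lag
```

The lag of the peak also gives the echo delay, i.e. the extra path length from the reflecting structure divided by the speed of sound.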
Ellingson, Roger M.; Gallun, Frederick J.; Bock, Guillaume
2015-01-01
It can be problematic to measure stationary acoustic sound pressure level in any environment when the target level approaches or lies below the minimum measurable sound pressure level of the measurement system itself. This minimum measurable level, referred to as the inherent measurement system noise floor, is generally established by noise emission characteristics of measurement system components such as microphones, preamplifiers, and other system circuitry. In this paper, methods are presented and shown to be accurate for measuring stationary levels within 20 dB above and below this system noise floor. Methodology includes (1) measuring inherent measurement system noise, (2) subtractive energy-based, inherent noise adjustment of levels affected by system noise floor, and (3) verifying accuracy of the inherent noise adjustment technique. While generalizable to other purposes, the techniques presented here were specifically developed to quantify ambient noise levels in very quiet rooms used to evaluate free-field human hearing thresholds. Results obtained applying the methods to objectively measure and verify the ambient noise level in an extremely quiet room, using various measurement system noise floors and analysis bandwidths, are presented and discussed. The verified results demonstrate the adjustment method can accurately extend measurement range to 20 dB below the measurement system noise floor, and how measurement system frequency bandwidth can affect accuracy of reported noise levels. PMID:25786932
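The subtractive energy-based adjustment in step (2) is the standard decibel energy subtraction, L_true = 10·log10(10^(L_meas/10) − 10^(L_noise/10)): measured and noise-floor levels are converted to energy, subtracted, and converted back. A minimal sketch (the interface is ours):

```python
import math

def noise_adjusted_level(measured_db, noise_floor_db):
    """Remove the measurement system's inherent noise floor from a
    measured sound pressure level (both in dB). Raises if the measured
    level does not exceed the noise floor, since no positive source
    energy remains in that case."""
    residual = 10 ** (measured_db / 10) - 10 ** (noise_floor_db / 10)
    if residual <= 0:
        raise ValueError("measured level must exceed the noise floor")
    return 10 * math.log10(residual)
```

A useful sanity check: a source exactly at the noise-floor level reads 3 dB above the floor, and the adjustment recovers the true source level.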
Seismically observed seiching in the Panama Canal
McNamara, D.E.; Ringler, A.T.; Hutt, C.R.; Gee, L.S.
2011-01-01
A large portion of the seismic noise spectrum is dominated by water wave energy coupled into the solid Earth. Distinct mechanisms of water wave induced ground motions are distinguished by their spectral content. For example, cultural noise is generally <1 s period, microseisms dominate the seismic spectrum from periods of 2 to 20 s, and the Earth's "hum" is in the range of 50 to 600 s. We show that in a large lake in the Panama Canal there is an additional source of long-period noise generated by standing water waves, seiches, induced by disturbances such as passing ships and wind pressure. We compare seismic waveforms to water level records and relate these observations to changes in local tilt and gravity due to an oscillating seiche. The methods and observations discussed in this paper provide a first step toward quantifying the impact of water inundation as recorded by seismometers. This type of quantified understanding of water inundation will help in future estimates of similar phenomena such as the seismic observations of tsunami impact. Copyright 2011 by the American Geophysical Union.
A novel coupling of noise reduction algorithms for particle flow simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.
2016-09-15
Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve a better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as phase separation phenomena. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in the wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
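A minimal stand-in for the hybrid can be built from an SVD-based POD truncation followed by wavelet shrinkage of the retained temporal coefficients. The sketch below uses a one-level Haar transform and soft thresholding; the paper's actual wavelet choice, threshold rule, and time-windowing are not reproduced here, and all names are ours:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator used for wavelet shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(sig, thresh):
    """One-level Haar decomposition, shrink the detail coefficients,
    then invert. Assumes even length for simplicity."""
    n = len(sig) // 2 * 2
    a = (sig[0:n:2] + sig[1:n:2]) / np.sqrt(2)          # approximation
    d = soft((sig[0:n:2] - sig[1:n:2]) / np.sqrt(2), thresh)
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return np.concatenate([out, sig[n:]])

def wav_in_pod(snapshots, rank, thresh):
    """WAVinPOD-style filter sketch: keep the leading POD modes, then
    wavelet-shrink each retained temporal coefficient series."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    coeffs = U[:, :rank] * s[:rank]                     # temporal coefficients
    coeffs = np.column_stack([haar_denoise(c, thresh) for c in coeffs.T])
    return coeffs @ Vt[:rank]
```

The POD truncation removes noise orthogonal to the dominant modes, and the wavelet stage removes the noise that leaks into the retained coefficients, mirroring the division of labor described above.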
NASA Astrophysics Data System (ADS)
To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian
2008-09-01
Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability needs to be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full system testing and limited sub-system testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion; material instability due to radiation and thermo-mechanical cycling; and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans the length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics and continuum mechanics software, and state-of-the-art model identification strategies in which atomistic properties are calibrated by quantum calculations.
We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain that need them, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified. A UQ methodology that determines the quality of predictive information available from experimental measurements, and packages the information in a form suitable for UQ at various scales, needs to be developed. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed. The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power and advanced verification, validation, and UQ methodologies.
NASA Astrophysics Data System (ADS)
Simon, Patrick; Schneider, Peter
2017-08-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We have employed general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. The GLAM ellipticity has useful properties for any chosen weight profile: the weighted ellipticity is identical to that of isophotes of elliptical images, and in the absence of noise and pixellation it is always an unbiased estimator of reduced shear. We show that moment-based techniques, adaptive or unweighted, are similar to a model-based approach in the sense that they can be seen as an imperfect fit of an elliptical profile to the image. Due to residuals in the fit, moment-based estimates of ellipticities are prone to underfitting bias when inferred from observed images. The estimation is fundamentally limited mainly by pixellation, which destroys information on the original, pre-seeing image. We give an optimised estimator for the pre-seeing GLAM ellipticity and quantify its bias for noise-free images. To deal with images where pixel noise is prominent, we consider a Bayesian approach to infer GLAM ellipticity where, similar to the noise-free case, the ellipticity posterior can be inconsistent with the true ellipticity if we do not properly account for our ignorance about fit residuals. This underfitting bias, quantified in the paper, does not vary with the overall noise level but changes with the pre-seeing brightness profile and the correlation or heterogeneity of pixel noise over the image. Furthermore, when inferring a constant ellipticity or, more relevantly, constant shear from a source sample with a distribution of intrinsic properties (sizes, centroid positions, intrinsic shapes), an additional, now noise-dependent bias arises towards low signal-to-noise if incorrect prior densities for the intrinsic properties are used.
We discuss the origin of this prior bias. With regard to a fully-Bayesian lensing analysis, we point out that passing tests with source samples subject to constant shear may not be sufficient for an analysis of sources with varying shear.
NASA Astrophysics Data System (ADS)
Horstemeyer, M. F.
This review of multiscale modeling covers a brief history of the various multiscale methodologies related to solid materials and the associated experimental influences, the influence of multiscale modeling on different disciplines, and some examples of multiscale modeling in the design of structural components. Although computational multiscale modeling methodologies were developed in the late twentieth century, the fundamental notions of multiscale modeling have been around since da Vinci studied different sizes of ropes. The recent rapid growth in multiscale modeling is the result of the confluence of parallel computing power, experimental capabilities to characterize structure-property relations down to the atomic level, and theories that admit multiple length scales. Research focused on multiscale modeling has become ubiquitous, reaching across disciplines (solid mechanics, fluid mechanics, materials science, physics, mathematics, biology, and chemistry), regions of the world (most continents), and length scales (from atoms to autos).
A multi-band environment-adaptive approach to noise suppression for cochlear implants.
Saki, Fatemeh; Mirzahasanloo, Taher; Kehtarnavaz, Nasser
2014-01-01
This paper presents an improved environment-adaptive noise suppression solution for the cochlear implant speech processing pipeline. This improvement is achieved by using a multi-band data-driven approach in place of a previously developed single-band data-driven approach. Seven commonly encountered noisy environments (street, car, restaurant, mall, bus, pub and train) are considered to quantify the improvement. The results obtained indicate an improvement of about 10% in speech quality measures.
Quantifying Hurricane Wind Speed with Undersea Sound
2006-06-01
even detect hurricanes using practical linear arrays at long ranges in these environments. 2.6 Conclusions We have shown that the wind-generated noise...application in other seismic research where a sensor on land measures signals generated by sources at sea. For example undersea earthquakes [124] and...at 100 Hz for a 64-element λ/2-spaced horizontal broadside array as a function of steering angle for hurricane-generated noise in the North Atlantic
Asymmetric noise-induced large fluctuations in coupled systems
NASA Astrophysics Data System (ADS)
Schwartz, Ira B.; Szwaykowska, Klimka; Carr, Thomas W.
2017-10-01
Networks of interacting, communicating subsystems are common in many fields, from ecology, biology, and epidemiology to engineering and robotics. In the presence of noise and uncertainty, interactions between the individual components can lead to unexpected complex system-wide behaviors. In this paper, we consider a generic model of two weakly coupled dynamical systems, and we show how noise in one part of the system is transmitted through the coupling interface. Working synergistically with the coupling, the noise on one system drives a large fluctuation in the other, even when there is no noise in the second system. Moreover, the large fluctuation happens while the first system exhibits only small random oscillations. Uncertainty effects are quantified by showing how the characteristic time scales of noise-induced switching scale as a function of the coupling between the two parts of the system. In addition, our results show that the probability of switching in the noise-free subsystem scales inversely as the square of the reduced noise intensity amplitude, making such switching an extremely rare event. Our results showing the interplay between transmitted noise and coupling are also confirmed through simulations, which agree quite well with analytic theory.
Impulsive noise of printers: measurement metrics and their subjective correlation
NASA Astrophysics Data System (ADS)
Baird, Terrence; Otto, Norman; Bray, Wade; Stephan, Mike
2005-09-01
In the office and home computing environments, printer impulsive noise has become a significant contributor to user perceived quality or lack thereof, and can affect the user's comfort level and ability to concentrate. Understanding and quantifying meaningful metrics for printer impulsivity is becoming an increasingly important goal for printer manufacturers. Several methods exist in international standards for measuring the impulsivity of noise. For information technology equipment (ITE), the method for detection of impulsive noise is provided in ECMA-74 and ISO 7779. However, there is a general acknowledgement that the current standard method of determining impulsivity by simply measuring A-weighted sound pressure level (SPL) with the impulsive time weighting, I, applied is inadequate to characterize impulsive noise and ultimately to predict user satisfaction and acceptance. In recent years, there has been a variety of new measurement methods evaluated for impulsive noise for both environmental and machinery noise. This paper reviews several of the available metrics, applies the metrics to several printer impulsive noise sources, and makes an initial assessment of their correlation to the subjective impressions of users. It is a review and continuation of the work presented at InterNoise 2005 (Baird, Bray, and Otto).
Wayne, Peter M.; Gow, Brian J.; Costa, Madalena D.; Peng, C.-K.; Lipsitz, Lewis A.; Hausdorff, Jeffrey M.; Davis, Roger B.; Walsh, Jacquelyn N.; Lough, Matthew; Novak, Vera; Yeh, Gloria Y.; Ahn, Andrew C.; Macklin, Eric A.; Manor, Brad
2014-01-01
Background Diminished control of standing balance, traditionally indicated by greater postural sway magnitude and speed, is associated with falls in older adults. Tai Chi (TC) is a multisystem intervention that reduces fall risk, yet its impact on sway measures varies considerably. We hypothesized that TC improves the integrated function of multiple control systems influencing balance, quantifiable by the multi-scale “complexity” of postural sway fluctuations. Objectives To evaluate both traditional and complexity-based measures of sway to characterize the short- and potential long-term effects of TC training on postural control and the relationships between sway measures and physical function in healthy older adults. Methods A cross-sectional comparison of standing postural sway in healthy TC-naïve and TC-expert (24.5±12 yrs experience) adults. TC-naïve participants then completed a 6-month, two-arm, wait-list randomized clinical trial of TC training. Postural sway was assessed before and after the training during standing on a force-plate with eyes-open (EO) and eyes-closed (EC). Anterior-posterior (AP) and medio-lateral (ML) sway speed, magnitude, and complexity (quantified by multiscale entropy) were calculated. Single-legged standing time and Timed-Up-and-Go tests characterized physical function. Results At baseline, compared to TC-naïve adults (n = 60, age 64.5±7.5 yrs), TC-experts (n = 27, age 62.8±7.5 yrs) exhibited greater complexity of sway in the AP EC (P = 0.023), ML EO (P<0.001), and ML EC (P<0.001) conditions. Traditional measures of sway speed and magnitude were not significantly lower among TC-experts. Intention-to-treat analyses indicated no significant effects of short-term TC training; however, increases in AP EC and ML EC complexity amongst those randomized to TC were positively correlated with practice hours (P = 0.044, P = 0.018). Long- and short-term TC training were positively associated with physical function. 
Conclusion Multiscale entropy offers a complementary approach to traditional COP measures for characterizing sway during quiet standing, and may be more sensitive to the effects of TC in healthy adults. Trial Registration ClinicalTrials.gov NCT01340365 PMID:25494333
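Multiscale entropy, as used in these sway analyses, consists of coarse-graining the series at each scale and computing sample entropy on the result. The sketch below is a simplified, stdlib-only illustration (naive O(N²) template matching and an absolute tolerance r, rather than one scaled by the series' standard deviation); it is not the exact algorithm or parameter choice of the study.

```python
import math

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (the MSE coarse-graining step)."""
    return [sum(x[i * scale:(i + 1) * scale]) / scale
            for i in range(len(x) // scale)]

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m within
    tolerance r (Chebyshev distance); A counts pairs of length m + 1."""
    n = len(x)

    def matches(length):
        templates = [x[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a > 0 and b > 0 else float('inf')

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.15):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

A perfectly regular series yields zero entropy at every scale; complex physiologic signals are distinguished by sustaining non-trivial entropy across scales rather than by their value at scale 1 alone.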
How thin barrier metal can be used to prevent Co diffusion in the modern integrated circuits?
NASA Astrophysics Data System (ADS)
Dixit, Hemant; Konar, Aniruddha; Pandey, Rajan; Ethirajan, Tamilmani
2017-11-01
In modern integrated circuits (ICs), billions of transistors are connected to each other via thin metal layers (e.g. copper, cobalt) known as interconnects. At elevated process temperatures, inter-diffusion of atomic species can occur among these metal layers, causing sub-optimal performance of interconnects, which may lead to the failure of an IC. Thus, typically a thin barrier metal layer is used to prevent the inter-diffusion of atomic species within interconnects. For ICs with sub-10 nm transistors (10 nm technology node), the design rule (thickness scaling) demands the thinnest possible barrier layer. Therefore, here we investigate the critical thickness of a titanium-nitride (TiN) barrier that can prevent cobalt diffusion, using multi-scale modeling and simulations. First, we compute the Co diffusion barrier in crystalline and amorphous TiN with the nudged elastic band method within first-principles density functional theory simulations. Then, using the calculated activation energy barriers, we quantify the Co diffusion length in the TiN metal layer with the help of kinetic Monte Carlo simulations. Such a multi-scale modeling approach yields the critical thickness of the metal layer sufficient to prevent Co diffusion in IC interconnects. We obtain a maximum diffusion length of 2 nm for a typical thermal annealing process at 400 °C for 30 min. Our study thus provides useful physical insights into Co diffusion in the TiN layer and further quantifies the critical thickness (~2 nm) to which the metal barrier layer can be thinned down for sub-10 nm ICs.
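The two modeling layers described above, first-principles activation energies feeding a kinetic estimate of diffusion length, can be caricatured with an Arrhenius hop rate and a random-walk length. All numbers below (attempt frequency, barrier height, hop distance) are placeholder assumptions for illustration, not values computed in the paper, and a kinetic Monte Carlo treatment would replace the closed-form random walk with explicit hop sequences.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(nu0_hz, barrier_ev, temp_k):
    """Thermally activated hop rate k = nu0 * exp(-Ea / (kB * T))."""
    return nu0_hz * math.exp(-barrier_ev / (K_B * temp_k))

def diffusion_length(hop_rate_hz, hop_dist_m, time_s):
    """Random-walk estimate of penetration depth: L ~ a * sqrt(k * t)."""
    return hop_dist_m * math.sqrt(hop_rate_hz * time_s)

# Placeholder inputs: nu0 = 1e13 Hz, Ea = 1.9 eV, hop distance a = 2.5 Å,
# for a 400 °C (673 K) anneal lasting 30 min.
k = arrhenius_rate(1e13, 1.9, 673.0)
L = diffusion_length(k, 2.5e-10, 1800.0)
```

With these assumed inputs the estimate lands in the nanometre range, the same order of magnitude as the ~2 nm critical thickness quoted in the abstract.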
NASA Astrophysics Data System (ADS)
Hardebol, N. J.; Maier, C.; Nick, H.; Geiger, S.; Bertotti, G.; Boro, H.
2015-12-01
A fracture network arrangement is quantified across an isolated carbonate platform from outcrop and aerial imagery to address its impact on fluid flow. The network is described in terms of fracture density, orientation, and length distribution parameters. Of particular interest is the role of fracture cross connections and abutments on the effective permeability. Hence, the flow simulations explicitly account for network topology by adopting a Discrete-Fracture-and-Matrix description. The interior of the Latemar carbonate platform (Dolomites, Italy) is taken as an outcrop analogue for subsurface reservoirs of isolated carbonate build-ups that exhibit a fracture-dominated permeability. A novel element is our dual strategy of describing the fracture network through both deterministic- and stochastic-based inputs for flow simulations. The fracture geometries are captured explicitly and form a multiscale data set by integration of interpretations from outcrops, airborne imagery, and lidar. The deterministic network descriptions form the basis for descriptive rules that are diagnostic of the complex natural fracture arrangement. The fracture networks exhibit a variable degree of multitier hierarchies, with smaller fractures abutting against larger fractures at both right and oblique angles. The influence of network topology on connectivity is quantified using discrete-fracture single-phase fluid flow simulations. The simulation results show that the effective permeability of the fracture and matrix ensemble can be 50 to 400 times higher than the matrix permeability of 1.0 × 10⁻¹⁴ m². The permeability enhancement is strongly controlled by the connectivity of the fracture network. Therefore, the degree of intersecting and abutting fractures should be captured accurately from outcrops for them to be of value as analogues.
Multi-scale characterization of topographic anisotropy
NASA Astrophysics Data System (ADS)
Roy, S. G.; Koons, P. O.; Osti, B.; Upton, P.; Tucker, G. E.
2016-05-01
We present the every-direction variogram analysis (EVA) method for quantifying orientation and scale dependence of topographic anisotropy to aid in differentiation of the fluvial and tectonic contributions to surface evolution. Using multi-directional variogram statistics to track the spatial persistence of elevation values across a landscape, we calculate anisotropy as a multiscale, direction-sensitive variance in elevation between two points on a surface. Tectonically derived topographic anisotropy is associated with the three-dimensional kinematic field, which contributes (1) differential surface displacement and (2) crustal weakening along fault structures, both of which amplify processes of surface erosion. Based on our analysis, tectonic displacements dominate the topographic field at the orogenic scale, while a combination of the local displacement and strength fields are well represented at the ridge and valley scale. Drainage network patterns tend to reflect the geometry of underlying active or inactive tectonic structures due to the rapid erosion of faults and differential uplift associated with fault motion. Regions that have uniform environmental conditions and have been largely devoid of tectonic strain, such as passive coastal margins, have predominantly isotropic topography with typically dendritic drainage network patterns. Isolated features, such as stratovolcanoes, are nearly isotropic at their peaks but exhibit a concentric pattern of anisotropy along their flanks. The methods we provide can be used to successfully infer the settings of past or present tectonic regimes, and can be particularly useful in predicting the location and orientation of structural features that would otherwise elude interpretation in the field. Though we limit the scope of this paper to elevation, EVA can be used to quantify the anisotropy of any spatially variable property.
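The core statistic behind a variogram-based anisotropy measure is easy to sketch. The toy version below computes the semivariance of a gridded elevation field along only two lag directions and takes their ratio; EVA itself sweeps every direction and many lag distances, so treat this as an illustration of the statistic, not the method.

```python
def semivariance(grid, dx, dy):
    """Half the mean squared elevation difference at lag (dx, dy)
    on a regular 2-D grid given as a list of rows."""
    rows, cols = len(grid), len(grid[0])
    diffs = []
    for i in range(rows):
        for j in range(cols):
            ii, jj = i + dy, j + dx
            if 0 <= ii < rows and 0 <= jj < cols:
                diffs.append((grid[i][j] - grid[ii][jj]) ** 2)
    return 0.5 * sum(diffs) / len(diffs)

def anisotropy_ratio(grid, lag=1):
    """Along-x vs along-y semivariance: 1.0 means isotropic at this lag,
    values far from 1.0 indicate directional structure such as ridges."""
    gx = semivariance(grid, lag, 0)
    gy = semivariance(grid, 0, lag)
    return gx / gy if gy else float('inf')
```

An east-west ridge (rows of constant elevation) gives zero semivariance along x and a finite value along y, so the ratio collapses toward zero, mirroring the ridge-and-valley-scale anisotropy discussed above.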
Annoyance caused by propeller airplane flyover noise
NASA Technical Reports Server (NTRS)
Mccurdy, D. A.; Powell, C. A.
1984-01-01
Laboratory experiments were conducted to provide information on quantifying the annoyance response of people to propeller airplane noise. The items of interest were current noise metrics, tone corrections, duration corrections, critical band corrections, and the effects of engine type, operation type, maximum takeoff weight, blade passage frequency, and blade tip speed. In each experiment, 64 subjects judged the annoyance of recordings of propeller and jet airplane operations presented at D-weighted sound pressure levels of 70, 80, and 90 dB in a testing room which simulates the outdoor acoustic environment. The first experiment examined 11 propeller airplanes with maximum takeoff weights greater than or equal to 5700 kg. The second experiment examined 14 propeller airplanes weighing 5700 kg or less. Five jet airplanes were included in each experiment. For both the heavy and light propeller airplanes, perceived noise level and perceived level (Stevens Mark VII procedure) predicted annoyance better than other current noise metrics.
The Intensity, Directionality, and Statistics of Underwater Noise From Melting Icebergs
NASA Astrophysics Data System (ADS)
Glowacki, Oskar; Deane, Grant B.; Moskalik, Mateusz
2018-05-01
Freshwater fluxes from melting icebergs and glaciers are important contributors to both sea level rise and anomalies of seawater salinity in polar regions. However, the hazards encountered close to icebergs and glaciers make it difficult to quantify their melt rates directly, motivating the development of cryoacoustics as a remote sensing technique. Recent studies have shown a qualitative link between ice melting and the accompanying underwater noise, but the properties of this signal remain poorly understood. Here we examine the intensity, directionality, and temporal statistics of the underwater noise radiated by melting icebergs in Hornsund Fjord, Svalbard, using a three-element acoustic array. We present the first estimate of noise energy per unit area associated with iceberg melt and demonstrate its qualitative dependence on exposure to surface current. Finally, we show that the analysis of noise directionality and statistics makes it possible to distinguish iceberg melt from the glacier terminus melt.
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain is attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
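The noise model at issue can be illustrated by drawing correlated quadrature noise through a 2×2 Cholesky factor; the ellipticity of the resulting (I, Q) cloud is what a correlation-aware receiver can exploit. This sketch only generates and checks such noise; it does not reproduce the paper's LDPC density-evolution analysis, and all parameter values are illustrative.

```python
import math
import random

def correlated_noise_pair(sigma_i, sigma_q, rho, rng):
    """One (I, Q) noise draw with quadrature correlation rho, obtained from
    two independent standard normals via the Cholesky factor of the covariance."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return sigma_i * z1, sigma_q * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)

def sample_correlation(pairs):
    """Empirical Pearson correlation between the two quadrature components."""
    n = len(pairs)
    mi = sum(p[0] for p in pairs) / n
    mq = sum(p[1] for p in pairs) / n
    cov = sum((p[0] - mi) * (p[1] - mq) for p in pairs) / n
    si = math.sqrt(sum((p[0] - mi) ** 2 for p in pairs) / n)
    sq = math.sqrt(sum((p[1] - mq) ** 2 for p in pairs) / n)
    return cov / (si * sq)
```

Setting rho = 0 recovers circularly symmetric AWGN; |rho| near 1 produces the severely elliptic clouds for which ignoring the correlation costs the most coding gain.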
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near the short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
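The local straight-line fit near short circuit can be written as an ordinary least-squares regression: the intercept estimates Isc, and its standard error is the purely statistical fit uncertainty that, as the abstract warns, can shrink artificially as the data window widens. The paper's objective Bayesian treatment and its model-discrepancy handling are not reproduced here; this is only the classical textbook version.

```python
import math

def fit_isc(voltages, currents):
    """OLS fit of I = a + b*V to I-V points near short circuit.
    Returns (isc_estimate, isc_std_err): the intercept a and its standard
    error under the usual i.i.d. Gaussian-noise assumptions."""
    n = len(voltages)
    vm = sum(voltages) / n
    im = sum(currents) / n
    sxx = sum((v - vm) ** 2 for v in voltages)
    b = sum((v - vm) * (c - im) for v, c in zip(voltages, currents)) / sxx
    a = im - b * vm
    residuals = [c - (a + b * v) for v, c in zip(voltages, currents)]
    s2 = sum(r * r for r in residuals) / (n - 2)  # residual variance
    return a, math.sqrt(s2 * (1.0 / n + vm * vm / sxx))
```

Note the 1/n factor in the standard error: adding points always shrinks it, which is precisely why the fit uncertainty alone can understate the true, model-discrepancy-dominated uncertainty when the straight-line model stops being a valid local representation.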
NASA Astrophysics Data System (ADS)
Piao, Lin; Fu, Zuntao
2016-11-01
Cross-correlation between pairs of variables has a multi-time-scale character and can be entirely different on different time scales (changing, for example, from positive correlation to negative), as in the associations between mean air temperature and relative humidity over regions east of the Taihang mountains in China. Correctly unveiling these correlations on different time scales is therefore of great importance, since we generally do not know in advance whether the correlation varies with scale. Here, we compare two methods, Detrended Cross-Correlation Analysis (DCCA) and Pearson correlation, in quantifying scale-dependent correlations, applying them directly to raw observed records and to artificially generated sequences with known cross-correlation features. The studies show that 1) DCCA-related methods can indeed quantify scale-dependent correlations, whereas the Pearson method cannot; 2) the correlation features obtained from DCCA-related methods are robust to contaminating noise, whereas the results from the Pearson method are sensitive to noise; and 3) the scale-dependent correlation results from DCCA-related methods are robust to the amplitude ratio between slow and fast components, while the Pearson method may be sensitive to this ratio. All these features indicate that DCCA-related methods offer advantages in correctly quantifying scale-dependent correlations that result from different physical processes.
NASA Astrophysics Data System (ADS)
Mohammad, Yasir K.; Pavlova, Olga N.; Pavlov, Alexey N.
2016-04-01
We discuss the problem of quantifying chaotic dynamics at the input of the "integrate-and-fire" (IF) model from the output sequences of interspike intervals (ISIs), for the case when a fluctuating threshold level leads to the appearance of noise in the ISI series. We propose a way to detect whether dynamical characteristics of the input dynamics can be computed, and to estimate the level of noise in the output point processes. The proposed approach is based on the dependence of the largest Lyapunov exponent on the maximal orientation error used in estimating the averaged rate of divergence of nearby phase trajectories.
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled through Regional Climate Models to watersheds, river hydrology, and population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community-level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
NASA Astrophysics Data System (ADS)
Li, Yongbo; Xu, Minqiang; Wang, Rixin; Huang, Wenhu
2016-01-01
This paper presents a new rolling bearing fault diagnosis method based on local mean decomposition (LMD), improved multiscale fuzzy entropy (IMFE), Laplacian score (LS) and improved support vector machine based binary tree (ISVM-BT). When a fault occurs in rolling bearings, the measured vibration signal is a multi-component amplitude-modulated and frequency-modulated (AM-FM) signal. LMD, a new self-adaptive time-frequency analysis method, can decompose any complicated signal into a series of product functions (PFs), each of which is exactly a mono-component AM-FM signal. Hence, LMD is introduced to preprocess the vibration signal. Furthermore, IMFE, which is designed to avoid the inaccurate estimation of fuzzy entropy, can be utilized to quantify the complexity and self-similarity of time series over a range of scales based on fuzzy entropy. Besides, the LS approach is introduced to refine the fault features by sorting the scale factors. Subsequently, the obtained features are fed into the multi-fault classifier ISVM-BT to automatically accomplish fault pattern identification. The experimental results validate the effectiveness of the methodology and demonstrate that the proposed algorithm can be applied to recognize the different categories and severities of rolling bearing faults.
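Fuzzy entropy, the quantity underlying the IMFE feature above, replaces the hard similarity threshold of sample entropy with a smooth exponential membership function. A minimal pure-Python sketch (the parameter choices m = 2 and r = 0.15, and the plain single-scale form, are illustrative assumptions; the paper's improved multiscale variant is not reproduced here):

```python
import math

def _phi(series, m, r):
    """Average exponential similarity over all pairs of length-m template
    vectors; each vector has its own mean removed, as in fuzzy entropy."""
    vecs = []
    for i in range(len(series) - m):
        v = series[i:i + m]
        mu = sum(v) / m
        vecs.append([s - mu for s in v])
    total, count = 0.0, 0
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            d = max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
            total += math.exp(-(d ** 2) / r)   # smooth membership, no hard cutoff
            count += 1
    return total / count

def fuzzy_entropy(series, m=2, r=0.15):
    """FuzzyEn = ln phi(m) - ln phi(m+1); higher values mean less regularity."""
    return math.log(_phi(series, m, r)) - math.log(_phi(series, m + 1, r))
```

A perfectly regular signal yields a value near zero, while a broadband or chaotic signal yields a clearly positive value, which is what makes the measure usable as a fault feature.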
NASA Astrophysics Data System (ADS)
Riede, Tobias; Mitchell, Brian R.; Tokuda, Isao; Owren, Michael J.
2005-07-01
Measuring noise as a component of mammalian vocalizations is of interest because of its potential relevance to the communicative function. However, methods for characterizing and quantifying noise are less well established than methods applicable to harmonically structured aspects of signals. Using barks of coyotes and domestic dogs, we compared six acoustic measures and studied how they are related to human perception of noisiness. Measures of harmonic-to-noise-ratio (HNR), percent voicing, and shimmer were found to be the best predictors of perceptual rating by human listeners. Both acoustics and perception indicated that noisiness was similar across coyote and dog barks, but within each species there was significant variation among the individual vocalizers. The advantages and disadvantages of the various measures are discussed.
Orthogonal control of expression mean and variance by epigenetic features at different genomic loci
Dey, Siddharth S.; Foley, Jonathan E.; Limsirichai, Prajit; ...
2015-05-05
While gene expression noise has been shown to drive dramatic phenotypic variations, the molecular basis for this variability in mammalian systems is not well understood. Gene expression has been shown to be regulated by promoter architecture and the associated chromatin environment. However, the exact contribution of these two factors in regulating expression noise has not been explored. Using a dual-reporter lentiviral model system, we deconvolved the influence of the promoter sequence to systematically study the contribution of the chromatin environment at different genomic locations in regulating expression noise. By integrating a large-scale analysis to quantify mRNA levels by smFISH and protein levels by flow cytometry in single cells, we found that mean expression and noise are uncorrelated across genomic locations. Furthermore, we showed that this independence could be explained by the orthogonal control of mean expression by the transcript burst size and noise by the burst frequency. Finally, we showed that genomic locations displaying higher expression noise are associated with more repressed chromatin, thereby indicating the contribution of the chromatin environment in regulating expression noise.
NASA Technical Reports Server (NTRS)
Mccurdy, David A.
1988-01-01
Two experiments were conducted to quantify the annoyance of people to advanced turboprop (propfan) aircraft flyover noise. The objectives were to: (1) determine the effects on annoyance of various tonal characteristics; and (2) compare annoyance to advanced turboprops with annoyance to conventional turboprops and jets. A computer was used to produce realistic, time-varying simulations of advanced turboprop aircraft takeoff noise. In the first experiment, subjects judged the annoyance of 45 advanced turboprop noises in which the tonal content was systematically varied to represent the factorial combinations of five fundamental frequencies, three frequency envelope shapes, and three tone-to-broadband noise ratios. Each noise was presented at three sound levels. In the second experiment, 18 advanced turboprop takeoffs, 5 conventional turboprop takeoffs, and 5 conventional jet takeoffs were presented at three sound pressure levels to subjects. Analysis indicated that frequency envelope shape did not significantly affect annoyance. The interaction of fundamental frequency with tone-to-broadband noise ratio did have a large and complex effect on annoyance. The advanced turboprop stimuli were slightly less annoying than the conventional stimuli.
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequencies of 0.5, 0.7, and 1.0 × the Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.
Detectability of radiological images: the influence of anatomical noise
NASA Astrophysics Data System (ADS)
Bochud, Francois O.; Verdun, Francis R.; Hessler, Christian; Valley, Jean-Francois
1995-04-01
Radiological image quality can be objectively quantified by statistical decision theory. This theory is commonly applied with the noise of the imaging system alone (quantum, screen and film noise), whereas the actual noise present in the image is the 'anatomical noise' (the sum of the system noise and the anatomical texture). This anatomical texture should play a role in the detection task. This paper compares these two kinds of noise by performing 2AFC experiments and computing the area under the ROC curve. It is shown that the 'anatomical noise' cannot be considered as noise in the sense of the Wiener spectrum approach and that the detectability performance is the same as that obtained with the system noise alone in the case of a small object to be detected. Furthermore, statistical decision theory with a non-prewhitening observer does not match the experimental results. This is especially the case at low contrast values, for which the theory predicts an increase in detectability as soon as the contrast differs from zero, whereas the experimental result demonstrates an offset contrast value below which detectability is purely random. The theory therefore needs to be improved in order to take this result into account.
Dunlop, Rebecca A; Noad, Michael J; McCauley, Robert D; Scott-Hayward, Lindsay; Kniest, Eric; Slade, Robert; Paton, David; Cato, Douglas H
2017-08-15
The effect of various anthropogenic sources of noise (e.g. sonar, seismic surveys) on the behaviour of marine mammals is sometimes quantified as a dose-response relationship, where the probability of an animal behaviourally 'responding' (e.g. avoiding the source) increases with 'dose' (or received level of noise). To do this, however, requires a definition of a 'significant' response (avoidance), which can be difficult to quantify. There is also the potential that the animal 'avoids' not only the source of noise but also the vessel operating the source, complicating the relationship. The proximity of the source is an important variable to consider in the response, yet difficult to account for given that received level and proximity are highly correlated. This study used the behavioural response of humpback whales to noise from two different air gun arrays (20 and 140 cubic inch air gun arrays) to determine whether a dose-response relationship existed. To do this, a measure of avoidance of the source was developed, and the magnitude (rather than probability) of this response was tested against dose. The proximity to the source, and the vessel itself, were included within a single analysis model. Humpback whales were more likely to avoid the air gun arrays (but not the controls) within 3 km of the source at received levels over 140 dB re 1 µPa²·s, meaning that both the proximity and the received level were important factors and the relationship between dose (received level) and response is not a simple one. © 2017. Published by The Company of Biologists Ltd.
Belukha whale (Delphinapterus leucas) responses to industrial noise in Nushagak Bay, Alaska: 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, B.S.; Awbrey, F.T.; Evans, W.E.
1983-01-01
Between 15 June and 14 July 1983 the authors conducted playback experiments with belukha whales in the Snake River, Alaska, using sounds recorded near an operating oil-drilling rig. The objectives of these experiments were to quantify behavioral responses of belukha whales to oil drilling noise in an area where foreign acoustic stimuli were absent, and to test the hypothesis that belukha whales would not approach a source of loud sound.
Quantifying the intelligibility of speech in noise for non-native listeners.
van Wijngaarden, Sander J; Steeneken, Herman J M; Houtgast, Tammo
2002-04-01
When listening to languages learned at a later age, speech intelligibility is generally lower than when listening to one's native language. The main purpose of this study is to quantify speech intelligibility in noise for specific populations of non-native listeners, only broadly addressing the underlying perceptual and linguistic processing. An easy method is sought to extend these quantitative findings to other listener populations. Dutch subjects listening to German and English speech, ranging from reasonable to excellent proficiency in these languages, were found to require a 1-7 dB better speech-to-noise ratio to obtain 50% sentence intelligibility than native listeners. Also, the psychometric function for sentence recognition in noise was found to be shallower for non-native than for native listeners (worst-case slope around the 50% point of 7.5%/dB, compared to 12.6%/dB for native listeners). Differences between native and non-native speech intelligibility are largely predicted by linguistic entropy estimates as derived from a letter guessing task. Less effective use of context effects (especially semantic redundancy) explains the reduced speech intelligibility for non-native listeners. While measuring speech intelligibility for many different populations of listeners (languages, linguistic experience) may be prohibitively time consuming, obtaining predictions of non-native intelligibility from linguistic entropy may help to extend the results of this study to other listener populations.
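The quantities reported above (a speech reception threshold, SRT, and a slope at the 50% point) fully determine a logistic psychometric function. A small illustrative sketch, assuming the logistic form and interpreting the reported %/dB values as the midpoint slope (both modeling assumptions, not the authors' fitting procedure):

```python
import math

def intelligibility(snr_db, srt_db, slope_per_db):
    """Logistic psychometric function: fraction of sentences correct at a
    given speech-to-noise ratio. slope_per_db is the slope at the 50% point
    (fraction correct per dB), so the logistic rate constant is 4 * slope."""
    k = 4.0 * slope_per_db
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt_db)))

# Hypothetical illustration: native listeners with SRT at 0 dB and slope
# 12.6%/dB, versus non-native listeners with SRT shifted by +4 dB and the
# worst-case slope of 7.5%/dB.
native = intelligibility(0.0, 0.0, 0.126)       # 0.5 at the native SRT
non_native = intelligibility(0.0, 4.0, 0.075)   # well below 0.5 at that SNR
```

The 1-7 dB SRT shift reported above corresponds to sliding this curve rightward; the shallower non-native slope additionally widens the SNR range over which intelligibility is only partial.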
NASA Astrophysics Data System (ADS)
Adamos, Dimitrios A.; Laskaris, Nikolaos A.; Micheloyannis, Sifis
2018-06-01
Objective. Music, being a multifaceted stimulus evolving at multiple timescales, modulates brain function in a manifold way that encompasses not only the distinct stages of auditory perception, but also higher cognitive processes like memory and appraisal. Network theory is apparently a promising approach to describe the functional reorganization of brain oscillatory dynamics during music listening. However, the music-induced changes have so far been examined within the functional boundaries of isolated brain rhythms. Approach. Using naturalistic music, we detected the functional segregation patterns associated with different cortical rhythms, as these were reflected in the surface electroencephalography (EEG) measurements. The emergent structure was compared across frequency bands to quantify the interplay among rhythms. It was also contrasted against the structure from the rest and noise listening conditions to reveal the specific components stemming from music listening. Our methodology includes an efficient graph-partitioning algorithm, which is further utilized for mining prototypical modular patterns, and a novel algorithmic procedure for identifying ‘switching nodes’ (i.e. recording sites) that consistently change module during music listening. Main results. Our results suggest the multiplex character of the music-induced functional reorganization and particularly indicate the dependence between the networks reconstructed from the δ and βH rhythms. This dependence is further justified within the framework of nested neural oscillations and fits perfectly within the context of recently introduced cortical entrainment to music. Significance. Complying with the contemporary trends towards a multi-scale examination of the brain network organization, our approach specifies the form of neural coordination among rhythms during music listening.
Considering its computational efficiency, and in conjunction with the flexibility of in situ electroencephalography, it may lead to novel assistive tools for real-life applications.
NASA Astrophysics Data System (ADS)
Wang, Lei; Schnurr, Alena-Kathrin; Zidowitz, Stephan; Georgii, Joachim; Zhao, Yue; Razavi, Mohammad; Schwier, Michael; Hahn, Horst K.; Hansen, Christian
2016-03-01
Segmentation of hepatic arteries in multi-phase computed tomography (CT) images is indispensable in liver surgery planning. During image acquisition, the hepatic artery is enhanced by the injection of contrast agent. The enhanced signals are often not stably acquired due to non-optimal contrast timing. Other vascular structures, such as the hepatic vein or portal vein, can be enhanced as well in the arterial phase, which can adversely affect the segmentation results. Furthermore, the arteries might suffer from partial volume effects due to their small diameter. To overcome these difficulties, we propose a framework for robust hepatic artery segmentation requiring a minimal amount of user interaction. First, an efficient multi-scale Hessian-based vesselness filter is applied on the artery phase CT image, aiming to enhance vessel structures within a specified diameter range. Second, the vesselness response is processed using a Bayesian classifier to identify the most probable vessel structures. Since the vesselness filter normally does not perform ideally on vessel bifurcations or on segments corrupted by noise, two vessel-reconnection techniques are proposed. The first technique uses a directional morphological operator to dilate vessel segments along their centerline directions, attempting to fill the gap between broken vascular segments. The second technique analyzes the connectivity of vessel segments and reconnects disconnected segments and branches. Finally, a 3D vessel tree is reconstructed. The algorithm has been evaluated using 18 CT images of the liver. To quantitatively measure the similarities between segmented and reference vessel trees, the skeleton coverage and mean symmetric distance are calculated to quantify the agreement between reference and segmented vessel skeletons, resulting in averages of 0.55 ± 0.27 and 12.7 ± 7.9 mm (mean ± standard deviation), respectively.
Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.
2017-07-01
Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.
Qiu, Wei; Hamernik, Roger P; Davis, Robert I
2013-05-01
A series of Gaussian and non-Gaussian equal energy noise exposures were designed with the objective of establishing the extent to which the kurtosis statistic could be used to grade the severity of noise trauma produced by the exposures. Here, 225 chinchillas distributed in 29 groups, with 6 to 8 animals per group, were exposed at 97 dB SPL. The equal energy exposures were presented either continuously for 5 d or on an interrupted schedule for 19 d. The non-Gaussian noises all differed in the level of the kurtosis statistic or in the temporal structure of the noise, where the latter was defined by different peak, interval, and duration histograms of the impact noise transients embedded in the noise signal. Noise-induced trauma was estimated from auditory evoked potential hearing thresholds and surface preparation histology that quantified sensory cell loss. Results indicated that the equal energy hypothesis is a valid unifying principle for estimating the consequences of an exposure if and only if the equivalent energy exposures had the same kurtosis. Furthermore, for the same level of kurtosis the detailed temporal structure of an exposure does not have a strong effect on trauma.
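The kurtosis statistic used above to grade exposures is the standardized fourth moment of the pressure signal: it equals 3 for Gaussian noise and rises as high-level impact transients are embedded in an otherwise Gaussian background. A minimal sketch (the moment-ratio definition is the standard one; the example signals in the usage note are hypothetical):

```python
def kurtosis(samples):
    """Kurtosis beta = E[(x - mu)^4] / (E[(x - mu)^2])^2.
    beta = 3 for a Gaussian signal; impact transients push it above 3."""
    n = len(samples)
    mu = sum(samples) / n
    m2 = sum((s - mu) ** 2 for s in samples) / n   # second central moment
    m4 = sum((s - mu) ** 4 for s in samples) / n   # fourth central moment
    return m4 / (m2 * m2)
```

Two equal-energy exposures can then be ranked: a signal containing sparse large transients has the same RMS level but a much larger kurtosis than steady Gaussian noise of equal energy, which is the distinction the study exploits.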
Multiscale permutation entropy analysis of EEG recordings during sevoflurane anesthesia
NASA Astrophysics Data System (ADS)
Li, Duan; Li, Xiaoli; Liang, Zhenhu; Voss, Logan J.; Sleigh, Jamie W.
2010-08-01
Electroencephalogram (EEG) monitoring of the effect of anesthetic drugs on the central nervous system has long been used in anesthesia research. Several methods based on nonlinear dynamics, such as permutation entropy (PE), have been proposed to analyze EEG series during anesthesia. However, these measures are still single-scale based and may not completely describe the dynamical characteristics of complex EEG series. In this paper, a novel measure combining multiscale PE information, called CMSPE (composite multi-scale permutation entropy), was proposed for quantifying the anesthetic drug effect on EEG recordings during sevoflurane anesthesia. Three sets of simulated EEG series during awake, light and deep anesthesia were used to select the parameters for the multiscale PE analysis: embedding dimension m, lag τ and scales to be integrated into the CMSPE index. Then, the CMSPE index and raw single-scale PE index were applied to EEG recordings from 18 patients who received sevoflurane anesthesia. Pharmacokinetic/pharmacodynamic (PKPD) modeling was used to relate the measured EEG indices and the anesthetic drug concentration. Prediction probability (Pk) statistics and correlation analysis with the response entropy (RE) index, derived from the spectral entropy (M-entropy module; GE Healthcare, Helsinki, Finland), were investigated to evaluate the effectiveness of the new proposed measure. It was found that raw single-scale PE was blind to subtle transitions between light and deep anesthesia, while the CMSPE index tracked these changes accurately. Around the time of loss of consciousness, CMSPE responded significantly more rapidly than the raw PE, with the absolute slopes of linearly fitted response versus time plots of 0.12 (0.09-0.15) and 0.10 (0.06-0.13), respectively. The prediction probability Pk of 0.86 (0.85-0.88) and 0.85 (0.80-0.86) for CMSPE and raw PE indicated that the CMSPE index correlated well with the underlying anesthetic effect. 
The correlation coefficient for the comparison between the CMSPE index and RE index of 0.84 (0.80-0.88) was significantly higher than the raw PE index of 0.75 (0.66-0.84). The results show that the CMSPE outperforms the raw single-scale PE in reflecting the sevoflurane drug effect on the central nervous system.
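The composite scheme above can be sketched generically: compute normalized permutation entropy on each of the τ shifted coarse-grainings at scale τ and average the results, which stabilizes the estimate for short series. A minimal pure-Python illustration (assuming standard ordinal-pattern PE and the composite coarse-graining of composite multiscale entropy; the paper's EEG-tuned parameters and its integration across selected scales are not reproduced):

```python
import math
from collections import Counter

def permutation_entropy(series, m=3, lag=1):
    """Normalized permutation entropy in [0, 1]: Shannon entropy of the
    ordinal-pattern distribution, divided by log(m!)."""
    patterns = Counter()
    for i in range(len(series) - (m - 1) * lag):
        window = tuple(series[i + j * lag] for j in range(m))
        # Ordinal pattern: ranks of the m samples within the window.
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m))

def composite_multiscale_pe(series, scale, m=3, lag=1):
    """Average PE over all `scale` shifted coarse-grainings at this scale."""
    pes = []
    for shift in range(scale):
        grained = [sum(series[i:i + scale]) / scale
                   for i in range(shift, len(series) - scale + 1, scale)]
        pes.append(permutation_entropy(grained, m, lag))
    return sum(pes) / scale
```

At scale 1 the composite index coincides with the ordinary single-scale PE; at larger scales it averages over all shifted coarse-grainings instead of discarding all but one.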
Southern Regional Center for Lightweight Innovative Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horstemeyer, Mark F.; Wang, Paul
The three major objectives of this Phase III project are: To develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty to decrease weight and cost, yet increase the performance and safety in impact scenarios; To develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and To develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios.
NASA Astrophysics Data System (ADS)
Lucas, Iris; Cotsaftis, Michel; Bertelle, Cyrille
2017-12-01
Multiagent systems (MAS) provide a useful tool for exploring the complex dynamics and behavior of financial markets, and the MAS approach has now been widely implemented and documented in the empirical literature. This paper introduces the implementation of an innovative multi-scale mathematical model for a computational agent-based financial market. The paper develops a method to quantify the degree of self-organization which emerges in the system and shows that the capacity for self-organization is maximized when agent behaviors are heterogeneous. Numerical results are presented and analyzed, showing how the global market behavior emerges from specific individual behavior interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Ryan G.; Vance, Sean; Cattaneo, Richard
2014-08-15
Purpose: Iterative reconstruction (IR) reduces noise, thereby allowing dose reduction in computed tomography (CT) while maintaining comparable image quality to filtered back-projection (FBP). This study sought to characterize image quality metrics, delineation, dosimetric assessment, and other aspects necessary to integrate IR into treatment planning. Methods: CT images (Brilliance Big Bore v3.6, Philips Healthcare) were acquired of several phantoms using 120 kVp and 25–800 mAs. IR was applied at levels corresponding to noise reduction of 0.89–0.55 with respect to FBP. Noise power spectrum (NPS) analysis was used to characterize noise magnitude and texture. CT to electron density (CT-ED) curves were generated over all IR levels. Uniformity as well as spatial and low contrast resolution were quantified using a CATPHAN phantom. Task-specific modulation transfer functions (MTF_task) were developed to characterize spatial frequency across objects of varied contrast. A prospective dose reduction study was conducted for 14 patients undergoing interfraction CT scans for high-dose rate brachytherapy. Three physicians performed image quality assessment using a six-point grading scale between the normal-dose FBP (reference), low-dose FBP, and low-dose IR scans for the following metrics: image noise, detectability of the vaginal cuff/bladder interface, spatial resolution, texture, segmentation confidence, and overall image quality. Contouring differences between FBP and IR were quantified for the bladder and rectum via overlap indices (OI) and Dice similarity coefficients (DSC). Line profile and region of interest analyses quantified noise and boundary changes. For two subjects, the impact of IR on external beam dose calculation was assessed via gamma analysis and changes in digitally reconstructed radiographs (DRRs) were quantified.
Results: NPS showed a large reduction in noise magnitude (50%) and a slight spatial frequency shift (∼0.1 mm{sup −1}) with application of IR at L6. No appreciable changes were observed in CT-ED curves between FBP and IR levels [maximum difference ∼13 HU for bone (∼1% difference)]. For uniformity, differences were ∼1 HU between FBP and IR. Spatial resolution was well conserved; the largest MTF{sub task} decrease between FBP and IR levels was 0.08 A.U. No notable changes in low-contrast detectability were observed, and CNR increased substantially with IR. For the patient study, qualitative image grading showed low-dose IR was equivalent to or slightly worse than normal-dose FBP and superior to low-dose FBP (p < 0.001 for noise), although these differences did not translate to differences in CT number, contouring ability, or dose calculation. The largest CT number discrepancy from FBP occurred at a bone/tissue interface using the most aggressive IR level [−1.2 ± 4.9 HU (range: −17.6–12.5 HU)]. No clinically significant contour differences were found between IR and FBP, with OIs and DSCs ranging from 0.85 to 0.95. Negligible changes in dose calculation were observed. DRRs preserved anatomical detail, with <2% difference in intensity from FBP even with the most aggressive IR level (L6). Conclusions: These results support integrating IR into treatment planning. While slight degradation of edges and a shift in texture were observed in phantom, patient results show that qualitative image grading, contouring ability, and dosimetric parameters were not adversely affected.
Assessing the Effects of Multi-Node Sensor Network Configurations on the Operational Tempo
2014-09-01
The LPISimNet software tool provides the capability to quantify the performance of sensor network configurations. A surviving fragment of the report body defines nP as the noise power of the receiver and iL as the implementation loss of the receiver due to hardware manufacturing.
Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.
Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A
2007-12-01
By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifier gaps are found in the transition between the continuous processes and their associated noises.
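The Bandt-Pompe symbolization underlying these quantifiers is simple to implement. Below is a minimal, self-contained sketch of the normalized permutation (Shannon) entropy; the function name and dictionary-based pattern counting are illustrative choices, not the authors' code:

```python
from math import factorial, log

def permutation_entropy(series, order=3):
    """Normalized Shannon entropy of Bandt-Pompe ordinal patterns."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Map the window to the permutation that sorts it (ties broken
        # by position, as in the Bandt-Pompe prescription).
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = sum(-(c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))   # normalize by log(order!)

# A monotone ramp visits a single ordinal pattern, so entropy is 0.
print(permutation_entropy(list(range(50))))  # 0.0
```

A fully random series would instead visit all `order!` patterns roughly equally, driving the normalized entropy toward 1.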
UHB Engine Fan Broadband Noise Reduction Study
NASA Technical Reports Server (NTRS)
Gliebe, Philip R.; Ho, Patrick Y.; Mani, Ramani
1995-01-01
A study has been completed to quantify the contribution of fan broadband noise to advanced high bypass turbofan engine system noise levels. The result suggests that reducing fan broadband noise can produce 3 to 4 EPNdB in engine system noise reduction, once the fan tones are eliminated. Further, in conjunction with the elimination of fan tones and an increase in bypass ratio, a potential reduction of 7 to 10 EPNdB in system noise can be achieved. In addition, an initial assessment of engine broadband noise source mechanisms has been made, concluding that the dominant source of fan broadband noise is the interaction of incident inlet boundary layer turbulence with the fan rotor. This source has two contributors, i.e., unsteady lift dipole response and steady loading quadrupole response. The quadrupole contribution was found to be the most important component, suggesting that broadband noise reduction can be achieved by the reduction of steady loading field-turbulence field quadrupole interaction. Finally, for a controlled experimental quantification and verification, the study recommends that further broadband noise tests be done on a simulated engine rig, such as the GE Aircraft Engine Universal Propulsion Simulator, rather than testing on an engine statically in an outdoor arena. The rig should be capable of generating forward and aft propagating fan noise, and it needs to be tested in a large freejet or a wind tunnel.
UHB engine fan broadband noise reduction study
NASA Astrophysics Data System (ADS)
Gliebe, Philip R.; Ho, Patrick Y.; Mani, Ramani
1995-06-01
A study has been completed to quantify the contribution of fan broadband noise to advanced high bypass turbofan engine system noise levels. The result suggests that reducing fan broadband noise can produce 3 to 4 EPNdB in engine system noise reduction, once the fan tones are eliminated. Further, in conjunction with the elimination of fan tones and an increase in bypass ratio, a potential reduction of 7 to 10 EPNdB in system noise can be achieved. In addition, an initial assessment of engine broadband noise source mechanisms has been made, concluding that the dominant source of fan broadband noise is the interaction of incident inlet boundary layer turbulence with the fan rotor. This source has two contributors, i.e., unsteady lift dipole response and steady loading quadrupole response. The quadrupole contribution was found to be the most important component, suggesting that broadband noise reduction can be achieved by the reduction of steady loading field-turbulence field quadrupole interaction. Finally, for a controlled experimental quantification and verification, the study recommends that further broadband noise tests be done on a simulated engine rig, such as the GE Aircraft Engine Universal Propulsion Simulator, rather than testing on an engine statically in an outdoor arena. The rig should be capable of generating forward and aft propagating fan noise, and it needs to be tested in a large freejet or a wind tunnel.
Zhao, Feihu; Vaughan, Ted J; Mcnamara, Laoise M
2015-04-01
Recent studies have shown that mechanical stimulation, by means of flow perfusion and mechanical compression (or stretching), enhances osteogenic differentiation of mesenchymal stem cells and bone cells within biomaterial scaffolds in vitro. However, the precise mechanisms by which such stimulation enhances bone regeneration are not yet fully understood. Previous computational studies have sought to characterise the mechanical stimulation on cells within biomaterial scaffolds using either computational fluid dynamics or finite element (FE) approaches. However, the physical environment within a scaffold under perfusion is extremely complex and requires a multiscale and multiphysics approach to study the mechanical stimulation of cells. In this study, we seek to determine the mechanical stimulation of osteoblasts seeded in a biomaterial scaffold under flow perfusion and mechanical compression using multiscale modelling by two-way fluid-structure interaction and FE approaches. The mechanical stimulation, in terms of wall shear stress (WSS) and strain in osteoblasts, is quantified at different locations within the scaffold for cells of different attachment morphologies (attached, bridged). The results show that 75.4 % of the scaffold surface has a WSS of 0.1-10 mPa, which indicates the likelihood of bone cell differentiation at these locations. For attached and bridged osteoblasts, the maximum strains are 397 and 177,200 με, respectively. Additionally, the results from mechanical compression show that attached cells are more stimulated (maximum strain = 22,600 με) than bridged cells (maximum strain = 10,000 με). Such information is important for understanding the biological response of osteoblasts under in vitro stimulation. Finally, a combination of perfusion and compression of a tissue engineering scaffold is suggested for osteogenic differentiation.
May, Christian P; Kolokotroni, Eleni; Stamatakos, Georgios S; Büchler, Philippe
2011-10-01
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning. Copyright © 2011 Elsevier Ltd. All rights reserved.
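The shape metric used above, the ratio of smallest to largest moment of inertia, can be computed directly from the coordinates of the tumor material. A minimal 2-D sketch with equal-mass points; the closed-form 2x2 eigenvalue solution is an implementation convenience, not taken from the paper:

```python
from math import sqrt

def inertia_ratio(points):
    """Ratio of smallest to largest principal moment of inertia for a
    set of equal-mass 2-D points (1.0 means an isotropic shape)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    ixx = sum((p[1] - cy) ** 2 for p in points)
    iyy = sum((p[0] - cx) ** 2 for p in points)
    ixy = -sum((p[0] - cx) * (p[1] - cy) for p in points)
    # Eigenvalues of the 2x2 tensor [[ixx, ixy], [ixy, iyy]].
    mean = (ixx + iyy) / 2.0
    dev = sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    return (mean - dev) / (mean + dev)

square = [(x, y) for x in range(10) for y in range(10)]
stretched = [(3 * x, y) for x, y in square]
print(inertia_ratio(square))     # 1.0 for a symmetric grid
print(inertia_ratio(stretched))  # 1/9 for the 3x-stretched grid
```

The same construction extends to 3-D voxel data with a 3x3 tensor and a numerical eigensolver.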
Extremes and bursts in complex multi-scale plasmas
NASA Astrophysics Data System (ADS)
Watkins, N. W.; Chapman, S. C.; Hnat, B.
2012-04-01
Quantifying the spectrum of sizes and durations of large and/or long-lived fluctuations in complex, multi-scale, space plasmas is a topic of both theoretical and practical importance. The predictions of inherently multi-scale physical theories such as MHD turbulence have given one direct stimulus for its investigation. There are also space weather implications to an improved ability to assess the likelihood of an extreme fluctuation of a given size. Our intuition as scientists tends to be formed on the familiar Gaussian "normal" distribution, which has a very low likelihood of extreme fluctuations. Perhaps surprisingly, there is both theoretical and observational evidence that favours non-Gaussian, heavier-tailed, probability distributions for some space physics datasets. Additionally there is evidence for the existence of long-ranged memory between the values of fluctuations. In this talk I will show how such properties can be captured in a preliminary way by a self-similar, fractal model. I will show how such a fractal model can be used to make predictions for experimentally accessible quantities like the size and duration of a burst (a sequence of values that exceed a given threshold), or the survival probability of a burst [c.f. preliminary results in Watkins et al, PRE, 2009]. In real-world time series scaling behaviour need not be "mild" enough to be captured by a single self-similarity exponent H, but might instead require a "wild" multifractal spectrum of scaling exponents [e.g. Rypdal and Rypdal, JGR, 2011; Moloney and Davidsen, JGR, 2011] to give a complete description. I will discuss preliminary work on extending the burst approach into the multifractal domain [see also Watkins et al, chapter in press for AGU Chapman Conference on Complexity and Extreme Events in the Geosciences, Hyderabad].
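A burst as defined above, a maximal run of values exceeding a given threshold, is straightforward to extract from a time series. A hedged sketch, with burst size taken as the summed exceedance over the run; other size definitions (e.g. peak value) are equally common:

```python
def bursts(series, threshold):
    """Return (size, duration) for each burst: a maximal run of
    consecutive values exceeding the threshold.  Size is the summed
    exceedance over the run, duration its length in samples."""
    out, size, duration = [], 0.0, 0
    for x in series:
        if x > threshold:
            size += x - threshold
            duration += 1
        elif duration:
            out.append((size, duration))
            size, duration = 0.0, 0
    if duration:                       # burst still open at series end
        out.append((size, duration))
    return out

print(bursts([0, 2, 3, 0, 5, 0, 1, 4, 4], threshold=1))
# [(3.0, 2), (4.0, 1), (6.0, 2)]
```

Histogramming the returned sizes and durations gives the empirical burst-size and burst-duration distributions whose tails the talk discusses.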
Ding, Jiao; Jiang, Yuan; Liu, Qi; Hou, Zhaojiang; Liao, Jianyu; Fu, Lan; Peng, Qiuzhi
2016-05-01
Understanding the relationships between land use patterns and water quality in low-order streams is useful for effective landscape planning to protect downstream water quality. A clear understanding of these relationships remains elusive due to the heterogeneity of land use patterns and scale effects. To better assess land use influences, we developed empirical models relating land use patterns to the water quality of low-order streams at different geomorphic regions across multi-scales in the Dongjiang River basin using multivariate statistical analyses. The land use pattern was quantified in terms of the composition, configuration and hydrological distance of land use types at the reach buffer, riparian corridor and catchment scales. Water was sampled under summer base flow at 56 low-order catchments, which were classified into two homogenous geomorphic groups. The results indicated that the water quality of low-order streams was most strongly affected by the configuration metrics of land use. Poorer water quality was associated with higher patch densities of cropland, orchards and grassland in the mountain catchments, whereas it was associated with a higher value for the largest patch index of urban land use in the plain catchments. The overall water quality variation was explained better by catchment scale than by riparian- or reach-scale land use, whereas the spatial scale over which land use influenced water quality also varied across specific water parameters and the geomorphic basis. Our study suggests that watershed management should adopt better landscape planning and multi-scale measures to improve water quality. Copyright © 2016 Elsevier B.V. All rights reserved.
Taitelbaum-Swead, Riki; Fostick, Leah
2016-01-01
Everyday life includes fluctuating noise levels, resulting in continuously changing speech intelligibility. The study aims were: (1) to quantify the amount of decrease in age-related speech perception, as a result of increasing noise level, and (2) to test the effect of age on context usage at the word level (smaller amount of contextual cues). A total of 24 young adults (age 20-30 years) and 20 older adults (age 60-75 years) were tested. Meaningful and nonsense one-syllable consonant-vowel-consonant words were presented with the background noise types of speech noise (SpN), babble noise (BN), and white noise (WN), with a signal-to-noise ratio (SNR) of 0 and -5 dB. Older adults had lower accuracy in SNR = 0, with WN being the most difficult condition for all participants. Measuring the change in speech perception when SNR decreased showed a reduction of 18.6-61.5% in intelligibility, with age effect only for BN. Both young and older adults used less phonemic context with WN, as compared to other conditions. Older adults are more affected by an increasing noise level of fluctuating informational noise as compared to steady-state noise. They also use less contextual cues when perceiving monosyllabic words. Further studies should take into consideration that when presenting the stimulus differently (change in noise level, less contextual cues), other perceptual and cognitive processes are involved. © 2016 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.
2009-05-01
In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
An experimental study of noise in mid-infrared quantum cascade lasers of different designs
NASA Astrophysics Data System (ADS)
Schilt, Stéphane; Tombez, Lionel; Tardy, Camille; Bismuto, Alfredo; Blaser, Stéphane; Maulini, Richard; Terazzi, Romain; Rochat, Michel; Südmeyer, Thomas
2015-04-01
We present an experimental study of noise in mid-infrared quantum cascade lasers (QCLs) of different designs. By quantifying the high degree of correlation occurring between fluctuations of the optical frequency and voltage between the QCL terminals, we show that electrical noise is a powerful and simple means to study noise in QCLs. Based on this outcome, we investigated the electrical noise in a large set of 22 QCLs emitting in the range of 7.6-8 μm and consisting of both ridge-waveguide and buried-heterostructure (BH) lasers with different geometrical designs and operation parameters. From a statistical data processing based on an analysis of variance, we assessed that ridge-waveguide lasers have lower noise than BH lasers. Our physical interpretation is that additional current leakages or spare injection channels occur at the interface between the active region and the lateral insulator in the BH geometry, which induces some extra noise. In addition, Schottky-type contacts occurring at the interface between the n-doped regions and the lateral insulator, i.e., iron-doped InP, are also believed to be a potential source of additional noise in some BH lasers, as observed from the slight reduction in the integrated voltage noise observed at the laser threshold in several BH-QCLs.
Environmental noise levels affect the activity budget of the Florida manatee
NASA Astrophysics Data System (ADS)
Miksis-Olds, Jennifer L.; Donaghay, Percy L.; Miller, James H.; Tyack, Peter L.
2005-09-01
Manatees inhabit coastal bays, lagoons, and estuaries because they are dependent on the aquatic vegetation that grows in shallow waters. Food requirements force manatees to occupy the same areas in which human activities are the greatest. Noise produced by human activities has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. This study quantifies the behavioral responses of manatees to both changing levels of ambient noise and transient noise sources. Results indicate that elevated environmental noise levels do affect the overall activity budget of this species. The proportion of time manatees spend feeding, milling, and traveling in critical habitats changed as a function of noise level. More time was spent in the directed, goal-oriented behaviors of feeding and traveling, while less time was spent milling, when noise levels were highest. The animals also responded to the transient noise of approaching vessels with changes in behavioral state and movements out of the geographical area. This suggests that manatees detect and respond to changes in environmental noise levels. Whether these changes legally constitute harassment and produce biologically significant effects needs to be addressed with hypothesis-driven experiments and long-term monitoring. [For Animal Bioacoustics Best Student Paper Award.]
Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media
NASA Astrophysics Data System (ADS)
Russian, Anna; Dentz, Marco; Gouze, Philippe
2017-08-01
Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging, and their dependence on the disorder distribution, are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging in any dimension. The strength of the sample-to-sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in d ≥ 2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.
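The distinction between averages over the noise ensemble and time averages along a single trajectory, which is central to the ergodicity question above, can be illustrated with an ordinary (ergodic) random walk; the function names, lag, and trajectory counts are illustrative, not taken from the paper:

```python
import random

def trajectory(steps, rng):
    """Unbiased 1-D random walk: a simple ergodic reference case."""
    x, path = 0, [0]
    for _ in range(steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

def time_avg_msd(path, lag):
    """Time-averaged squared displacement along one trajectory."""
    diffs = [(path[i + lag] - path[i]) ** 2 for i in range(len(path) - lag)]
    return sum(diffs) / len(diffs)

def ensemble_msd(paths, lag):
    """Average over the noise ensemble at the same lag."""
    return sum(p[lag] ** 2 for p in paths) / len(paths)

rng = random.Random(42)
paths = [trajectory(2000, rng) for _ in range(200)]
# For an ergodic process both averages converge to the same value
# (here, the lag itself, since the step variance is 1).
print(ensemble_msd(paths, 50), time_avg_msd(paths[0], 50))
```

For the weakly non-ergodic scenarios described in the abstract, the time average would remain a random quantity that differs from the ensemble average even for long trajectories.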
Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference
NASA Technical Reports Server (NTRS)
Shah, B.; Hinedi, S.
1989-01-01
The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined. The effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
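One common textbook form of the split-symbol moments estimator divides each symbol into two half-symbol accumulations and forms the SNR estimate from their first- and second-order moments. A sketch under AWGN with BPSK symbols; the per-half normalization convention and the amplitude/noise values are assumptions for illustration, not necessarily those of the report:

```python
import random

def ssme_snr(half_sums):
    """Split-symbol moments SNR estimate from per-symbol half sums.

    mp estimates the squared half-symbol signal amplitude, while
    mss estimates signal plus noise power, so mp / (mss - mp)
    estimates the signal-to-noise ratio per half symbol."""
    n = len(half_sums)
    mp = sum(ya * yb for ya, yb in half_sums) / n
    mss = sum(ya * ya + yb * yb for ya, yb in half_sums) / (2 * n)
    return mp / (mss - mp)

random.seed(1)
m, sigma = 1.0, 0.25          # half-symbol amplitude, noise std
data = []
for _ in range(20000):
    d = random.choice((-1.0, 1.0))            # BPSK symbol
    data.append((m * d + random.gauss(0.0, sigma),
                 m * d + random.gauss(0.0, sigma)))
print(ssme_snr(data))         # close to m**2 / sigma**2 = 16
```

The ISI analysis in the paper amounts to recomputing these moments when each half sum also contains leakage from neighboring symbols.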
Emergence of the Green’s Functions from Noise and Passive Acoustic Remote Sensing of Ocean Dynamics
2009-09-30
Oleg A. Godin, CIRES/Univ. of Colorado and NOAA/OAR/Earth System Research Lab., R/PSD99, 325 Broadway. Extracted objectives: … characterization of a time-varying ocean where ambient acoustic noise is utilized as a probing signal; to develop a passive remote sensing technique for …; to quantify degradation of performance of passive remote sensing techniques due to ocean surface motion and other variations of underwater …
NASA Astrophysics Data System (ADS)
Zan, Hao; Li, Haowei; Jiang, Yuguang; Wu, Meng; Zhou, Weixing; Bao, Wen
2018-06-01
As part of our efforts to further improve regenerative cooling technology in scramjets, experiments on the thermo-acoustic instability dynamics of hydrocarbon fuel flow have been conducted in horizontal circular tubes at different conditions. The experimental results indicate that there is a developing process from thermo-acoustic stability to instability. To gain a deeper understanding of this developing process, the method of Multi-scale Shannon Wavelet Entropy (MSWE), based on a Wavelet Transform Correlation Filter (WTCF) and Multi-Scale Shannon Entropy (MSE), is adopted in this paper. The results demonstrate that the MSWE method detects the development of thermo-acoustic instability from noise and weak signals, and that it can distinguish among stability, the developing process, and instability. These properties render the method particularly powerful for early warning of thermo-acoustic instability of hydrocarbon fuel flow in scramjet cooling channels. The mass flow rate and the inlet pressure influence the development of the thermo-acoustic instability. This investigation of thermo-acoustic instability dynamics at supercritical pressure based on the wavelet entropy method offers guidance on the control of the scramjet fuel supply, which can secure stable fuel flow in the regenerative cooling system.
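The coarse-graining step shared by multiscale entropy methods can be sketched as follows. This illustrates only generic Costa-style coarse-graining combined with a histogram-based Shannon entropy, not the WTCF wavelet filtering specific to MSWE; bin count and scales are illustrative:

```python
import random
from math import log

def coarse_grain(series, scale):
    """Non-overlapping window averages of length `scale`."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale
            for i in range(n)]

def shannon_entropy(series, bins=16):
    """Shannon entropy (nats) of a histogram of the series."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in series:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    total = len(series)
    return -sum(c / total * log(c / total) for c in counts if c)

def multiscale_entropy(series, max_scale=5):
    """Entropy of the coarse-grained series at scales 1..max_scale."""
    return [shannon_entropy(coarse_grain(series, s))
            for s in range(1, max_scale + 1)]

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(4000)]
print(multiscale_entropy(noise, max_scale=4))
```

In the MSWE variant, the series would first be decomposed with the wavelet correlation filter and the entropy evaluated on the filtered coefficients scale by scale.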
Adaptive cornea modeling from keratometric data.
Martínez-Finkelshtein, Andrei; López, Darío Ramos; Castro, Gracia M; Alió, Jorge L
2011-07-01
To introduce an iterative, multiscale procedure that allows for better reconstruction of the shape of the anterior surface of the cornea from altimetric data collected by a corneal topographer. The report describes, first, an adaptive, multiscale mathematical algorithm for the parsimonious fit of the corneal surface data that adapts the number of functions used in the reconstruction to the conditions of each cornea. The method also implements a dynamic selection of the parameters and the management of noise. Then, several numerical experiments are performed, comparing it with the results obtained by the standard Zernike-based procedure. The numerical experiments showed that the algorithm exhibits steady exponential error decay, independent of the level of aberration of the cornea. The complexity of each anisotropic Gaussian-basis function in the functional representation is the same, but the parameters vary to fit the current scale. This scale is determined only by the residual errors and not by the number of the iteration. Finally, the position and clustering of the centers, as well as the size of the shape parameters, provides additional spatial information about the regions of higher irregularity. The methodology can be used for the real-time reconstruction of both altimetric data and corneal power maps from the data collected by keratoscopes, such as the Placido ring-based topographers, that will be decisive in early detection of corneal diseases such as keratoconus.
The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, Y. L.
2017-02-01
Gannan produces more oranges than any other region in China and accounts for a substantial share of world production. Extracting citrus orchard extent quickly and effectively is important for pathogen defense, fruit production, and industrial planning. The traditional pixel-based spectral extraction of citrus orchards has low classification accuracy and cannot avoid the "salt-and-pepper" phenomenon; under the influence of noise, the problem of different objects sharing the same spectrum is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the research object, and to address the low accuracy of traditional pixel-based classification, a decision tree classification method based on object-oriented rule sets is proposed. Firstly, multi-scale segmentation is performed on the GF-1 remote sensing image data of the study area. Subsequently, sample objects are selected for statistical analysis of spectral and geometric features. Finally, combining the concept of decision tree classification, empirical thresholds on single bands, NDVI, band combinations, and object geometry characteristics are applied hierarchically to extract the information for the research area, implementing multi-scale segmentation together with hierarchical decision tree classification. The classification results are verified with the confusion matrix, and the overall Kappa index is 87.91%.
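The overall Kappa index reported above is computed from the confusion matrix. A minimal sketch of Cohen's kappa; the 2-class matrix is hypothetical, for illustration only:

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: product of marginal row and column totals.
    expected = sum(sum(row) * sum(col) for row, col in
                   zip(confusion, zip(*confusion))) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 2-class check: citrus orchard vs. everything else.
cm = [[90, 10],
      [ 5, 95]]
print(round(kappa(cm), 4))  # 0.85
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw overall accuracy for accuracy assessment of classified maps.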
NASA Astrophysics Data System (ADS)
Cvetkovic, Sascha D.; Schirris, Johan; de With, Peter H. N.
2009-01-01
For real-time imaging in surveillance applications, visibility of details is of primary importance to ensure customer confidence. If we display High Dynamic-Range (HDR) scenes whose contrast spans four or more orders of magnitude on a conventional monitor without additional processing, the results are unacceptable. Compression of the dynamic range is therefore a compulsory part of any high-end video processing chain, because standard monitors are inherently Low-Dynamic-Range (LDR) devices with at most two orders of display dynamic range. In real-time camera processing, many complex scenes are improved with local contrast enhancements, bringing details to the best possible visibility. In this paper, we show how a multi-scale high-frequency enhancement scheme, in which gain is a non-linear function of the detail energy, can be used for the dynamic range compression of HDR real-time video camera signals. We also show the connection of our enhancement scheme to the processing performed by the Human Visual System (HVS). Our algorithm simultaneously controls perceived sharpness, ringing ("halo") artifacts (contrast) and noise, resulting in a good balance between visibility of details and non-disturbance of artifacts. The overall quality enhancement, suitable for both HDR and LDR scenes, is based on a careful selection of the filter types for the multi-band decomposition and a detailed analysis of the signal per frequency band.
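The core of such a scheme, amplifying a high-frequency band with a gain that is a non-linear function of the detail energy, can be sketched in one dimension. The moving-average low-pass, the compressive gain law, and the noise floor value below are illustrative assumptions, not the authors' filter bank:

```python
def enhance(signal, radius=2, max_gain=2.0, noise_floor=0.05):
    """Single-band sketch of non-linear detail enhancement: details
    near the noise floor are left alone, and the gain is compressed
    as detail energy grows, limiting halo overshoot at strong edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        base = sum(signal[lo:hi]) / (hi - lo)   # low-pass estimate
        detail = signal[i] - base               # high-frequency band
        energy = abs(detail)
        if energy <= noise_floor:
            gain = 1.0                          # do not amplify noise
        else:
            gain = 1.0 + (max_gain - 1.0) / (1.0 + energy)
        out.append(base + gain * detail)
    return out

# A step edge gains a mild overshoot, sharpening its appearance.
print(enhance([0.0] * 5 + [1.0] * 5))
```

A multi-band version would apply the same idea per level of a pyramid decomposition, with per-band gain curves.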
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
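The Poisson-Gaussian mixture used to degrade the simulated imagery can be sketched per pixel as signal-dependent shot noise plus additive read noise. The gain and read-noise values are illustrative, and Knuth's sampler is used only to keep the sketch self-contained:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method; adequate for the modest rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def degrade(image, gain=1.0, read_sigma=2.0, rng=None):
    """Poisson-Gaussian mixture: signal-dependent shot noise scaled
    by the sensor gain, plus additive Gaussian read noise."""
    rng = rng or random.Random(0)
    return [[gain * poisson(px / gain, rng) + rng.gauss(0.0, read_sigma)
             for px in row] for row in image]

clean = [[20.0] * 8 for _ in range(8)]
noisy = degrade(clean)   # per-pixel variance is roughly 20 + 2**2
```

Each degraded pixel keeps the clean mean while its variance grows with the signal level, which is the defining property of the mixture model.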
Noise and correlations in a microwave-mechanical-optical transducer
NASA Astrophysics Data System (ADS)
Higginbotham, Andrew P.; Burns, Peter S.; Peterson, Robert W.; Urmey, Maxwell D.; Kampel, Nir S.; Menke, Timothy; Cicak, Katarina; Simmonds, Raymond W.; Regal, Cindy A.; Lehnert, Konrad W.
Viewed as resources for quantum information processing, microwave and optical fields offer complementary strengths. We simultaneously couple one mode of a micromechanical oscillator to a resonant microwave circuit and a high-finesse optical cavity. In previous work, this system was operated as a classical converter between microwave and optical signals at 4 K, operating with 10% efficiency and 1500 photons of added noise. To improve noise performance, we now operate the converter at 0.1 K. We have observed order-of-magnitude improvement in noise performance, and quantified effects from undesired interactions between the laser and superconducting circuit. Correlations between the microwave and optical fields have also been investigated, serving as a precursor to upcoming quantum operation. We acknowledge support from AFOSR MURI Grant FA9550-15-1-0015 and PFC National Science Foundation Grant 1125844.
Quantification of brain tissue through incorporation of partial volume effects
NASA Astrophysics Data System (ADS)
Gage, Howard D.; Santago, Peter, II; Snyder, Wesley E.
1992-06-01
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
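Once a mixture density has been fitted, the optimal decision point between two tissue classes lies where the weighted class densities cross. A sketch for two pure Gaussian classes; the means, widths, and equal priors are hypothetical, and the paper's model additionally includes a partial-volume term:

```python
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def decision_point(mu1, s1, p1, mu2, s2, p2, tol=1e-10):
    """Bisect for the threshold where the prior-weighted class
    densities cross; assumes a single crossing between the means."""
    f = lambda x: p1 * gauss_pdf(x, mu1, s1) - p2 * gauss_pdf(x, mu2, s2)
    lo, hi = mu1, mu2
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if (f(lo) > 0.0) == (f(mid) > 0.0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Equal priors, equal widths: the crossing sits midway between means.
print(decision_point(60.0, 8.0, 0.5, 100.0, 8.0, 0.5))  # ~80.0
```

Classifying each pixel by which side of the threshold it falls on, and integrating the tails beyond it, gives both the tissue quantification and its misclassification error.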
Towards a global-scale ambient noise cross-correlation data base
NASA Astrophysics Data System (ADS)
Ermert, Laura; Fichtner, Andreas; Sleeman, Reinoud
2014-05-01
We aim to obtain a global-scale database of ambient seismic noise correlations. This database - to be made publicly available at ORFEUS - will enable us to study the distribution of microseismic and hum sources, and to perform multi-scale full waveform inversion for crustal and mantle structure. Ambient noise tomography has developed into a standard technique. According to theory, cross-correlations equal inter-station Green's functions only if the wave field is equipartitioned or the sources are isotropically distributed. In an attempt to circumvent these assumptions, we aim to investigate possibilities to directly model noise cross-correlations and invert for their sources using adjoint techniques. A database containing correlations of 'gently' preprocessed noise, excluding steps, such as spectral whitening, that are explicitly intended to reduce the influence of a non-isotropic source distribution, is a key ingredient in this undertaking. Raw data are acquired from IRIS/FDSN and ORFEUS. We preprocess and correlate the time series using a tool based on the Python package ObsPy, which is run in parallel on a cluster of the Swiss National Supercomputing Centre. Correlations are computed in two ways: besides the classical cross-correlation function, the phase cross-correlation is calculated, which is an amplitude-independent measure of waveform similarity and therefore insensitive to high-energy events. In addition to linear stacks of these correlations, instantaneous phase stacks are calculated, which can be applied as optional weights, enhancing coherent portions of the traces and facilitating the emergence of a meaningful signal. The _STS1 virtual network by IRIS contains about 250 globally distributed stations, several of which have been operating for more than 20 years.
It is the first data collection we will use for correlations in the hum frequency range, as the STS-1 instrument response is flat in the largest part of the period range where hum is observed, up to a period of about 300 seconds. Thus they provide us with the best-suited measurements for hum.
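The basic correlate-and-stack step underlying such a database can be sketched as below. This shows only the classical cross-correlation with linear stacking, not the authors' ObsPy pipeline or the phase cross-correlation; the window length, lag range, and synthetic delayed records are assumptions for illustration:

```python
import numpy as np

def cross_correlate(a, b, max_lag):
    """Frequency-domain cross-correlation c[m] = sum_n a[n+m] * b[n],
    returned for lags m in [-max_lag, max_lag] samples."""
    nfft = 2 * len(a)  # zero-pad to avoid circular wrap-around
    cc = np.fft.irfft(np.fft.rfft(a, nfft) * np.conj(np.fft.rfft(b, nfft)))
    return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

def stack_correlations(a, b, win, max_lag):
    """Linear stack of window-by-window correlations, as in noise processing."""
    ccs = [cross_correlate(a[i:i + win], b[i:i + win], max_lag)
           for i in range(0, len(a) - win + 1, win)]
    return np.mean(ccs, axis=0)

# two 'stations' recording the same noise with a 5-sample relative delay:
# the stacked correlation should peak at a 5-sample lag (sign is convention)
rng = np.random.default_rng(1)
s = rng.standard_normal(4000)
cc = stack_correlations(s, np.roll(s, 5), win=1000, max_lag=50)
lag = np.arange(-50, 51)[int(np.argmax(cc))]
```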
Quantifying the abnormal hemodynamics of sickle cell anemia
NASA Astrophysics Data System (ADS)
Lei, Huan; Karniadakis, George
2012-02-01
Sickle red blood cells (SS-RBC) exhibit heterogeneous morphologies and abnormal hemodynamics in deoxygenated states. A multi-scale model for SS-RBC is developed based on the Dissipative Particle Dynamics (DPD) method. Different cell morphologies (sickle, granular, elongated shapes) typically observed in deoxygenated states are constructed and quantified by the Asphericity and Elliptical shape factors. The hemodynamics of SS-RBC suspensions is studied in both shear and pipe flow systems. The flow resistance obtained from both systems exhibits a larger value than the healthy blood flow due to the abnormal cell properties. Moreover, SS-RBCs exhibit abnormal adhesive interactions with both the vessel endothelium cells and the leukocytes. The effect of the abnormal adhesive interactions on the hemodynamics of sickle blood is investigated using the current model. It is found that both the SS-RBC - endothelium and the SS-RBC - leukocytes interactions can potentially trigger the vicious "sickling and entrapment" cycles, resulting in vaso-occlusion phenomena widely observed in micro-circulation experiments.
NASA Astrophysics Data System (ADS)
Chiu, Hung-Chih; Lin, Yen-Hung; Lo, Men-Tzung; Tang, Sung-Chun; Wang, Tzung-Dau; Lu, Hung-Chun; Ho, Yi-Lwun; Ma, Hsi-Pin; Peng, Chung-Kang
2015-08-01
The hierarchical interaction between electrical signals of the brain and heart is not fully understood. We hypothesized that the complexity of cardiac electrical activity can be used to predict changes in encephalic electricity after stress. Most methods for analyzing the interaction between the heart rate variability (HRV) and electroencephalography (EEG) require a computation-intensive mathematical model. To overcome these limitations and increase the predictive accuracy of human relaxing states, we developed a method to test our hypothesis. In addition to routine linear analysis, multiscale entropy and detrended fluctuation analysis of the HRV were used to quantify nonstationary and nonlinear dynamic changes in the heart rate time series. Short-time Fourier transform was applied to quantify the power of EEG. The clinical, HRV, and EEG parameters of postcatheterization EEG alpha waves were analyzed using change-score analysis and generalized additive models. In conclusion, the complexity of cardiac electrical signals can be used to predict EEG changes after stress.
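The multiscale entropy mentioned above, coarse-graining followed by sample entropy at each scale, can be sketched as follows. This is a minimal illustration, not the authors' code: m = 2 and r = 0.2 are conventional choices, and here r is applied relative to each coarse-grained series' standard deviation, a simplification of the standard algorithm (which fixes the tolerance on the original series):

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages, the standard MSE coarse-graining step."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within tolerance and A counts pairs of length-(m+1) templates."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        return (d <= tol).sum() - len(templates)  # exclude self-matches

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.standard_normal(1000))
```

For white noise the entropy tends to fall with scale, since coarse-graining averages away the uncorrelated fluctuations; correlated physiological signals behave differently, which is what makes the curve a complexity signature.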
MorphoGraphX: A platform for quantifying morphogenesis in 4D.
Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S
2015-05-06
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX ( www.MorphoGraphX.org), software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.
Community Multiscale Air Quality Model
The U.S. EPA developed the Community Multiscale Air Quality (CMAQ) system to apply a “one atmosphere” multiscale and multi-pollutant modeling approach based mainly on the “first principles” description of the atmosphere. The multiscale capability is supported by the governing di...
Scale-free avalanche dynamics in the stock market
NASA Astrophysics Data System (ADS)
Bartolozzi, M.; Leinweber, D. B.; Thomas, A. W.
2006-10-01
Self-organized criticality (SOC) has been claimed to play an important role in many natural and social systems. In the present work we empirically investigate the relevance of this theory to stock-market dynamics. Avalanches in stock-market indices are identified using a multi-scale wavelet-filtering analysis designed to remove Gaussian noise from the index. Here, new methods are developed to identify the optimal filtering parameters which maximize the noise removal. The filtered time series is reconstructed and compared with the original time series. A statistical analysis of both high-frequency Nasdaq E-mini Futures and daily Dow Jones data is performed. The results of this new analysis confirm earlier results revealing a robust power-law behaviour in the probability distribution function of the sizes, duration and laminar times between avalanches. This power-law behaviour holds the potential to be established as a stylized fact of stock market indices in general. While the memory process, implied by the power-law distribution of the laminar times, is not consistent with classical models for SOC, we note that a power-law distribution of the laminar times cannot be used to rule out self-organized critical behaviour.
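Once a filtered series is available, avalanche sizes, durations, and laminar (quiet) times can be extracted by simple thresholding of the fluctuations. The threshold and the Gaussian test series below are illustrative assumptions, not the authors' wavelet-filtered index data:

```python
import numpy as np

def avalanche_stats(x, thresh):
    """Sizes, durations, and laminar (waiting) times of excursions of |x|
    above a threshold; each run of consecutive active samples is one avalanche."""
    active = np.abs(x) > thresh
    # run boundaries: +1 at each avalanche start, -1 one past each end
    padded = np.concatenate(([0], active.astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    starts, ends = edges[0::2], edges[1::2]          # ends are exclusive
    sizes = np.array([np.abs(x[s:e]).sum() for s, e in zip(starts, ends)])
    durations = ends - starts
    laminar = starts[1:] - ends[:-1]                  # quiet gaps between avalanches
    return sizes, durations, laminar

rng = np.random.default_rng(3)
x = rng.standard_normal(10_000)
sizes, durations, laminar = avalanche_stats(x, thresh=2.0)
```

On Gaussian noise these distributions decay roughly exponentially; the power laws reported in the abstract are what distinguish the filtered market data from such a null case.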
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme
Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun
2015-01-01
Chest radiography is a common diagnostic imaging test, which contains an enormous amount of information about a patient. However, its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms; hence enhancement of the images is indispensable in order to improve visibility of the details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Here, contrast limited adaptive histogram equalization technique and noise suppression are simultaneously performed in wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of morphological filters allows for adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from the Japanese Society of Radiological Technology database. The results are compared with conventional enhancement techniques such as histogram equalization, contrast limited adaptive histogram equalization, Retinex, and some recently proposed methods to show its strengths. The experimental results reveal that the proposed method can remarkably improve the image contrast while keeping the sensitive chest tissue information so that radiologists might have a more precise interpretation. PMID:25709942
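The top-hat/bottom-hat step of such a hybrid scheme can be sketched with standard grayscale morphology: add the white top-hat (bright small-scale detail) and subtract the black top-hat (dark small-scale detail). The structuring-element size and the toy "radiograph" are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def morphological_enhance(img, size=15):
    """Contrast enhancement via morphological transforms: boost bright
    details with the white top-hat, darken dark details via the black
    top-hat (bottom-hat)."""
    img = img.astype(float)
    tophat = ndimage.white_tophat(img, size=size)
    bothat = ndimage.black_tophat(img, size=size)
    return img + tophat - bothat

# toy image: smooth background ramp plus a faint bright lesion
y, x = np.mgrid[0:64, 0:64]
background = 100 + 0.5 * x
lesion = 5.0 * (np.hypot(x - 32, y - 32) < 4)
img = background + lesion
out = morphological_enhance(img)
```

Because the flat structuring element is larger than the lesion, the opening removes it and the top-hat isolates it, so the enhanced image roughly doubles the lesion's local contrast while leaving the smooth background unchanged.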
Infrared fixed-pattern noise reduction method based on Shearlet Transform
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Zhao, Dong; Cheng, Kuanhong; Qian, Kun; Qin, Hanlin
2018-06-01
The non-uniformity correction (NUC) is an effective way to reduce fixed-pattern noise (FPN) and improve infrared image quality. The temporal high-pass NUC method is a practical NUC method because of its simple implementation. However, traditional temporal high-pass NUC methods rely heavily on scene motion and suffer from image ghosting and blurring. Thus, this paper proposes an improved NUC method based on the Shearlet Transform (ST). First, the raw infrared image is decomposed into multiscale and multi-orientation subbands by the ST; the FPN component exists mainly in certain high-frequency subbands. Then, the high-frequency subbands are processed by a temporal filter to extract the FPN, owing to its low-frequency temporal characteristics. In addition, each subband has a confidence parameter that determines the degree of FPN, estimated adaptively from the variance of the subbands. Finally, NUC is achieved by subtracting the estimated FPN component from the original subbands, and the corrected infrared image is obtained by the inverse ST. The performance of the proposed method is evaluated thoroughly with real and synthetic infrared image sequences. Experimental results indicate that the proposed method substantially reduces FPN, yielding lower roughness and RMSE.
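The classic temporal high-pass NUC that this paper improves on can be sketched per pixel as subtracting an exponential moving average, which tracks the static FPN while the moving scene averages out. The smoothing constant and synthetic scene below are assumptions; note that exactly this naive filter ghosts when the scene stops moving, which is the weakness the Shearlet-domain variant targets:

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05):
    """Classic temporal high-pass NUC: subtract a per-pixel exponential
    moving average (the slowly varying FPN-plus-mean-scene estimate)."""
    est = frames[0].astype(float)          # running per-pixel low-pass
    out = np.empty_like(frames, dtype=float)
    for t, f in enumerate(frames):
        est = (1 - alpha) * est + alpha * f
        out[t] = f - est
    return out

# synthetic sequence: a moving smooth scene plus fixed per-pixel offsets
rng = np.random.default_rng(2)
fpn = rng.normal(0, 10, size=(32, 32))     # fixed-pattern offsets
base = np.outer(np.hanning(32), np.hanning(32)) * 50
frames = np.stack([np.roll(base, t, axis=1) + fpn for t in range(200)])
corrected = temporal_highpass_nuc(frames)
```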
A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference
NASA Astrophysics Data System (ADS)
Kolb, J.; Lekic, V.
2012-12-01
Analysis of P-S and S-P conversions allows us to map receiver-side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist, including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of background noise, which is relatively easy to characterize, and signal-generated noise, which is much more difficult to quantify, we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional hierarchical Bayesian approach has been used successfully in the inversion of receiver functions for the shear and compressional wave speeds of an unknown number of layers [1]. In our method we use a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. We parameterize the receiver function as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima.
Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
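A fixed-dimension, known-noise caricature of such a sampler might look like the following: a Metropolis chain over the parameters of a single Gaussian pulse fitted to noisy data. The paper's sampler is transdimensional (the number of Gaussians varies) and hierarchical (the noise mean and variance are sampled too); everything here, including the proposal scale and the synthetic data, is a simplifying assumption:

```python
import numpy as np

def gaussians(t, params):
    """Sum of Gaussian pulses; params is a flat array of (amp, center, width)."""
    out = np.zeros_like(t)
    for a, c, w in np.asarray(params).reshape(-1, 3):
        out += a * np.exp(-0.5 * ((t - c) / w) ** 2)
    return out

def metropolis(data, t, init, sigma, steps=5000, prop=0.05, seed=7):
    """Fixed-dimension Metropolis sampler with a Gaussian likelihood of
    known noise level sigma and an (improper) flat prior."""
    rng = np.random.default_rng(seed)
    x = np.asarray(init, dtype=float)

    def loglike(p):
        r = data - gaussians(t, p)
        return -0.5 * np.sum((r / sigma) ** 2)

    lx = loglike(x)
    for _ in range(steps):
        y = x + rng.normal(0, prop, size=x.shape)   # random-walk proposal
        ly = loglike(y)
        if np.log(rng.random()) < ly - lx:          # Metropolis acceptance
            x, lx = y, ly
    return x

t = np.linspace(0, 10, 400)
true_params = [1.0, 4.0, 0.3]
rng = np.random.default_rng(8)
data = gaussians(t, true_params) + rng.normal(0, 0.05, t.shape)
fit = metropolis(data, t, init=[0.8, 4.2, 0.4], sigma=0.05)
```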
NASA Technical Reports Server (NTRS)
Mccurdy, David A.
1988-01-01
A laboratory experiment was conducted to quantify the annoyance of people to the flyover noise of advanced turboprop aircraft with counter-rotating propellers (CRP) having an equal number of blades on each rotor. The objectives were to determine the effects of tonal content on annoyance, and to compare annoyance to n x n CRP advanced turboprop aircraft with annoyance to conventional turboprop and jet aircraft. A computer synthesis system was used to generate 27 realistic, time-varying simulations of advanced turboprop takeoff noise in which the tonal content was systematically varied to represent the factorial combinations of nine fundamental frequencies and three tone-to-broadband noise ratios. These advanced turboprop simulations, along with recordings of five conventional turboprop takeoffs and five conventional jet takeoffs, were presented at three D-weighted sound pressure levels to 64 subjects in an anechoic chamber. Analyses of the subjects' annoyance judgments compared the three aircraft types and examined the effects of the differences in tonal content among the advanced turboprop noises. The annoyance prediction ability of various noise metrics was also examined.
A vessel noise budget for Admiralty Inlet, Puget Sound, Washington (USA).
Bassett, Christopher; Polagye, Brian; Holt, Marla; Thomson, Jim
2012-12-01
One calendar year of Automatic Identification System (AIS) ship-traffic data was paired with hydrophone recordings to assess ambient noise in northern Admiralty Inlet, Puget Sound, WA (USA) and to quantify the contribution of vessel traffic. The study region included inland waters of the Salish Sea within a 20 km radius of the hydrophone deployment site. Spectra and hourly, daily, and monthly ambient noise statistics for unweighted broadband (0.02-30 kHz) and marine mammal, or M-weighted, sound pressure levels showed variability driven largely by vessel traffic. Over the calendar year, 1363 unique AIS transmitting vessels were recorded, with at least one AIS transmitting vessel present in the study area 90% of the time. A vessel noise budget was calculated for all vessels equipped with AIS transponders. Cargo ships were the largest contributor to the vessel noise budget, followed by tugs and passenger vessels. A simple model to predict received levels at the site based on an incoherent summation of noise from different vessels resulted in a cumulative probability density function of broadband sound pressure levels that shows good agreement with 85% of the temporal data.
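The incoherent summation used in such a vessel-noise model is the standard power sum of decibel levels, sketched here as a small helper (the example levels are arbitrary):

```python
import numpy as np

def incoherent_sum_db(levels_db):
    """Incoherent (power) sum of sound pressure levels:
    L_total = 10 * log10(sum_i 10**(L_i / 10))."""
    levels = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (levels / 10.0)))

# two equal 100 dB sources combine incoherently to about 103 dB,
# since doubling the power adds 10*log10(2) ~ 3 dB
total = incoherent_sum_db([100.0, 100.0])
```

Summing per-vessel received levels this way, rather than coherently, reflects the assumption that the vessels' pressure signals are uncorrelated.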
A decision-support tool for the control of urban noise pollution.
Suriano, Marcia Thais; de Souza, Léa Cristina Lucas; da Silva, Antonio Nelson Rodrigues
2015-07-01
Improving the quality of life is increasingly seen as an important urban planning goal. In order to reach it, various tools are being developed to mitigate the negative impacts of human activities on society. This paper develops a methodology for quantifying the population's exposure to noise, by proposing a classification of urban blocks. Taking into account the vehicular flow and traffic composition of the surroundings of urban blocks, we generated a noise map by applying a computational simulation. The urban blocks were classified according to their noise range and then the population was estimated for each urban block, by a process which was based on the census tract and the constructed area of the blocks. The acoustical classes of urban blocks and the number of inhabitants per block were compared, so that the population exposed to noise levels above 65 dB(A) could be estimated, which is the highest limit established by legislation. As a result, we developed a map of the study area, so that urban blocks that should be priority targets for noise mitigation actions can be quickly identified.
Perceptual learning for speech in noise after application of binary time-frequency masks
Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.
2013-01-01
Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56–0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners with the greatest improvement observed for the same materials used in training. PMID:23464038
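An ideal binary TF mask with a local SNR threshold criterion can be sketched as follows. The 0 dB criterion, the STFT parameters, and the sinusoid standing in for a speech target are all illustrative assumptions, not the study's materials:

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, noise, fs, lc_db=0.0, nperseg=256):
    """Ideal binary TF mask: keep mixture bins where the (oracle)
    target-to-noise power ratio exceeds the local criterion lc_db."""
    _, _, S = stft(target, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    _, _, M = stft(target + noise, fs, nperseg=nperseg)
    snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
    mask = snr_db > lc_db
    _, y = istft(M * mask, fs, nperseg=nperseg)   # resynthesize kept bins
    return y, mask

fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)   # stand-in for a speech target
rng = np.random.default_rng(4)
noise = rng.standard_normal(fs)
y, mask = ideal_binary_mask(target, noise, fs)
```

Raising the criterion discards more of the mixture, which is how the study's different threshold criteria trade noise rejection against target distortion.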
An improved robust blind motion de-blurring algorithm for remote sensing images
NASA Astrophysics Data System (ADS)
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image with a blur kernel, plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that is well suited to remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient-edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization, in which we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent-image reconstruction, an improved adaptive deconvolution algorithm based on a TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve fine details and suppress noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
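The forward model in the first sentence, convolution with a motion-blur kernel plus additive noise, can be simulated directly. The kernel construction below is a generic linear-motion PSF with hypothetical length and angle, not the paper's estimated kernel:

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_blur_kernel(length, angle_deg, size=15):
    """Normalized linear motion-blur PSF (length in pixels, hypothetical)."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for r in np.linspace(-length / 2, length / 2, 4 * length):
        i = int(round(c + r * np.sin(theta)))
        j = int(round(c + r * np.cos(theta)))
        if 0 <= i < size and 0 <= j < size:
            k[i, j] = 1.0
    return k / k.sum()

rng = np.random.default_rng(6)
img = rng.random((64, 64))                 # stand-in for a latent image
k = motion_blur_kernel(7, 30)
blurred = fftconvolve(img, k, mode='same') + rng.normal(0, 0.01, (64, 64))
```

Blind de-blurring inverts exactly this model without access to `k`, which is why an inaccurate kernel estimate degrades the restoration so strongly.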
Vehicle track segmentation using higher order random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quach, Tu-Thach
2017-01-09
Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Regulski, Krzysztof
2016-08-01
We present a process of semantic meta-model development for data management in an adaptable multiscale modeling framework. The main problems in ontology design are discussed, and a solution achieved as a result of the research is presented. The main concepts concerning the application and data management background for multiscale modeling were derived from the AM3 approach—object-oriented Agile multiscale modeling methodology. The ontological description of multiscale models enables validation of semantic correctness of data interchange between submodels. We also present a possibility of using the ontological model as a supervisor in conjunction with a multiscale model controller and a knowledge base system. Multiscale modeling formal ontology (MMFO), designed for describing multiscale models' data and structures, is presented. A need for applying meta-ontology in the MMFO development process is discussed. Examples of MMFO application in describing thermo-mechanical treatment of metal alloys are discussed. Present and future applications of MMFO are described.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem
2017-01-01
Three different multiscale models, based on the method of cells (generalized and high fidelity) micromechanics models were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: Concurrent multiscale modeling of all phases using the generalized method of cells, synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells, and hierarchical (one-way coupling in space) multiscale modeling with the high fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Institute.
Reduction of Altitude Diffuser Jet Noise Using Water Injection
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Saunders, Grady P.; Langford, Lester A.
2014-01-01
A feasibility study on the effects of injecting water into the exhaust plume of an altitude rocket diffuser for the purpose of reducing far-field acoustic noise has been performed. Water injection design parameters such as axial placement, angle of injection, injector diameter, and water mass flow rate were systematically varied during the operation of a subscale altitude test facility. The changes in far-field acoustic noise were measured with an array of free-field microphones in order to quantify the effects of the water injection on overall sound pressure level spectra and directivity. The results showed that significant noise reductions were possible, with optimum conditions corresponding to water injection at or just upstream of the diffuser exit plane. Increasing the angle and mass flow rate of water injection also improved noise reduction. However, a limit on the maximum water flow rate existed, as too large a flow rate could un-start the supersonic diffuser.
Method to manage integration error in the Green-Kubo method.
Oliveira, Laura de Sousa; Greaney, P Alex
2017-02-01
The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
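The running Green-Kubo integral and the random-walk character of its integrated noise can be reproduced on a synthetic flux. The AR(1) process below stands in for a molecular-dynamics heat-flux trace; its correlation time is known, so the exact plateau of the running integral is available for comparison (all parameters are assumptions for illustration):

```python
import numpy as np
from scipy.signal import lfilter

def autocorrelation(x, max_lag):
    """Unbiased FFT-based autocorrelation estimate up to max_lag."""
    x = x - x.mean()
    n = len(x)
    s = np.fft.rfft(x, 2 * n)              # zero-pad to avoid circular wrap
    acf = np.fft.irfft(s * np.conj(s))[:max_lag]
    return acf / np.arange(n, n - max_lag, -1)

# synthetic "heat flux": AR(1) noise with correlation time tau (in steps)
rng = np.random.default_rng(5)
tau, n = 10.0, 200_000
phi = np.exp(-1.0 / tau)
J = lfilter([1.0], [1.0, -phi], rng.standard_normal(n))

acf = autocorrelation(J, 500)
running = np.cumsum(acf)                   # Green-Kubo running integral (dt = 1)
# Exact plateau for this process: 1 / ((1 - phi**2) * (1 - phi)), about 58.
# Beyond a few correlation times the integrand is mostly noise, so the
# running integral performs a random walk with a growing uncertainty
# envelope, which is the truncation trade-off the paper quantifies.
```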