Triantafyllou, Christina; Polimeni, Jonathan R; Keil, Boris; Wald, Lawrence L
2016-12-01
Physiological nuisance fluctuations ("physiological noise") are a major contribution to the time-series signal-to-noise ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SSH formed from the signal intensity vectors S. Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SSH. Time-series noise covariances in array coils are found to differ from Ψ0 and more surprisingly, from the signal coupling matrix SSH. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. Magn Reson Med 76:1708-1719, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
Coil-to-coil physiological noise correlations and their impact on fMRI time-series SNR
Triantafyllou, C.; Polimeni, J. R.; Keil, B.; Wald, L. L.
2017-01-01
Purpose: Physiological nuisance fluctuations ("physiological noise") are a major contribution to the time-series Signal to Noise Ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. Theory and Methods: We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SSH formed from the signal intensity vectors S. Results: Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SSH. Conclusion: Time-series noise covariances in array coils are found to differ from Ψ0 and more surprisingly, from the signal coupling matrix SSH. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. PMID:26756964
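As a rough numerical illustration of the quantities discussed above, the following sketch (not from the paper; all values are made up, assuming a simulated 4-channel array) combines channel signals with weights derived from the thermal covariance and compares the SNR obtained under a thermal noise covariance Ψ0 against the tSNR obtained when a correlated physiological term is added to form Ψt.

import numpy as np

rng = np.random.default_rng(0)
n_ch = 4                                      # number of coil channels (hypothetical)

S = rng.uniform(0.5, 1.5, n_ch)               # per-channel signal intensities
psi0 = 0.05 * np.eye(n_ch)                    # thermal noise covariance (nearly uncorrelated)
phys = 0.02 * np.outer(S, S)                  # signal-proportional, fully correlated fluctuation
psi_t = psi0 + phys                           # time-series noise covariance

# Combination weights optimized for the thermal covariance
w = np.linalg.solve(psi0, S)

def combined_snr(w, S, cov):
    """SNR of the linear combination w·x when the noise covariance is `cov`."""
    return (w @ S) / np.sqrt(w @ cov @ w)

snr0 = combined_snr(w, S, psi0)    # image SNR (thermal noise only)
tsnr = combined_snr(w, S, psi_t)   # time-series SNR (thermal plus physiological noise)
print(f"SNR0 = {snr0:.1f}, tSNR = {tsnr:.1f}")
# With a correlated physiological term, tSNR saturates well below SNR0.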
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-squares estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with a similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade the spatial resolution or achieve worse low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
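As a hedged illustration of why direct matrix-inversion decomposition amplifies noise, the following sketch uses a made-up 2x2 attenuation matrix (the coefficients are illustrative, not from the abstract) and propagates the measurement covariance into the decomposed-image covariance, the same quantity the proposed method uses as its penalty weight.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-material attenuation coefficients (rows: low/high kVp; columns: materials)
A = np.array([[0.40, 0.25],
              [0.30, 0.22]])

x_true = np.array([0.7, 0.3])                 # true material densities in one pixel
b_noisy = A @ x_true + 0.01 * rng.standard_normal(2)   # noisy dual-energy measurements

x_direct = np.linalg.solve(A, b_noisy)        # direct decomposition via matrix inversion
print("condition number of A:", np.linalg.cond(A))
print("decomposed densities :", x_direct)

# Noise propagation: the decomposed-image covariance is A^-1 Sigma_b A^-T,
# so a nearly singular A amplifies the measurement noise dramatically.
Sigma_b = (0.01 ** 2) * np.eye(2)
Ainv = np.linalg.inv(A)
Sigma_x = Ainv @ Sigma_b @ Ainv.T
print("decomposed noise std :", np.sqrt(np.diag(Sigma_x)))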
NASA Astrophysics Data System (ADS)
Golafshan, Reza; Yuce Sanliturk, Kenan
2016-03-01
Ball bearings remain one of the most crucial components in industrial machines and due to their critical role, it is of great importance to monitor their conditions under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability in identifying the faults makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to the ball bearing time domain vibration signals as well as to their spectrums for the elimination of the background noise and the improvement the reliability of the fault detection process. The test cases conducted using experimental as well as the simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for the ball bearing fault detection.
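A minimal sketch of Hankel-matrix SVD de-noising applied to a synthetic tonal signal in broadband noise; the window length, rank, and test signal are illustrative assumptions, not the study's settings.

import numpy as np

def hankel_svd_denoise(x, window, rank):
    """De-noise a 1-D signal via truncated SVD of its Hankel (trajectory) matrix."""
    n = len(x)
    rows, cols = window, n - window + 1
    H = np.empty((rows, cols))
    for i in range(rows):
        H[i, :] = x[i:i + cols]                       # build the Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]      # keep the dominant components
    # Reconstruct the signal by averaging along anti-diagonals
    y = np.zeros(n)
    count = np.zeros(n)
    for i in range(rows):
        y[i:i + cols] += Hr[i, :]
        count[i:i + cols] += 1
    return y / count

# Synthetic amplitude-modulated tone (3 spectral lines, so rank about 6) buried in noise
t = np.linspace(0, 1, 2000)
clean = np.sin(2 * np.pi * 157 * t) * (1 + 0.5 * np.sin(2 * np.pi * 25 * t))
noisy = clean + 0.8 * np.random.default_rng(2).standard_normal(t.size)
denoised = hankel_svd_denoise(noisy, window=200, rank=6)
print("residual noise power before/after:", np.var(noisy - clean), np.var(denoised - clean))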
2016-06-01
index. The covariance matrix associated with the discrete-time process noise vector $[\,\omega_{d\phi}(k)\;\;\omega_{df}(k)\,]^{T}$ is
$$Q_{dt}(k) = \begin{bmatrix} S_{\phi}T + \dfrac{S_{f}T^{3}}{3} & \dfrac{S_{f}T^{2}}{2} \\ \dfrac{S_{f}T^{2}}{2} & S_{f}T \end{bmatrix}$$
… The discrete-time process noise covariance matrix, scaled to metres, is shown on page 153 of [1]. It is
$$Q_{d}(k) = c^{2} Q_{dt}(k) = \begin{bmatrix} 0.0114 & 0.0019 \\ 0.0019 & 0.0039 \end{bmatrix} \qquad (8)$$
… somewhat, a shorthand notation is used where appropriate; viz., consider an $m \times n$ matrix $A$ with elements $a_{ij}(k)$, $i = 1, \ldots, m$, $j = 1, \ldots, n$; then
NASA Astrophysics Data System (ADS)
Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui
2018-05-01
We address the optimal configuration of a partial Mueller matrix polarimeter used to determine the ellipsometric parameters in the presence of additive Gaussian noise and signal-dependent shot noise. The numerical results show that, for a PSG/PSA consisting of a variable retarder and a fixed polarizer, the detection process immune to these two types of noise can be optimally composed of a 121.2° retardation with a pair of azimuths ±71.34° and a 144.48° retardation with a pair of azimuths ±31.56° for the measurement of four Mueller matrix elements. Compared with existing configurations, the configuration presented in this paper can effectively decrease the measurement variance and thus statistically improve the measurement precision of the ellipsometric parameters.
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the problem of restoring images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm in this paper. Meanwhile, we put forward a novel and effective algorithm called augmented Lagrange multipliers to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is superior if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can outperform the traditional methods significantly, not only in the simultaneous removal of Gaussian and impulse noise and in the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
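A simplified sketch of an augmented-Lagrange-multiplier iteration for the closely related robust PCA problem (nuclear norm plus ℓ1 norm), without the sampling mask used for true matrix completion from corrupted samplings; the parameter choices follow common defaults and are not the paper's.

import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_alm(D, lam=None, rho=1.5, n_iter=200, tol=1e-7):
    """Split D into low-rank L plus sparse S by minimizing ||L||_* + lam*||S||_1."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)   # dual variable init
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)    # nuclear-norm proximal step
        S = soft_threshold(D - L + Y / mu, lam / mu)   # l1-norm proximal step
        R = D - L - S
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return L, S

# Example: a low-rank image-like matrix with 10% impulse corruption
rng = np.random.default_rng(3)
L0 = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 100))
S0 = np.zeros_like(L0)
mask = rng.random(L0.shape) < 0.1
S0[mask] = 10 * rng.standard_normal(mask.sum())
L_hat, S_hat = rpca_alm(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))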
[Characteristics, advantages, and limits of matrix tests].
Brand, T; Wagener, K C
2017-03-01
Deterioration of communication abilities due to hearing problems is particularly relevant in listening situations with noise. Therefore, speech intelligibility tests in noise are required for audiological diagnostics and evaluation of hearing rehabilitation. This study analyzed the characteristics of matrix tests assessing the 50 % speech recognition threshold in noise. What are their advantages and limitations? Matrix tests are based on a matrix of 50 words (10 five-word sentences with same grammatical structure). In the standard setting, 20 sentences are presented using an adaptive procedure estimating the individual 50 % speech recognition threshold in noise. At present, matrix tests in 17 different languages are available. A high international comparability of matrix tests exists. The German language matrix test (OLSA, male speaker) has a reference 50 % speech recognition threshold of -7.1 (± 1.1) dB SNR. Before using a matrix test for the first time, the test person has to become familiar with the basic speech material using two training lists. Hereafter, matrix tests produce constant results even if repeated many times. Matrix tests are suitable for users of hearing aids and cochlear implants, particularly for assessment of benefit during the fitting process. Matrix tests can be performed in closed form and consequently with non-native listeners, even if the experimenter does not speak the test person's native language. Short versions of matrix tests are available for listeners with a shorter memory span, e.g., children.
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
Digital radiology using active matrix readout: amplified pixel detector array for fluoroscopy.
Matsuura, N; Zhao, W; Huang, Z; Rowlands, J A
1999-05-01
Active matrix array technology has made possible the concept of flat panel imaging systems for radiography. In the conventional approach a thin-film circuit built on glass contains the necessary switching components (thin-film transistors or TFTs) to readout an image formed in either a phosphor or photoconductor layer. Extension of this concept to real time imaging--fluoroscopy--has had problems due to the very low noise required. A new design strategy for fluoroscopic active matrix flat panel detectors has therefore been investigated theoretically. In this approach, the active matrix has integrated thin-film amplifiers and readout electronics at each pixel and is called the amplified pixel detector array (APDA). Each amplified pixel consists of three thin-film transistors: an amplifier, a readout, and a reset TFT. The performance of the APDA approach compared to the conventional active matrix was investigated for two semiconductors commonly used to construct active matrix arrays--hydrogenated amorphous silicon and polycrystalline silicon. The results showed that with amplification close to the pixel, the noise from the external charge preamplifiers becomes insignificant. The thermal and flicker noise of the readout and the amplifying TFTs at the pixel become the dominant sources of noise. The magnitude of these noise sources is strongly dependent on the TFT geometry and its fabrication process. Both of these could be optimized to make the APDA active matrix operate at lower noise levels than is possible with the conventional approach. However, the APDA cannot be made to operate ideally (i.e., have noise limited only by the amount of radiation used) at the lowest exposure rate required in medical fluoroscopy.
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
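A generic sketch of complete versus truncated pseudoinverse decoding via SVD; the 16x16 coding matrix and channel data below are random stand-ins, not the actual DE-STA coding matrix.

import numpy as np

def pinv_svd(A, truncate_last=False):
    """Pseudoinverse via SVD, optionally discarding the smallest singular value."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = len(s) - 1 if truncate_last else len(s)
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]
    return (Vt.T * s_inv) @ U.T

rng = np.random.default_rng(4)
A = rng.standard_normal((16, 16))              # stand-in for the coding matrix
x_true = rng.standard_normal((16, 64))         # stand-in for equivalent STA channel data
y = A @ x_true + 0.05 * rng.standard_normal((16, 64))   # encoded, noisy acquisitions

x_cpi = pinv_svd(A) @ y                        # complete pseudoinverse (CPI) decoding
x_tpi = pinv_svd(A, truncate_last=True) @ y    # truncated pseudoinverse (TPI) decoding
for name, x in [("CPI", x_cpi), ("TPI", x_tpi)]:
    print(name, "relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))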
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, is first interpolated by using a Color Filter Array Interpolation (CFAI) method. Another choice is that the raw Bayer-matrix image data is processed directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge pre-detection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
Gaussian memory in kinematic matrix theory for self-propellers.
Nourhani, Amir; Crespi, Vincent H; Lammert, Paul E
2014-12-01
We extend the kinematic matrix ("kinematrix") formalism [Phys. Rev. E 89, 062304 (2014)], which via simple matrix algebra accesses ensemble properties of self-propellers influenced by uncorrelated noise, to treat Gaussian correlated noises. This extension brings into reach many real-world biological and biomimetic self-propellers for which inertia is significant. Applying the formalism, we analyze in detail ensemble behaviors of a 2D self-propeller with velocity fluctuations and orientation evolution driven by an Ornstein-Uhlenbeck process. On the basis of exact results, a variety of dynamical regimes determined by the inertial, speed-fluctuation, orientational diffusion, and emergent disorientation time scales are delineated and discussed.
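A minimal Euler-Maruyama sketch of a 2D self-propeller whose orientation evolution is driven by Ornstein-Uhlenbeck angular noise; the parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(5)
dt, n_steps, n_walkers = 1e-3, 20000, 200
v0 = 1.0            # mean propulsion speed
tau = 0.5           # memory (relaxation) time of the orientational noise
sigma = 2.0         # strength of the Ornstein-Uhlenbeck angular noise

theta = np.zeros(n_walkers)          # heading angle of each self-propeller
eta = np.zeros(n_walkers)            # OU-correlated angular velocity noise
pos = np.zeros((n_walkers, 2))

for _ in range(n_steps):
    # Euler-Maruyama step of the OU process driving the orientation evolution
    eta += -(eta / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_walkers)
    theta += eta * dt
    pos[:, 0] += v0 * np.cos(theta) * dt
    pos[:, 1] += v0 * np.sin(theta) * dt

msd = np.mean(np.sum(pos ** 2, axis=1))
print("ensemble mean-squared displacement after", n_steps * dt, "s:", msd)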
Adapting Covariance Propagation to Account for the Presence of Modeled and Unmodeled Maneuvers
NASA Technical Reports Server (NTRS)
Schiff, Conrad
2006-01-01
This paper explores techniques that can be used to adapt the standard linearized propagation of an orbital covariance matrix to the case where there is a maneuver and an associated execution uncertainty. A Monte Carlo technique is used to construct a final orbital covariance matrix for a 'prop-burn-prop' process that takes into account initial state uncertainty and execution uncertainties in the maneuver magnitude. This final orbital covariance matrix is regarded as 'truth' and comparisons are made with three methods using modified linearized covariance propagation. The first method accounts for the maneuver by modeling its nominal effect within the state transition matrix but excludes the execution uncertainty by omitting a process noise matrix from the computation. The second method does not model the maneuver but includes a process noise matrix to account for the uncertainty in its magnitude. The third method, which is essentially a hybrid of the first two, includes the nominal portion of the maneuver via the state transition matrix and uses a process noise matrix to account for the magnitude uncertainty. The first method is unable to produce the final orbit covariance except in the case of zero maneuver uncertainty. The second method yields good accuracy for the final covariance matrix but fails to model the final orbital state accurately. Agreement between the simulated covariance data produced by this method and the Monte Carlo truth data fell within 0.5-2.5 percent over a range of maneuver sizes that span two orders of magnitude (0.1-20 m/s). The third method, which yields a combination of good accuracy in the computation of the final covariance matrix and correct accounting for the presence of the maneuver in the nominal orbit, is the best method for applications involving the computation of times of closest approach and the corresponding probability of collision, PC. However, applications for the two other methods exist and are briefly discussed. Although the process model ("prop-burn-prop") that was studied is very simple - point-mass gravitational effects due to the Earth combined with an impulsive delta-V in the velocity direction for the maneuver - generalizations to more complex scenarios, including high fidelity force models, finite duration maneuvers, and maneuver pointing errors, are straightforward and are discussed in the conclusion.
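A toy 2-state sketch of the hybrid (third) method described above: the nominal maneuver is carried by the state transition matrix while the maneuver-magnitude uncertainty enters as a process noise matrix; the dynamics and numbers are simplified stand-ins, not the orbital point-mass model.

import numpy as np

dt = 60.0                                   # propagation step [s]
Phi = np.array([[1.0, dt],                  # state transition matrix (position, velocity)
                [0.0, 1.0]])

P0 = np.diag([100.0**2, 0.1**2])            # initial position/velocity covariance
sigma_dv = 0.05                             # 1-sigma maneuver execution error [m/s]
Q_burn = np.array([[0.0, 0.0],
                   [0.0, sigma_dv**2]])     # process noise for the impulsive delta-V

# prop -> burn -> prop; the nominal delta-V would shift the state estimate (not shown),
# while its execution uncertainty is injected into the covariance as Q_burn.
P = Phi @ P0 @ Phi.T
P = P + Q_burn
P = Phi @ P @ Phi.T
print("final covariance:\n", P)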
Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum
NASA Astrophysics Data System (ADS)
Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.
2017-09-01
Passive imaging techniques from ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition. It is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed in order to attenuate the directional and deterministic behaviour of this real ambient noise. Most of them are applied to individual seismograms before cross-correlation computation. The most widely used techniques are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step to be used together with the classical ones, based on the spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix. We apply a one-bit normalization to the covariance matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to the data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement compared with the classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements to be performed even in the presence of the earthquake.
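A minimal sketch of the described pre-processing on synthetic narrow-band data: form the array covariance (cross-spectral) matrix from a limited number of sub-windows, equalize its eigenspectrum by setting the retained eigenvalues to one, and keep the equalized cross-spectra; the data, sizes, and tolerance are made-up assumptions.

import numpy as np

rng = np.random.default_rng(6)
n_sta, n_win = 20, 10      # more stations than sub-windows, so the covariance is rank-limited

# Synthetic narrow-band station spectra: one strong coherent arrival plus diffuse noise
steer = np.exp(1j * 2 * np.pi * rng.random(n_sta))
coeffs = rng.standard_normal(n_win) + 1j * rng.standard_normal(n_win)
X = 5.0 * np.outer(steer, coeffs) + (rng.standard_normal((n_sta, n_win))
                                     + 1j * rng.standard_normal((n_sta, n_win)))

C = X @ X.conj().T / n_win                  # data covariance (cross-spectral) matrix

# One-bit normalization of the eigenspectrum: keep the eigenvectors, equalize eigenvalues
w, V = np.linalg.eigh(C)
keep = w > 1e-10 * w.max()                  # retain the nonzero part of the spectrum
C_eq = V[:, keep] @ V[:, keep].conj().T

print("eigenvalue spread before equalization:", w[keep].max() / w[keep].min())
print("rank retained after equalization     :", int(keep.sum()))
# The off-diagonal entries of C_eq are the equalized cross-spectra between station
# pairs; inverse Fourier transforming them gives the time-domain cross-correlations.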
Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test
NASA Astrophysics Data System (ADS)
Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2018-01-01
The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary method for interference measurement, because of its large dynamic range, highly flexible testing of low-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on the calibration. However, the calibration process remains inefficient, and the signal-to-noise ratio limits the accuracy of the calibration. In this paper, brightness checkerboard lattices were used to replace the traditional dot matrix. The brightness checkerboard method can reduce the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displays a brightness checkerboard lattice in which brighter and darker checkerboards are alternately arranged. Based on the image on the detector, the relationship between the rays at certain angles and the photosensitive positions in the detector coordinates can be obtained. A differential de-noising method can effectively reduce the impact of noise on the measurement results. Simulation and experiments proved the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattices is about four times that of the traditional dot matrix, and the signal-to-noise ratio of the calibration is significantly improved.
Removal of Stationary Sinusoidal Noise from Random Vibration Signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian; Cap, Jerome S.
In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02, only the matrix inversion technique can remove the tones, but the metrics used to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
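A generic least-squares ("matrix inversion" style) sine-removal sketch at a tone frequency identified from an FFT peak; the frequencies, amplitudes, and sample rate are illustrative and do not reproduce the report's specific algorithms or thresholds.

import numpy as np

def remove_sine(x, fs, f0):
    """Least-squares fit of a sine/cosine pair at frequency f0, then subtract it."""
    t = np.arange(len(x)) / fs
    A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)     # solves the normal equations
    return x - A[:, :2] @ coef[:2], coef

fs = 4096.0
rng = np.random.default_rng(7)
t = np.arange(0, 4.0, 1 / fs)
random_part = rng.standard_normal(t.size)            # broadband random vibration
tone = 0.5 * np.sin(2 * np.pi * 120.0 * t + 0.7)     # stationary sine tone
signal = random_part + tone

# Identify the tone frequency from the FFT peak, then remove it
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
f0 = freqs[np.argmax(spec[1:]) + 1]
cleaned, _ = remove_sine(signal, fs, f0)
print("estimated tone frequency:", f0)
print("residual tone power:", np.var(cleaned - random_part))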
Removing Background Noise with Phased Array Signal Processing
NASA Technical Reports Server (NTRS)
Podboy, Gary; Stephens, David
2015-01-01
Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross-spectral matrix subtraction. The test was conducted in the free jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
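A schematic sketch of cross-spectral matrix subtraction followed by conventional delay-and-sum beamforming (not the Functional Beamforming used in the test) on synthetic single-frequency line-array data; the geometry, frequencies, and source levels are made up.

import numpy as np

rng = np.random.default_rng(8)
n_mics, n_snaps, f, c = 24, 500, 2000.0, 343.0
mic_x = np.linspace(-0.5, 0.5, n_mics)            # line-array positions [m] (illustrative)

def steering(angle_deg):
    tau = mic_x * np.sin(np.deg2rad(angle_deg)) / c   # per-mic delays for a plane wave
    return np.exp(-2j * np.pi * f * tau) / np.sqrt(n_mics)

def source_snapshots(angle_deg, power):
    amp = np.sqrt(power / 2) * (rng.standard_normal(n_snaps) + 1j * rng.standard_normal(n_snaps))
    return np.sqrt(n_mics) * np.outer(steering(angle_deg), amp)

def csm(snapshots):
    return snapshots @ snapshots.conj().T / snapshots.shape[1]

background = source_snapshots(-40.0, 100.0) + source_snapshots(10.0, 50.0)  # rig/jet noise
driver = source_snapshots(25.0, 2.0)                                        # weak acoustic driver

C_total = csm(background + driver)
C_bkg = csm(background)          # CSM of the background alone (an ideal background measurement)
C_sig = C_total - C_bkg          # cross-spectral matrix subtraction

angles = np.linspace(-90.0, 90.0, 361)
response = [float(np.real(steering(a).conj() @ C_sig @ steering(a))) for a in angles]
print("beamforming peak at", angles[int(np.argmax(response))], "degrees")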
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
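A minimal sketch of global minimum-variance portfolio weights computed from a covariance estimate, illustrating the noise sensitivity of the optimization; the CCC-GARCH fitting and RMT filtering themselves are not shown, and the synthetic return model is an assumption.

import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w proportional to inv(Sigma) * 1, summing to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(9)
n_assets, n_days = 20, 500
true_corr = 0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets)
vols = rng.uniform(0.01, 0.03, n_assets)
true_cov = np.outer(vols, vols) * true_corr

returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_days)
sample_cov = np.cov(returns, rowvar=False)        # stand-in for the (conditional) covariance

w_est = min_variance_weights(sample_cov)
w_opt = min_variance_weights(true_cov)
realized_risk = np.sqrt(w_est @ true_cov @ w_est)  # out-of-sample (true) portfolio risk
optimal_risk = np.sqrt(w_opt @ true_cov @ w_opt)
print("risk ratio (estimated / optimal):", realized_risk / optimal_risk)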
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
Fuzzy Adaptive Cubature Kalman Filter for Integrated Navigation Systems
Tseng, Chien-Hao; Lin, Sheng-Fuu; Jwo, Dah-Jing
2016-01-01
This paper presents a sensor fusion method based on the combination of a cubature Kalman filter (CKF) and a fuzzy logic adaptive system (FLAS) for integrated navigation systems, such as GPS/INS (Global Positioning System/inertial navigation system) integration. The third-degree spherical-radial cubature rule applied in the CKF has been employed to avoid numerical instability in the system model. In processing navigation integration, the performance of nonlinear-filter-based estimation of the position and velocity states may severely degrade owing to modeling errors caused by dynamics uncertainties of the vehicle. In order to resolve the shortcoming of selecting the process noise covariance through personal experience or numerical simulation, a scheme called the fuzzy adaptive cubature Kalman filter (FACKF) is presented by introducing the FLAS to adjust the weighting factor of the process noise covariance matrix. The FLAS is incorporated into the CKF framework as a mechanism for timely tuning of the process noise covariance matrix based on the information of a degree of divergence (DOD) parameter. The proposed FACKF algorithm shows promising accuracy improvement as compared to the extended Kalman filter (EKF), unscented Kalman filter (UKF), and CKF approaches. PMID:27472336
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formulation of the weighted least-squares estimator that does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method are chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
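A small numerical sketch contrasting the standard weighted least-squares estimator with a Tikhonov-regularized noise covariance against an augmented ("bordered") system that avoids forming the inverse covariance; the bordered system is one standard inversion-free route and is not necessarily the specific formula analysed in the paper, and the toy problem below replaces the GGM-based setup.

import numpy as np

rng = np.random.default_rng(10)
m, n = 60, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)

# Ill-conditioned noise covariance: smoothly decaying eigenvalues, no noticeable gap
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
eigs = np.logspace(0, -8, m)
C = (U * eigs) @ U.T
y = A @ x_true + U @ (np.sqrt(eigs) * rng.standard_normal(m))

# (a) Standard WLS with a Tikhonov-regularized covariance C + alpha*I
alpha = 1e-6
Creg = C + alpha * np.eye(m)
x_tik = np.linalg.solve(A.T @ np.linalg.solve(Creg, A), A.T @ np.linalg.solve(Creg, y))

# (b) Augmented ("bordered") system: solve [[C, A], [A^T, 0]] [v; x] = [y; 0],
#     which reproduces the WLS normal equations without forming inv(C).
K = np.block([[C, A], [A.T, np.zeros((n, n))]])
sol = np.linalg.lstsq(K, np.concatenate([y, np.zeros(n)]), rcond=None)[0]
x_aug = sol[m:]

print("Tikhonov estimate error :", np.linalg.norm(x_tik - x_true))
print("augmented-system error  :", np.linalg.norm(x_aug - x_true))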
A Continuous Square Root Information Filter-Smoother with Discrete Data Update
NASA Technical Reports Server (NTRS)
Miller, J. K.
1994-01-01
A differential equation for the square root information matrix is derived and adapted to the problems of filtering and smoothing. The resulting continuous square root information filter (SRIF) performs the mapping of state and process noise by numerical integration of the SRIF matrix and admits data via a discrete least-squares update.
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted into a multi-dimensional form, and the data covariance matrix is constructed. Then the estimation algorithms used in array signal processing can be utilized. Information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the signal received by the single optical fiber sensor. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix. Through this smoothing, the fluctuation and uncertainty of the eigenvalues of the covariance matrix are reduced. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
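A simplified sketch of the overall idea: embed single-sensor data into multichannel snapshots, form the covariance matrix, and apply the Wax-Kailath MDL criterion to its eigenvalues; the block-wise embedding and signal parameters are assumptions, not the paper's exact delay and smoothing scheme.

import numpy as np

def block_embed(x, dim):
    """Stack consecutive length-`dim` blocks of a single-sensor record as snapshots."""
    n_blocks = len(x) // dim
    return x[:n_blocks * dim].reshape(n_blocks, dim).T

def mdl_source_number(eigvals, n_snapshots):
    """Wax-Kailath MDL criterion applied to descending covariance eigenvalues."""
    p = len(eigvals)
    scores = []
    for k in range(p):
        tail = eigvals[k:]
        geo = np.exp(np.mean(np.log(tail)))
        ari = np.mean(tail)
        scores.append(-n_snapshots * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(scores))

rng = np.random.default_rng(11)
fs, n_samp = 1000.0, 6000
t = np.arange(n_samp) / fs
x = (np.sin(2 * np.pi * 60 * t) + 0.7 * np.sin(2 * np.pi * 145 * t + 1.0)
     + 0.3 * rng.standard_normal(n_samp))         # two tones buried in white noise

X = block_embed(x, dim=12)                         # multichannel form of the single-sensor data
R = X @ X.T / X.shape[1]                           # data covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print("estimated signal subspace dimension:", mdl_source_number(eigvals, X.shape[1]))
# Each real sinusoid spans a two-dimensional subspace of the embedded covariance,
# so an estimate near 4 (not 2) is the expected outcome for these two tones.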
NASA Astrophysics Data System (ADS)
Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui
2018-07-01
During the calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and a pseudo-inverse estimation method, the measurement intensities are usually corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, and the precision of the estimated system matrix is degraded. In this paper, we present a paradigm for selecting RPSs to improve the precision of the estimated system matrix in the presence of both types of noise. The analytical solution for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra
2017-01-01
The interpretation of high-speed optical coherence tomography (OCT) images is hindered by large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; then each thresholded sub-band is denoised using K-SVD dictionary learning with a variable-size initial dictionary that depends on the size of the curvelet coefficient matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction of 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD), and improvements in contrast-to-noise ratio from 1.27 to 5.12 and in mean-to-standard-deviation ratio from 3.20 to 14.41 are obtained.
NASA Astrophysics Data System (ADS)
Ikhwansyah; Mulia; Gunawan, S.; Lubis, R. D. W.
2018-02-01
The objective is to determine the noise-reduction characteristics, the noise-reduction level, and the variation across measurement positions, and to describe the process of making an acoustic material from natural fiber into a noise reducer for a car hood. The noise-reduction material was made by a casting method and pressed using a mold press. The composition of the noise-reduction material consists of 50% roystonea regia at 32 mesh and 50% of a combined gypsum and polyurethane matrix. The results show that the average noise reduction at the X1- side is 5.7% and at the X2- side 3.9%, at the X1+ side 0.9% and at the X2+ side 6.2%, at the Z1- side 8.9% and at the Z2- side 10.1%, and at the Z1+ side 9.7% and at the Z2+ side 10.01%. The main conclusion of the study is that a noise reducer made of 32-mesh roystonea regia mixed with a matrix of polyurethane and gypsum is appropriate for noise reduction on a car hood.
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H where V ~ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this paper, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates and prove monotonicity of updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness-of-fit on data. Our methods are demonstrated using experimental data from electromyography studies as well as simulated data in the extraction of muscle synergies, and compared with existing algorithms for signal-dependent noise. PMID:24684448
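A minimal sketch of NMF by multiplicative updates using the standard Euclidean (Lee-Seung) update rules, not the paper's generalized inverse Gaussian, signal-dependent-noise variant; the data sizes and noise model below are illustrative.

import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Example: extract 3 synergy-like components from nonnegative data with multiplicative noise
rng = np.random.default_rng(12)
W_true = rng.random((40, 3))
H_true = rng.random((3, 200))
V = W_true @ H_true * (1 + 0.05 * rng.standard_normal((40, 200)))
V = np.clip(V, 0, None)

W, H = nmf_multiplicative(V, rank=3)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))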
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
Deciding Optimal Noise Monitoring Sites with Matrix Gray Absolute Relation Degree Theory
NASA Astrophysics Data System (ADS)
Gao, Zhihua; Li, Yadan; Zhao, Limin; Wang, Shuangwei
2015-08-01
Noise maps are used to assess noise levels in cities all around the world. There are mainly two ways of producing noise maps: one is producing noise maps through theoretical simulations using the surrounding conditions, such as traffic flow, building distribution, etc.; the other is calculating noise levels from actual measurement data collected by noise monitors. Current literature mainly focuses on considering more factors that affect sound propagation in theoretical simulations, and on interpolation methods for producing noise maps based on noise measurements. Although many factors are considered during simulation, noise maps still have to be calibrated with actual noise measurements. Therefore, the way of obtaining noise data is significant for both producing and calibrating a noise map. However, little literature addresses rules for choosing the right monitoring sites when a specified number of noise sensors is placed, or the resulting deviation of a noise map produced with data from them. In this work, by utilizing matrix Gray Absolute Relation Degree Theory, we calculated the relation degrees between the most precise noise surface and those interpolated from different combinations of a specified number of noise data. We found that surfaces plotted with different combinations of noise data produced different relation degrees with the most precise one. We then identified the least significant datum among the total and calculated the corresponding deviation when it was excluded from making a noise surface. Processing the remaining noise data in the same way, we identified the least significant datum among the remaining data one by one. With this method, we optimized the distribution of noise sensors in an area of about 2 km², and we also calculated the bias of surfaces with the least significant data removed. Our practice provides a practical solution for governments with limited financial budgets for noise monitoring, especially in undeveloped regions.
Addressable inverter matrix for process and device characterization
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Sayah, H. R.
1985-01-01
The addressable inverter matrix consists of 222 inverters each accessible with the aid of a shift register. The structure has proven useful in characterizing the variability of inverter transfer curves and in diagnosing processing faults. For good 3-micron CMOS bulk inverters investigated in this study, the percent standard deviation of the inverter threshold voltage was less than one percent and the inverter gain (the slope of the inverter transfer curve at the inverter threshold voltage) was less than 3 percent. The average noise margin for the inverters was near 2 volts for a power supply voltage of 5 volts. The specific faults studied included undersize pull-down transistor widths and various open contacts in the matrix.
Gabelmann, Jeffrey M.; Kattner, J. Stephen; Houston, Robert A.
2006-12-19
This invention is an ultra-low frequency electromagnetic telemetry receiver which fuses multiple input receive sources to synthesize a decodable message packet from a noise-corrupted telemetry message string. Each block of telemetry data to be sent to the surface receiver from a borehole tool is digitally encoded into a data packet prior to transmission. The data packet is modulated onto the ULF EM carrier wave, transmitted from the borehole to the surface, and then simultaneously detected by multiple receive sensors dispersed within the rig environment. The receive sensors include, but are not limited to, electric field and magnetic field sensors. The spacing of the surface receive elements is such that noise generators are unequally coupled to each receive element due to proximity and/or noise generator type (i.e., electric or magnetic field generators). The receiver utilizes a suite of decision metrics to reconstruct the original, non-noise-corrupted data packet from the observation matrix via the estimation of individual data frames. The receiver will continue this estimation process until: 1) the message validates, or 2) a preset "confidence threshold" is reached whereby frames within the observation matrix are no longer "trusted".
Pixel electronic noise as a function of position in an active matrix flat panel imaging array
NASA Astrophysics Data System (ADS)
Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.
2010-04-01
We present an analysis of output-referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout period noise, reset period noise and leakage current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Measured noise results are obtained by modeling the array connections with RC ladders on a small in-house fabricated prototype. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier compared to the ones in rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region-of-interest with the best signal-to-noise ratio in an active matrix digital flat panel imaging array.
Xia, Huijun; Yang, Kunde; Ma, Yuanliang; Wang, Yong; Liu, Yaxiong
2017-01-01
Generally, many beamforming methods are derived under the assumption of white noise. In practice, the actual underwater ambient noise is complex. As a result, the noise removal capacity of the beamforming method may deteriorate considerably. Furthermore, in underwater environments with extremely low signal-to-noise ratio (SNR), the performance of the beamforming method may be degraded. To tackle these problems, a noise removal method for a uniform circular array (UCA) is proposed to remove the received noise and improve the SNR in complex noise environments with low SNR. First, the symmetrical noise sources are defined and the spatial correlation of the symmetrical noise sources is calculated. Then, based on the preceding results, the noise covariance matrix is decomposed into symmetrical and asymmetrical components. Analysis indicates that the symmetrical component affects only the real part of the noise covariance matrix. Consequently, delay-and-sum (DAS) beamforming is performed using the imaginary part of the covariance matrix to remove the symmetrical component. However, the noise removal method causes two problems. First, the proposed method produces a false target. Second, the proposed method would seriously suppress the signal when it arrives from certain directions. To solve the first problem, two methods to reconstruct the signal covariance matrix are presented: one based on the estimation of the signal variance and one based on a constrained optimization algorithm. To solve the second problem, the array configuration can be designed and a suitable working frequency selected. Theoretical analysis and experimental results demonstrate that the proposed methods are particularly effective in complex noise environments with low SNR. The proposed method can be extended to any array. PMID:28598386
Constructing 1/ω^α noise from reversible Markov chains.
Erland, Sveinung; Greenwood, Priscilla E
2007-09-01
This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTFs) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed and then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Robust adaptive multichannel SAR processing based on covariance matrix reconstruction
NASA Astrophysics Data System (ADS)
Tan, Zhen-ya; He, Feng
2018-04-01
With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems with multiple channels in azimuth show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper proposes a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to acquire the multichannel SAR processing filter. This novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
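A minimal sketch of the covariance-reconstruction idea described above (a Capon spectrum scanned over an angular sector, then the interference-plus-noise covariance rebuilt from its definition) is given below; the uniform linear array, sector limits and loading level are illustrative assumptions rather than the multichannel SAR configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n_snap = 10, 500                        # illustrative half-wavelength ULA, snapshot count

def a(theta_deg):
    """ULA steering vector."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
x = np.outer(a(0.0), cn(n_snap)) + 3.0 * np.outer(a(-30.0), cn(n_snap)) + 0.5 * cn(M, n_snap)
R = x @ x.conj().T / n_snap                # sample covariance (signal at 0 deg, interference at -30 deg)

# 1) Capon spatial spectrum over the assumed out-of-look (ambiguous/interference) sector
Ri = np.linalg.inv(R)
sector = np.array([t for t in np.arange(-90.0, 90.5, 1.0) if abs(t) > 10.0])

# 2) Reconstruct the interference-plus-noise covariance from its definition
Rin = 0.01 * np.eye(M)                     # small loading standing in for the noise floor
for th in sector:
    v = a(th)
    p = 1.0 / np.real(v.conj() @ Ri @ v)   # Capon power estimate toward th
    Rin += p * np.outer(v, v.conj())

# 3) Adaptive weights from the reconstructed covariance, distortionless toward 0 deg
w = np.linalg.solve(Rin, a(0.0))
w /= a(0.0).conj() @ w
print("response toward interference (-30 deg):", np.abs(w.conj() @ a(-30.0)))
```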
Zhang, Zhi-Hai; Gao, Ling-Xiao; Guo, Yuan-Jun; Wang, Wei; Mo, Xiang-Xia
2012-12-01
Template selection is essential in the application of digital micromirror spectrometers. The theoretically optimal H-matrix coding is not widely used because it is acyclic, complex to encode and difficult to implement. The noise improvement of the best practical S-matrix is slightly inferior to that of the H-matrix. We therefore designed a new type of complementary S-matrix. A study of its noise-improvement theory proves that the algorithm combines the advantages of both the H-matrix and the S-matrix. Experiments showed that the SNR can be increased by a factor of 2.05 compared with the S-matrix template.
NASA Astrophysics Data System (ADS)
Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu
2016-06-01
Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical experiment results are presented as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.
Yu, Qian; Fang, Debin; Zhang, Xiaoling; Jin, Chen; Ren, Qiyu
2016-06-27
Stochasticity plays an important role in the evolutionary dynamics of cyclic dominance within a finite population. To investigate the stochastic evolution process of the behaviour of bounded rational individuals, we model the Rock-Scissors-Paper (RSP) game as a finite, state-dependent Quasi Birth and Death (QBD) process. We assume that bounded rational players can adjust their strategies by imitating the successful strategy according to the payoffs of the last round of the game, and then analyse the limiting distribution of the QBD process for the game's stochastic evolutionary dynamics. The numerical experiment results are presented as pseudo-colour ternary heat maps. Comparison of these diagrams shows that the convergence property of the long-run equilibrium of the RSP game in populations depends on the population size, the parameters of the payoff matrix and the noise factor. The long-run equilibrium is asymptotically stable, neutrally stable or unstable according to the normalised parameters in the payoff matrix. Moreover, the results show that the distribution probability becomes more concentrated with a larger population size. This indicates that increasing the population size also increases the convergence speed of the stochastic evolution process while simultaneously reducing the influence of the noise factor.
Hata, Akinori; Yanagawa, Masahiro; Honda, Osamu; Kikuchi, Noriko; Miyata, Tomo; Tsukagoshi, Shinsuke; Uranishi, Ayumi; Tomiyama, Noriyuki
2018-01-16
This study aimed to assess the effect of matrix size on the spatial resolution and image quality of ultra-high-resolution computed tomography (U-HRCT). Slit phantoms and 11 cadaveric lungs were scanned on U-HRCT. Slit phantom scans were reconstructed using a 20-mm field of view (FOV) with 1024 matrix size and a 320-mm FOV with 512, 1024, and 2048 matrix sizes. Cadaveric lung scans were reconstructed using 512, 1024, and 2048 matrix sizes. Three observers subjectively scored the images on a three-point scale (1 = worst, 3 = best), in terms of overall image quality, noise, streak artifact, vessel, bronchi, and image findings. The median score of the three observers was evaluated by Wilcoxon signed-rank test with Bonferroni correction. Noise was measured quantitatively and evaluated with the Tukey test. A P value of <.05 was considered significant. The maximum spatial resolution was 0.14 mm; among the 320-mm FOV images, the 2048 matrix had the highest resolution and was significantly better than the 1024 matrix in terms of overall quality, solid nodule, ground-glass opacity, emphysema, intralobular reticulation, honeycombing, and clarity of vessels (P < .05). Both the 2048 and 1024 matrices performed significantly better than the 512 matrix (P < .001), except for noise and streak artifact. The visual and quantitative noise decreased significantly in the order of 512, 1024, and 2048 (P < .001). In U-HRCT scans, a large matrix size maintained the spatial resolution and improved the image quality and assessment of lung diseases, despite an increase in image noise, when compared to a 512 matrix size. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al with or without a "typical" level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. © The Author(s) 2016.
Constructing 1/ωα noise from reversible Markov chains
NASA Astrophysics Data System (ADS)
Erland, Sveinung; Greenwood, Priscilla E.
2007-09-01
This paper gives sufficient conditions for the output of 1/ωα noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ωα condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ωα noise which also has a long memory.
[The speech audiometry using the matrix sentence test].
Boboshko, M Yu; Zhilinskaia, E V; Warzybok, A; Maltseva, N V; Zokoll, M; Kollmeier, B
The matrix sentence test, in which five-word semantically unpredictable sentences presented under background noise conditions are used as the speech material, has been designed and validated for many languages. The objective of the present study was to evaluate the Russian version of the matrix sentence test (RuMatrix test) in listeners of different ages with normal hearing. At the first stage of the study, 35 listeners aged from 18 to 33 years were examined. The results of the estimation of the training effect dictated the necessity of conducting two training tracks before carrying out the RuMatrix test proper. The signal-to-noise ratio at which 50% speech recognition (SRT50) was obtained was found to be -8.8±0.8 dB SNR. A significant effect of exposure to the background noise was demonstrated: noise levels of 80 and 75 dB SPL led to considerably lower intelligibility than noise levels in the range from 45 to 70 dB SPL; in the subsequent studies, a noise level of 65 dB SPL was used. The high test-retest reliability of the RuMatrix test was proved. At the second stage of the study, 20 young (20-40 year old) listeners and 20 aged (62-74 year old) ones were examined. The mean SRT50 in the aged patients was found to be -6.9±1.1 dB SNR, which was much worse than the mean SRT50 in the young subjects (-8.7±0.9 dB SNR). It is concluded that, bearing in mind the excellent comparability of the results of the RuMatrix test across different languages, it can be used as a universal tool in international research projects.
High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction
NASA Astrophysics Data System (ADS)
Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming
2017-12-01
The denoising capabilities of the H-matrix and cyclic S-matrix based on sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role by implementing the Hadamard coding. In contrast with Hadamard transform spectrometry, which is based on shift invariance, this spectrometer may have the advantage of higher efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
NASA Astrophysics Data System (ADS)
Wutsqa, D. U.; Marwah, M.
2017-06-01
In this paper, we apply a spatial median filter to reduce the noise in the cervical images yielded by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires image feature extraction using the gray level co-occurrence matrix (GLCM) method to obtain image features that are used as inputs of the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without the spatial median filter. The experimental results show that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
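Since the classification step hinges on GLCM-based feature extraction feeding the BPNN, a minimal sketch of that step may help; it uses scikit-image (the functions are graycomatrix/graycoprops in recent versions, greycomatrix/greycoprops in older ones), and the random image, quantization and feature set are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix'/'greycoprops' in scikit-image < 0.19

# Toy 8-bit grayscale image standing in for a (median-filtered) colposcopy image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# GLCM for a one-pixel offset at four angles, quantized to 32 gray levels
img_q = (img // 8).astype(np.uint8)
glcm = graycomatrix(img_q, distances=[1], angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

# Haralick-style features of the kind fed to a classifier such as a BPNN
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```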
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
Langbein, John O.
2017-01-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors
NASA Astrophysics Data System (ADS)
Langbein, John
2017-08-01
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
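A brute-force sketch of the likelihood evaluation that this work is concerned with speeding up may be useful: the data covariance is built as white noise plus power-law noise, the latter generated here with the standard fractional-difference (Hosking) filter as an illustrative construction, and the log-likelihood is evaluated through a Cholesky factorization whose O(n^3) cost is exactly what motivates the faster filter-based reformulation described in the abstract.

```python
import numpy as np
from scipy.linalg import toeplitz

def powerlaw_cov(n, alpha):
    """Covariance of unit-amplitude power-law (1/f^alpha) noise built from the
    fractional-difference filter h_k (Hosking recursion); illustrative construction."""
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    H = toeplitz(h, np.zeros(n))            # lower-triangular Toeplitz filter matrix
    return H @ H.T

def neg_log_likelihood(r, sig_w, sig_pl, alpha):
    """-log L of residuals r under C = sig_w^2 I + sig_pl^2 * powerlaw_cov(alpha)."""
    n = len(r)
    C = sig_w**2 * np.eye(n) + sig_pl**2 * powerlaw_cov(n, alpha)
    L = np.linalg.cholesky(C)               # O(n^3) step that motivates faster schemes
    z = np.linalg.solve(L, r)
    return 0.5 * (z @ z) + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

# Toy residual series: white noise plus flicker-like (alpha = 1) noise
rng = np.random.default_rng(0)
n = 200
r = rng.standard_normal(n) + np.linalg.cholesky(powerlaw_cov(n, 1.0)) @ rng.standard_normal(n)
print("negative log-likelihood at trial parameters:", neg_log_likelihood(r, 1.0, 1.0, 1.0))
```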
Jia, Hongjun; Martinez, Aleix M
2009-05-01
The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
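The key claim above (r-column submatrices whose columns are more distinct are less swayed by noise) can be illustrated with a small experiment. The distinctness score used here, the smallest singular value of the chosen columns, and the subspace-alignment metric are illustrative stand-ins, not the exact criterion or bound derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 40, 30, 3

A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # true rank-r matrix
X = A + 0.05 * rng.standard_normal((m, n))                       # noisy observation

# Best rank-r fit of the full matrix via the truncated SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
print("rank-r fit relative error:", np.linalg.norm(X_r - A) / np.linalg.norm(A))

def distinctness(cols):
    """Illustrative score: smallest singular value of the chosen r columns."""
    return np.linalg.svd(X[:, cols], compute_uv=False)[-1]

def alignment(cols):
    """How well the column space of the chosen r columns matches that of A."""
    Q, _ = np.linalg.qr(X[:, cols])
    Qa = np.linalg.svd(A, full_matrices=False)[0][:, :r]   # true r-dimensional column space
    return np.linalg.norm(Qa.T @ Q) / np.sqrt(r)

candidates = [rng.choice(n, r, replace=False) for _ in range(200)]
best = max(candidates, key=distinctness)
worst = min(candidates, key=distinctness)
print("most distinct columns  -> subspace alignment:", alignment(best))
print("least distinct columns -> subspace alignment:", alignment(worst))
```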
Noise and spectroscopic performance of DEPMOSFET matrix devices for XEUS
NASA Astrophysics Data System (ADS)
Treis, J.; Fischer, P.; Hälker, O.; Herrmann, S.; Kohrs, R.; Krüger, H.; Lechner, P.; Lutz, G.; Peric, I.; Porro, M.; Richter, R. H.; Strüder, L.; Trimpl, M.; Wermes, N.; Wölfel, S.
2005-08-01
DEPMOSFET based Active Pixel Sensor (APS) matrix devices, originally developed to cope with the challenging requirements of the XEUS Wide Field Imager, have proven to be a promising new imager concept for a variety of future X-ray imaging and spectroscopy missions like Simbol-X. The devices combine excellent energy resolution, high speed readout and low power consumption with the attractive feature of random accessibility of pixels. A production of sensor prototypes with 64 x 64 pixels with a size of 75 μm x 75 μm each has recently been finished at the MPI semiconductor laboratory in Munich. The devices are built for row-wise readout and require dedicated control and signal processing electronics of the CAMEX type, which is integrated together with the sensor onto a readout hybrid. A number of hybrids incorporating the most promising sensor design variants have been built, and their performance has been studied in detail. A spectroscopic resolution of 131 eV has been measured, the readout noise is as low as 3.5 e- ENC. Here, the dependence of readout noise and spectroscopic resolution on the device temperature is presented.
A matrix equation solution by an optimization technique
NASA Technical Reports Server (NTRS)
Johnson, M. J.; Mittra, R.
1972-01-01
The computer solution of matrix equations is often difficult to accomplish due to an ill-conditioned matrix or high noise levels. Two methods of solution are compared for matrices of various degrees of ill-conditioning and for various noise levels in the right hand side vector. One method employs the usual Gaussian elimination. The other solves the equation by an optimization technique and employs a function minimization subroutine.
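A compact modern analogue of the comparison described in this report, direct elimination versus an optimization-based (regularized least-squares) solution on an ill-conditioned system with a noisy right-hand side, is sketched below; the Hilbert test matrix, noise level and regularization weight are illustrative assumptions, not the cases studied in the original work.

```python
import numpy as np
from scipy.linalg import hilbert
from scipy.optimize import minimize

n = 10
A = hilbert(n)                       # classically ill-conditioned test matrix
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)   # noisy right-hand side

# Direct (Gaussian elimination) solution: amplifies the noise
x_direct = np.linalg.solve(A, b)

# Optimization-based solution: minimize ||Ax - b||^2 + lam * ||x||^2
lam = 1e-8
obj = lambda x: np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)
x_opt = minimize(obj, np.zeros(n), method="L-BFGS-B").x

print("cond(A)          :", np.linalg.cond(A))
print("direct solve err :", np.linalg.norm(x_direct - x_true))
print("minimization err :", np.linalg.norm(x_opt - x_true))
```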
Acoustooptic linear algebra processors - Architectures, algorithms, and applications
NASA Technical Reports Server (NTRS)
Casasent, D.
1984-01-01
Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products and such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
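As a rough illustration of adapting the measurement-noise covariance online, the one-dimensional sketch below uses a simple innovation-based update in place of the paper's maximum a posteriori estimator; the altitude dynamics, noise levels and forgetting factor are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_steps, dt = 300, 0.02
true_R = 0.5 ** 2                          # actual (unknown) measurement noise variance

F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity altitude model, state [h, v]
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)

x, P = np.zeros(2), np.eye(2)
R_hat, rho = 1.0, 0.98                     # initial guess for R, forgetting factor
x_true = np.array([10.0, -0.5])

for _ in range(n_steps):
    x_true = F @ x_true
    z = (H @ x_true)[0] + np.sqrt(true_R) * rng.standard_normal()

    x, P = F @ x, F @ P @ F.T + Q          # predict
    y = z - (H @ x)[0]                     # innovation
    S_pred = (H @ P @ H.T)[0, 0]
    # Innovation-based adaptation of the measurement-noise variance estimate
    R_hat = rho * R_hat + (1 - rho) * max(y * y - S_pred, 1e-6)
    S = S_pred + R_hat
    K = (P @ H.T).ravel() / S              # Kalman gain
    x = x + K * y                          # update
    P = (np.eye(2) - np.outer(K, H[0])) @ P

print("adapted R estimate:", round(R_hat, 3), " true R:", true_R)
```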
Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1995-01-01
Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100 line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
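The two noise products described here (channel-dependent noise levels and the channel noise covariance matrix) follow directly from the dark-current cube; the sketch below uses a random stand-in for the 'postcal' data, whose on-disk format and reading are not addressed.

```python
import numpy as np

# Dark-current cube: (lines, samples, channels) recorded with the shutter closed.
# A random stand-in is used here; a real 'postcal' file would be read from disk instead.
rng = np.random.default_rng(0)
lines, samples, channels = 100, 614, 224
dark = 100.0 + rng.standard_normal((lines, samples, channels)) * (1.0 + 0.01 * np.arange(channels))

# Stack the 61,400 dark spectra and remove the per-channel DC offset
spectra = dark.reshape(-1, channels)
fluct = spectra - spectra.mean(axis=0)

noise_per_channel = fluct.std(axis=0)            # channel-dependent noise levels
noise_cov = np.cov(fluct, rowvar=False)          # 224 x 224 noise covariance matrix

print("median channel noise (DN):", np.median(noise_per_channel))
print("covariance matrix shape  :", noise_cov.shape)
```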
Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal
Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan
2014-01-01
This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and proper orthogonal value (POV) of an intrinsic mode function (IMF) covariance matrix. The IMF of the bearing vibration signal is obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals, and decomposed each of them into IMFs. The first IMF of each segment is collected to become a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, of observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based de-noising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing fault. PMID:25196008
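The damage feature itself, the proper orthogonal values (eigenvalues) of the covariance matrix assembled from the first IMFs of the signal segments, is a short computation once the wavelet denoising and EMD steps have been run; in the sketch below those steps are replaced by synthetic stand-in segments, so only the POV extraction and the healthy-versus-damaged contrast are illustrated.

```python
import numpy as np

def proper_orthogonal_values(imf_segments):
    """imf_segments: (n_segments, segment_length) array of first IMFs.
    Returns the eigenvalues (POVs) of their covariance matrix, largest first."""
    C = np.cov(imf_segments)                       # n_segments x n_segments covariance
    return np.sort(np.linalg.eigvalsh(C))[::-1]

# Stand-ins for wavelet-denoised, EMD-decomposed bearing data:
# healthy ~ independent broadband segments; damaged ~ repetitive impulses shared across segments.
rng = np.random.default_rng(0)
t = np.arange(2048) / 12e3
healthy = rng.standard_normal((8, t.size))
impulses = 3.0 * np.sin(2 * np.pi * 3e3 * t) * (np.sin(2 * np.pi * 120 * t) > 0.99)
damaged = 0.5 * rng.standard_normal((8, t.size)) + impulses

for name, seg in (("healthy", healthy), ("damaged", damaged)):
    pov = proper_orthogonal_values(seg)
    print(name, "dominant POV fraction:", pov[0] / pov.sum())
```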
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after facing interim equilibrium to break a stalemate situation whilst seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with an added random noise instead of an original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.
Polat, Zahra; Bulut, Erdoğan; Ataş, Ahmet
2016-09-01
Spoken word recognition and speech perception tests in quiet are used routinely to assess the benefit which child and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high speech perception ability in quiet situations. Although these test materials provide valuable information regarding Cochlear Implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in adverse listening conditions, which is the case in the everyday environment. The aim of this study was to assess the speech intelligibility performance of postlingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Cross-sectional study. Thirty postlingual adult implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. The results of the study show a correlation between Pure Tone Average (PTA) values of the subjects and Matrix test Speech Reception Threshold (SRT) values in quiet. Hence, it is possible to assess PTA values of CI users using the Matrix Test also. However, no correlations were found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions. The Matrix Test can be used to assess the benefit CI users receive from their systems in everyday life, since it makes it possible to test intelligibility with material that CI users experience in their everyday life and to assess the difficulty in speech discrimination in the noisy conditions they have to cope with.
Hu, Hongmei; Krasoulis, Agamemnon; Lutman, Mark; Bleeck, Stefan
2013-01-01
Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve the CI performance in noisy environments. It showed that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustical conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and their individual hearing capabilities. In this system, which is based on MATLAB®, SIMULINK® and the xPC Target™ environment, the input/output (I/O) boards are interfaced between the SIMULINK blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, hence offering a convenient way to implement a real time signal processing module that does not require any low level language. The sparsity constrained parameter of the algorithm was adapted online subjectively during an experiment with normal-hearing subjects and noise vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real time algorithms are beneficial to fully explore subjective preferences. We conclude that the adaptive real time systems are beneficial for the experimental design, and such systems allow one to conduct psychophysical experiments with high ecological validity. PMID:24129021
Hu, Hongmei; Krasoulis, Agamemnon; Lutman, Mark; Bleeck, Stefan
2013-10-14
Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve the CI performance in noisy environments. It showed that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustical conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and their individual hearing capabilities. In this system, which is based on MATLAB®, SIMULINK® and the xPC Target™ environment, the input/output (I/O) boards are interfaced between the SIMULINK blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, hence offering a convenient way to implement a real time signal processing module that does not require any low level language. The sparsity constrained parameter of the algorithm was adapted online subjectively during an experiment with normal-hearing subjects and noise vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real time algorithms are beneficial to fully explore subjective preferences. We conclude that the adaptive real time systems are beneficial for the experimental design, and such systems allow one to conduct psychophysical experiments with high ecological validity.
Qubit dephasing due to low-frequency noise.
NASA Astrophysics Data System (ADS)
Sverdlov, Victor; Rabenstein, Kristian; Averin, Dmitri
2004-03-01
We have numerically investigated the effects of classical low-frequency noise on the qubit dynamics beyond the standard lowest-order perturbation theory in coupling. Noise is generated as a random process with a correlation function characterized by two parameters, the amplitude v_0 and the cut-off frequency 2π/τ. Time evolution of the density matrix was averaged over up to 10^7 noise realizations. Contrary to the relaxation time T_1, which for v_0<ω, where ω is the qubit oscillation frequency, is always given correctly by the "golden-rule" expression, the dephasing time deviates from the perturbation-theory result when (v_0/ω)^2(ωτ) ≥ 1. In this regime, even for an unbiased qubit, for which the pure dephasing vanishes in perturbation theory, the dephasing is much larger than its perturbation-theory value 1/(2T_1).
NASA Astrophysics Data System (ADS)
Abdullayev, B. I.; Gulmaliyev, N. I.; Majidova, S. O.; Mikayilov, Kh. M.; Rustamov, B. N.
2009-12-01
Basic technical characteristics of the U-47 CCD matrix made by Apogee Alta Instruments Inc. are provided. A short description of the various noise sources introduced by the optical system and the CCD camera is presented. The technique of obtaining calibration frames (bias, dark and flat field) and the main stages of processing CCD photometry results are described.
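A minimal sketch of the standard reduction the abstract refers to (bias subtraction, dark-current scaling and flat-field division) is given below; the frame sizes, exposure times and synthetic master frames are assumptions for illustration, standing in for real calibration exposures.

```python
import numpy as np

rng = np.random.default_rng(0)
shape, t_dark, t_sci = (512, 512), 60.0, 120.0      # assumed frame size and exposure times (s)

# Stand-ins for master calibration frames (normally medians of many raw frames)
master_bias = 300.0 + rng.normal(0, 5, shape)
master_dark = master_bias + 0.1 * t_dark + rng.normal(0, 2, shape)   # includes the bias level
master_flat = 1.0 + 0.05 * rng.normal(0, 1, shape)

# Synthetic raw science frame: bias + dark + flat-modulated source signal + read noise
science_raw = master_bias + 0.1 * t_sci + 1000.0 * master_flat + rng.normal(0, 10, shape)

# Standard CCD photometry reduction steps
dark_rate = (master_dark - master_bias) / t_dark             # dark current per second
calibrated = science_raw - master_bias - dark_rate * t_sci   # bias and dark removed
calibrated /= master_flat / np.median(master_flat)           # normalized flat-field correction

print("mean calibrated signal:", calibrated.mean())
```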
Deghosting based on the transmission matrix method
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong
2017-12-01
As seismic exploration and subsequent exploitation advance, marine acquisition systems with towed streamers have become an important means of seismic data acquisition. However, the air-water reflective interface generates surface-related multiples, including ghosts, which can affect the accuracy and performance of subsequent seismic data processing algorithms. Thus, we derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and is absolutely convergent. Initially, the effectiveness of the proposed method is demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is also illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low frequency components are recovered reasonably and the fake high frequency components are attenuated; the recovered low frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.
The estimation error covariance matrix for the ideal state reconstructor with measurement noise
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
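The qualitative conclusion above (more measurements give a smaller estimation-error covariance) can be seen with a generic least-squares estimator; the sketch below is not the paper's Ideal State Reconstructor expression, just the textbook error covariance P = sigma^2 (H^T H)^{-1} evaluated for growing measurement counts, with an assumed random measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.1 ** 2                      # measurement noise variance

def error_covariance(n_meas):
    """Error covariance of the least-squares state estimate from n_meas noisy
    scalar measurements z_k = H_k x + v_k: P = sigma^2 (H^T H)^{-1}."""
    H = rng.standard_normal((n_meas, 2))           # assumed measurement matrix rows
    return sigma2 * np.linalg.inv(H.T @ H)

for n in (2, 5, 20, 100):
    print(n, "measurements -> trace(P) =", np.trace(error_covariance(n)))
```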
Evaluation of a Variable-Impedance Ceramic Matrix Composite Acoustic Liner
NASA Technical Reports Server (NTRS)
Jones, M. G.; Watson, W. R.; Nark, D. M.; Howerton, B. M.
2014-01-01
As a result of significant progress in the reduction of fan and jet noise, there is growing concern regarding core noise. One method for achieving core noise reduction is via the use of acoustic liners. However, these liners must be constructed with materials suitable for high temperature environments and should be designed for optimum absorption of the broadband core noise spectrum. This paper presents results of tests conducted in the NASA Langley Liner Technology Facility to evaluate a variable-impedance ceramic matrix composite acoustic liner that offers the potential to achieve each of these goals. One concern is the porosity of the ceramic matrix composite material, and whether this might affect the predictability of liners constructed with this material. Comparisons between two variable-depth liners, one constructed with ceramic matrix composite material and the other constructed via stereolithography, are used to demonstrate this material porosity is not a concern. Also, some interesting observations are noted regarding the orientation of variable-depth liners. Finally, two propagation codes are validated via comparisons of predicted and measured acoustic pressure profiles for a variable-depth liner.
Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y
1997-09-01
This paper proposes a method of localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise, but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
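A simplified sketch of the generalized-eigenvector idea is given below: a noise covariance estimated from a signal-free portion of the data and the measured-data covariance enter a generalized eigendecomposition, and the eigenvectors associated with the smallest eigenvalues form the noise-subspace projector scanned by a MUSIC-style localizer. The complex linear-array forward model and the simple localizer expression are illustrative stand-ins for the MEG lead field and the paper's exact formula.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
M, n_noise, n_data = 12, 1000, 1000

def lead(theta_deg):
    """Stand-in forward (steering) vector; a real MEG lead field would replace this."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

cn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
mix = cn(M, 3)                                  # spatially correlated background activity
bg = lambda n: mix @ cn(3, n) + 0.5 * cn(M, n)

noise_only = bg(n_noise)                        # portion of the data with no evoked signal
data = bg(n_data) + np.outer(lead(20.0), cn(n_data))

Cn = noise_only @ noise_only.conj().T / n_noise     # noise covariance estimate
Cd = data @ data.conj().T / n_data                  # measured-data covariance

# Generalized eigendecomposition Cd v = lambda Cn v (eigenvalues returned ascending);
# the eigenvectors with the smallest eigenvalues span the noise subspace.
_, V = eigh(Cd, Cn)
n_src = 1
En = V[:, :M - n_src]

angles = np.arange(-90.0, 91.0)
spectrum = [1.0 / np.real(lead(t).conj() @ En @ En.conj().T @ lead(t)) for t in angles]
print("estimated source direction:", angles[int(np.argmax(spectrum))], "deg")
```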
Preconditioner-free Wiener filtering with a dense noise matrix
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.
2018-05-01
This work extends the Elsner & Wandelt (2013) iterative method for efficient, preconditioner-free Wiener filtering to cases in which the noise covariance matrix is dense, but can be decomposed into a sum whose parts are sparse in convenient bases. The new method, which uses multiple messenger fields, reproduces Wiener-filter solutions for test problems, and we apply it to a case beyond the reach of the Elsner & Wandelt (2013) method. We compute the Wiener-filter solution for a simulated Cosmic Microwave Background (CMB) map that contains spatially varying, uncorrelated noise, isotropic 1/f noise, and large-scale horizontal stripes (like those caused by atmospheric noise). We discuss simple extensions that can filter contaminated modes or inverse-noise-filter the data. These techniques help to address complications in the noise properties of maps from current and future generations of ground-based Microwave Background experiments, like Advanced ACTPol, Simons Observatory, and CMB-S4.
NASA Astrophysics Data System (ADS)
Wu, Lifu; Qiu, Xiaojun; Guo, Yecai
2018-06-01
To effectively tune the noise amplification caused by the waterbed effect in the feedback system, this paper proposes an adaptive algorithm that replaces the scalar leaky factor of the leaky FxLMS algorithm with a real symmetric Toeplitz matrix. The elements in the matrix are calculated explicitly according to the noise amplification constraints, which are defined based on a simple but efficient method. Simulations in an ANC headphone application demonstrate that the proposed algorithm can adjust the frequency band of noise amplification more effectively than the FxLMS algorithm and the leaky FxLMS algorithm.
Benefit of the UltraZoom beamforming technology in noise in cochlear implant users.
Mosnier, Isabelle; Mathias, Nathalie; Flament, Jonathan; Amar, Dorith; Liagre-Callies, Amelie; Borel, Stephanie; Ambert-Dahan, Emmanuèle; Sterkers, Olivier; Bernardeschi, Daniele
2017-09-01
The objectives of the study were to demonstrate the audiological and subjective benefits of the adaptive UltraZoom beamforming technology available in the Naída CI Q70 sound processor, in cochlear-implanted adults upgraded from a previous generation sound processor. Thirty-four adults aged between 21 and 89 years (mean 53 ± 19) were prospectively included. Nine subjects were unilaterally implanted, 11 bilaterally and 14 were bimodal users. The mean duration of cochlear implant use was 7 years (range 5-15 years). Subjects were tested in quiet with monosyllabic words and in noise with the adaptive French Matrix test in the best-aided conditions. The test setup contained a signal source in front of the subject and three noise sources at +/-90° and 180°. The noise was presented at a fixed level of 65 dB SPL and the level of speech signal was varied to obtain the speech reception threshold (SRT). During the upgrade visit, subjects were tested with the Harmony and with the Naída CI sound processors in omnidirectional microphone configuration. After a take-home phase of 2 months, tests were repeated with the Naída CI processor with and without UltraZoom. Subjective assessment of the sound quality in daily environments was recorded using the APHAB questionnaire. No difference in performance was observed in quiet between the two processors. The Matrix test in noise was possible in the 21 subjects with the better performance. No difference was observed between the two processors for performance in noise when using the omnidirectional microphone. At the follow-up session, the median SRT with the Naída CI processor with UltraZoom was -4 dB compared to -0.45 dB without UltraZoom. The use of UltraZoom improved the median SRT by 3.6 dB (p < 0.0001, Wilcoxon paired test). When looking at the APHAB outcome, improvement was observed for speech understanding in noisy environments (p < 0.01) and in aversive situations (p < 0.05) in the group of 21 subjects who were able to perform the Matrix test in noise and for speech understanding in noise (p < 0.05) in the group of 13 subjects with the poorest performance, who were not able to perform the Matrix test in noise. The use of UltraZoom beamforming technology, available on the new sound processor Naída CI, improves speech performance in difficult and realistic noisy conditions when the cochlear implant user needs to focus on the person speaking at the front. Using the APHAB questionnaire, a subjective benefit for listening in background noise was also observed in subjects with good performance as well as in those with poor performance. This study highlighted the importance of upgrading CI recipients to new technology and to include assessment in noise and subjective feedback evaluation as part of the process.
Exploiting the Spatio-Temporal Coherence of Ocean Ambient Noise for Passive Tomography
2012-09-30
... Ĉ_ij(f;k) and corresponds to the entry (i,j) of the cross-covariance matrix for the selected horizontal triangular array, denoted Ĉ(f;k) at the ... The diagonal elements Ĉ_ii(f;k) (i = 1..3) of the matrix Ĉ(f;k) were set to zero to mitigate the bias due to electronic noise and the large
Automatic brain MR image denoising based on texture feature-based artificial neural networks.
Chang, Yu-Ning; Chang, Herng-Hua
2015-01-01
Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) Basic image statistics. 2) Gray-level co-occurrence matrix (GLCM). 3) Gray-level run-length matrix (GLRLM) and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
Plantet, C; Meimon, S; Conan, J-M; Fusco, T
2015-11-02
Exoplanet direct imaging with large ground based telescopes requires eXtreme Adaptive Optics that couples high-order adaptive optics and coronagraphy. A key element of such systems is the high-order wavefront sensor. We study here several high-order wavefront sensing approaches, and more precisely compare their sensitivity to noise. Three techniques are considered: the classical Shack-Hartmann sensor, the pyramid sensor and the recently proposed LIFTed Shack-Hartmann sensor. They are compared in a unified framework based on precise diffractive models and on the Fisher information matrix, which conveys the information present in the data whatever the estimation method. The diagonal elements of the inverse of the Fisher information matrix, which we use as a figure of merit, are similar to noise propagation coefficients. With these diagonal elements, so called "Fisher coefficients", we show that the LIFTed Shack-Hartmann and pyramid sensors outperform the classical Shack-Hartmann sensor. In photon noise regime, the LIFTed Shack-Hartmann and modulated pyramid sensors obtain a similar overall noise propagation. The LIFTed Shack-Hartmann sensor however provides attractive noise properties on high orders.
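The figure of merit described here, the diagonal of the inverse Fisher information matrix under photon noise, is straightforward to evaluate for any parametric model of the detected intensity; the two-parameter toy intensity model below is an assumption, not one of the wavefront-sensor models compared in the paper, and the derivatives are taken numerically for simplicity.

```python
import numpy as np

# Toy intensity model on a 1-D detector: a Gaussian spot whose center and width are the parameters
x = np.linspace(-1, 1, 200)
N_ph = 1e4                                     # total photon number

def intensity(theta):
    t1, t2 = theta
    lam = np.exp(-0.5 * ((x - t1) / (0.2 + t2)) ** 2)
    return N_ph * lam / lam.sum()

def fisher_photon(theta, eps=1e-6):
    """Fisher information matrix under Poisson (photon) noise:
    I_ij = sum_p (1/lambda_p) dlambda_p/dtheta_i dlambda_p/dtheta_j."""
    lam = intensity(theta)
    grads = []
    for i in range(len(theta)):
        dt = np.zeros(len(theta)); dt[i] = eps
        grads.append((intensity(theta + dt) - intensity(theta - dt)) / (2 * eps))
    G = np.array(grads)
    return G @ np.diag(1.0 / lam) @ G.T

fim = fisher_photon(np.array([0.1, 0.05]))
fisher_coeffs = np.diag(np.linalg.inv(fim))    # "Fisher coefficients": noise-propagation-like terms
print("Cramér-Rao variance bounds per parameter:", fisher_coeffs)
```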
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
2016-01-01
To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach which is based on the individual threshold data only. PMID:27604782
2012-08-01
... An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately ... The probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this ...
Insensitivity of visual short-term memory to irrelevant visual information.
Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud
2002-07-01
Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment I replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
... determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC) ... Subject terms: AR Model, Colored Noise Field, Mixed Spectra Model, MUSIC, Noise Field, Prewhitening, SNR, Standardized Test.
[Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].
Long, Junbo; Wang, Haibin; Zha, Daifeng
2015-04-01
The impulsive electroencephalograph (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed, infinite-variance characteristics, as seen under acceleration noise impact, hypoxia and other special test conditions. Such noise can be described by a stable distribution model. In this paper, the Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) time-frequency distributions are improved using fractional lower order moments. We obtain fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD) time-frequency distributions suitable for stable distribution processes. We also propose the concept of the fractional lower order spatial time-frequency distribution matrix (FLO-STFM). Combining this with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method which can work in a stable distribution environment. We used the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method can effectively extract EPs from EEG noise, and the separated EPs and EEG signals based on FLO-TF-UBSS were almost the same as the original signals, whereas blind separation based on TF-UBSS showed a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm when the generalized signal-to-noise ratio (GSNR) changed from 10 dB to 30 dB and α varied from 1.06 to 1.94, and was approximately equal to 1. Hence, the proposed FLO-TF-UBSS method may be better than the second-order TF-UBSS algorithm for extracting EP signals in an EEG noise environment.
Diffusion MRI noise mapping using random matrix theory
Veraart, Jelle; Fieremans, Els; Novikov, Dmitry S.
2016-01-01
Purpose To estimate the spatially varying noise map using a redundant magnitude MR series. Methods We exploit redundancy in non-Gaussian multi-directional diffusion MRI data by identifying its noise-only principal components, based on the theory of noisy covariance matrices. The bulk of PCA eigenvalues, arising due to noise, is described by the universal Marchenko-Pastur distribution, parameterized by the noise level. This allows us to estimate noise level in a local neighborhood based on the singular value decomposition of a matrix combining neighborhood voxels and diffusion directions. Results We present a model-independent local noise mapping method capable of estimating noise level down to about 1% error. In contrast to current state-of-the art techniques, the resultant noise maps do not show artifactual anatomical features that often reflect physiological noise, the presence of sharp edges, or a lack of adequate a priori knowledge of the expected form of MR signal. Conclusions Simulations and experiments show that typical diffusion MRI data exhibit sufficient redundancy that enables accurate, precise, and robust estimation of the local noise level by interpreting the PCA eigenspectrum in terms of the Marchenko-Pastur distribution. PMID:26599599
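The core of the approach, identifying the noise-only bulk of the local PCA eigenspectrum with the Marchenko-Pastur distribution and reading off the noise level, can be sketched compactly; the acceptance test below is a commonly used simplified form of the published criterion, and the synthetic neighborhood matrix stands in for real diffusion data.

```python
import numpy as np

def mppca_sigma(X):
    """Estimate the noise level of an M x N matrix X (e.g. neighborhood voxels x
    diffusion directions) by matching the tail of its PCA eigenspectrum to a
    Marchenko-Pastur bulk. Simplified form of the MP-PCA criterion."""
    M, N = X.shape
    vals = np.linalg.eigvalsh(X @ X.T / N)[::-1]        # eigenvalues, largest first
    for p in range(M - 1):                              # p = number of signal components
        tail = vals[p:]
        sigma2 = tail.mean()                            # MP mean equals the noise variance
        bulk_width = 4.0 * np.sqrt((M - p) / N) * sigma2
        if tail[0] - tail[-1] < bulk_width:             # tail consistent with a pure-noise bulk
            return np.sqrt(sigma2), p
    return np.sqrt(max(vals[-1], 0.0)), M - 1

# Synthetic local neighborhood: a few signal components plus Gaussian noise of known level
rng = np.random.default_rng(0)
M, N, sigma = 49, 60, 0.02
X = rng.standard_normal((M, 3)) @ rng.standard_normal((3, N)) + sigma * rng.standard_normal((M, N))

sigma_hat, n_components = mppca_sigma(X)
print(f"estimated noise sigma = {sigma_hat:.4f} (true {sigma}), signal components = {n_components}")
```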
Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform
Tang, Guiji; Tian, Tian; Zhou, Chong
2018-01-01
When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectrums. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013
Total variation optimization for imaging through turbid media with transmission matrix
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei; Liu, Jietao; Zhang, Jianqi
2016-12-01
With the transmission matrix (TM) of the whole optical system measured, the image of the object behind a turbid medium can be recovered from its speckle field by means of an image reconstruction algorithm. Instead of the Tikhonov regularization algorithm (TRA), total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) is introduced to recover object images. As a total variation (TV)-based approach, TVAL3 damps noise more effectively and preserves more edges than TRA, thus providing better image quality. Different levels of detector noise and TM-measurement noise are successively added to analyze the antinoise performance of these two algorithms. Simulation results show that TVAL3 is able to recover more details and suppress more noise than TRA under different noise levels, thus providing much better image quality. Furthermore, whether it be detector noise or TM-measurement noise, the images reconstructed by TVAL3 at SNR=15 dB are far superior to those by TRA at SNR=50 dB.
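For reference, a minimal sketch of the TRA baseline that TVAL3 is compared against might look as follows, assuming the transmission matrix T and speckle field y are available as NumPy arrays; the function name and regularization weight are illustrative, and TVAL3 replaces the quadratic penalty with a total-variation penalty on x.

```python
import numpy as np

def tikhonov_recover(T, y, lam=1e-2):
    """Tikhonov-regularized recovery of the object field x from the speckle
    field y measured through a turbid medium with transmission matrix T:
        x = argmin ||T x - y||^2 + lam ||x||^2 = (T^H T + lam I)^{-1} T^H y."""
    n = T.shape[1]
    A = T.conj().T @ T + lam * np.eye(n)
    return np.linalg.solve(A, T.conj().T @ y)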
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, the method of compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method is more accurate and more efficient in reconstructing four model networks and six real networks, as demonstrated by comparisons between the proposed method and compressed sensing alone. In addition, the proposed method can reconstruct not only sparse complex networks but also dense complex networks.
Hidden Quantum Processes, Quantum Ion Channels, and 1/fθ-Type Noise.
Paris, Alan; Vosoughi, Azadeh; Berman, Stephen A; Atia, George
2018-07-01
In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from ion channel kinetics, in order to identify the source of 1/fθ-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise depends solely on the eigenvalues (time constants) of the kinetic matrix, but that the Lorentzian weighting coefficients depend entirely on the eigenvectors of this matrix. We then show that there are rotations of the kinetic eigenvectors that send any initial weights to any target weights without altering the time constants. In particular, we show there are target weights for which the resulting Lorentzian noise has an approximately 1/fθ-type spectrum. We justify these kinetic rotations by introducing a quantum mechanical formulation of membrane stochastics, called hidden quantum activated-measurement models, and prove that these quantum models are probabilistically indistinguishable from the classical hidden Markov models typically used for ion channel stochastics. The quantum dividend obtained by replacing classical with quantum membranes is that rotations of the Lorentzian weights become simple readjustments of the quantum state without any change to the laboratory-determined kinetic and conductance parameters. Moreover, the quantum formalism allows us to model the activation energy of a membrane, and we show that maximizing entropy under constrained activation energy yields the previous 1/fθ-type Lorentzian weights, in which the spectral exponent θ is a Lagrange multiplier for the energy constraint. Thus, we provide a plausible neurophysical mechanism by which channel and membrane kinetics can give rise to 1/fθ-type noise (something that has been occasionally denied in the literature), as well as a realistic and experimentally testable explanation for the numerical values of the spectral exponents. We also discuss applications of quantum membranes beyond 1/fθ-type noise, including applications to animal models and possible impact on quantum foundations.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
Hochmuth, Sabine; Kollmeier, Birger; Brand, Thomas; Jürgens, Tim
2015-01-01
To compare speech reception thresholds (SRTs) in noise using matrix sentence tests in four languages: German, Spanish, Russian, Polish. The four tests were composed of equivalent five-word sentences and were all designed and optimized using the same principles. Six stationary speech-shaped noises and three non-stationary noises were used as maskers. Forty native listeners with normal hearing: 10 for each language. SRTs were about 3 dB higher for the German and Spanish tests than for the Russian and Polish tests when stationary noise was used that matched the long-term frequency spectrum of the respective speech test materials. This general SRT difference was also observed for the other stationary noises. The within-test variability across noise conditions differed between languages. About 56% of the observed variance was predicted by the speech intelligibility index. The observed SRT benefit in fluctuating noise was similar for all tests, with a slightly smaller benefit for the Spanish test. Of the stationary noises employed, noise with the same spectrum as the speech yielded the best masking. SRT differences across languages and noises could be attributed in part to spectral differences. These findings provide the feasibility and limits of comparing audiological results across languages.
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that regenerates all the random entries of the measurement matrix, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast and high noise of infrared images, and the randomness and ambient occlusion of the objects in them, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are considered, based on a non-negative linear reconstruction method, to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body image set, indicating a lower false detection rate than conventional methods in complex natural scenes.
SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion
NASA Technical Reports Server (NTRS)
Gupta, Inder J.
1986-01-01
The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.
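A hedged sketch of the general idea, not Gupta's exact modification, is to shrink the covariance eigenvalues near the thermal-noise floor before inverting, so that the resulting weights are dictated by the weak (below-noise) interferers; the function name and the shrinkage rule are illustrative assumptions.

```python
import numpy as np

def smi_weights(R, s, noise_floor=None):
    """Sample-matrix-inversion beamformer weights w ~ R^{-1} s.
    If a thermal-noise floor is supplied, the sample covariance is first
    'redefined' by shrinking its eigenvalues toward zero at that floor.
    Hedged sketch of the idea only."""
    if noise_floor is not None:
        vals, vecs = np.linalg.eigh(R)
        vals = np.maximum(vals - noise_floor, 1e-6 * noise_floor)
        R = (vecs * vals) @ vecs.conj().T
    w = np.linalg.solve(R, s)
    return w / (s.conj() @ w)          # normalise the desired-signal response
```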
NASA Astrophysics Data System (ADS)
Mustać, Marija; Tkalčić, Hrvoje; Burky, Alexander L.
2018-01-01
Moment tensor (MT) inversion studies of events in The Geysers geothermal field mostly focused on microseismicity and found a large number of earthquakes with significant non-double-couple (non-DC) seismic radiation. Here we concentrate on the largest events in the area in recent years using a hierarchical Bayesian MT inversion. Initially, we show that the non-DC components of the MT can be reliably retrieved using regional waveform data from a small number of stations. Subsequently, we present results for a number of events and show that accounting for noise correlations can lead to retrieval of a lower isotropic (ISO) component and significantly different focal mechanisms. We compute the Bayesian evidence to compare solutions obtained with different assumptions of the noise covariance matrix. Although a diagonal covariance matrix produces a better waveform fit, inversions that account for noise correlations via an empirically estimated noise covariance matrix account for interdependences of data errors and are preferred from a Bayesian point of view. This implies that improper treatment of data noise in waveform inversions can result in fitting the noise and misinterpreting the non-DC components. Finally, one of the analyzed events is characterized as predominantly DC, while the others still have significant non-DC components, probably as a result of crack opening, which is a reasonable hypothesis for The Geysers geothermal field geological setting.
Critical examination of the uniformity requirements for single-photon emission computed tomography.
O'Connor, M K; Vermeersch, C
1991-01-01
It is generally recognized that single-photon emission computed tomography (SPECT) imposes very stringent requirements on gamma camera uniformity to prevent the occurrence of ring artifacts. The purpose of this study was to examine the relationship between nonuniformities in the planar data and the magnitude of the consequential ring artifacts in the transaxial data, and how the perception of these artifacts is influenced by factors such as reconstruction matrix size, reconstruction filter, and image noise. The study indicates that the relationship between ring artifact magnitude and image noise is essentially independent of the acquisition or reconstruction matrix sizes, but is strongly dependent upon the type of smoothing filter applied during the reconstruction process. Furthermore, the degree to which a ring artifact can be perceived above image noise is dependent on the size and location of the nonuniformity in the planar data, with small nonuniformities (1-2 pixels wide) close to the center of rotation being less perceptible than those further out (8-20 pixels). Small defects or nonuniformities close to the center of rotation are thought to cause the greatest potential corruption to tomographic data. The study indicates that such may not be the case. Hence the uniformity requirements for SPECT may be less demanding than was previously thought.
Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.
Yang, Yingdong; Mao, Xuchu; Tian, Weifeng
2016-06-08
Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition on the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate, and it is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method handles the relationship between the geometric model and the noise error better. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment conducted on a selected campus demonstrates the effectiveness of the approach. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.
Performance Analysis of ICA in Sensor Array
Cai, Xin; Wang, Xiang; Huang, Zhitao; Wang, Fenghua
2016-01-01
As the best-known scheme in the field of Blind Source Separation (BSS), Independent Component Analysis (ICA) has been intensively used in various domains, including biomedical and acoustics applications, cooperative or non-cooperative communication, etc. While sensor arrays are involved in most of these applications, the influence of practical factors on the performance of ICA has not yet been sufficiently investigated. In this manuscript, the issue is researched by taking the typical antenna array as an illustrative example. Factors taken into consideration include the environmental noise level, the properties of the array, and those of the radiators. We analyze the analytic relationship between the noise variance, the source variance, the condition number of the mixing matrix, and the optimal signal to interference-plus-noise ratio, as well as the relationship between the singularity of the mixing matrix and the practical factors concerned. Special attention is paid to situations where the mixing process becomes (nearly) singular, since such circumstances are critical in applications. The results and conclusions obtained should be instructive when applying ICA algorithms to mixtures from sensor arrays. Moreover, an effective countermeasure against the cases of singular mixtures has been proposed on the basis of the previous analysis. Experiments validating the theoretical conclusions as well as the effectiveness of the proposed scheme have been included. PMID:27164100
Artifacts reduction in VIR/Dawn data.
Carrozzo, F G; Raponi, A; De Sanctis, M C; Ammannito, E; Giardino, M; D'Aversa, E; Fonte, S; Tosi, F
2016-12-01
Remote sensing images are generally affected by different types of noise that degrade the quality of the spectral data (i.e., stripes and spikes). Hyperspectral images returned by a Visible and InfraRed (VIR) spectrometer onboard the NASA Dawn mission exhibit residual systematic artifacts. VIR is an imaging spectrometer coupling high spectral and spatial resolutions in the visible and infrared spectral domain (0.25-5.0 μm). VIR data present one type of noise that may mask or distort real features (i.e., spikes and stripes), which may lead to misinterpretation of the surface composition. This paper presents a technique for the minimization of artifacts in VIR data that include a new instrument response function combining ground and in-flight radiometric measurements, correction of spectral spikes, odd-even band effects, systematic vertical stripes, high-frequency noise, and comparison with ground telescopic spectra of Vesta and Ceres. We developed a correction of artifacts in a two steps process: creation of the artifacts matrix and application of the same matrix to the VIR dataset. In the approach presented here, a polynomial function is used to fit the high frequency variations. After applying these corrections, the resulting spectra show improvements of the quality of the data. The new calibrated data enhance the significance of results from the spectral analysis of Vesta and Ceres.
Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.
Dillier, Norbert; Lai, Wai Kong
2015-06-11
The Nucleus(®) 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech in noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal to noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and back) is truly remarkable and comparable to the performance of normal hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may improve their speech recognition in noisy environments significantly using these directional microphone-processing options.
Asymptotic Cramer-Rao bounds for Morlet wavelet filter bank transforms of FM signals
NASA Astrophysics Data System (ADS)
Scheper, Richard
2002-03-01
Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
Correlated noise in the COBE DMR sky maps
NASA Technical Reports Server (NTRS)
Lineweaver, C. H.; Smoot, G. F.; Bennett, C. L.; Wright, E. L.; Tenorio, L.; Kogut, A.; Keegstra, P. B.; Hinshaw, G.; Banday, A. J.
1994-01-01
The Cosmic Background Explorer Satellite Differential Radiometer (COBE DMR) sky maps contain low-level correlated noise. We obtain estimates of the amplitude and pattern of the correlated noise from three techniques: angular averages of the covariance matrix, Monte Carlo simulations of two-point correlation functions, and direct analysis of the DMR maps. The results from the three methods are mutually consistent. The noise covariance matrix of a DMR sky map is diagonal to an accuracy of better than 1%. For a given sky pixel, the dominant noise covariance occurs with the ring of pixels at an angular separation of 60 deg due to the 60 deg separation of the DMR horns. The mean covariance at 60 deg is 0.45% (+0.18, -0.14) of the mean variance. Additionally, the variance in a given pixel is 0.7% greater than would be expected from a single-beam experiment with the same noise properties. Autocorrelation functions suffer from an approximately 1.5 sigma positive bias at 60 deg while cross-correlations have no bias. Published COBE DMR results are not significantly affected by correlated noise.
NASA Astrophysics Data System (ADS)
Rekapalli, Rajesh; Tiwari, R. K.; Sen, Mrinal K.; Vedanti, Nimisha
2017-05-01
Noise and data gaps complicate seismic data processing and subsequently cause difficulties in geological interpretation. We discuss a recent development and application of the Multi-channel Time Slice Singular Spectrum Analysis (MTSSSA) for 3D seismic data de-noising in the time domain. In addition, L1 norm based simultaneous data gap filling of 3D seismic data using MTSSSA is also discussed. We discriminate noise from individual time slices of 3D volumes by analyzing the eigentriplets of the trajectory matrix. We first tested the efficacy of the method on 3D synthetic seismic data contaminated with noise and then applied it to the post-stack seismic reflection data acquired from the Sleipner CO2 storage site (pre and post CO2 injection) in Norway. Our analysis suggests that the MTSSSA algorithm is efficient in enhancing the S/N for better identification of amplitude anomalies along with simultaneous data gap filling. The bright spots identified in the de-noised data indicate upward migration of CO2 towards the top of the Utsira formation. The reflections identified by applying MTSSSA to pre- and post-injection data correlate well with the geology of the Southern Viking Graben (SVG).
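The single-channel core of the technique (trajectory-matrix embedding, eigentriplet selection, diagonal averaging) can be sketched as below; this is a generic SSA illustration for one 1D series, not the multichannel time-slice implementation of MTSSSA, and the window length and rank are user choices.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, keep the leading eigentriplets, and reconstruct by
    anti-diagonal (Hankel) averaging."""
    N = len(x)
    K = N - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank (signal) part
    y = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):                                        # diagonal averaging
        y[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return y / cnt
```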
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with less assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with-and hence to predict-empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.
Structured decomposition design of partial Mueller matrix polarimeters.
Alenin, Andrey S; Scott Tyo, J
2015-07-01
Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt.46, 8364 (2007)10.1364/AO.46.008364APOPAI1559-128X]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.
High-Frequency Response and Voltage Noise in Magnetic Nanocomposites
NASA Astrophysics Data System (ADS)
Buznikov, N. A.; Iakubov, I. T.; Rakhmanov, A. L.; Kugel, K. I.; Sboychakov, A. O.
2010-12-01
We study the noise spectra and high-frequency permeability of inhomogeneous magnetic materials consisting of single-domain magnetic nanoparticles embedded into an insulating matrix. Possible mechanisms of 1/f voltage noise in phase-separated manganites are analyzed. The material is modelled by a system of small ferromagnetic metallic droplets (magnetic polarons or ferrons) in an insulating antiferromagnetic or paramagnetic matrix. The electron transport is related to tunnelling of charge carriers between droplets. One source of the 1/f noise in such a system stems from fluctuations of the number of droplets with an extra electron. In the case of strong magnetic anisotropy, the 1/f noise can also arise from fluctuations of the magnetic moments of the ferrons. The high-frequency magnetic permeability of a nanocomposite film with magnetic particles in an insulating non-magnetic matrix is studied in detail. The case of strong magnetic dipole interaction and strong magnetic anisotropy of the ferromagnetic granules is considered. The composite is modelled by a cubic regular array of ferromagnetic particles. The high-frequency permeability tensor components are found as functions of frequency, temperature, ferromagnetic phase content, and magnetic anisotropy. The results demonstrate that magnetic dipole interaction leads to a shift of the resonance frequencies towards higher values, and that the nanocomposite film can have a rather high magnetic permeability in the microwave range.
Nonlinear Rheology in a Model Biological Tissue
NASA Astrophysics Data System (ADS)
Matoz-Fernandez, D. A.; Agoritsas, Elisabeth; Barrat, Jean-Louis; Bertin, Eric; Martens, Kirsten
2017-04-01
The rheological response of dense active matter is a topic of fundamental importance for many processes in nature such as the mechanics of biological tissues. One prominent way to probe mechanical properties of tissues is to study their response to externally applied forces. Using a particle-based model featuring random apoptosis and environment-dependent division rates, we evidence a crossover from linear flow to a shear-thinning regime with an increasing shear rate. To rationalize this nonlinear flow we derive a theoretical mean-field scenario that accounts for the interplay of mechanical and active noise in local stresses. These noises are, respectively, generated by the elastic response of the cell matrix to cell rearrangements and by the internal activity.
Fast Kalman Filter for Random Walk Forecast model
NASA Astrophysics Data System (ADS)
Saibaba, A.; Kitanidis, P. K.
2013-12-01
Kalman filtering is a fundamental tool in statistical time series analysis to understand the dynamics of large systems for which limited, noisy observations are available. However, standard implementations of the Kalman filter are prohibitive because they require O(N^2) in memory and O(N^3) in computational cost, where N is the dimension of the state variable. In this work, we focus our attention on the Random walk forecast model which assumes the state transition matrix to be the identity matrix. This model is frequently adopted when the data is acquired at a timescale that is faster than the dynamics of the state variables and there is considerable uncertainty as to the physics governing the state evolution. We derive an efficient representation for the a priori and a posteriori estimate covariance matrices as a weighted sum of two contributions - the process noise covariance matrix and a low rank term which contains eigenvectors from a generalized eigenvalue problem, which combines information from the noise covariance matrix and the data. We describe an efficient algorithm to update the weights of the above terms and the computation of eigenmodes of the generalized eigenvalue problem (GEP). The resulting algorithm for the Kalman filter with Random walk forecast model scales as O(N) or O(N log N), both in memory and computational cost. This opens up the possibility of real-time adaptive experimental design and optimal control in systems of much larger dimension than was previously feasible. For a small number of measurements (~ 300 - 400), this procedure can be made numerically exact. However, as the number of measurements increase, for several choices of measurement operators and noise covariance matrices, the spectrum of the (GEP) decays rapidly and we are justified in only retaining the dominant eigenmodes. We discuss tradeoffs between accuracy and computational cost. The resulting algorithms are applied to an example application from ray-based travel time tomography.
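For orientation, the dense-matrix baseline that the paper accelerates is the standard Kalman update with an identity state-transition matrix; a minimal sketch (illustrative names, no low-rank structure, so it still carries the O(N^2) memory and O(N^3) cost noted above) is:

```python
import numpy as np

def rw_kalman_step(m, P, y, H, Q, R):
    """One step of the standard Kalman filter under the random-walk forecast
    model (state transition = identity).  m, P: prior mean and covariance;
    y: data; H: measurement operator; Q, R: process and data noise covariances."""
    P_f = P + Q                                  # forecast covariance (state unchanged)
    S = H @ P_f @ H.T + R                        # innovation covariance
    K = np.linalg.solve(S, H @ P_f).T            # Kalman gain  P_f H^T S^{-1}
    m_new = m + K @ (y - H @ m)
    P_new = P_f - K @ H @ P_f
    return m_new, P_new
```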
A Robust Self-Alignment Method for Ship's Strapdown INS Under Mooring Conditions
Sun, Feng; Lan, Haiyu; Yu, Chunyang; El-Sheimy, Naser; Zhou, Guangtao; Cao, Tong; Liu, Hang
2013-01-01
Strapdown inertial navigation systems (INS) need an alignment process to determine the initial attitude matrix between the body frame and the navigation frame. The conventional alignment process is to compute the initial attitude matrix using the gravity and Earth rotation rate measurements. However, under mooring conditions, the inertial measurement unit (IMU) employed in a ship's strapdown INS often suffers from both intrinsic sensor noise components and external disturbance components caused by the motions of sea waves and wind waves, so a rapid and precise alignment of a ship's strapdown INS without any auxiliary information is hard to achieve. A robust solution is given in this paper to solve this problem. The inertial-frame-based alignment method is utilized to adapt to the mooring condition; most of the periodic low-frequency external disturbance components can be removed by the mathematical integration and averaging characteristic of this method. A novel prefilter named the hidden Markov model based Kalman filter (HMM-KF) is proposed to remove the relatively high-frequency error components. Unlike digital filters, the HMM-KF barely causes any time delay. The turntable, mooring, and sea experiments favorably validate the rapidity and accuracy of the proposed self-alignment method and the good de-noising performance of the HMM-KF. PMID:23799492
Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J
2011-10-07
Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult to tackle. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
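A prewhitening matched filter of the kind referred to above can be sketched as follows, assuming a vectorized region of interest, a set of candidate nodule-size templates, and whichever covariance approximation C is adopted (diagonal, 1D/2D correlation, or NPS-derived); the function and variable names are illustrative.

```python
import numpy as np

def prewhitened_match(data, templates, C):
    """Score candidate nodule templates s_k against a data vector g with a
    prewhitening matched filter: t_k = s_k^T C^{-1} g / sqrt(s_k^T C^{-1} s_k).
    Returns the index of the best-matching template (e.g., nodule size)."""
    Cinv_g = np.linalg.solve(C, data)
    scores = []
    for s in templates:
        num = s @ Cinv_g
        den = np.sqrt(s @ np.linalg.solve(C, s))
        scores.append(num / den)
    return int(np.argmax(scores))
```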
Target detection in GPR data using joint low-rank and sparsity constraints
NASA Astrophysics Data System (ADS)
Bouzerdoum, Abdesselam; Tivive, Fok Hing Chi; Abeynayake, Canicious
2016-05-01
In ground penetrating radars, background clutter, which comprises the signals backscattered from the rough, uneven ground surface and the background noise, impairs the visualization of buried objects and subsurface inspections. In this paper, a clutter mitigation method is proposed for target detection. The removal of background clutter is formulated as a constrained optimization problem to obtain a low-rank matrix and a sparse matrix. The low-rank matrix captures the ground surface reflections and the background noise, whereas the sparse matrix contains the target reflections. An optimization method based on split-Bregman algorithm is developed to estimate these two matrices from the input GPR data. Evaluated on real radar data, the proposed method achieves promising results in removing the background clutter and enhancing the target signature.
A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.
2011-01-01
Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
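The time-domain ANC step can be illustrated with a generic LMS canceller driven by the reference microphone; this is a textbook sketch under assumed filter length and step size, not the exact processing chain used in the experiment.

```python
import numpy as np

def lms_anc(primary, reference, n_taps=32, mu=1e-3):
    """Adaptive noise cancellation: an FIR filter driven by the reference
    (background-noise) microphone is adapted with LMS so its output tracks
    the correlated noise in the primary (array) microphone; the residual e
    approximates the desired source signal."""
    w = np.zeros(n_taps)
    e = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # estimate of the correlated noise
        e[n] = primary[n] - y               # noise-cancelled output
        w += 2 * mu * e[n] * x              # LMS weight update
    return e
```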
A robust color signal processing with wide dynamic range WRGB CMOS image sensor
NASA Astrophysics Data System (ADS)
Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi
2011-01-01
We have developed a robust color reproduction methodology based on a simple calculation with a new color matrix, using the formerly developed wide dynamic range WRGB lateral overflow integration capacitor (LOFIC) CMOS image sensor. The image sensor was fabricated using a 0.18 μm CMOS technology and has a 45-degree oblique pixel array, a 4.2 μm effective pixel pitch, and W pixels. A W pixel was formed by replacing one of the two G pixels in the Bayer RGB color filter. The W pixel has a high sensitivity across the visible light waveband. An emerald green and yellow (EGY) signal is generated from the difference between the W signal and the sum of the RGB signals. This EGY signal mainly includes emerald green and yellow light. These colors are difficult to reproduce accurately with the conventional simple linear matrix because their wavelengths lie in the valleys of the spectral sensitivity characteristics of the RGB pixels. A new linear matrix based on the EGY-RGB signal was developed. Using this simple matrix, highly accurate color processing with a large margin against sensitivity fluctuation and noise has been achieved.
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations result in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources that, when scaled and added, reproduce the input cross-spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated by determining the virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of the input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources in both the near and far fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
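A minimal sketch of the virtual-source decomposition step, assuming the near-field channels are rows of a NumPy array and using Welch cross-spectral estimates, might look like this (scaling conventions simplified; names are illustrative):

```python
import numpy as np
from scipy.signal import csd

def virtual_source_spectra(signals, fs, nperseg=1024):
    """Build the cross-spectral matrix (CSM) of the input channels at each
    frequency and decompose it with SVD; the singular values are the power
    spectra of a set of mutually incoherent 'virtual sources'."""
    n_ch = signals.shape[0]
    f, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)
    sv = np.linalg.svd(G, compute_uv=False)   # virtual source auto-spectra
    return f, sv                              # shape (n_freq, n_ch)
```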
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
A renewal jump-diffusion process with threshold dividend strategy
NASA Astrophysics Data System (ADS)
Li, Bo; Wu, Rong; Song, Min
2009-06-01
In this paper, we consider a jump-diffusion risk process with the threshold dividend strategy. Both the distributions of the inter-arrival times and the claims are assumed to be in the class of phase-type distributions. The expected discounted dividend function and the Laplace transform of the ruin time are discussed. Motivated by Asmussen [S. Asmussen, Stationary distributions for fluid flow models with or without Brownian noise, Stochastic Models 11 (1) (1995) 21-49], instead of studying the original process, we study the constructed fluid flow process and their closed-form formulas are obtained in terms of matrix expression. Finally, numerical results are provided to illustrate the computation.
Akeroyd, Michael A; Arlinger, Stig; Bentler, Ruth A; Boothroyd, Arthur; Dillier, Norbert; Dreschler, Wouter A; Gagné, Jean-Pierre; Lutman, Mark; Wouters, Jan; Wong, Lena; Kollmeier, Birger
2015-01-01
To provide guidelines for the development of two types of closed-set speech-perception tests that can be applied and interpreted in the same way across languages. The guidelines cover the digit triplet and the matrix sentence tests that are most commonly used to test speech recognition in noise. They were developed by a working group on Multilingual Speech Tests of the International Collegium of Rehabilitative Audiology (ICRA). The recommendations are based on reviews of existing evaluations of the digit triplet and matrix tests as well as on the research experience of members of the ICRA Working Group. They represent the results of a consensus process. The resulting recommendations deal with: Test design and word selection; Talker characteristics; Audio recording and stimulus preparation; Masking noise; Test administration; and Test validation. By following these guidelines for the development of any new test of this kind, clinicians and researchers working in any language will be able to perform tests whose results can be compared and combined in cross-language studies.
NASA Astrophysics Data System (ADS)
Odinokov, S. B.; Petrov, A. V.
1995-10-01
Mathematical models of the components of a vector-matrix optoelectronic multiplier are considered. Perturbing factors influencing a real optoelectronic system (noise and errors of the radiation sources and detectors, nonlinearity of the analogue-to-digital converter, and nonideal optical systems) are taken into account. Analytic expressions are obtained relating the precision of such a multiplier to the probability of a one-bit error, to the parameters describing the quality of the multiplier components, and to the quality of the optical system of the processor. Various methods of increasing the dynamic range of the multiplier are considered at the technical systems level.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
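As one concrete member of this family, the multiplicative updates for the generalized Kullback-Leibler divergence (appropriate for Poisson-like, signal-dependent noise) can be sketched as follows; the initialization, rank, and iteration count are arbitrary illustrative choices, not the paper's algorithm in full.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-12):
    """Non-negative matrix factorization V ~ W H using the multiplicative
    updates for the generalized Kullback-Leibler divergence."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    ones = np.ones((m, n))
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)   # update H
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)   # update W
    return W, H
```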
Approximate method of variational Bayesian matrix factorization/completion with sparse prior
NASA Astrophysics Data System (ADS)
Kawasumi, Ryota; Takeda, Koujin
2018-05-01
We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of a sparse matrix is a Laplace distribution by taking matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. By our solution, we also numerically evaluate the performance of a sparse matrix reconstruction in matrix factorization, and completion of a missing matrix element in matrix completion.
Effect of atmospherics on beamforming accuracy
NASA Technical Reports Server (NTRS)
Alexander, Richard M.
1990-01-01
Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as significantly as expected.
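For context, a minimal Bartlett estimate for a line array can be sketched as below; wind effects of the first kind described above enter simply as a change in the propagation speed c used to build the steering vectors. The geometry and names are illustrative assumptions, not the simulation code of the study.

```python
import numpy as np

def bartlett_bearing(R, positions, freq, c=343.0, angles=np.linspace(0, 180, 361)):
    """Bartlett (conventional) beamformer for a line array: scan steering
    vectors over candidate bearings and return the angle maximizing
    a(theta)^H R a(theta), where R is the spatial correlation matrix."""
    k = 2 * np.pi * freq / c
    power = []
    for th in np.deg2rad(angles):
        a = np.exp(-1j * k * positions * np.cos(th))   # plane-wave steering vector
        a /= np.sqrt(len(positions))
        power.append(np.real(a.conj() @ R @ a))
    return angles[int(np.argmax(power))]
```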
Removing non-stationary noise in spectrum sensing using matrix factorization
NASA Astrophysics Data System (ADS)
van Bloem, Jan-Willem; Schiphorst, Roel; Slump, Cornelis H.
2013-12-01
Spectrum sensing is key to many applications, such as dynamic spectrum access (DSA) systems, and to telecom regulators who need to measure the utilization of frequency bands. The International Telecommunication Union (ITU) recommends a 10 dB threshold above the noise to decide whether a channel is occupied or not. However, radio frequency (RF) receiver front-ends are non-ideal. This means that the obtained data is distorted with noise and imperfections from the analog front-end. Within the front-end, the automatic gain control (AGC) circuitry mainly affects the sensing performance, as strong adjacent signals lift the noise level. To enhance the performance of spectrum sensing significantly, we focus in this article on techniques to remove the noise caused by the AGC from the sensing data. To do this we have applied matrix factorization techniques, i.e., SVD (singular value decomposition) and NMF (non-negative matrix factorization), which enable signal-space analysis. In addition, we use live measurement results to verify the performance and to remove the effects of the AGC from the sensing data using the above-mentioned techniques applied to block-wise available spectrum data. In this article it is shown that the occupancy in the industrial, scientific and medical (ISM) band, obtained by using energy detection (ITU recommended threshold), can overestimate spectrum usage by 60%.
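A loose sketch of how signal-space analysis and the ITU-style threshold could be combined on block-wise spectrum data is given below; interpreting the leading SVD component as the AGC-induced noise-floor lift is an assumption of this illustration, not necessarily the exact processing of the article.

```python
import numpy as np

def occupancy_after_svd(P_db, noise_floor_db, rank=1, margin_db=10.0):
    """P_db: block-wise spectrum data (time x frequency, in dB).  Remove the
    leading SVD component (assumed broadband AGC noise-floor lift), then
    score channel occupancy with a 'noise floor + 10 dB' rule."""
    U, s, Vt = np.linalg.svd(P_db, full_matrices=False)
    agc_part = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    cleaned = P_db - agc_part + agc_part.mean()     # keep the average level
    occupied = cleaned > (noise_floor_db + margin_db)
    return occupied.mean(axis=0)                    # per-frequency duty cycle
```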
Andreev, Victor P; Rejtar, Tomas; Chen, Hsuan-Shen; Moskovets, Eugene V; Ivanov, Alexander R; Karger, Barry L
2003-11-15
A new denoising and peak picking algorithm (MEND, matched filtration with experimental noise determination) for analysis of LC-MS data is described. The algorithm minimizes both random and chemical noise in order to determine MS peaks corresponding to sample components. Noise characteristics in the data set are experimentally determined and used for efficient denoising. MEND is shown to enable low-intensity peaks to be detected, thus providing additional useful information for sample analysis. The process of denoising, performed in the chromatographic time domain, does not distort peak shapes in the m/z domain, allowing accurate determination of MS peak centroids, including low-intensity peaks. MEND has been applied to denoising of LC-MALDI-TOF-MS and LC-ESI-TOF-MS data for tryptic digests of protein mixtures. MEND is shown to suppress chemical and random noise and baseline fluctuations, as well as filter out false peaks originating from the matrix (MALDI) or mobile phase (ESI). In addition, MEND is shown to be effective for protein expression analysis by allowing selection of a large number of differentially expressed ICAT pairs, due to increased signal-to-noise ratio and mass accuracy.
Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.
Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald
2018-04-01
The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in case of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustical signals even for negative signal-to-noise ratio. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
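Editor's note: the sketch below shows a generic, textbook-style robust PCA (principal component pursuit) split of a cross-spectral matrix into a low-rank part (acoustics) and a sparse part (boundary layer noise). It is not necessarily the exact RPCA algorithm or prewhitening strategy of the paper; the parameter defaults and the test CSM are assumptions.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding, valid for complex entries."""
    mag = np.abs(X)
    return X * (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12))

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Basic inexact augmented-Lagrangian RPCA: D ~ L (low rank) + S (sparse)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.sum(np.abs(D))
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

# Hypothetical wall-pressure CSM: rank-1 acoustic part plus a (nearly) diagonal
# boundary-layer noise contribution, assumed spatially decorrelated.
rng = np.random.default_rng(1)
a = rng.standard_normal(32) + 1j * rng.standard_normal(32)
csm = np.outer(a, a.conj()) + np.diag(5.0 * rng.random(32))
L_hat, S_hat = rpca(csm)
```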
Treating Sample Covariances for Use in Strongly Coupled Atmosphere-Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Smith, Polly J.; Lawless, Amos S.; Nichols, Nancy K.
2018-01-01
Strongly coupled data assimilation requires cross-domain forecast error covariances; information from ensembles can be used, but limited sampling means that ensemble derived error covariances are routinely rank deficient and/or ill-conditioned and marred by noise. Thus, they require modification before they can be incorporated into a standard assimilation framework. Here we compare methods for improving the rank and conditioning of multivariate sample error covariance matrices for coupled atmosphere-ocean data assimilation. The first method, reconditioning, alters the matrix eigenvalues directly; this preserves the correlation structures but does not remove sampling noise. We show that it is better to recondition the correlation matrix rather than the covariance matrix as this prevents small but dynamically important modes from being lost. The second method, model state-space localization via the Schur product, effectively removes sample noise but can dampen small cross-correlation signals. A combination that exploits the merits of each is found to offer an effective alternative.
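Editor's note: a minimal sketch of the two treatments compared above, applied to a small-ensemble sample covariance. The eigenvalue-floor rule used for reconditioning and the Gaussian taper used for Schur-product localization are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def recondition_correlation(B, target_cond=100.0):
    """Recondition the correlation matrix of B by raising small eigenvalues."""
    d = np.sqrt(np.diag(B))
    C = B / np.outer(d, d)                       # correlation matrix
    w, V = np.linalg.eigh(C)
    floor = w.max() / target_cond                # smallest allowed eigenvalue
    C_new = (V * np.maximum(w, floor)) @ V.T
    return C_new * np.outer(d, d)                # back to covariance

def schur_localize(B, length_scale=5.0):
    """Elementwise (Schur) product with a Gaussian localization matrix."""
    n = B.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Loc = np.exp(-0.5 * ((i - j) / length_scale) ** 2)
    return B * Loc

# Hypothetical rank-deficient sample covariance from a 10-member ensemble.
rng = np.random.default_rng(2)
ensemble = rng.standard_normal((10, 40))         # 10 members, 40 state variables
B_sample = np.cov(ensemble, rowvar=False)
B_recond = recondition_correlation(B_sample)
B_local = schur_localize(B_sample)
```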
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki
2017-01-01
This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974
On a stochastic control method for weakly coupled linear systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kwong, R. H.
1972-01-01
The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller makes measurements only on its own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers are to independently generate suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model is proposed in which the influence of one system on the other is replaced by a white noise process. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.
Füllgrabe, Christian; Rosen, Stuart
2016-01-01
With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, C., E-mail: cbarone@unisa.it; Mauro, C.; Pagano, S.
Carbon nanotubes added to polymer and epoxy matrices are compounds of interest for applications in electronics and aerospace. The realization of high-performance devices based on these materials can profit from the investigation of their electric noise properties, as this gives more detailed insight into the basic charge carrier transport mechanisms at work. The dc and electrical noise characteristics of different polymer/carbon nanotubes composites have been analyzed from 10 to 300 K. The results suggest that all these systems can be regarded as random resistive networks of tunnel junctions formed by adjacent carbon nanotubes. However, in the high-temperature regime, contributions deriving from other possible mechanisms cannot be separated using dc information alone. A transition from a fluctuation-induced tunneling process to a thermally activated regime is instead revealed by electric noise spectroscopy. In particular, a crossover is found from a two-level tunneling mechanism, operating at low temperatures, to resistance fluctuations of a percolative network, in the high-temperature region. The observed behavior of 1/f noise seems to be a general feature for highly conductive samples, independent of the type of polymer matrix and of the nanotube density.
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
NASA Technical Reports Server (NTRS)
Costanza, Bryan T.; Horne, William C.; Schery, S. D.; Babb, Alex T.
2011-01-01
The Aero-Physics Branch at NASA Ames Research Center utilizes a 32- by 48-inch subsonic wind tunnel for aerodynamics research. The feasibility of acquiring acoustic measurements with a phased microphone array was recently explored. Acoustic characterization of the wind tunnel was carried out with a floor-mounted 24-element array and two ceiling-mounted speakers. The minimum speaker level for accurate level measurement was evaluated for various tunnel speeds up to a Mach number of 0.15 and streamwise speaker locations. A variety of post-processing procedures, including conventional beamforming and deconvolutional processing such as TIDY, were used. The speaker measurements, with and without flow, were used to compare actual versus simulated in-flow speaker calibrations. Data for wind-off speaker sound and wind-on tunnel background noise were found valuable for predicting sound levels for which the speakers were detectable when the wind was on. Speaker sources were detectable 2 - 10 dB below the peak background noise level with conventional data processing. The effectiveness of background noise cross-spectral matrix subtraction was assessed and found to improve the detectability of test sound sources by approximately 10 dB over a wide frequency range.
Chao, Jerry; Ward, E. Sally; Ober, Raimund J.
2012-01-01
The high quantum efficiency of the charge-coupled device (CCD) has rendered it the imaging technology of choice in diverse applications. However, under extremely low light conditions where few photons are detected from the imaged object, the CCD becomes unsuitable as its readout noise can easily overwhelm the weak signal. An intended solution to this problem is the electron-multiplying charge-coupled device (EMCCD), which stochastically amplifies the acquired signal to drown out the readout noise. Here, we develop the theory for calculating the Fisher information content of the amplified signal, which is modeled as the output of a branching process. Specifically, Fisher information expressions are obtained for a general and a geometric model of amplification, as well as for two approximations of the amplified signal. All expressions pertain to the important scenario of a Poisson-distributed initial signal, which is characteristic of physical processes such as photon detection. To facilitate the investigation of different data models, a “noise coefficient” is introduced which allows the analysis and comparison of Fisher information via a scalar quantity. We apply our results to the problem of estimating the location of a point source from its image, as observed through an optical microscope and detected by an EMCCD. PMID:23049166
Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments
NASA Technical Reports Server (NTRS)
Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.
1973-01-01
A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of optimal speed and load at a previously assigned noise level.
1975-09-30
In such systems, a linear model results in an object f being mapped into an image g by a point spread function matrix H; thus, with noise n, g = Hf + n (1). The simplest linear models for imaging systems are given by space invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is ... {I1, ..., Ik-1} is a set of two-dimensional indices, each distinct and prior to k. Modeling procedure: to derive the linear predictor (block LP of figure
Performance assessment of a data processing chain for THz imaging
NASA Astrophysics Data System (ADS)
Catapano, Ilaria; Ludeno, Giovanni; Soldovieri, Francesco
2017-04-01
Nowadays, TeraHertz (THz) imaging is receiving considerable attention as a very high resolution diagnostic tool in many applicative fields, including security, cultural heritage, material characterization, and civil engineering diagnostics. This widespread use of THz waves is due to their non-ionizing nature, their capability of penetrating non-metallic opaque materials, and the technological advances that have allowed the commercialization of compact, flexible and portable systems. However, the effectiveness of THz imaging depends strongly on the adopted data processing, which aims at improving the imaging performance of the hardware device. In particular, data processing is required to mitigate detrimental and unavoidable effects such as noise and signal attenuation, as well as to correct for the sample surface topography. With respect to data processing, we have recently proposed a strategy involving three different steps aimed at reducing noise, filtering out undesired signal introduced by the adopted THz system, and performing surface topography correction [1]. The first step regards noise filtering and exploits a procedure based on the Singular Value Decomposition (SVD) [2] of the data matrix, which requires neither knowledge of the noise level nor the use of a reference signal. The second step aims at removing the undesired signal that we have found to be introduced by the adopted Z-Omega Fiber-Coupled Terahertz Time Domain (FICO) system. Indeed, when the system works in a high-speed mode, an undesired low amplitude peak always occurs at the same time instant from the beginning of the observation time window and needs to be removed from the useful data matrix in order to avoid a wrong interpretation of the imaging results. The third step of the considered data processing chain is a topographic correction, which is needed in order to properly image the sample surface and its inner structure. This procedure performs an automatic alignment of the first peak of the measured waveforms by exploiting a priori information on the focus distance at which the specimen under test must be located during the measurement phase. The usefulness of the proposed data processing chain has been widely assessed in the last few months by surveying several specimens made of different materials and representative of objects of interest for civil engineering and cultural heritage diagnostics. At the conference, we will show the signal processing chain in detail and present several achieved results. REFERENCES [1] I. Catapano, F. Soldovieri, "A Data Processing Chain for Terahertz Imaging and Its Use in Artwork Diagnostics". J Infrared Milli Terahz Waves, pp.13, Nov. 2016. [2] M. Bertero and P. Boccacci (1998), Introduction to Inverse Problems in Imaging, Bristol: Institute of Physics Publishing.
NASA Technical Reports Server (NTRS)
Horne, William C.
2011-01-01
Measurements of background noise were recently obtained with a 24-element phased microphone array in the test section of the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at speeds of 50 to 100 knots (27.5 to 51.4 m/s). The array was mounted in an aerodynamic fairing positioned with the array center 1.2 m from the floor and 16 m from the tunnel centerline. The array plate was mounted flush with the fairing surface as well as recessed 0.5 in. (1.27 cm) behind a porous Kevlar screen. Wind-off speaker measurements were also acquired every 15° on a 10 m semicircular arc to assess the directional resolution of the array with various processing algorithms, and to estimate minimum detectable source strengths for future wind tunnel aeroacoustic studies. The dominant background noise of the facility is from the six drive fans downstream of the test section and the first set of turning vanes. Directional array response and processing methods such as background-noise cross-spectral-matrix subtraction suggest that sources 10-15 dB weaker than the background can be detected.
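Editor's note: a minimal sketch of conventional background-noise cross-spectral-matrix (CSM) subtraction, as referenced in this and the earlier wind tunnel abstract: the CSM measured with the source off is subtracted from the CSM measured with the source on, and the corrected CSM is then beamformed (optionally with diagonal removal). The snapshot model, steering vectors, and numbers are hypothetical placeholders.

```python
import numpy as np

def csm_from_snapshots(snapshots):
    """snapshots: (n_blocks, n_mics) complex Fourier coefficients at one frequency."""
    n = snapshots.shape[0]
    return snapshots.conj().T @ snapshots / n

def beamform_map(csm, steering, diag_removal=False):
    """Conventional beamforming power for each steering vector (rows of `steering`)."""
    C = csm.copy()
    if diag_removal:
        np.fill_diagonal(C, 0.0)
    m = csm.shape[0]
    return np.real(np.einsum("gi,ij,gj->g", steering.conj(), C, steering)) / m**2

# Hypothetical data: CSMs with background+source and background-only (source off).
rng = np.random.default_rng(3)
n_mics, n_blocks = 24, 500
noise_on = rng.standard_normal((n_blocks, n_mics)) + 1j * rng.standard_normal((n_blocks, n_mics))
noise_off = rng.standard_normal((n_blocks, n_mics)) + 1j * rng.standard_normal((n_blocks, n_mics))
source = np.exp(1j * 2 * np.pi * rng.random(n_mics))          # fixed source phase pattern
signal = 0.3 * np.outer(rng.standard_normal(n_blocks), source)

csm_on = csm_from_snapshots(noise_on + signal)
csm_bg = csm_from_snapshots(noise_off)
csm_corrected = csm_on - csm_bg                                 # background subtraction

steering = np.vstack([source, np.exp(1j * 2 * np.pi * rng.random(n_mics))])
print(beamform_map(csm_corrected, steering, diag_removal=True))
```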
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.
Assessment of Infrared Sounder Radiometric Noise from Analysis of Spectral Residuals
NASA Astrophysics Data System (ADS)
Dufour, E.; Klonecki, A.; Standfuss, C.; Tournier, B.; Serio, C.; Masiello, G.; Tjemkes, S.; Stuhlmann, R.
2016-08-01
For the preparation and performance monitoring of the future generation of hyperspectral InfraRed sounders dedicated to the precise vertical profiling of the atmospheric state, such as the Meteosat Third Generation hyperspectral InfraRed Sounder, a reliable assessment of the instrument radiometric error covariance matrix is needed. Ideally, an in-flight estimation of the radiometric noise is recommended, as certain sources of noise can be driven by the spectral signature of the observed Earth/atmosphere radiance. Also, unknown correlated noise sources, generally related to incomplete knowledge of the instrument state, can be present, so a characterization of the noise spectral correlation is also needed. A methodology, relying on the analysis of post-retrieval spectral residuals, is designed and implemented to derive the covariance matrix in-flight on the basis of Earth scene measurements. This methodology is successfully demonstrated using IASI observations as MTG-IRS proxy data and makes it possible to highlight anticipated correlation structures explained by apodization and micro-vibration effects (ghost). This analysis is corroborated by a parallel estimation based on an IASI black body measurement dataset and the results of an independent micro-vibration model.
Development of the Russian matrix sentence test.
Warzybok, Anna; Zokoll, Melanie; Wardenga, Nina; Ozimek, Edward; Boboshko, Maria; Kollmeier, Birger
2015-01-01
To develop the Russian matrix sentence test for speech intelligibility measurements in noise. Test development included recordings, optimization of speech material, and evaluation to investigate the equivalency of the test lists and training. For each of the 500 test items, the speech intelligibility function, speech reception threshold (SRT: signal-to-noise ratio, SNR, that provides 50% speech intelligibility), and slope was obtained. The speech material was homogenized by applying level corrections. In evaluation measurements, speech intelligibility was measured at two fixed SNRs to compare list-specific intelligibility functions. To investigate the training effect and establish reference data, speech intelligibility was measured adaptively. Overall, 77 normal-hearing native Russian listeners. The optimization procedure decreased the spread in SRTs across words from 2.8 to 0.6 dB. Evaluation measurements confirmed that the 16 test lists were equivalent, with a mean SRT of -9.5 ± 0.2 dB and a slope of 13.8 ± 1.6%/dB. The reference SRT, -8.8 ± 0.8 dB for the open-set and -9.4 ± 0.8 dB for the closed-set format, increased slightly for noise levels above 75 dB SPL. The Russian matrix sentence test is suitable for accurate and reliable speech intelligibility measurements in noise.
Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Miura, Masahiro; Yasuno, Yoshiaki
2016-01-01
This paper describes a complex correlation mapping algorithm for optical coherence angiography (cmOCA). The proposed algorithm avoids the signal-to-noise ratio dependence and exhibits low noise in vasculature imaging. The complex correlation coefficient of the signals, rather than that of the measured data, is estimated, and two-step averaging is introduced. Algorithms for motion artifact removal based on non-perfusing tissue detection using correlation are developed. The algorithms are implemented with Jones-matrix OCT. Simultaneous imaging of pigmented tissue and vasculature is also achieved using degree of polarization uniformity imaging with cmOCA. An application of cmOCA to in vivo posterior human eyes is presented to demonstrate that high-contrast images of patients’ eyes can be obtained. PMID:27446673
Using Network Theory to Understand Seismic Noise in Dense Arrays
NASA Astrophysics Data System (ADS)
Riahi, N.; Gerstoft, P.
2015-12-01
Dense seismic arrays offer an opportunity to study anthropogenic seismic noise sources with unprecedented detail. Man-made sources are typically high in frequency and low in intensity, and propagate as surface waves. As a result, attenuation restricts their measurable footprint to a small subset of sensors. Medium heterogeneities can further introduce wave front perturbations that limit processing based on travel time. We demonstrate a non-parametric technique that can reliably identify very local events within the array as a function of frequency and time without using travel times. The approach estimates the non-zero support of the array covariance matrix and then uses network analysis tools to identify clusters of sensors that are sensing a common source. We verify the method on simulated data and then apply it to the Long Beach (CA) geophone array. The method exposes a helicopter traversing the array, oil production facilities with different characteristics, and the fact that noise sources near roads tend to be around 10-20 Hz.
Multi beam observations of cosmic radio noise using a VHF radar with beam forming by a Butler matrix
NASA Astrophysics Data System (ADS)
Renkwitz, T.; Singer, W.; Latteck, R.; Rapp, M.
2011-08-01
The Leibniz-Institute of Atmospheric Physics (IAP) in Kühlungsborn started to install a new MST radar on the North-Norwegian island Andøya (69.30° N, 16.04° E) in 2009. The new Middle Atmosphere Alomar Radar System (MAARSY) replaces the previous ALWIN radar, which has been successfully operated for more than 10 years. The MAARSY radar provides increased temporal and spatial resolution combined with a flexible sequential point-to-point steering of the radar beam. To increase the spatiotemporal resolution of the observations, a 16-port Butler matrix has been built and implemented in the radar. In conjunction with 64 Yagi antennas of the former ALWIN antenna array, the Butler matrix simultaneously provides 16 individual beams. The beam-forming capability of the Butler matrix arrangement has been verified by observing the galactic cosmic radio noise of the supernova remnant Cassiopeia A. Furthermore, this multi-beam configuration has been used in passive experiments to estimate the cosmic noise absorption at 53.5 MHz during events of enhanced solar and geomagnetic activity as indicators for enhanced ionization at altitudes below 90 km. These observations are well correlated with simultaneous observations of corresponding beams of the co-located imaging riometer AIRIS (69.14° N, 16.02° E) at 38.2 MHz. In addition, enhanced cosmic noise absorption goes along with enhanced electron densities at altitudes below about 90 km as observed with the co-located Saura MF radar using differential absorption and differential phase measurements.
Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Horne, William C.
2015-01-01
An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
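Editor's note: the sketch below is one illustrative interpretation of an eigenvalue-decomposition-based background subtraction, not the paper's exact formulation: the contaminated CSM is whitened by the background CSM (via its eigendecomposition), the whitened background (the identity) is subtracted, negative eigenvalues are clipped, and the result is un-whitened. This guards against the case where measured background auto-spectra exceed the combined signal-plus-background levels, which would make a plain subtraction indefinite.

```python
import numpy as np

def eig_background_subtract(csm_total, csm_background, eps=1e-12):
    # Eigendecompose the background CSM and form its square root and inverse square root.
    w_b, V_b = np.linalg.eigh(csm_background)
    w_b = np.maximum(w_b, eps)
    B_inv_sqrt = (V_b / np.sqrt(w_b)) @ V_b.conj().T
    B_sqrt = (V_b * np.sqrt(w_b)) @ V_b.conj().T
    # Whiten the total CSM, subtract the whitened background (identity), clip eigenvalues.
    W = B_inv_sqrt @ csm_total @ B_inv_sqrt
    w_w, V_w = np.linalg.eigh(0.5 * (W + W.conj().T))
    D = (V_w * np.maximum(w_w - 1.0, 0.0)) @ V_w.conj().T
    # Un-whiten to obtain a corrected, positive-semidefinite source CSM.
    return B_sqrt @ D @ B_sqrt

# Tiny illustration: a weak rank-1 "source" buried in a stronger diagonal background.
rng = np.random.default_rng(10)
a = np.exp(1j * 2 * np.pi * rng.random(12))
csm_bg = np.diag(1.0 + 0.5 * rng.random(12)).astype(complex)
csm_tot = csm_bg + 0.2 * np.outer(a, a.conj())
csm_src = eig_background_subtract(csm_tot, csm_bg)
```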
Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery
Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin
2017-01-01
This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
NASA Astrophysics Data System (ADS)
Wild, Walter James
1988-12-01
External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe is the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) is discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
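Editor's note: because the coding operation above is described by a circulant matrix, encoding and unbiased inverse-matrix decoding can both be carried out with FFTs (circulant matrices are diagonalized by the DFT). The 16-element binary code, object, and Poisson noise model below are hypothetical illustrations, not the probe's actual aperture pattern.

```python
import numpy as np

# Coded-aperture encoding g = H f + noise with a circulant H defined by a binary
# aperture code, and inverse-matrix decoding f_hat = H^{-1} g via the DFT.
rng = np.random.default_rng(4)
code = rng.integers(0, 2, size=16).astype(float)        # binary aperture sequence
f = np.zeros(16); f[5] = 100.0; f[11] = 40.0             # object: two "tumor" sites

H_eig = np.fft.fft(code)                                  # eigenvalues of circulant H
assert np.min(np.abs(H_eig)) > 1e-9                       # code must give an invertible H
g_clean = np.real(np.fft.ifft(H_eig * np.fft.fft(f)))     # g = H f (circular convolution)
g = rng.poisson(np.maximum(g_clean, 0.0)).astype(float)   # Poisson counting noise

f_hat = np.real(np.fft.ifft(np.fft.fft(g) / H_eig))       # simple inverse-matrix estimate
```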
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
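Editor's note: one capability listed above, the steady-state covariance of an open-loop stable system forced by white noise, amounts to solving a Lyapunov equation. The sketch below restates that computation in modern terms with hypothetical matrices; it is not ORACLS code.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Steady-state covariance X of dx = A x dt + w, E[w w^T] = Q, for a stable A,
# from the continuous Lyapunov equation A X + X A^T + Q = 0.
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])       # eigenvalues in the left half plane
Q = np.array([[0.2, 0.0],
              [0.0, 0.1]])         # white-noise intensity

X = solve_continuous_lyapunov(A, -Q)   # solves A X + X A^T = -Q
print(X)
```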
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the nonlinear state estimation problem. However, the UKF usually performs well only in Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-01-01
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the nonlinear state estimation problem. However, the UKF usually performs well only in Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069
2007-07-21
the spin coherent states P-representation", Conference on Quantum Computations and Many-Body Systems, February 2006, Key West, FL 9. B. N. Harmon...solid-state spin-based qubit systems was the focus of our project. Since decoherence is a complex many-body non-equilibrium process, and its...representation of the density matrix, see Sec. 3 below). This work prompted J. Taylor from the experimental group of C. Marcus and M. Lukin (funded by
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
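Editor's note: a minimal sketch of the modified SMI weight computation described above: estimate the covariance from snapshots, subtract a fraction F of the noise power from its diagonal, and form weights w proportional to R^{-1} s for the desired-signal steering vector s. Estimating the noise power as the smallest sample eigenvalue, and the scenario numbers, are assumptions.

```python
import numpy as np

def modified_smi_weights(snapshots, s, F=0.9):
    """snapshots: (n_snapshots, n_elements) complex array; s: desired steering vector."""
    n = snapshots.shape[0]
    R_hat = snapshots.conj().T @ snapshots / n
    noise_power = np.min(np.linalg.eigvalsh(R_hat))       # assumed noise-power estimate
    R_mod = R_hat - F * noise_power * np.eye(R_hat.shape[0])
    w = np.linalg.solve(R_mod, s)
    return w / (w.conj() @ s)                              # unit response toward the signal

# Hypothetical 8-element array with a weak interferer at 30 degrees.
rng = np.random.default_rng(5)
n_el, n_snap = 8, 200
s = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.deg2rad(0.0)))
i_vec = np.exp(1j * np.pi * np.arange(n_el) * np.sin(np.deg2rad(30.0)))
noise = (rng.standard_normal((n_snap, n_el)) + 1j * rng.standard_normal((n_snap, n_el))) / np.sqrt(2)
interf = 0.3 * np.outer(rng.standard_normal(n_snap), i_vec)
w = modified_smi_weights(noise + interf, s, F=0.9)
```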
ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation
NASA Technical Reports Server (NTRS)
Richardson, A. O.
1996-01-01
This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILIAD performed best when queries were short (one to three keywords) and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format less rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman filter for estimating motion parameters of a three-dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values representing X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated, comprising the six deviations and their time rates of change. Values for the state transition matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initializing the error covariance matrix was pointed out.
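Editor's note: a minimal sketch of the 12-state filter described above, with the six pose deviations and their rates as the state, the six noisy deviations as the measurement, and a constant-velocity model between frames. The time step and noise intensities are hypothetical placeholders, not the report's tuned values.

```python
import numpy as np

dt = 1.0 / 30.0                                             # assumed frame interval
I6 = np.eye(6)
F = np.block([[I6, dt * I6], [np.zeros((6, 6)), I6]])       # state transition matrix
H = np.hstack([I6, np.zeros((6, 6))])                       # observe the six deviations only
Q = 1e-3 * np.eye(12)                                       # system noise covariance
R = 1e-2 * np.eye(6)                                        # observation noise covariance
P = np.eye(12)                                              # initial error covariance
x = np.zeros(12)                                            # initial state estimate

def kalman_step(x, P, z):
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the six-element measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(6))
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(12) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical measurement sequence.
rng = np.random.default_rng(6)
for _ in range(100):
    z = 0.1 * rng.standard_normal(6)
    x, P = kalman_step(x, P, z)
```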
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
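Editor's note: a minimal simulation of the model described above, projecting a one-dimensional signal into many dimensions, adding noise, and inspecting the eigenspectrum of the resulting sample correlation matrix. The loadings, noise level, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 200, 50                      # samples, variables (e.g., genes in a cluster)
signal = rng.standard_normal(n)     # common one-dimensional stimulus response
loadings = rng.uniform(0.5, 1.5, size=p)
noise_sd = 1.0

X = np.outer(signal, loadings) + noise_sd * rng.standard_normal((n, p))
C = np.corrcoef(X, rowvar=False)    # sample correlation matrix of the variables
eigvals = np.linalg.eigvalsh(C)[::-1]
print(eigvals[:5])                  # one large "spike" followed by a noise bulk
```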
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is demonstrated in this paper.
Berman, Gennady P; Nesterov, Alexander I; Gurvitz, Shmuel; Sayre, Richard T
2017-01-01
We analyze theoretically a simple and consistent quantum mechanical model that reveals the possible role of quantum interference, protein noise, and sink effects in the nonphotochemical quenching (NPQ) in light-harvesting complexes (LHCs). The model consists of a network of five interconnected sites (excitonic states of light-sensitive molecules) responsible for the NPQ mechanism. The model also includes the "damaging" and the dissipative channels. The damaging channel is responsible for production of singlet oxygen and other destructive outcomes. In our model, both damaging and "dissipative" charge transfer channels are described by discrete electron energy levels attached to their sinks, that mimic the continuum part of electron energy spectrum. All five excitonic sites interact with the protein environment that is modeled using a stochastic process. Our approach allowed us to derive the exact and closed system of linear ordinary differential equations for the reduced density matrix and its first momentums. These equations are solved numerically including for strong interactions between the light-sensitive molecules and protein environment. As an example, we apply our model to demonstrate possible contributions of quantum interference, protein noise, and sink effects in the NPQ mechanism in the CP29 minor LHC. The numerical simulations show that using proper combination of quantum interference effects, properties of noise, and sinks, one can significantly suppress the damaging channel. Our findings demonstrate the possible role of interference, protein noise, and sink effects for modeling, engineering, and optimizing the performance of the NPQ processes in both natural and artificial light-harvesting complexes.
Berman, Gennady P.; Nesterov, Alexander I.; Gurvitz, Shmuel; ...
2016-04-30
Here, we analyze theoretically a simple and consistent quantum mechanical model that reveals the possible role of quantum interference, protein noise, and sink effects in the nonphotochemical quenching (NPQ) in light-harvesting complexes (LHCs). The model consists of a network of five interconnected sites (excitonic states of light-sensitive molecules) responsible for the NPQ mechanism. The model also includes the “damaging” and the dissipative channels. The damaging channel is responsible for production of singlet oxygen and other destructive outcomes. In this model, both damaging and “dissipative” charge transfer channels are described by discrete electron energy levels attached to their sinks, that mimic the continuum part of electron energy spectrum. All five excitonic sites interact with the protein environment that is modeled using a stochastic process. Our approach allowed us to derive the exact and closed system of linear ordinary differential equations for the reduced density matrix and its first momentums. Moreover, these equations are solved numerically including for strong interactions between the light-sensitive molecules and protein environment. As an example, we apply our model to demonstrate possible contributions of quantum interference, protein noise, and sink effects in the NPQ mechanism in the CP29 minor LHC. The numerical simulations show that using proper combination of quantum interference effects, properties of noise, and sinks, one can significantly suppress the damaging channel. Finally, our findings demonstrate the possible role of interference, protein noise, and sink effects for modeling, engineering, and optimizing the performance of the NPQ processes in both natural and artificial light-harvesting complexes.
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
Ballistic Missile Defense Glossary Version 3.0.
1997-06-01
The suppression of background noise for the improvement of an object signal. Battlefield Area Evaluation (USA term). Best and Final Offer...field of the lens are focused. An FPA is a matrix of photon-sensitive detectors which, when combined with low-noise preamplifiers, provides image data...orbital planes with an orbit period of 12 hours at 10,900 nautical miles altitude. Each satellite transmits three L-band, pseudo-random noise-coded
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario; this method estimates the 'noise subspace' in order to estimate the DOAs. However, the noise subspace estimate has to be updated as new data become available. To reduce the computational cost, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared to standard eigen-decomposition based methods which require O(N³) computations, the proposed method requires only O(N²) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, an extended Kalman filter for a nonlinear system with linear measurements is applied. Computer simulation results are also presented to support the theory.
NASA Technical Reports Server (NTRS)
Kiser, J. Douglas; Grady, Joseph E.; Miller, Christopher J.; Hultgren, Lennart S.; Jones, Michael G.
2016-01-01
Recent developments have reduced fan and jet noise contributions to overall subsonic aircraft jet-engine noise. Now, aircraft designers are turning their attention toward reducing engine core noise. The NASA Glenn Research Center and NASA Langley Research Center have teamed to investigate the development of a compact, lightweight acoustic liner based on oxide/oxide ceramic matrix composite (CMC) materials. The NASA team has built upon an existing oxide/oxide CMC sandwich structure concept that provides monotonal noise reduction. Oxide/oxide composites have good high-temperature strength and oxidation resistance, which could allow them to perform as core liners at temperatures up to 1000 °C (1832 °F), and even higher depending on the selection of the composite constituents. NASA has initiated the evaluation of CMC-based liners that use cells of different lengths (variable-depth channels) or effective lengths to achieve broadband noise reduction. Reducing the overall liner thickness is also a major goal, to minimize the volume occupied by the liner. As a first step toward demonstrating the feasibility of our concepts, an oxide/oxide CMC acoustic testing article with different channel lengths was tested. Our approach, summary of test results, current status, and goals for the future are reported.
The spatiotemporal MEG covariance matrix modeled as a sum of Kronecker products.
Bijma, Fetsje; de Munck, Jan C; Heethaar, Rob M
2005-08-15
The single Kronecker product (KP) model for the spatiotemporal covariance of MEG residuals is extended to a sum of Kronecker products. This sum of KPs is estimated such that it best approximates the spatiotemporal sample covariance in matrix norm. Contrary to the single KP, this extension allows multiple, independent phenomena in the ongoing background activity to be described. Whereas the single KP model can be interpreted by assuming that background activity is generated by randomly distributed dipoles with certain spatial and temporal characteristics, the sum model can be physiologically interpreted by assuming a composite of such processes. Taking enough terms into account, the spatiotemporal sample covariance matrix can be described exactly by this extended model. In the estimation of the sum of KP model, it appears that the sum of the first two KPs describes between 67% and 93% of the sample covariance. Moreover, these first two terms describe two physiological processes in the background activity: focal, frequency-specific alpha activity, and more widespread non-frequency-specific activity. Furthermore, temporal nonstationarities due to trial-to-trial variations are not clearly visible in the first two terms and, hence, play only a minor role in the sample covariance matrix in terms of matrix power. Considering dipole localization, the single KP model appears to describe around 80% of the noise and therefore seems adequate. The emphasis of further improvement of localization accuracy should be on improving the source model rather than the covariance model.
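Editor's note: one standard way to fit a sum-of-Kronecker-products approximation in matrix norm is the Van Loan-Pitsianis rearrangement, sketched below; it is not necessarily the estimation procedure used by the authors. The blocks of the spatiotemporal covariance are vectorized into a rearranged matrix whose truncated SVD yields the Kronecker factors. Dimensions and the test covariance are hypothetical.

```python
import numpy as np

def sum_of_kronecker(C, m, n, n_terms=2):
    """Approximate C (m*n x m*n) as sum_k A_k kron B_k; returns [(A_k, B_k)] and singular values."""
    # Row (i, j) of the rearranged matrix R is the vectorized (i, j) block of C.
    R = np.empty((m * m, n * n))
    for i in range(m):
        for j in range(m):
            R[i * m + j, :] = C[i * n:(i + 1) * n, j * n:(j + 1) * n].reshape(-1)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for k in range(n_terms):
        A_k = np.sqrt(s[k]) * U[:, k].reshape(m, m)
        B_k = np.sqrt(s[k]) * Vt[k, :].reshape(n, n)
        terms.append((A_k, B_k))
    return terms, s

# Hypothetical "measured" covariance: two independent Kronecker processes plus noise.
rng = np.random.default_rng(8)
m, n = 8, 12
def rand_spd(d):
    M = rng.standard_normal((d, d)); return M @ M.T / d
C = np.kron(rand_spd(m), rand_spd(n)) + 0.3 * np.kron(rand_spd(m), rand_spd(n))
C += 0.01 * rng.standard_normal((m * n, m * n)); C = 0.5 * (C + C.T)

terms, s = sum_of_kronecker(C, m, n, n_terms=2)
explained = np.sum(s[:2] ** 2) / np.sum(s ** 2)   # fraction described by the first two KPs
```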
WE-FG-207B-04: Noise Suppression for Energy-Resolved CT Via Variance Weighted Non-Local Filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, J; Zhu, L
Purpose: The photon starvation problem is exacerbated in energy-resolved CT, since the detected photons are shared by multiple energy channels. Using pixel similarity-based non-local filtration, we aim to produce accurate and high-resolution energy-resolved CT images with significantly reduced noise. Methods: Averaging CT images reconstructed from different energy channels reduces noise at the price of losing spectral information, while conventional denoising techniques inevitably degrade image resolution. Inspired by the fact that CT images of the same object at different energies share the same structures, we aim to reduce noise of energy-resolved CT by averaging only pixels of similar materials - a non-local filtration technique. For each CT image, an empirical exponential model is used to calculate the material similarity between two pixels based on their CT values, and the similarity values are organized in matrix form. A final similarity matrix is generated by averaging these similarity matrices, with weights inversely proportional to the estimated total noise variance in the sinogram of different energy channels. Noise suppression is achieved for each energy channel via multiplying the image vector by the similarity matrix. Results: Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with energies ranging from 75 to 125 kVp. On a low-dose acquisition at 15 mA of the Catphan©600 phantom, our method achieves the same image spatial resolution as a high-dose scan at 80 mA with a noise standard deviation (STD) lower by a factor of >2. Compared with another non-local noise suppression algorithm (ndiNLM), the proposed algorithm obtains images with substantially improved resolution at the same level of noise reduction. Conclusion: We propose a noise-suppression method for energy-resolved CT. Our method takes full advantage of the additional structural information provided by energy-resolved CT and preserves image values at each energy level. Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R21EB019597. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
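Editor's note: a heavily simplified one-dimensional sketch of the variance-weighted non-local filtration described above: pixel-to-pixel similarity is computed from CT values with an empirical exponential model, the channel similarity matrices are averaged with weights inversely proportional to estimated noise variances, and each channel is denoised by multiplying its image vector by the (row-normalized) similarity matrix. The image size, similarity width h, and noise levels are illustrative assumptions.

```python
import numpy as np

def similarity_matrix(img, h=20.0):
    """Dense similarity matrix for a small 1-D image vector (illustration only)."""
    diff = img[:, None] - img[None, :]
    W = np.exp(-(diff / h) ** 2)                 # empirical exponential similarity
    return W / W.sum(axis=1, keepdims=True)      # row-normalize so rows average pixels

def filter_channels(channel_images, noise_vars, h=20.0):
    weights = 1.0 / np.asarray(noise_vars)
    weights = weights / weights.sum()
    W_avg = sum(w * similarity_matrix(img, h) for w, img in zip(weights, channel_images))
    return [W_avg @ img for img in channel_images]

# Hypothetical 1-D "images" from three energy channels sharing the same structure.
rng = np.random.default_rng(9)
structure = np.repeat([0.0, 100.0, 50.0], 60)    # three materials
channels = [structure * s + rng.standard_normal(structure.size) * n
            for s, n in [(1.0, 8.0), (0.9, 12.0), (0.8, 20.0)]]
denoised = filter_channels(channels, noise_vars=[8.0**2, 12.0**2, 20.0**2])
```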
Analytical approximations to the Hotelling trace for digital x-ray detectors
NASA Astrophysics Data System (ADS)
Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.
2001-06-01
The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal, and the background.
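Editor's note: a minimal sketch of the exact (matrix-inversion) Hotelling figure of merit that the analytical approximation is checked against: for a known signal Δs and data covariance K, SNR² = Δsᵀ K⁻¹ Δs. The detector size, signal profile, and covariance model below are hypothetical.

```python
import numpy as np

def hotelling_snr(delta_s, K):
    """Ideal linear observer SNR for a known signal and data covariance."""
    return np.sqrt(delta_s @ np.linalg.solve(K, delta_s))

# Hypothetical 1-D detector: Gaussian signal on a stationary (exponential-correlation)
# background covariance plus white detector noise.
n = 128
x = np.arange(n)
delta_s = np.exp(-0.5 * ((x - n / 2) / 3.0) ** 2)          # known signal profile
lags = np.abs(x[:, None] - x[None, :])
K = 4.0 * np.exp(-lags / 10.0) + 1.0 * np.eye(n)           # stationary background + white noise
print(hotelling_snr(delta_s, K))
```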
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution.
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei
2016-01-01
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution. PMID:27147376
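As a rough illustration of the similarity-matrix idea described in this abstract (not code from the paper), the Python sketch below builds Gaussian similarity weights from the high- and low-energy CT images and applies the row-normalized weights to a decomposed material image. The neighborhood radius, the width parameter h, and all names are illustrative assumptions.

```python
import numpy as np

def similarity_smooth(img_high, img_low, decomposed, radius=3, h=20.0):
    """Apply a similarity-weighted average in the spirit of PWLS-SBR.

    Weights are computed from CT-value differences in the high- and
    low-energy images with an empirical Gaussian model and averaged over
    the two energies; multiplying the row-normalized similarity weights
    into the decomposed image averages pixels of similar material and
    lowers noise.  A full PWLS-SBR solver would use this operator S inside
    the regularization term ||x - S x||^2; only the smoothing step is
    sketched here.
    """
    pad = radius
    H = np.pad(np.asarray(img_high, float), pad, mode="reflect")
    L = np.pad(np.asarray(img_low, float), pad, mode="reflect")
    D = np.pad(np.asarray(decomposed, float), pad, mode="reflect")
    out = np.zeros_like(np.asarray(decomposed, float))
    rows, cols = out.shape
    for i in range(rows):
        for j in range(cols):
            ic, jc = i + pad, j + pad
            nH = H[ic - pad:ic + pad + 1, jc - pad:jc + pad + 1]
            nL = L[ic - pad:ic + pad + 1, jc - pad:jc + pad + 1]
            # Gaussian similarity from CT-value differences, averaged over energies
            w = 0.5 * (np.exp(-((nH - H[ic, jc]) ** 2) / h ** 2) +
                       np.exp(-((nL - L[ic, jc]) ** 2) / h ** 2))
            w /= w.sum()
            out[i, j] = np.sum(w * D[ic - pad:ic + pad + 1, jc - pad:jc + pad + 1])
    return out
```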
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
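For readers who want to experiment with the white-plus-power-law noise model referred to above, here is a hedged Python sketch (not Langbein's est_noise code): it builds a power-law covariance from fractional-integration filter coefficients and evaluates the Gaussian negative log-likelihood. The coefficient formula and parameterization are standard in this literature but are stated here as assumptions, and the dense-matrix formulation is only practical for short series.

```python
import numpy as np
from scipy.special import gammaln

def powerlaw_transform(n_obs, index):
    """Lower-triangular Toeplitz matrix T such that T @ white_noise has an
    approximately 1/f^index spectrum (fractional-integration coefficients).
    Illustrative only; assumes unit sampling interval."""
    k = np.arange(n_obs)
    # h_k = Gamma(k + index/2) / (Gamma(index/2) * Gamma(k + 1)), in log space
    log_h = gammaln(k + index / 2.0) - gammaln(index / 2.0) - gammaln(k + 1.0)
    h = np.exp(log_h)
    T = np.zeros((n_obs, n_obs))
    for i in range(n_obs):
        T[i, :i + 1] = h[i::-1]          # T[i, j] = h[i - j]
    return T

def neg_log_likelihood(residuals, white_amp, pl_amp, index):
    """Gaussian negative log-likelihood for white + power-law noise."""
    r = np.asarray(residuals, float)
    n = r.size
    T = powerlaw_transform(n, index)
    C = white_amp ** 2 * np.eye(n) + pl_amp ** 2 * (T @ T.T)
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + r @ np.linalg.solve(C, r) + n * np.log(2 * np.pi))
```

Minimizing this function over the amplitudes and the index (e.g., with a generic optimizer) gives a toy version of the MLE step the abstract discusses.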
Quantum critical probing and simulation of colored quantum noise
NASA Astrophysics Data System (ADS)
Mascarenhas, Eduardo; de Vega, Inés
2017-12-01
We propose a protocol to simulate the evolution of a non-Markovian open quantum system by considering a collisional process with a many-body system, which plays the role of an environment. As a result of our protocol, the environment spatial correlations are mapped into the time correlations of a noise that drives the dynamics of the open system. In the weak coupling limit, the open system can also be regarded as a probe of the environment properties. In this regard, when preparing the environment in its ground state, a measurement of the dynamics of the open system allows one to determine the length of the environment spatial correlations and therefore its critical properties. To illustrate our proposal, we simulate the full system dynamics with matrix-product states and compare this to the reduced dynamics obtained with an approximated variational master equation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gongzhang, R.; Xiao, B.; Lardner, T.
2014-02-18
This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and consequently, defects are more visible in B-scan images created from the large amount of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
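One plausible reading of the band-wise "uniform sign" reconstruction described above is a polarity-thresholding scheme. The Python sketch below is a hedged illustration of that reading, not the authors' implementation; the band edges, sub-band count, and the minimization rule are illustrative assumptions.

```python
import numpy as np

def polarity_reconstruct(ascan, fs, bands, n_sub=8):
    """Frequency-diversity reconstruction sketch: split each selected band
    into narrow sub-bands via FFT masks; keep a time sample only when all
    sub-band signals agree in sign (polarity thresholding), taking the
    minimum absolute value there; average the per-band reconstructions to
    obtain a profile of likely defect positions."""
    x = np.asarray(ascan, float)
    n = x.size
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    recon = []
    for f_lo, f_hi in bands:
        edges = np.linspace(f_lo, f_hi, n_sub + 1)
        subs = []
        for k in range(n_sub):
            mask = (freqs >= edges[k]) & (freqs < edges[k + 1])
            subs.append(np.fft.irfft(spec * mask, n=n))
        subs = np.array(subs)
        same_sign = np.all(subs > 0, axis=0) | np.all(subs < 0, axis=0)
        rec = np.where(same_sign, np.min(np.abs(subs), axis=0), 0.0)
        recon.append(rec)
    return np.mean(recon, axis=0)
```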
Stochastic noise characteristics in matrix inversion tomosynthesis (MITS).
Godfrey, Devon J; McAdams, H P; Dobbins, James T, III
2009-05-01
Matrix inversion tomosynthesis (MITS) uses known imaging geometry and linear systems theory to deterministically separate in-plane detail from residual tomographic blur in a set of conventional tomosynthesis ("shift-and-add") planes. A previous investigation explored the effect of scan angle (ANG), number of projections (N), and number of reconstructed planes (NP) on the MITS impulse response and modulation transfer function characteristics, and concluded that ANG = 20 degrees, N = 71, and NP = 69 is the optimal MITS imaging technique for chest imaging on our prototype tomosynthesis system. This article examines the effect of ANG, N, and NP on the MITS exposure-normalized noise power spectra (ENNPS) and seeks to confirm that the imaging parameters selected previously by an analysis of the MITS impulse response also yield reasonable stochastic properties in MITS reconstructed planes. ENNPS curves were generated for experimentally acquired mean-subtracted projection images, conventional tomosynthesis planes, and MITS planes with varying combinations of the parameters ANG, N, and NP. Image data were collected using a prototype tomosynthesis system, with 11.4 cm acrylic placed near the image receptor to produce lung-equivalent beam hardening and scattered radiation. Ten identically acquired tomosynthesis data sets (realizations) were collected for each selected technique and used to generate ensemble mean images that were subtracted from individual image realizations prior to noise power spectra (NPS) estimation. NPS curves were normalized to account for differences in entrance exposure (as measured with an ion chamber), yielding estimates of the ENNPS for each technique. Results suggest that mid- and high-frequency noise in MITS planes is fairly equivalent in magnitude to noise in conventional tomosynthesis planes, but low-frequency noise is amplified in the most anterior and posterior reconstruction planes. Selecting the largest available number of projections (N = 71) does not incur any appreciable additive electronic noise penalty compared to using fewer projections for roughly equivalent cumulative exposure. Stochastic noise is minimized by maximizing N and NP but increases with increasing ANG. The noise trend results for NP and ANG are contrary to what would be predicted by simply considering the MITS matrix conditioning and likely result from the interplay between noise correlation and the polarity of the MITS filters. From this study, the authors conclude that the previously determined optimal MITS imaging strategy based on impulse response considerations produces somewhat suboptimal stochastic noise characteristics, but is probably still the best technique for MITS imaging of the chest.
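As a rough illustration of the measurement pipeline described above, the following Python sketch estimates an exposure-normalized noise power spectrum (ENNPS) from repeated acquisitions by ensemble-mean subtraction and averaged 2-D periodograms. The ROI size, normalization convention, and variable names are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def ennps(realizations, exposure_mR, pixel_pitch_mm, roi=128):
    """Estimate an exposure-normalized NPS from identically acquired images.

    realizations : array-like, shape (n_realizations, ny, nx)
    Returns a 2-D array of NPS values (mm^2 per unit exposure)."""
    stack = np.asarray(realizations, float)
    # Ensemble-mean subtraction removes the structured (deterministic) signal
    noise = stack - stack.mean(axis=0, keepdims=True)
    ny, nx = noise.shape[1:]
    y0, x0 = (ny - roi) // 2, (nx - roi) // 2
    periodograms = []
    for img in noise:
        r = img[y0:y0 + roi, x0:x0 + roi]
        r = r - r.mean()                     # remove residual DC per realization
        periodograms.append(np.abs(np.fft.fftshift(np.fft.fft2(r))) ** 2)
    # Standard NPS scaling: pixel area divided by the number of ROI pixels
    nps = np.mean(periodograms, axis=0) * (pixel_pitch_mm ** 2) / (roi * roi)
    return nps / exposure_mR                 # exposure normalization
```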
Spillover stabilization and decentralized modal control of large space structures
NASA Technical Reports Server (NTRS)
Czajkowski, Eva A.; Preumont, Andre
1987-01-01
The stabilization of the neglected dynamics of the higher modes of vibration in large space structures is studied, and the influence of the structure of the plant noise intensity matrix of the Kalman-Bucy filter on the stability margin of the residual modes is shown. An optimization procedure uses information on the residual modes to minimize spillover of known residual modes while preserving robustness with respect to the unknown dynamics, and the optimum plant noise intensity matrix is selected to maximize the stability margins of the residual modes and to properly place the observer poles. Examples for both centralized and decentralized control are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schäfer, Joachim; Karpov, Evgueni; Cerf, Nicolas J.
2014-12-04
We seek a realistic implementation of multimode Gaussian entangled states that can realize the optimal encoding for quantum bosonic Gaussian channels with memory. For a Gaussian channel with classical additive Markovian correlated noise and for a lossy channel with non-Markovian correlated noise, we demonstrate the usefulness of Gaussian matrix-product states (GMPS). These states can be generated sequentially and may, in principle, approximate well any Gaussian state. We show that we can achieve up to 99.9% of the classical Gaussian capacity with GMPS requiring squeezing parameters that are reachable with current technology. This may offer a way towards an experimental realization.
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Here, our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
Denoised Wigner distribution deconvolution via low-rank matrix completion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Justin; Barbastathis, George
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Here, our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
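The low-rank completion step referred to above can be prototyped with singular value thresholding. The sketch below is a generic SVT routine (in the style of Cai, Candès, and Shen), offered as a hedged stand-in rather than the authors' algorithm, with illustrative parameter choices.

```python
import numpy as np

def svt_complete(M, observed_mask, tau=None, step=1.2, n_iter=200, tol=1e-4):
    """Low-rank matrix completion / denoising by singular value thresholding.

    M             : observed (noisy, possibly incomplete) matrix
    observed_mask : 0/1 array marking which entries of M are trusted
    Returns a low-rank estimate X."""
    M = np.asarray(M, float)
    mask = np.asarray(observed_mask, float)
    if tau is None:
        tau = 5.0 * np.sqrt(M.size)          # common heuristic, an assumption here
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)         # soft-threshold the singular values
        X = (U * s) @ Vt                     # current low-rank estimate
        resid = mask * (M - X)               # enforce agreement on observed entries
        Y = Y + step * resid
        if np.linalg.norm(resid) <= tol * max(np.linalg.norm(mask * M), 1e-12):
            break
    return X
```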
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
NASA Astrophysics Data System (ADS)
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
The multiband signal fusion technique is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method
Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang
2016-01-01
The multiband signal fusion technique is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194
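To make the pole-estimation step concrete, here is a hedged Python sketch of the standard matrix pencil method applied to one uniformly sampled subband signal; it is a textbook formulation, not the authors' code, and the pencil parameter choice is an assumption.

```python
import numpy as np

def matrix_pencil_poles(x, n_poles, pencil_L=None):
    """Estimate poles z_i of x[k] ~ sum_i a_i * z_i**k + noise (sketch)."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    L = pencil_L if pencil_L is not None else N // 3     # typical pencil parameter
    # Hankel data matrix and its two shifted sub-matrices
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Rank-truncated pseudo-inverse of Y0 suppresses noise before forming the pencil
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    Uk, sk, Vk = U[:, :n_poles], s[:n_poles], Vt[:n_poles, :]
    Y0_pinv = (Vk.conj().T / sk) @ Uk.conj().T
    lam = np.linalg.eigvals(Y0_pinv @ Y1)
    # The n_poles largest-magnitude eigenvalues approximate the signal poles
    return lam[np.argsort(-np.abs(lam))][:n_poles]
```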
Frequency-Domain Analysis of Diffusion-Cooled Hot-Electron Bolometer Mixers
NASA Technical Reports Server (NTRS)
Skalare, A.; McGrath, W. R.; Bumble, B.; LeDuc, H. G.
1998-01-01
A new theoretical model is introduced to describe heterodyne mixer conversion efficiency and noise (from thermal fluctuation effects) in diffusion-cooled superconducting hot-electron bolometers. The model takes into account the non-uniform internal electron temperature distribution generated by Wiedemann-Franz heat conduction, and accepts for input an arbitrary (analytical or experimental) superconducting resistance-versus-temperature curve. A non-linear large-signal solution is solved iteratively to calculate the temperature distribution, and a linear frequency-domain small-signal formulation is used to calculate conversion efficiency and noise. In the small-signal solution the device is discretized into segments, and matrix algebra is used to relate the heating modulation in the segments to temperature and resistance modulations. Matrix expressions are derived that allow single-sideband mixer conversion efficiency and coupled noise power to be directly calculated. The model accounts for self-heating and electrothermal feedback from the surrounding bias circuit.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which, the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches are investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.
2014-06-01
The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has an outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use a Measure of Statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics, namely Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index Matrix (MSSIM), and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of the degree of noise suppression and detail/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherever images are highly degraded by fixed-value impulse noise.
Hochmuth, Sabine; Jürgens, Tim; Brand, Thomas; Kollmeier, Birger
2015-01-01
Investigate talker- and language-specific aspects of speech intelligibility in noise and reverberation using highly comparable matrix sentence tests across languages. Matrix sentences spoken by German/Russian and German/Spanish bilingual talkers were recorded. These sentences were used to measure speech reception thresholds (SRTs) with native listeners in the respective languages in different listening conditions (stationary and fluctuating noise, multi-talker babble, reverberated speech-in-noise condition). Four German/Russian and four German/Spanish bilingual talkers; 20 native German-speaking, 10 native Russian-speaking, and 10 native Spanish-speaking listeners. Across-talker SRT differences of up to 6 dB were found for both groups of bilinguals. SRTs of German/Russian bilingual talkers were the same in both languages. SRTs of German/Spanish bilingual talkers were higher when they talked in Spanish than when they talked in German. The benefit from listening in the gaps was similar across all languages. The detrimental effect of reverberation was larger for Spanish than for German and Russian. Within the limitations set by the number and slight accentedness of talkers and other possible confounding factors, talker- and test-condition-dependent differences were isolated from the language effect: Russian and German exhibited similar intelligibility in noise and reverberation, whereas Spanish was more impaired in these situations.
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
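As a minimal sketch of the Tikhonov step described above (the kernel-matrix optimization and multi-wavelength weighting are omitted), the following Python function solves the regularized normal equations with a second-order difference regularization matrix; the function name, the non-negativity clip, and the choice of lambda are assumptions for illustration.

```python
import numpy as np

def tikhonov_psd(A, b, lam):
    """Tikhonov-regularized inversion x = argmin ||A x - b||^2 + lam ||L x||^2,
    with L the second-order (discrete Laplacian) difference matrix, which the
    abstract identifies as appropriate for most particle size distributions."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    n = A.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]     # second-order difference stencil
    x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    return np.clip(x, 0.0, None)             # a size distribution is non-negative
```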
Bacteriorhodopsin films for optical signal processing and data storage
NASA Technical Reports Server (NTRS)
Walkup, John F. (Principal Investigator); Mehrl, David J. (Principal Investigator)
1996-01-01
This report summarizes the research results obtained on NASA Ames Grant NAG 2-878 entitled 'Investigations of Bacteriorhodopsin Films for Optical Signal Processing and Data Storage.' Specifically, we performed research, at Texas Tech University, on applications of Bacteriorhodopsin film to both (1) dynamic spatial filtering and (2) holographic data storage. In addition, measurements of the noise properties of an acousto-optical matrix-vector multiplier built for NASA Ames by Photonic Systems Inc. were performed at NASA Ames' Photonics Laboratory. This research resulted in two papers presented at major optical data processing conferences and a journal paper which is to appear in APPLIED OPTICS. A new proposal for additional BR research has recently been submitted to NASA Ames Research Center.
Statistical Methods in Ai: Rare Event Learning Using Associative Rules and Higher-Order Statistics
NASA Astrophysics Data System (ADS)
Iyer, V.; Shetty, S.; Iyengar, S. S.
2015-07-01
Rare event learning has not been actively researched until recently, owing to the unavailability of algorithms that can deal with large samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend the existing noise pre-processing algorithms using Data-Cleaning trees. Pre-processing using an ensemble of trees with bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that a temporal-window-based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The Data-Cleaning tree model uses a nonparametric node splitting technique, which can be learned in an iterative way and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. We show using empirical datasets that the explicit rule learning computation is linear in time and depends only on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping pre-processing computation to n × t log m, compared to N² for the Gram matrix. We also show that the task-based feature induction yields higher Quality of Data (QoD) in the feature space compared to kernel methods using the Gram matrix.
Noise Analysis of Spatial Phase coding in analog Acoustooptic Processors
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Optical beams can carry information in their amplitude and phase; however, optical analog numerical calculators such as an optical matrix processor use incoherent light to achieve linear operation. Thus, the phase information is lost and only the magnitude can be used. This limits such processors to the representation of positive real numbers. Many systems have been devised to overcome this deficit through the use of digital number representations, but they all operate at a greatly reduced efficiency in contrast to analog systems. The most widely accepted method to achieve sign coding in analog optical systems has been the use of an offset for the zero level. Unfortunately, this results in increased noise sensitivity for small numbers. In this paper, we examine the use of spatially coherent sign coding in acoustooptical processors, a method first developed for digital calculations by D. V. Tigin. This coding technique uses spatial coherence for the representation of signed numbers, while temporal incoherence allows for linear analog processing of the optical information. We show how spatial phase coding reduces noise sensitivity for signed analog calculations.
Extended Kalman filtering for the detection of damage in linear mechanical structures
NASA Astrophysics Data System (ADS)
Liu, X.; Escamilla-Ambrosio, P. J.; Lieven, N. A. J.
2009-09-01
This paper addresses the problem of assessing the location and extent of damage in a vibrating structure by means of vibration measurements. Frequency domain identification methods (e.g. finite element model updating) have been widely used in this area while time domain methods such as the extended Kalman filter (EKF) method, are more sparsely represented. The difficulty of applying EKF in mechanical system damage identification and localisation lies in: the high computational cost, the dependence of estimation results on the initial estimation error covariance matrix P(0), the initial value of parameters to be estimated, and on the statistics of measurement noise R and process noise Q. To resolve these problems in the EKF, a multiple model adaptive estimator consisting of a bank of EKF in modal domain was designed, each filter in the bank is based on different P(0). The algorithm was iterated by using the weighted global iteration method. A fuzzy logic model was incorporated in each filter to estimate the variance of the measurement noise R. The application of the method is illustrated by simulated and real examples.
Parameter estimation using weighted total least squares in the two-compartment exchange model.
Garpebring, Anders; Löfstedt, Tommy
2018-01-01
The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Martens, J.S.; Hietala, V.M.; Plut, T.A.
1995-01-03
The present invention comprises a novel matrix amplifier. The matrix amplifier includes an active superconducting power divider (ASPD) having N output ports; N distributed amplifiers each operatively connected to one of the N output ports of the ASPD; and a power combiner having N input ports each operatively connected to one of the N distributed amplifiers. The distributed amplifier can included M stages of amplification by cascading superconducting active devices. The power combiner can include N active elements. The resulting (N[times]M) matrix amplifier can produce signals of high output power, large bandwidth, and low noise. 6 figures.
Martens, Jon S.; Hietala, Vincent M.; Plut, Thomas A.
1995-01-01
The present invention comprises a novel matrix amplifier. The matrix amplifier includes an active superconducting power divider (ASPD) having N output ports; N distributed amplifiers each operatively connected to one of the N output ports of the ASPD; and a power combiner having N input ports each operatively connected to one of the N distributed amplifiers. The distributed amplifier can include M stages of amplification by cascading superconducting active devices. The power combiner can include N active elements. The resulting (N×M) matrix amplifier can produce signals of high output power, large bandwidth, and low noise.
Hong-Ping, Xie; Jian-Hui, Jiang; Guo-Li, Shen; Ru-Qin, Yu
2002-01-01
A new approach for estimating the chemical rank of a three-way array, called the principal norm vector orthogonal projection method, has been proposed. The method is based on the fact that the chemical rank of the three-way data array is equal to the rank of the column space of the unfolded matrix along the spectral or chromatographic mode. A vector with maximum Frobenius norm is selected among all the column vectors of the unfolded matrix as the principal norm vector (PNV). A transformation is conducted for the column vectors with an orthogonal projection matrix formulated from the PNV. The mathematical rank of the column space of the residual matrix thus obtained should decrease by one. Such orthogonal projection is carried out repeatedly until the contribution of chemical species to the signal data has been entirely removed. At this point the decrease in the mathematical rank equals the chemical rank, and the remaining residual subspace is entirely due to the noise contribution. The chemical rank can then be estimated easily by using an F-test. The method has been applied successfully to a simulated HPLC-DAD-type three-way data array and two real excitation-emission fluorescence data sets of amino acid mixtures and dye mixtures. A simulation with relatively high added noise shows that the method is robust to heteroscedastic noise. The proposed algorithm is simple and easy to program, with quite a light computational burden.
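A hedged sketch of the principal-norm-vector projection loop is given below; it follows the description in the abstract but replaces the F-test with a simple norm threshold, which is purely an illustrative assumption.

```python
import numpy as np

def pnv_rank(X, max_steps=None, noise_tol=None):
    """Estimate the chemical rank of an unfolded matrix X by repeatedly
    projecting out the principal norm vector (PNV) and watching the residual
    norms collapse (sketch; a real implementation would use the F-test)."""
    R = np.array(X, dtype=float)
    pnv_norms = []
    steps = max_steps or min(R.shape)
    for _ in range(steps):
        col_norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(col_norms))
        if col_norms[j] < 1e-12:              # nothing left to project out
            break
        pnv_norms.append(col_norms[j])
        v = R[:, j:j + 1] / col_norms[j]      # principal norm vector
        R = R - v @ (v.T @ R)                 # orthogonal projection removes it
    if noise_tol is None:
        noise_tol = 1e-3 * pnv_norms[0]       # crude stand-in for the F-test
    rank = int(np.sum(np.array(pnv_norms) > noise_tol))
    return rank, pnv_norms
```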
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling
2018-01-01
We propose a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extract a number of raw patches from a given noisy image and take the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate is directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
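The first (patch-covariance) step of the two-step scheme above can be sketched in a few lines of Python; the patch size, patch count, and the omission of the learned rectification mapping are all assumptions for illustration.

```python
import numpy as np

def pca_noise_estimate(img, patch=7, n_patches=5000, seed=0):
    """Preliminary noise-level estimate: the smallest eigenvalue of the
    covariance matrix of randomly drawn raw patches approximates the noise
    variance (the trained rectification mapping of the abstract is omitted)."""
    img = np.asarray(img, float)
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    ys = rng.integers(0, ny - patch, size=n_patches)
    xs = rng.integers(0, nx - patch, size=n_patches)
    P = np.array([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    C = np.cov(P, rowvar=False)               # patch covariance matrix
    sigma2 = np.linalg.eigvalsh(C)[0]          # smallest eigenvalue ~ noise variance
    return np.sqrt(max(sigma2, 0.0))
```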
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image is called the cover medium, and the watermark image is converted into grayscale. Then, both are transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in sub-band LL2. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson, and salt-and-pepper noise. Under these attacks, the average Normalized Correlation (NC) values are 0.574863, 0.889784, and 0.889782, respectively. The watermark image can be detected and extracted.
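A hedged sketch of one common DWT-SVD embedding variant consistent with the description above is shown below; it assumes the PyWavelets package is available, uses a Haar wavelet and an arbitrarily chosen embedding strength alpha, and is not the authors' exact scheme.

```python
import numpy as np
import pywt   # PyWavelets is assumed to be installed

def embed_watermark(cover_gray, watermark_gray, alpha=0.05, wavelet="haar"):
    """DWT-SVD embedding in the LL2 sub-band (sketch): decompose the cover
    twice, perturb the singular values of LL2 with the watermark's singular
    values, then rebuild the image."""
    cover = np.asarray(cover_gray, float)
    wm = np.asarray(watermark_gray, float)
    LL1, d1 = pywt.dwt2(cover, wavelet)
    LL2, d2 = pywt.dwt2(LL1, wavelet)
    Uc, Sc, Vct = np.linalg.svd(LL2, full_matrices=False)
    _, Sw, _ = np.linalg.svd(wm, full_matrices=False)
    k = min(Sc.size, Sw.size)
    Sc_marked = Sc.copy()
    Sc_marked[:k] += alpha * Sw[:k]            # embed watermark singular values
    LL2_marked = (Uc * Sc_marked) @ Vct
    LL1_marked = pywt.idwt2((LL2_marked, d2), wavelet)
    LL1_marked = LL1_marked[:LL1.shape[0], :LL1.shape[1]]   # crop padding, if any
    marked = pywt.idwt2((LL1_marked, d1), wavelet)
    return marked[:cover.shape[0], :cover.shape[1]]
```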
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods.
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-12-14
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This has the heavy tailed regions and can be used to describe a matrix pattern of l×m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. Alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
Highly sensitive bacterial susceptibility test against penicillin using parylene-matrix chip.
Park, Jong-Min; Kim, Jo-Il; Song, Hyun-Woo; Noh, Joo-Yoon; Kang, Min-Jung; Pyun, Jae-Chul
2015-09-15
This work presented a highly sensitive bacterial antibiotic susceptibility test through β-lactamase assay using Parylene-matrix chip. β-lactamases (EC 3.5.2.6) are an important family of enzymes that confer resistance to β-lactam antibiotics by catalyzing the hydrolysis of these antibiotics. Here we present a highly sensitive assay to quantitate β-lactamase-mediated hydrolysis of penicillin into penicilloic acid. Typically, MALDI-TOF mass spectrometry has been used to quantitate low molecular weight analytes and to discriminate them from noise peaks of matrix fragments that occur at low m/z ratios (m/z<500). The β-lactamase assay for the Escherichia coli antibiotic susceptibility test was carried out using Parylene-matrix chip and MALDI-TOF mass spectrometry. The Parylene-matrix chip was successfully used to quantitate penicillin (m/z: [PEN+H](+)=335.1 and [PEN+Na](+)=357.8) and penicilloic acid (m/z: [PA+H](+)=353.1) in a β-lactamase assay with minimal interference of low molecular weight noise peaks. The β-lactamase assay was carried out with an antibiotic-resistant E. coli strain and an antibiotic-susceptible E. coli strain, revealing that the minimum number of E. coli cells required to screen for antibiotic resistance was 1000 cells for the MALDI-TOF mass spectrometry/Parylene-matrix chip assay. Copyright © 2015 Elsevier B.V. All rights reserved.
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
A Spanish matrix sentence test for assessing speech reception thresholds in noise.
Hochmuth, Sabine; Brand, Thomas; Zokoll, Melanie A; Castro, Franz Zenker; Wardenga, Nina; Kollmeier, Birger
2012-07-01
To develop, optimize, and evaluate a new Spanish sentence test in noise. The test comprises a basic matrix of ten names, verbs, numerals, nouns, and adjectives. From this matrix, test lists of ten sentences with an equal syntactical structure can be formed at random, with each list containing the whole speech material. The speech material represents the phoneme distribution of the Spanish language. The test was optimized for measuring speech reception thresholds (SRTs) in noise by adjusting the presentation levels of the individual words. Subsequently, the test was evaluated by independent measurements investigating the training effects, the comparability of test lists, open-set vs. closed-set test format, and performance of listeners of different Spanish varieties. In total, 68 normal-hearing native Spanish-speaking listeners. SRTs measured using an adaptive procedure were 6.2 ± 0.8 dB SNR for the open-set and 7.2 ± 0.7 dB SNR for the closed-set test format. The residual training effect was less than 1 dB after using two double-lists before data collection. No significant differences were found for listeners of different Spanish varieties indicating that the test is applicable to Spanish as well as Latin American listeners. Test lists can be used interchangeably.
Non-Abelian Geometric Phases Carried by the Quantum Noise Matrix
NASA Astrophysics Data System (ADS)
Bharath, H. M.; Boguslawski, Matthew; Barrios, Maryrose; Chapman, Michael
2017-04-01
Topological phases of matter are characterized by topological order parameters that are built using Berry's geometric phase. Berry's phase is the geometric information stored in the overall phase of a quantum state. We show that geometric information is also stored in the second and higher order spin moments of a quantum spin system, captured by a non-abelian geometric phase. The quantum state of a spin-S system is uniquely characterized by its spin moments up to order 2S. The first-order spin moment is the spin vector, and the second-order spin moment represents the spin fluctuation tensor, i.e., the quantum noise matrix. When the spin vector is transported along a loop in the Bloch ball, we show that the quantum noise matrix picks up a geometric phase. Considering spin-1 systems, we formulate this geometric phase as an SO(3) operator. Geometric phases are usually interpreted in terms of the solid angle subtended by the loop at the center. However, solid angles are not well defined for loops that pass through the center. Here, we introduce a generalized solid angle which is well defined for all loops inside the Bloch ball, in terms of which, we interpret the SO(3) geometric phase. This geometric phase can be used to characterize topological spin textures in cold atomic clouds.
NASA Astrophysics Data System (ADS)
Skare, Stefan; Hedehus, Maj; Moseley, Michael E.; Li, Tie-Qiang
2000-12-01
Diffusion tensor mapping with MRI can noninvasively track neural connectivity and has great potential for neural scientific research and clinical applications. For each diffusion tensor imaging (DTI) data acquisition scheme, the diffusion tensor is related to the measured apparent diffusion coefficients (ADC) by a transformation matrix. With theoretical analysis we demonstrate that the noise performance of a DTI scheme is dependent on the condition number of the transformation matrix. To test the theoretical framework, we compared the noise performances of different DTI schemes using Monte-Carlo computer simulations and experimental DTI measurements. Both the simulation and the experimental results confirmed that the noise performances of different DTI schemes are significantly correlated with the condition number of the associated transformation matrices. We therefore applied numerical algorithms to optimize a DTI scheme by minimizing the condition number, hence improving the robustness to experimental noise. In the determination of anisotropic diffusion tensors with different orientations, MRI data acquisitions using a single optimum b value based on the mean diffusivity can produce ADC maps with regional differences in noise level. This will give rise to rotational variances of eigenvalues and anisotropy when diffusion tensor mapping is performed using a DTI scheme with a limited number of diffusion-weighting gradient directions. To reduce this type of artifact, a DTI scheme with not only a small condition number but also a large number of evenly distributed diffusion-weighting gradients in 3D is preferable.
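To illustrate the condition-number criterion discussed above, the short Python sketch below builds the matrix that maps the six unique diffusion tensor elements to the measured ADCs for a set of gradient directions; comparing np.linalg.cond of this matrix across candidate DTI schemes reproduces the figure of merit in spirit, though the exact convention (element ordering, factors of 2, b-value weighting) is an assumption here.

```python
import numpy as np

def dti_design_matrix(directions):
    """Rows map [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] to the ADC measured along each
    unit gradient direction g:  ADC = g^T D g."""
    g = np.asarray(directions, float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    return np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
        2 * g[:, 0] * g[:, 1],
        2 * g[:, 0] * g[:, 2],
        2 * g[:, 1] * g[:, 2],
    ])

# Example: rank two candidate schemes by condition number (lower = more
# robust to noise, per the abstract's argument).
# cond_a = np.linalg.cond(dti_design_matrix(scheme_a_directions))
# cond_b = np.linalg.cond(dti_design_matrix(scheme_b_directions))
```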
Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D
1994-04-01
The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
Fan Noise Source Diagnostic Test Computation of Rotor Wake Turbulence Noise
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Envia, E.; Thorp, S. A.; Shabbir, A.
2002-01-01
An important source mechanism of fan broadband noise is the interaction of rotor wake turbulence with the fan outlet guide vanes. A broadband noise model that utilizes computed rotor flow turbulence from a RANS code is used to predict fan broadband noise spectra. The noise model is employed to examine the broadband noise characteristics of the 22-inch Source Diagnostic Test fan rig for which broadband noise data were obtained in wind tunnel tests at the NASA Glenn Research Center. A 9-case matrix of three outlet guide vane configurations at three representative fan tip speeds are considered. For all cases inlet and exhaust acoustic power spectra are computed and compared with the measured spectra where possible. In general, the acoustic power levels and shape of the predicted spectra are in good agreement with the measured data. The predicted spectra show the experimentally observed trends with fan tip speed, vane count, and vane sweep. The results also demonstrate the validity of using CFD-based turbulence information for fan broadband noise calculations.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local descriptor, shape contexts, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptors within the random walk iteration in order to control the random-walk probability matrix. We calculate a bias matrix from the descriptors and then use it in the iteration to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretizing the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments, based on real images and random synthetic point sets, and comparisons with other algorithms confirm that the new method produces excellent results in graphic matching.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
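As a small, hedged illustration of the ZMNL idea mentioned above, the Python sketch below maps a Gaussian realization to a target marginal pdf through the Gaussian CDF and the target inverse CDF; the gamma target and the omission of the spectral-correction iteration are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def zmnl_transform(gaussian_series, target_dist=stats.gamma(2.0)):
    """Zero-memory nonlinear (ZMNL) mapping: Gaussian -> uniform -> target.
    The output's marginal pdf matches target_dist; the power spectrum is
    distorted by the mapping, which is why practical schemes iterate on the
    input spectrum (that loop is omitted in this sketch)."""
    u = stats.norm.cdf(np.asarray(gaussian_series, float))
    return target_dist.ppf(u)

# Example (illustrative): shape a white Gaussian sequence to a gamma marginal
# x = zmnl_transform(np.random.default_rng(0).standard_normal(4096))
```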
Diffusion tensor imaging using multiple coils for mouse brain connectomics.
Nouls, John C; Badea, Alexandra; Anderson, Robert B J; Cofer, Gary P; Allan Johnson, G
2018-06-01
The correlation between brain connectivity and psychiatric or neurological diseases has intensified efforts to develop brain connectivity mapping techniques on mouse models of human disease. The neural architecture of mouse brain specimens can be shown non-destructively and three-dimensionally by diffusion tensor imaging, which enables tractography, the establishment of a connectivity matrix and connectomics. However, experiments on cohorts of animals can be prohibitively long. To improve throughput in a 7-T preclinical scanner, we present a novel two-coil system in which each coil is shielded, placed off-isocenter along the axis of the magnet and connected to a receiver circuit of the scanner. Preservation of the quality factor of each coil is essential to signal-to-noise ratio (SNR) performance and throughput, because mouse brain specimen imaging at 7 T takes place in the coil-dominated noise regime. In that regime, we show a shielding configuration causing no SNR degradation in the two-coil system. To acquire data from several coils simultaneously, the coils are placed in the magnet bore, around the isocenter, in which gradient field distortions can bias diffusion tensor imaging metrics, affect tractography and contaminate measurements of the connectivity matrix. We quantified the experimental alterations in fractional anisotropy and eigenvector direction occurring in each coil. We showed that, when the coils were placed 12 mm away from the isocenter, measurements of the brain connectivity matrix appeared to be minimally altered by gradient field distortions. Simultaneous measurements on two mouse brain specimens demonstrated a full doubling of the diffusion tensor imaging throughput in practice. Each coil produced images devoid of shading or artifact. To further improve the throughput of mouse brain connectomics, we suggested a future expansion of the system to four coils. To better understand acceptable trade-offs between imaging throughput and connectivity matrix integrity, studies may seek to clarify how measurement variability, post-processing techniques and biological variability impact mouse brain connectomics. Copyright © 2018 John Wiley & Sons, Ltd.
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
NASA Technical Reports Server (NTRS)
Stimpert, D. L.; Clemons, A.
1977-01-01
Sound data which were obtained during tests of a 50.8 cm diameter, subsonic tip speed, low pressure ratio fan were analyzed. The test matrix was divided into two major investigations: (1) source noise reduction techniques; and (2) aft duct noise reduction with acoustic treatment. Source noise reduction techniques were investigated which include minimizing second harmonic noise by varying vane/blade ratio, variation in spacing, and lowering the Mach number through the vane row to lower fan broadband noise. Treatment in the aft duct which includes flow noise effects, faceplate porosity, rotor OGV treatment, slant cell treatment, and splitter simulation with variable depth on the outer wall and constant thickness treatment on the inner wall was investigated. Variable boundary conditions such as variation in treatment panel thickness and orientation, and mixed porosity combined with variable thickness were examined. Significant results are reported.
Noise-induced effects in population dynamics
NASA Astrophysics Data System (ADS)
Spagnolo, Bernardo; Cirone, Markus; La Barbera, Antonino; de Pasquale, Ferdinando
2002-03-01
We investigate the role of noise in the nonlinear relaxation of two ecosystems described by generalized Lotka-Volterra equations in the presence of multiplicative noise. Specifically we study two cases: (i) an ecosystem with two interacting species in the presence of periodic driving; (ii) an ecosystem with a great number of interacting species with random interaction matrix. We analyse the interplay between noise and periodic modulation for case (i) and the role of the noise in the transient dynamics of the ecosystem in the presence of an absorbing barrier in case (ii). We find that the presence of noise is responsible for the generation of temporal oscillations and for the appearance of spatial patterns in the first case. In the other case we obtain the asymptotic behaviour of the time average of the ith population and discuss the effect of the noise on the probability distributions of the population and of the local field.
Decoding Signal Processing at the Single-Cell Level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiley, H. Steven
The ability of cells to detect and decode information about their extracellular environment is critical to generating an appropriate response. In multicellular organisms, cells must decode dozens of signals from their neighbors and extracellular matrix to maintain tissue homeostasis while still responding to environmental stressors. How cells detect and process information from their surroundings through a surprisingly limited number of signal transduction pathways is one of the most important questions in biology. Despite many decades of research, many of the fundamental principles that underlie cell signal processing remain obscure. However, in this issue of Cell Systems, Gillies et al. present compelling evidence that the early response gene circuit can act as a linear signal integrator, thus providing significant insight into how cells handle fluctuating signals and noise in their environment.
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
Material requirements for the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Stephens, Joseph R.; Hecht, Ralph J.; Johnson, Andrew M.
1993-01-01
Under NASA-sponsored High Speed Research (HSR) programs, the materials and processing requirements have been identified for overcoming the environmental and economic barriers of the next generation High Speed Civil Transport (HSCT) propulsion system. The long (2 to 5 hours) supersonic cruise portion of the HSCT cycle will place additional durability requirements on all hot section engine components. Low emissions combustor designs will require high temperature ceramic matrix composite liners to meet an emission goal of less than 5 g NOx per kg of fuel burned. Large axisymmetric and two-dimensional exhaust nozzle designs are now under development to meet or exceed FAR 36 Stage III noise requirements, and will require lightweight, high temperature metallic, intermetallic, and ceramic matrix composites to reduce nozzle weight and meet structural and acoustic component performance goals. This paper describes and discusses the turbomachinery, combustor, and exhaust nozzle requirements of the High Speed Civil Transport propulsion system.
Measuring attitude with a gradiometer
NASA Technical Reports Server (NTRS)
Sonnabend, David; Gardner, Thomas G.
1994-01-01
This paper explores using a gravity gradiometer to measure the attitude of a satellite, given that the gravity field is accurately known. Since gradiometers actually measure a combination of the gradient and attitude rate and acceleration terms, the answer is far from obvious. The paper demonstrates that it can be done, and at microradian accuracy. The technique employed is dynamic estimation, based on the momentum-biased Euler equations. The satellite is assumed nominally planet pointed, and subject to control, gravity gradient, and partly random drag torques. The attitude estimator is unusual. While the standard method of feeding back measurement residuals is used, the feedback gain matrix isn't derived from Kalman theory. Instead, it's chosen to minimize a measure of the terminal covariance of the error in the estimate. This depends on the gain matrix and the power spectra of all the process and measurement noises. An integration is required over multiple solutions of Lyapunov equations.
In Situ Raman Analysis of CO₂-Assisted Drying of Fruit-Slices.
Braeuer, Andreas Siegfried; Schuster, Julian Jonathan; Gebrekidan, Medhanie Tesfay; Bahr, Leo; Michelino, Filippo; Zambon, Alessandro; Spilimbergo, Sara
2017-05-15
This work explores the feasibility of applying in situ Raman spectroscopy for the online monitoring of the supercritical carbon dioxide (SC-CO₂) drying of fruits. Specifically, we investigate two types of fruits: mango and persimmon. The drying experiments were carried out inside an optically accessible vessel at 10 MPa and 313 K. The Raman spectra reveal: (i) the reduction of the water from the fruit slice and (ii) the change of the fruit matrix structure during the drying process. Two different Raman excitation wavelengths were compared: 532 nm and 785 nm. With respect to the quality of the obtained spectra, the 532 nm excitation wavelength was superior due to a higher signal-to-noise ratio and due to a resonant excitation scheme of the carotenoid molecules. It was found that the absorption of CO₂ into the fruit matrix enhances the extraction of water, which was expressed by the obtained drying kinetic curve.
The open quantum Brownian motions
NASA Astrophysics Data System (ADS)
Bauer, Michel; Bernard, Denis; Tilloy, Antoine
2014-09-01
Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers—with internal degrees of freedom which serve as random gyroscopes—interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation: H_z, the orbital (walker) Hilbert space, C^Z in the discrete case and L^2(R) in the continuum; H_c, the internal spin (gyroscope) Hilbert space; H_sys = H_z ⊗ H_c, the system Hilbert space; H_p, the probe (quantum coin) Hilbert space, H_p = C^2; ρ_t^tot, the density matrix of the total system (walker + internal spin + quantum coins); ρ̄_t, the reduced density matrix on H_sys, ρ̄_t = ∫ dx dy ρ̄_t(x, y) ⊗ |x⟩_z⟨y|; ρ̂_t, the system density matrix in a quantum trajectory, ρ̂_t = ∫ dx dy ρ̂_t(x, y) ⊗ |x⟩_z⟨y| (if diagonal and localized in position, ρ̂_t = ρ_t ⊗ |X_t⟩_z⟨X_t|); ρ_t, the internal density matrix in a simple quantum trajectory; X_t, the walker position in a simple quantum trajectory; B_t, a normalized Brownian motion; ξ_t and ξ_t^†, quantum noises.
NASA Astrophysics Data System (ADS)
Hoffmann, Thomas; Dorrestein, Pieter C.
2015-11-01
Matrix deposition on agar-based microbial colonies for MALDI imaging mass spectrometry is often complicated by the complex media on which microbes are grown. This Application Note demonstrates how consecutive short spray pulses of a matrix solution can form an evenly closed matrix layer on dried agar. Compared with sieving dry matrix onto wet agar, this method supports analyte cocrystallization, which results in significantly more signals, higher signal-to-noise ratios, and improved ionization efficiency. The even matrix layer improves spot-to-spot precision of measured m/z values when using TOF mass spectrometers. With this technique, we established reproducible imaging mass spectrometry of myxobacterial cultures on nutrient-rich cultivation media, which was not possible with the sieving technique.
Richardson, Stacie L.; Hanjra, Pahul; Zhang, Gang; Mackie, Brianna D.; Peterson, Darrell L.; Huang, Rong
2016-01-01
Protein methylation and acetylation play important roles in biological processes, and misregulation of these modifications is involved in various diseases. Therefore, it is critical to understand the activities of the enzymes responsible for these modifications. Herein we describe a sensitive method for ratiometric quantification of methylated and acetylated peptides via MALDI-MS by direct spotting of enzymatic methylation and acetylation reaction mixtures without tedious purification procedures. The quantifiable detection limit for peptides with our method is approximately 10 fmol. This is achieved by increasing the signal-to-noise ratio through the addition of NH4H2PO4 to the matrix solution and reduction of the matrix α-cyanohydroxycinnamic acid concentration to 2 mg/ml. We have demonstrated the application of this method in enzyme kinetic analysis and inhibition studies. The unique feature of this method is the simultaneous quantification of multiple peptide species for investigation of processivity mechanisms. Its wide buffer compatibility makes it possible to be adapted to investigate the activity of any protein methyltransferase or acetyltransferase. PMID:25778392
A new DOD and DOA estimation method for MIMO radar
NASA Astrophysics Data System (ADS)
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2018-04-01
The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz reconstruction, the non-stationary noise is transformed into Gaussian white noise, and then the oblique projection operator is used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation performance and source overload.
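A common building block behind "Toeplitz matrix reconstruction" is diagonal averaging of the sample covariance, which enforces the Toeplitz structure expected of stationary noise. The sketch below shows one standard form of that operation; it is not necessarily the exact reconstruction or the oblique projection step used in the paper.

```python
import numpy as np

def toeplitz_rectify(R):
    """Average the sample covariance along its diagonals so the result is
    Toeplitz (and Hermitian), suppressing non-stationary noise terms."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(n):
        d = np.mean(np.diagonal(R, offset=k))   # average of the k-th super-diagonal
        idx = np.arange(n - k)
        T[idx, idx + k] = d
        T[idx + k, idx] = np.conj(d)
    return T

# usage on a sample covariance built from array snapshots X (channels x snapshots):
# R_hat = X @ X.conj().T / X.shape[1]
# R_toep = toeplitz_rectify(R_hat)
```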
Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki
2015-08-01
A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
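A minimal version of the described preprocessing chain (non-local means denoising followed by contrast limited adaptive histogram equalization) can be sketched with OpenCV as below; the parameter values and file name are illustrative, not the authors' settings.

```python
import cv2

def preprocess_endoscopy(gray_u8):
    """Denoise a grayscale uint8 frame with non-local means, then boost local
    contrast with CLAHE. Parameter values are illustrative only."""
    denoised = cv2.fastNlMeansDenoising(gray_u8, None, 10, 7, 21)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
# out = preprocess_endoscopy(img)
```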
SMI adaptive antenna arrays for weak interfering signals
NASA Technical Reports Server (NTRS)
Gupta, I. J.
1987-01-01
The performance of adaptive antenna arrays is studied when a sample matrix inversion (SMI) algorithm is used to control array weights. It is shown that conventional SMI adaptive antennas, like other adaptive antennas, are unable to suppress weak interfering signals (below thermal noise) encountered in broadcasting satellite communication systems. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is modified such that the effect of thermal noise on the weights of the adaptive array is reduced. Thus, the weights are dictated by relatively weak coherent signals. It is shown that the modified algorithm provides the desired interference protection. The use of defocused feeds as auxiliary elements of an SMI adaptive array is also discussed.
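The sketch below shows ordinary SMI weight computation together with one plausible way of "modifying" the sample covariance so that thermal noise has less influence on the weights (subtracting most of the estimated noise power from the diagonal). The specific modification proposed in the report may differ; this is only an illustration of the idea.

```python
import numpy as np

def smi_weights_modified(X, steering, gamma=0.9):
    """X: (n_elements, n_snapshots) complex snapshots; steering: desired-signal vector.
    Subtracting a large fraction of the estimated thermal-noise power from the
    diagonal is one plausible reading of 'modifying the covariance matrix'."""
    n, k = X.shape
    R = X @ X.conj().T / k                          # sample covariance matrix
    sigma2 = np.linalg.eigvalsh(R)[0].real          # noise power ~ smallest eigenvalue
    R_mod = R - gamma * sigma2 * np.eye(n)          # reduce thermal-noise contribution
    w = np.linalg.solve(R_mod, steering)            # SMI weights: R^{-1} s (up to scale)
    return w / (steering.conj() @ w)                # unity response toward desired signal
```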
Multiphoton Scattering Tomography with Coherent States.
Ramos, Tomás; García-Ripoll, Juan José
2017-10-13
In this work we develop an experimental procedure to interrogate the single- and multiphoton scattering matrices of an unknown quantum system interacting with propagating photons. Our proposal requires coherent state laser or microwave inputs and homodyne detection at the scatterer's output, and provides simultaneous information about multiple segments of the scattering matrix, both elastic and inelastic. The method is resilient to detector noise and its errors can be made arbitrarily small by combining experiments at various laser powers. Finally, we show that the tomography of scattering has to be performed using pulsed lasers to efficiently gather information about the nonlinear processes in the scatterer.
NASA Technical Reports Server (NTRS)
Yang, J. C. S.; Tsui, C. Y.
1977-01-01
Elastic wave propagation and attenuation in a model fiber matrix was investigated. Damping characteristics in graphite epoxy composite materials were measured. A sound transmission test facility suitable to incorporate into NASA Ames wind tunnel for measurement of transmission loss due to sound generation in boundary layers was constructed. Measurement of transmission loss of graphite epoxy composite panels was also included.
SU-E-I-09: The Impact of X-Ray Scattering On Image Noise for Dedicated Breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, K; Gazi, P; Boone, J
2015-06-15
Purpose: To quantify the impact of detected x-ray scatter on image noise in flat panel based dedicated breast CT systems and to determine the optimal scanning geometry given practical trade-offs between radiation dose and scatter reduction. Methods: Four different uniform polyethylene cylinders (104, 131, 156, and 184 mm in diameter) were scanned as the phantoms on a dedicated breast CT scanner developed in our laboratory. Both stationary projection imaging and rotational cone-beam CT imaging were performed. For each acquisition type, three different x-ray beam collimations were used (12, 24, and 109 mm measured at isocenter). The aim was to quantify image noise properties (pixel variance, SNR, and image NPS) under different levels of x-ray scatter, in order to optimize the scanning geometry. For both projection images and reconstructed CT images, individual pixel variance and NPS were determined and compared. Noise measurements from the CT images were also performed with different detector binning modes and reconstruction matrix sizes. Noise propagation was also tracked throughout the intermediate steps of cone-beam CT reconstruction, including the inverse-logarithmic process and Fourier filtering before backprojection. Results: Image noise was lower in the presence of higher scatter levels. For the 184 mm polyethylene phantom, the image noise (measured in pixel variance) was ∼30% lower with full cone-beam acquisition compared to a narrow (12 mm) fan-beam acquisition. This trend is consistent across all phantom sizes and throughout all steps of CT image reconstruction. Conclusion: From purely a noise perspective, the cone-beam geometry (i.e. the full cone-angle acquisition) produces lower image noise compared to the lower-scatter fan-beam acquisition for breast CT. While these results are relevant in homogeneous phantoms, the full impact of scatter on noise in bCT should involve contrast-to-noise-ratio measurements in heterogeneous phantoms if the goal is to optimize the scanning geometry for dedicated breast CT. This work was supported by a grant from the National Institute for Biomedical Imaging and Bioengineering (R01 EB002138).
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
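The synthetic-likelihood construction described above reduces to a few lines: simulate the model repeatedly at a candidate parameter, summarize each run, fit a Gaussian to the summaries, and score the observed summaries under it. In the sketch below, `simulate` and `summarize` are user-supplied callables (assumptions of this sketch), and the returned value can be plugged directly into a Metropolis-Hastings sampler.

```python
import numpy as np

def synthetic_log_likelihood(theta, observed_stats, simulate, summarize,
                             n_rep=500, seed=0):
    """Wood-style synthetic likelihood: simulate the model n_rep times at `theta`,
    reduce each run to summary statistics, fit a multivariate Gaussian to those
    statistics, and evaluate the observed statistics under it."""
    rng = np.random.default_rng(seed)
    S = np.array([summarize(simulate(theta, rng)) for _ in range(n_rep)])
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False)
    cov += 1e-8 * np.eye(cov.shape[0])               # numerical regularization
    diff = np.asarray(observed_stats) - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(diff) * np.log(2.0 * np.pi))
```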
Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays
NASA Astrophysics Data System (ADS)
Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.
2014-12-01
Modern seismic networks are recording the ground motion continuously all around the world, with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods drawn mainly from random matrix theory in order to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can make distinctions between the signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers and with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
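A simplified, time-domain version of the covariance-matrix eigenvalue analysis might look like the sketch below, where a broad (flat) eigenvalue distribution indicates incoherent ambient noise and a concentrated one indicates a dominant coherent source. The actual method works on Hermitian covariance matrices estimated per frequency band; the statistic and normalization here are illustrative.

```python
import numpy as np

def covariance_eigen_statistic(windows):
    """windows: (n_stations, n_samples) array of synchronous, pre-whitened records.
    Returns the normalized eigenvalue distribution of the array covariance matrix
    and a 'spectral width' style statistic (centroid of the distribution): small
    when one coherent source dominates, large for incoherent noise."""
    X = windows - windows.mean(axis=1, keepdims=True)
    C = X @ X.T / X.shape[1]                          # array covariance matrix
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]
    lam = lam / lam.sum()                             # normalized eigenvalue distribution
    width = np.sum(np.arange(len(lam)) * lam)         # detection statistic
    return lam, width
```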
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yongzheng, E-mail: yzsung@gmail.com; Li, Wang; Zhao, Donghua
In this paper, we propose a new consensus model in which the interactions among agents stochastically switch between attraction and repulsion. Such a positive-and-negative mechanism is described by white-noise-based coupling. Analytic criteria for consensus and non-consensus in terms of the eigenvalues of the noise intensity matrix are derived, which provide a better understanding of the constructive roles of random interactions. Specifically, we discover a positive role of noise coupling: noise can accelerate the emergence of consensus. We find that the converging speed of the multi-agent network depends on the square of the second smallest eigenvalue of its graph Laplacian. The influence of network topologies on the consensus time is also investigated.
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification with measured vibration feedback, and global-model identification with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
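As an illustration of the simplest of the compared methods, the sketch below applies a normalized-LMS style update to identify a frequency-domain transfer matrix T from successive (control, vibration) harmonic pairs. It is a generic LMS identifier under assumed notation (z ≈ T·theta), not a reproduction of the report's algorithms.

```python
import numpy as np

def lms_identify_T(theta_seq, z_seq, mu=0.5):
    """Normalized-LMS update of a transfer matrix T mapping control harmonics
    theta to measured vibration z. theta_seq and z_seq are sequences of complex
    vectors collected once per identification cycle."""
    n_z, n_th = len(z_seq[0]), len(theta_seq[0])
    T = np.zeros((n_z, n_th), dtype=complex)
    for theta, z in zip(theta_seq, z_seq):
        err = z - T @ theta                                   # prediction error this cycle
        T = T + mu * np.outer(err, theta.conj()) / (theta.conj() @ theta + 1e-12)
    return T
```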
NASA Technical Reports Server (NTRS)
Kiser, J. Douglas; Bansal, Narottam P.; Szelagowski, James; Sokhey, Jagdish; Heffernan, Tab; Clegg, Joseph; Pierluissi, Anthony; Riedell, Jim; Wyen, Travis; Atmur, Steven;
2015-01-01
LibertyWorks®, a subsidiary of Rolls-Royce Corporation, first studied CMC (ceramic matrix composite) exhaust mixers for potential weight benefits in 2008. Oxide CMC potentially offered weight reduction, higher temperature capability, and the ability to fabricate complex shapes for increased mixing and noise suppression. In 2010, NASA was pursuing the reduction of NOx emissions, fuel burn, and noise from turbine engines in Phase I of the Environmentally Responsible Aviation (ERA) Project (within the Integrated Systems Research Program). ERA subtasks, including those focused on CMC components, were being formulated with the goal of maturing technology from Proof of Concept Validation (Technology Readiness Level 3 (TRL 3)) to System/Subsystem or Prototype Demonstration in a Relevant Environment (TRL 6). The oxide CMC mixer component was evaluated at both room and elevated temperatures. A TRL of approximately 5 (Component Validation in a Relevant Environment) was attained, and the CMC mixer was cleared for ground testing on a Rolls-Royce AE3007 engine for performance evaluation to achieve TRL 6.
Group identification in Indonesian stock market
NASA Astrophysics Data System (ADS)
Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong
2016-08-01
The characteristics of the Indonesian stock market are interesting, especially because it represents a developing country. We investigate its dynamics and structure by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix consists largely of noise, which is filtered out using the random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix prediction, allowing the separation of random noise from the original data in the form of the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning using qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the results show that there are mainly two clusters within IDX: natural-resource and non-natural-resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that IDX is still driven mainly by the market-wide effect.
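The core RMT filtering step, separating deviating eigenvalues from the random bulk of the correlation matrix, can be sketched as follows; the Marchenko-Pastur upper edge is used as the cutoff, and the input is assumed to be a T×N matrix of daily returns.

```python
import numpy as np

def rmt_deviating_modes(returns):
    """returns: (T, N) matrix of daily returns (one column per stock).
    Keep only eigenpairs of the empirical correlation matrix whose eigenvalues
    exceed the Marchenko-Pastur upper bound for purely random data; these
    'deviating' modes carry the market-wide and group structure."""
    T, N = returns.shape
    R = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = R.T @ R / T                                    # empirical correlation matrix
    lam, V = np.linalg.eigh(C)
    q = N / T
    lam_max = (1.0 + np.sqrt(q)) ** 2                  # Marchenko-Pastur upper edge
    keep = lam > lam_max
    return lam[keep], V[:, keep]
```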
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
Heyer, Nicholas; Morata, Thais C; Pinkerton, Lynne E; Brueck, Scott E; Stancescu, Daniel; Panaccio, Mary Prince; Kim, Hyoshin; Sinclair, J Stephen; Waters, Martha A; Estill, Cherie F; Franks, John R
2011-07-01
To evaluate the effectiveness of hearing conservation programs (HCP) and their specific components in reducing noise-induced hearing loss (NIHL). This retrospective cohort study was conducted at one food-processing plant and two automotive plants. Audiometric and work-history databases were combined with historical noise monitoring data to develop a time-dependent exposure matrix for each plant. Historical changes in production and HCP implementation were collected from company records, employee interviews and focus groups. These data were used to develop time-dependent quality assessments for various HCP components. 5478 male (30,427 observations) and 1005 female (5816 observations) subjects were included in the analysis. Analyses were conducted separately for males and females. Females tended to have less NIHL at given exposure levels than males. Duration of noise exposure stratified by intensity (dBA) was a better predictor of NIHL than the standard equivalent continuous noise level (L(eq)) based upon a 3-dBA exchange. Within this cohort, efficient dBA strata for males were <95 versus ≥ 95, and for females <90 versus ≥ 90. The reported enforced use of hearing protection devices (HPDs) significantly reduced NIHL. The data did not have sufficient within-plant variation to determine the effectiveness of noise monitoring or worker training. An association between increased audiometric testing and NIHL was believed to be an artifact of increased participation in screening. Historical audiometric data combined with noise monitoring data can be used to better understand the effectiveness of HCPs. Regular collection and maintenance of quality data should be encouraged and used to monitor the effectiveness of these interventions.
NASA Astrophysics Data System (ADS)
Lardner, Timothy; Li, Minghui; Gachagan, Anthony
2014-02-01
Materials with a coarse grain structure are becoming increasingly prevalent in industry due to their resilience to stress and corrosion. These materials are difficult to inspect with ultrasound because reflections from the grains lead to high noise levels which hinder the echoes of interest. Spatially Averaged Sub-Aperture Correlation Imaging (SASACI) is an advanced array beamforming technique that uses the cross-correlation between images from array sub-apertures to generate an image weighting matrix, in order to reduce noise levels. This paper presents a method inspired by SASACI to further improve imaging using phase information to refine focusing and reduce noise. A-scans from adjacent array elements are cross-correlated using both signal amplitude and phase to refine delay laws and minimize phase aberration. The phase-based and amplitude-based corrected images are used as inputs to a two-dimensional cross-correlation algorithm that outputs a weighting matrix that can be applied to any conventional image. This approach was validated experimentally using a 5 MHz array and a coarse-grained Inconel 625 step wedge, and compared to the Total Focusing Method (TFM). Initial results show SNR improvements of over 20 dB compared to TFM, together with much higher resolution.
NASA's high-temperature engine materials program for civil aeronautics
NASA Technical Reports Server (NTRS)
Gray, Hugh R.; Ginty, Carol A.
1992-01-01
The Advanced High-Temperature Engine Materials Technology Program is described in terms of its research initiatives and its goal of developing propulsion systems for civil aeronautics with low levels of noise, pollution, and fuel consumption. The program emphasizes the analysis and implementation of structural materials such as polymer-matrix composites in fans, casings, and engine-control systems. Also investigated in the program are intermetallic- and metal-matrix composites for uses in compressors and turbine disks as well as ceramic-matrix composites for extremely high-temperature applications such as turbine vanes.
Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates
NASA Astrophysics Data System (ADS)
Rodionov, Andrey
An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
Haney, Matthew M.; Mikesell, T. Dylan; van Wijk, Kasper; Nakahara, Hisashi
2012-01-01
Using ambient seismic noise for imaging subsurface structure dates back to the development of the spatial autocorrelation (SPAC) method in the 1950s. We present a theoretical analysis of the SPAC method for multicomponent recordings of surface waves to determine the complete 3 × 3 matrix of correlations between all pairs of three-component motions, called the correlation matrix. In the case of isotropic incidence, when either Rayleigh or Love waves arrive from all directions with equal power, the only non-zero off-diagonal terms in the matrix are the vertical–radial (ZR) and radial–vertical (RZ) correlations in the presence of Rayleigh waves. Such combinations were not considered in the development of the SPAC method. The method originally addressed the vertical–vertical (ZZ), RR and TT correlations, hence the name spatial autocorrelation. The theoretical expressions we derive for the ZR and RZ correlations offer additional ways to measure Rayleigh wave dispersion within the SPAC framework. Expanding on the results for isotropic incidence, we derive the complete correlation matrix in the case of generally anisotropic incidence. We show that the ZR and RZ correlations have advantageous properties in the presence of an out-of-plane directional wavefield compared to ZZ and RR correlations. We apply the results for mixed-component correlations to a data set from Akutan Volcano, Alaska and find consistent estimates of Rayleigh wave phase velocity from ZR compared to ZZ correlations. This work together with the recently discovered connections between the SPAC method and time-domain correlations of ambient noise provide further insights into the retrieval of surface wave Green’s functions from seismic noise.
Donoho, David L; Gavish, Matan; Montanari, Andrea
2013-05-21
Let X_0 be an unknown M × N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M × N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ‖X‖_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ‖·‖_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n, M, N) = n/(MN), rank fraction ρ = rank(X_0)/min{M, N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ; β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M, N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M × N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ‖Y − X‖_F^2/2 + λ‖X‖_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ; β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ; β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M × N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N × N matrices, of various ranks.
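The denoising scheme referred to above has a closed-form solution: soft-thresholding of the singular values of Y, which is the proximal operator of the nuclear norm. A minimal sketch, with an illustrative heuristic choice of λ, follows.

```python
import numpy as np

def svt_denoise(Y, lam):
    """Solves min_X ||Y - X||_F^2 / 2 + lam * ||X||_* in closed form by
    soft-thresholding the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return (U * s_shrunk) @ Vt

# illustrative usage: low-rank signal plus i.i.d. Gaussian noise
rng = np.random.default_rng(1)
M, N, r = 100, 80, 5
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
Y = X0 + 0.5 * rng.standard_normal((M, N))
X_hat = svt_denoise(Y, lam=0.5 * (np.sqrt(M) + np.sqrt(N)))   # heuristic lambda
```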
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
NASA Astrophysics Data System (ADS)
Le, Huy Xuan; Matunaga, Saburo
2014-12-01
This paper presents an adaptive unscented Kalman filter (AUKF) to recover the satellite attitude in a fault detection and diagnosis (FDD) subsystem of microsatellites. The FDD subsystem includes a filter and an estimator with residual generators, hypothesis tests for fault detection, and a reference logic table for fault isolation and fault recovery. The recovery process is based on monitoring the mean and variance values of each attitude sensor's behavior from the residual vectors. Under normal operation, the residual vectors should take the form of Gaussian white noise with zero mean and fixed variance. When the hypothesis tests for the residual vectors detect something unusual by comparing the mean and variance values with dynamic thresholds, the AUKF with a real-time updated measurement noise covariance matrix is used to recover from the sensor faults. The scheme developed in this paper resolves the problem of the heavy and complex calculations during residual generation, and therefore the delay in the isolation process is reduced. Numerical simulations for TSUBAME, a demonstration microsatellite of Tokyo Institute of Technology, are conducted and analyzed to demonstrate the working of the AUKF and the FDD subsystem.
Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation
Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...
2015-06-01
Current spectral clustering algorithms suffer from sensitivity to noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the resulting clusters cannot accurately represent true data patterns, in particular for complex real world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, it not only provides an advanced noise-resisting and density-aware spectral mapping of the original dataset, but also demonstrates stability while tuning the scaling parameter (which usually controls the range of the neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
An adaptive filter method for spacecraft using gravity assist
NASA Astrophysics Data System (ADS)
Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam
2015-04-01
Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flyby for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of measurement noise cannot be accurately determined, and yet have time-varying characteristics, which may introduce large estimation error and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in innovation and residual sequences. Simulations demonstrate that ARUKF is robust to the inaccurate initial measurement noise covariance matrix and time-varying measurement noise. The impact factors in the ARUKF are also investigated.
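One generic form of the innovation-based adaptation that ARUKF-style filters build on is to blend the current measurement-noise covariance with the empirical covariance of recent innovations, as sketched below; the forgetting factor, window length, and the omission of the predicted measurement covariance term are simplifying assumptions, not the paper's exact rule.

```python
import numpy as np

def adapt_measurement_noise(R_prev, innovations, window=20, forgetting=0.95):
    """Innovation-based update of the measurement-noise covariance R.
    innovations: list/array of recent innovation vectors (newest last)."""
    recent = np.asarray(innovations[-window:])
    C_nu = recent.T @ recent / len(recent)            # empirical innovation covariance
    return forgetting * R_prev + (1.0 - forgetting) * C_nu
```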
Infrared image enhancement based on the edge detection and mathematical morphology
NASA Astrophysics Data System (ADS)
Zhang, Linlin; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Yu, Xiaomei; Hui, Mei; Chu, Xuhong; Gong, Cheng
2010-11-01
Un-cooled infrared imaging technology was originally developed out of military necessity. At present, it is widely applied in industry, medicine, and scientific and technological research, since the infrared radiation temperature distribution of a measured object's surface can be observed visually. The infrared images collected in our laboratory have the following characteristics: strong spatial correlation; low contrast and poor visual effect; grayscale images without color or shadow and with low resolution; lower definition compared with visible-light images; and many kinds of noise introduced by random disturbances in the external environment. Digital image processing is widely applied in many areas and has become an important extension of human vision. Traditional methods for image enhancement cannot capture the geometric information of images and tend to amplify noise. To remove noise, improve the visual effect, and overcome these enhancement issues, a mathematical model of the focal plane array (FPA) unit was constructed based on matrix transformation theory. According to the characteristics of the FPA, an image enhancement algorithm combining mathematical morphology with edge detection is established. First, the image profile is obtained using edge detection combined with mathematical morphological operators. Then, an ideal background image is obtained by filling the template profile with the original image, and the image noise can be removed on this basis. Experiments show that the proposed algorithm enhances image detail and improves the signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2015-08-01
The paper deals with dynamic compensation of delayed Self-Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method to improve the response of SPFDs with significant delayed components, such as platinum and vanadium SPFDs. We also present a comparative study between Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on the adaptive fading memory technique is proposed, which provides improved performance over existing methods. Existing delay compensation algorithms do not account for the rate of change of the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at a minimum. The recursive algorithm is easier to implement in real time than the LMI (or ARE) based solutions.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-08-01
We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with various instrumental disturbances are rejected, and full-waveform inversion in a space-time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied to a data set from the Swiss seismic network and the results are compared with the existing high-quality MT catalogue. The software package, programmed in Python, is designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow or to process large pre-existing earthquake catalogues and data sets.
NASA Technical Reports Server (NTRS)
Becker, Joseph F.; Valentin, Jose
1996-01-01
The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
Maximum likelihood techniques applied to quasi-elastic light scattering
NASA Technical Reports Server (NTRS)
Edwards, Robert V.
1992-01-01
An automatic procedure is needed for reliably estimating the quality of particle size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.
Algorithms for System Identification and Source Location.
NASA Astrophysics Data System (ADS)
Nehorai, Arye
This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and input-output impulse response by matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOAs) of waves between the receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOAs. Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for enhancement of sinusoids with additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
Fabricating Composite-Material Structures Containing SMA Ribbons
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Cano, Roberto J.; Lach, Cynthia L.
2003-01-01
An improved method of designing and fabricating laminated composite-material (matrix/fiber) structures containing embedded shape-memory-alloy (SMA) actuators has been devised. Structures made by this method have repeatable, predictable properties, and the fabrication processes can readily be automated. Such structures, denoted shape-memory-alloy hybrid composite (SMAHC) structures, have been investigated for their potential to satisfy requirements to control the shapes or thermoelastic responses of themselves or of other structures into which they might be incorporated, or to control noise and vibrations. Much of the prior work on SMAHC structures has involved the use of SMA wires embedded within matrices or within sleeves through parent structures. The disadvantages of using SMA wires as the embedded actuators include (1) complexity of fabrication procedures because of the relatively large numbers of actuators usually needed; (2) sensitivity to actuator/matrix interface flaws because voids can be of significant size relative to wires; (3) relatively high rates of actuator breakage during curing of matrix materials because of sensitivity to stress concentrations at mechanical restraints; and (4) difficulty of achieving desirable overall volume fractions of SMA wires when trying to optimize the integration of the wires by placing them in selected layers only.
Distributed Matrix Completion: Applications to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown...computing the leading eigenvectors of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying...generalization of gossip algorithms for consensus. The algorithms outperform state-of-the-art methods in a communication-limited scenario. Positioning via
IRAC Full-Scale Flight Testbed Capabilities
NASA Technical Reports Server (NTRS)
Lee, James A.; Pahle, Joseph; Cogan, Bruce R.; Hanson, Curtis E.; Bosworth, John T.
2009-01-01
Overview: Provide validation of adaptive control law concepts through full-scale flight evaluation in a representative avionics architecture. Develop an understanding of the aircraft dynamics of current vehicles in damaged and upset conditions. Real-world conditions include: a) turbulence, sensor noise, and feedback biases; and b) coupling between the pilot and the adaptive system. Simulated damage includes 1) "B" matrix (surface) failures; and 2) "A" matrix failures. Evaluate robustness of control systems to anticipated and unanticipated failures.
Discovering cell types in flow cytometry data with random matrix theory
NASA Astrophysics Data System (ADS)
Shen, Yang; Nussenblatt, Robert; Losert, Wolfgang
Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually by labelling a protein as expressed or not while ignoring the amount of expression. The use of a cross-correlation matrix of stain intensities, which contains information on both the proteins expressed and their amounts, has been largely avoided by researchers because it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross-correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types in an automatic way.
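A minimal sketch of one common RMT-based cleaning step, flattening eigenvalues of the stain cross-correlation matrix that fall below the Marchenko-Pastur upper edge, is given below in Python/NumPy. The authors' exact algorithm is not specified in this abstract, so the thresholding rule, the flattening choice, and the function name are assumptions for illustration.

import numpy as np

def rmt_clean_correlation(X):
    """X: (n_cells, n_stains) matrix of stain intensities.
    Returns a cross-correlation matrix whose 'noise' eigenmodes, i.e. those
    whose eigenvalues fall inside the Marchenko-Pastur band expected for
    purely random data, have been flattened to their average."""
    n, p = X.shape
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    C = Z.T @ Z / n                       # sample correlation matrix

    q = p / n                             # aspect ratio
    lam_max = (1 + np.sqrt(q)) ** 2       # upper Marchenko-Pastur edge

    w, V = np.linalg.eigh(C)
    noise = w < lam_max
    if noise.any():
        w[noise] = w[noise].mean()        # flatten the noise eigenvalues
    C_clean = (V * w) @ V.T

    # Restore the unit diagonal so the result is still a correlation matrix.
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)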
Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.
Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe
2015-01-01
The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Previously proposed SDT models were lastly applied to predict NH subject performance with EAS simulations. NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. The ASR system showed similar behavior to NH subjects despite a positive signal-to-noise ratio shift for both noise conditions, while demonstrating the synergistic effect for cutoff frequencies ≥300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise, while falling short of predicting performance observed in modulated noise. The presented simulation was able to demonstrate the combined-stimulation advantage for NH subjects as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, while CI and EAS user performance was consistently degraded in modulated noise compared with performance in continuous noise. The application of ASR systems seems feasible to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations. In continuous noise, SDT models were largely able to predict the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to the low contrast between objects and high noise levels. Thus new software is required for effective interpretation and analysis of the data. This work describes the development and application of capabilities to analyze and visualize high-resolution X-ray micro tomography datasets.
NASA Astrophysics Data System (ADS)
Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan
2018-01-01
This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring the statistical features of the noise power, an improved standard Kalman filtering algorithm was proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established, and mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are deduced from the established model. An improved central difference Kalman filter was proposed for alternating current (AC) signal processing, which improves the sampling strategy and the treatment of colored noise. Real-time estimation and correction of noise were achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms had a good filtering effect on AC and DC signals with the mixed noise of OCTs. Furthermore, the proposed algorithm was able to achieve real-time correction of noise during the OCT filtering process.
Novel Digital Driving Method Using Dual Scan for Active Matrix Organic Light-Emitting Diode Displays
NASA Astrophysics Data System (ADS)
Jung, Myoung Hoon; Choi, Inho; Chung, Hoon-Ju; Kim, Ohyun
2008-11-01
A new digital driving method has been developed for low-temperature polycrystalline silicon, transistor-driven, active-matrix organic light-emitting diode (AM-OLED) displays by time-ratio gray-scale expression. This driving method effectively increases the emission ratio and the number of subfields by inserting another subfield set into nondisplay periods in the conventional digital driving method. By employing the proposed modified gravity center coding, this method can be used to effectively compensate for dynamic false contour noise. The operation and performance were verified by current measurement and image simulation. The simulation results using eight test images show that the proposed approach improves the average peak signal-to-noise ratio by 2.61 dB, and the emission ratio by 20.5%, compared with the conventional digital driving method.
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
Brockmeyer, Alison M; Potts, Lisa G
2011-02-01
Difficulty understanding in background noise is a common complaint of cochlear implant (CI) recipients. Programming options are available to improve speech recognition in noise for CI users including automatic dynamic range optimization (ADRO), autosensitivity control (ASC), and a two-stage adaptive beamforming algorithm (BEAM). However, the processing option that results in the best speech recognition in noise is unknown. In addition, laboratory measures of these processing options often show greater degrees of improvement than reported by participants in everyday listening situations. To address this issue, Compton-Conley and colleagues developed a test system to replicate a restaurant environment. The R-SPACE™ consists of eight loudspeakers positioned in a 360 degree arc and utilizes a recording made at a restaurant of background noise. The present study measured speech recognition in the R-SPACE with four processing options: standard dual-port directional (STD), ADRO, ASC, and BEAM. A repeated-measures, within-subject design was used to evaluate the four different processing options at two noise levels. Twenty-seven unilateral and three bilateral adult Nucleus Freedom CI recipients. The participants' everyday program (with no additional processing) was used as the STD program. ADRO, ASC, and BEAM were added individually to the STD program to create a total of four programs. Participants repeated Hearing in Noise Test sentences presented at 0 degrees azimuth with R-SPACE restaurant noise at two noise levels, 60 and 70 dB SPL. The reception threshold for sentences (RTS) was obtained for each processing condition and noise level. In 60 dB SPL noise, BEAM processing resulted in the best RTS, with a significant improvement over STD and ADRO processing. In 70 dB SPL noise, ASC and BEAM processing had significantly better mean RTSs compared to STD and ADRO processing. Comparison of noise levels showed that STD and BEAM processing resulted in significantly poorer RTSs in 70 dB SPL noise compared to the performance with these processing conditions in 60 dB SPL noise. Bilateral participants demonstrated a bilateral improvement compared to the better monaural condition for both noise levels and all processing conditions, except ASC in 60 dB SPL noise. The results of this study suggest that the use of processing options that utilize noise reduction, like those available in ASC and BEAM, improve a CI recipient's ability to understand speech in noise in listening situations similar to those experienced in the real world. The choice of the best processing option is dependent on the noise level, with BEAM best at moderate noise levels and ASC best at loud noise levels for unilateral CI recipients. Therefore, multiple noise programs or a combination of processing options may be necessary to provide CI users with the best performance in a variety of listening situations. American Academy of Audiology.
An experimental SMI adaptive antenna array simulator for weak interfering signals
NASA Technical Reports Server (NTRS)
Dilsavor, Ronald S.; Gupta, Inder J.
1991-01-01
An experimental sample matrix inversion (SMI) adaptive antenna array for suppressing weak interfering signals is described. The experimental adaptive array uses a modified SMI algorithm to increase the interference suppression. In the modified SMI algorithm, the sample covariance matrix is redefined to reduce the effect of thermal noise on the weights of an adaptive array. This is accomplished by subtracting a fraction of the smallest eigenvalue of the original covariance matrix from its diagonal entries. The test results obtained using the experimental system are compared with theoretical results. The two show a good agreement.
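A minimal NumPy sketch of the weight computation under the modified SMI rule described above (subtracting a fraction of the smallest eigenvalue from the diagonal of the sample covariance before inversion). The steering-vector normalization and the value of the fraction are assumptions, not parameters reported here.

import numpy as np

def modified_smi_weights(snapshots, steering, fraction=0.9):
    """snapshots: (K, N) complex array, one row per array snapshot.
    steering: (N,) desired-signal steering vector.
    fraction: portion of the smallest eigenvalue removed from the diagonal,
    which reduces the thermal-noise contribution to the adapted weights."""
    K, N = snapshots.shape
    R = snapshots.conj().T @ snapshots / K        # sample covariance matrix

    lam_min = np.linalg.eigvalsh(R).min().real
    R_mod = R - fraction * lam_min * np.eye(N)    # modified SMI covariance

    w = np.linalg.solve(R_mod, steering)          # unnormalized SMI weights
    return w / (steering.conj() @ w)              # unit response to steering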
NASA Astrophysics Data System (ADS)
Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.
2014-09-01
ePix100 is the first variant of a novel class of integrating pixel ASIC architectures optimized for the processing of signals in second-generation LINAC Coherent Light Source (LCLS) X-ray cameras. ePix100 is optimized for ultra-low-noise applications requiring high spatial resolution. ePix ASICs are based on a common platform composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix100 variant has 50 μm × 50 μm pixels arranged in a 352 × 384 matrix, a resolution of 50 e- r.m.s., and a signal range of 35 fC (100 photons at 8 keV). In its final version it will be able to sustain a frame rate of 1 kHz. A first prototype has been fabricated and characterized, and the measurement results are reported here.
NASA Astrophysics Data System (ADS)
Luce, R.; Hildebrandt, P.; Kuhlmann, U.; Liesen, J.
2016-09-01
The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for non-negative matrix factorization which is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed.
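As a rough illustration of the factorization step (not the authors' algorithm or pre-processing), the sketch below applies an off-the-shelf non-negative matrix factorization to a matrix of time-resolved spectra using scikit-learn; the synthetic data, the number of species, and the solver settings are placeholders.

import numpy as np
from sklearn.decomposition import NMF

# D: (n_times, n_wavenumbers) matrix of non-negative, baseline-corrected
# Raman spectra recorded at successive delay times (synthetic data here).
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(50, 400)))

n_species = 3
model = NMF(n_components=n_species, init='nndsvda', max_iter=1000)
C = model.fit_transform(D)        # (n_times, n_species) concentration profiles
S = model.components_             # (n_species, n_wavenumbers) component spectra

# The concentration profiles C(t) can then be fitted to the monomolecular
# rate equations to extract the rate constants of the reaction network.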
In Situ Raman Analysis of CO2—Assisted Drying of Fruit-Slices
Braeuer, Andreas Siegfried; Schuster, Julian Jonathan; Gebrekidan, Medhanie Tesfay; Bahr, Leo; Michelino, Filippo; Zambon, Alessandro; Spilimbergo, Sara
2017-01-01
This work explores the feasibility of applying in situ Raman spectroscopy for the online monitoring of the supercritical carbon dioxide (SC-CO2) drying of fruits. Specifically, we investigate two types of fruits: mango and persimmon. The drying experiments were carried out inside an optical accessible vessel at 10 MPa and 313 K. The Raman spectra reveal: (i) the reduction of the water from the fruit slice and (ii) the change of the fruit matrix structure during the drying process. Two different Raman excitation wavelengths were compared: 532 nm and 785 nm. With respect to the quality of the obtained spectra, the 532 nm excitation wavelength was superior due to a higher signal-to-noise ratio and due to a resonant excitation scheme of the carotenoid molecules. It was found that the absorption of CO2 into the fruit matrix enhances the extraction of water, which was expressed by the obtained drying kinetic curve. PMID:28505120
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Fan, Jiahua; Dallas, William J.; Krupinski, Elizabeth A.; Johnson, Jeffrey
2009-08-01
This presentation describes work in progress that is the result of an NIH SBIR Phase 1 project addressing the widespread concern over the large number of breast cancers and cancer victims [1,2]. The primary goal of the project is to increase the detection rate of microcalcifications through a decrease in the spatial noise of the LCDs used to display the mammograms [3,4]. Noise reduction is to be accomplished with the aid of a high-performance CCD camera and subsequent application of local-mean equalization and error diffusion [5,6]. A second goal of the project is the actual detection of breast cancer. Contrary to the usual approach to mammography, where the mammograms typically have a pixel matrix of approximately 1900 x 2300 pixels (otherwise known as FFDM, or Full-Field Digital Mammograms), we will use only sections of mammograms with a pixel matrix of 256 x 256 pixels. This is because, at this time, reduction of spatial noise on an LCD can only be done on relatively small areas such as 256 x 256 pixels. In addition, the efficacy for detection of breast cancer will be judged using two methods: one is a conventional ROC study [7], the other is a vision model developed over several years, starting at the Sarnoff Research Center and continuing at Siemens Corporate Research in Princeton, NJ [8].
Robust and intelligent bearing estimation
Claassen, John P.
2000-01-01
A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.
Principal component analysis for designed experiments.
Konishi, Tomokazu
2015-01-01
Principal component analysis is used to summarize matrix data, such as those found in transcriptome, proteome or metabolome studies and medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components depend on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and to bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assigns the same weight to, and assumes independence of, all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced into the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of the experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. The effects of these options were observed in microarray experiments, and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes reflected the characteristics of the groups in the experiments well. As was observed, the scaling of the components and the sharing of axes enabled comparisons of the components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Together, these options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models specifying each of the axes.
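A minimal NumPy sketch of the shared-axes idea, fitting principal axes and a rotation center on a training set and projecting unknown samples onto those pre-arranged, scaled axes, is given below. The scaling convention and the synthetic data are illustrative assumptions, not the author's exact scheme.

import numpy as np

# train: (n_train, n_genes) training matrix chosen to reflect the design;
# test: (n_test, n_genes) unknown samples to be projected on the same axes.
rng = np.random.default_rng(1)
train = rng.normal(size=(20, 100))
test = rng.normal(size=(5, 100))

center = train.mean(axis=0)              # rotation center from the design
U, s, Vt = np.linalg.svd(train - center, full_matrices=False)

k = 3                                    # number of retained axes
axes = Vt[:k]                            # shared principal axes
scale = s[:k] / np.sqrt(train.shape[0])  # per-axis scale for unified units

train_scores = (train - center) @ axes.T / scale
test_scores = (test - center) @ axes.T / scale   # unknown samples projected
                                                 # onto the pre-arranged axes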
Global Tropospheric Noise Maps for InSAR Observations
NASA Astrophysics Data System (ADS)
Yun, S. H.; Hensley, S.; Agram, P. S.; Chaubell, M.; Fielding, E. J.; Pan, L.
2014-12-01
Radio waves' differential phase delay variation through the troposphere is the largest error source in Interferometric Synthetic Aperture Radar (InSAR) measurements, and water vapor variability in the troposphere is known to be the dominant factor. We use the precipitable water vapor (PWV) products from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors mounted on the Terra and Aqua satellites to produce tropospheric noise maps for InSAR. We estimate the slope and y-intercept of the power spectral density curve of MODIS PWV and calculate the structure function to estimate the expected tropospheric noise level as a function of distance. The results serve two purposes: 1) to provide guidance on the expected covariance matrix for geophysical modeling, and 2) to provide a quantitative basis for the science Level-1 requirements of the planned NASA-ISRO L-band SAR mission (NISAR). We populate lookup tables of such power spectrum parameters derived from each 1-by-1 degree tile of global coverage. The MODIS data were retrieved from the OSCAR (Online Services for Correcting Atmosphere in Radar) server. Users will be able to use the lookup tables to calculate the expected tropospheric noise level of any date of MODIS data at any distance scale. Such calculations can be used to construct the covariance matrix for geophysical modeling, or to build statistics to support InSAR missions' requirements. For example, about 74% of the world had an InSAR tropospheric noise level (along a radar line of sight for an incidence angle of 40 degrees) of 2 cm or less at the 50 km distance scale during the period 2010/01/01 - 2010/01/09.
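The sketch below shows one straightforward way to estimate an empirical structure function (RMS delay difference versus separation) from a gridded PWV or zenith-delay map, using randomly sampled pixel pairs in NumPy. The pair count, bin width, and variable names are assumptions; the study's actual estimator is derived from the PWV power spectral density.

import numpy as np

def structure_function(field, pixel_size_km, n_pairs=200000, seed=0):
    """Empirical structure function sqrt(<[d(x) - d(x + r)]^2>) of a 2-D
    delay map, estimated from randomly drawn pixel pairs and binned by
    separation distance. 'field' is a 2-D array of zenith delays (cm)."""
    rng = np.random.default_rng(seed)
    ny, nx = field.shape
    i1 = rng.integers(0, ny, n_pairs); j1 = rng.integers(0, nx, n_pairs)
    i2 = rng.integers(0, ny, n_pairs); j2 = rng.integers(0, nx, n_pairs)

    sep = np.hypot(i1 - i2, j1 - j2) * pixel_size_km
    diff2 = (field[i1, j1] - field[i2, j2]) ** 2

    bins = np.arange(0, sep.max() + 5.0, 5.0)       # 5 km separation bins
    idx = np.digitize(sep, bins)
    d = np.array([np.sqrt(diff2[idx == k].mean())
                  for k in range(1, len(bins)) if np.any(idx == k)])
    r = np.array([sep[idx == k].mean()
                  for k in range(1, len(bins)) if np.any(idx == k)])
    return r, d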
Silvestre, Daphné; Cavanagh, Patrick; Arleo, Angelo; Allard, Rémy
2017-02-01
External noise paradigms are widely used to characterize sensitivity by comparing the effect of a variable on contrast threshold when it is limited by internal versus external noise. A basic assumption of external noise paradigms is that the processing properties are the same in low and high noise. However, recent studies (e.g., Allard & Cavanagh, 2011; Allard & Faubert, 2014b) suggest that this assumption could be violated when using spatiotemporally localized noise (i.e., appearing simultaneously and at the same location as the target) but not when using spatiotemporally extended noise (i.e., continuously displayed, full-screen, dynamic noise). These previous findings may have been specific to the crowding and 0D noise paradigms that were used, so the purpose of the current study is to test if this violation of noise-invariant processing also occurs in a standard contrast detection task in white noise. The rationale of the current study is that local external noise triggers the use of recognition rather than detection and that a recognition process should be more affected by uncertainty about the shape of the target than one involving detection. To investigate the contribution of target knowledge on contrast detection, the effect of orientation uncertainty was evaluated for a contrast detection task in the absence of noise and in the presence of spatiotemporally localized or extended noise. A larger orientation uncertainty effect was observed with temporally localized noise than with temporally extended noise or with no external noise, indicating a change in the nature of the processing for temporally localized noise. We conclude that the use of temporally localized noise in external noise paradigms risks triggering a shift in process, invalidating the noise-invariant processing required for the paradigm. If, instead, temporally extended external noise is used to match the properties of internal noise, no such processing change occurs.
Singular value decomposition utilizing parallel algorithms on graphical processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, Charlotte W; Barhen, Jacob
2011-01-01
One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, Cx = (1/K) Σ_{k=1}^{K} X(k) X^H(k), where X(k) denotes a single-frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = U Σ V^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of A A^H and A^H A, respectively, while the singular values are the square roots of the eigenvalues of A A^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to U Σ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square root of the diagonal elements of A^H A and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and the number of concurrent SVDs to be calculated.
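A small NumPy sketch of the covariance-free route described above: taking the SVD of the snapshot matrix directly and deriving the eigenvalues, eigenvectors, and a whitening operator from the factors, without ever forming the sample spectral matrix. This is a serial CPU illustration, not the GPU implementations compared in the study; the array size and data are synthetic placeholders.

import numpy as np

# X: (N_elements, K_snapshots) complex matrix of single-frequency snapshots.
rng = np.random.default_rng(2)
X = rng.normal(size=(32, 256)) + 1j * rng.normal(size=(32, 256))
K = X.shape[1]

# SVD of the data matrix itself -- no sample covariance is ever formed.
U, s, Vh = np.linalg.svd(X, full_matrices=False)

eigvals = s**2 / K        # eigenvalues of the sample spectral matrix Cx
eigvecs = U               # corresponding eigenvectors (spatial modes)

# Whitening operator built directly from the SVD factors.
W = U @ np.diag(1.0 / np.sqrt(eigvals)) @ U.conj().T
X_white = W @ X           # whitened snapshots with identity covariance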
NASA Technical Reports Server (NTRS)
Hill, G. A.; Brown, S. A.; Geiselhart, K. A.
2004-01-01
This paper summarizes the results of studies undertaken to investigate revolutionary propulsion-airframe configurations that have the potential to achieve significant noise reductions over present-day commercial transport aircraft. Using a 300 passenger Blended-Wing-Body (BWB) as a baseline, several alternative low-noise propulsion-airframe-aeroacoustic (PAA) technologies and design concepts were investigated both for their potential to reduce the overall BWB noise levels, and for their impact on the weight, performance, and cost of the vehicle. Two evaluation frameworks were implemented for the assessments. The first was a Multi-Attribute Decision Making (MADM) process that used a Pugh Evaluation Matrix coupled with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). This process provided a qualitative evaluation of the PAA technologies and design concepts and ranked them based on how well they satisfied chosen design requirements. From the results of the evaluation, it was observed that almost all of the PAA concepts gave the BWB a noise benefit, but degraded its performance. The second evaluation framework involved both deterministic and probabilistic systems analyses that were performed on a down-selected number of BWB propulsion configurations incorporating the PAA technologies and design concepts. These configurations included embedded engines with Boundary Layer Ingesting Inlets, Distributed Exhaust Nozzles installed on podded engines, a High Aspect Ratio Rectangular Nozzle, Distributed Propulsion, and a fixed and retractable aft airframe extension. The systems analyses focused on the BWB performance impacts of each concept using the mission range as a measure of merit. Noise effects were also investigated when enough information was available for a tractable analysis. Some tentative conclusions were drawn from the results. One was that the Boundary Layer Ingesting Inlets provided improvements to the BWB's mission range, by increasing the propulsive efficiency at cruise, and therefore offered a means to offset performance penalties imposed by some of the advanced PAA configurations. It was also found that the podded Distributed Exhaust Nozzle configuration imposed high penalties on the mission range and the need for substantial synergistic performance enhancements from an advanced integration scheme was identified. The High Aspect Ratio Nozzle showed inconclusive noise results and posed significant integration difficulties. Distributed Propulsion, in general, imposed performance penalties but may offer some promise for noise reduction from jet-to-jet shielding effects. Finally, a retractable aft airframe extension provided excellent noise reduction for a modest decrease in range.
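Since the abstract names the TOPSIS ranking step without detailing it, the sketch below gives a generic TOPSIS implementation in NumPy. The decision matrix, criterion weights, and benefit/cost designations are invented for illustration and are not the study's values.

import numpy as np

def topsis(decision, weights, benefit):
    """decision: (n_alternatives, n_criteria) matrix of raw scores.
    weights: criterion weights summing to 1.
    benefit: boolean array, True where larger values are better.
    Returns closeness-to-ideal scores (higher is better)."""
    norm = decision / np.linalg.norm(decision, axis=0)     # vector-normalize
    v = norm * weights                                     # weighted matrix

    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))

    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Illustrative use: 4 concepts scored on noise benefit, range impact, cost.
scores = np.array([[8., 3., 5.], [6., 7., 4.], [9., 2., 2.], [5., 6., 7.]])
ranking = topsis(scores, np.array([0.5, 0.3, 0.2]),
                 np.array([True, True, False]))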
NASA Astrophysics Data System (ADS)
Sun, Xiucong; Han, Chao; Chen, Pei
2017-10-01
Spaceborne Global Positioning System (GPS) receivers are widely used for orbit determination of low-Earth-orbiting (LEO) satellites. With the improvement of measurement accuracy, single-frequency receivers are recently considered for low-cost small satellite missions. In this paper, a Schmidt-Kalman filter which processes single-frequency GPS measurements and broadcast ephemerides is proposed for real-time precise orbit determination of LEO satellites. The C/A code and L1 phase are linearly combined to eliminate the first-order ionospheric effects. Systematic errors due to ionospheric delay residual, group delay variation, phase center variation, and broadcast ephemeris errors, are lumped together into a noise term, which is modeled as a first-order Gauss-Markov process. In order to reduce computational complexity, the colored noise is considered rather than estimated in the orbit determination process. This ensures that the covariance matrix accurately represents the distribution of estimation errors without increasing the dimension of the state vector. The orbit determination algorithm is tested with actual flight data from the single-frequency GPS receiver onboard China's small satellite Shi Jian-9A (SJ-9A). Preliminary results using a 7-h data arc on October 25, 2012 show that the Schmidt-Kalman filter performs better than the standard Kalman filter in terms of accuracy.
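A minimal sketch of the first-order Gauss-Markov model used to lump the systematic errors, showing how such a colored-noise term can be simulated in discrete time. The correlation time, standard deviation, and sampling interval below are illustrative assumptions, not values from the paper.

import numpy as np

def gauss_markov(n_steps, dt, tau, sigma, seed=0):
    """Discrete first-order Gauss-Markov sequence with correlation time tau
    and steady-state standard deviation sigma:
        b[k+1] = exp(-dt/tau) * b[k] + w[k],  w ~ N(0, sigma^2 * (1 - phi^2))."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - phi**2)
    b = np.zeros(n_steps)
    for k in range(n_steps - 1):
        b[k + 1] = phi * b[k] + rng.normal(scale=np.sqrt(q))
    return b

# Example: lumped measurement bias with a 10-minute correlation time,
# sampled every 10 s over 7 hours (values are illustrative only).
bias = gauss_markov(n_steps=2520, dt=10.0, tau=600.0, sigma=0.5)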
Image-based modeling of the flow transition from a Berea rock matrix to a propped fracture
NASA Astrophysics Data System (ADS)
Sanematsu, P.; Willson, C. S.; Thompson, K. E.
2013-12-01
In the past decade, new technologies and advances in horizontal hydraulic fracturing to extract oil and gas from tight rocks have raised questions regarding the physics of the flow and transport processes that occur during production. Many of the multi-dimensional details of flow from the rock matrix into the fracture and within the proppant-filled fracture are still unknown, which leads to unreliable well production estimations. In this work, we use x-ray computed micro tomography (XCT) to image 30/60 CarboEconoprop light weight ceramic proppant packed between berea sandstone cores (6 mm in diameter and ~2 mm in height) under 4000 psi (~28 MPa) loading stress. Image processing and segmentation of the 6 micron voxel resolution tomography dataset into solid and void space involved filtering with anisotropic diffusion (AD), segmentation using an indicator kriging (IK) algorithm, and removal of noise using a remove islands and holes program. Physically-representative pore network structures were generated from the XCT images, and a representative elementary volume (REV) was analyzed using both permeability and effective porosity convergence. Boundary conditions were introduced to mimic the flow patterns that occur when fluid moves from the matrix into the proppant-filled fracture and then downstream within the proppant-filled fracture. A smaller domain, containing Berea and proppants close to the interface, was meshed using an in-house unstructured meshing algorithm that allows different levels of refinement. Although most of this domain contains proppants, the Berea section accounted for the majority of the elements due to mesh refinement in this region of smaller pores. A finite element method (FEM) Stokes flow model was used to provide more detailed insights on the flow transition from rock matrix to fracture. Results using different pressure gradients are used to describe the flow transition from the Berea rock matrix to proppant-filled fracture.
Mitigation of Faraday rotation in ALOS-2/PALSAR-2 full polarimetric SAR imageries
NASA Astrophysics Data System (ADS)
Mohanty, Shradha; Singh, Gulab
2016-05-01
The ionosphere, which extends from about 50 to 450 km in Earth's atmosphere, is a particularly important region with regard to electromagnetic wave propagation and radio communications in the L-band and lower frequencies. Free charges in this region interact with the traversing electromagnetic wave and cause rotation of the polarization of the radar signal. In this paper, a practically computable method for quantifying Faraday rotation (FR) is discussed, using full polarimetric ALOS/PALSAR data and ALOS-2/PALSAR-2 data. For well-calibrated monostatic, full-pol ALOS-2/PALSAR-2 data, the reciprocal symmetry of the received scattering matrix is violated due to FR. Apart from FR, other system parameters, such as residual system noise, channel amplitude and phase imbalance, and cross-talk, also account for the non-symmetry. To correct for the FR effect, the noise correction was performed first. PALSAR/PALSAR-2 data were converted into a 4×4 covariance matrix to calculate the coherence between the cross-polarized elements, and the covariance matrix was modified by the coherence factor. For FR correction, the covariance matrix was converted into a 4×4 coherency matrix. The elements of the coherency matrix were used to estimate the FR angle and correct for FR. Higher mean FR values during ALOS-PALSAR measurements can be seen in regions nearer to the equator, and the values gradually decrease with increasing latitude. Moreover, temporal variations in FR can also be noticed over different years (2006-2010), with varying sunspot activity, for the Niigata, Japan test site. With the increased sunspot activity expected during ALOS-2/PALSAR-2 observations, more striping effects were observed over Mumbai, India. These data have also been FR corrected, with mean FR values of about 8°, using the above-mentioned technique.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
Based on an analysis of a cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field described by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix, and the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs of the images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising during reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and speed of computational ghost imaging.
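A toy 1-D sketch of the measurement model implied above, illumination patterns drawn from a discrete Fourier transform matrix and reconstruction by the pseudo-inverse, is given below in NumPy. It ignores practical details such as non-negative intensity patterns and detector noise; the sizes and the test object are placeholders.

import numpy as np

n = 64                                  # number of object pixels
m = 48                                  # number of measurements (m <= n)
rng = np.random.default_rng(3)
x = rng.random(n)                       # unknown 1-D object (illustrative)

# Measurement matrix: the first m rows of an n-point DFT matrix.
k = np.arange(n)
A = np.exp(-2j * np.pi * np.outer(k[:m], k) / n)

y = A @ x                               # bucket-detector measurements
x_hat = np.linalg.pinv(A) @ y           # pseudo-inverse reconstruction

err = np.linalg.norm(x_hat.real - x) / np.linalg.norm(x)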
NASA Technical Reports Server (NTRS)
Turner, B. Curtis
1992-01-01
A method is developed for the prediction of ozone levels in planetary atmospheres. The method is formulated in terms of error covariance matrices associated with the direct measurements, the a priori first-guess profiles, and a weighting function matrix. It is described by the linearized equation y = A x + η, where A is the weighting matrix and η is noise. The problems with this approach are: (1) the A matrix is nearly singular; (2) the number of unknowns in the profile exceeds the number of data points, so the solution may not be unique; and (3) even if a unique solution exists, η may cause the solution to be ill conditioned.
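One standard remedy for the listed problems is Tikhonov regularization of the linearized retrieval; the sketch below shows this in NumPy. It is offered only as an illustration of how the near-singular, underdetermined system can be stabilized, not as the report's method, and the matrix sizes and damping parameter are invented.

import numpy as np

def regularized_retrieval(A, y, gamma):
    """Minimizer of ||A x - y||^2 + gamma^2 ||x||^2, which remains well posed
    when A is nearly singular or has fewer rows than unknowns."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma**2 * np.eye(n), A.T @ y)

# Illustrative ill-conditioned weighting matrix and noisy measurements.
rng = np.random.default_rng(4)
A = rng.normal(size=(20, 40))           # fewer data points than unknowns
x_true = rng.normal(size=40)
y = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = regularized_retrieval(A, y, gamma=0.1)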
NASA Astrophysics Data System (ADS)
Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua
2014-11-01
Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system is put into service, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technique for both real-time checking and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves the robustness and accuracy of the fundamental matrix estimation. Finally, we perform an experiment computing the relationship of a pair of stereo cameras to demonstrate the accurate performance of the algorithm.
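The three-step process above maps directly onto standard OpenCV calls; a hedged sketch follows. The feature matching stage is omitted, the RANSAC threshold and confidence are arbitrary choices, and recoverPose is used here with a single intrinsic matrix for simplicity, which is an approximation when the two cameras have different intrinsics.

import cv2
import numpy as np

# pts1, pts2: matched image points (N x 2, float32) from the two cameras;
# K1, K2: the fixed intrinsic matrices of the two cameras (assumed known).

def recalibrate_extrinsics(pts1, pts2, K1, K2):
    # (i) Estimate the fundamental matrix from point correspondences.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # (ii) Essential matrix from the fundamental matrix and the intrinsics.
    E = K2.T @ F @ K1

    # (iii) Decompose the essential matrix into rotation and translation
    # (translation is recovered only up to scale).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K1)
    return R, t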
NASA Astrophysics Data System (ADS)
Burrell, Derek J.; Middlebrook, Christopher T.
2017-08-01
Wireless communication systems that employ free-space optical links in place of radio/microwave technologies carry substantial benefits in terms of data throughput, network security and design efficiency. Along with these advantages comes the challenge of counteracting signal degradation caused by atmospheric turbulence in free-space environments. A fully coherent laser source experiences random phase delays along its traversing path in turbulent conditions forming a speckle pattern and lowering the received signal-to-noise ratio upon detection. Preliminary research has shown that receiver-side speckle contrast may be significantly reduced and signal-to-noise ratio increased accordingly through the use of a partially coherent light source. While dynamic diffusers and adaptive optics solutions have been proven effective, they also add expense and complexity to a system that relies on accessibility and robustness for successful implementation. A custom Hadamard diffractive matrix design is used to statically induce partial coherence in a transmitted beam to increase signal-to-noise ratio for experimental turbulence scenarios. Atmospheric phase screens are generated using an open-source software package and subsequently loaded into a spatial light modulator using nematic liquid crystals to modulate the phase.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
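The eigenvalue-correction step described here can be written compactly; below is a NumPy sketch of projecting a trace-one Hermitian candidate matrix onto the nearest physical state in the 2-norm by zeroing the most negative eigenvalues and redistributing their weight. It follows the spirit of the abstract; the linear-time probability-simplex routine mentioned at the end is not reproduced.

import numpy as np

def nearest_density_matrix(mu):
    """Project a Hermitian, trace-one matrix mu (which may have negative
    eigenvalues) onto the closest physical density matrix in the 2-norm,
    following the eigenvalue-correction rule described in the abstract."""
    w, V = np.linalg.eigh(mu)            # ascending eigenvalues
    w = w[::-1].copy()                   # sort descending
    V = V[:, ::-1]
    d = len(w)

    acc = 0.0
    i = d
    # Zero out the most negative eigenvalues, spreading their (negative)
    # weight uniformly over the eigenvalues that remain; a trace-one input
    # guarantees at least one eigenvalue survives, so i stays >= 1.
    while i > 0 and w[i - 1] + acc / i < 0:
        acc += w[i - 1]
        w[i - 1] = 0.0
        i -= 1
    w[:i] += acc / i

    return (V * w) @ V.conj().T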
Visualization of Au Nanoparticles Buried in a Polymer Matrix by Scanning Thermal Noise Microscopy.
Yao, Atsushi; Kobayashi, Kei; Nosaka, Shunta; Kimura, Kuniko; Yamada, Hirofumi
2017-02-17
Several researchers have recently demonstrated visualization of subsurface features with a nanometer-scale resolution using various imaging schemes based on atomic force microscopy. Since all these subsurface imaging techniques require excitation of the oscillation of the cantilever and/or sample surface, it has been difficult to identify a key imaging mechanism. Here we demonstrate visualization of Au nanoparticles buried 300 nm into a polymer matrix by measurement of the thermal noise spectrum of a microcantilever with a tip in contact to the polymer surface. We show that the subsurface Au nanoparticles are detected as the variation in the contact stiffness and damping reflecting the viscoelastic properties of the polymer surface. The variation in the contact stiffness well agrees with the effective stiffness of a simple one-dimensional model, which is consistent with the fact that the maximum depth range of the technique is far beyond the extent of the contact stress field.
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
An experimental study on the noise correlation properties of CBCT projection data
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ouyang, Luo; Ma, Jianhua; Huang, Jing; Chen, Wufan; Wang, Jing
2014-03-01
In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation results in a lower noise level as compared to the PWLS criterion without considering the noise correlation at the matched resolution.
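The first- and second-order neighbor correlations quoted above can be estimated directly from a stack of repeated acquisitions of the same projection view. A minimal numpy sketch (array shapes and names are placeholders, not the paper's processing chain) is:

```python
import numpy as np

def neighbor_noise_correlation(projections, detector_axis=1, lag=1):
    """projections: (n_repeats, n_rows, n_cols) repeated acquisitions of one view.
    Returns the mean correlation coefficient between each detector bin and its
    neighbor `lag` bins away along the chosen detector axis."""
    noise = projections - projections.mean(axis=0, keepdims=True)  # remove mean signal
    a = noise
    b = np.roll(noise, -lag, axis=detector_axis + 1)
    # trim the wrapped-around edge introduced by np.roll
    sl = [slice(None)] * noise.ndim
    sl[detector_axis + 1] = slice(0, noise.shape[detector_axis + 1] - lag)
    a, b = a[tuple(sl)], b[tuple(sl)]
    cov = (a * b).mean(axis=0)                       # covariance across repeats
    corr = cov / (a.std(axis=0) * b.std(axis=0) + 1e-12)
    return corr.mean()
```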
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when a quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state and post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphical processing unit card is used to accelerate the software-based post-processing.
An information theory model for dissipation in open quantum systems
NASA Astrophysics Data System (ADS)
Rogers, David M.
2017-08-01
This work presents a general model for open quantum systems using an information game along the lines of Jaynes’ original work. It is shown how an energy based reweighting of propagators provides a novel moment generating function at each time point in the process. Derivatives of the generating function give moments of the time derivatives of observables. Aside from the mathematically helpful properties, the ansatz reproduces key physics of stochastic quantum processes. At high temperature, the average density matrix follows the Caldeira-Leggett equation. Its associated Langevin equation clearly demonstrates the emergence of dissipation and decoherence time scales, as well as an additional diffusion due to quantum confinement. A consistent interpretation of these results is that decoherence and wavefunction collapse during measurement are directly related to the degree of environmental noise, and thus occur because of subjective uncertainty of an observer.
On modeling animal movements using Brownian motion with measurement error.
Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun
2014-02-01
Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
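The tractable exact likelihood mentioned above follows from the Gaussian covariance structure of Brownian motion observed with independent normal errors. A dense-matrix sketch is below, with σ² the Brownian variance rate and τ² the measurement-error variance (placeholder symbols); the paper's own implementation exploits sparse-matrix computations rather than this O(n³) version.

```python
import numpy as np

def brownian_obs_loglik(x, t, sigma2, tau2, x0=0.0):
    """Exact Gaussian log-likelihood of Brownian motion (variance rate sigma2,
    known start x0 at time 0) observed at times t with iid N(0, tau2) errors."""
    t = np.asarray(t, dtype=float)
    r = np.asarray(x, dtype=float) - x0            # residuals about the known start
    # Cov(X_i, X_j) = sigma2 * min(t_i, t_j) + tau2 * 1{i == j}
    C = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, r)
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (z @ z + logdet + len(t) * np.log(2.0 * np.pi))
```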
St. Laurent, Georges; Savva, Yiannis A.; Kapranov, Philipp
2012-01-01
Perhaps no other topic in contemporary genomics has inspired such diverse viewpoints as the 95+% of the genome, previously known as “junk DNA,” that does not code for proteins. Here, we present a theory in which dark matter RNA plays a role in the generation of a landscape of spatial micro-domains coupled to the information signaling matrix of the nuclear landscape. Within and between these micro-domains, dark matter RNAs additionally function to tether RNA interacting proteins and complexes of many different types, and by doing so, allow for a higher performance of the various processes requiring them at ultra-fast rates. This improves signal to noise characteristics of RNA processing, trafficking, and epigenetic signaling, where competition and differential RNA binding among proteins drives the computational decisions inherent in regulatory events. PMID:22539933
Transport, shot noise, and topology in AC-driven dimer arrays
NASA Astrophysics Data System (ADS)
Niklas, Michael; Benito, Mónica; Kohler, Sigmund; Platero, Gloria
2016-11-01
We analyze an AC-driven dimer chain connected to a strongly biased electron source and drain. It turns out that the resulting transport exhibits fingerprints of topology. They are particularly visible in the driving-induced current suppression and the Fano factor. Thus, shot noise measurements provide a topological phase diagram as a function of the driving parameters. The observed phenomena can be explained physically by a mapping to an effective time-independent Hamiltonian and the emergence of edge states. Moreover, by considering quantum dissipation, we determine the requirements for the coherence properties in a possible experimental realization. For the computation of the zero-frequency noise, we develop an efficient method based on matrix-continued fractions.
State Estimation for Landing Maneuver on High Performance Aircraft
NASA Astrophysics Data System (ADS)
Suresh, P. S.; Sura, Niranjan K.; Shankar, K.
2018-01-01
State estimation methods are popular means for validating aerodynamic databases of aircraft flight maneuver performance characteristics. In this work, the state estimation method is explored for the landing maneuver for the first time, using an upper diagonal adaptive extended Kalman filter (UD-AEKF) with fuzzy-based adaptive tuning of the process noise matrix. The mathematical model for the symmetrical landing maneuver consists of non-linear flight mechanics equations representing aircraft longitudinal dynamics. The UD-AEKF algorithm is implemented in the MATLAB environment, and the states with bias are taken as the initial conditions just prior to the flare. The measurement data are obtained from a non-linear 6 DOF pilot-in-the-loop simulation written in FORTRAN. The simulated measurement data are additively mixed with process and measurement noise and used as input to the UD-AEKF. Then, the governing states that dictate the landing loads at the instant of touchdown are compared. The method is verified using flight data in which the vertical acceleration at the aircraft center of gravity (CG) is compared. Two possible outcomes of relying purely on the measured aircraft data are highlighted. It is observed that, with an adaptive fuzzy-logic-based extended Kalman filter tuned for aircraft landing dynamics, the methodology improves the quality of the states estimated from noisy measurements.
Noise Modeling From Conductive Shields Using Kirchhoff Equations.
Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J
2010-10-09
Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability materials and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and additionally SQUID systems frequently have radio frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries while common experimental setups today consist of multiple sensor systems with complex shield geometries. With complex sensor arrays used in, for example, MEG and Ultra Low Field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself. This is crucial for incorporating efficient noise cancelation schemes for the system. We developed an approach that allows us to calculate the Johnson noise for arbitrary shaped shields and multiple sensor systems. The approach is efficient enough to be able to run on a single PC system and return results on a minute scale. With a multiple sensor system our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we will show how the algorithm can be implemented.
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing the use of an effective approach to obtain the resonant frequencies in a measurement. The technique, based on Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, and hence a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
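A bare-bones version of the Matrix Pencil estimate of complex resonant frequencies from uniformly sampled data is sketched below. The pencil parameter and model order are illustrative choices, and the amplitude (residue) estimation step mentioned in the abstract is omitted.

```python
import numpy as np

def matrix_pencil_poles(y, dt, M, L=None):
    """Estimate M complex poles s_k of y(n*dt) ~ sum_k a_k * exp(s_k * n * dt)
    via the matrix pencil method. L is the pencil parameter (N//3 is typical)."""
    y = np.asarray(y)
    N = len(y)
    L = L or N // 3
    # Hankel data matrix: row i holds samples y[i], ..., y[i+L]
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]            # pencil pair, shifted by one sample
    # Rank-M (noise-filtered) pseudoinverse of Y1 via truncated SVD
    U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
    Y1_pinv = (Vh[:M].conj().T / s[:M]) @ U[:, :M].conj().T
    z = np.linalg.eigvals(Y1_pinv @ Y2)     # M meaningful eigenvalues, rest near zero
    z = z[np.argsort(-np.abs(z))][:M]       # keep the M dominant pole estimates
    return np.log(z) / dt                   # complex resonant frequencies s_k
```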
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support-vector-machines (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
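The embedding-plus-classifier comparison described above maps naturally onto scikit-learn. A schematic sketch follows, with placeholder feature vectors (e.g. the vectorized upper triangle of each subject's ICA cross-correlation matrix) and illustrative hyperparameters; diffusion maps would require a custom transformer, as they are not part of scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_embeddings(X, y, n_dims=5):
    """X: (n_subjects, n_features) correlation-matrix features; y: group labels."""
    embeddings = {"PCA": PCA(n_components=n_dims),
                  "ISOMAP": Isomap(n_neighbors=10, n_components=n_dims)}
    classifiers = {"SVM": SVC(kernel="rbf"),
                   "kNN": KNeighborsClassifier(n_neighbors=5)}
    scores = {}
    for e_name, emb in embeddings.items():
        for c_name, clf in classifiers.items():
            pipe = make_pipeline(emb, clf)      # embed, then classify
            scores[(e_name, c_name)] = cross_val_score(pipe, X, y, cv=5).mean()
    return scores
```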
Correlated Noise: How it Breaks NMF, and What to Do About It.
Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D
2011-01-12
Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data.
Correlated Noise: How it Breaks NMF, and What to Do About It
Plis, Sergey M.; Potluru, Vamsi K.; Lane, Terran; Calhoun, Vince D.
2010-01-01
Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data. PMID:23750288
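For orientation, the standard Lee-Seung multiplicative updates for Euclidean NMF, the baseline whose objective glsNMF replaces with a generalized (noise-covariance-weighted) least squares, look like the following sketch; the glsNMF updates themselves are derived in the paper and differ from these.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-9):
    """Basic Lee-Seung NMF for the Frobenius objective ||V - W H||^2.
    V: (m, n) nonnegative data matrix. Returns W (m, rank) and H (rank, n)."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for features
    return W, H
```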
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
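The expansion of the precision matrix around a model covariance can be organized, schematically, as a Neumann-type series. The following is a hedged sketch of the leading terms for C = A + B with A known analytically; the paper's precise estimator for the correction terms differs in detail.

```latex
% Leading-order expansion of the precision matrix for C = A + B, valid when the
% correction B is small relative to the model part A (spectral radius of A^{-1}B < 1):
\begin{equation}
  C^{-1} = (A + B)^{-1}
         = A^{-1} - A^{-1} B\, A^{-1} + A^{-1} B\, A^{-1} B\, A^{-1} - \cdots
\end{equation}
% A^{-1} is computed analytically (noise-free), while B is estimated from a modest
% number of simulations, so only the low-order correction terms inherit sampling noise.
```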
Suspension parameter estimation in the frequency domain using a matrix inversion approach
NASA Astrophysics Data System (ADS)
Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.
2011-12-01
The dynamic lumped parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally experimentally identified. The accuracy of identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use a limited degree of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter or a half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are estimated, however, using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the matrix inverted is ill-conditioned. The case study shows a good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
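When the matrix to be inverted in such a frequency-domain least-squares formulation is ill-conditioned, one standard remedy is a truncated-SVD (or Tikhonov-regularized) pseudo-inverse. A minimal numpy sketch of the truncated-SVD variant is given below as an illustration; the paper's specific conditioning fix may differ.

```python
import numpy as np

def solve_truncated_svd(A, b, rel_tol=1e-3):
    """Least-squares solution of A x = b that discards singular values below
    rel_tol * s_max, suppressing noise amplification when A is ill-conditioned."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]                 # retain well-conditioned modes only
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return (Vh.conj().T * s_inv) @ (U.conj().T @ b)
```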
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang
2018-05-08
When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, noise in Sentinel-2 Level-1C data distributed to users becomes processed. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon counting detectors) and signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing in Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to a univariate noise model. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2 - 5 in homogeneous areas as compared to noise variance for high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
Advanced readout methods for superheated emulsion detectors
NASA Astrophysics Data System (ADS)
d'Errico, F.; Di Fulvio, A.
2018-05-01
Superheated emulsions develop visible vapor bubbles when exposed to ionizing radiation. They consist of droplets of a metastable liquid, emulsified in an inert matrix. The formation of a bubble cavity is accompanied by sound waves. Evaporated bubbles also exhibit a lower refractive index, compared to the inert gel matrix. These two physical phenomena have been exploited to count the number of evaporated bubbles and thus measure the interacting radiation flux. Systems based on piezoelectric transducers have been traditionally used to acquire the acoustic (pressure) signals generated by bubble evaporation. Such systems can operate at ambient noise levels exceeding 100 dB; however, they are affected by a significant dead time (>10 ms). An optical readout technique relying on the scattering of light by neutron-induced bubbles has been recently improved in order to minimize measurement dead time and ambient noise sensitivity. Beams of infra-red light from light-emitting diode (LED) sources cross the active area of the detector and are deflected by evaporated bubbles. The scattered light correlates with bubble density. Planar photodiodes are affixed along the detector length in optimized positions, allowing the detection of scattered light from the bubbles and minimizing the detection of direct light from the LEDs. A low-noise signal-conditioning stage has been designed and realized to amplify the current induced in the photodiodes by scattered light and to subtract the background signal due to intrinsic scattering within the detector matrix. The proposed amplification architecture maximizes the measurement signal-to-noise ratio, yielding a readout uncertainty of 6% (±1 SD), with 1000 evaporated bubbles in a detector active volume of 150 ml (6 cm detector diameter). In this work, we prove that the intensity of scattered light also relates to the bubble size, which can be controlled by applying an external pressure to the detector emulsion. This effect can be exploited during the readout procedure to minimize shadowing effects between bubbles, which become severe when several thousand bubbles are present. The detector we used in this work is based on superheated C-318 (octafluorocyclobutane), emulsified in 100 μm ± 10% (1 SD) diameter drops in an inert matrix of approximately 150 ml. The detector was operated at room temperature and ambient pressure.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) the CMT inversion is fully automated and no user interaction is required, although the details of the process can be visually inspected later on many automatically plotted figures; (ii) the automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion; (iii) a data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies; (iv) a Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function; (v) a space-time grid search effectively combined with the least-squares inversion of moment tensor components speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data. It has been tested by comparison with manually processed moment tensors of all events with M≥3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in order to be applicable in networks ranging from local to regional. The method can be applied either to the everyday network data flow or to process large pre-existing earthquake catalogues and data sets.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
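For context, the MLEM iteration to which such a stopping rule applies has the standard textbook form below, with y the measured counts, a_ij the system matrix, and x_j^(k) the current image estimate; this is the generic update, not anything specific to the proposed criterion.

```latex
\begin{equation}
  x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i a_{ij}}
  \sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}}
\end{equation}
```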
Calibration of the COBE FIRAS instrument
NASA Technical Reports Server (NTRS)
Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.
1994-01-01
The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky condition. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky 2.6 deg square pixel and 0.7/cm spectral element the random detector noise limit is of order of a few times 10(exp -7) ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.
When noise is beneficial for sensory encoding: Noise adaptation can improve face processing.
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Redies, Christoph; Németh, Kornél; Kovács, Gyula
2017-10-01
The presence of noise usually impairs the processing of a stimulus. Here, we studied the effects of noise on face processing and show, for the first time, that adaptation to noise patterns has beneficial effects on face perception. We used noiseless faces that were either surrounded by random noise or presented on a uniform background as stimuli. In addition, the faces were either preceded by noise adaptors or not. Moreover, we varied the statistics of the noise so that its spectral slope either matched that of the faces or it was steeper or shallower. Results of parallel ERP recordings showed that the background noise reduces the amplitude of the face-evoked N170, indicating less intensive face processing. Adaptation to a noise pattern, however, led to reduced P1 and enhanced N170 amplitudes as well as to a better behavioral performance in two of the three noise conditions. This effect was also augmented by the presence of background noise around the target stimuli. Additionally, the spectral slope of the noise pattern affected the size of the P1, N170 and P2 amplitudes. We reason that the observed effects are due to the selective adaptation of noise-sensitive neurons present in the face-processing cortical areas, which may enhance the signal-to-noise-ratio. Copyright © 2017 Elsevier Inc. All rights reserved.
Time-Distance Helioseismology: Noise Estimation
NASA Astrophysics Data System (ADS)
Gizon, L.; Birch, A. C.
2004-10-01
As in global helioseismology, the dominant source of noise in time-distance helioseismology measurements is realization noise due to the stochastic nature of the excitation mechanism of solar oscillations. Characterizing noise is important for the interpretation and inversion of time-distance measurements. In this paper we introduce a robust definition of travel time that can be applied to very noisy data. We then derive a simple model for the full covariance matrix of the travel-time measurements. This model depends only on the expectation value of the filtered power spectrum and assumes that solar oscillations are stationary and homogeneous on the solar surface. The validity of the model is confirmed through comparison with SOHO MDI measurements in a quiet-Sun region. We show that the correlation length of the noise in the travel times is about half the dominant wavelength of the filtered power spectrum. We also show that the signal-to-noise ratio in quiet-Sun travel-time maps increases roughly as the square root of the observation time and is at maximum for a distance near half the length scale of supergranulation.
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by the direct analysis of the autocorrelation function. In order to provide a precise conclusion based on the information extracted from the autocorrelation matrix, the results must first be evaluated. In other words, they need to be compared with some sort of criterion to provide a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is selected to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets. Despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.
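One simple way to build such an autocorrelation matrix and inspect its spectrum, offered purely as an illustration rather than the authors' exact construction, is to form the Toeplitz matrix of sample autocorrelations and compare its eigenvalues with those obtained from fGn surrogates.

```python
import numpy as np
from scipy.linalg import toeplitz

def autocorr_matrix_spectrum(x, max_lag):
    """Toeplitz autocorrelation matrix of a time series up to max_lag and its
    eigenvalue spectrum (sorted in decreasing order)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    full = np.correlate(x, x, mode="full") / (x.var() * len(x))
    acf = full[len(x) - 1 : len(x) + max_lag]   # sample autocorrelation, lags 0..max_lag
    A = toeplitz(acf)                           # A[i, j] = acf(|i - j|)
    return np.sort(np.linalg.eigvalsh(A))[::-1]
```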
NASA Astrophysics Data System (ADS)
Liao, Mingle; Wu, Baojian; Hou, Jianhong; Qiu, Kun
2018-03-01
Large scale optical switches are essential components in optical communication networks. We aim to build a large scale optical switch matrix by interconnecting silicon-based optical switch chips in a 3-stage CLOS structure, where EDFAs are needed to compensate for the insertion loss of the chips. The optical signal-to-noise ratio (OSNR) performance of the resulting large scale optical switch matrix is investigated for TE-mode light, and the experimental results are in agreement with the theoretical analysis. We build a 64 × 64 switch matrix from 16 × 16 optical switch chips; the OSNR and receiver sensitivity can be improved by 0.6 dB and 0.2 dB, respectively, by optimizing the gain configuration of the EDFAs.
Electronic Noise and Fluctuations in Solids
NASA Astrophysics Data System (ADS)
Kogan, Sh.
2008-07-01
Preface; Part I. Introduction. Some Basic Concepts of the Theory of Random Processes: 1. Probability density functions. Moments. Stationary processes; 2. Correlation function; 3. Spectral density of noise; 4. Ergodicity and nonergodicity of random processes; 5. Random pulses and shot noise; 6. Markov processes. General theory; 7. Discrete Markov processes. Random telegraph noise; 8. Quasicontinuous (Diffusion-like) Markov processes; 9. Brownian motion; 10. Langevin approach to the kinetics of fluctuations; Part II. Fluctuation-Dissipation Relations in Equilibrium Systems: 11. Derivation of fluctuation-dissipation relations; 12. Equilibrium noise in quasistationary circuits. Nyquist theorem; 13. Fluctuations of electromagnetic fields in continuous media; Part III. Fluctuations in Nonequilibrium Gases: 14. Some basic concepts of hot-electrons' physics; 15. Simple model of current fluctuations in a semiconductor with hot electrons; 16. General kinetic theory of quasiclassical fluctuations in a gas of particles. The Boltzmann-Langevin equation; 17. Current fluctuations and noise temperature; 18. Current fluctuations and diffusion in a gas of hot electrons; 19. One-time correlation in nonequilibrium gases; 20. Intervalley noise in multivalley semiconductors; 21. Noise of hot electrons emitting optical phonons in the streaming regime; 22. Noise in a semiconductor with a postbreakdown stable current filament; Part IV. Generation-recombination noise: 23. G-R noise in uniform unipolar semiconductors; 24. Noise produced by recombination and diffusion; Part V. Noise in quantum ballistic systems: 25. Introduction; 26. Equilibrium noise and shot noise in quantum conductors; 27. Modulation noise in quantum point contacts; 28. Transition from a ballistic conductor to a macroscopic one; 29. Noise in tunnel junctions; Part VI. Resistance noise in metals: 30. Incoherent scattering of electrons by mobile defects; 31. Effect of mobile scattering centers on the electron interference pattern; 32. Fluctuations of the number of diffusing scattering centers; 33. Temperature fluctuations and the corresponding noise; Part VII. Noise in strongly disordered conductors: 34. Basic ideas of the percolation theory; 35. Resistance fluctuations in percolation systems; 36. Experiments; Part VIII. Low-frequency noise with a 1/f-type spectrum and random telegraph noise: 37. Introduction; 38. Some general properties of 1/f noise; 39. Basic models of 1/f noise; 40. 1/f noise in metals; 41. Low-frequency noise in semiconductors; 42. Magnetic noise in spin glasses and some other magnetic systems; 43. Temperature fluctuations as a possible source of 1/f noise; 44. Random telegraph noise; 45. Fluctuations with 1/f spectrum in other systems; 46. General conclusions on 1/f noise; Part IX. Noise in Superconductors and Superconducting Structures: 47. Noise in Josephson junctions; 48. Noise in type II superconductors; References; Subject index.
NASA Technical Reports Server (NTRS)
Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.
1984-01-01
Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, inflight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proven explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters - tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient.
Shot noise and electronic properties in the inversion-symmetric Weyl semimetal resonant structure
NASA Astrophysics Data System (ADS)
Yang, Yanling; Bai, Chunxu; Xu, Xiaoguang; Jiang, Yong
2018-02-01
Using the transfer matrix method, the authors combine the analytical formula with numerical calculation to explore the shot noise and conductance of massless Weyl fermions in the Weyl semimetal resonant junction. By varying the barrier strength, the structure of the junction, the Fermi energy, and the crystallographic angle, the shot noise and conductance can be tuned efficiently. For a quasiperiodic superlattice, in complete contrast to the conventional junction case, the effect of the disorder strength on the shot noise and conductance depends on the competition of classical tunneling and Klein tunneling. Moreover, the delta barrier structure is also vital in determining the shot noise and conductance. In particular, a universal Fano factor has been found in a single delta potential case, whereas the resonant structure of the Fano factor perfectly matches with the number of barriers in a delta potential superlattice. These results are crucial for engineering nanoelectronic devices based on this topological semimetal material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, J.D.; Woan, G.
Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
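The principal-component argument can be illustrated with a small numpy sketch: given a model covariance for the laser-frequency noise alone, its null (or near-null) eigenvectors define data combinations, analogous to TDI variables, on which the likelihood can be based. The matrix names here are placeholders, not the paper's actual LISA model.

```python
import numpy as np

def laser_noise_free_basis(N_laser, tol=1e-10):
    """Orthonormal basis (columns) for data combinations insensitive to laser
    frequency noise: eigenvectors of the laser-noise covariance model whose
    eigenvalues are (numerically) zero."""
    w, V = np.linalg.eigh(N_laser)
    return V[:, w < tol * w.max()]

# Usage sketch: with B = laser_noise_free_basis(N_laser), project the data d and
# the remaining (non-laser) noise covariance N_other onto that subspace,
#   d_tdi = B.T @ d,   N_tdi = B.T @ N_other @ B,
# and build the Gaussian likelihood from (d_tdi, N_tdi); these combinations play
# the role of the TDI variables.
```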
Eigenvector decomposition of full-spectrum x-ray computed tomography.
Gonzales, Brian J; Lalush, David S
2012-03-07
Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system which incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral response of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of FBP onto basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filter-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
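In symbols, the penalized least-squares estimate described above can be written as follows, with x_dec the first-pass decomposed images, Σ their estimated variance-covariance matrix (computed from the measured decomposition lookup table), R a smoothness penalty, and β its weight. This is a schematic rendering of the stated formulation rather than the authors' exact notation.

```latex
\begin{equation}
  \hat{x} \;=\; \arg\min_{x}\;
  \left(x - x_{\mathrm{dec}}\right)^{\mathsf T} \Sigma^{-1} \left(x - x_{\mathrm{dec}}\right)
  \;+\; \beta\, R(x)
\end{equation}
```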
NASA Astrophysics Data System (ADS)
Carrière, Simon D.; Chalikakis, Konstantinos; Danquigny, Charles; Davi, Hendrik; Mazzilli, Naomi; Ollivier, Chloé; Emblanch, Christophe
2016-11-01
Some portions of the porous rock matrix in the karst unsaturated zone (UZ) can contain large volumes of water and play a major role in water flow regulation. The essential results of a local-scale study conducted in 2011 and 2012 above the Low Noise Underground Laboratory (LSBB - Laboratoire Souterrain à Bas Bruit) at Rustrel, southeastern France, are presented. Previous research revealed the geological structure and water-related features of the study site and illustrated the feasibility of specific hydrogeophysical measurements. In this study, the focus is on hydrodynamics at the seasonal and event timescales. Magnetic resonance sounding (MRS) measured a high water content (more than 10 %) in a large volume of rock. This large volume of water cannot be stored in fractures and conduits within the UZ. MRS was also used to measure the seasonal variation of water stored in the karst UZ. A process-based model was developed to simulate the effect of vegetation on groundwater recharge dynamics. In addition, electrical resistivity tomography (ERT) monitoring was used to assess preferential water pathways during a rain event. This study demonstrates the major influence of water flow within the porous rock matrix on the UZ hydrogeological functioning at both the local (LSBB) and regional (Fontaine de Vaucluse) scales. By taking into account the role of the porous matrix in water flow regulation, these findings may significantly improve karst groundwater hydrodynamic modelling, exploitation, and sustainable management.
Ren, Xinxin; Liu, Jia; Zhang, Chengsen; Luo, Hai
2013-03-15
With the rapid development of ambient mass spectrometry, the hybrid laser-based ambient ionization methods which can generate multiply charged ions of large biomolecules and also characterize small molecules with good signal-to-noise in both positive and negative ion modes are of particular interest. An ambient ionization method termed high-voltage-assisted laser desorption ionization (HALDI) is developed, in which a 1064 nm laser is used to desorb various liquid samples from the sample target biased at a high potential without the need for an organic matrix. The pre-charged liquid samples are desorbed by the laser to form small charged droplets which may undergo an electrospray-like ionization process to produce multiply charged ions of large biomolecules. Various samples including proteins, oligonucleotides (ODNs), drugs, whole milk and chicken eggs have been analyzed by HALDI-MS in both positive and negative ion mode with little or no sample preparation. In addition, HALDI can generate intense signals with better signal-to-noise in negative ion mode than laser desorption spray post-ionization (LDSPI) from the same samples, such as ODNs and some carboxylic-group-containing small drug molecules. HALDI-MS can directly analyze a variety of liquid samples including proteins, ODNs, pharmaceuticals and biological fluids in both positive and negative ion mode without the use of an organic matrix. This technique may be further developed into a useful tool for rapid analysis in many different fields such as pharmaceutical, food, and biological sciences. Copyright © 2013 John Wiley & Sons, Ltd.
The subjective importance of noise spectral content
NASA Astrophysics Data System (ADS)
Baxter, Donald; Phillips, Jonathan; Denman, Hugh
2014-01-01
This paper presents secondary Standard Quality Scale (SQS2) rankings in overall quality JNDs for a subjective analysis of the 3 axes of noise, amplitude, spectral content, and noise type, based on the ISO 20462 softcopy ruler protocol. For the initial pilot study, a Python noise simulation model was created to generate the matrix of noise masks for the softcopy ruler base images with different levels of noise, different low pass filter noise bandwidths and different band pass filter center frequencies, and 3 different types of noise: luma only, chroma only, and luma and chroma combined. Based on the lessons learned, the full subjective experiment, involving 27 observers from Google, NVIDIA and STMicroelectronics was modified to incorporate a wider set of base image scenes, and the removal of band pass filtered noise masks to ease observer fatigue. Good correlation was observed with the Aptina subjective noise study. The absence of tone mapping in the noise simulation model visibly reduced the contrast at high levels of noise, due to the clipping of the high levels of noise near black and white. Under the 34-inch viewing distance, no significant difference was found between the luma only noise masks and the combined luma and chroma noise masks. This was not the intuitive expectation. Two of the base images with large uniform areas, `restaurant' and `no parking', were found to be consistently more sensitive to noise than the texture rich scenes. Two key conclusions are (1) there are fundamentally different sensitivities to noise on a flat patch versus noise in real images and (2) magnification of an image accentuates visual noise in a way that is non-representative of typical noise reduction algorithms generating the same output frequency. Analysis of our experimental noise masks applied to a synthetic Macbeth ColorChecker Chart confirmed the color-dependent nature of the visibility of luma and chroma noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wronski, M.; Zhao, W.; Tanioka, K.
Purpose: The authors are investigating the feasibility of a new type of solid-state x-ray imaging sensor with programmable avalanche gain: scintillator high-gain avalanche rushing photoconductor active matrix flat panel imager (SHARP-AMFPI). The purpose of the present work is to investigate the inherent x-ray detection properties of SHARP and demonstrate its wide dynamic range through programmable gain. Methods: A distributed resistive layer (DRL) was developed to maintain stable avalanche gain operation in a solid-state HARP. The signal and noise properties of the HARP-DRL for optical photon detection were investigated as a function of avalanche gain both theoretically and experimentally, and the results were compared with the HARP tube (with electron beam readout) used in previous investigations of zero spatial frequency performance of SHARP. For this new investigation, a solid-state SHARP x-ray image sensor was formed by direct optical coupling of the HARP-DRL with a structured cesium iodide (CsI) scintillator. The x-ray sensitivity of this sensor was measured as a function of avalanche gain and the results were compared with the sensitivity of HARP-DRL measured optically. The dynamic range of HARP-DRL with variable avalanche gain was investigated for the entire exposure range encountered in radiography/fluoroscopy (R/F) applications. Results: The signal from HARP-DRL as a function of electric field showed stable avalanche gain, and the noise associated with the avalanche process agrees well with theory and previous measurements from a HARP tube. This result indicates that when coupled with CsI for x-ray detection, the additional noise associated with avalanche gain in HARP-DRL is negligible. The x-ray sensitivity measurements using the SHARP sensor produced identical avalanche gain dependence on electric field as the optical measurements with HARP-DRL. Adjusting the avalanche multiplication gain in HARP-DRL enabled a very wide dynamic range which encompassed all clinically relevant medical x-ray exposures. Conclusions: This work demonstrates that the HARP-DRL sensor enables the practical implementation of a SHARP solid-state x-ray sensor capable of quantum noise limited operation throughout the entire range of clinically relevant x-ray exposures. This is an important step toward the realization of a SHARP-AMFPI x-ray flat-panel imager.
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
Background recovery via motion-based robust principal component analysis with matrix factorization
NASA Astrophysics Data System (ADS)
Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping
2018-03-01
Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
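For reference, the classical RPCA decomposition (low-rank plus sparse via nuclear and l1 norms) that FM-RPCA builds on can be sketched with a basic augmented-Lagrangian/ADMM loop. This baseline ignores both the motion weighting and the matrix-factorization speed-up that the paper introduces, and the parameter defaults below are common heuristics rather than the paper's settings.

```python
import numpy as np

def rpca_admm(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose D ~ L + S with L low rank (nuclear norm) and S sparse (l1 norm)
    using a basic principal-component-pursuit style ADMM iteration."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    normD = np.linalg.norm(D, "fro")
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding of (D - S + Y/mu)
        U, s, Vh = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        s = np.maximum(s - 1.0 / mu, 0.0)
        L = (U * s) @ Vh
        # Sparse update: entrywise soft thresholding of (D - L + Y/mu)
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual update and convergence check
        R = D - L - S
        Y += mu * R
        if np.linalg.norm(R, "fro") < tol * normD:
            break
    return L, S   # L: background (low rank), S: moving foreground (sparse)
```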
Rader, T
2015-02-01
Cochlear implantation with the aim of hearing preservation for combined electric-acoustic stimulation (EAS) is the therapy of choice for patients with residual low-frequency hearing. Preserved residual acoustic hearing has a positive effect on speech intelligibility in difficult noise conditions. The goal of this study was to assess speech reception thresholds in various complex noise conditions for patients with EAS in comparison with patients using bilateral cochlear implants (CI). Speech perception in noise was measured for bilateral CI and EAS patient groups. A total of 22 listeners with normal hearing served as a control group. Speech reception thresholds (SRT) were measured using a closed-set sentence matrix test. Speech was presented with a single source in frontal position; noise was presented in frontal position or in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources. Modulated speech-simulating noise and pseudocontinuous noise served respectively as interference signal with different temporal characteristics. The average SRTs in the EAS group were significantly better in all test conditions than those of the group with bilateral CI. Both user groups showed significant improvement in the MSNF condition compared with the frontal noise condition as a result of bilateral interaction. The normal-hearing control group was able to use short temporal gaps in modulated noise to improve speech perception in noise (gap listening). This effect was absent in both implanted user groups. Patients with combined EAS in one ear and a hearing aid in the contralateral ear show significantly improved speech perception in complex noise conditions compared with bilateral CI recipients.
NASA Technical Reports Server (NTRS)
Stanley, William D.
1994-01-01
An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
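A minimal numerical sketch of the Allan variance computation described above is given below. The block sizes, sampling rate and the slope-to-noise-type conventions noted in the comments are common assumptions from frequency-stability analysis, not details taken from the study.

```python
import numpy as np

def allan_variance(x, fs, m_list=None):
    """Non-overlapping Allan variance of a sampled record x.

    sigma_A^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, with tau = m / fs.
    For a rate-like signal, slopes of sigma_A^2 vs tau of -1, 0 and +1 are the
    usual signatures of white, flicker and random-walk noise (assumed convention).
    """
    n = len(x)
    if m_list is None:
        m_list = np.unique(np.logspace(0, np.log10(n // 3), 30).astype(int))
    taus, avars = [], []
    for m in m_list:
        k = n // m
        if k < 2:
            break
        ybar = x[:k * m].reshape(k, m).mean(axis=1)   # block averages of length m
        avars.append(0.5 * np.mean(np.diff(ybar) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

fs = 100.0
x = np.random.randn(100_000)                          # white-noise test record
taus, avars = allan_variance(x, fs)                   # expect roughly 1/tau behavior
```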
Chan, Woei-Leong; Hsiao, Fei-Bin
2011-01-01
This paper presents a complete procedure for sensor compatibility correction of a fixed-wing Unmanned Air Vehicle (UAV). The sensors consist of a differential air pressure transducer for airspeed measurement, two airdata vanes installed on an airdata probe for angle of attack (AoA) and angle of sideslip (AoS) measurement, and an Attitude and Heading Reference System (AHRS) that provides attitude angles, angular rates, and acceleration. The procedure is mainly based on a two pass algorithm called the Rauch-Tung-Striebel (RTS) smoother, which consists of a forward pass Extended Kalman Filter (EKF) and a backward recursion smoother. On top of that, this paper proposes the implementation of the Wiener Type Filter prior to the RTS in order to avoid the complicated process noise covariance matrix estimation. Furthermore, an easy to implement airdata measurement noise variance estimation method is introduced. The method estimates the airdata and subsequently the noise variances using the ground speed and ascent rate provided by the Global Positioning System (GPS). It incorporates the idea of data regionality by assuming that some sort of statistical relation exists between nearby data points. Root mean square deviation (RMSD) is being employed to justify the sensor compatibility. The result shows that the presented procedure is easy to implement and it improves the UAV sensor data compatibility significantly. PMID:22163819
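The forward-filter/backward-smoother structure can be sketched for a linear toy model as follows. The paper's procedure uses an extended Kalman filter on the nonlinear airdata and attitude model plus a Wiener-type prefilter; the constant-velocity model, matrices and noise levels below are illustrative assumptions only.

```python
import numpy as np

def kalman_rts(z, F, H, Q, R, x0, P0):
    """Forward Kalman filter followed by a Rauch-Tung-Striebel backward pass."""
    n, dim = len(z), len(x0)
    xf = np.zeros((n, dim)); Pf = np.zeros((n, dim, dim))
    xp = np.zeros((n, dim)); Pp = np.zeros((n, dim, dim))
    x, P = x0, P0
    for k in range(n):
        x = F @ x; P = F @ P @ F.T + Q                 # predict
        xp[k], Pp[k] = x, P
        S = H @ P @ H.T + R                            # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z[k] - H @ x)
        P = (np.eye(dim) - K @ H) @ P
        xf[k], Pf[k] = x, P
    xs, Ps = xf.copy(), Pf.copy()                      # backward smoothing
    for k in range(n - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs, Ps

# constant-velocity toy model: state [position, velocity], position measured
dt = 0.1
F = np.array([[1, dt], [0, 1]]); H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2);            R = np.array([[0.5]])
truth = np.cumsum(0.1 * np.ones(200))
z = (truth + 0.7 * np.random.randn(200))[:, None]
xs, Ps = kalman_rts(z, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
```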
Dynamic visual noise reduces confidence in short-term memory for visual information.
Kemps, Eva; Andrade, Jackie
2012-05-01
Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana
2017-09-01
The problem of analyzing noise-induced extinction in multidimensional population systems is considered. To investigate the conditions under which random disturbances cause extinction, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of the eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction as a function of the parameters of the considered tritrophic system is carried out.
Occupational Noise Reduction in CNC Striping Process
NASA Astrophysics Data System (ADS)
Mahmad Khairai, Kamarulzaman; Shamime Salleh, Nurul; Razlan Yusoff, Ahmad
2018-03-01
Hearing loss from high-level occupational noise exposure is a common occupational hazard. In the CNC striping process, employees exposed to high noise levels over an 8-hour shift are at risk of hearing loss as well as physical and psychological stress that reduce productivity. In this paper, the high noise levels of the CNC striping process are measured and reduced to within the permissible noise exposure. Noise was measured under two conditions: first with all machines shut down, and second with all CNC machines in operation. For both conditions, noise exposures were measured to evaluate the noise problems and their sources. After improvements were made, the noise exposures were measured again to evaluate the effectiveness of the reduction. The initial average noise level under the first condition was 95.797 dB(A). After a leaking pneumatic system was repaired, the noise was reduced to 55.517 dB(A). The average noise level under the second condition was 109.340 dB(A). After six machines were gathered in one area and that area was covered with a plastic curtain, the noise was reduced to 95.209 dB(A). In conclusion, the noise exposure from the CNC striping machines is high, exceeds the permissible noise exposure, and can be reduced to acceptable levels. The reduction of noise levels in CNC striping processes enhanced productivity in the industry.
Dynamic visual noise affects visual short-term memory for surface color, but not spatial location.
Dent, Kevin
2010-01-01
In two experiments participants retained a single color or a set of four spatial locations in memory. During a 5 s retention interval participants viewed either flickering dynamic visual noise or a static matrix pattern. In Experiment 1 memory was assessed using a recognition procedure, in which participants indicated if a particular test stimulus matched the memorized stimulus or not. In Experiment 2 participants attempted to either reproduce the locations or they picked the color from a whole range of possibilities. Both experiments revealed effects of dynamic visual noise (DVN) on memory for colors but not for locations. The implications of the results for theories of working memory and the methodological prospects for DVN as an experimental tool are discussed.
Longitudinal Relaxation of Ferromagnetic Grains
NASA Astrophysics Data System (ADS)
Würger, Alois
1998-07-01
We study the activated longitudinal dynamics of a small single-domain magnet with uniaxial anisotropy, coupled to quantum noise. The smallest finite eigenvalue λ1 = γ0 e^(-E_B/k_B T) of the relaxation matrix is evaluated in a controlled approximation. For white noise we find γ0 ∝ T^(-1) at moderate temperatures and γ0 = const at very low T. Coupling to elastic waves leads to a prefactor that is linear in T or constant, depending on temperature. At very low T, the discreteness of the energy spectrum is crucial.
Lee, C Y; Lee, D E; Hong, Y K; Shim, J H; Jeong, C K; Joo, J; Zang, D S; Shim, M G; Lee, J J; Cha, J K; Yang, H G
2003-04-01
We have developed an electromagnetic (EM) wave propagation theory through a single layer and multiple layers in the near-field and far-field regions, and have constructed a matrix formalism in terms of the boundary conditions of the EM waves. From the shielding efficiency (SE) against EM radiation in the near-field region calculated by using the matrix formalism, we propose that the effect of multiple layers yields enhanced shielding capability compared to a single layer with the same total thickness in conducting layers as the multiple layers. We compare the intensities of an EM wave propagating through glass coated with conducting indium tin oxide (ITO) on one side and on both sides, applying it to the electromagnetic interference (EMI) shielding filter in a flat panel display such as a plasma display panel (PDP). From the measured intensities of EMI noise generated by a PDP loaded with ITO coated glass samples, the two-side coated glass shows a lower intensity of EMI noise compared to the one-side coated glass. The result confirms the enhancement of the SE due to the effect of multiple layers, as expected in the matrix formalism of EM wave propagation in the near-field region. In the far-field region, the two-side coated glass with ITO in multiple layers has a higher SE than the one-side coated glass with ITO, when the total thickness of ITO in both cases is the same.
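A plane-wave, normal-incidence transfer-matrix (ABCD) estimate of multilayer shielding can be sketched as below; it is a textbook far-field model, not the authors' near-field matrix formalism. The ITO conductivity, permittivities and thicknesses are assumed values for illustration only.

```python
import numpy as np

MU0, EPS0 = 4e-7 * np.pi, 8.854e-12
ETA0 = np.sqrt(MU0 / EPS0)                          # free-space wave impedance

def layer_abcd(freq, d, sigma, eps_r=1.0):
    """Normal-incidence ABCD matrix of one homogeneous layer of thickness d."""
    w = 2 * np.pi * freq
    eps_c = EPS0 * eps_r - 1j * sigma / w           # complex permittivity
    k = w * np.sqrt(MU0 * eps_c)                    # complex wavenumber
    eta = np.sqrt(MU0 / eps_c)                      # layer impedance
    return np.array([[np.cos(k * d), 1j * eta * np.sin(k * d)],
                     [1j * np.sin(k * d) / eta, np.cos(k * d)]])

def shielding_db(freq, layers):
    """Far-field SE (dB) of a stack of (thickness, conductivity, eps_r) layers
    between free-space half-spaces."""
    M = np.eye(2, dtype=complex)
    for d, sigma, eps_r in layers:
        M = M @ layer_abcd(freq, d, sigma, eps_r)
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    t = 2.0 / (A + B / ETA0 + C * ETA0 + D)         # field transmission coefficient
    return -20 * np.log10(abs(t))

sigma_ito = 1e5                                      # assumed ITO conductivity, S/m
one_side = [(200e-9, sigma_ito, 9.0), (3e-3, 0.0, 5.5)]
two_side = [(100e-9, sigma_ito, 9.0), (3e-3, 0.0, 5.5), (100e-9, sigma_ito, 9.0)]
for f in (100e6, 1e9):
    print(f, shielding_db(f, one_side), shielding_db(f, two_side))
```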
Prentice, Boone M; Chumbley, Chad W; Hachey, Brian C; Norris, Jeremy L; Caprioli, Richard M
2016-10-04
Quantitative matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) approaches have historically suffered from poor accuracy and precision mainly due to the nonuniform distribution of matrix and analyte across the target surface, matrix interferences, and ionization suppression. Tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity as well as improve signal-to-noise ratios by eliminating interferences from chemical noise, alleviating some concerns about dynamic range. However, conventional MALDI TOF/TOF modalities typically only scan for a single MS/MS event per laser shot, and multiplex assays require sequential analyses. We describe here new methodology that allows for multiple TOF/TOF fragmentation events to be performed in a single laser shot. This technology allows the reference of analyte intensity to that of the internal standard in each laser shot, even when the analyte and internal standard are quite disparate in m/z, thereby improving quantification while maintaining chemical specificity and duty cycle. In the quantitative analysis of the drug enalapril in pooled human plasma with ramipril as an internal standard, a greater than 4-fold improvement in relative standard deviation (<10%) was observed as well as improved coefficients of determination (R²) and accuracy (>85% quality controls). Using this approach we have also performed simultaneous quantitative analysis of three drugs (promethazine, enalapril, and verapamil) using deuterated analogues of these drugs as internal standards.
Beyond the limits of present active matrix flat-panel imagers (AMFPIs) for diagnostic radiology
NASA Astrophysics Data System (ADS)
Antonuk, Larry E.; El-Mohri, Youcef; Jee, Kyung-Wook; Maolinbay, Manat; Nassif, Samer C.; Rong, Xiujiang; Siewerdsen, Jeffrey H.; Zhao, Qihua; Street, Robert A.
1999-05-01
A theoretical cascaded systems analysis of the performance limits of x-ray imagers based on thin-film, active matrix flat-panel technology is presented. This analysis specifically focuses upon an examination of the functional dependence of the detective quantum efficiency (DQE) on exposure. While the DQE of AMFPI systems is relatively high at the large exposure levels associated with radiographic x-ray imaging, there is a significant decline in DQE with decreasing exposure over the medium and lower end of the exposure range associated with fluoroscopic imaging. This fall-off in DQE originates from the relatively large size of the additive noise of AMFPI systems compared to their overall system gain. Therefore, strategies to diminish additive noise and increase system gain should significantly improve performance. Potential strategies for noise reduction include the use of charge compensation lines, while strategies for gain enhancement include continuous photodiodes, pixel amplification structures, or higher gain converters. The effect of the implementation of such strategies is examined for a variety of hypothetical imager configurations. Through the modeling of these configurations, such enhancements are shown to hold the potential of making low frequency DQE response large and essentially independent of exposure while greatly reducing the fall-off in DQE at higher spatial frequencies.
[Object Separation from Medical X-Ray Images Based on ICA].
Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun
2015-03-01
X-ray medical images can reveal diseased tissue in patients and have important reference value for medical diagnosis. To address the problems of noise, poor gray-level rendition and overlapping, aliased organs in traditional X-ray images, this paper proposes a method that combines multi-spectrum X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the main proportion of each organ in the images, the aliased thickness matrix of each pixel is isolated. Finally, independent component analysis obtains a convergence matrix to reconstruct the target object using blind separation theory. In the ICA algorithm, it was found that when the number of convergence iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the [25, 45] interval, the target images have high contrast and little distortion. The three-dimensional plot of peak signal-to-noise ratio (PSNR) shows that the number of convergence iterations and the amplitude have a strong influence on image quality. The contrast and edge information of the experimental images are best with 85 convergence iterations and an amplitude of 35 in the ICA algorithm.
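The blind-separation step can be illustrated with scikit-learn's FastICA; the synthetic "organ" maps and mixing matrix below are illustrative stand-ins, and the paper's sparse-code-shrinkage de-noising and thickness-matrix estimation are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in for multi-spectrum X-ray projections: each row of X is one
# energy channel, modeled as a different mixture of two "organ" maps.
rng = np.random.default_rng(0)
organ_a = rng.random((64, 64)); organ_b = rng.random((64, 64))
mixing = np.array([[1.0, 0.4], [0.6, 1.1], [0.3, 0.9]])     # 3 spectra x 2 organs
X = mixing @ np.vstack([organ_a.ravel(), organ_b.ravel()])
X += 0.01 * rng.standard_normal(X.shape)                    # acquisition noise

ica = FastICA(n_components=2, max_iter=500, random_state=0)
sources = ica.fit_transform(X.T).T                          # separated components
recovered = [s.reshape(64, 64) for s in sources]            # candidate "organ" images
```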
NASA Astrophysics Data System (ADS)
Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.
2015-05-01
ePix10k is a variant of a novel class of integrating pixel ASIC architectures optimized for the processing of signals in second generation LINAC Coherent Light Source (LCLS) X-Ray cameras. The ASIC is optimized for high dynamic range applications requiring high spatial resolution and fast frame rates. ePix ASICs are based on a common platform composed of a random access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog to digital converters per column. The ePix10k variant has 100 μm × 100 μm pixels arranged in a 176 × 192 matrix, a resolution of 140 e- r.m.s. and a signal range of 3.5 pC (10k photons at 8 keV). In its final version it will be able to sustain a frame rate of 2 kHz. A first prototype has been fabricated and characterized. Performance in terms of noise, linearity, uniformity, and cross-talk, together with preliminary measurements with bump bonded sensors, are reported here.
Luce, Robert; Hildebrandt, Peter; Kuhlmann, Uwe; Liesen, Jörg
2016-09-01
The key challenge of time-resolved Raman spectroscopy is the identification of the constituent species and the analysis of the kinetics of the underlying reaction network. In this work we present an integral approach that allows for determining both the component spectra and the rate constants simultaneously from a series of vibrational spectra. It is based on an algorithm for nonnegative matrix factorization that is applied to the experimental data set following a few pre-processing steps. As a prerequisite for physically unambiguous solutions, each component spectrum must include one vibrational band that does not significantly interfere with the vibrational bands of other species. The approach is applied to synthetic "experimental" spectra derived from model systems comprising a set of species with component spectra differing with respect to their degree of spectral interferences and signal-to-noise ratios. In each case, the species involved are connected via monomolecular reaction pathways. The potential and limitations of the approach for recovering the respective rate constants and component spectra are discussed. © The Author(s) 2016.
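The factorization step can be sketched with scikit-learn's NMF, as below. The two-component A-to-B kinetics, band positions and noise level are toy assumptions; the paper's pre-processing and the subsequent fit of rate constants to the concentration profiles are not shown.

```python
import numpy as np
from sklearn.decomposition import NMF

# D: rows = time-resolved Raman spectra, columns = wavenumber channels.
# NMF factors D ~ C @ S with C >= 0 (concentration profiles over time)
# and S >= 0 (component spectra).
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 60)
conc_true = np.column_stack([np.exp(-0.5 * t), 1 - np.exp(-0.5 * t)])   # A -> B
wn = np.linspace(0, 1, 300)
spec_true = np.vstack([np.exp(-((wn - 0.3) / 0.02) ** 2),
                       np.exp(-((wn - 0.7) / 0.02) ** 2)])              # isolated bands
D = conc_true @ spec_true + 0.01 * rng.random((60, 300))

model = NMF(n_components=2, init="nndsvda", max_iter=2000)
C = model.fit_transform(D)          # kinetic (concentration) profiles
S = model.components_               # recovered component spectra
```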
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, because TV favors a piecewise-constant solution, flat regions of the processed image are prone to "staircase effects" and the amplitude of edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the local spatial information of the image. In this paper, we propose a novel scatter-matrix eigenvalue-based TV (SMETV) regularization with an image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve edge and detail information. Moreover, it becomes more robust to changes of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
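For contrast with the spatially adaptive SMETV scheme, a plain spatially uniform TV denoising baseline can be run with scikit-image as below; the phantom, noise level and weights are illustrative, and the difference-eigenvalue edge indicator of the paper is not implemented.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0          # piecewise-constant phantom
noisy = img + 0.2 * rng.standard_normal(img.shape)

denoised_weak = denoise_tv_chambolle(noisy, weight=0.05)     # keeps more noise/texture
denoised_strong = denoise_tv_chambolle(noisy, weight=0.30)   # smoother, risks staircase
```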
High-resolution CSR GRACE RL05 mascons
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2016-10-01
The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections, and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical model, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps are described in succession, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of this imaging process is first to create a 3D grid by meshing the volume potentially inspected by the sensor. As a result, a given number of elemental volumes (called voxels) are obtained. Secondly, the DPSM model is used to compute an image for every case in which only one of the voxels has a conductivity different from all the others. The assumption is that a real defect can be represented by a superposition of elemental voxels; the resulting accuracy naturally depends on the density of the spatial sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current at variable frequency. The simulation is therefore performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to the experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects. In order to validate the proposed inversion process, we feed it with computed images of various acquisition configurations. Additive noise was added to the images so that they are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may only take two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and the pixels showing non-physical values.
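The inversion of an observation matrix followed by a binary threshold can be sketched generically as below; this Tikhonov least-squares stand-in, the matrix sizes, noise level and threshold value are all illustrative assumptions, not the authors' DPSM-based inversion scheme.

```python
import numpy as np

def invert_voxel_map(G, y, lam=1e-2, threshold=0.5):
    """Recover a voxel conductivity-deviation map from EC-style observations.

    y ~ G x, where column j of G holds the modeled probe response when only
    voxel j is perturbed and y stacks the measured responses over all
    frequencies/orientations.  Tikhonov least squares plus a final binary
    threshold stands in for the paper's inversion and post-processing.
    """
    n = G.shape[1]
    x = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ y)
    return (x > threshold).astype(float), x

rng = np.random.default_rng(3)
n_meas, n_vox = 400, 150
G = rng.standard_normal((n_meas, n_vox))
x_true = np.zeros(n_vox); x_true[40:48] = 1.0        # a notch-type defect
y = G @ x_true + 0.05 * rng.standard_normal(n_meas)  # additive noise
defect_mask, x_hat = invert_voxel_map(G, y)
```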
NASA Astrophysics Data System (ADS)
Shokravi, H.; Bakhary, NH
2017-11-01
Subspace System Identification (SSI) is considered one of the most reliable tools for identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) or Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring are using SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly with respect to accuracy, noise resistance, and time complexity, are very limited. In this study, the accuracy, noise robustness, and time efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics of pole analysis, fit values and elapsed time are used in the assessment process. A numerical model of a mass-spring-dashpot system and operational data are used in this research paper. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance for pole analysis and time efficiency. The superior performance of the UPC algorithm in elapsed time is attributed to its use of unit weight matrices. The obtained results demonstrate that the dimensionality reduction in CVA and PC does not enhance time efficiency but yields improved modal identification for PC.
Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B
2016-05-21
The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
LMI Based Robust Blood Glucose Regulation in Type-1 Diabetes Patient with Daily Multi-meal Ingestion
NASA Astrophysics Data System (ADS)
Mandal, S.; Bhattacharjee, A.; Sutradhar, A.
2014-04-01
This paper illustrates the design of a robust output-feedback H∞ controller for the nonlinear glucose-insulin (GI) process in a type-1 diabetes patient to deliver insulin through an intravenous infusion device. The H∞ design specifications have been realized using the concept of linear matrix inequality (LMI), and the LMI approach has been used to quadratically stabilize the GI process via the output-feedback H∞ controller. The controller has been designed on the basis of a full 19th-order linearized state-space model generated from the modified Sorensen nonlinear model of the GI process. The resulting controller has been tested with the nonlinear patient model (the modified Sorensen model) in the presence of patient parameter variations and other uncertainty conditions. The performance of the controller was assessed in terms of its ability to track the normoglycemic set point of 81 mg/dl under a typical multi-meal disturbance throughout a day, yielding robust performance and noise rejection.
Compact hybrid optoelectrical unit for image processing and recognition
NASA Astrophysics Data System (ADS)
Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu
1998-07-01
In this paper a compact hybrid optoelectrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4 inch active matrix TFT liquid crystal display panel that serves as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: one is digital filtering; the other is object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit to identify the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize absolute-difference pixel matching simply on a correlator structure. The system has good fault tolerance against rotational distortion, Gaussian noise disturbance and information loss. The experiments are given at the end of this paper.
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
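The core penalized weighted least-squares idea, with an inverse variance weight on the data-fidelity term and a smoothness penalty, can be sketched in a heavily simplified form as below. The diagonal weight, quadratic penalty, phantom and noise levels are toy assumptions; the published algorithm uses the full decomposed-image variance-covariance and an edge-aware regularizer.

```python
import numpy as np

def cov_weighted_denoise(x0, cov_inv, lam=1.0, n_iter=200, step=None):
    """Minimize (x - x0)^T W (x - x0) + lam * ||grad x||^2 by gradient descent."""
    n = int(np.sqrt(x0.size))
    def grad_penalty(x):
        img = x.reshape(n, n)
        gx = np.diff(img, axis=0, append=img[-1:, :])     # forward differences
        gy = np.diff(img, axis=1, append=img[:, -1:])
        div = -(gx - np.roll(gx, 1, axis=0)) - (gy - np.roll(gy, 1, axis=1))
        return 2 * div.ravel()                            # gradient of ||grad x||^2
    step = step or 0.5 / (cov_inv.max() + 8 * lam)        # roughly 1/Lipschitz
    x = x0.copy()
    for _ in range(n_iter):
        g = 2 * cov_inv * (x - x0) + lam * grad_penalty(x)
        x -= step * g
    return x.reshape(n, n)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
sigma = 0.3 * np.ones_like(clean)                         # per-pixel noise std (toy)
noisy = clean + sigma * rng.standard_normal(clean.shape)
den = cov_weighted_denoise(noisy.ravel(), (1.0 / sigma**2).ravel(), lam=0.5)
```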
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
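The preprocessing step of forming a normalized sample covariance matrix and flattening it into a feature vector can be sketched as follows; array size, snapshot count and the upper-triangle flattening are illustrative assumptions about the feature construction.

```python
import numpy as np

def normalized_scm(snapshots):
    """Normalized sample covariance matrix features from array pressure snapshots.

    snapshots: complex array of shape (n_sensors, n_snapshots).  Each snapshot is
    normalized to unit 2-norm before averaging, so the SCM encodes relative
    phase/amplitude structure rather than absolute source level; real and
    imaginary parts of the upper triangle become the classifier input vector.
    """
    p = snapshots / np.linalg.norm(snapshots, axis=0, keepdims=True)
    C = (p @ p.conj().T) / snapshots.shape[1]
    iu = np.triu_indices(C.shape[0])
    return np.concatenate([C[iu].real, C[iu].imag])

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 64)) + 1j * rng.standard_normal((16, 64))
features = normalized_scm(x)       # input vector for an FNN/SVM/RF range estimator
```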
NASA Astrophysics Data System (ADS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.
2006-06-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first blockrow needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
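The MLEM update used as the reference algorithm above takes a simple form on a generic system matrix; the sketch below is that generic iteration with random toy data, not the simulated detector or strip-basis matrices of the study, and the final backprojection onto square pixels is omitted.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM iterations: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image (A^T 1)
    for _ in range(n_iter):
        proj = A @ x
        proj[proj <= 0] = 1e-12                      # guard against division by zero
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((500, 100))                           # toy system matrix
x_true = rng.random(100)
y = rng.poisson(A @ x_true)                          # Poisson count data
x_hat = mlem(A, y)
```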
Accelerated 2D magnetic resonance spectroscopy of single spins using matrix completion
NASA Astrophysics Data System (ADS)
Scheuer, Jochen; Stark, Alexander; Kost, Matthias; Plenio, Martin B.; Naydenov, Boris; Jelezko, Fedor
2015-12-01
Two dimensional nuclear magnetic resonance (NMR) spectroscopy is one of the major tools for analysing the chemical structure of organic molecules and proteins. Despite its power, this technique requires long measurement times, which, particularly in the recently emerging diamond based single molecule NMR, limits its application to stable samples. Here we demonstrate a method which allows to obtain the spectrum by collecting only a small fraction of the experimental data. Our method is based on matrix completion which can recover the full spectral information from randomly sampled data points. We confirm experimentally the applicability of this technique by performing two dimensional electron spin echo envelope modulation (ESEEM) experiments on a two spin system consisting of a single nitrogen vacancy (NV) centre in diamond coupled to a single 13C nuclear spin. The signal to noise ratio of the recovered 2D spectrum is compared to the Fourier transform of randomly subsampled data, where we observe a strong suppression of the noise when the matrix completion algorithm is applied. We show that the peaks in the spectrum can be obtained with only 10% of the total number of the data points. We believe that our results reported here can find an application in all types of two dimensional spectroscopy, as long as the measured matrices have a low rank.
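A generic singular value thresholding loop illustrates how a low-rank 2D spectrum can be filled in from a small fraction of sampled points; this Cai-Candes-Shen style recovery, the rank-1 toy matrix and the 10% sampling mask are stand-ins, not necessarily the algorithm the authors used.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, step=None, n_iter=300):
    """Matrix completion by singular value thresholding (SVT).

    M_obs holds the sampled entries (zeros elsewhere); mask is 1 where sampled.
    """
    tau = tau or 5.0 * np.sqrt(M_obs.size)
    step = step or 1.2 * mask.size / max(mask.sum(), 1.0)   # delta ~ 1.2 / sampling rate
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt              # shrink singular values
        Y += step * mask * (M_obs - X)                       # update on observed set only
    return X

rng = np.random.default_rng(0)
true = np.outer(np.sin(np.linspace(0, 6, 128)), np.cos(np.linspace(0, 6, 128)))  # rank 1
mask = (rng.random(true.shape) < 0.10).astype(float)         # keep ~10% of the points
recovered = svt_complete(true * mask, mask)
```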
NASA Astrophysics Data System (ADS)
Wittman, David M.; Benson, Bryant
2018-06-01
Weak lensing analyses use the image---the intensity field---of a distant galaxy to infer gravitational effects on that line of sight. What if we analyze the velocity field instead? We show that lensing imprints much more information onto a highly ordered velocity field, such as that of a rotating disk galaxy, than onto an intensity field. This is because shuffling intensity pixels yields a post-lensed image quite similar to an unlensed galaxy with a different orientation, a problem known as "shape noise." We show that velocity field analysis can eliminate shape noise and yield much more precise lensing constraints. Furthermore, convergence as well as shear can be constrained using the same target, and there is no need to assume the weak lensing limit of small convergence. We present Fisher matrix forecasts of the precision achievable with this method. Velocity field observations are expensive, so we derive guidelines for choosing suitable targets by exploring how precision varies with source parameters such as inclination angle and redshift. Finally, we present simulations that support our Fisher matrix forecasts.
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
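The same bounds can be checked numerically by building the Fisher matrix from a Jacobian and inverting it, as sketched below; the particular four-parameter Gaussian-plus-background parameterization and the numerical differentiation are assumptions standing in for the paper's analytical expressions.

```python
import numpy as np

def gaussian_line(x, a, x0, w, b):
    """Gaussian emission line (amplitude a, center x0, 1/e half-width w) on a flat background b."""
    return a * np.exp(-((x - x0) / w) ** 2) + b

def crb_sigmas(x, params, noise_sigma, eps=1e-6):
    """Cramer-Rao 1-sigma bounds via a numerical Jacobian: F = J^T J / sigma^2, CRB = F^-1."""
    p = np.asarray(params, float)
    J = np.empty((len(x), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p); dp[j] = eps * max(abs(p[j]), 1.0)
        J[:, j] = (gaussian_line(x, *(p + dp)) - gaussian_line(x, *(p - dp))) / (2 * dp[j])
    F = J.T @ J / noise_sigma ** 2                  # Fisher information matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))       # bounds on (a, x0, w, b)

x = np.linspace(-5, 5, 200)
print(crb_sigmas(x, params=(10.0, 0.0, 1.0, 2.0), noise_sigma=0.5))
```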
Visualization of Au Nanoparticles Buried in a Polymer Matrix by Scanning Thermal Noise Microscopy
Yao, Atsushi; Kobayashi, Kei; Nosaka, Shunta; Kimura, Kuniko; Yamada, Hirofumi
2017-01-01
Several researchers have recently demonstrated visualization of subsurface features with a nanometer-scale resolution using various imaging schemes based on atomic force microscopy. Since all these subsurface imaging techniques require excitation of the oscillation of the cantilever and/or sample surface, it has been difficult to identify a key imaging mechanism. Here we demonstrate visualization of Au nanoparticles buried 300 nm into a polymer matrix by measurement of the thermal noise spectrum of a microcantilever with a tip in contact with the polymer surface. We show that the subsurface Au nanoparticles are detected as the variation in the contact stiffness and damping reflecting the viscoelastic properties of the polymer surface. The variation in the contact stiffness agrees well with the effective stiffness of a simple one-dimensional model, which is consistent with the fact that the maximum depth range of the technique is far beyond the extent of the contact stress field. PMID:28210001
A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis
NASA Technical Reports Server (NTRS)
Lane, John
2009-01-01
A simple 2D M × N matrix involving sample preparation enables the microanalyst to peer below the noise floor of element percentages reported by the SEM/EDX (scanning electron microscopy/energy dispersive x-ray) analysis, thus yielding more meaningful data. Using the example of a 2 × 3 sample set, there are M = 2 concentration levels of the original mix under test: 10 percent ilmenite (90 percent silica) and 20 percent ilmenite (80 percent silica). For each of these M samples, N = 3 separate SEM/EDX samples were drawn. In this test, ilmenite is the element of interest. By plotting the linear trend of the M samples' known concentrations versus the average of the N samples, a much higher resolution of elemental analysis can be performed. The resulting trend also shows how the noise is affecting the data, and at what point (of smaller concentrations) it is impractical to try to extract any further useful data.
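The trend-fitting step can be written out in a few lines; the replicate readings below are invented numbers used only to show the shape of the calculation, not data from the report.

```python
import numpy as np

# M = 2 known mix levels, N = 3 SEM/EDX replicates each (values illustrative).
known_pct = np.array([10.0, 20.0])                  # prepared ilmenite fractions
measured = np.array([[7.8, 8.4, 8.1],               # replicate EDX readings at 10%
                     [16.9, 17.6, 17.2]])           # replicate EDX readings at 20%

means = measured.mean(axis=1)
slope, intercept = np.polyfit(known_pct, means, 1)  # linear trend: reading vs. truth
corrected = (measured - intercept) / slope          # invert the trend for new readings
noise_floor_estimate = measured.std(axis=1, ddof=1) # replicate scatter per level
```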
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano
The scientific objectives of the LISA Technology Package experiment on board the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise-coloring, multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.
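A simplified frequency-domain version of the idea, coloring white noise with a matrix square root of the model cross-spectral matrix at each frequency, is sketched below. The example CSD model is invented, and the direct FFT coloring is only an approximation of the published approach, which fits a recursive Z-domain filter and handles initial transients explicitly.

```python
import numpy as np

def multichannel_noise(csd_func, n_samples, fs, seed=0):
    """Draw a multichannel Gaussian time series with an approximate target one-sided CSD.

    csd_func(f) must return an M x M Hermitian positive semi-definite matrix.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    m = csd_func(freqs[1]).shape[0]
    X = np.zeros((m, freqs.size), dtype=complex)
    for i, f in enumerate(freqs[1:], start=1):
        w, V = np.linalg.eigh(csd_func(f))
        L = V @ np.diag(np.sqrt(np.clip(w, 0, None)))        # matrix square root
        white = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
        X[:, i] = L @ white
    scale = np.sqrt(n_samples * fs / 2.0)                    # match one-sided PSD convention
    return np.fft.irfft(X * scale, n=n_samples, axis=1)

def example_csd(f):
    s1, s2 = 1.0 / f, 2.0 / f                                # two 1/f channels (toy model)
    c = 0.6 * np.sqrt(s1 * s2)                               # partial cross-correlation
    return np.array([[s1, c], [c, s2]])

y = multichannel_noise(example_csd, n_samples=2**14, fs=10.0)
```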
Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey
2017-09-01
Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Industrial noise level study in a wheat processing factory in ilorin, nigeria
NASA Astrophysics Data System (ADS)
Ibrahim, I.; Ajao, K. R.; Aremu, S. A.
2016-05-01
An industrial process such as wheat processing generates significant noise which can cause adverse effects on workers and the general public. This study assessed the noise level at a wheat processing mill in Ilorin, Nigeria. A portable digital sound level meter HD600 manufactured by Extech Inc., USA was used to determine the noise level around various machines, sections and offices in the factory at pre-determined distances. Subjective assessment was also made using a World Health Organization (WHO) standard questionnaire to obtain information regarding noise ratings, the effect of noise on personnel and noise preventive measures. The result of the study shows that the highest noise level, 99.4 dBA, was recorded at a pressure blower. A WHO Class-4 hearing protector is recommended for workers on the shop floor, and room acoustics should be upgraded to absorb some of the sound transmitted to offices.
NASA Astrophysics Data System (ADS)
Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun
2018-05-01
In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by the improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, the contrast, and homogeneity of gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifier, 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy by using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of one single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
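The feature-extraction portion of such a pipeline can be sketched with scikit-image, as below. The exact feature set, filter parameters and the RBFNN/BPNN classifiers themselves are not reproduced; the snippet assumes scikit-image >= 0.19 (graycomatrix/graycoprops spelling), and the random patch is a stand-in for a real defect image.

```python
import numpy as np
from skimage import filters, feature
from skimage.feature import graycomatrix, graycoprops

def defect_features(gray_img):
    """Feature vector for one defect patch: max gray level, GLCM contrast and
    homogeneity, and a Canny edge density."""
    med = filters.median(gray_img)                       # impulse-noise removal
    edges = feature.canny(med / 255.0, sigma=1.5)        # defect-edge map
    glcm = graycomatrix(med.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return np.array([med.max(),
                     graycoprops(glcm, "contrast").mean(),
                     graycoprops(glcm, "homogeneity").mean(),
                     edges.mean()])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in defect image
print(defect_features(patch))
```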
NASA Astrophysics Data System (ADS)
Russo, Giovanni; Shorten, Robert
2018-04-01
This paper is concerned with the study of common noise-induced synchronization phenomena in complex networks of diffusively coupled nonlinear systems. We consider the case where common noise propagation depends on the network state and, as a result, the noise diffusion process at the nodes depends on the state of the network. For such networks, we present an algebraic sufficient condition for the onset of synchronization, which depends on the network topology, the dynamics at the nodes, the coupling strength and the noise diffusion. Our result explicitly shows that certain noise diffusion processes can drive an unsynchronized network towards synchronization. In order to illustrate the effectiveness of our result, we consider two applications: collective decision processes and synchronization of chaotic systems. We explicitly show that, in the former application, a sufficiently large noise can drive a population towards a common decision, while, in the latter, we show how common noise can synchronize a network of Lorenz chaotic systems.
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
Spin relaxation 1/f noise in graphene
NASA Astrophysics Data System (ADS)
Omar, S.; Guimarães, M. H. D.; Kaverzin, A.; van Wees, B. J.; Vera-Marun, I. J.
2017-02-01
We report the first measurement of 1/f type noise associated with electronic spin transport, using single layer graphene as a prototypical material with a large and tunable Hooge parameter. We identify the presence of two contributions to the measured spin-dependent noise: contact polarization noise from the ferromagnetic electrodes, which can be filtered out using the cross-correlation method, and the noise originated from the spin relaxation processes. The noise magnitude for spin and charge transport differs by three orders of magnitude, implying different scattering mechanisms for the 1/f fluctuations in the charge and spin transport processes. A modulation of the spin-dependent noise magnitude by changing the spin relaxation length and time indicates that the spin-flip processes dominate the spin-dependent noise.
Hybrid Wing Body Aircraft Acoustic Test Preparations and Facility Upgrades
NASA Technical Reports Server (NTRS)
Heath, Stephanie L.; Brooks, Thomas F.; Hutcheson, Florence V.; Doty, Michael J.; Haskin, Henry H.; Spalt, Taylor B.; Bahr, Christopher J.; Burley, Casey L.; Bartram, Scott M.; Humphreys, William M.;
2013-01-01
NASA is investigating the potential of acoustic shielding as a means to reduce the noise footprint at airport communities. A subsonic transport aircraft and Langley's 14- by 22-foot Subsonic Wind Tunnel were chosen to test the proposed "low noise" technology. The present experiment studies the basic components of propulsion-airframe shielding in a representative flow regime. To this end, a 5.8-percent scale hybrid wing body model was built with dual state-of-the-art engine noise simulators. The results will provide benchmark shielding data and key hybrid wing body aircraft noise data. The test matrix for the experiment contains both aerodynamic and acoustic test configurations, broadband turbomachinery and hot jet engine noise simulators, and various airframe configurations which include landing gear, cruise and drooped wing leading edges, trailing edge elevons and vertical tail options. To aid in this study, two major facility upgrades have occurred. First, a propane delivery system has been installed to provide the acoustic characteristics with realistic temperature conditions for a hot gas engine; and second, a traversing microphone array and side towers have been added to gain full spectral and directivity noise characteristics.
Scheuermann, James R; Howansky, Adrian; Hansroul, Marc; Léveillé, Sébastien; Tanioka, Kenkichi; Zhao, Wei
2018-02-01
We present the first prototype Scintillator High-Gain Avalanche Rushing Photoconductor Active Matrix Flat Panel Imager (SHARP-AMFPI). This detector includes a layer of avalanche amorphous selenium (a-Se) (HARP) as the photoconductor in an indirect detector to amplify the signal and reduce the effects of electronic noise to obtain quantum noise-limited images for low-dose applications. It is the first time avalanche a-Se has been used in a solid-state imaging device and poses as a possible solution to eliminate the effects of electronic noise, which is crucial for low-dose imaging performance of AMFPI. We successfully deposited a solid-state HARP structure onto a 24 × 30 cm² array of thin-film transistors (TFT array) with a pixel pitch of 85 μm. The HARP layer consists of 16 μm of a-Se with a hole-blocking and electron-blocking layer to prevent charge injection from the high-voltage bias and pixel electrodes, respectively. An electric field (ESe) of up to 105 V μm⁻¹ was applied across the a-Se layer without breakdown. A 150 μm thick structured CsI:Tl scintillator was used to form SHARP-AMFPI. The x-ray imaging performance is characterized using a 30 kVp Mo/Mo beam. We evaluate the spatial resolution, noise power, and detective quantum efficiency at zero frequency of the system with and without avalanche gain. The results are analyzed using a cascaded linear system model (CLSM). An avalanche gain of 76 ± 5 was measured at ESe = 105 V μm⁻¹. We demonstrate that avalanche gain can amplify the signal to overcome electronic noise. As avalanche gain is increased, image quality improves for a constant (0.76 mR) exposure until electronic noise is overcome. Our system is currently limited by the poor optical transparency of our high-voltage electrode and a long integration time, which results in dark current noise. These two effects cause high-spatial-frequency noise to dominate imaging performance. We demonstrate the feasibility of a solid-state HARP x-ray imager and have fabricated the largest active area HARP sensor to date. Procedures to reduce secondary quantum and dark noise are outlined. Future work will improve optical coupling and charge transport, which will allow frequency-dependent DQE and temporal metrics to be obtained. © 2017 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ZHANG, H; Huang, J; Ma, J
2014-06-15
Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
Effects of noise upon human information processing
NASA Technical Reports Server (NTRS)
Cohen, H. H.; Conrad, D. W.; Obrien, J. F.; Pearson, R. G.
1974-01-01
Studies of noise effects upon human information processing are described which investigated whether or not effects of noise upon performance are dependent upon specific characteristics of noise stimulation and their interaction with task conditions. The difficulty of predicting noise effects was emphasized. Arousal theory was considered to have explanatory value in interpreting the findings of all the studies. Performance under noise was found to involve a psychophysiological cost, measured by vasoconstriction response, with the degree of response cost being related to scores on a noise annoyance sensitivity scale. Noise sensitive subjects showed a greater autonomic response under noise stimulation.
Noise in two-color electronic distance meter measurements revisited
Langbein, J.
2004-01-01
Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.
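To make the role of the noise model concrete, the sketch below compares the standard error of an estimated deformation rate under a white-noise-only covariance and under a covariance augmented with a random-walk term, using generalized least squares. The sampling, amplitudes, and the min(t_i, t_j) random-walk form are assumptions for illustration.

```python
# Minimal sketch (assumed model, not the paper's code): how a temporally
# correlated error model changes the standard error of an estimated rate.
# White noise gives C = s_w^2 I; a random walk adds C_ij = s_rw^2 min(t_i, t_j).
import numpy as np

t = np.arange(0.0, 15.0, 0.1)               # ~15 years of frequent EDM data (illustrative)
A = np.column_stack([np.ones_like(t), t])   # intercept + rate design matrix

def rate_sigma(C):
    cov_params = np.linalg.inv(A.T @ np.linalg.solve(C, A))   # GLS parameter covariance
    return np.sqrt(cov_params[1, 1])

s_w, s_rw = 1.0, 0.5                        # mm and mm/sqrt(yr), illustrative values
C_white = s_w**2 * np.eye(t.size)
C_rw = C_white + s_rw**2 * np.minimum.outer(t, t)

print("white-only rate sigma:", rate_sigma(C_white))
print("white + random-walk rate sigma:", rate_sigma(C_rw))
```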
Zhou, Zhenyu; Liu, Wei; Cui, Jiali; Wang, Xunheng; Arias, Diana; Wen, Ying; Bansal, Ravi; Hao, Xuejun; Wang, Zhishun; Peterson, Bradley S; Xu, Dongrong
2011-02-01
Signal variation in diffusion-weighted images (DWIs) is influenced both by thermal noise and by spatially and temporally varying artifacts, such as rigid-body motion and cardiac pulsation. Motion artifacts are particularly prevalent when scanning difficult patient populations, such as human infants. Although some motion during data acquisition can be corrected using image coregistration procedures, frequently individual DWIs are corrupted beyond repair by sudden, large amplitude motion either within or outside of the imaging plane. We propose a novel approach to identify and reject outlier images automatically using local binary patterns (LBP) and 2D partial least squares (2D-PLS) to estimate diffusion tensors robustly. This method uses an enhanced LBP algorithm to extract local texture features from the image matrices of the DWI data. Because the images have been transformed to local texture matrices, we are able to extract discriminating information that identifies outliers in the data set by extending a traditional one-dimensional PLS algorithm to a two-dimensional operator. The class-membership matrix in this 2D-PLS algorithm is adapted to process samples that are image matrices, and the membership matrix thus represents varying degrees of importance of local information within the images. We also derive the analytic form of the generalized inverse of the class-membership matrix. We show that this method can effectively extract local features from brain images obtained from a large sample of human infants to identify images that are outliers in their textural features, permitting their exclusion from further processing when estimating tensors using the DWIs. This technique is shown to be superior in performance when compared with visual inspection and other common methods to address motion-related artifacts in DWI data. This technique is applicable to correcting motion artifacts in other magnetic resonance imaging (MRI) techniques (e.g., bootstrapping estimation) that use univariate or multivariate regression methods to fit MRI data to a pre-specified model. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yolcu, Cem; Bérut, Antoine; Falasco, Gianmaria; Petrosyan, Artyom; Ciliberto, Sergio; Baiesi, Marco
2017-04-01
The effect of a change of noise amplitudes in overdamped diffusive systems is linked to their unperturbed behavior by means of a nonequilibrium fluctuation-response relation. This formula holds also for systems with state-independent nontrivial diffusivity matrices, as we show with an application to an experiment of two trapped and hydrodynamically coupled colloids, one of which is subject to an external random forcing that mimics an effective temperature. The nonequilibrium susceptibility of the energy to a variation of this driving is an example of our formulation, which improves an earlier version, as it does not depend on the time-discretization of the stochastic dynamics. This scheme holds for generic systems with additive noise and can be easily implemented numerically, thanks to matrix operations.
Underwater noise modelling for environmental impact assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farcas, Adrian; Thompson, Paul M.; Merchant, Nathan D., E-mail: nathan.merchant@cefas.co.uk
Assessment of underwater noise is increasingly required by regulators of development projects in marine and freshwater habitats, and noise pollution can be a constraining factor in the consenting process. Noise levels arising from the proposed activity are modelled and the potential impact on species of interest within the affected area is then evaluated. Although there is considerable uncertainty in the relationship between noise levels and impacts on aquatic species, the science underlying noise modelling is well understood. Nevertheless, many environmental impact assessments (EIAs) do not reflect best practice, and stakeholders and decision makers in the EIA process are often unfamiliar with the concepts and terminology that are integral to interpreting noise exposure predictions. In this paper, we review the process of underwater noise modelling and explore the factors affecting predictions of noise exposure. Finally, we illustrate the consequences of errors and uncertainties in noise modelling, and discuss future research needs to reduce uncertainty in noise assessments.
Schoof, Tim; Rosen, Stuart
2014-01-01
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266
Neuromorphic Learning From Noisy Data
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Troudet, Terry
1993-01-01
Two reports present numerical study of performance of feedforward neural network trained by back-propagation algorithm in learning continuous-valued mappings from data corrupted by noise. Two types of noise considered: plant noise which affects dynamics of controlled process and data-processing noise, which occurs during analog processing and digital sampling of signals. Study performed with view toward use of neural networks as neurocontrollers to substitute for, or enhance, performances of human experts in controlling mechanical devices in presence of sensor and actuator noise and to enhance performances of more-conventional digital feedback electronic process controllers in noisy environments.
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
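A minimal sketch of the vectorized-covariance idea is given below for a small uniform line array: the sample covariance is vectorized, fitted against a Khatri-Rao (column-wise Kronecker) dictionary of steering vectors plus a noise term, and source powers are recovered with a non-negative solver standing in for the paper's sparse-constrained reconstruction. The array geometry, angular grid, and solver choice are assumptions.

```python
# Illustrative vectorised-covariance fitting (not the authors' algorithm):
# vec(R) is fitted with columns kron(conj(a_g), a_g) plus a noise column,
# and a non-negative least-squares solve acts as a simple sparsity prior.
import numpy as np
from scipy.optimize import nnls

M, d, wavelen = 8, 0.5, 1.0                        # sensors, spacing, wavelength (assumed)
grid = np.deg2rad(np.arange(-90, 91, 2))           # candidate source directions

def steering(theta):
    k = 2 * np.pi / wavelen
    return np.exp(1j * k * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(th) for th in grid])
true_idx, powers = [45, 70], [1.0, 0.5]            # sources at grid points 45 and 70

rng = np.random.default_rng(1)
N = 2000
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A[:, true_idx] @ (np.sqrt(powers)[:, None] * S)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R = X @ X.conj().T / N                             # sample covariance matrix

# Khatri-Rao dictionary: column g is kron(conj(a_g), a_g), matching vec(R).
KR = np.column_stack([np.kron(A[:, g].conj(), A[:, g]) for g in range(A.shape[1])])
dictionary = np.vstack([KR.real, KR.imag])
noise_col = np.concatenate([np.eye(M).ravel(), np.zeros(M * M)])   # sigma^2 * vec(I) term
dictionary = np.column_stack([dictionary, noise_col])
target = np.concatenate([R.T.ravel().real, R.T.ravel().imag])      # vec(R), stacked real/imag
p_hat, _ = nnls(dictionary, target)
print(np.argsort(p_hat[:-1])[-2:])                 # indices of the strongest grid points
```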
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation. aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
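The sketch below illustrates the general recipe of a robust correlation estimate followed by adaptive thresholding, using Spearman's rank correlation and a soft threshold that scales as sqrt(log p / n); it is not the estimator released by the authors, and the constant c is an assumed tuning value.

```python
# Minimal sketch of "robust correlation + thresholding" (illustrative only):
# rank-based correlation resists outliers, and small off-diagonal entries are
# soft-thresholded at a level that grows like sqrt(log p / n).
import numpy as np
from scipy.stats import spearmanr

def robust_sparse_corr(X, c=1.0):
    n, p = X.shape
    R, _ = spearmanr(X)                            # p x p rank-based correlation
    lam = c * np.sqrt(np.log(p) / n)               # universal threshold level
    R_thr = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # soft threshold
    np.fill_diagonal(R_thr, 1.0)
    return R_thr

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 40))
X[:, 1] = X[:, 0] + 0.3 * rng.standard_normal(100)          # one true correlation
X[0, 5] = 25.0                                               # a gross outlier
R_hat = robust_sparse_corr(X)
print(R_hat[0, 1], np.count_nonzero(R_hat) - 40)             # signal kept, noise mostly zeroed
```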
Receiver function deconvolution using transdimensional hierarchical Bayesian inference
NASA Astrophysics Data System (ADS)
Kolb, J. M.; Lekić, V.
2014-06-01
Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.
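The hierarchical ingredient, treating the noise magnitude and its correlation as unknowns inside the likelihood, can be sketched as a Gaussian log-likelihood whose covariance is parameterized by a noise amplitude and a correlation time; a sampler such as the reversible-jump MCMC used in the paper (not shown) would propose these jointly with the receiver-function model. The exponential correlation form and all numerical values below are assumptions.

```python
# Hedged sketch of the hierarchical-noise idea only; not the paper's
# parametrization. The covariance is C_ij = sigma^2 exp(-|t_i - t_j| / tau).
import numpy as np

def log_likelihood(residual, dt, sigma, tau):
    """Gaussian log-likelihood with noise amplitude sigma and correlation time tau."""
    n = residual.size
    t = np.arange(n) * dt
    C = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
    sign, logdet = np.linalg.slogdet(C)
    quad = residual @ np.linalg.solve(C, residual)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(3)
resid = rng.standard_normal(200)      # stand-in for data minus predicted waveform
# A sampler would propose (sigma, tau) jointly with the receiver-function model:
print(log_likelihood(resid, dt=0.1, sigma=1.0, tau=0.5))
print(log_likelihood(resid, dt=0.1, sigma=2.0, tau=0.5))
```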
Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald
2017-08-01
HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies conducted in this work.
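A compact reading of the HYPR operator described above is sketched below: the target image is replaced by the composite weighted by the ratio of low-pass-filtered target and composite images, applied after each subset update (the HYPR-AU-OSEM variant). The Gaussian smoothing kernel and the abbreviated OSEM loop are assumptions for illustration.

```python
# Sketch of the HYPR operator as read from the abstract (Gaussian low-pass
# filter assumed for the smoothing kernel F): I_HYPR = C * F(I) / F(C),
# with the composite C taken as the sum of the preceding subset images.
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_operator(image, composite, sigma=2.0, eps=1e-8):
    weight = gaussian_filter(image, sigma) / (gaussian_filter(composite, sigma) + eps)
    return composite * weight

# Toy usage inside an abbreviated OSEM loop: after each subset update the
# current estimate is de-noised against the running composite.
rng = np.random.default_rng(4)
estimate = np.clip(rng.poisson(10.0, size=(64, 64)).astype(float), 1e-3, None)
composite = np.zeros_like(estimate)
for subset in range(8):
    # ... an ordinary OSEM subset update of `estimate` would go here ...
    composite += estimate
    estimate = hypr_operator(estimate, composite)   # HYPR-AU-OSEM-style step
print(estimate.mean())
```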
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, an adaptive angular-velocity VKF_OT technique, to extract and characterize order components adaptively for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked are solved recursively with a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, its numerical implementation, and a parameter investigation. The adaptive VKF_OT scheme is compared with two other schemes by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of the process noise, as well as data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive nature and computational efficiency, are addressed, although the computations were performed off-line. The proposed scheme can simultaneously extract multiple spectral components and effectively decouple close and crossing orders associated with multi-axial reference rotating speeds.
Space Shuttle communications RF switch matrix
NASA Technical Reports Server (NTRS)
Winch, R.
1979-01-01
The Shuttle Orbiter communications equipment includes phase modulation (PM) and frequency modulation (FM) channels. The PM section has the capability of routing high levels of energy (175 W) from any one of four transmitters to any one of four antennas, mutually exclusive. The FM channel uses a maximum of 15-W power routed from either of two transmitters to one of two antennas, mutually exclusive. The paper describes the design and the theory of a logic-controlled RF switch matrix devised for the purposes cited. Both PM and FM channels are computer-controlled with manual overrides. The logic interface is realized with CMOS logic for low power consumption and high noise immunity. The interior of the switch matrix is maintained at a pressure of 15 psi (90% nitrogen, 10% helium) by an electron beam-welded encapsulation. The computational results confirm the viability of the RF switch matrix concept.
Asymmetric correlation matrices: an analysis of financial data
NASA Astrophysics Data System (ADS)
Livan, G.; Rebecchi, L.
2012-06-01
We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between two such markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
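The object of study, a non-symmetric correlation matrix between two distinct sets of standardized time series and its complex eigenvalue spectrum, can be reproduced on synthetic data as in the sketch below; the coupling strength and dimensions are illustrative assumptions.

```python
# Small illustration (assumed setup) of a non-symmetric cross-correlation
# matrix C = X_A X_B^T / T between two "markets" and its complex spectrum.
import numpy as np

rng = np.random.default_rng(5)
T, N = 1000, 50
A = rng.standard_normal((N, T))                                   # market A returns (synthetic)
B = 0.3 * A + np.sqrt(1 - 0.3**2) * rng.standard_normal((N, T))   # correlated market B

def standardise(X):
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

C = standardise(A) @ standardise(B).T / T          # N x N, generally non-symmetric
eigvals = np.linalg.eigvals(C)                     # complex in general
print(np.max(np.abs(eigvals)))                     # outliers signal genuine cross-correlation
```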
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-02-01
Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
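The two computations described, solving the large sparse system and extracting variances from selected (here diagonal) entries of its inverse, are sketched below with SciPy in place of MUMPS; the simulated transfer function, the regularization, and the column-by-column inverse extraction are assumptions made purely for illustration.

```python
# Toy sketch of the two computations (SciPy stand-in for MUMPS): solve a
# sparse normal-equation system, then read off diagonal entries of A^{-1},
# which set the parameter variances up to the data noise scale.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(6)
n = 500
H = sp.random(4 * n, n, density=0.01, random_state=6, format="csr")   # sparse transfer function
y = H @ rng.standard_normal(n) + 0.1 * rng.standard_normal(4 * n)      # simulated exposures

A = ((H.T @ H) + 1e-6 * sp.identity(n)).tocsc()    # regularized normal equations
b = H.T @ y
lu = splu(A)
solution = lu.solve(b)
# Selected entries of A^{-1}: here just the diagonal, column by column
# (a specialised solver such as MUMPS does this far more efficiently).
I = np.eye(n)
variances = np.array([lu.solve(I[:, j])[j] for j in range(n)])
print(solution[:3], variances[:3])
```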
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015 (N+1), 2020 (N+2), and 2025 (N+3) timeframes; SFW strategic thrusts and technical challenges; SFW advanced subsystems that are broadly applicable to N+3 vehicle concepts, with an indication where further noise research is needed; the components of core noise (compressor, combustor and turbine noise) and a rationale for NASA's current emphasis on the combustor-noise component; the increase in the relative importance of core noise due to turbofan design trends; the need to understand and mitigate core-noise sources for high-efficiency small gas generators; and the current research activities in the core-noise area, with additional details given about forthcoming updates to NASA's Aircraft Noise Prediction Program (ANOPP) core-noise prediction capabilities, two NRA efforts (Honeywell International, Phoenix, AZ and University of Illinois at Urbana-Champaign, respectively) to improve the understanding of core-noise sources and noise propagation through the engine core, and an effort to develop oxide/oxide ceramic-matrix-composite (CMC) liners for broadband noise attenuation suitable for turbofan-core application. Core noise must be addressed to ensure that the N+3 noise goals are met. Focused, but long-term, core-noise research is carried out to enable the advanced high-efficiency small gas-generator subsystem, common to several N+3 conceptual designs, needed to meet NASA's technical challenges. Intermediate updates to prediction tools are implemented as the understanding of the source structure and engine-internal propagation effects is improved. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Quiet-Aircraft Subproject aims to develop concepts and technologies to reduce perceived community noise attributable to aircraft with minimal impact on weight and performance. This reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic.
Resolving ability and image discretization in the visual system.
Shelepin, Yu E; Bondarko, V M
2004-02-01
Psychophysiological studies were performed to measure the spatial threshold for resolution of two "points" and the thresholds for discriminating their orientations depending on the distance between the two points. The data were compared with the scattering of the "point" by the eye's optics, the packing density of cones in the fovea, and the characteristics of the receptive fields of ganglion cells in the foveal area of the retina and of neurons in the corresponding projection zones of the primary visual cortex. The effective zone was shown to need to span the point-scattering function over several receptors: this preliminary blurring of the image by the eye's optics decreases the subsequent discretization noise created, at the level of the receptors, by the receptor matrix. The concordance of these parameters supports the optical operation of the spatial elements of the neural network determining the resolving ability of the visual system at different levels of visual information processing. It is suggested that the special geometry of the receptive fields of neurons in the striate cortex, which are concordant with the statistics of natural scenes, results in a further increase in the signal:noise ratio.
NASA Astrophysics Data System (ADS)
Abhinav, S.; Manohar, C. S.
2018-03-01
The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers, in situations involving large amount of dynamic measurement data and (or) low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: one, associated with the state estimation problem in the particle filtering step, and, secondly, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.
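A heavily simplified stand-in for the combined state/parameter idea is sketched below: a bootstrap particle filter supplies a marginal likelihood for a single dynamical parameter of a scalar system, and a random-walk Metropolis step accepts or rejects proposals. The Rao-Blackwellization, the modified measurement model, and the importance-sampling correction of the paper are not reproduced; every model and tuning value is an assumption.

```python
# Heavily simplified sketch of combined state/parameter estimation: a
# bootstrap particle filter estimates the likelihood of a scalar AR(1)
# parameter, which a Metropolis step then accepts or rejects.
import numpy as np

rng = np.random.default_rng(7)
T, q_true, r = 200, 0.05, 0.1
a_true = 0.9
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                                  # simulate the "measured" system
    x[t] = a_true * x[t - 1] + np.sqrt(q_true) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(r) * rng.standard_normal()

def pf_loglik(a, n_particles=300):
    particles = np.zeros(n_particles); ll = 0.0
    for t in range(1, T):
        particles = a * particles + np.sqrt(q_true) * rng.standard_normal(n_particles)
        w = np.exp(-0.5 * (y[t] - particles) ** 2 / r)         # measurement weights
        ll += np.log(w.mean() + 1e-300)
        particles = rng.choice(particles, size=n_particles, p=w / w.sum())   # resample
    return ll

a, ll = 0.5, pf_loglik(0.5)
for _ in range(200):                                   # short Metropolis chain over the parameter
    a_prop = a + 0.05 * rng.standard_normal()
    ll_prop = pf_loglik(a_prop)
    if np.log(rng.random()) < ll_prop - ll:
        a, ll = a_prop, ll_prop
print("posterior draw for a:", a)
```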
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
Precomputing Process Noise Covariance for Onboard Sequential Filters
NASA Technical Reports Server (NTRS)
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
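The core idea, replacing one hand-tuned constant process noise with a precomputed time-varying profile along a reference trajectory, can be sketched with a two-state covariance propagation as below; the dynamics and the shape of the Q(t) profile are assumptions standing in for values derived from consider covariance analysis.

```python
# Conceptual sketch only (not the authors' formulation): covariance
# propagation P <- F P F' + Q(t) with a precomputed, time-varying process
# noise profile versus a single hand-tuned constant Q.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # position/velocity propagation
G = np.array([[0.5 * dt**2], [dt]])          # acceleration-error mapping

def q_profile(t):
    # Acceleration-error variance that grows toward the end of the arc (assumed).
    return (1e-4 * (1.0 + 5.0 * (t / 100.0) ** 2)) ** 2

P_const, P_profile = np.eye(2), np.eye(2)
for k in range(100):
    P_const = F @ P_const @ F.T + G @ G.T * (1e-4) ** 2
    P_profile = F @ P_profile @ F.T + G @ G.T * q_profile(k * dt)

print(np.sqrt(P_const[0, 0]), np.sqrt(P_profile[0, 0]))   # position sigmas differ
```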
Decoherence, discord, and the quantum master equation for cosmological perturbations
NASA Astrophysics Data System (ADS)
Hollowood, Timothy J.; McDonald, Jamie I.
2017-05-01
We examine environmental decoherence of cosmological perturbations in order to study the quantum-to-classical transition and the impact of noise on entanglement during inflation. Given an explicit interaction between the system and environment, we derive a quantum master equation for the reduced density matrix of perturbations, drawing parallels with quantum Brownian motion, where we see the emergence of fluctuation and dissipation terms. Although the master equation is not in Lindblad form, we see how typical solutions exhibit positivity on super-horizon scales, leading to a physically meaningful density matrix. This allows us to write down a Langevin equation with stochastic noise for the classical trajectories which emerge from the quantum system on super-horizon scales. In particular, we find that environmental decoherence increases in strength as modes exit the horizon, with the growth driven essentially by white noise coming from local contributions to environmental correlations. Finally, we use our master equation to quantify the strength of quantum correlations as captured by discord. We show that environmental interactions have a tendency to decrease the size of the discord and that these effects are determined by the relative strength of the expansion rate and interaction rate of the environment. We interpret this in terms of the competing effects of particle creation versus environmental fluctuations, which tend to increase and decrease the discord respectively.
Vibrations and structureborne noise in space station
NASA Technical Reports Server (NTRS)
Vaicaitis, R.; Lyrintzis, C. S.; Bofilios, D. A.
1987-01-01
Analytical models were developed to predict vibrations and structureborne noise generation of cylindrical and rectangular acoustic enclosures. These models are then used to determine structural vibration levels and interior noise in response to random point input forces. The guidelines developed could provide preliminary information on acoustical and vibrational environments in space station habitability modules under orbital operations. The structural models include a single-wall monocoque shell, a double-wall shell, a stiffened orthotropic shell, discretely stiffened flat panels, and a coupled system composed of a cantilever beam structure and a stiffened sidewall. Aluminum and fiber-reinforced composite materials are considered for the single- and double-wall shells. The end caps of the cylindrical enclosures are modeled either as single- or double-wall circular plates. Sound generation in the interior space is calculated by coupling the structural vibrations to the acoustic field in the enclosure. Modal methods and transfer matrix techniques are used to obtain structural vibrations. Parametric studies are performed to determine the sensitivity of the interior noise environment to changes in input, geometric, and structural conditions.
Emergence of nonwhite noise in Langevin dynamics with magnetic Lorentz force
NASA Astrophysics Data System (ADS)
Chun, Hyun-Myung; Durang, Xavier; Noh, Jae Dong
2018-03-01
We investigate the low mass limit of Langevin dynamics for a charged Brownian particle driven by a magnetic Lorentz force. In the low mass limit, velocity variables relaxing quickly are coarse-grained out to yield effective dynamics for position variables. Without the Lorentz force, the low mass limit is equivalent to the high friction limit. Both cases share the same Langevin equation that is obtained by setting the mass to zero. The equivalence breaks down in the presence of the Lorentz force. The low mass limit cannot be achieved by setting the mass to zero. The limit is also distinct from the large friction limit. We derive the effective equations of motion in the low mass limit. The resulting stochastic differential equation involves a nonwhite noise whose correlation matrix has antisymmetric components. We demonstrate the importance of the nonwhite noise by investigating the heat dissipation by a driven Brownian particle, where the emergent nonwhite noise has a physically measurable effect.
Prediction of L70 lumen maintenance and chromaticity for LEDs using extended Kalman filter models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lall, Pradeep; Wei, Junchao; Davis, Lynn
2013-09-30
Solid-state lighting (SSL) luminaires containing light emitting diodes (LEDs) have the potential of seeing excessive temperatures when being transported across country or being stored in non-climate controlled warehouses. They are also being used in outdoor applications in desert environments that see little or no humidity but will experience extremely high temperatures during the day. This makes it important to increase our understanding of what effects high temperature exposure for a prolonged period of time will have on the usability and survivability of these devices. Traditional light sources "burn out" at end-of-life. For an incandescent bulb, the lamp life is defined by B50 life. However, the LEDs have no filament to "burn". The LEDs continually degrade and the light output decreases eventually below useful levels causing failure. Presently, the TM-21 test standard is used to predict the L70 life of LEDs from LM-80 test data. Several failure mechanisms may be active in a LED at a single time causing lumen depreciation. The underlying TM-21 Model may not capture the failure physics in the presence of multiple failure mechanisms. Correlation of lumen maintenance with the underlying physics of degradation at system level is needed. In this paper, Kalman Filter (KF) and Extended Kalman Filters (EKF) have been used to develop a 70-percent Lumen Maintenance Life Prediction Model for LEDs used in SSL luminaires. Ten-thousand hour LM-80 test data for various LEDs have been used for model development. System state at each future time has been computed based on the state space at the preceding time step, system dynamics matrix, control vector, control matrix, measurement matrix, measured vector, process noise and measurement noise. The future state of the lumen depreciation has been estimated based on a second order Kalman Filter model and a Bayesian Framework. The measured state variable has been related to the underlying damage using physics-based models. Life prediction of L70 life for the LEDs used in SSL luminaires from KF and EKF based models have been compared with the TM-21 model predictions and experimental data.
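A much-simplified sketch of the second-order (level plus decay-rate) Kalman filter idea is given below: the filter tracks synthetic LM-80-style lumen-maintenance readings, and the filtered state is extrapolated to the 70% threshold. The model matrices, noise levels, and degradation curve are illustrative assumptions, not the paper's calibrated values.

```python
# Simplified second-order Kalman filter for lumen maintenance (illustrative
# model only): state = [lumen fraction, decay rate], observed lumen fraction.
import numpy as np

dt = 1000.0                                   # hours between LM-80 readings
F = np.array([[1.0, dt], [0.0, 1.0]])         # state transition
H = np.array([[1.0, 0.0]])                    # measurement matrix
Q = np.diag([1e-6, 1e-12])                    # process noise (assumed)
R = np.array([[1e-4]])                        # measurement noise (assumed)

rng = np.random.default_rng(8)
times = np.arange(0, 10000 + dt, dt)
truth = np.exp(-3e-5 * times)                 # synthetic degradation curve
meas = truth + 0.01 * rng.standard_normal(times.size)

x, P = np.array([1.0, 0.0]), np.eye(2)
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)                        # update
    P = (np.eye(2) - K @ H) @ P

# Linear extrapolation of the filtered state to the 70% threshold.
hours_to_L70 = times[-1] + (0.70 - x[0]) / x[1]
print("estimated L70 life (hours):", hours_to_L70)
```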
NASA Technical Reports Server (NTRS)
Streett, C. L.; Lockard, D. P.; Singer, B. A.; Khorrami, M. R.; Choudhari, M. M.
2003-01-01
The LaRC investigative process for airframe noise has proven to be a useful guide for elucidation of the physics of flow-induced noise generation over the last five years. This process, relying on a close interplay between experiment and computation, is described and demonstrated here on the archetypal problem of flap-edge noise. Some detailed results from both experiment and computation are shown to illustrate the process, and a description of the multi-source physics seen in this problem is conjectured.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for unknown nonlinear stochastic hybrid systems with a direct transmission matrix from input to output. The off-line observer/Kalman filter identification method provides a good initial guess of the modified NARMAX model, reducing the on-line system identification time. Based on the modified NARMAX system identification, a corresponding adaptive digital control scheme is then presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noise, and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion based on the innovation error estimated by the Kalman filter is suggested, and a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is used to re-estimate the parameters for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Measurement of hearing aid internal noise
Lewis, James D.; Goodman, Shawn S.; Bentler, Ruth A.
2010-01-01
Hearing aid equivalent input noise (EIN) measures assume the primary source of internal noise to be located prior to amplification and to be constant regardless of input level. EIN will underestimate internal noise in the case that noise is generated following amplification. The present study investigated the internal noise levels of six hearing aids (HAs). Concurrent with HA processing of a speech-like stimulus with both adaptive features (acoustic feedback cancellation, digital noise reduction, microphone directionality) enabled and disabled, internal noise was quantified for various stimulus levels as the variance across repeated trials. Changes in noise level as a function of stimulus level demonstrated that (1) generation of internal noise is not isolated to the microphone, (2) noise may be dependent on input level, and (3) certain adaptive features may contribute to internal noise. Quantifying internal noise as the variance of the output measures allows for noise to be measured under real-world processing conditions, accounts for all sources of noise, and is predictive of internal noise audibility. PMID:20370034
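The measurement idea, taking the variance of the output across repeated presentations of the same stimulus as the internal-noise estimate, is illustrated below with a toy compressive device model; the stimulus, the device nonlinearity, and the level-dependent noise term are assumptions.

```python
# Direct illustration of the across-trial variance measure (synthetic signals,
# not real hearing-aid recordings).
import numpy as np

rng = np.random.default_rng(9)
fs, dur, n_trials = 16000, 0.5, 20
t = np.arange(int(fs * dur)) / fs
stimulus = np.sin(2 * np.pi * 500 * t)                        # speech-like stand-in

def hearing_aid(x):
    """Toy device: compressive gain plus level-dependent internal noise."""
    out = np.tanh(2.0 * x)
    return out + (0.005 + 0.01 * np.abs(out)) * rng.standard_normal(x.size)

trials = np.stack([hearing_aid(stimulus) for _ in range(n_trials)])
internal_noise_power = trials.var(axis=0).mean()               # variance across repeated trials
print(10 * np.log10(internal_noise_power))
```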
Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hua; Ouyang, Luo; Wang, Jing, E-mail: jhma@smu.edu.cn, E-mail: jing.wang@utsouthwestern.edu
2014-03-15
Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction. Conclusions: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
Research on strategy marine noise map based on i4ocean platform: Constructing flow and key approach
NASA Astrophysics Data System (ADS)
Huang, Baoxiang; Chen, Ge; Han, Yong
2016-02-01
Noise level in a marine environment has raised extensive concern in the scientific community. The research is carried out on the i4Ocean platform, following a workflow of ocean noise model integration; noise data extraction, processing, visualization, and interpretation; and ocean noise map construction and publishing. For the convenience of numerical computation, and based on the characteristics of the ocean noise field, a hybrid propagation model that depends on spatial location is suggested: the normal-mode K/I model is used for the far field and the ray-based CANARY model for the near field. Visualizing marine ambient noise data is critical to understanding and predicting marine noise for relevant decision making. The marine noise map is constructed on a virtual ocean scene. The systematic marine noise visualization framework includes preprocessing, coordinate transformation and interpolation, and rendering. The simulation of ocean noise depends on a realistic sea surface; the dynamic water simulation grid was therefore improved with GPU fusion to achieve seamless combination with the noise visualization. Profile and spherical visualizations spanning the spatial and temporal dimensions are also provided for the vertical characteristics of the ocean ambient noise field. Finally, the marine noise map can be published using grid pre-processing and multistage caching to better serve the public.
Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.
Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V
2012-01-01
Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. The consequences of cochlear hearing loss for speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years, and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between the experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties hearing in adverse listening conditions.
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are, to some extent, insensitive to noise. We then re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm in the watershed shows that, although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
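The Jacobian preconditioning step mentioned above can be sketched as a simple column rescaling so that all model parameters have comparable sensitivity before a damped least-squares update; the particular weighting (inverse column norms) and the damping value are assumptions, not necessarily the authors' exact choice.

```python
# Minimal sketch of Jacobian preconditioning for a damped least-squares step
# (weighting form is an illustrative assumption).
import numpy as np

def preconditioned_update(J, residual, damping=1e-2):
    col_norms = np.linalg.norm(J, axis=0)
    W = np.diag(1.0 / np.maximum(col_norms, 1e-12))   # balance parameter sensitivities
    Jw = J @ W
    dm_w = np.linalg.solve(Jw.T @ Jw + damping * np.eye(Jw.shape[1]), Jw.T @ residual)
    return W @ dm_w                                   # update in the original parameters

rng = np.random.default_rng(10)
# Columns with wildly different sensitivities, as in mixed resistivity/thickness parameters.
J = rng.standard_normal((60, 8)) * np.array([1.0, 1e3, 1.0, 1e-3, 1.0, 10.0, 0.1, 1.0])
print(preconditioned_update(J, rng.standard_normal(60)))
```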
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.
A CLT on the SNR of Diagonally Loaded MVDR Filters
NASA Astrophysics Data System (ADS)
Rubio, Francisco; Mestre, Xavier; Hachem, Walid
2012-08-01
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially as well as temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
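The quantity whose fluctuations the CLT describes can be reproduced numerically as in the sketch below: the output SNR of an MVDR beamformer built from a diagonally loaded sample covariance, evaluated over repeated draws of the training snapshots. The array scenario, loading level, and white-noise covariance are illustrative assumptions.

```python
# Monte Carlo illustration (assumed scenario) of the SNR of a diagonally
# loaded MVDR filter; the SNR is invariant to the scaling of w, so the
# normalisation constant of the MVDR weights is omitted.
import numpy as np

rng = np.random.default_rng(11)
M, N, loading, sigma2_s = 10, 30, 0.1, 1.0
s = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3)) / np.sqrt(M)   # target steering vector
Rn = np.eye(M)                                                     # noise/interference covariance

snrs = []
for _ in range(500):
    noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    R_hat = noise @ noise.conj().T / N                             # sample covariance (training data)
    w = np.linalg.solve(R_hat + loading * np.eye(M), s)            # diagonally loaded MVDR weights
    snr = sigma2_s * np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ Rn @ w)
    snrs.append(snr)

snrs = np.array(snrs)
print("mean SNR:", snrs.mean(), "std of fluctuations:", snrs.std())   # approx Gaussian for large M, N
```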
A reexamination of ATS 6 magnetometer data for radially polarized Pc 3 magnetic pulsations
NASA Technical Reports Server (NTRS)
Takahashi, K.; Mcpherron, R. L.
1983-01-01
The polarization of Pc 3 (22-100 mHz) magnetic pulsations measured by the ATS 6 fluxgate magnetometer at synchronous orbit has been examined by using dynamic autospectral analysis. In contrast to the result obtained by Arthur et al. (1977) using the same data set, very few cases of radially polarized Pc 3 pulsations are found. It is suggested that satellite noise in the radial component, which depends on frequency f as 0.015/f (nT²/Hz), is responsible for this disagreement. In the presence of this type of noise, diagonalization of the spectral matrix can produce an erroneous major axis of polarization. Most Pc 3 pulsations classified as radially polarized by Arthur et al. appear to be a consequence of small-amplitude azimuthal pulsations contaminated by satellite noise.
Long-term exposure to noise impairs cortical sound processing and attention control.
Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto
2004-11-01
Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
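A minimal sketch of the linear signal-dependent noise model described above, in which sensor-noise variance grows linearly with signal level (a Poisson-like shot-noise term plus a constant floor); the function names and the simple least-squares fit are assumptions for illustration.

```python
import numpy as np

def fit_linear_noise_model(signal_levels, noise_variances):
    """Fit sensor-noise variance as a linear function of signal level.

    Models var(x) = a * x + b, i.e., a Poisson-like shot-noise term plus a
    signal-independent floor. The least-squares fit is illustrative.
    """
    a, b = np.polyfit(signal_levels, noise_variances, deg=1)
    return a, b

def noise_variance(signal, a, b):
    """Predicted noise variance at a given signal level."""
    return a * signal + b
```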
A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.
2012-01-01
The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems; namely, nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of them. Saccade information is not analysed and discarded due to insufficient data length to provide consistent and minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.
Quantum optics of lossy asymmetric beam splitters.
Uppu, Ravitej; Wolterink, Tom A W; Tentrup, Tristan B H; Pinkse, Pepijn W H
2016-07-25
We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering matrix. Our analysis using the noise operator formalism shows that the loss allows tunability of quantum interference to an extent not possible with a lossless beam splitter. Our theoretical studies support the experimental demonstrations of programmable quantum interference in highly multimodal systems such as opaque scattering media and multimode fibers.
Propagation of Environmental Noise
ERIC Educational Resources Information Center
Lyon, R. H.
1973-01-01
Solutions for environmental noise pollution lie in systematic study of many basic processes such as reflection, scattering, and spreading. Noise propagation processes should be identified in different situations and assessed for their relative importance. (PS)
The development of a Kalman filter clock predictor
NASA Technical Reports Server (NTRS)
Davis, John A.; Greenhall, Charles A.; Boudjemaa, Redoane
2005-01-01
A Kalman filter based clock predictor is developed, and its performance evaluated using both simulated and real data. The clock predictor is shown to possess a near-optimal Prediction Error Variance (PEV) when the underlying noise consists of one of the power law noise processes commonly encountered in time and frequency measurements. The predictor's performance in the presence of multiple noise processes is also examined. The relationship between the PEV obtained in the presence of multiple noise processes and those obtained for the individual component noise processes is examined. Comparisons are made with a simple linear clock predictor. The clock predictor is used to predict future values of the time offset between pairs of NPL's active hydrogen masers.
Benefits of adaptive FM systems on speech recognition in noise for listeners who use hearing aids.
Thibodeau, Linda
2010-06-01
To compare the benefits of adaptive FM and fixed FM systems through measurement of speech recognition in noise with adults and students in clinical and real-world settings. Five adults and 5 students with moderate-to-severe hearing loss completed objective and subjective speech recognition in noise measures with the 2 types of FM processing. Sentence recognition was evaluated in a classroom for 5 competing noise levels ranging from 54 to 80 dBA while the FM microphone was positioned 6 in. from the signal loudspeaker to receive input at 84 dB SPL. The subjective measures included 2 classroom activities and 6 auditory lessons in a noisy, public aquarium. On the objective measures, adaptive FM processing resulted in significantly better speech recognition in noise than fixed FM processing for 68- and 73-dBA noise levels. On the subjective measures, all individuals preferred adaptive over fixed processing for half of the activities. Adaptive processing was also preferred by most (8-9) individuals for the remaining 4 activities. The adaptive FM processing resulted in significant improvements at the higher noise levels and was preferred by the majority of participants in most of the conditions.
Innovative Approach for Developing Spacecraft Interior Acoustic Requirement Allocation
NASA Technical Reports Server (NTRS)
Chu, S. Reynold; Dandaroy, Indranil; Allen, Christopher S.
2016-01-01
The Orion Multi-Purpose Crew Vehicle (MPCV) is an American spacecraft for carrying four astronauts during deep space missions. This paper describes an innovative application of the Power Injection Method (PIM) for allocating Orion cabin continuous-noise Sound Pressure Level (SPL) limits to the sound power level (PWL) limits of major noise sources in the Environmental Control and Life Support System (ECLSS) during all mission phases. PIM is simulated using both Statistical Energy Analysis (SEA) and hybrid Statistical Energy Analysis-Finite Element (SEA-FE) models of the Orion MPCV to obtain the transfer matrix from the PWL of the noise sources to the acoustic energies of the receivers, i.e., the cavities associated with the cabin habitable volume. The goal of the allocation strategy is to control the total energy of the cabin habitable volume so that the required SPL limits are maintained. Simulations are used to demonstrate that applying the allocated PWLs to the noise sources in the models indeed reproduces the SPL limits in the habitable volume. The effects of Noise Control Treatment (NCT) on allocated noise source PWLs are investigated. The measurement of source PWLs for the fan and pump development units involved is also discussed, as it relates to some case-specific details of the allocation strategy presented here.
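A highly simplified sketch of the allocation idea: given a transfer matrix from source sound power to receiver acoustic energy (as a PIM-style simulation would provide), scale candidate source powers so no receiver exceeds its energy limit. The single global scale factor, names, and array shapes are illustrative assumptions; the actual Orion allocation strategy is considerably more involved.

```python
import numpy as np

def allocate_source_powers(T, e_limit, initial_powers):
    """Scale candidate source sound powers so predicted receiver energy stays under a limit.

    T: (n_receivers, n_sources) transfer matrix from source power to receiver
    acoustic energy, as obtained from a PIM-style simulation. A single global
    scale factor is the simplest possible allocation rule; all names here are
    illustrative.
    """
    predicted = T @ initial_powers          # receiver energies for the trial powers
    scale = np.min(e_limit / predicted)     # the tightest receiver sets the margin
    return initial_powers * scale
```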
Application of a TiO2 nanocomposite in earplugs, a case study of noise reduction.
Ibrahimi Ghavamabadi, Leila; Fouladi Dehaghi, Behzad; Hesampour, Morteza; Ahmadi Angali, Kambiz
2018-03-13
Use of hearing protection devices (HPDs) has become necessary when other control measures cannot reduce noise to a safe and standard level. In most countries, more effective hearing protection devices are in demand. The aim of this study was to examine the effects of titanium dioxide (TiO2) nanoparticles on noise reduction efficiency in a polyvinyl chloride (PVC) earplug. An S-60 type PVC polymer as the main matrix and TiO2 particles 30 nm in size were used. The PVC/TiO2 nanocomposite was mixed at a temperature of 160 °C and 40 rounds per minute (rpm), and samples were prepared with 0, 0.2 and 0.5 wt% TiO2 nanoparticle concentrations. Earplug samples with PVC/TiO2 (0.2, 0.5 wt%) nanoparticles, when compared with raw earplugs, showed almost equal noise attenuation at low frequencies (125-500 Hz). However, at high frequencies (2-8 kHz), the noise reduction of earplugs containing TiO2 nanoparticles was significantly increased. The results of the present study showed that samples containing TiO2 nanoparticles had more noticeable noise reduction abilities at higher frequencies in comparison with samples without the nanoparticles.
Ceramic matrix composite article and process of fabricating a ceramic matrix composite article
Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert
2016-01-12
A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.
Quantum simulation of an ultrathin body field-effect transistor with channel imperfections
NASA Astrophysics Data System (ADS)
Vyurkov, V.; Semenikhin, I.; Filippov, S.; Orlikovsky, A.
2012-04-01
An efficient program for the all-quantum simulation of nanometer field-effect transistors is elaborated. The model is based on the Landauer-Buttiker approach. Our calculation of transmission coefficients employs a transfer-matrix technique involving arbitrary precision (multiprecision) arithmetic to cope with evanescent modes. Modified in such a way, the transfer-matrix technique turns out to be much faster in practical simulations than the scattering-matrix technique. Results of the simulation demonstrate the impact of realistic channel imperfections (random charged centers and wall roughness) on transistor characteristics. The Landauer-Buttiker approach is developed to incorporate calculation of the noise at an arbitrary temperature. We also validate the ballistic Landauer-Buttiker approach for the usual situation when heavily doped contacts are indispensably included in the simulation region.
Auralization Architectures for NASA's Next Generation Aircraft Noise Prediction Program
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Lopes, Leonard V.; Burley, Casey L.; Aumann, Aric R.
2013-01-01
Aircraft community noise is a significant concern due to continued growth in air traffic, increasingly stringent environmental goals, and operational limitations imposed by airport authorities. The assessment of human response to noise from future aircraft can only be afforded through laboratory testing using simulated flyover noise. Recent work by the authors demonstrated the ability to auralize predicted flyover noise for a state-of-the-art reference aircraft and a future hybrid wing body aircraft concept. This auralization used source noise predictions from NASA's Aircraft NOise Prediction Program (ANOPP) as input. The results from this process demonstrated that auralization based upon system noise predictions is consistent with, and complementary to, system noise predictions alone. To further develop and validate the auralization process, improvements to the interfaces between the synthesis capability and the system noise tools are required. This paper describes the key elements required for accurate noise synthesis and introduces auralization architectures for use with the next-generation ANOPP (ANOPP2). The architectures are built around a new auralization library and its associated Application Programming Interface (API) that utilize ANOPP2 APIs to access data required for auralization. The architectures are designed to make the process of auralizing flyover noise a common element of system noise prediction.
Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.
2012-01-01
Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The technique proposed modifies the classical processing configuration based on the cross-correlation between the reference and primary microphone. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
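The abstract does not give the adaptive algorithm itself; the sketch below uses a standard LMS adaptive noise canceller with a delayed reference channel to suggest how a matched delay enters the processing. Tap count, step size, and the delay handling are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=64, mu=1e-3, delay=0):
    """Adaptive noise cancellation via LMS with a matched delay.

    The reference channel is delayed so that only the background-noise part of
    the two channels lines up, approximating the matched-delay idea described
    above. All parameters are illustrative.
    """
    ref = np.concatenate([np.zeros(delay), reference])[: len(primary)]
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = ref[n - n_taps:n][::-1]          # most recent reference samples
        y = w @ x                            # estimate of the background noise
        e = primary[n] - y                   # error = estimate of the desired signal
        w += 2 * mu * e * x                  # LMS weight update
        out[n] = e
    return out
```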
Time delay and noise explaining the behaviour of the cell growth in fermentation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayuobi, Tawfiqullah; Rosli, Norhayati; Bahar, Arifah
2015-02-03
This paper proposes to investigate the interplay between time delay and external noise in explaining the behaviour of the microbial growth in batch fermentation process. Time delay and noise are modelled jointly via stochastic delay differential equations (SDDEs). The typical behaviour of cell concentration in batch fermentation process under this model is investigated. Milstein scheme is applied for solving this model numerically. Simulation results illustrate the effects of time delay and external noise in explaining the lag and stationary phases, respectively for the cell growth of fermentation process.
Time delay and noise explaining the behaviour of the cell growth in fermentation process
NASA Astrophysics Data System (ADS)
Ayuobi, Tawfiqullah; Rosli, Norhayati; Bahar, Arifah; Salleh, Madihah Md
2015-02-01
This paper proposes to investigate the interplay between time delay and external noise in explaining the behaviour of the microbial growth in batch fermentation process. Time delay and noise are modelled jointly via stochastic delay differential equations (SDDEs). The typical behaviour of cell concentration in batch fermentation process under this model is investigated. Milstein scheme is applied for solving this model numerically. Simulation results illustrate the effects of time delay and external noise in explaining the lag and stationary phases, respectively for the cell growth of fermentation process.
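A minimal sketch of the Milstein scheme for a scalar stochastic delay differential equation with a constant-history initial condition, the kind of numerical integration referred to above; the generic drift and diffusion interface is an assumption, and the fermentation model's specific terms are not reproduced.

```python
import numpy as np

def milstein_sdde(f, g, dg, x0_hist, tau, T, dt, seed=0):
    """Milstein scheme for a scalar stochastic delay differential equation

        dX = f(X(t), X(t - tau)) dt + g(X(t)) dW,

    with constant history x0_hist on [-tau, 0] and dg the derivative of g.
    A minimal sketch; names and interface are illustrative.
    """
    rng = np.random.default_rng(seed)
    n_lag = int(round(tau / dt))
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0_hist
    for k in range(n):
        x_lag = x0_hist if k < n_lag else x[k - n_lag]   # delayed state
        dW = rng.normal(0.0, np.sqrt(dt))                # Brownian increment
        x[k + 1] = (x[k]
                    + f(x[k], x_lag) * dt
                    + g(x[k]) * dW
                    + 0.5 * g(x[k]) * dg(x[k]) * (dW**2 - dt))  # Milstein correction
    return x
```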
Noise removal using factor analysis of dynamic structures: application to cardiac gated studies.
Bruyant, P P; Sau, J; Mallet, J J
1999-10-01
Factor analysis of dynamic structures (FADS) facilitates the extraction of relevant data, usually with physiologic meaning, from a dynamic set of images. The result of this process is a set of factor images and curves plus some residual activity. The set of factor images and curves can be used to retrieve the original data with reduced noise using an inverse factor analysis process (iFADS). This improvement in image quality is expected because the inverse process does not use the residual activity, which is assumed to consist of noise. The goal of this work is to quantitate and assess the efficiency of this method on gated cardiac images. A computer simulation of a planar cardiac gated study was performed. Noise was added to the simulated images, and the data were processed by the FADS-iFADS program. The signal-to-noise ratios (SNRs) were compared between original and processed data. Planar gated cardiac studies from 10 patients were tested. The data processed by FADS-iFADS were subtracted from the original data, and the result of the subtraction was examined to evaluate its noisy nature. The SNR is about five times greater after the FADS-iFADS process. The difference between original and processed data is noise only, i.e., processed data equal original data minus some white noise. The FADS-iFADS process successfully removes an important part of the noise and is therefore a tool to improve the image quality of cardiac images. This tool does not decrease the spatial resolution (compared with smoothing filters) and does not lose details (compared with frequency-domain filters). Once the number of factors is chosen, this method is not operator dependent.
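A rough stand-in for the FADS/iFADS idea: rebuild a dynamic image series from its leading components only, discarding the residual that is assumed to be noise. A truncated SVD is used here in place of the actual factor-analysis model, and the array shapes and factor count are illustrative.

```python
import numpy as np

def factor_denoise(frames, n_factors=3):
    """Reconstruct a dynamic image series from its leading factors only.

    frames: (n_frames, n_pixels) array. Keeping only the first few components
    and discarding the residual mimics the FADS/iFADS idea of rebuilding the
    data without the noise-dominated residual activity. A truncated SVD stands
    in for the actual factor-analysis model.
    """
    mean = frames.mean(axis=0)
    U, s, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    low_rank = U[:, :n_factors] * s[:n_factors] @ Vt[:n_factors]
    return low_rank + mean
```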
Repair process and a repaired component
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, III, Herbert Chidsey; Simpson, Stanley F.
Matrix composite component repair processes are disclosed. The matrix composite repair process includes applying a repair material to a matrix composite component, securing the repair material to the matrix composite component with an external securing mechanism, and curing the repair material to bond the repair material to the matrix composite component during the securing by the external securing mechanism. The matrix composite component is selected from the group consisting of a ceramic matrix composite, a polymer matrix composite, and a metal matrix composite. In another embodiment, the repair process includes applying a partially-cured repair material to a matrix composite component, and curing the repair material to bond the repair material to the matrix composite component, an external securing mechanism securing the repair material throughout a curing period. In another embodiment, the external securing mechanism is consumed or decomposed during the repair process.
Silencer! A Tool for Substrate Noise Coupling Analysis
2004-01-09
network for up to one hundred substrate ports. The solver uses the Laplace equation and transforms it with Green's theorem into a … the distances between the contact center points can be calculated (using the Pythagorean theorem) and saved in an n × n matrix: $x_{ij} = c_{x,j} - c_{x,i}$, $y_{ij} = c_{y,j} - c_{y,i}$, $d_{ij} = \sqrt{x_{ij}^2 + y_{ij}^2}$.
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expansions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case in which the phase difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing, and the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and noise propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are analyzed. These results can provide theoretical reference and guidance for the optimized design of radial shearing interferometry systems.
The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation.
Gao, Siwei; Liu, Yanheng; Wang, Jian; Deng, Weiwen; Oh, Heekuck
2016-07-16
This paper proposes a multi-sensory Joint Adaptive Kalman Filter (JAKF), which extends innovation-based adaptive estimation (IAE) to estimate the motion state of moving vehicles ahead. JAKF treats Lidar and Radar data as the sources for the local filters, which adaptively adjust the measurement noise variance-covariance (V-C) matrix 'R' and the system noise V-C matrix 'Q'. The global filter then uses R to calculate the information allocation factor 'β' for data fusion. Finally, the global filter completes optimal data fusion and feeds back to the local filters to improve their measurement accuracy. Extensive simulation and experimental results show that the JAKF has better adaptive ability and fault tolerance. JAKF bridges the gap between the accuracies of the various sensors and improves the overall filtering effectiveness. If any sensor breaks down, the filtered results of JAKF can still maintain a stable convergence rate. Moreover, the JAKF outperforms the conventional Kalman filter (CKF) and the innovation-based adaptive Kalman filter (IAKF) with respect to the accuracy of displacement, velocity, and acceleration, respectively.
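A minimal sketch of the innovation-based adaptive estimation (IAE) step that JAKF builds on: estimate the measurement-noise covariance R from the sample covariance of recent innovations inside a sliding window. The window handling, the crude symmetrization, and the function signature are assumptions for illustration.

```python
import numpy as np

def iae_update_R(innovations, H, P_pred):
    """Innovation-based estimate of the measurement-noise covariance R.

    innovations: (window, n_meas) array of recent innovation vectors,
    H: measurement matrix, P_pred: predicted state covariance.
    Uses R_hat = C_innov - H P_pred H^T, the core IAE relation; the cleanup
    at the end is a crude illustrative safeguard, not a PSD guarantee.
    """
    d = np.asarray(innovations)
    C = d.T @ d / d.shape[0]                 # sample covariance of the innovations
    R_hat = C - H @ P_pred @ H.T
    return (R_hat + R_hat.T) / 2 + 1e-9 * np.eye(R_hat.shape[0])
```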
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
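The diagonal-averaging step described above is simple enough to sketch directly: each diagonal of the sample covariance is replaced by its mean, yielding a Toeplitz estimate (the maximum-entropy extrapolation stage is not shown). The function name and conventions are illustrative.

```python
import numpy as np

def toeplitz_average(R):
    """Average a sample covariance along its subdiagonals.

    Returns the Toeplitz (Hermitian) matrix whose k-th diagonal is the mean of
    the k-th diagonal of R, the constraint used above before extrapolation.
    """
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(n):
        m = np.mean(np.diagonal(R, offset=k))   # mean of the k-th superdiagonal
        idx = np.arange(n - k)
        T[idx, idx + k] = m
        T[idx + k, idx] = np.conj(m)            # enforce Hermitian symmetry
    return T
```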
Generalized sidelobe canceller beamforming method for ultrasound imaging.
Wang, Ping; Li, Na; Luo, Han-Wu; Zhu, Yong-Kun; Cui, Shi-Gang
2017-03-01
A modified generalized sidelobe canceller (IGSC) algorithm is proposed to enhance the resolution and the robustness against noise of the traditional generalized sidelobe canceller (GSC) and the coherence factor combined method (GSC-CF). In the GSC algorithm, the weighting vector is divided into adaptive and non-adaptive parts, but the non-adaptive part does not block all of the desired signal. A modified steering vector for the IGSC algorithm is generated by projecting the non-adaptive vector onto the signal space constructed from the covariance matrix of the received data. The blocking matrix is generated based on the orthogonal complement of the modified steering vector, and the weighting vector is updated subsequently. The performance of IGSC was investigated by simulations and experiments. In simulations, IGSC outperformed GSC-CF in terms of spatial resolution by 0.1 mm, with or without noise, as well as in terms of contrast ratio. The proposed IGSC can be further improved by combining it with CF. The experimental results also validated the effectiveness of the proposed algorithm on a dataset provided by the University of Michigan.
Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O
2007-06-01
Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time but also suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strength and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as is known from contemporary exams. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil with that of MRA datasets acquired with a dedicated peripheral angio matrix coil and higher parallel imaging factors. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32-channel 3.0 Tesla MR system. In 5 of them, MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4, respectively. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution was increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. Inter-reader agreement was calculated by kappa statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination and no adverse events occurred. None of the examinations showed venous overlay; 70% of the examinations showed excellent vessel conspicuity, whereas in 50% of the examinations artifacts occurred. All of these artifacts were judged as non-disturbing. Inter-reader agreement was good, with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and to decrease acquisition time while keeping image quality equal. Venous overlay can be effectively avoided despite the use of high-resolution scans.
Evaluation of Matrix9 silicon photomultiplier array for small-animal PET.
Du, Junwei; Schmall, Jeffrey P; Yang, Yongfeng; Di, Kun; Roncali, Emilie; Mitchell, Gregory S; Buckley, Steve; Jackson, Carl; Cherry, Simon R
2015-02-01
The MatrixSL-9-30035-OEM (Matrix9) from SensL is a large-area silicon photomultiplier (SiPM) photodetector module consisting of a 3 × 3 array of 4 × 4 element SiPM arrays (total of 144 SiPM pixels) and incorporates SensL's front-end electronics board and coincidence board. Each SiPM pixel measures 3.16 × 3.16 mm(2) and the total size of the detector head is 47.8 × 46.3 mm(2). Using 8 × 8 polished LSO/LYSO arrays (pitch 1.5 mm) the performance of this detector system (SiPM array and readout electronics) was evaluated with a view for its eventual use in small-animal positron emission tomography (PET). Measurements of noise, signal, signal-to-noise ratio, energy resolution, flood histogram quality, timing resolution, and array trigger error were obtained at different bias voltages (28.0-32.5 V in 0.5 V intervals) and at different temperatures (5 °C-25 °C in 5 °C degree steps) to find the optimal operating conditions. The best measured signal-to-noise ratio and flood histogram quality for 511 keV gamma photons were obtained at a bias voltage of 30.0 V and a temperature of 5 °C. The energy resolution and timing resolution under these conditions were 14.2% ± 0.1% and 4.2 ± 0.1 ns, respectively. The flood histograms show that all the crystals in the 1.5 mm pitch LSO array can be clearly identified and that smaller crystal pitches can also be resolved. Flood histogram quality was also calculated using different center of gravity based positioning algorithms. Improved and more robust results were achieved using the local 9 pixels for positioning along with an energy offset calibration. To evaluate the front-end detector readout, and multiplexing efficiency, an array trigger error metric is introduced and measured at different lower energy thresholds. Using a lower energy threshold greater than 150 keV effectively eliminates any mispositioning between SiPM arrays. In summary, the Matrix9 detector system can resolve high-resolution scintillator arrays common in small-animal PET with adequate energy resolution and timing resolution over a large detector area. The modular design of the Matrix9 detector allows it to be used as a building block for simple, low channel-count, yet high performance, small animal PET or PET/MRI systems.
Evaluation of Matrix9 silicon photomultiplier array for small-animal PET
Du, Junwei; Schmall, Jeffrey P.; Yang, Yongfeng; Di, Kun; Roncali, Emilie; Mitchell, Gregory S.; Buckley, Steve; Jackson, Carl; Cherry, Simon R.
2015-01-01
Purpose: The MatrixSL-9-30035-OEM (Matrix9) from SensL is a large-area silicon photomultiplier (SiPM) photodetector module consisting of a 3 × 3 array of 4 × 4 element SiPM arrays (total of 144 SiPM pixels) and incorporates SensL’s front-end electronics board and coincidence board. Each SiPM pixel measures 3.16 × 3.16 mm2 and the total size of the detector head is 47.8 × 46.3 mm2. Using 8 × 8 polished LSO/LYSO arrays (pitch 1.5 mm) the performance of this detector system (SiPM array and readout electronics) was evaluated with a view for its eventual use in small-animal positron emission tomography (PET). Methods: Measurements of noise, signal, signal-to-noise ratio, energy resolution, flood histogram quality, timing resolution, and array trigger error were obtained at different bias voltages (28.0–32.5 V in 0.5 V intervals) and at different temperatures (5 °C–25 °C in 5 °C degree steps) to find the optimal operating conditions. Results: The best measured signal-to-noise ratio and flood histogram quality for 511 keV gamma photons were obtained at a bias voltage of 30.0 V and a temperature of 5 °C. The energy resolution and timing resolution under these conditions were 14.2% ± 0.1% and 4.2 ± 0.1 ns, respectively. The flood histograms show that all the crystals in the 1.5 mm pitch LSO array can be clearly identified and that smaller crystal pitches can also be resolved. Flood histogram quality was also calculated using different center of gravity based positioning algorithms. Improved and more robust results were achieved using the local 9 pixels for positioning along with an energy offset calibration. To evaluate the front-end detector readout, and multiplexing efficiency, an array trigger error metric is introduced and measured at different lower energy thresholds. Using a lower energy threshold greater than 150 keV effectively eliminates any mispositioning between SiPM arrays. Conclusions: In summary, the Matrix9 detector system can resolve high-resolution scintillator arrays common in small-animal PET with adequate energy resolution and timing resolution over a large detector area. The modular design of the Matrix9 detector allows it to be used as a building block for simple, low channel-count, yet high performance, small animal PET or PET/MRI systems. PMID:25652479
Evaluation of Matrix9 silicon photomultiplier array for small-animal PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Junwei, E-mail: jwdu@ucdavis.edu; Schmall, Jeffrey P.; Yang, Yongfeng
Purpose: The MatrixSL-9-30035-OEM (Matrix9) from SensL is a large-area silicon photomultiplier (SiPM) photodetector module consisting of a 3 × 3 array of 4 × 4 element SiPM arrays (total of 144 SiPM pixels) and incorporates SensL's front-end electronics board and coincidence board. Each SiPM pixel measures 3.16 × 3.16 mm² and the total size of the detector head is 47.8 × 46.3 mm². Using 8 × 8 polished LSO/LYSO arrays (pitch 1.5 mm) the performance of this detector system (SiPM array and readout electronics) was evaluated with a view for its eventual use in small-animal positron emission tomography (PET). Methods: Measurements of noise, signal, signal-to-noise ratio, energy resolution, flood histogram quality, timing resolution, and array trigger error were obtained at different bias voltages (28.0–32.5 V in 0.5 V intervals) and at different temperatures (5 °C–25 °C in 5 °C degree steps) to find the optimal operating conditions. Results: The best measured signal-to-noise ratio and flood histogram quality for 511 keV gamma photons were obtained at a bias voltage of 30.0 V and a temperature of 5 °C. The energy resolution and timing resolution under these conditions were 14.2% ± 0.1% and 4.2 ± 0.1 ns, respectively. The flood histograms show that all the crystals in the 1.5 mm pitch LSO array can be clearly identified and that smaller crystal pitches can also be resolved. Flood histogram quality was also calculated using different center of gravity based positioning algorithms. Improved and more robust results were achieved using the local 9 pixels for positioning along with an energy offset calibration. To evaluate the front-end detector readout, and multiplexing efficiency, an array trigger error metric is introduced and measured at different lower energy thresholds. Using a lower energy threshold greater than 150 keV effectively eliminates any mispositioning between SiPM arrays. Conclusions: In summary, the Matrix9 detector system can resolve high-resolution scintillator arrays common in small-animal PET with adequate energy resolution and timing resolution over a large detector area. The modular design of the Matrix9 detector allows it to be used as a building block for simple, low channel-count, yet high performance, small animal PET or PET/MRI systems.
Lateralization of music processing with noises in the auditory cortex: an fNIRS study.
Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik
2014-01-01
The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noises, the right-hemispheric lateralization increased. Particularly, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization; this is attributed to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts.
Lateralization of music processing with noises in the auditory cortex: an fNIRS study
Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik
2014-01-01
The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noises, the right-hemispheric lateralization increased. Particularly, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization; this is attributed to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts. PMID:25538583
Automatic Methods in Image Processing and Their Relevance to Map-Making.
1981-02-11
[Garbled OCR excerpt: the fragment discusses correlation processors based on image and correlator models under conditions of low image contrast or low signal-to-noise ratio, with an example in which the image function f is white noise so that its autocorrelation is a Dirac impulse. Report sections listed include Sensor Noise, Self Noise, Machine Noise, and Fixed Point Processing.]
Frequency domain phase noise analysis of dual injection-locked optoelectronic oscillators.
Jahanbakht, Sajad
2016-10-01
Dual injection-locked optoelectronic oscillators (DIL-OEOs) have been introduced as a means to achieve very low-noise microwave oscillations while avoiding the large spurious peaks that occur in the phase noise of conventional single-loop OEOs. In these systems, two OEOs are inter-injection locked to each other. The OEO with the longer optical fiber delay line is called the master OEO, and the other is called the slave OEO. Here, a frequency domain approach, based on the conversion matrix method, for simulating the phase noise spectrum of each OEO in a DIL-OEO system is presented. The validity of the new approach is verified by comparing its results with previously published data in the literature. In the new approach, the power spectral densities (PSDs) of two white and 1/f noise sources are first optimized in each of the master or slave OEOs such that the simulated phase noise of that OEO in the free-running state matches its measured phase noise. The proposed approach is then able to simulate the phase noise PSD of both OEOs in the injection-locked state. Because of its short run time, especially compared to previously proposed time domain approaches, the new approach is suitable for optimizing the power injection ratios (PIRs), and potentially other circuit parameters, in order to achieve good phase noise performance in each of the OEOs. Through various numerical simulations, the optimum PIRs for achieving good phase noise performance are presented and discussed; they agree with previously published results, which further verifies the applicability of the new approach. Moreover, some other interesting results regarding the spur levels are also presented.
NASA Astrophysics Data System (ADS)
Koniczek, Martin; El-Mohri, Youcef; Antonuk, Larry E.; Liang, Albert; Zhao, Qihua; Jiang, Hao
2011-03-01
A decade after the clinical introduction of active matrix flat-panel imagers (AMFPIs), the performance of this technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise. From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual, representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise densities (for flicker noise) or from fundamental equations (for thermal noise). Noise parameters obtained from the simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large area AP imagers, with signal levels representative of those generated at fluoroscopic exposures, is described, and initial results are reported.
NASA Astrophysics Data System (ADS)
Chevalier, Pascal; Oukaci, Abdelkader; Delmas, Jean-Pierre
2011-12-01
The detection of a known signal with unknown parameters in the presence of noise plus interferences (called total noise) whose covariance matrix is unknown is an important problem which has received much attention these last decades for applications such as radar, satellite localization or time acquisition in radio communications. However, most of the available receivers assume a second order (SO) circular (or proper) total noise and become suboptimal in the presence of SO noncircular (or improper) interferences, potentially present in the previous applications. The scarce available receivers which take the potential SO noncircularity of the total noise into account have been developed under the restrictive condition of a known signal with known parameters or under the assumption of a random signal. For this reason, following a generalized likelihood ratio test (GLRT) approach, the purpose of this paper is to introduce and to analyze the performance of different array receivers for the detection of a known signal, with different sets of unknown parameters, corrupted by an unknown noncircular total noise. To simplify the study, we limit the analysis to rectilinear known useful signals for which the baseband signal is real, which concerns many applications.
ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.
Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi
2009-06-15
In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is done by separating ERP/noise subspaces in multidimensional EEG data by a linear transformation, followed by dimension reduction achieved by ignoring the noise components during the inverse transformation. The separation matrix is found based on the assumption that ERP sources are deterministic across all repetitions of the same type of stimulus within the experiment, while the other noise sources do not obey this determinacy property. A detailed derivation of the technique is given together with an analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions when the basic assumptions are violated (e.g., when the problem is underdetermined) are also discussed. Moreover, we study how the number of channels and trials used by the method influences the effectiveness of ERP/noise subspace separation. In addition, we explore the impact of different data resampling strategies on the performance of the considered algorithm. The results can help in determining the optimal parameters of the equipment/methods used to elicit and reliably estimate ERPs.
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impact of DDC noise is assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of the dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
Murphy, Enda; King, Eoin A
2016-08-15
The strategic noise mapping process of the EU has now been ongoing for more than ten years. However, despite the fact that a significant volume of research has been conducted on the process and related issues there has been little change or innovation in how relevant authorities and policymakers are conducting the process since its inception. This paper reports on research undertaken to assess the possibility for smartphone-based noise mapping data to be integrated into the traditional strategic noise mapping process. We compare maps generated using the traditional approach with those generated using smartphone-based measurement data. The advantage of the latter approach is that it has the potential to remove the need for exhaustive input data into the source calculation model for noise prediction. In addition, the study also tests the accuracy of smartphone-based measurements against simultaneous measurements taken using traditional sound level meters in the field. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kargovsky, A. V.; Chichigina, O. A.; Anashkina, E. I.; Valenti, D.; Spagnolo, B.
2015-10-01
The relaxation dynamics of a system described by a Langevin equation with pulse multiplicative noise sources with different correlation properties is considered. The solution of the corresponding Fokker-Planck equation is derived for Gaussian white noise. Moreover, two pulse processes with regulated periodicity are considered as a noise source: the dead-time-distorted Poisson process and the process with fixed time intervals, which is characterized by an infinite correlation time. We find that the steady state of the system is dependent on the correlation properties of the pulse noise. An increase of the noise correlation causes the decrease of the mean value of the solution at the steady state. The analytical results are in good agreement with the numerical ones.
Kargovsky, A V; Chichigina, O A; Anashkina, E I; Valenti, D; Spagnolo, B
2015-10-01
The relaxation dynamics of a system described by a Langevin equation with pulse multiplicative noise sources with different correlation properties is considered. The solution of the corresponding Fokker-Planck equation is derived for Gaussian white noise. Moreover, two pulse processes with regulated periodicity are considered as a noise source: the dead-time-distorted Poisson process and the process with fixed time intervals, which is characterized by an infinite correlation time. We find that the steady state of the system is dependent on the correlation properties of the pulse noise. An increase of the noise correlation causes the decrease of the mean value of the solution at the steady state. The analytical results are in good agreement with the numerical ones.
Communication system with adaptive noise suppression
NASA Technical Reports Server (NTRS)
Kozel, David (Inventor); Devault, James A. (Inventor); Birr, Richard B. (Inventor)
2007-01-01
A signal-to-noise ratio dependent adaptive spectral subtraction process eliminates noise from noise-corrupted speech signals. The process first pre-emphasizes the frequency components of the input sound signal which contain the consonant information in human speech. Next, a signal-to-noise ratio is determined and a spectral subtraction proportion adjusted appropriately. After spectral subtraction, low amplitude signals can be squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining if the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Spectral subtraction may be performed on a composite noise-corrupted signal, or upon individual sub-bands of the noise-corrupted signal. Pre-averaging of the input signal's magnitude spectrum over multiple time frames may be performed to reduce musical noise.
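A minimal sketch of single-frame spectral subtraction as described in this abstract: subtract a running-average noise magnitude spectrum from the frame's magnitude spectrum and resynthesize with the original phase. The fixed subtraction proportion and spectral floor are illustrative; in the patented scheme the proportion is adjusted according to the estimated SNR.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=1.0, floor=0.02):
    """Spectral subtraction of an average noise magnitude from one frame.

    frame: time-domain samples; noise_mag: running-average noise magnitude
    spectrum (length len(frame)//2 + 1) estimated during unvoiced frames.
    alpha is the subtraction proportion; floor prevents negative magnitudes.
    Values are illustrative, not the patent's SNR-dependent settings.
    """
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    cleaned = np.maximum(mag - alpha * noise_mag, floor * mag)
    # Resynthesize with the original phase.
    return np.fft.irfft(cleaned * np.exp(1j * np.angle(spec)), n=len(frame))
```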
Detection in fixed and random noise in foveal and parafoveal vision explained by template learning
NASA Technical Reports Server (NTRS)
Beard, B. L.; Ahumada, A. J. Jr; Watson, A. B. (Principal Investigator)
1999-01-01
Foveal and parafoveal contrast detection thresholds for Gabor and checkerboard targets were measured in white noise by means of a two-interval forced-choice paradigm. Two white-noise conditions were used: fixed and twin. In the fixed noise condition a single noise sample was presented in both intervals of all the trials. In the twin noise condition the same noise sample was used in the two intervals of a trial, but a new sample was generated for each trial. Fixed noise conditions usually resulted in lower thresholds than twin noise. Template learning models are presented that attribute this advantage of fixed over twin noise either to fixed memory templates' reducing uncertainty by incorporation of the noise or to the introduction, by the learning process itself, of more variability in the twin noise condition. Quantitative predictions of the template learning process show that it contributes to the accelerating nonlinear increase in performance with signal amplitude at low signal-to-noise ratios.
Seismoelectric data processing for surface surveys of shallow targets
Haines, S.S.; Guitton, A.; Biondi, B.
2007-01-01
The utility of the seismoelectric method relies on the development of methods to extract the signal of interest from background and source-generated coherent noise that may be several orders of magnitude stronger. We compare data processing approaches to develop a sequence of preprocessing and signal/noise separation and to quantify the noise level from which we can extract signal events. Our preferred sequence begins with the removal of power line harmonic noise and the use of frequency filters to minimize random and source-generated noise. Mapping to the linear Radon domain with an inverse process incorporating a sparseness constraint provides good separation of signal from noise, though it is ineffective on noise that shows the same dip as the signal. Similarly, the seismoelectric signal and noise do not separate cleanly in the Fourier domain, so f-k filtering cannot remove all of the source-generated noise, and it also disrupts signal amplitude patterns. We find that prediction-error filters provide the most effective method to separate signal and noise, while also preserving amplitude information, assuming that adequate pattern models can be determined for the signal and noise. These Radon-domain and prediction-error-filter methods successfully separate signal from noise up to 33 dB stronger in our test data. © 2007 Society of Exploration Geophysicists.
The Effects of Syntactic Complexity on Processing Sentences in Noise
ERIC Educational Resources Information Center
Carroll, Rebecca; Ruigendijk, Esther
2013-01-01
This paper discusses the influence of stationary (non-fluctuating) noise on processing and understanding of sentences, which vary in their syntactic complexity (with the factors canonicity, embedding, ambiguity). It presents data from two RT-studies with 44 participants testing processing of German sentences in silence and in noise. Results show a…
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
Optimization and Analysis of Laser Beam Machining Parameters for Al7075-TiB2 In-situ Composite
NASA Astrophysics Data System (ADS)
Manjoth, S.; Keshavamurthy, R.; Pradeep Kumar, G. S.
2016-09-01
The paper focuses on laser beam machining (LBM) of In-situ synthesized Al7075-TiB2 metal matrix composite. Optimization and influence of laser machining process parameters on surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy of composites were studied. Al7075-TiB2 metal matrix composite was synthesized by in-situ reaction technique using stir casting process. Taguchi's L9 orthogonal array was used to design experimental trials. Standoff distance (SOD) (0.3 - 0.5 mm), cutting speed (1000 - 1200 m/hr) and gas pressure (0.5 - 0.7 bar) were considered as variable input parameters at three different levels, while power and nozzle diameter were maintained constant with air as assisting gas. Optimized process parameters for surface roughness, volumetric material removal rate (VMRR) and dimensional accuracy were calculated by generating the main effects plot for the signal-to-noise ratio (S/N ratio) for surface roughness, VMRR and dimensional error using Minitab software (version 16). The significance of standoff distance (SOD), cutting speed, and gas pressure for surface roughness, volumetric material removal rate (VMRR), and dimensional error was evaluated using the analysis of variance (ANOVA) method. Results indicate that, for surface roughness, cutting speed (56.38%) is the most significant parameter, followed by standoff distance (41.03%) and gas pressure (2.6%). For volumetric material removal rate (VMRR), gas pressure (42.32%) is the most significant parameter, followed by cutting speed (33.60%) and standoff distance (24.06%). For dimensional error, standoff distance (53.34%) is the most significant parameter, followed by cutting speed (34.12%) and gas pressure (12.53%). Further, verification experiments were carried out to confirm the performance of the optimized process parameters.
Porta, Tiffany; Grivet, Chantal; Knochenmuss, Richard; Varesio, Emmanuel; Hopfgartner, Gérard
2011-02-01
Analysis of low molecular weight compounds (LMWC) in complex matrices by vacuum matrix-assisted laser desorption/ionization (MALDI) often suffers from matrix interferences, which can severely degrade limits of quantitation. It is, therefore, useful to have available a range of suitable matrices, which exhibit complementary regions of interference. Two newly synthesized α-cyanocinnamic acid derivatives are reported here: (E)-2-cyano-3-(naphthalen-2-yl)acrylic acid (NpCCA) and (2E)-3-(anthracen-9-yl)-2-cyanoprop-2-enoic acid (AnCCA). Along with the commonly used α-cyano-4-hydroxycinnamic acid (CHCA), and the recently developed 4-chloro-α-cyanocinnamic acid (Cl-CCA) matrices, these constitute a chemically similar series of matrices covering a range of molecular weights, and with correspondingly differing ranges of spectral interference. Their performance was compared by measuring the signal-to-noise ratios (S/N) of 47 analytes, mostly pharmaceuticals, with the different matrices using the selected reaction monitoring (SRM) mode on a triple quadrupole instrument equipped with a vacuum MALDI source. AnCCA, NpCCA and Cl-CCA were found to offer better signal-to-noise ratios in SRM mode than CHCA, but Cl-CCA yielded the best results for 60% of the compounds tested. To better understand the relative performance of this matrix series, the proton affinities (PAs) were measured using the kinetic method. Their relative values were: AnCCA > CHCA > NpCCA > Cl-CCA. This ordering is consistent with the performance data. The synthesis of the new matrices is straightforward and they provide (1) tunability of matrix background interfering ions and (2) enhanced analyte response for certain classes of compounds. Copyright © 2011 John Wiley & Sons, Ltd.
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
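The reduction step for a single cluster (SVD of the cluster's kernel rows, projection of the data onto the leading singular vectors, and rejection of projected data below an SNR threshold) can be sketched as follows. The clustering itself, the noise level, and the threshold are illustrative assumptions, not the authors' production code.

```python
import numpy as np

def reduce_cluster(A_c, d_c, sigma, snr_min=1.0, rank=None):
    """Project one cluster of rows onto its leading singular vectors and keep only
    projected data with SNR above snr_min.

    A_c : (m_c, n) submatrix of the tomographic system for one cluster of rays
    d_c : (m_c,)  corresponding data, assumed to carry iid noise of standard deviation sigma
    """
    U, s, Vt = np.linalg.svd(A_c, full_matrices=False)
    if rank is not None:                       # optionally truncate to the leading singular values
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    d_proj = U.T @ d_c                         # data expressed in the singular-vector basis
    A_proj = s[:, None] * Vt                   # equivalent reduced system: A_proj @ m ~ d_proj
    # the rows of U.T are orthonormal, so iid noise keeps the same standard deviation,
    # and the per-datum SNR of the projected data is simply |d_proj| / sigma
    keep = np.abs(d_proj) / sigma >= snr_min
    return A_proj[keep], d_proj[keep]

# toy usage: 200 ray rows over 50 model parameters, noise standard deviation 0.1
rng = np.random.default_rng(1)
A = rng.random((200, 50)) * (rng.random((200, 50)) < 0.1)    # sparse kernel rows
m_true = rng.standard_normal(50)
d = A @ m_true + rng.normal(0.0, 0.1, 200)
A_red, d_red = reduce_cluster(A, d, sigma=0.1, snr_min=1.0)
```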
On the regularity of the covariance matrix of a discretized scalar field on the sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilbao-Ahedo, J.D.; Barreiro, R.B.; Herranz, D.
2017-02-01
We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
Random matrix theory and fund of funds portfolio optimisation
NASA Astrophysics Data System (ADS)
Conlon, T.; Ruskin, H. J.; Crane, M.
2007-08-01
The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
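A minimal sketch of the RMT cleaning step described above, assuming standard Marchenko-Pastur bounds for a standardized returns matrix; the paper's exact cleaning recipe and its treatment of the deviating eigenvalues may differ.

```python
import numpy as np

def rmt_clean(returns):
    """Clean a correlation matrix by flattening the eigenvalues that fall inside the
    Marchenko-Pastur bulk predicted by random matrix theory for pure noise."""
    T, N = returns.shape                              # T monthly observations, N funds
    X = (returns - returns.mean(0)) / returns.std(0, ddof=1)
    C = (X.T @ X) / T                                 # sample cross-correlation matrix
    q = T / N
    lam_max = (1.0 + 1.0 / np.sqrt(q)) ** 2           # upper edge of the MP spectrum
    w, V = np.linalg.eigh(C)
    noisy = w < lam_max                               # eigenvalues consistent with pure noise
    w_clean = w.copy()
    w_clean[noisy] = w[noisy].mean()                  # flatten the noisy part, preserve the trace
    C_clean = V @ np.diag(w_clean) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)                   # re-normalize to unit diagonal

def ipr(v):
    """Inverse participation ratio of a unit-norm eigenvector: a large IPR means few
    dominant components, i.e. a localized eigenvector."""
    return np.sum(v ** 4)

# toy usage: 60 months of returns for 20 funds
rng = np.random.default_rng(7)
R = rng.standard_normal((60, 20))
C_clean = rmt_clean(R)
```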
Stegger, Lars; Martirosian, Petros; Schwenzer, Nina; Bisdas, Sotirios; Kolb, Armin; Pfannenberg, Christina; Claussen, Claus D; Pichler, Bernd; Schick, Fritz; Boss, Andreas
2012-11-01
Hybrid positron emission tomography/magnetic resonance imaging (PET/MRI) with simultaneous data acquisition promises a comprehensive evaluation of cerebral pathophysiology on a molecular, anatomical, and functional level. Considering the necessary changes to the MR scanner design the feasibility of arterial spin labeling (ASL) is unclear. To evaluate whether cerebral blood flow imaging with ASL is feasible using a prototype PET/MRI device. ASL imaging of the brain with Flow-sensitive Alternating Inversion Recovery (FAIR) spin preparation and true fast imaging in steady precession (TrueFISP) data readout was performed in eight healthy volunteers sequentially on a prototype PET/MRI and a stand-alone MR scanner with 128 × 128 and 192 × 192 matrix sizes. Cerebral blood flow values for gray matter, signal-to-noise and contrast-to-noise ratios, and relative signal change were compared. Additionally, the feasibility of ASL as part of a clinical hybrid PET/MRI protocol was demonstrated in five patients with intracerebral tumors. Blood flow maps showed good delineation of gray and white matter with no discernible artifacts. The mean blood flow values of the eight volunteers on the PET/MR system were 51 ± 9 and 51 ± 7 mL/100 g/min for the 128 × 128 and 192 × 192 matrices (stand-alone MR, 57 ± 2 and 55 ± 5, not significant). The value for signal-to-noise (SNR) was significantly higher for the PET/MRI system using the 192 × 192 matrix size (P < 0.01), the relative signal change (δS) was significantly lower for the 192 × 192 matrix size (P = 0.02). ASL imaging as part of a clinical hybrid PET/MRI protocol could successfully be accomplished in all patients in diagnostic image quality. ASL brain imaging is feasible with a prototype hybrid PET/MRI scanner, thus adding to the value of this novel imaging technique.
Reconstructing Images in Astrophysics, an Inverse Problem Point of View
NASA Astrophysics Data System (ADS)
Theys, Céline; Aime, Claude
2016-04-01
After a short introduction, a first section provides a brief tutorial to the physics of image formation and its detection in the presence of noises. The rest of the chapter focuses on the resolution of the inverse problem
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
1998-01-01
An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter (EKF), a well known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system which estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an EKF. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. Associated with this is the development of the noise covariance matrix associated with the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.
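A minimal sketch of the combined measurement update described above: the magnetometer measurement model is stacked with the rate pseudo-measurement model, their noise covariances are combined block-diagonally, and a standard EKF update is applied. Treating the two noise sources as uncorrelated (the block-diagonal R) is a simplifying assumption made here; the paper derives the combined covariance explicitly. All matrices below are placeholders supplied by the caller.

```python
import numpy as np
from scipy.linalg import block_diag

def combine_measurements(H_mag, R_mag, H_pseudo, R_pseudo):
    """Stack the magnetometer measurement matrix with the pseudo-measurement matrix and
    form the combined measurement noise covariance (blocks assumed uncorrelated)."""
    H = np.vstack([H_mag, H_pseudo])
    R = block_diag(R_mag, R_pseudo)
    return H, R

def ekf_measurement_update(x, P, z, h_of_x, H, R):
    """One linearized (EKF) measurement update for state x with covariance P."""
    nu = z - h_of_x                            # innovation against the predicted measurement
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```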
Targeting functional motifs of a protein family
NASA Astrophysics Data System (ADS)
Bhadola, Pradeep; Deo, Nivedita
2016-10-01
The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT), which uses the physicochemical properties of the amino acids together with a multiple sequence alignment. A graphical method to represent protein sequences using physicochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where the noise reduction and information filtering is done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features as observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlation (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as that of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT observations and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions that have well-defined biological and structural importance, in agreement with experiments. The approach is crucial for the recognition of structural motifs as shown in β-lactamase (and other families) and selectively identifies the important positions for targets to deactivate (activate) the enzymatic actions.
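The entropic estimate and localization measure can be illustrated with a short sketch; this uses a standard Shannon-entropy and inverse-participation-ratio construction on the eigenvector components, which may differ in detail from the estimate defined in the paper.

```python
import numpy as np

def eigvec_information(C):
    """Eigen-decompose a positions-by-positions correlation matrix and score how
    localized (and hence potentially informative) each eigenvector is."""
    w, V = np.linalg.eigh(C)                           # ascending eigenvalues, orthonormal columns
    p = V ** 2                                         # squared components of each eigenvector sum to 1
    entropy = -np.sum(p * np.log(p + 1e-12), axis=0)   # Shannon-type entropic estimate per eigenvector
    ipr = np.sum(V ** 4, axis=0)                       # inverse participation ratio per eigenvector
    return w, V, entropy, ipr

# low entropy / high IPR: the eigenvector is concentrated on a few alignment positions,
# flagging a candidate cluster of structurally or functionally important residues
```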
Experimental testing of the noise-canceling processor.
Collins, Michael D; Baer, Ralph N; Simpson, Harry J
2011-09-01
Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acousto-optical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100, 10 MHz (200 Gops) processor.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation applied to the lateral deflection of an elastic beam from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
Quantitative Aspects of Single Molecule Microscopy
Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally
2015-01-01
Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
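The localization-accuracy results discussed in this tutorial rest on the Fisher information matrix of the image data. As a minimal numerical illustration, the sketch below evaluates the Fisher information for the position of a point emitter assuming an idealized 2D Gaussian PSF, pure Poisson (shot-noise-limited) statistics, and no background, which is a simplification of the full detector noise model treated in the literature.

```python
import numpy as np

def fisher_crlb(x0, y0, n_photons=1000.0, psf_sigma=1.0, pix=0.1, half_width=3.0):
    """Fisher information and Cramer-Rao lower bound for the (x0, y0) position of a
    point emitter imaged with a 2D Gaussian PSF under Poisson statistics."""
    grid = np.arange(-half_width, half_width, pix)
    X, Y = np.meshgrid(grid, grid)
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * psf_sigma ** 2))
    mu = n_photons * g * pix ** 2 / (2 * np.pi * psf_sigma ** 2)   # expected counts per pixel
    dmu = [mu * (X - x0) / psf_sigma ** 2,                         # d(mu)/dx0
           mu * (Y - y0) / psf_sigma ** 2]                         # d(mu)/dy0
    # Fisher information for Poisson data: I_ij = sum_k (1/mu_k) dmu_k/dtheta_i dmu_k/dtheta_j
    I = np.array([[np.sum(gi * gj / np.maximum(mu, 1e-12)) for gj in dmu] for gi in dmu])
    crlb = np.linalg.inv(I)                 # covariance lower bound for any unbiased estimator
    return I, np.sqrt(np.diag(crlb))

I_mat, sigma_bound = fisher_crlb(0.0, 0.0)
# with no background the bound approaches the familiar psf_sigma / sqrt(n_photons) limit
```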
NASA Astrophysics Data System (ADS)
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering predicting structural degradation, the adequacy of the picked process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
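A minimal particle-filter sketch for a monotonic degradation state, here a Paris-law-like crack growth model with illustrative coefficients and a synthetic measurement sequence. The multiplicative lognormal process noise keeps every increment positive and has unit mean, in the spirit of the unbiased, monotonicity-preserving requirements discussed above, although the paper's optimal formulation may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(a, dN=1000, C=1e-10, m=3.0, dS=100.0, noise_cv=0.2):
    """Paris-law step a_{k+1} = a_k + C*(dS*sqrt(pi*a))^m * dN, with the increment scaled
    by a unit-mean lognormal factor so that growth stays positive (monotonic)."""
    da = C * (dS * np.sqrt(np.pi * a)) ** m * dN
    sigma = np.sqrt(np.log(1.0 + noise_cv ** 2))
    mult = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=a.shape)  # E[mult] = 1
    return a + da * mult

def update(particles, z, meas_std=0.5e-3):
    """Weight particles with a Gaussian measurement likelihood and resample."""
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # multinomial resampling
    return particles[idx]

# toy run: track a crack (initial length 5 mm) from noisy length measurements in metres
particles = np.full(2000, 5e-3)
for z in [5.3e-3, 5.7e-3, 6.2e-3, 6.9e-3]:        # synthetic measurements
    particles = propagate(particles)
    particles = update(particles, z)
print(particles.mean(), particles.std())
```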
The system-resonance approach in modeling genetic structures.
Petoukhov, Sergey V
2016-01-01
The founder of the theory of resonance in structural chemistry Linus Pauling established the importance of resonance patterns in organization of living systems. Any living organism is a great chorus of coordinated oscillatory processes. From the formal point of view, biological organism is an oscillatory system with a great number of degrees of freedom. Such systems are studied in the theory of oscillations using matrix mathematics of their resonance characteristics. This study is devoted to a new approach for modeling genetically inherited structures and processes in living organisms using mathematical tools of the theory of resonances. This approach reveals hidden relationships in a number of genetic phenomena and gives rise to a new class of bio-mathematical models, which contribute to a convergence of biology with physics and informatics. In addition some relationships of molecular-genetic ensembles with mathematics of noise-immunity coding of information in modern communications technology are shown. Perspectives of applications of the phenomena of vibrational mechanics for modeling in biology are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions
NASA Astrophysics Data System (ADS)
Güler, Marifi
2017-10-01
Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of lnN /N for ensemble size N . The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
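A minimal two-state, constant-rate example of the construction described above (not the paper's general derivation): an exact ensemble of N independent chains is compared with an Euler-Maruyama integration of the corresponding diffusion approximation for the occupation density, whose noise amplitude scales as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(N=10_000, a=1.0, b=0.5, T=5.0, dt=1e-3):
    """Fraction x of chains in state 1 for rates a (0 -> 1) and b (1 -> 0):
    exact ensemble simulation vs. its diffusion (Langevin) approximation."""
    steps = int(T / dt)
    n1 = N // 2                          # exact simulation: number of chains in state 1
    x = 0.5                              # diffusion approximation of the density
    xs_exact, xs_diff = np.empty(steps), np.empty(steps)
    for k in range(steps):
        # exact: binomial numbers of chains switching during this time step
        up = rng.binomial(N - n1, 1.0 - np.exp(-a * dt))
        down = rng.binomial(n1, 1.0 - np.exp(-b * dt))
        n1 += up - down
        # diffusion: dx = (a(1-x) - b x) dt + sqrt((a(1-x) + b x)/N) dW
        drift = a * (1.0 - x) - b * x
        noise_amp = np.sqrt(max(a * (1.0 - x) + b * x, 0.0) / N)
        x += drift * dt + noise_amp * np.sqrt(dt) * rng.standard_normal()
        xs_exact[k], xs_diff[k] = n1 / N, x
    return xs_exact, xs_diff

exact, approx = simulate()
```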
Phase retrieval in annulus sector domain by non-iterative methods
NASA Astrophysics Data System (ADS)
Wang, Xiao; Mao, Heng; Zhao, Da-zun
2008-03-01
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the solution process more complicated. The primary mirror of a large-aperture telescope is usually segmented, and each segment is often shaped like an annulus sector. Accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save a large amount of computation time, and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelength, even when some noise is added.
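For the frequency filtering route, a minimal FFT-based solver of the ITE for uniform illumination is sketched below on a plain rectangular grid; the annulus-sector boundary handling and the orthogonalized-polynomial (matrix) method from the paper are not reproduced, and the regularization constant is an illustrative choice.

```python
import numpy as np

def tie_phase_fft(dIdz, I0, k, dx, eps=1e-6):
    """Frequency-filtering solution of the intensity transport equation for uniform
    illumination I0: laplacian(phi) = -(k / I0) * dI/dz."""
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4.0 * np.pi ** 2 * (FX ** 2 + FY ** 2)   # Fourier symbol of the Laplacian
    rhs = -(k / I0) * dIdz
    phi_hat = np.fft.fft2(rhs) / (lap - eps)        # eps regularizes the zero-frequency division
    phi_hat[0, 0] = 0.0                             # the piston (mean phase) is unobservable
    return np.real(np.fft.ifft2(phi_hat))

# usage: dIdz is typically estimated from two defocused images, (I_plus - I_minus) / (2 * dz)
```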
Hybrid colored noise process with space-dependent switching rates
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Lawley, Sean D.
2017-07-01
A fundamental issue in the theory of continuous stochastic process is the interpretation of multiplicative white noise, which is often referred to as the Itô-Stratonovich dilemma. From a physical perspective, this reflects the need to introduce additional constraints in order to specify the nature of the noise, whereas from a mathematical perspective it reflects an ambiguity in the formulation of stochastic differential equations (SDEs). Recently, we have identified a mechanism for obtaining an Itô SDE based on a form of temporal disorder. Motivated by switching processes in molecular biology, we considered a Brownian particle that randomly switches between two distinct conformational states with different diffusivities. In each state, the particle undergoes normal diffusion (additive noise) so there is no ambiguity in the interpretation of the noise. However, if the switching rates depend on position, then in the fast switching limit one obtains Brownian motion with a space-dependent diffusivity of the Itô form. In this paper, we extend our theory to include colored additive noise. We show that the nature of the effective multiplicative noise process obtained by taking both the white-noise limit (κ →0 ) and fast switching limit (ɛ →0 ) depends on the order the two limits are taken. If the white-noise limit is taken first, then we obtain Itô, and if the fast switching limit is taken first, then we obtain Stratonovich. Moreover, the form of the effective diffusion coefficient differs in the two cases. The latter result holds even in the case of space-independent transition rates, where one obtains additive noise processes with different diffusion coefficients. Finally, we show that yet another form of multiplicative noise is obtained in the simultaneous limit ɛ ,κ →0 with ɛ /κ2 fixed.
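For orientation, the two interpretations differ by a noise-induced drift. In one dimension, with a space-dependent diffusivity D(x), the standard conversion between the Itô and Stratonovich forms of the same process reads (a textbook relation stated here for context, not a result taken from the paper):

```latex
\text{It\^{o}:}\qquad dX_t = \sqrt{2D(X_t)}\,dW_t
\qquad\Longleftrightarrow\qquad
\text{Stratonovich:}\qquad dX_t = -\tfrac{1}{2}D'(X_t)\,dt + \sqrt{2D(X_t)}\circ dW_t .
```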
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years due to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential and continuous sequential algorithms) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, possibly leading to local maxima during the optimization. Whole-element optimization approaches employ all elements for the optimization, so the signal-to-noise ratio during the optimization is improved. However, because more randomness is introduced into the process, these optimizations take more time to converge than single-element-based approaches. Combining the advantages of both single-element-based approaches and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
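For concreteness, here is a minimal sketch of the element-based (stepwise sequential) strategy that forms the baseline for FEDA: each element's phase is stepped through a small set of values while the others are held fixed, and the phase giving the highest focal intensity is kept. The scattering medium is modeled by a random complex transmission vector; FEDA's four-element grouping itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

n_elem = 64
# transmission coefficients from each SLM element to the target focus (random medium model)
t = (rng.standard_normal(n_elem) + 1j * rng.standard_normal(n_elem)) / np.sqrt(2.0)

def focal_intensity(phases):
    """Intensity at the target focus for a given phase pattern on the SLM."""
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

def stepwise_sequential(n_steps=8, sweeps=1):
    """Optimize one element at a time, keeping the phase that maximizes the focal intensity."""
    phases = np.zeros(n_elem)
    trial_phases = 2.0 * np.pi * np.arange(n_steps) / n_steps
    for _ in range(sweeps):
        for i in range(n_elem):
            candidates = []
            for p in trial_phases:
                test = phases.copy()
                test[i] = p
                candidates.append((focal_intensity(test), p))
            phases[i] = max(candidates)[1]        # keep the best trial phase for this element
    return phases

opt = stepwise_sequential()
print(focal_intensity(np.zeros(n_elem)), focal_intensity(opt))   # enhancement after optimization
```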
I. Advances in NMR Signal Processing. II. Spin Dynamics in Quantum Dissipative Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yung-Ya
1998-11-01
Part I. Advances in NMR Signal Processing. Improvements in sensitivity and resolution are two major objectives in the development of NMR/MRI. A signal enhancement method is first presented which recovers signal from noise by a judicious combination of a priori knowledge to define the desired feasible solutions and a set theoretic estimation for restoring signal properties that have been lost due to noise contamination. The effect of noise can be significantly mitigated through the process of iteratively modifying the noisy data set to the smallest degree necessary so that it possesses a collection of prescribed properties and also lies closest to the original data set. A novel detection-estimation scheme is then introduced to analyze noisy and/or strongly damped or truncated FIDs. Based on exponential modeling, the number of signals is detected using information theory and the spectral parameters are estimated with the matrix pencil method. Part II. Spin Dynamics in Quantum Dissipative Systems. Spin dynamics in many-body dipole-coupled systems constitutes one of the most fundamental problems in magnetic resonance and condensed-matter physics. Its many-spin nature precludes any rigorous treatment. Therefore, the spin-boson model is adopted to describe in the rotating frame the influence of the dipolar local fields on a tagged spin. Based on the polaronic transform and a perturbation treatment, an analytical solution is derived, suggesting the existence of self-trapped states in the strong coupling limit, i.e., when the transverse local field >> the longitudinal local field. Such nonlinear phenomena originate from the joint action of the lattice fluctuations and the reaction field. Under a semiclassical approximation, it is found that the main effect of the reaction field is the renormalization of the Hamiltonian of interest. Its direct consequence is a two-step relaxation process: the spin is initially localized in a quasiequilibrium state, which is later detrapped by the lattice fluctuations on an extended time scale. Low-temperature measurements and classical-spin simulations are carried out to verify the above analysis. To promote the implementation and future study of the topics described in this thesis, program packages for advanced NMR signal processing and many-spin FID simulations are summarized and listed in the Appendix.
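The detection-estimation scheme summarized above models the FID as a sum of damped complex exponentials. A minimal sketch of the matrix pencil estimation step follows; it is simplified relative to the thesis in that the model order is assumed known here rather than detected with an information-theoretic criterion.

```python
import numpy as np
from scipy.linalg import hankel

def matrix_pencil(fid, order, pencil=None):
    """Estimate the poles z_k = exp((-1/tau_k + 2j*pi*f_k) * dt) of a 1D FID modeled
    as a sum of damped exponentials, using the matrix pencil method."""
    n = len(fid)
    L = pencil if pencil is not None else n // 3            # pencil parameter
    Y = hankel(fid[: n - L], fid[n - L - 1:])                # (n-L) x (L+1) Hankel data matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # rank-truncated pseudo-inverse keeps only the signal subspace of dimension 'order'
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    Y0_pinv = (Vt[:order].conj().T / s[:order]) @ U[:, :order].conj().T
    poles = np.linalg.eigvals(Y0_pinv @ Y1)
    return poles[np.argsort(-np.abs(poles))][:order]         # keep the dominant poles

# synthetic FID: two damped oscillations plus complex noise
dt = 1e-3
t = np.arange(512) * dt
fid = (np.exp((2j * np.pi * 50 - 10) * t) + 0.7 * np.exp((2j * np.pi * 120 - 30) * t)
       + 0.05 * (np.random.randn(512) + 1j * np.random.randn(512)))
z = matrix_pencil(fid, order=2)
freqs_hz = np.angle(z) / (2 * np.pi * dt)        # recovered frequencies
decay_rates = -np.log(np.abs(z)) / dt            # recovered damping rates
```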
NASA Astrophysics Data System (ADS)
Alfisyahrin; Isranuri, I.
2018-02-01
Active noise control (ANC) is a technique for countering noise with sound or, in signal-processing terms, for countering a signal with another signal. The technique can be used to attenuate the noise relevant to a given engineering task and to reduce automotive muffler noise to a minimum. The objective of this study is to develop an active noise control scheme that cancels the noise of an automotive exhaust (silencer) through signal processing simulation methods. The noise generator of the ANC system produces a counter signal matching the amplitude and frequency of the automotive noise. The steps are as follows. First, the noise of the automotive silencer was measured to characterize its amplitude and frequency content. A counter sound with a character similar to the source signal was then generated by a signal function. The simulation calculations were compared with Fourier-transformed field data captured on the muffler (silencer) of a 2009 Toyota Kijang Capsule assembly. MATLAB was used to simulate the processing of the noise generated by the exhaust (silencer) using the FFT. The counter signal is the source signal inverted in phase by 180°, produced by a signal noise generator instrument. The noise cancellation process was examined through computer simulation. The results show a sound attenuation (noise cancellation) of 33.7%, obtained by comparing the value of the source signal with the value of the counter signal. It can therefore be concluded that the noisy signal can be attenuated by 33.7%.
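A minimal sketch of the simulated cancellation step (illustrative only; the measured Toyota Kijang exhaust data are not reproduced): a stand-in noise record is Fourier analyzed, a counter signal is formed by inverting its phase by 180°, and the residual after superposition quantifies the attenuation, including the effect of an assumed small timing mismatch.

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# stand-in for the measured exhaust noise: firing-frequency harmonics plus broadband noise
noise = (np.sin(2 * np.pi * 90 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
         + 0.2 * np.random.randn(t.size))

spectrum = np.fft.rfft(noise)
anti = np.fft.irfft(spectrum * np.exp(1j * np.pi), n=noise.size)   # 180 deg shift = sign inversion

def rms_attenuation(residual, reference):
    return 1.0 - np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(reference ** 2))

ideal = rms_attenuation(noise + anti, noise)       # ~100% for a perfectly matched counter signal

# in practice the counter signal is imperfect; e.g. a hypothetical 0.4 ms processing lag
lag = int(0.4e-3 * fs)
realistic = rms_attenuation(noise + np.roll(anti, lag), noise)
print(f"ideal: {ideal:.1%}, with lag: {realistic:.1%}")
```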
Cheng, Liang; Wang, Shao-Hui; Peng, Kang; Liao, Xiao-Mei
2017-01-01
Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for the rate-level function. However, these observed changes were greater in neurons with the best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.
Optimization of valve opening process for the suppression of impulse exhaust noise
NASA Astrophysics Data System (ADS)
Li, Jingxiang; Zhao, Shengdun
2017-02-01
Impulse exhaust noise generated by the sudden impact of the discharging flow of pneumatic systems has significant temporal characteristics, including high sound pressure and a rapid sound transient. Impulse noise exposures are more hazardous to hearing than energy-equivalent uniform noise exposures. This paper presents a novel approach to suppress the peak sound pressure, a major indicator of the impulsiveness of the impulse exhaust noise, by optimizing the opening process of the valve. Relationships between exhaust flow and impulse noise are described by thermodynamics and the noise generating mechanism. An optimized approach based on controlling the valve opening process is then derived under a constraint of pre-set exhaust time. A modified servo-direct-driven valve was designed and assembled in a typical pneumatic system for verification experiments against an original solenoid valve. Experimental results for groups of initial cylinder pressures and pre-set exhaust times are shown to verify the effects of the proposed optimization. Indicators of energy-equivalent level and impulsiveness are introduced to discuss the effects of the noise suppression. The relationship between noise reduction and exhaust time delay is also discussed.
Wang, Zhi; Liang, Jiabin; Rong, Xing; Zhou, Hao; Duan, Chuanwei; Du, Weijia; Liu, Yimin
2015-12-01
This study investigated the noise hazard and its influence on hearing loss among workers in the automotive component manufacturing industry. Noise levels in the workplaces of automotive component manufacturing enterprises were measured, and hearing examinations were performed for workers, to analyze the features and exposure levels of noise in each process as well as the influence on hearing loss in workers. Among the manufacturing processes for different products in this industry, those for automobile hubs and for suspension and steering systems had the highest degrees of noise hazard, with over-standard rates of 79.8% and 57.1%, respectively. Among the different technical processes for automotive component manufacturing, punching and casting had the highest degrees of noise hazard, with over-standard rates of 65.0% and 50%, respectively. Workers engaged in automotive air conditioning system manufacturing had the highest rate of abnormal hearing (up to 3.1%). In the automotive component manufacturing industry, the noise hazard seriously exceeds the standard. Although the rate of abnormal hearing is lower than the average for the automobile manufacturing industry in China, this rate tends to increase gradually. Sufficient emphasis should be placed on the noise hazard in this industry.
Point process analysis of noise in early invertebrate vision
Vinnicombe, Glenn
2017-01-01
Noise is a prevalent and sometimes even dominant aspect of many biological processes. While many natural systems have adapted to attenuate or even usefully integrate noise, the variability it introduces often still delimits the achievable precision across biological functions. This is particularly so for visual phototransduction, the process responsible for converting photons of light into usable electrical signals (quantum bumps). Here, randomness of both the photon inputs (regarded as extrinsic noise) and the conversion process (intrinsic noise) are seen as two distinct, independent and significant limitations on visual reliability. Past research has attempted to quantify the relative effects of these noise sources by using approximate methods that do not fully account for the discrete, point process and time ordered nature of the problem. As a result the conclusions drawn from these different approaches have led to inconsistent expositions of phototransduction noise performance. This paper provides a fresh and complete analysis of the relative impact of intrinsic and extrinsic noise in invertebrate phototransduction using minimum mean squared error reconstruction techniques based on Bayesian point process (Snyder) filters. An integrate-fire based algorithm is developed to reliably estimate photon times from quantum bumps and Snyder filters are then used to causally estimate random light intensities both at the front and back end of the phototransduction cascade. Comparison of these estimates reveals that the dominant noise source transitions from extrinsic to intrinsic as light intensity increases. By extending the filtering techniques to account for delays, it is further found that among the intrinsic noise components, which include bump latency (mean delay and jitter) and shape (amplitude and width) variance, it is the mean delay that is critical to noise performance. As the timeliness of visual information is important for real-time action, this delay could potentially limit the speed at which invertebrates can respond to stimuli. Consequently, if one wants to increase visual fidelity, reducing the photoconversion lag is much more important than improving the regularity of the electrical signal. PMID:29077703
Wang, Wanping; Shao, Limin; Yuan, Bin; Zhang, Xu; Liu, Maili
2018-08-31
The number of chemical species is crucial in analyzing pulsed field gradient nuclear magnetic resonance spectral data. Any method to determine the number must handle the obstacles of collinearity and noise. Collinearity in pulsed field gradient NMR data poses a serious challenge to and fails many existing methods. A novel method is proposed by taking advantage of the two obstacles instead of eliminating them. In the proposed method, the determination is based on discriminating decay-profile-dominant eigenvectors from noise-dominant ones, and the discrimination is implemented with a novel low- and high-frequency energy ratio (LHFER). Its performance is validated with both simulated and experimental data. The method is mathematically rigorous, computationally efficient, and readily automated. It also has the potential to be applied to other types of data in which collinearity is fairly severe. Copyright © 2018 Elsevier B.V. All rights reserved.
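The discrimination step can be illustrated with a sketch of the idea (not the authors' exact LHFER definition, cutoff, or threshold): singular vectors dominated by decay profiles vary smoothly across the gradient dimension, so their spectral energy concentrates at low frequencies, whereas noise-dominated vectors are spectrally flat.

```python
import numpy as np

def lhf_energy_ratio(v, cutoff=0.25):
    """Low- to high-frequency energy ratio of a vector (an LHFER-like score)."""
    spec = np.abs(np.fft.rfft(v - v.mean()))[1:] ** 2    # drop the DC bin
    k = max(1, int(cutoff * len(spec)))
    return spec[:k].sum() / (spec[k:].sum() + 1e-12)

def estimate_n_species(D, ratio_threshold=3.0):
    """Count singular vectors along the gradient dimension that look like smooth decays.
    D: (n_gradients, n_points) pulsed field gradient NMR data matrix."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    scores = np.array([lhf_energy_ratio(u) for u in U.T])   # decay profiles live in U's columns
    return int(np.sum(scores > ratio_threshold)), scores

# synthetic example: two strongly collinear decays (similar diffusion coefficients) plus noise
rng = np.random.default_rng(5)
g2 = np.linspace(0.0, 1.0, 32)[:, None]                  # squared-gradient axis
D = (np.exp(-3.0 * g2) * rng.random(200) + np.exp(-4.0 * g2) * rng.random(200)
     + 0.01 * rng.standard_normal((32, 200)))
n_species, scores = estimate_n_species(D)
```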
Compressive spherical beamforming for localization of incipient tip vortex cavitation.
Choo, Youngmin; Seong, Woojae
2016-12-01
Noise from incipient propeller tip vortex cavitation (TVC) is generally generated in regions near the propeller tip. Localization of these sparse noise sources is performed using compressive sensing (CS) with measurement data from cavitation tunnel experiments. Since initial TVC sound radiates in all directions as a monopole source, a sensing matrix for CS is formulated by adopting spherical beamforming. CS localization is first examined with measurements of a known source, where the CS-estimated source position coincides with the known source position. Afterwards, CS is applied to initial cavitation noise cases. The cavitation was localized near the upper downstream area of the propeller, with less ambiguity than Bartlett spherical beamforming. The standard constraint in CS was modified by exploiting the physical features of cavitation to suppress the remaining ambiguity. CS localization of TVC using the modified constraint is shown as a function of cavitation number and compared to high-speed camera images.
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. A comparative evaluation and investigation is undertaken in this study of various single-channel noise-reduction algorithms applied to noisy speech taken from three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
Robust modular product family design
NASA Astrophysics Data System (ADS)
Jiang, Lan; Allada, Venkat
2001-10-01
This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How to effectively design a product family (PF) that is robust enough to accommodate future customer requirements. (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called the quality characteristic (QC) is defined to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.
NASA Astrophysics Data System (ADS)
Zeng, Zhoufang; Wang, Yandong; Guo, Xinhua; Wang, Ling; Lu, Nan
2014-05-01
A hydrophobic-hydrophilic-hydrophobic pattern has been produced on the surface of a silicon substrate for selective enrichment, self-desalting, and matrix-free analysis of peptides in a single step. Upon sample application, the sample solution is first confined in a small area by a hydrophobic F-SAM outer area, after which salt contaminants and peptides are selectively enriched in the hydrophilic and hydrophobic areas, respectively. Simultaneously, matrix background noise is significantly reduced or eliminated because of immobilization of matrix molecules. As a result, the detection sensitivity is enhanced 20-fold compared with that obtained using the usual MALDI plate, and interference-free detection is achieved in the low m/z range. In addition, peptide ions can be identified unambiguously in the presence of NH4HCO3 (100 mM), urea (1 M), and NaCl (1 M). When the device was applied to the analysis of BSA digests, the peptide recovery and protein identification confidence were greatly improved.
Quantum confinement of nanocrystals within amorphous matrices
NASA Astrophysics Data System (ADS)
Lusk, Mark T.; Collins, Reuben T.; Nourbakhsh, Zahra; Akbarzadeh, Hadi
2014-02-01
Nanocrystals encapsulated within an amorphous matrix are computationally analyzed to quantify the degree to which the matrix modifies the nature of their quantum-confinement power—i.e., the relationship between nanocrystal size and the gap between valence- and conduction-band edges. A special geometry allows exactly the same amorphous matrix to be applied to nanocrystals of increasing size to precisely quantify changes in confinement without the noise typically associated with encapsulating structures that are different for each nanocrystal. The results both explain and quantify the degree to which amorphous matrices redshift the character of quantum confinement. The character of this confinement depends on both the type of encapsulating material and the separation distance between the nanocrystals within it. Surprisingly, the analysis also identifies a critical nanocrystal threshold below which quantum confinement is not possible—a feature unique to amorphous encapsulation. Although applied to silicon nanocrystals within an amorphous silicon matrix, the methodology can be used to accurately analyze the confinement softening of other amorphous systems as well.
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-01-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of the auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we use structural equation modeling to evaluate interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55 to 79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. PMID:23541911
Scalable non-negative matrix tri-factorization.
Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž
2017-01-01
Matrix factorization is a well established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100-times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
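A serial reference sketch of the tri-factorization objective ||X - U S Vᵀ||_F using standard multiplicative updates is shown below; the paper's contribution is a block-wise, GPU-parallel scheme that is mathematically equivalent to such a serial factorization, and the update rules, ranks, and iteration count here are illustrative rather than taken from the paper.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Non-negative matrix tri-factorization X ~ U @ S @ V.T via multiplicative updates
    minimizing the Frobenius reconstruction error (serial reference version)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k1))
    S = rng.random((k1, k2))
    V = rng.random((m, k2))
    for _ in range(n_iter):
        U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
        S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
        V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
    return U, S, V

# toy usage on a random non-negative matrix
X = np.random.rand(200, 150)
U, S, V = nmtf(X, k1=5, k2=4)
rel_err = np.linalg.norm(X - U @ S @ V.T) / np.linalg.norm(X)
```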
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
Noise removal in extended depth of field microscope images through nonlinear signal processing.
Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J
2013-04-01
Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
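For context, the conventional linear baseline that the nonlinear scheme above is compared against can be sketched as a frequency-domain Wiener deconvolution; the Gaussian blur kernel and the noise-to-signal constant below are illustrative assumptions, not the engineered EDF point spread function or trained filter from the paper.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: W = H* / (|H|^2 + NSR).
    `nsr` is an assumed noise-to-signal power ratio (tuning constant)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Toy usage: blur a random image with a Gaussian kernel, add noise, then restore.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
blurred += 0.01 * rng.standard_normal(blurred.shape)   # additive detection noise
restored = wiener_deconvolve(blurred, psf, nsr=0.05)
```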
Density-matrix description of heteronuclear decoupling in AmXn systems
NASA Astrophysics Data System (ADS)
McClung, R. E. D.; John, Boban K.
A detailed investigation of the effects of ordinary noise decoupling and spherical randomization decoupling on the elements of the density matrix for AmXn spin systems is presented. The elements are shown to reach steady-state values in the rotating frame of the decoupled nuclei when the decoupling field is strong and is applied for a sufficient time interval. The steady-state values are found to be linear combinations of the density-matrix elements at the beginning of the decoupling period, and often involve mixing of populations with multiple-quantum coherences, and mixing of the perpendicular components of the magnetization with higher coherences. This description of decoupling is shown to account for the "illusions" of spin decoupling in 2D gated-decoupler 13C J-resolved spectra reported by Levitt et al.
Modeling Distributions of Non-Coherent Integration Sidelobes
2010-03-01
we can write C_n = X†AX (4.3), where the matrix A is an outer product of real coefficients and "†" denotes the conjugate (Hermitian) transpose. [Figure: desired C/A, in-phase only; interference; interference plus noise.] 5. NON-COHERENT INTEGRATION MODELING
LES tests on airfoil trailing edge serration
NASA Astrophysics Data System (ADS)
Zhu, Wei Jun; Shen, Wen Zhong
2016-09-01
In the present study, a large number of acoustic simulations are carried out for a low-noise airfoil with different Trailing Edge Serrations (TES). The Ffowcs Williams-Hawkings (FWH) acoustic analogy is used for noise prediction at the trailing edge. The acoustic solver runs on the platform of our in-house incompressible flow solver EllipSys3D. The flow solution is first obtained from Large Eddy Simulation (LES); the acoustic computation is then carried out based on the instantaneous hydrodynamic pressure and velocity field. To obtain the time-history data of sound pressure, the flow quantities are integrated around the airfoil surface through the FWH approach. For all the simulations, the chord-based Reynolds number is around 1.5×10^6. In the test matrix, the effects of angle of attack, TE flap angle, and the length/width of the TES are investigated. Even though the airfoil under investigation is already optimized for low noise emission, most numerical simulations and wind tunnel experiments show that the noise level is further decreased by adding the TES device.
Time-reversal optical tomography: detecting and locating extended targets in a turbid medium
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Xu, M.; Gayen, S. K.
2012-03-01
Time Reversal Optical Tomography (TROT) is developed to locate extended target(s) in a highly scattering turbid medium, and estimate their optical strength and size. The approach uses the Diffusion Approximation of the Radiative Transfer Equation for light propagation, along with a Time Reversal (TR) Multiple Signal Classification (MUSIC) scheme that separates signal and noise subspaces for assessment of target location. A MUSIC pseudo spectrum is calculated using the eigenvectors of the TR matrix T, whose poles provide target locations. Based on the pseudo spectrum contours, retrieval of target size is modeled as an optimization problem, using a "local contour" method. The eigenvalues of T are related to optical strengths of targets. The efficacy of TROT to obtain the location, size, and optical strength of one absorptive target, one scattering target, and two absorptive targets, all at different noise levels, was tested using simulated data. Target locations were always accurately determined. Error in optical strength estimates was small even at a 20% noise level. Target size and shape were more sensitive to noise. Results from simulated data demonstrate the high potential of TROT for practical biomedical imaging applications.
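To make the subspace idea concrete, the following is a generic NumPy sketch of a TR-MUSIC pseudospectrum: eigendecompose a time-reversal matrix and test modeled response vectors against the noise subspace. The diffusion-approximation Green's function used in the paper is replaced by random surrogate vectors, and the response model, array sizes, and target positions are illustrative assumptions.

```python
import numpy as np

def music_pseudospectrum(T, green_vectors, n_signal):
    """TR-MUSIC sketch: T is the Hermitian time-reversal matrix,
    green_vectors[:, j] is the modeled detector response for grid point j,
    n_signal is the assumed number of targets (signal-subspace dimension)."""
    w, v = np.linalg.eigh(T)                  # eigenvalues in ascending order
    noise_sub = v[:, :-n_signal]              # eigenvectors spanning the noise subspace
    g = green_vectors / np.linalg.norm(green_vectors, axis=0, keepdims=True)
    # The pseudospectrum peaks where g is nearly orthogonal to the noise subspace.
    proj = np.linalg.norm(noise_sub.conj().T @ g, axis=0) ** 2
    return 1.0 / np.maximum(proj, 1e-12)

# Toy usage: 16 detectors, 200 candidate grid points, 2 targets at columns 40 and 120.
rng = np.random.default_rng(0)
G = rng.standard_normal((16, 200)) + 1j * rng.standard_normal((16, 200))
K = G[:, [40, 120]] @ np.diag([1.0, 0.5]) @ G[:, [40, 120]].T   # simple response model
T = K @ K.conj().T                                              # time-reversal operator (K K^H form)
spec = music_pseudospectrum(T, G, n_signal=2)
print("peak grid points:", np.argsort(spec)[-2:])
```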
NASA Astrophysics Data System (ADS)
Young, Frederic; Siegel, Edward
Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle "square-of-opposition" tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: "NIT-picking"(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science"/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, and this error is often referred to as "noise". Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims to obtain an acceptable solution that is less sensitive to noise perturbations despite the ill-posedness of the problem. The illustrated results show that TGSVD has many advantages, such as higher precision, better adaptability, and stronger noise immunity compared with TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving identification accuracy and handling the ill-posedness when the method is used to identify moving forces on a bridge.
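To illustrate why truncation tames the noise sensitivity of Ax = b, the sketch below uses the simpler standard-form case (regularization matrix L equal to the identity), i.e. plain truncated SVD rather than the TGSVD used in the paper; the smoothing-kernel test problem, noise level, and truncation index are illustrative assumptions.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: keep only the k largest singular
    values so that noise amplified by the tiny singular values is discarded.
    (Standard-form case; TGSVD generalizes this to a regularization matrix L.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Toy ill-posed example: a smoothing kernel with rapidly decaying singular values.
rng = np.random.default_rng(0)
n = 80
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)    # nearly rank-deficient kernel
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)        # small measurement "noise"
x_naive = np.linalg.solve(A, b)                        # direct inversion: noise blows up
x_tsvd = tsvd_solve(A, b, k=20)                        # truncated solution stays stable
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```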
Hu, Bo Hua; Cai, Qunfeng; Hu, Zihua; Patel, Minal; Bard, Jonathan; Jamison, Jennifer; Coling, Donald
2012-01-01
Matrix metalloproteinases (MMPs) and their related gene products regulate essential cellular functions. An imbalance in MMPs has been implicated in various neurological disorders, including traumatic injuries. Here, we report a role for MMPs and their related gene products in the modulation of cochlear responses to acoustic trauma in rats. The normal cochlea was shown to be enriched in MMP enzymatic activity, and this activity was reduced in a time-dependent fashion after traumatic noise injury. The analysis of gene expression by RNA-seq and qRT-PCR revealed the differential expression of MMPs and their related genes between functionally specialized regions of the sensory epithelium. The expression of these genes was dynamically regulated between the acute and chronic phases of noise-induced hearing loss. Moreover, noise-induced expression changes in two endogenous MMP inhibitors, Timp1 and Timp2, in sensory cells were dependent upon the stage of nuclear condensation, suggesting a specific role for MMP activity in sensory cell apoptosis. A short-term application of doxycycline, a broad-spectrum inhibitor of MMPs, prior to noise exposure reduced noise-induced hearing loss and sensory cell death. By contrast, a 7-day treatment compromised hearing sensitivity and potentiated noise-induced hearing loss. This detrimental effect of the long-term inhibition of MMPs on noise-induced hearing loss was further confirmed using targeted Mmp7 knockout mice. Together, these observations suggest that MMPs and their related genes participate in the regulation of cochlear responses to acoustic overstimulation and that the modulation of MMP activity can serve as a novel therapeutic target for the reduction of noise-induced cochlear damage. PMID:23100416
NASA Astrophysics Data System (ADS)
Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented using the SNR-corrected complex correlation. An SNR-correction theory for the complex correlation calculation is presented. The method also integrates a motion-artifact-removal method for the sample-motion-induced decorrelation artifact. The theory is further extended to compute more reliable correlation using multi-channel OCT systems, such as Jones-matrix OCT. High-contrast vasculature imaging of the in vivo human posterior eye has been obtained. Composite imaging of cmOCA and the degree of polarization uniformity indicates abnormalities of vasculature and pigmented tissues simultaneously.
Pumped shot noise in adiabatically modulated graphene-based double-barrier structures.
Zhu, Rui; Lai, Maoli
2011-11-16
Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.
Pumped shot noise in adiabatically modulated graphene-based double-barrier structures
NASA Astrophysics Data System (ADS)
Zhu, Rui; Lai, Maoli
2011-11-01
Quantum pumping processes are accompanied by considerable quantum noise. Based on the scattering approach, we investigated the pumped shot noise properties in adiabatically modulated graphene-based double-barrier structures. It is found that compared with the Poisson processes, the pumped shot noise is dramatically enhanced where the dc pumped current changes flow direction, which demonstrates the effect of the Klein paradox.
ERIC Educational Resources Information Center
Hollander, Cara; de Andrade, Victor Manuel
2014-01-01
Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…
External Acoustic Liners for Multi-Functional Aircraft Noise Reduction
NASA Technical Reports Server (NTRS)
Jones, Michael G. (Inventor); Czech, Michael J. (Inventor); Howerton, Brian M. (Inventor); Thomas, Russell H. (Inventor); Nark, Douglas M. (Inventor)
2017-01-01
Acoustic liners for aircraft noise reduction include one or more chambers that are configured to provide a pressure-release surface such that the engine noise generation process is inhibited and/or absorb sound by converting the sound into heat energy. The size and shape of the chambers can be selected to inhibit the noise generation process and/or absorb sound at selected frequencies.
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
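As a toy illustration of the central contrast above, the sketch below compares a magnitude-only correlation with a correlation that uses real and imaginary parts jointly (a real-valued-isomorphism view of complex correlation) for two simulated voxel time series whose task response lives mostly in the phase; the simulated signal model and parameters are illustrative assumptions, and the paper's full temporal-Fourier-domain framework is not reproduced.

```python
import numpy as np

def pearson(a, b):
    """Correlation treating complex samples as real 2-vectors (real/imag jointly)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.real(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
n = 200
task = np.sin(2 * np.pi * np.arange(n) / 20)            # shared task waveform

def voxel(noise_sd):
    # Weak task-related magnitude change, clearer task-locked phase change.
    mag = 100 + 0.2 * task + rng.normal(0, noise_sd, n)
    phs = 0.05 * task + rng.normal(0, 0.02, n)
    return mag * np.exp(1j * phs)

v1, v2 = voxel(1.0), voxel(1.0)
r_mo = pearson(np.abs(v1), np.abs(v2))                   # magnitude-only correlation
r_cv = pearson(v1, v2)                                   # complex-valued correlation
print(f"MO correlation {r_mo:.3f}, CV correlation {r_cv:.3f}")
```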
NASA Astrophysics Data System (ADS)
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expression of the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are thoroughly estimated. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budget are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL stratospheric ozone DIAL located at Mauna Loa Observatory, Hawai'i.
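As a simplified illustration of the final combination step only, the sketch below adds independent, already-propagated uncertainty components in quadrature per altitude bin; the component names and values are illustrative assumptions, and the covariance terms that the approach above requires for vertically filtered signals and shared detection hardware are deliberately omitted here.

```python
import numpy as np

# Illustrative per-altitude-bin standard uncertainties (same units as the ozone
# product), one array per independent component already propagated through the
# processing chain. Names and numbers are placeholders.
u_components = {
    "detection_noise":  np.array([2.0, 1.5, 1.2, 1.0]),
    "saturation":       np.array([0.5, 0.4, 0.3, 0.2]),
    "background":       np.array([0.3, 0.3, 0.3, 0.3]),
    "o3_cross_section": np.array([0.8, 0.8, 0.8, 0.8]),
    "air_density":      np.array([0.4, 0.4, 0.4, 0.4]),
}

# Combined standard uncertainty: root-sum-of-squares of independent components.
u_combined = np.sqrt(sum(u ** 2 for u in u_components.values()))
print(u_combined)
```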
Quantum properties of light emitted by dipole nano-laser
NASA Astrophysics Data System (ADS)
Ghannam, Talal
Recent technological advances allow entire optical systems to be lithographically implanted on small silicon chips. These systems include tiny semiconductor lasers that function as light sources for digital optical signals. Future advances will rely on even smaller components. At the theoretical limit of this process, the smallest lasers will have an active medium consisting of a single atom (natural or artificial). Several suggestions for how this can be accomplished have already been published, such as nano-lasers based on photonic crystals and nanowires. In particular, the "dipole nanolaser" consists of a single quantum dot functioning as the active medium. It is optically coupled to metal nanoparticles that form a resonant cavity. Laser light is generated from the near-field optical signal. The proposed work is a theoretical exploration of the nature of the resulting laser light. The dynamics of the system will be studied and relevant time scales described. These will form the basis for a set of operator equations describing the quantum properties of the emitted light. The dynamics will be studied in both density matrix and quantum Langevin formulations, with attention directed to noise sources. The equations will be linearized and solved using standard techniques. The result of the study will be a set of predicted noise spectra describing the statistics of the emitted light. The goal will be to identify the major noise contributions and suggest methods for suppressing them. This will be done by studying the probability of getting squeezed light from the nanoparticle for certain parameter schemes.
Pump RIN-induced impairments in unrepeatered transmission systems using distributed Raman amplifier.
Cheng, Jingchi; Tang, Ming; Lau, Alan Pak Tao; Lu, Chao; Wang, Liang; Dong, Zhenhua; Bilal, Syed Muhammad; Fu, Songnian; Shum, Perry Ping; Liu, Deming
2015-05-04
Unrepeatered transmission systems based on high-spectral-efficiency modulation formats and distributed Raman amplification (DRA) have attracted much attention recently. To enhance the reach and optimize system performance, careful design of the DRA is required, based on the analysis of various types of impairments and their balance. In this paper, we study various pump-RIN-induced distortions on high-spectral-efficiency modulation formats. The vector theory of both 1st- and higher-order stimulated Raman scattering (SRS) using the Jones-matrix formalism is presented. The pump RIN induces three types of distortion on high-spectral-efficiency signals: intensity noise stemming from SRS, phase noise stemming from cross-phase modulation (XPM), and polarization crosstalk stemming from cross-polarization modulation (XPolM). An analytical model for the statistical properties of relative phase noise (RPN) in higher-order DRA, which avoids dealing with the full vector theory, is derived. The impact of pump-RIN-induced impairments is analyzed in polarization-multiplexed (PM)-QPSK and PM-16QAM-based unrepeatered system simulations using 1st-, 2nd- and 3rd-order forward-pumped Raman amplifiers. It is shown that at realistic RIN levels, negligible impairments are induced on PM-QPSK signals in 1st- and 2nd-order DRA, while non-negligible impairments occur in the 3rd-order case. PM-16QAM signals suffer more penalties than PM-QPSK at the same on-off gain, where both 2nd- and 3rd-order DRA cause non-negligible performance degradations. We also investigate the performance of digital signal processing (DSP) algorithms to mitigate such impairments.
Optics measurement and correction for the Relativistic Heavy Ion Collider
NASA Astrophysics Data System (ADS)
Shen, Xiaozhe
The quality of beam optics is of great importance for the performance of a high energy accelerator like the Relativistic Heavy Ion Collider (RHIC). The turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive beam optics. However, the accuracy of the derived beam optics is often limited by the performance and imperfections of instruments as well as measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desired to extract noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second order blind identification (SOBI) has been proven to be particularly efficient in extracting physical beam signals from TBT BPM data even in the presence of instrumental noise and errors. We applied the SOBI ICA algorithm to RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC-dipole-driven coherent beam oscillation. From the same data, a first systematic estimation of RHIC BPM noise performance was also obtained by the SOBI ICA algorithm, and showed good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme of using horizontal closed orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents principles of the SOBI ICA algorithm and theory as well as experimental results of optics measurement and correction at RHIC.
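To convey the flavor of second-order blind source separation, the sketch below implements the simpler single-lag variant (often called AMUSE) rather than SOBI's joint diagonalization over many lags: whiten the multichannel data, then diagonalize one symmetrized lagged covariance to separate source signals. The toy "betatron" modes, channel count, and noise level are illustrative assumptions.

```python
import numpy as np

def amuse(X, lag=1):
    """Single-lag second-order blind source separation (AMUSE), a simplified
    stand-in for SOBI. X: (channels, samples) array, e.g. TBT BPM readings."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening transform from the zero-lag covariance.
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W @ X
    # The symmetrized lagged covariance of whitened data is diagonalized by an
    # orthogonal matrix whose rows separate temporally structured sources.
    C_tau = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C_tau = 0.5 * (C_tau + C_tau.T)
    _, V = np.linalg.eigh(C_tau)
    return V.T @ Z            # estimated source signals (oscillation modes, noise, ...)

# Toy usage: two sinusoidal modes mixed into 6 noisy channels.
rng = np.random.default_rng(0)
n = 2000
s = np.vstack([np.sin(0.31 * np.arange(n)), np.sin(0.12 * np.arange(n))])
X = rng.standard_normal((6, 2)) @ s + 0.1 * rng.standard_normal((6, n))
sources = amuse(X)
```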
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
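As an illustration of how a threshold-versus-noise analysis can separate internal noise from efficiency, the sketch below fits one common parameterization of the linear amplifier model, in which squared contrast thresholds grow linearly with external noise variance; the exact model form used in the paper may differ, and the data values, parameter names, and starting guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_threshold(sigma_ext, sigma_int, efficiency):
    """One common linear-amplifier-model form: the threshold contrast is set by
    the quadrature sum of external and equivalent internal noise, scaled by a
    sampling-efficiency term."""
    return np.sqrt((sigma_ext ** 2 + sigma_int ** 2) / efficiency)

# Illustrative threshold-versus-noise data (external noise SD, threshold contrast).
sigma_ext = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
thresholds = np.array([0.08, 0.09, 0.12, 0.21, 0.41])

popt, _ = curve_fit(lam_threshold, sigma_ext, thresholds, p0=[0.05, 1.0])
sigma_int, efficiency = popt
print(f"equivalent internal noise {sigma_int:.3f}, efficiency {efficiency:.2f}")
```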
NASA Astrophysics Data System (ADS)
Stinson, Michael R.
2003-10-01
Our world continues to be a noisy place and the challenge to "increase and diffuse knowledge of noise propagation, passive and active noise control, and the effects of noise" remains. In the last several years, noise in the classroom has emerged as one of the hotter topics: Considerable progress has been made in the underpinning research, the formulation of recommendations, and the process of educating society on the social and personal impact of inadequate acoustical conditions in classrooms. The establishment of the ANSI S12.60-2002 standard for classroom acoustics was a milestone event. Noise in cities and the understanding of our soundscapes are subjects of ongoing significance. The development of standards and regulations is a continuing process, with urban community noise regulations, aviation noise, and the preservation of natural quiet in national parks being of current concern. New methods to reduce noise are under development and include passive and active methods of noise control, techniques for modeling the performance of noise barriers, and approaches for designing product sound quality.
A Comparison of seismic instrument noise coherence analysis techniques
Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.
2011-01-01
The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
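For orientation, the sketch below shows the simplest two-sensor coherence-based incoherent-noise estimate: two collocated sensors see the same ground signal plus independent instrument noise, and the part of one sensor's power that is incoherent with the other approximates its self-noise, assuming identical responses. The synthetic "ground" signal, noise levels, and spectral-estimation settings are illustrative assumptions, and the three-channel method of Sleeman et al. is not reproduced.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs, n = 100.0, 200_000
ground = np.cumsum(rng.standard_normal(n))          # common "seismic" signal (red-ish)
x1 = ground + 0.5 * rng.standard_normal(n)          # sensor 1 = signal + self-noise
x2 = ground + 0.5 * rng.standard_normal(n)          # sensor 2 = signal + self-noise

f, P11 = welch(x1, fs=fs, nperseg=4096)
_, P22 = welch(x2, fs=fs, nperseg=4096)
_, P12 = csd(x1, x2, fs=fs, nperseg=4096)

# Incoherent-noise estimate for sensor 1 (identical-response assumption):
# subtract the portion of P11 that is coherent with sensor 2.
N1 = P11 - np.abs(P12) ** 2 / P22
```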
Inpainting approaches to fill in detector gaps in phase contrast computed tomography
NASA Astrophysics Data System (ADS)
Brun, F.; Delogu, P.; Longo, R.; Dreossi, D.; Rigon, L.
2018-01-01
Photon counting semiconductor detectors in radiation imaging present attractive properties, such as high efficiency, low noise, and energy sensitivity. The very complex electronics limits the sensitive area of current devices to a few square cm. This disadvantage is often compensated by tiling a larger matrix with an adequate number of detector units, but this usually results in non-negligible insensitive gaps between two adjacent modules. When considering the case of Computed Tomography (CT), these gaps lead to degraded reconstructed images with severe streak and ring artifacts. This work presents two digital image processing solutions to fill in these gaps when considering the specific case of synchrotron radiation x-ray parallel beam phase contrast CT. While not demonstrated here with experimental data, other CT modalities, such as spectral CT, cone-beam CT, and other geometries, might benefit from the presented approaches.
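As the simplest possible inpainting baseline for such gaps, the sketch below fills dead detector columns of a sinogram by 1-D linear interpolation along each projection row; the paper's two solutions may be more elaborate, and the synthetic sinogram, gap position, and function names are illustrative assumptions.

```python
import numpy as np

def fill_gaps_interp(sino, gap_cols):
    """Fill dead detector columns of a sinogram (rows = projection angles,
    cols = detector pixels) by 1-D linear interpolation along each row."""
    filled = sino.copy()
    cols = np.arange(sino.shape[1])
    valid = np.setdiff1d(cols, gap_cols)
    for i in range(sino.shape[0]):
        filled[i, gap_cols] = np.interp(gap_cols, valid, sino[i, valid])
    return filled

# Toy usage: a smooth synthetic sinogram with a 4-pixel gap between two modules.
angles = np.linspace(0, np.pi, 180)[:, None]
pixels = np.linspace(-1, 1, 256)[None, :]
sino = np.exp(-((pixels - 0.3 * np.cos(angles)) ** 2) / 0.05)
gap = np.arange(126, 130)
sino_gappy = sino.copy()
sino_gappy[:, gap] = 0.0                      # simulated insensitive detector gap
sino_filled = fill_gaps_interp(sino_gappy, gap)
```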
Speech enhancement on smartphone voice recording
NASA Astrophysics Data System (ADS)
Tris Atmaja, Bagus; Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
Speech enhancement is a challenging task in audio signal processing: enhancing the quality of a targeted speech signal while suppressing other noise. Speech enhancement algorithms have developed rapidly, from spectral subtraction, Wiener filtering, and the spectral amplitude MMSE estimator to Non-negative Matrix Factorization (NMF). The smartphone, as a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why the NMF algorithm is widely used for this speech enhancement task. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned above. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others, as evaluated by spectrograms and PESQ scores.
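For reference, the sketch below implements the oldest baseline named above, spectral subtraction: estimate an average noise magnitude spectrum from an assumed speech-free lead-in segment, subtract it from each short-time frame, and apply a spectral floor. The parameters, the synthetic test signal, and the noise-only lead-in assumption are illustrative; the supervised Kullback-Leibler NMF variant is not reproduced.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_seconds=0.5, nperseg=512, floor=0.05):
    """Subtract an average noise magnitude spectrum (estimated from the first
    `noise_seconds` of the recording, assumed speech-free) from each frame."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    n_noise_frames = max(1, int(noise_seconds * fs / (nperseg // 2)))
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)   # spectral floor
    _, x_clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_clean

# Toy usage: a tone buried in white noise, with a noise-only lead-in.
fs = 16_000
t = np.arange(0, 3.0, 1 / fs)
speechish = np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
noisy = speechish + 0.3 * np.random.default_rng(0).standard_normal(t.size)
enhanced = spectral_subtraction(noisy, fs)
```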
A study of poultry processing plant noise characteristics and potential noise control techniques
NASA Technical Reports Server (NTRS)
Wyvill, J. C.; Jape, A. D.; Moriarity, L. J.; Atkins, R. D.
1980-01-01
The noise environment in a typical poultry processing plant was characterized by developing noise contours for two representative plants: Central Soya of Athens, Inc., Athens, Georgia, and Tip Top Poultry, Inc., Marietta, Georgia. Contour information was restricted to the evisceration area of both plants because nearly 60 percent of all process employees are stationed in this area during a normal work shift. Both plant evisceration areas were composed of tile walls, sheet metal ceilings, and concrete floors. Processing was performed in an assembly-line fashion in which the birds travel through the area on overhead shackles while personnel remain at fixed stations. Processing machinery was present throughout the area. In general, the poultry processing noise problem is the result of loud sources and reflective surfaces. Within the evisceration area, it can be concluded that only a few major sources (lung guns, a chiller component, and hock cutters) are responsible for essentially all direct and reverberant sound pressure levels currently observed during normal operations. Consequently, any effort to reduce the noise problem must first address the sound power output of these sources and/or the absorptive qualities of the room.
Thermal noise limit for ultra-high vacuum noncontact atomic force microscopy
Lübbe, Jannis; Temmen, Matthias; Rode, Sebastian; Rahe, Philipp; Kühnle, Angelika
2013-01-01
The noise of the frequency-shift signal Δf in noncontact atomic force microscopy (NC-AFM) consists of cantilever thermal noise, tip–surface-interaction noise and instrumental noise from the detection and signal processing systems. We investigate how the displacement-noise spectral density d_z at the input of the frequency demodulator propagates to the frequency-shift-noise spectral density d_Δf at the demodulator output in dependence of cantilever properties and settings of the signal processing electronics, in the limit of a negligible tip–surface interaction and a measurement under ultrahigh-vacuum conditions. For a quantification of the noise figures, we calibrate the cantilever displacement signal and determine the transfer function of the signal-processing electronics. From the transfer function and the measured d_z, we predict d_Δf for specific filter settings, a given level of detection-system noise spectral density d_z,ds and the cantilever-thermal-noise spectral density d_z,th. We find an excellent agreement between the calculated and measured values for d_Δf. Furthermore, we demonstrate that thermal noise in d_Δf, defining the ultimate limit in NC-AFM signal detection, can be kept low by a proper choice of the cantilever, whereby its Q-factor should be given most attention. A system with low-noise signal detection and a suitable cantilever, operated with appropriate filter and feedback-loop settings, allows room temperature NC-AFM measurements at a low thermal-noise limit with a significant bandwidth. PMID:23400758
Thermal noise limit for ultra-high vacuum noncontact atomic force microscopy.
Lübbe, Jannis; Temmen, Matthias; Rode, Sebastian; Rahe, Philipp; Kühnle, Angelika; Reichling, Michael
2013-01-01
The noise of the frequency-shift signal Δf in noncontact atomic force microscopy (NC-AFM) consists of cantilever thermal noise, tip-surface-interaction noise and instrumental noise from the detection and signal processing systems. We investigate how the displacement-noise spectral density d_z at the input of the frequency demodulator propagates to the frequency-shift-noise spectral density d_Δf at the demodulator output in dependence of cantilever properties and settings of the signal processing electronics, in the limit of a negligible tip-surface interaction and a measurement under ultrahigh-vacuum conditions. For a quantification of the noise figures, we calibrate the cantilever displacement signal and determine the transfer function of the signal-processing electronics. From the transfer function and the measured d_z, we predict d_Δf for specific filter settings, a given level of detection-system noise spectral density d_z,ds and the cantilever-thermal-noise spectral density d_z,th. We find an excellent agreement between the calculated and measured values for d_Δf. Furthermore, we demonstrate that thermal noise in d_Δf, defining the ultimate limit in NC-AFM signal detection, can be kept low by a proper choice of the cantilever, whereby its Q-factor should be given most attention. A system with low-noise signal detection and a suitable cantilever, operated with appropriate filter and feedback-loop settings, allows room temperature NC-AFM measurements at a low thermal-noise limit with a significant bandwidth.
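For a rough sense of the noise budget discussed above, the sketch below evaluates two commonly quoted textbook expressions for contributions to the frequency-shift noise spectral density, a frequency-independent thermal term and a detection-system term rising with modulation frequency, and combines them in quadrature. These are standard forms rather than the paper's exact derivation (conventions for amplitude definitions vary), and all cantilever and detection parameters are illustrative assumptions.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]

# Illustrative cantilever and detection parameters (assumptions, not the paper's).
f0 = 300e3                 # resonance frequency [Hz]
k = 40.0                   # stiffness [N/m]
Q = 30_000                 # quality factor (UHV)
A = 10e-9                  # oscillation amplitude [m]
T = 300.0                  # temperature [K]
dz_ds = 100e-15            # detection-system displacement noise [m/sqrt(Hz)]

fm = np.linspace(1.0, 1e3, 1000)                       # modulation frequency axis [Hz]
# Thermal contribution (one commonly quoted form): flat within the demodulation band.
d_df_thermal = np.sqrt(f0 * kB * T / (np.pi * k * A ** 2 * Q)) * np.ones_like(fm)
# Detection-system contribution: rises linearly with modulation frequency.
d_df_detection = np.sqrt(2.0) * dz_ds * fm / A
d_df_total = np.sqrt(d_df_thermal ** 2 + d_df_detection ** 2)
print(f"thermal limit ~ {d_df_thermal[0] * 1e3:.2f} mHz/sqrt(Hz)")
```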
Theory and Measurement of Signal-to-Noise Ratio in Continuous-Wave Noise Radar.
Stec, Bronisław; Susek, Waldemar
2018-05-06
Determination of the signal power-to-noise power ratio at the input and output of reception systems is essential to the estimation of their quality and signal reception capability. This issue is especially important in the case when both the signal and the noise have the same characteristics as Gaussian white noise. This article considers the problem of how the signal-to-noise ratio is changed as a result of signal processing in the correlation receiver of a noise radar, in order to determine the ability to detect weak features in the presence of strong clutter-type interference. These studies concern both theoretical analysis and practical measurements of a noise radar with a digital correlation receiver for 9.2 GHz bandwidth. Firstly, the signals participating individually in the correlation process are defined and the terms signal and interference are ascribed to them. Further studies show that it is possible to distinguish a signal and a noise at the input and at the output of a correlation receiver, respectively, when all the considered noises are in the form of white noise. Considering the above, a measurement system is designed in which it is possible to represent the actual conditions of noise radar operation and to measure the power of a useful noise signal and of interference noise signals, in particular the power of an internal leakage signal between the transmitter and the receiver of the noise radar. The proposed measurement stands and the obtained results show that optimization is possible by means of the hardware rather than by complex processing of the noise signal. The radar parameters depend on its prospective application, such as short- and medium-range radar, ground-penetrating radar, and through-the-wall detection radar.
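To give a qualitative feel for what correlation processing does to the signal-to-noise ratio in such a system, the toy simulation below transmits a white Gaussian noise waveform, builds a received signal containing a weak delayed echo, a strong transmitter-to-receiver leakage term, and receiver noise, and cross-correlates against the transmitted reference so the echo delay emerges from the noise floor; the delays, gains, and sample counts are illustrative assumptions, not the paper's measurement setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tx = rng.standard_normal(n)                    # transmitted noise waveform (reference)

delay, echo_gain = 500, 0.02                   # target echo: delayed and strongly attenuated
leak_gain = 0.5                                # transmitter-to-receiver leakage near zero delay
rx = np.zeros(n)
rx[delay:] += echo_gain * tx[:-delay]          # weak target echo
rx += leak_gain * tx                           # internal leakage signal
rx += rng.standard_normal(n)                   # receiver and interference noise

# Correlation receiver: correlate the received signal with delayed replicas of the
# transmitted reference; integration gain lifts the weak echo above the noise floor
# even though it is invisible in the raw time series.
max_lag = 1000
lags = np.arange(max_lag)
corr = np.array([np.dot(rx[lag:], tx[:n - lag]) / (n - lag) for lag in lags])
print("strongest lags:", np.argsort(np.abs(corr))[-2:])   # expect leakage (0) and echo (500)
```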