NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves processing large volumes of data and is often time consuming. To speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates ZNCC computation by up to 28.8 times compared with directly calculated ZNCC, depending on the sizes of the interrogation window and region of interest. The results of three synthetic test cases (a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow) are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the present results, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
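The ZNCC of two interrogation windows, and the projection shortcut the abstract describes, can be sketched as follows. This is a minimal illustration; `projection_zncc` is our own simplified reading of the parallel-projection idea, not the authors' exact algorithm:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def projection_zncc(a, b):
    """Cheaper surrogate: correlate the row and column projections of the
    windows instead of their full 2-D intensity fields (hypothetical
    simplification of the parallel-projection procedure)."""
    return 0.5 * (zncc(a.sum(axis=0), b.sum(axis=0))
                  + zncc(a.sum(axis=1), b.sum(axis=1)))

rng = np.random.default_rng(0)
win = rng.random((16, 16))
z_full = zncc(win, win)             # identical windows give 1.0
z_proj = projection_zncc(win, win)  # projections of identical windows: also 1.0
```

Correlating two M-point projections costs O(M) work per lag instead of O(M^2) for the full window, which is where a speed-up of the reported kind can come from.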
Phase demodulation from a single fringe pattern based on a correlation technique.
Robin, Eric; Valle, Valéry
2004-08-01
We present a method for determining the demodulated phase from a single fringe pattern. This method, based on a correlation technique, searches in a zone of interest for the degree of similarity between a real fringe pattern and a mathematical model. This method, named modulated phase correlation, is tested with different examples.
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
Lemeshewsky, George P.; Schowengerdt, Robert A.
2000-01-01
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
Eslami, Taban; Saeed, Fahad
2018-04-20
Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that has been regularly used for studying the brain's functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson's correlation coefficient, which is commonly used for constructing functional networks and studying dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivity among brain regions. fMRI scanners produce a huge number of voxels, and traditional central processing unit (CPU)-based techniques for computing pairwise correlations are very time consuming, especially when a large number of subjects are studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Exploiting the symmetry of Pearson's correlation, this approach returns the N(N-1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing the correlations in a one-dimensional array in the order proposed in this paper is convenient for further use. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as the sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
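A CPU sketch of the pairwise computation and the upper-triangle layout the abstract describes (illustrative only; the paper's contribution is the GPU implementation):

```python
import numpy as np

def pairwise_pcc_upper(X):
    """Pearson correlation of every pair of rows of X (voxel time series).
    Returns the N(N-1)/2 coefficients of the strictly upper triangle,
    row-major, as a one-dimensional array."""
    Xn = X - X.mean(axis=1, keepdims=True)
    Xn /= np.linalg.norm(Xn, axis=1, keepdims=True)
    R = Xn @ Xn.T                       # full N x N correlation matrix
    return R[np.triu_indices(X.shape[0], k=1)]

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 100))       # 5 voxels, 100 time points
c = pairwise_pcc_upper(X)               # 5*4/2 = 10 coefficients
```

On a GPU the same normalization-plus-matrix-product structure maps onto a single GEMM, which is why this formulation parallelizes well.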
Wavelet filtered shifted phase-encoded joint transform correlation for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new wavelet-filtered shifted-phase-encoded joint transform correlation (WPJTC) technique is proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different wavelet basis functions, an optimal method is proposed by considering discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination than alternative pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested on the Yale and extended Yale facial databases under different conditions such as illumination variation, noise, and 3D changes in facial expression. Test results show that the proposed WPJTC outperforms alternative JTC-based face recognition techniques.
Ooi, Chia Huey; Chetty, Madhu; Teng, Shyh Wei
2006-06-23
Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied on microarray datasets are either rank-based and hence do not take into account correlations between genes, or are wrapper-based, which require high computational cost, and often yield difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures which are less than meticulous, resulting in overly optimistic estimates of accuracy. We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). DDP functions as a parameter to strike the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy while using reasonably small predictor set sizes for nine well-known multiclass microarray datasets. For multiclass microarray datasets, especially the GCM and NCI60 datasets, DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies which employed similarly realistic evaluation procedures.
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner (dot) product of segments of two signals. The time lag for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals; in addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed, with some success, to estimate signal delay more accurately than the sample interval of the digitizer clock, including interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that significantly improves the accuracy of time delay estimation. In this process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. The process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided that yields the same accuracy as for complex-valued signals.
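The core idea (find the correlation-magnitude peak, then interpolate the zero crossing of the correlation phase) can be sketched for complex signals as follows; the test signal and all parameter choices are our own, not from the paper:

```python
import numpy as np

def phase_delay(x, y):
    """Sub-sample delay of y relative to x: locate the peak of |r(k)|,
    then linearly interpolate the zero crossing of the correlation phase
    between the peak lag and its neighbour on the other side of zero."""
    r = np.correlate(y, x, mode="full")        # r[m] is the lag m - (len(x)-1)
    lags = np.arange(-len(x) + 1, len(y))
    k = int(np.argmax(np.abs(r)))
    ph = np.angle(r)
    if ph[k] == 0.0:
        return float(lags[k])
    j = k + 1 if ph[k] < 0 else k - 1          # neighbour across the zero
    return float(lags[k] - ph[k] * (lags[j] - lags[k]) / (ph[j] - ph[k]))

# synthetic narrowband test: Gaussian envelope, true delay 3.4 samples
t = np.arange(256.0)
env = lambda u: np.exp(-((u - 128.0) / 30.0) ** 2)
x = env(t) * np.exp(1j * 0.3 * t)
y = env(t - 3.4) * np.exp(1j * 0.3 * (t - 3.4))
est = phase_delay(x, y)
```

For this narrowband signal the correlation phase is linear through the true delay, so the interpolated zero crossing recovers 3.4 samples even though the lag grid is integer.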
A scalable correlator for multichannel diffuse correlation spectroscopy.
Stapels, Christopher J; Kolodziejski, Noah J; McAdams, Daniel; Podolsky, Matthew J; Fernandez, Daniel E; Farkas, Dana; Christian, James F
2016-02-01
Diffuse correlation spectroscopy (DCS) is a technique that enables powerful and robust non-invasive optical studies of tissue micro-circulation and vascular blood flow. The technique amounts to autocorrelation analysis of coherent photons after their migration through moving scatterers and subsequent collection by single-mode optical fibers. A primary cost driver of DCS instruments is the commercial hardware-based correlator, which has limited the proliferation of multi-channel instruments for validating perfusion analysis as a clinical diagnostic metric. We present the development of a low-cost scalable correlator enabled by microchip-based time-tagging and a software-based multi-tau data analysis method. We discuss the capabilities of the instrument as well as the implementation and validation of 2- and 8-channel systems built for live-animal and pre-clinical settings.
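A software multi-tau correlator of the kind described can be sketched as follows: within each level a few linearly spaced lags are evaluated, then the series is binned pairwise and the lag unit doubles, giving quasi-logarithmic lag coverage. The layout and parameter names (`m`, `levels`) are our own illustrative choices:

```python
import numpy as np

def multi_tau_autocorr(counts, m=8, levels=4):
    """Normalized intensity autocorrelation g(tau) on a quasi-logarithmic
    lag grid, in the multi-tau style: each level halves the time resolution
    by pairwise binning and doubles the lag spacing."""
    x = np.asarray(counts, dtype=float)
    lags, g = [], []
    width = 1
    for level in range(levels):
        start = 1 if level == 0 else m // 2
        for k in range(start, m):
            n = len(x) - k
            if n <= 0:
                break
            num = np.dot(x[:n], x[k:k + n]) / n
            den = x[:n].mean() * x[k:k + n].mean()
            lags.append(k * width)
            g.append(num / den)
        even = len(x) // 2 * 2
        x = 0.5 * (x[:even:2] + x[1:even:2])   # pairwise binning
        width *= 2
    return np.array(lags), np.array(g)

lags, g = multi_tau_autocorr(np.ones(1024))    # constant intensity: g == 1
```

The binning is what keeps the cost manageable over decades of lag time, which is the point of doing the multi-tau analysis in software on time-tagged data.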
Spectroscopic techniques to study the immune response in human saliva
NASA Astrophysics Data System (ADS)
Nepomnyashchaya, E.; Savchenko, E.; Velichko, E.; Bogomaz, T.; Aksenov, E.
2018-01-01
Studies of the immune response dynamics by means of spectroscopic techniques, namely laser correlation spectroscopy and fluorescence spectroscopy, are described. Laser correlation spectroscopy is aimed at measuring the sizes of particles in biological fluids, while fluorescence spectroscopy allows study of conformational and other structural changes in immune complexes. We have developed a new laser correlation spectrometer scheme and an original signal processing algorithm, and we suggest a new fluorescence detection scheme based on a prism and an integrating pin diode. The developed system allows the study of complex processes in human saliva and opens prospects for individualized treatment of immune diseases.
Yatsushiro, Satoshi; Sunohara, Saeko; Hayashi, Naokazu; Hirayama, Akihiro; Matsumae, Mitsunori; Atsumi, Hideki; Kuroda, Kagayaki
2018-04-10
A correlation mapping technique delineating delay time and maximum correlation for characterizing pulsatile cerebrospinal fluid (CSF) propagation was proposed. After proving its technical concept, this technique was applied to healthy volunteers and idiopathic normal pressure hydrocephalus (iNPH) patients. A time-resolved three-dimensional phase contrast (3D-PC) acquisition sampled the cardiac-driven CSF velocity at 32 temporal points per cardiac period at each spatial location using retrospective cardiac gating. The proposed technique visualized distributions of propagation delay and correlation coefficient of the PC-based CSF velocity waveform with reference to a waveform at a particular point in the CSF space. The delay time was obtained as the amount of time shift giving the maximum correlation between the velocity waveform at an arbitrary location and that at the reference location. The validity and accuracy of the technique were confirmed in a flow phantom equipped with a cardiovascular pump. The technique was then applied to evaluate the intracranial CSF motions in young healthy (N = 13) and elderly healthy (N = 13) volunteers and iNPH patients (N = 13). The phantom study demonstrated that the root mean square error of the delay time was 2.27%, which was less than the temporal resolution of the PC measurement used in this study (3.13% of a cardiac cycle). The human studies showed a significant difference (P < 0.01) in the mean correlation coefficient between the young healthy group and the other two groups. A significant difference (P < 0.05) was also recognized in the standard deviation of the correlation coefficients in the intracranial CSF space among all groups. The result suggests that the CSF space compliance of iNPH patients was lower than that of healthy volunteers. The correlation mapping technique allowed us to visualize pulsatile CSF velocity wave propagations as still images. The technique may help to classify diseases related to CSF dynamics, such as iNPH.
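Per voxel, the mapping reduces to finding the circular time shift of the local velocity waveform that maximizes its correlation with the reference waveform. A minimal sketch with our own toy waveform, using 32 samples per cardiac cycle as in the paper:

```python
import numpy as np

def delay_and_peak_corr(v, ref):
    """Return (delay in samples, maximum correlation coefficient) for the
    circular shift of waveform v that best matches the reference waveform."""
    best_k, best_r = 0, -2.0
    for k in range(len(ref)):
        r = np.corrcoef(np.roll(v, -k), ref)[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k, best_r

ref = np.sin(2 * np.pi * np.arange(32) / 32)   # reference-point waveform
v = np.roll(ref, 5)                            # same waveform, delayed 5 samples
delay, rmax = delay_and_peak_corr(v, ref)
```

Repeating this at every spatial location yields the two maps the abstract describes: the delay map (`delay`) and the maximum-correlation map (`rmax`).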
NASA Astrophysics Data System (ADS)
Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo
2018-01-01
Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise, and the denoising performance was compared with that of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. Applying EMD-STRP to the measured lidar signal efficiently suppressed the noise, yielding an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
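The selection step can be sketched as below, assuming the intrinsic mode functions (IMFs) have already been produced by an EMD library. The correlation cutoff `rho` and the universal-threshold rule are our own illustrative choices, and the roughness-penalty smoothing of the relevant modes is omitted:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def emd_strp_denoise(imfs, noisy, rho=0.5):
    """Keep IMFs well correlated with the noisy signal (relevant modes);
    soft-threshold the rest, which are treated as noise-dominated."""
    out = np.zeros_like(noisy, dtype=float)
    for imf in imfs:
        if abs(np.corrcoef(imf, noisy)[0, 1]) >= rho:
            out += imf                          # relevant mode: keep
        else:                                   # irrelevant mode: shrink
            t = (np.median(np.abs(imf)) / 0.6745
                 * np.sqrt(2 * np.log(len(imf))))
            out += soft_threshold(imf, t)
    return out

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 6 * np.pi, 500))
noise = 0.3 * rng.standard_normal(500)
den = emd_strp_denoise([noise, clean], clean + noise)  # pretend 2-mode EMD
```

With the noise mode shrunk and the signal mode kept, the reconstruction tracks the clean sinusoid far more closely than the raw noisy signal does.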
Wear Detection of Drill Bit by Image-based Technique
NASA Astrophysics Data System (ADS)
Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul
2018-03-01
Image processing for computer vision plays an essential role in tool condition monitoring in the manufacturing industries. This study proposes a dependable direct measurement method for tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter and convert the colour image to binary datasets. Then, an edge detection method was applied to characterize the edge of the drill bit. Using a cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to detect the worn edge despite small differences between the graphs. Future development will focus on quantifying the worn profile and enhancing the sensitivity of the technique.
Symmetric Phase Only Filtering for Improved DPIV Data Processing
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
2006-01-01
The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single-exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase-only filtering is a well-known variation of Matched Spatial Filtering which, when used to process DPIV image data, yields correlation peaks that are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase-only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase-only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Neither tedious image masking nor background image subtraction is required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, a variation on the traditional phase-only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of traditionally accepted phase-only filtering techniques and is easily implemented in standard DPIV FFT-based correlation processing with no significant computational performance penalty. An "automatic" SPOF algorithm is presented which determines when the SPOF can provide better signal-to-noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
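The plain phase-only filter on which the SPOF builds can be sketched in a few lines: the cross-power spectrum is whitened to unit magnitude before the inverse FFT, which turns the correlation plane into a near-delta displacement peak. This shows the generic phase-only idea only; the symmetric (SPOF) variant of the paper is not reproduced here:

```python
import numpy as np

def phase_only_shift(a, b):
    """Displacement of subregion b relative to a via phase-only correlation:
    normalize the cross-power spectrum to unit magnitude, inverse-transform,
    and read off the (wrap-corrected) peak location."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    c = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(int(np.argmax(c)), c.shape)
    n, m = c.shape
    return (dy if dy <= n // 2 else dy - n,
            dx if dx <= m // 2 else dx - m)

rng = np.random.default_rng(4)
a = rng.random((32, 32))                # "first exposure" subregion
b = np.roll(a, (3, 5), axis=(0, 1))     # particle pattern displaced by (3, 5)
shift = phase_only_shift(a, b)
```

Because the magnitude spectrum is discarded, slowly varying DC content (flare light, background) contributes little to the correlation plane, which is the robustness property the abstract highlights.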
Modified signed-digit trinary addition using synthetic wavelet filter
NASA Astrophysics Data System (ADS)
Iftekharuddin, K. M.; Razzaque, M. A.
2000-09-01
The modified signed-digit (MSD) number system has been a topic of interest because it allows parallel carry-free addition of two numbers for digital optical computing. In this paper, a harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for the optical implementation of an MSD trinary adder. The realization of carry-propagation-free addition of MSD trinary numerals is demonstrated using a synthetic HWJT correlator model. It is also shown that the proposed synthetic-wavelet-filter-based correlator achieves high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.
Community Detection for Correlation Matrices
NASA Astrophysics Data System (ADS)
MacMahon, Mel; Garlaschelli, Diego
2015-04-01
A challenging problem in the study of complex systems is that of resolving, without prior information, the emergent, mesoscopic organization determined by groups of units whose dynamical activity is more strongly correlated internally than with the rest of the system. The existing techniques to filter correlations are not explicitly oriented towards identifying such modules and can suffer from an unavoidable information loss. A promising alternative is that of employing community detection techniques developed in network theory. Unfortunately, this approach has focused predominantly on replacing network data with correlation matrices, a procedure that we show to be intrinsically biased because of its inconsistency with the null hypotheses underlying the existing algorithms. Here, we introduce, via a consistent redefinition of null models based on random matrix theory, the appropriate correlation-based counterparts of the most popular community detection techniques. Our methods can filter out both unit-specific noise and system-wide dependencies, and the resulting communities are internally correlated and mutually anticorrelated. We also implement multiresolution and multifrequency approaches revealing hierarchically nested subcommunities with "hard" cores and "soft" peripheries. We apply our techniques to several financial time series and identify mesoscopic groups of stocks which are irreducible to a standard, sectorial taxonomy; detect "soft stocks" that alternate between communities; and discuss implications for portfolio optimization and risk management.
NASA Astrophysics Data System (ADS)
Sarkar, Debdeep; Srivastava, Kumar Vaibhav
2017-02-01
In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite-difference time-domain (FDTD) technique to calculate the envelope correlation coefficient (ECC) of an arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for use with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme uses time-domain signal processing methods and exhibits a significant reduction in ECC computation time compared with the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme when high frequency resolution is required. The proposed FDTD-CGF schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering-parameter-based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung; Curlander, John C.
1991-01-01
This paper discusses the performance of two algorithms for resolving the ambiguity in the estimated Doppler centroid frequency of echoes in synthetic-aperture radar: one based on a range cross-correlation technique, the other on a multiple-pulse-repetition-frequency technique.
The Identification and Tracking of Uterine Contractions Using Template Based Cross-Correlation.
McDonald, Sarah C; Brooker, Graham; Phipps, Hala; Hyett, Jon
2017-09-01
The purpose of this paper is to outline a novel method of using template-based cross-correlation to identify and track uterine contractions during labour. A purpose-built six-channel electromyography (EMG) device was used to collect data from consenting women during labour and birth. A range of templates was constructed for identifying and tracking uterine activity when cross-correlated with the EMG signal. Peak-finding techniques were applied to the cross-correlated result to simplify and automate the identification and tracking of contractions. The EMG data showed a unique pattern when a woman was contracting, with key features of the contraction signal remaining consistent and identifiable across subjects. Contraction profiles across subjects were automatically identified using template-based cross-correlation. Synthetic templates from a rectangular function with a duration of between 5 and 10 s performed best at identifying and tracking uterine activity across subjects. The successful application of this technique provides the opportunity for simple and accurate real-time analysis of contraction data, while enabling investigation of techniques such as machine learning, which could allow automated learning from contraction data as part of real-time monitoring and post-analysis.
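The rectangular-template idea can be sketched as follows on a synthetic EMG envelope; the sampling rate, threshold rule, and peak-picking here are our own illustrative choices, not the paper's:

```python
import numpy as np

def detect_bursts(signal, width, fs=1.0, threshold=None):
    """Cross-correlate a rectangular template of `width` seconds with the
    rectified signal, then pick local maxima above a threshold as candidate
    contraction locations."""
    tpl = np.ones(int(width * fs)) / (width * fs)
    cc = np.convolve(np.abs(signal), tpl[::-1], mode="same")  # correlation
    if threshold is None:
        threshold = cc.mean() + 2 * cc.std()
    peaks = [i for i in range(1, len(cc) - 1)
             if cc[i] >= cc[i - 1] and cc[i] > cc[i + 1] and cc[i] > threshold]
    return np.array(peaks)

# toy uterine-activity envelope: one 8-second burst at 10 Hz sampling
ut = np.zeros(1000)
ut[400:480] = 1.0
peaks = detect_bursts(ut, width=8.0, fs=10.0)
```

Each detected peak marks the alignment at which the rectangular template best overlaps a burst of activity, which is the identification step the paper automates.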
Microstructural Effects on Initiation Behavior in HMX
NASA Astrophysics Data System (ADS)
Molek, Christopher; Welle, Eric; Hardin, Barrett; Vitarelli, Jim; Wixom, Ryan; Samuels, Philip
Understanding the role microstructure plays on ignition and growth behavior has been the subject of a significant body of research within the detonation physics community. The pursuit of this understanding is important because safety and performance characteristics have been shown to strongly correlate to particle morphology. Historical studies have often correlated bulk powder characteristics to the performance or safety characteristics of pressed materials. We believe that a clearer and more relevant correlation is made between the pressed microstructure and the observed detonation behavior. This type of assessment is possible, as techniques now exist for the quantification of the pressed microstructures. Our talk will report on experimental efforts that correlate directly measured microstructural characteristics to initiation threshold behavior of HMX based materials. The internal microstructures were revealed using an argon ion cross-sectioning technique. This technique enabled the quantification of density and interface area of the pores within the pressed bed using methods of stereology. These bed characteristics are compared to the initiation threshold behavior of three HMX based materials using an electric gun based test method. Finally, a comparison of experimental threshold data to supporting theoretical efforts will be made.
Correlation-coefficient-based fast template matching through partial elimination.
Mahmood, Arif; Khan, Sohaib
2012-04-01
Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
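A minimal version of the idea: accumulate the zero-mean correlation row by row and abandon a search location once an upper bound on the remaining rows shows it cannot beat the current best. This is our own simplified formulation using a Cauchy-Schwarz bound, not the paper's basic- or extended-mode algorithm, but like them it is exact (exhaustive-equivalent):

```python
import numpy as np

def match_with_elimination(image, tpl):
    """Exhaustive-equivalent template matching: the partial sum plus a
    Cauchy-Schwarz bound on the unprocessed rows upper-bounds the final
    correlation coefficient, so hopeless locations are abandoned early."""
    t = tpl - tpl.mean()
    row_sq = (t ** 2).sum(axis=1)
    t_tail = np.sqrt(np.cumsum(row_sq[::-1])[::-1])   # ||t[r:]|| for each row r
    tn = np.sqrt((t ** 2).sum())
    H, W = image.shape
    h, w = tpl.shape
    best, best_pos = -np.inf, None
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            win = image[i:i + h, j:j + w]
            win = win - win.mean()
            wn = float(np.sqrt((win ** 2).sum())) or 1.0
            acc = 0.0
            for r in range(h):
                acc += float(np.dot(win[r], t[r]))
                tail = (np.sqrt((win[r + 1:] ** 2).sum()) * t_tail[r + 1]
                        if r + 1 < h else 0.0)
                if (acc + tail) / (wn * tn) <= best:
                    break                       # cannot beat the current best
            else:
                if acc / (wn * tn) > best:
                    best, best_pos = acc / (wn * tn), (i, j)
    return best_pos, best

rng = np.random.default_rng(2)
img = rng.random((20, 20))
tpl = img[5:9, 7:12].copy()                     # template cut from the image
pos, score = match_with_elimination(img, tpl)
```

The bound never underestimates the final coefficient, so skipped locations provably cannot contain the global peak; the savings grow as a good early match tightens `best`.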
Optical correlation techniques in fluid dynamics
NASA Astrophysics Data System (ADS)
Schätzel, K.; Schulz-Dubois, E. O.; Vehrenkamp, R.
1981-04-01
Three flow measurement techniques make use of fast digital correlators. The most widespread is photon correlation velocimetry, which uses crossed laser beams and detects Doppler-shifted light scattered by small particles in the flow. Depending on the processing of the photon correlation output, this technique yields the mean velocity, turbulence level, and even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. In the second method, rate correlation based upon threshold crossings of a high-pass-filtered laser Doppler signal can be used to obtain velocity correlation functions. The most powerful set-up developed in our laboratory uses a phase-locked-loop tracker and a multibit correlator to analyze time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. The last method makes use of refractive index fluctuations (e.g., in two-phase flows) instead of scattering particles. Interferometry with bidirectional counting, together with digital correlation and probability analysis, constitutes a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
Estimated correlation matrices and portfolio optimization
NASA Astrophysics Data System (ADS)
Pafka, Szilárd; Kondor, Imre
2004-11-01
Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and the “true” model, hence also the “true” correlation matrix, is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques.
One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
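One widely used recipe from this random-matrix family (eigenvalue clipping at the Marchenko-Pastur edge) can be sketched as follows; this is a generic illustration of the filtering idea, not the specific estimator comparison of the paper:

```python
import numpy as np

def rmt_filter(C, T):
    """Random-matrix filtering of an N x N correlation matrix estimated from
    T observations: eigenvalues below the Marchenko-Pastur upper edge are
    treated as noise and replaced by their average (trace-preserving), then
    the matrix is renormalized to unit diagonal."""
    N = C.shape[0]
    q = N / T
    lam_plus = (1.0 + np.sqrt(q)) ** 2          # MP upper edge for pure noise
    w, V = np.linalg.eigh(C)
    w = w.copy()
    noise = w < lam_plus
    if noise.any():
        w[noise] = w[noise].mean()              # preserves the trace
    Cf = (V * w) @ V.T
    d = np.sqrt(np.diag(Cf))
    return Cf / np.outer(d, d)

rng = np.random.default_rng(5)
X = rng.standard_normal((20, 200))              # 20 assets, 200 returns each
Cf = rmt_filter(np.corrcoef(X), T=200)
```

Only eigenvalues outside the noise band survive the clipping, so for pure-noise data the filter collapses the estimate toward the identity, which is exactly the behaviour one wants from a noise-reduction step before portfolio optimization.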
Baeten; Bruggeman; Paepen; Carchon
2000-03-01
The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
Ahmed, Towfiq; Haraldsen, Jason T; Rehr, John J; Di Ventra, Massimiliano; Schuller, Ivan; Balatsky, Alexander V
2014-03-28
Nanopore-based sequencing has demonstrated a significant potential for the development of fast, accurate, and cost-efficient fingerprinting techniques for next generation molecular detection and sequencing. We propose a specific multilayered graphene-based nanopore device architecture for the recognition of single biomolecules. Molecular detection and analysis can be accomplished through the detection of transverse currents as the molecule or DNA base translocates through the nanopore. To increase the overall signal-to-noise ratio and the accuracy, we implement a new 'multi-point cross-correlation' technique for identification of DNA bases or other molecules on the single molecular level. We demonstrate that the cross-correlations between each nanopore will greatly enhance the transverse current signal for each molecule. We implement first-principles transport calculations for DNA bases surveyed across a multilayered graphene nanopore system to illustrate the advantages of the proposed geometry. A time-series analysis of the cross-correlation functions illustrates the potential of this method for enhancing the signal-to-noise ratio. This work constitutes a significant step forward in facilitating fingerprinting of single biomolecules using solid state technology.
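A toy sketch of why cross-correlating the transverse currents from several layers boosts the signal-to-noise ratio: a signature common to all layers adds coherently in every pairwise correlation, while independent per-layer noise averages out. The waveform, noise level, and layer count below are illustrative, not taken from the device described above.

```python
import numpy as np

def pairwise_zero_lag_correlation(traces):
    """Average the zero-lag cross-correlations over all distinct pairs of
    traces; the shared signal survives, uncorrelated noise averages away."""
    n_traces, n = len(traces), len(traces[0])
    acc, npairs = 0.0, 0
    for i in range(n_traces):
        for j in range(i + 1, n_traces):
            acc += np.dot(traces[i], traces[j]) / n
            npairs += 1
    return acc / npairs

# hypothetical shared "molecular signature" buried in heavy per-layer noise
rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
sig = 0.5 * np.sin(2 * np.pi * t / 200.0)
traces = [sig + 2.0 * rng.standard_normal(n) for _ in range(4)]

est = pairwise_zero_lag_correlation(traces)   # ~ mean(sig**2) = 0.125
```

Each single trace has signal power 0.125 against noise power 4.0, yet the pair-averaged correlation recovers the signal power cleanly.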
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Techniques are considered for estimating the autocorrelation function of a zero-mean stationary random process; they are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
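The first technique can be sketched as a Robbins-Monro style recursion. The exponential model R(tau) = sigma2 * exp(-alpha * tau), the 1/m gain sequence, and the gradient normalization are assumptions chosen for illustration, not the paper's exact formulation.

```python
import numpy as np

def point_acf(x, max_lag):
    """Standard (biased) point estimate of the autocorrelation function
    of a zero-mean record x."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def fit_alpha_recursively(records, max_lag, alpha0=1.0):
    """Fit R(tau) = sigma2 * exp(-alpha*tau) with one stochastic-approximation
    step per record: decreasing-gain gradient descent on the mean-square
    error between the point estimates and the parametric model."""
    taus = np.arange(max_lag + 1)
    alpha = alpha0
    for m, x in enumerate(records, start=1):
        r_hat = point_acf(x, max_lag)
        sigma2 = r_hat[0]
        model = sigma2 * np.exp(-alpha * taus)
        grad = np.sum((model - r_hat) * model * (-taus))   # d(MSE)/d(alpha)
        alpha -= (1.0 / m) * grad / (sigma2 ** 2)          # gain a_m = 1/m
    return alpha

# successive records from an AR(1) process, whose true ACF is
# R(tau) = sigma2 * phi**tau, i.e. alpha_true = -ln(phi) ~= 0.357 for phi = 0.7
rng = np.random.default_rng(1)
phi, n_rec, n_samp = 0.7, 200, 400
records = []
for _ in range(n_rec):
    e = rng.standard_normal(n_samp)
    x = np.empty(n_samp)
    x[0] = e[0]
    for i in range(1, n_samp):
        x[i] = phi * x[i - 1] + e[i]
    records.append(x)

alpha_est = fit_alpha_recursively(records, max_lag=10)
```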
Study of TEC and foF2 with the Help of GPS and Ionosonde Data over Maitri, Antarctica
NASA Astrophysics Data System (ADS)
Khatarkar, Prakash; Gwal, Ashok Kumar
Prakash Khatarkar, Purusottam Bhaware, Azad Ahmad Mansoori, Varsha Kachneria, Shweta Thakur, and A. K. Gwal
The behavior of the ionosphere can be diagnosed by a number of techniques; the most common are the space-based Global Positioning System (GPS) and the ground-based ionosonde. We compared the variability of ionospheric parameters measured by these two techniques from December 2009 to November 2010 at the Indian base station Maitri (11.45E, 70.45S). The comparison between the measurements of the two techniques was realized through Total Electron Content (TEC) parameters derived using different methods, and was made diurnally, seasonally, annually, and for polar day and polar night conditions. From our analysis we found that a strong correlation exists between GPS-derived TEC and ionosonde-derived foF2 during the day, while during the night the correlation is insignificant. We also found a strong correlation between ionosonde- and GPS-derived TEC. The patterns of variation of the ionospheric parameters derived from the two techniques are strikingly similar, indicating a high degree of synchronization between them. This has practical applicability, as the error in one technique can be estimated by comparison with the other. Keywords: Ionosphere, Ionosonde, GPS, foF2, TEC.
Relative velocity change measurement based on seismic noise analysis in exploration geophysics
NASA Astrophysics Data System (ADS)
Corciulo, M.; Roux, P.; Campillo, M.; Dubuq, D.
2011-12-01
Passive monitoring techniques based on noise cross-correlation analysis are still debated in exploration geophysics, even though recent studies have shown impressive performance in seismology at larger scales. Tracking the time evolution of complex geological structures using noise data involves localization of the noise sources and measurement of relative velocity variations. Monitoring relative velocity variations only requires measuring the phase shifts of seismic noise cross-correlation functions computed for successive time recordings. The existing algorithms, such as the Stretching and Doublet methods, classically demand great effort in terms of computation time, making them impractical when continuous datasets are acquired on dense arrays. We present here an innovative technique for passive monitoring based on measuring the instantaneous phase of noise-correlated signals. The Instantaneous Phase Variation (IPV) technique aims at combining the advantages of the Stretching and Doublet methods while providing a faster measurement of the relative velocity change. The IPV takes advantage of the Hilbert transform to compute, in the time domain, the phase difference between two noise correlation functions. The relative velocity variation is measured through the slope of the linear regression of the phase-difference curve as a function of correlation time. The large number of noise correlation functions classically available at exploration scale on dense arrays allows for a statistical analysis that further improves the precision of the velocity-change estimate. In this work, numerical tests first compare the performance of IPV to the Stretching and Doublet techniques in terms of accuracy, robustness, and computation time. Experimental results are then presented using a seismic noise dataset consisting of five days of continuous recording on 397 geophones spread over a ~1 km² area.
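A minimal sketch of the IPV measurement under stated assumptions: the Hilbert transform is built by FFT, and a single dominant frequency is used to convert the phase-difference slope into a stretching factor. The synthetic waveforms and all parameter values are illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_difference_slope(ref, cur, dt):
    """IPV core step: unwrapped instantaneous-phase difference between two
    correlation functions, reduced to the slope of its linear regression
    against correlation time (rad/s)."""
    phi_ref = np.unwrap(np.angle(analytic_signal(ref)))
    phi_cur = np.unwrap(np.angle(analytic_signal(cur)))
    t = np.arange(len(ref)) * dt
    return np.polyfit(t, phi_cur - phi_ref, 1)[0]

# synthetic check: stretching the waveform by eps mimics a relative velocity
# change; for a dominant frequency f0 the slope is -2*pi*f0*eps
dt, f0, eps = 1e-3, 25.0, 0.01
t = np.arange(2048) * dt
ref = np.sin(2 * np.pi * f0 * t) * np.exp(-t / 1.5)
cur = np.sin(2 * np.pi * f0 * (1 - eps) * t) * np.exp(-t / 1.5)

slope = phase_difference_slope(ref, cur, dt)
eps_est = -slope / (2 * np.pi * f0)           # recovered stretching factor
```

Unlike the Stretching method, no trial dilation grid is scanned: one Hilbert transform and one linear regression per correlation pair, which is where the speed advantage comes from.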
Optical Correlation Techniques In Fluid Dynamics
NASA Astrophysics Data System (ADS)
Schatzel, K.; Schulz-DuBois, E. O.; Vehrenkamp, R.
1981-05-01
Three flow measurement techniques make use of fast digital correlators. (1) Most widespread is photon correlation velocimetry, using crossed laser beams and detecting Doppler-shifted light scattered by small particles in the flow. Depending on the processing of the photon correlogram, this technique yields mean velocity, turbulence level, or even the detailed probability distribution of one velocity component. An improved data processing scheme is demonstrated on laminar vortex flow in a curved channel. (2) Rate correlation, based upon threshold crossings of a high-pass-filtered laser Doppler signal, can be used to obtain velocity correlation functions. The most powerful setup developed in our laboratory uses a phase-locked-loop type tracker and a multibit correlator to analyse time-dependent Taylor vortex flow. With two optical systems and trackers, cross-correlation functions reveal phase relations between different vortices. (3) Making use of refractive index fluctuations (e.g. in two-phase flows) instead of scattering particles, interferometry with bidirectional fringe counting and digital correlation and probability analysis constitutes a new quantitative technique related to classical Schlieren methods. Measurements on a mixing flow of heated and cold air contribute new ideas to the theory of turbulent random phase screens.
Deflection-Based Aircraft Structural Loads Estimation with Comparison to Flight
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Lokos, William A.
2005-01-01
Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; for this study, however, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Development of current techniques and future studies are discussed.
Deflection-Based Structural Loads Estimation From the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Lokos, William A.
2005-01-01
Traditional techniques in structural load measurement entail the correlation of a known load with strain-gage output from the individual components of a structure or machine. The use of strain gages has proved successful and is considered the standard approach for load measurement. However, remotely measuring aerodynamic loads using deflection measurement systems to determine aeroelastic deformation as a substitute for strain gages may yield lower testing costs while improving aircraft performance through reduced instrumentation weight. This technique was examined using a reliable strain and structural deformation measurement system. The objective of this study was to explore the utility of deflection-based load estimation using the active aeroelastic wing F/A-18 aircraft. Calibration data from ground tests performed on the aircraft were used to derive left wing-root and wing-fold bending-moment and torque load equations based on strain gages; for this study, however, point deflections were used to derive deflection-based load equations. Comparisons between the strain-gage and deflection-based methods are presented. Flight data from the phase-1 active aeroelastic wing flight program were used to validate the deflection-based load estimation method. Flight validation revealed a strong bending-moment correlation and a slightly weaker torque correlation. Development of current techniques and future studies are discussed.
Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery
Alam, Mohammad S.; Bhuiyan, Sharif M. A.
2014-01-01
In this paper, we review recent trends and advancements in correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched-filter-based correlation techniques for target detection and tracking, which are widely used in various real-time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and the distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single- and multiple-target detection and tracking using various real-life FLIR image sequences. PMID:25061840
Smartphone based scalable reverse engineering by digital image correlation
NASA Astrophysics Data System (ADS)
Vidvans, Amey; Basu, Saurabh
2018-03-01
There is a need for scalable open-source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation-based characterization of surface speeds, followed by normalization with respect to angular speed during rigid-body rotational motion of the specimen. Proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components are used, along with components fabricated additively from poly-lactic acid polymer on a standard 3D printer. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface reduce reconstruction accuracy; the reasons behind this are delineated in the nature of the correlation function. The theoretically achievable resolution of smartphone-based 3D reconstruction by digital image correlation is derived.
Imaging the square of the correlated two-electron wave function of a hydrogen molecule
Waitz, M.; Bello, R. Y.; Metz, D.; ...
2017-12-22
The toolbox for imaging molecules is well equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive: the spatial and momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have so far escaped direct observation by imaging techniques. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high-energy photofragmentation. With this technique, we examine the H2 two-electron wave function, in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High-energy photoelectrons are shown to be a powerful tool for molecular imaging. Finally, our study paves the way for future time-resolved correlation imaging at FELs and laser-based X-ray sources.
Imaging the square of the correlated two-electron wave function of a hydrogen molecule.
Waitz, M; Bello, R Y; Metz, D; Lower, J; Trinter, F; Schober, C; Keiling, M; Lenz, U; Pitzer, M; Mertens, K; Martins, M; Viefhaus, J; Klumpp, S; Weber, T; Schmidt, L Ph H; Williams, J B; Schöffler, M S; Serov, V V; Kheifets, A S; Argenti, L; Palacios, A; Martín, F; Jahnke, T; Dörner, R
2017-12-22
The toolbox for imaging molecules is well equipped today. Some techniques visualize the geometrical structure, others the electron density or electron orbitals. Molecules are many-body systems for which the correlation between the constituents is decisive: the spatial and momentum distribution of one electron depends on those of the other electrons and the nuclei. Such correlations have so far escaped direct observation by imaging techniques. Here, we implement an imaging scheme which visualizes correlations between electrons by coincident detection of the reaction fragments after high-energy photofragmentation. With this technique, we examine the H2 two-electron wave function, in which electron-electron correlation beyond the mean-field level is prominent. We visualize the dependence of the wave function on the internuclear distance. High-energy photoelectrons are shown to be a powerful tool for molecular imaging. Our study paves the way for future time-resolved correlation imaging at FELs and laser-based X-ray sources.
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2016-05-01
Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds based on historical quality data. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.
NASA Astrophysics Data System (ADS)
Elbouz, Marwa; Alfalou, Ayman; Brosseau, Christian
2011-06-01
Home automation is being implemented in more and more domiciles of the elderly and disabled in order to maintain their independence and safety. For that purpose, we propose and validate a surveillance video system which detects various posture-based events. One of the novel points of this system is the use of adapted VanderLugt correlator (VLC) and joint transform correlator (JTC) techniques to make decisions on the identity of a patient and his three-dimensional (3-D) position, in order to overcome the problem of crowded environments. We propose a fuzzy logic technique to reach decisions on the subject's behavior. Our system is focused on the goals of accuracy, convenience, and cost, and does not require any devices attached to the subject. The system permits one to study and model subject responses to behavioral change intervention, because several levels of alarm can be incorporated according to the different situations considered. Our algorithm performs a fast 3-D recovery of the subject's head position by locating the eyes within the face image, and involves model-based prediction and optical correlation techniques to guide the tracking procedure. The object detection is based on the (hue, saturation, value) color space. The system also involves an adapted fuzzy logic control algorithm to make decisions based on the information given to the system. Furthermore, the principles described here are applicable to a very wide range of situations and robust enough to be implementable in ongoing experiments.
Finite element model correlation of a composite UAV wing using modal frequencies
NASA Astrophysics Data System (ADS)
Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.
2007-04-01
The current work details the implementation of a meta-model-based correlation technique on a composite UAV wing test piece and the associated finite element (FE) model. The method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After a discussion of the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental test-piece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the fidelity of the assembled model, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled-wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update-variable combinations in the current configuration, which lead the correlation away from physically probable values.
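The meta-model idea (replace expensive FE solves with a cheap trained polynomial, then optimize the polynomial against the measured frequencies) can be sketched in one dimension. The frequency function, parameter range, and grid-search optimizer below are illustrative stand-ins, not the wing model.

```python
import numpy as np

def fe_frequency(stiffness):
    """Stand-in for an expensive FE modal solve: first natural frequency of
    a hypothetical component, f = sqrt(k/m)/(2*pi) (Hz)."""
    mass = 2.0
    return np.sqrt(stiffness / mass) / (2.0 * np.pi)

# 1) train a cheap polynomial meta-model on a handful of "FE" evaluations
k_samples = np.linspace(0.5e6, 2.0e6, 9)
u_samples = k_samples / 1e6                   # scale to O(1) for a stable fit
meta = np.poly1d(np.polyfit(u_samples, fe_frequency(k_samples), deg=3))

# 2) a "measured" modal frequency from the test article
f_measured = fe_frequency(1.3e6)

# 3) optimize the meta-model (here a dense grid search) instead of
#    re-running the FE solver inside the optimization loop
k_grid = np.linspace(0.5e6, 2.0e6, 20001)
k_best = k_grid[np.argmin((meta(k_grid / 1e6) - f_measured) ** 2)]
```

Only nine expensive evaluations are needed to train the surrogate; the optimizer then queries the polynomial thousands of times at negligible cost, which is the practical appeal over sensitivity-based iteration that re-solves the FE model each step.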
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
Characterizing multivariate decoding models based on correlated EEG spectral features
McFarland, Dennis J.
2013-01-01
Objective Multivariate decoding methods are popular techniques for the analysis of neurophysiological data. The present study explored potential interpretive problems with these techniques when predictors are correlated. Methods Data from sensorimotor rhythm-based cursor control experiments were analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order, which produced predictors that varied in their degree of correlation (i.e., multicollinearity). Results The use of multivariate regression models resulted in much better prediction of target position than univariate regression models. However, with lower-order AR features, interpretation of the spectral patterns of the weights was difficult. This is likely due to the high degree of multicollinearity present with lower-order AR features. Conclusions Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors; comparison with univariate statistics is advisable. Significance While multivariate decoding algorithms are very useful for prediction, their utility for interpretation may be limited when predictors are correlated. PMID:23466267
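The multicollinearity caveat can be reproduced with a toy example: two nearly identical predictors each correlate strongly with the target, yet multivariate least squares splits the effect between the two weights, so neither weight alone reflects a feature's relevance. The data below are synthetic, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
source = rng.standard_normal(n)                 # shared underlying activity
x1 = source + 0.1 * rng.standard_normal(n)      # two nearly identical
x2 = source + 0.1 * rng.standard_normal(n)      # (multicollinear) features
y = source + 0.5 * rng.standard_normal(n)       # target, e.g. cursor position

# univariate view: each feature on its own correlates strongly with y
r1 = np.corrcoef(x1, y)[0, 1]
r2 = np.corrcoef(x2, y)[0, 1]

# multivariate view: least squares distributes the shared effect across
# the weights, and small noise changes can shift how it is split
X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Here both univariate correlations are near 0.9, while each multivariate weight is near 0.5; comparing the two views, as the abstract advises, reveals that the predictive information is shared rather than belonging to either feature.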
NASA Astrophysics Data System (ADS)
El-Sebakhy, Emad A.
2009-09-01
Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many empirical approaches for predicting various PVT properties based on empirical correlations and statistical regression models. Over the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements of neural networks open the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate than local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid intelligence machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. Neuro-fuzzy modeling has featured in a wide range of medical, power control system, and business journals, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that the performance of neuro-fuzzy systems is accurate and reliable, and outperforms most of the existing forecasting techniques. Future work could apply neuro-fuzzy systems to clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.
Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses
NASA Astrophysics Data System (ADS)
Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.
1995-05-01
Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means of quantifying and correlating the disparate imaging datasets to improve clinical decision-making processes has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication system (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of the structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.
Relationships of pediatric anthropometrics for CT protocol selection.
Phillips, Grace S; Stanescu, Arta-Luana; Alessio, Adam M
2014-07-01
Determining the optimal CT technique to minimize patient radiation exposure while maintaining diagnostic utility requires patient-specific protocols that are based on patient characteristics. This work develops relationships between different anthropometrics and CT image noise to determine appropriate protocol classification schemes. We measured the image noise in 387 CT examinations of pediatric patients (222 boys, 165 girls) of the chest, abdomen, and pelvis and generated mathematical relationships between image noise and patient lateral and anteroposterior dimensions, age, and weight. At the chest level, lateral distance (ld) across the body is strongly correlated with weight (ld = 0.23 × weight + 16.77; R(2) = 0.93) and is less well correlated with age (ld = 1.10 × age + 17.13; R(2) = 0.84). Similar trends were found for anteroposterior dimensions and at the abdomen level. Across all studies, when acquisition-specific parameters are factored out of the noise, the log of image noise was highly correlated with lateral distance (R(2) = 0.72) and weight (R(2) = 0.72) and was less correlated with age (R(2) = 0.62). Following first-order relationships of image noise and scanner technique, plots were formed to show techniques that could achieve matched noise across the pediatric population. Patient lateral distance and weight are essentially equally effective metrics on which to base maximum technique settings for pediatric patient-specific protocols. These metrics can also be used to help categorize appropriate reference levels for CT technique and size-specific dose estimates across the pediatric population.
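A sketch of how such first-order relationships can be used to hold noise constant across patient sizes. The chest-level fit ld = 0.23 × weight + 16.77 is taken from the abstract, while the noise-model coefficients and the 1/sqrt(mAs) dependence are illustrative assumptions, not the paper's fitted values.

```python
import math

def ld_from_weight(weight_kg):
    """Chest-level fit reported in the abstract: ld = 0.23*weight + 16.77."""
    return 0.23 * weight_kg + 16.77

def matched_mas(ld_cm, noise_target, c0=-1.0, c1=0.07):
    """Tube-current-time product that holds image noise at noise_target,
    under the assumed model noise = exp(c0 + c1*ld) / sqrt(mAs),
    so mAs = (exp(c0 + c1*ld) / noise_target) ** 2."""
    return (math.exp(c0 + c1 * ld_cm) / noise_target) ** 2

# at a fixed noise target, every ln(2)/c1 ~ 9.9 cm of lateral distance
# doubles the exponential factor and therefore quadruples the mAs
m_small = matched_mas(20.0, noise_target=10.0)
m_large = matched_mas(30.0, noise_target=10.0)  # ratio = exp(2*c1*10) ~ 4.06
```

Because weight maps almost linearly onto lateral distance, the same table can be indexed by either metric, which is the practical point of the abstract's conclusion.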
Yatsushiro, Satoshi; Hirayama, Akihiro; Matsumae, Mitsunori; Kajiwara, Nao; Abdullah, Afnizanfaizal; Kuroda, Kagayaki
2014-01-01
Correlation time mapping based on magnetic resonance (MR) velocimetry has been applied to pulsatile cerebrospinal fluid (CSF) motion to visualize the pressure transmission between CSF at different locations and/or between CSF and arterial blood flow. Healthy-volunteer experiments demonstrated that the technique reveals pulsatile CSF motion transmitted from the CSF space in the vicinity of blood vessels with short delay and relatively high correlation coefficients. Experiments on patients and healthy volunteers indicated that the properties of CSF motion in patients differ from those in healthy volunteers. The resultant images in healthy volunteers implied slight individual differences in the locations of the CSF driving sources. Clinical interpretation of these preliminary results is required before the present technique can be applied to classifying the status of hydrocephalus.
Counsell, Serena J; Boardman, James P
2005-10-01
Preterm birth is associated with a high prevalence of neuropsychiatric impairment in childhood and adolescence, but the neural correlates underlying these disorders are not fully understood. Quantitative magnetic resonance imaging techniques have been used to investigate subtle differences in cerebral growth and development among children and adolescents born preterm or with very low birth weight. Diffusion tensor imaging and computer-assisted morphometric techniques (including voxel-based morphometry and deformation-based morphometry) have identified abnormalities in tissue microstructure and cerebral morphology among survivors of preterm birth at different ages, and some of these alterations have specific functional correlates. This chapter reviews the literature reporting differential brain development following preterm birth, with emphasis on the morphological changes that correlate with neuropsychiatric impairment.
NASA Astrophysics Data System (ADS)
Mohamad, M.; Sabbri, A. R. M.; Mat Jafri, M. Z.; Omar, A. F.
2014-11-01
Near-infrared (NIR) spectroscopy serves as an important tool for measuring the moisture content of skin owing to the advantages it has over other techniques. The purpose of this study is to develop a correlation between an NIR spectrometer and conventional electrical techniques for skin moisture measurement. Non-invasive measurements of the moisture content of skin were performed on different parts of the human face and hand under a controlled environment (temperature 21 ± 1 °C, relative humidity 45 ± 5 %). Ten healthy volunteers aged between 21 and 25 (male and female) participated in this study. The moisture content of skin was measured using the DermaLab® USB Moisture Module, the Scalar Moisture Checker, and NIR spectroscopy (NIRQuest). A higher correlation was observed between the NIRQuest and the DermaLab moisture probe, with a coefficient of determination (R²) above 70 % for all subjects. The R² values between the NIRQuest and the Moisture Checker were lower, ranging from 51.6 to 94.4 %. A correlation for the NIR spectroscopy technique was thus successfully developed for measuring the moisture content of skin. This correlation can help to establish novel optical instruments for clinical use, especially in the field of dermatology.
Combined magnetic and gravity analysis
NASA Technical Reports Server (NTRS)
Hinze, W. J.; Braile, L. W.; Chandler, V. W.; Mazella, F. E.
1975-01-01
Efforts are made to identify methods of decreasing magnetic interpretation ambiguity by combined gravity and magnetic analysis, to evaluate these techniques in a preliminary manner, to consider the geologic and geophysical implications of correlation, and to recommend a course of action to evaluate methods of correlating gravity and magnetic anomalies. The major thrust of the study was a search and review of the literature. The literature of geophysics, geology, geography, and statistics was searched for articles dealing with spatial correlation of independent variables. An annotated bibliography referencing the germane articles and books is presented. The methods of combined gravity and magnetic analysis are identified and reviewed. A more comprehensive evaluation of two types of techniques is presented: internal correspondence of anomaly amplitudes is examined, and a combined analysis is done utilizing Poisson's theorem. The geologic and geophysical implications of gravity and magnetic correlation based on both theoretical and empirical relationships are discussed.
Advancements of two dimensional correlation spectroscopy in protein researches
NASA Astrophysics Data System (ADS)
Tao, Yanchun; Wu, Yuqing; Zhang, Liping
2018-05-01
The developments of two-dimensional correlation spectroscopy (2DCOS) applications in protein studies are discussed, especially for the past two decades. The powerful utilities of 2DCOS combined with various analytical techniques in protein studies are summarized. The emphasis is on vibrational spectroscopic techniques including IR, NIR, Raman and Raman optical activity (ROA), as well as vibrational circular dichroism (VCD) and fluorescence spectroscopy. In addition, some new developments, such as hetero-spectral 2DCOS, moving-window correlation, and model-based correlation, are also reviewed for their utility in the investigation of the secondary structure, denaturation, and folding and unfolding changes of proteins. Finally, the new possibilities and challenges of 2DCOS in protein research are highlighted as well.
Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.
Singh, Anurag; Dandapat, Samarendra
2017-04-01
In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
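The CS recovery idea behind this abstract can be illustrated with a generic toy problem. This is only a sketch, not the authors' spatiotemporal sparse Bayesian method: a k-sparse vector (standing in for wavelet-domain ECG coefficients) is measured with a random Gaussian matrix and recovered by ISTA, a basic l1 solver. All sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, k = 256, 100, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse "coefficients"
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = Phi @ x                                 # compressed measurements (noise-free)

# ISTA: gradient step on 0.5*||y - Phi z||^2, then soft-thresholding for the l1 term
L = np.linalg.norm(Phi, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.01                                  # illustrative regularization weight
z = np.zeros(n)
for _ in range(500):
    g = z + Phi.T @ (y - Phi @ z) / L
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
print("relative recovery error:", rel_err)
```

With m well above the sparsity-dependent threshold, the l1 solution closely recovers the sparse vector; the paper's block-sparse Bayesian approach additionally shares support information across channels.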
Camacho-Basallo, Paula; Yáñez-Vico, Rosa-María; Solano-Reina, Enrique; Iglesias-Linares, Alejandro
2017-03-01
The need for accurate techniques for estimating age has sharply increased in line with the rise in illegal migration and the political, economic and socio-demographic problems that this poses in developed countries today. The methods routinely employed for determining chronological age are mainly based on determining skeletal maturation using radiological techniques. The objective of this study was to correlate five different methods for assessing skeletal maturation. 606 radiographs of growing patients were analyzed, and each patient was classified according to two cervical vertebral-based methods, two hand-wrist-based methods and one tooth-based method. Spearman's rank-order correlation coefficient was applied to assess the relationship between chronological age and the five methods of assessing maturation, as well as correlations between the five methods (p < 0.05). Spearman's rank correlation coefficients for chronological age and cervical vertebral maturation stage were 0.656/0.693 (p < 0.001), respectively, for males; for females, the correlation was stronger for both methods. The correlation coefficients for chronological age against the two hand-wrist assessment methods were statistically significant only for Fishman's method: 0.722 (p < 0.001) and 0.839 (p < 0.001), respectively, for males and females. The cervical vertebral, hand-wrist and dental maturation methods of assessment were all found to correlate strongly with each other, irrespective of gender, except for Grave and Brown's method. The strongest correlations were found for the second molars in females and the second premolars in males. This study sheds light on and correlates the five radiographic methods most commonly used for assessing skeletal maturation in a Spanish population in southern Europe.
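Spearman's rank-order correlation, used throughout this study, is simply the Pearson correlation of tie-averaged ranks. A minimal NumPy sketch on synthetic data; the six-stage maturation index and noise level below are invented for illustration, not taken from the study.

```python
import numpy as np

def rankdata(x):
    """Ranks 1..n, with tied values assigned their average rank."""
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    vals, inv, counts = np.unique(x, return_inverse=True, return_counts=True)
    sums = np.zeros(len(vals))
    np.add.at(sums, inv, ranks)             # sum the ranks within each tie group
    return sums[inv] / counts[inv]

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    return np.corrcoef(rankdata(x), rankdata(y))[0, 1]

rng = np.random.default_rng(0)
age = rng.uniform(8, 18, size=200)          # chronological age, years (synthetic)
stage = np.clip(np.round((age - 8) / 2 + rng.normal(0, 0.7, 200)), 0, 5)
print(f"rho = {spearman(age, stage):.3f}")
```

Because the stage variable is discrete, ties are common, which is why the tie-averaged ranking matters here.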
An image registration-based technique for noninvasive vascular elastography
NASA Astrophysics Data System (ADS)
Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza
2018-02-01
Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measure the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high error of displacement measurement. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square error (RMSE) of the proposed method and the adaptive cross correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.
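The cross-correlation displacement estimate that the registration method is compared against can be sketched in 1D: the lag of the cross-correlation peak between pre- and post-compression signals gives the integer-sample displacement. The signal length and shift below are arbitrary, and a circular shift stands in for real tissue motion.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_shift = 512, 7
pre = rng.normal(size=n)                    # pre-compression speckle signal
post = np.roll(pre, true_shift)             # displaced frame (circular, for simplicity)

# full cross-correlation; the lag of the peak estimates the displacement
lags = np.arange(-(n - 1), n)
xc = np.correlate(post, pre, mode="full")
est = lags[np.argmax(xc)]
print("estimated shift (samples):", est)
```

Real implementations refine this with windowed correlation and subsample peak interpolation; as the abstract notes, the estimate degrades when the inter-frame displacement or decorrelation becomes large.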
Sound Source Identification Through Flow Density Measurement and Correlation With Far Field Noise
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2001-01-01
Sound sources in the plumes of unheated round jets, in the Mach number range 0.6 to 1.8, were investigated experimentally using a "causality" approach, where air density fluctuations in the plumes were correlated with the far field noise. The air density was measured using a newly developed molecular Rayleigh scattering based technique, which did not require any seeding. The reference at the end provides a detailed description of the measurement technique.
Recent advancement in the field of two-dimensional correlation spectroscopy
NASA Astrophysics Data System (ADS)
Noda, Isao
2008-07-01
The recent advancement in the field of 2D correlation spectroscopy is reviewed with the emphasis on a number of papers published during the last two years. Topics covered by this comprehensive review include books, review articles, and noteworthy developments in the theory and applications of 2D correlation spectroscopy. New 2D correlation techniques are discussed, such as kernel analysis and augmented 2D correlation, model-based correlation, moving window analysis, global phase angle, covariance and correlation coefficient mapping, sample-sample correlation, hybrid and hetero correlation, pretreatment and transformation of data, and 2D correlation combined with other chemometrics techniques. Perturbation methods of both static (e.g., temperature, composition, pressure and stress, spatial distribution and orientation) and dynamic types (e.g., rheo-optical and acoustic, chemical reactions and kinetics, H/D exchange, sorption and diffusion) currently in use are examined. Analytical techniques most commonly employed in 2D correlation spectroscopy are IR, Raman, and NIR, but the growing use of other probes is also noted, including fluorescence, emission, Raman optical activity and vibrational circular dichroism, X-ray absorption and scattering, NMR, mass spectrometry, and even chromatography. The field of applications for 2D correlation spectroscopy is very diverse, encompassing synthetic polymers, liquid crystals, Langmuir-Blodgett films, proteins and peptides, natural polymers and biomaterials, pharmaceuticals, food and agricultural products, water, solutions, inorganic, organic, hybrid or composite materials, and many more.
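The synchronous and asynchronous maps at the core of 2D correlation spectroscopy follow directly from Noda's formalism: the synchronous map is the covariance of the dynamic spectra, and the asynchronous map inserts the Hilbert-Noda matrix. A generic sketch, not tied to any specific paper in this review; the two-band sinusoidal perturbation is invented for illustration.

```python
import numpy as np

def noda_2dcos(Y):
    """Synchronous/asynchronous 2D correlation maps for spectra Y of shape
    (m perturbation steps, n spectral channels), per Noda's formalism."""
    m = Y.shape[0]
    D = Y - Y.mean(axis=0)                  # dynamic spectra (reference-subtracted)
    sync = D.T @ D / (m - 1)
    j, i = np.meshgrid(np.arange(m), np.arange(m))
    with np.errstate(divide="ignore"):
        hn = np.where(i == j, 0.0, 1.0 / (np.pi * (j - i)))   # Hilbert-Noda matrix
    asyn = D.T @ hn @ D / (m - 1)
    return sync, asyn

# two synthetic bands responding to a sinusoidal perturbation with a phase lag
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
Y = np.column_stack([np.sin(t), np.sin(t - 0.8)])
sync, asyn = noda_2dcos(Y)
```

By construction the synchronous map is symmetric and the asynchronous map antisymmetric; a nonzero asynchronous cross-peak indicates the two bands respond out of phase, which is how sequential-order information is read off.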
The correlated k-distribution technique as applied to the AVHRR channels
NASA Technical Reports Server (NTRS)
Kratz, David P.
1995-01-01
Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique which was applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines made use of the multiplicative transmissivity property, which allows for considerable flexibility, especially when altering relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique as compared to the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique has yielded results within about 0.5% both for the cases where the satellite spectral response functions were applied and where they were not. The correlated k-distribution's principal advantage is that it can be incorporated directly into multiple scattering routines that consider scattering as well as absorption by clouds and aerosol particles.
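The ESFT step reduces to a linear fit once a grid of absorption coefficients is fixed: choose k-values spanning the band and solve T(u) ≈ sum_i w_i exp(-k_i u) for the weights. The "reference" transmissions below are a synthetic stand-in for line-by-line results, and a real ESFT additionally constrains the weights to be nonnegative and sum to one; this sketch uses plain least squares.

```python
import numpy as np

# Synthetic band transmission T(u) vs absorber amount u, standing in for
# line-by-line reference calculations (all numbers illustrative).
u = np.linspace(0, 5, 50)
T_ref = 0.7 * np.exp(-0.3 * u) + 0.3 * np.exp(-4.0 * u)

# ESFT: fixed grid of absorption coefficients spanning several decades,
# then a linear solve for the weights of the exponential sum.
k_grid = np.logspace(-2, 1, 12)
A = np.exp(-np.outer(u, k_grid))            # design matrix, shape (len(u), len(k_grid))
w, *_ = np.linalg.lstsq(A, T_ref, rcond=None)

T_fit = A @ w
print("max abs fit error:", np.abs(T_fit - T_ref).max())
```

The resulting weight/coefficient pairs (w_i, k_i) are what a correlated k-distribution radiation code then feeds through its multiple-scattering solver, one pseudo-monochromatic calculation per term.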
NASA Astrophysics Data System (ADS)
Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.
2016-09-01
Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD such as probabilistic neural networks and fully informed particle swarms have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models to the observed optical response. We also empirically demonstrate a first-order approach to map household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, which demonstrates measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.
Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model
NASA Astrophysics Data System (ADS)
Arumugam, S.; Libera, D.
2017-12-01
Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
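The CCA machinery underlying the proposed bias correction can be sketched with the standard QR/SVD construction: canonical correlations are the singular values of the product of orthonormal bases of the two centered blocks. The data below are synthetic (a shared latent signal plus noise), not SWAT output or observations.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column blocks X (n,p) and Y (n,q),
    via orthonormal bases of the centered blocks (Bjorck-Golub construction)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=n)                      # shared signal (think: true hydrologic state)
X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])   # "simulated"
Y = np.column_stack([2 * z + 0.1 * rng.normal(size=n), rng.normal(size=n)])  # "observed"
rho = canonical_correlations(X, Y)
print("canonical correlations:", np.round(rho, 3))
```

In the bias-correction setting, the canonical directions linking simulated and observed attributes define the regression used to adjust the model output while preserving the cross-correlation between streamflow and loads.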
Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia
2016-01-01
An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measures the dissimilarity of the features along the path based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics—such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient—are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996
Signal Processing Studies of a Simulated Laser Doppler Velocimetry-Based Acoustic Sensor
1990-10-17
investigated using spectral correlation methods. Results indicate that it may be possible to extend demonstrated LDV-based acoustic sensor sensitivities using higher order processing techniques. (Author)
A Review of Correlated Noise in Exoplanet Light Curves
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, J.; Blecic, J.; Hardy, R. A.; Hardin, M.
2013-10-01
A number of the occultation light curves of exoplanets exhibit time-correlated residuals (a.k.a. correlated or red noise) in their model fits. The correlated noise might arise from inaccurate models or unaccounted astrophysical or telescope systematics. A correct assessment of the correlated noise is important to determine true signal-to-noise ratios of a planet's physical parameters. Yet, there are no in-depth statistical studies in the literature for some of the techniques currently used (RMS-vs-bin size plot, prayer beads, and wavelet-based modeling). We subjected these correlated-noise assessment techniques to basic tests on synthetic data sets to characterize their features and limitations. Initial results indicate, for example, that the RMS-vs-bin size plots sometimes present artifacts when the bin size is similar to the observation duration. Further, the prayer-bead method doesn't correctly increase the uncertainties to compensate for the lack of accuracy when there is correlated noise. We have applied these techniques to several Spitzer secondary-eclipse hot-Jupiter light curves and discuss their implications. This work was supported in part by NASA planetary atmospheres grant NNX13AF38G and Astrophysics Data Analysis Program NNX12AI69G.
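The RMS-vs-bin-size diagnostic mentioned above is easy to reproduce: bin the fit residuals at increasing bin sizes and compare the RMS of the binned means with the 1/sqrt(N) expectation for uncorrelated noise; correlated residuals fall off more slowly. A sketch on synthetic white noise (sizes and seed are arbitrary):

```python
import numpy as np

def rms_vs_binsize(res, bins):
    """RMS of bin-averaged residuals for each bin size."""
    out = []
    for b in bins:
        m = len(res) // b
        out.append(np.std(res[:m * b].reshape(m, b).mean(axis=1)))
    return np.array(out)

rng = np.random.default_rng(3)
white = rng.normal(size=4096)               # synthetic, uncorrelated residuals
bins = np.array([1, 2, 4, 8, 16, 32])
rms = rms_vs_binsize(white, bins)
expected = rms[0] / np.sqrt(bins)           # white-noise scaling ~ 1/sqrt(N)
print(np.round(rms / expected, 2))          # ~1 everywhere for uncorrelated noise
```

For a light curve with red noise, the measured curve sits above the expected one, and the excess at the timescale of the transit/eclipse is what inflates the parameter uncertainties.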
Utilization of volume correlation filters for underwater mine identification in LIDAR imagery
NASA Astrophysics Data System (ADS)
Walls, Bradley
2008-04-01
Underwater mine identification persists as a critical technology pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial based three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.
Characterizing multivariate decoding models based on correlated EEG spectral features.
McFarland, Dennis J
2013-07-01
Multivariate decoding methods are popular techniques for analysis of neurophysiological data. The present study explored potential interpretative problems with these techniques when predictors are correlated. Data from sensorimotor rhythm-based cursor control experiments were analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order, which produced predictors that varied in their degree of correlation (i.e., multicollinearity). The use of multivariate regression models resulted in much better prediction of target position as compared to univariate regression models. However, with lower order AR features interpretation of the spectral patterns of the weights was difficult. This is likely to be due to the high degree of multicollinearity present with lower order AR features. Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors. Comparison with univariate statistics is advisable. While multivariate decoding algorithms are very useful for prediction, their utility for interpretation may be limited when predictors are correlated. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
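The multicollinearity problem the study raises can be quantified with variance inflation factors: regress each predictor on the others and compute 1/(1-R^2). A generic NumPy sketch; the collinear "spectral" features here are synthetic stand-ins, not AR features from the paper.

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: equals 1/(1 - R^2) from
    regressing that column (with intercept) on all of the others."""
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    return X[:, j].var() / resid.var()

rng = np.random.default_rng(10)
n = 1000
base = rng.normal(size=n)
# two highly collinear predictors (like neighboring AR spectral bins) + one independent
X = np.column_stack([base + 0.05 * rng.normal(size=n),
                     base + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
vifs = [vif(X, j) for j in range(3)]
print([round(v, 1) for v in vifs])
```

Large VIFs mean the individual regression weights are unstable and their spectral pattern should not be interpreted directly, which is exactly the caution the abstract gives for low-order AR features.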
Network analysis of a financial market based on genuine correlation and threshold method
NASA Astrophysics Data System (ADS)
Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.
2011-10-01
A financial market is an example of an adaptive complex network consisting of many interacting units. This network reflects the market's behavior. In this paper, we use the Random Matrix Theory (RMT) notion of specifying the largest eigenvector of the correlation matrix as the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then construct this matrix based on the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network, we apply the market-mode removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of the clustering coefficients and component numbers of this network for different thresholds. These outputs are useful for both theoretical and practical purposes, such as asset allocation and risk management.
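The market-mode removal step can be sketched directly: take the eigenvector of the largest eigenvalue of the correlation matrix as the market mode, regress it out of each return series, and rebuild the correlation matrix from the residuals. The one-factor return model below is synthetic; betas, sizes and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 2000, 30
market = rng.normal(size=T)
beta = rng.uniform(0.5, 1.5, size=N)
returns = np.outer(market, beta) + rng.normal(size=(T, N))   # one-factor market model

C = np.corrcoef(returns, rowvar=False)
evals, evecs = np.linalg.eigh(C)            # ascending eigenvalues
v = evecs[:, -1]                            # largest-eigenvalue eigenvector: market mode
mode = returns @ v                          # market-mode time series
# regress the market mode out of each stock, then rebuild the correlation matrix
betas = (returns.T @ mode) / (mode @ mode)
resid = returns - np.outer(mode, betas)
C_clean = np.corrcoef(resid, rowvar=False)

off = lambda M: M[~np.eye(N, dtype=bool)]   # off-diagonal entries
print("mean off-diagonal corr before/after:",
      round(off(C).mean(), 3), round(off(C_clean).mean(), 3))
```

Thresholding `C_clean` instead of `C` then yields a network whose links reflect genuine pairwise co-movement rather than the common market factor.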
Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc
2015-09-21
In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
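The core of the method is a spectral cross-correlation: the two orthogonally-polarized ϕOTDR spectra at a given position are shifted copies of each other, and the lag of their correlation peak is the birefringence-induced detuning. The sketch below uses smoothed noise as a speckle-like spectrum; the frequency axis, detuning, and the nominal group index and optical frequency in the final conversion are all illustrative assumptions.

```python
import numpy as np

df = 1.0                                    # frequency step, MHz (assumed scan grid)
freqs = np.arange(0, 2000, df)              # scanned frequency axis, MHz
rng = np.random.default_rng(5)
base = rng.normal(size=freqs.size)
base = np.convolve(base, np.ones(20) / 20, mode="same")   # speckle-like spectrum
shift_bins = 250                            # true detuning: 250 MHz (synthetic)
spec_x = base                               # spectrum, polarization axis x
spec_y = np.roll(base, shift_bins)          # same spectrum, axis y, detuned

xc = np.correlate(spec_y - spec_y.mean(), spec_x - spec_x.mean(), mode="full")
lag = np.argmax(xc) - (freqs.size - 1)
delta_nu = lag * df
print("detuning:", delta_nu, "MHz")
# phase birefringence from the detuning ratio, delta_n = n_g * delta_nu / nu0;
# n_g ~ 1.468 and nu0 ~ 193.4 THz (1550 nm) are nominal assumed values
delta_n = 1.468 * (delta_nu * 1e6) / 193.4e12
```

Repeating this correlation for every fiber position gives the distributed birefringence profile the paper measures.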
Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M
2016-12-01
The goal of this study is to investigate the influence of mental fatigue on the event related potential P300 features (maximum pick, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacles velocities). A correlation study between P300 and fatigue ratings was conducted. Finally, the best correlated features supplied three classification algorithms which are MLP (Multi Layer Perceptron), Linear Discriminate Analysis and Support Vector Machine. The results showed that the maximum feature over visual and temporal regions as well as period feature over frontal, fronto-central and visual regions were correlated with mental fatigue levels. In the other hand, minimum amplitude and latency features didn't show any correlation. Among classification techniques, MLP showed the best performance although the differences between classification techniques are minimal. Those findings can help us in order to design suitable mental fatigue based wheelchair control.
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) is presented. This work proposes efficient scheme rainfall estimation based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for night time rainfall estimations. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud proprieties. The cloud parameters are derived from the Spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15 min-rainfall estimation given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using data set gathered from November 2010 to March 2011 over north Algeria. The results were validated against collocated rainfall observed by rain gauge network. Indeed, the statistical scores given by correlation coefficient, bias, root mean square error and mean absolute error, showed good accuracy of rainfall estimates by the present technique. Moreover, rainfall estimates of our technique were compared with two high accuracy rainfall estimates methods based on MSG SEVIRI imagery namely: random forests (RF) based approach and an artificial neural network (ANN) based technique. The findings of the present technique indicate higher correlation coefficient (3-hourly: 0.78; daily: 0.94), and lower mean absolute error and root mean square error values. The results show that the new technique assign 3-hourly and daily rainfall with good and better accuracy than ANN technique and (RF) model.
Measuring Time-of-Flight in an Ultrasonic LPS System Using Generalized Cross-Correlation
Villladangos, José Manuel; Ureña, Jesús; García, Juan Jesús; Mazo, Manuel; Hernández, Álvaro; Jiménez, Ana; Ruíz, Daniel; De Marziani, Carlos
2011-01-01
In this article, a time-of-flight detection technique in the frequency domain is described for an ultrasonic Local Positioning System (LPS) based on encoded beacons. Beacon transmissions have been synchronized and become simultaneous by means of the DS-CDMA (Direct-Sequence Code Division Multiple Access) technique. Every beacon has been associated to a 255-bit Kasami code. The detection of signal arrival instant at the receiver, from which the distance to each beacon can be obtained, is based on the application of the Generalized Cross-Correlation (GCC), by using the cross-spectral density between the received signal and the sequence to be detected. Prior filtering to enhance the frequency components around the carrier frequency (40 kHz) has improved estimations when obtaining the correlation function maximum, which implies an improvement in distance measurement precision. Positioning has been achieved by using hyperbolic trilateration, based on the Time Differences of Arrival (TDOA) between a reference beacon and the others. PMID:22346645
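The GCC detection described here can be sketched end-to-end: form the cross-spectral density between the received signal and the known beacon sequence, apply a weighting (PHAT, which keeps only the phase, is used below for sharpness), and take the inverse transform; the peak index is the arrival delay. The random binary code standing in for a 255-bit Kasami sequence, the 400 kHz sampling rate, and the noise level are illustrative assumptions.

```python
import numpy as np

def gcc_phat(x, ref):
    """Generalized cross-correlation via the cross-spectral density, with PHAT
    weighting. Output index k is the delay (in samples) of ref within x."""
    n = len(x) + len(ref) - 1
    X = np.fft.rfft(x, n)
    R = np.fft.rfft(ref, n)
    csd = X * np.conj(R)                    # cross-spectral density
    return np.fft.irfft(csd / np.maximum(np.abs(csd), 1e-12), n)

fs = 400_000                                # assumed sampling of the 40 kHz carrier
rng = np.random.default_rng(6)
code = np.where(rng.random(255) > 0.5, 1.0, -1.0)   # stand-in for a 255-bit Kasami code
t = np.arange(code.size * 10) / fs
ref = np.repeat(code, 10) * np.sin(2 * np.pi * 40_000 * t)  # BPSK-modulated emission
delay = 173                                 # true arrival instant, samples
x = np.zeros(5000)
x[delay:delay + ref.size] += ref
x += 0.1 * rng.normal(size=x.size)          # receiver noise

tof_samples = int(np.argmax(gcc_phat(x, ref)))
print("time of flight:", tof_samples / fs * 1e3, "ms")
```

The paper instead applies a band-pass weighting around the 40 kHz carrier before the inverse transform; either way, differencing the arrival times of several beacons yields the TDOAs used for hyperbolic trilateration.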
Kumar, Manoj; Vijayakumar, A; Rosen, Joseph
2017-09-14
We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The digital hologram acquired by this technique contains a three-dimensional image of the observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM) and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera. The recorded intensity this time is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects obtained by this technique are compared with regular lens-based imaging.
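The correlation reconstruction can be sketched for a single axial plane: an incoherent object hologram is the object convolved with the point spread hologram (PSH), and cross-correlating the hologram with the PSH recovers the object points, because the speckle autocorrelation is sharply peaked. This is a simplified Fourier-optics toy model, not the paper's optical setup; the mask, grid size and point-source amplitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 128
cpm_phase = rng.uniform(0, 2 * np.pi, (N, N))   # random-like coded phase mask (CPM)

def psf_intensity(phase):
    """Far-field speckle intensity from a point source through the CPM."""
    field = np.fft.fft2(np.exp(1j * phase))
    return np.abs(np.fft.fftshift(field)) ** 2

psh = psf_intensity(cpm_phase)              # point spread hologram for this plane
obj = np.zeros((N, N))
obj[40, 60] = 1.0                           # two point objects in the plane
obj[80, 30] = 0.7
# incoherent imaging: object hologram = object (circularly) convolved with the PSH
obj_holo = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psh)))

# reconstruction: cross-correlate the object hologram with the PSH
rec = np.real(np.fft.ifft2(np.fft.fft2(obj_holo) * np.conj(np.fft.fft2(psh))))
peak = np.unravel_index(np.argmax(rec), rec.shape)
print("brightest reconstructed point:", peak)
```

The raw correlation sits on a background pedestal (the speckle autocorrelation is a peak on a plateau), which is why the paper applies additional noise-suppression steps.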
Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, S.T.C.; Knowlton, R.; Hoo, K.S.
1995-12-31
Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets in order to improve clinical decision-making processes has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantification of the structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present future work based on preliminary studies.
NASA Astrophysics Data System (ADS)
Tong, Minh Q.; Hasan, M. Monirul; Gregory, Patrick D.; Shah, Jasmine; Park, B. Hyle; Hirota, Koji; Liu, Junze; Choi, Andy; Low, Karen; Nam, Jin
2017-02-01
We demonstrate a computationally efficient optical coherence elastography (OCE) method based on fringe washout. By introducing ultrasound in alternating depth profiles, we can obtain information on the mechanical properties of a sample within the acquisition of a single image. This is achieved by simply comparing the intensity in adjacent depth profiles in order to quantify the degree of fringe washout. Phantom agar samples with various densities were measured and quantified by our OCE technique, and a correlation to Young's modulus measurements by atomic force microscopy (AFM) was observed. Knee cartilage samples from monoiodoacetate-induced arthritis (MIA) rat models were used to replicate cartilage damage, and our proposed OCE technique, along with intensity and birefringence analyses and AFM measurements, was applied. The results indicate that the correlations of our OCE technique with techniques such as polarization-sensitive OCT, AFM Young's modulus measurement and histology are promising. Our OCE is applicable to any existing OCT system and is demonstrated to be computationally efficient.
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence to the predictions of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the k-eff value based on the 235U(n,f) and 239Pu(n,f) cross sections.
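As a sketch of techniques 1) and 2), the snippet below draws correlated Monte-Carlo realizations of two inputs and reads a confidence interval off the empirical spread of a toy model prediction. All numbers (the means, the 5% uncertainties, the 0.6 correlation, and the linear model standing in for a k-eff calculation) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated inputs (cross-section-like quantities); means, 5%
# uncertainties, and a 0.6 correlation are invented for this sketch.
mean = np.array([1.20, 1.75])
sigma = 0.05 * mean
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
cov = np.outer(sigma, sigma) * corr

# 1) Random realizations of the correlated input data (Monte Carlo).
samples = rng.multivariate_normal(mean, cov, size=20000)

# A toy linear model standing in for the k-eff calculation.
predictions = 0.7 * samples[:, 0] + 0.3 * samples[:, 1]

# 2) A 95% confidence interval from the empirical prediction distribution.
lo, hi = np.percentile(predictions, [2.5, 97.5])
```

The same pattern extends to resampling (technique 3): constraints are imposed by reweighting or rejecting realizations that conflict with the additional information.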
DiBartola, Alex C; Everhart, Joshua S; Magnussen, Robert A; Carey, James L; Brophy, Robert H; Schmitt, Laura C; Flanigan, David C
2016-06-01
Compare histological outcomes after microfracture (MF), autologous chondrocyte implantation (ACI), and osteochondral autograft transfer (OATS). Literature review using PubMed MEDLINE, SCOPUS, Cumulative Index for Nursing and Allied Health Literature (CINAHL), and the Cochrane Collaboration Library. Inclusion criteria were limited to English-language studies reporting International Cartilage Repair Society (ICRS) grading criteria for cartilage analysis after ACI, MF, or OATS repair techniques. Thirty-three studies investigating 1511 patients were identified. Thirty evaluated ACI or one of its subtypes, six evaluated MF, and seven evaluated OATS. There was no evidence of publication bias (Begg's p=0.48). No statistically significant correlation was found between percent change in clinical outcome and percent of biopsies showing ICRS Excellent scores (R(2)=0.05, p=0.38). Percent change in clinical outcome and percent of biopsies showing only hyaline cartilage were significantly associated (R(2)=0.24, p=0.024). Mean lesion size and histological outcome were not correlated based either on percent ICRS Excellent (R(2)=0.03, p=0.50) or percent hyaline cartilage only (R(2)=0.01, p=0.67). Most common lesion location and histological outcome were not correlated based either on percent ICRS Excellent (R(2)=0.03, p=0.50) or percent hyaline cartilage only (R(2)=0.01, p=0.67). Microfracture has poorer histologic outcomes than other cartilage repair techniques. OATS repairs are primarily comprised of hyaline cartilage, followed closely by cell-based techniques, but no significant difference in cartilage quality was found among OATS, ACI-C, MACI, and ACI-P using ICRS grading criteria. IV, meta-analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Baier, S.; Rochet, A.; Hofmann, G.; Kraut, M.; Grunwaldt, J.-D.
2015-06-01
We report on a new modular setup on a silicon-based microreactor designed for correlative spectroscopic, scattering, and analytic on-line gas investigations for in situ studies of heterogeneous catalysts. The silicon microreactor allows a combination of synchrotron radiation based techniques (e.g., X-ray diffraction and X-ray absorption spectroscopy) as well as infrared thermography and Raman spectroscopy. Catalytic performance can be determined simultaneously by on-line product analysis using mass spectrometry. We present the design of the reactor, the experimental setup, and as a first example for an in situ study, the catalytic partial oxidation of methane showing the applicability of this reactor for in situ studies.
Towards a global-scale ambient noise cross-correlation data base
NASA Astrophysics Data System (ADS)
Ermert, Laura; Fichtner, Andreas; Sleeman, Reinoud
2014-05-01
We aim to obtain a global-scale database of ambient seismic noise correlations. This database - to be made publicly available at ORFEUS - will enable us to study the distribution of microseismic and hum sources, and to perform multi-scale full waveform inversion for crustal and mantle structure. Ambient noise tomography has developed into a standard technique. According to theory, cross-correlations equal inter-station Green's functions only if the wave field is equipartitioned or the sources are isotropically distributed. In an attempt to circumvent these assumptions, we aim to investigate possibilities to directly model noise cross-correlations and invert for their sources using adjoint techniques. A database containing correlations of 'gently' preprocessed noise, excluding preprocessing steps which are explicitly taken to reduce the influence of a non-isotropic source distribution like spectral whitening, is a key ingredient in this undertaking. Raw data are acquired from IRIS/FDSN and ORFEUS. We preprocess and correlate the time series using a tool based on the Python package ObsPy which is run in parallel on a cluster of the Swiss National Supercomputing Centre. Correlation is done in two ways: Besides the classical cross-correlation function, the phase cross-correlation is calculated, which is an amplitude-independent measure of waveform similarity and therefore insensitive to high-energy events. Besides linear stacks of these correlations, instantaneous phase stacks are calculated which can be applied as an optional weight, enhancing coherent portions of the traces and facilitating the emergence of a meaningful signal. The _STS1 virtual network by IRIS contains about 250 globally distributed stations, several of which have been operating for more than 20 years. 
It is the first data collection we will use for correlations in the hum frequency range, as the STS-1 instrument response is flat in the largest part of the period range where hum is observed, up to a period of about 300 seconds. Thus they provide us with the best-suited measurements for hum.
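The linear versus instantaneous-phase stacking step can be illustrated with a numpy-only sketch; the synthetic traces, noise level, and squared-coherence weight below are illustrative choices, not the actual processing parameters of the database:

```python
import numpy as np

rng = np.random.default_rng(5)

def analytic(x):
    """Analytic signal via the FFT (numpy-only Hilbert transform; even n)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

# Forty noisy "daily correlations" containing the same coherent arrival.
n, m = 512, 40
t = np.arange(n)
arrival = np.exp(-0.5 * ((t - 200) / 8.0) ** 2) * np.cos(0.4 * t)
traces = arrival + 0.8 * rng.standard_normal((m, n))

# Linear stack versus instantaneous-phase weighting: phase coherence is
# near 1 where the traces agree and near 0 where they do not, so using
# it as a weight suppresses incoherent portions of the stack.
linear = traces.mean(axis=0)
phases = np.exp(1j * np.angle(np.array([analytic(tr) for tr in traces])))
coherence = np.abs(phases.mean(axis=0))
pws = linear * coherence ** 2
```

The phase-weighted stack keeps the coherent arrival near sample 200 while attenuating the incoherent background much more strongly than the linear stack does.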
Damage detection and isolation via autocorrelation: a step toward passive sensing
NASA Astrophysics Data System (ADS)
Chang, Y. S.; Yuan, F. G.
2018-03-01
Passive sensing techniques may eliminate the need to expend power from actuators and thus provide a means of developing a compact and simple structural health monitoring system. More importantly, they may provide a solution for monitoring aircraft subjected to environmental loading from air flow during operation. In this paper, a non-contact auto-correlation based technique is exploited as a feasibility study for a passive sensing application to detect damage and isolate the damage location. Its theoretical basis bears some resemblance to reconstructing the Green's function from a diffusive wavefield through cross-correlation. Localized high-pressure air from an air compressor is randomly and continuously applied to one side of the aluminum panels through an air blow gun. A laser Doppler vibrometer (LDV) was used to scan a 90 mm × 90 mm area to create a 6 × 6 2D array of signals from the opposite side of the panels. The scanned signals were auto-correlated to reconstruct a "self-impulse response" (or Green's function). Stably reconstructing an accurate Green's function requires long sensing times. For a 609.6 mm × 609.6 mm flat aluminum panel, a sensing time of roughly four seconds is sufficient to establish a converged Green's function through correlation. For the integrally stiffened aluminum panel, the geometrical features of the panel expedite the formation of the diffusive wavefield and thus shorten the sensing times. Damage is simulated by gluing a magnet onto the panels. Reconstructed Green's functions (RGFs) are used for damage detection and isolation based on an imaging condition using the mean square deviation between RGFs from the pristine and the damaged structure, and the results are shown in color maps. The auto-correlation based technique is shown to consistently detect, image, and isolate the simulated damage in the structure subjected to high-pressure air excitation. 
This technique may be transformed into passive sensing applied on the aircraft during operation.
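The convergence argument can be shown in miniature: a single lightly damped mode stands in for the panel and random forcing stands in for the compressed-air excitation. Two independent autocorrelation estimates agree far better for long records, which is why stable Green's function reconstruction needs sufficient sensing time. All parameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

def response(n, f0=0.05, damping=0.01):
    """Random excitation filtered by one lightly damped mode (AR(2))."""
    x = rng.standard_normal(n)
    y = np.zeros(n)
    a1 = 2.0 * np.exp(-damping) * np.cos(2.0 * np.pi * f0)
    a2 = -np.exp(-2.0 * damping)
    for i in range(2, n):
        y[i] = a1 * y[i - 1] + a2 * y[i - 2] + x[i]
    return y

def normalized_autocorr(y, maxlag=200):
    """FFT-based autocorrelation, normalized to 1 at zero lag."""
    y = y - y.mean()
    Y = np.fft.rfft(y, 2 * len(y))           # zero-pad: linear, not circular
    r = np.fft.irfft(Y * np.conj(Y))[:maxlag]
    return r / r[0]

# Two independent "self-impulse response" estimates per record length.
short = [normalized_autocorr(response(1000)) for _ in range(2)]
long_ = [normalized_autocorr(response(50000)) for _ in range(2)]

agree_short = np.corrcoef(short[0], short[1])[0, 1]
agree_long = np.corrcoef(long_[0], long_[1])[0, 1]
```

`agree_long` approaches 1 while `agree_short` remains visibly noisy, mirroring the paper's observation that the reconstruction stabilizes only after enough sensing time.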
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
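The generalised cross-correlation step can be sketched as follows. A PHAT-style weight with a regularisation constant stands in here for the paper's modified ML prefilter, whose exact form depends on the estimated PSDs and CSD; the signals and delay are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
true_delay = 25            # samples; the signal reaches sensor 2 later

# Broadband leak-like signal at two vibration sensors with sensor noise.
s = rng.standard_normal(n)
x1 = s + 0.1 * rng.standard_normal(n)
x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(n)

# Generalised cross-correlation: weight the cross-spectrum, then inverse
# FFT.  A PHAT-style weight with regularisation eps stands in for the
# modified ML prefilter (whose exact form uses the estimated PSDs/CSD).
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
csd = X2 * np.conj(X1)
eps = 1e-3 * np.max(np.abs(csd))
gcc = np.fft.irfft(csd / (np.abs(csd) + eps), n=n)

# The lag maximising the weighted correlation estimates the delay.
lag = int(np.argmax(gcc))
est_delay = lag if lag <= n // 2 else lag - n
```

Dividing the estimated lag by the sample rate gives the time difference that feeds the leak-location formula for the pipe geometry.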
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter
Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-01-01
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154
Detection of circuit-board components with an adaptive multiclass correlation filter
NASA Astrophysics Data System (ADS)
Diaz-Ramirez, Victor H.; Kober, Vitaly
2008-08-01
A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.
SCI model structure determination program (OSR) user's guide. [optimal subset regression
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program OSR (Optimal Subset Regression), which estimates models for rotorcraft body and rotor force and moment coefficients, is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlations between the various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.
NASA Astrophysics Data System (ADS)
Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke
2017-05-01
Graph network models have become an important computational technique in neuroscience for studying fundamental organizational principles of brain structure and function in neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, partial correlations (PC) identify only direct correlations, while indirect associations are eliminated. In addition, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of brain connectivity as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to nodes and areas, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
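The distinction between Pearson and partial-correlation links can be seen in a small sketch: a common driver z induces an indirect x-y correlation that the partial correlation (computed from the precision matrix) removes. The data are synthetic, not connectome measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regional time series: z drives both x and y, so x and y carry a
# strong but purely indirect Pearson correlation.
n = 5000
z = rng.standard_normal(n)
x = z + 0.5 * rng.standard_normal(n)
y = z + 0.5 * rng.standard_normal(n)
data = np.column_stack([x, y, z])

# Partial correlations from the precision (inverse covariance) matrix:
# rho_ij|rest = -P_ij / sqrt(P_ii * P_jj).
prec = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 1.0)

pearson_xy = np.corrcoef(x, y)[0, 1]   # inflated by the shared driver z
partial_xy = partial[0, 1]             # direct x-y link, z controlled for
```

The simple Pearson link between x and y is strong, while the partial correlation collapses toward zero once z is controlled for, which is exactly why PC networks keep only direct connections.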
Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM
NASA Astrophysics Data System (ADS)
Ahn, Chang-Jun
In MIMO systems, the deployment of a multiple antenna technique can enhance the system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing due to the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5 ∼ 10 wavelengths to keep the correlation coefficient below 0.7 to achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To reduce these problems, in this paper, we propose link correlation based transmit sector antenna selection for Alamouti coded OFDM without FBI.
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
FPGA design of correlation-based pattern recognition
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
Optical/digital pattern recognition and tracking based on optical/digital correlation are well-known techniques for detecting, identifying and localizing a target object in a scene. Despite the limited number of treatments required by the correlation scheme, computational time and resources are relatively high. The most computationally intensive treatment required by the correlation is the transformation from the spatial to the spectral domain and then from the spectral back to the spatial domain. Furthermore, these transformations are used in optical/digital encryption schemes like the double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation for video sequences. A trade-off between hardware consumption and the robustness of the correlation can be made in order to understand the limitations of implementing the correlation on reconfigurable and portable platforms. Experimental results obtained from HDL simulations and an FPGA prototype have demonstrated the advantages of the proposed scheme.
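The spatial-to-spectral-to-spatial round trip at the heart of the scheme is FFT-based correlation. A minimal software reference (not the VLSI architecture itself; the scene, target, and sizes are invented) might look like:

```python
import numpy as np

rng = np.random.default_rng(3)

# A 64x64 "scene" with an 8x8 bright target at a known position.
scene = 0.1 * rng.standard_normal((64, 64))
target = np.ones((8, 8))
row0, col0 = 20, 35
scene[row0:row0 + 8, col0:col0 + 8] += target

# Correlation via the frequency domain: FFT the scene, multiply by the
# conjugate FFT of the zero-padded reference, inverse FFT back.  This is
# the spatial -> spectral -> spatial round trip the hardware accelerates.
ref = np.zeros_like(scene)
ref[:8, :8] = target
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))).real

# The correlation peak localizes the target in the scene.
peak = np.unravel_index(int(np.argmax(corr)), corr.shape)
```

Each frame of a video sequence repeats exactly these two transform passes, which is why streaming FFT hardware dominates the resource budget.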
USDA-ARS?s Scientific Manuscript database
Nondestructive methods based on fluorescence hyperspectral imaging (HSI) techniques were developed to detect worms on fresh-cut lettuce. The optimal wavebands for detecting worms on fresh-cut lettuce were investigated using one-way ANOVA and correlation analysis. The worm detec...
Identification of Noise Sources in High Speed Jets via Correlation Measurements: A Review
NASA Technical Reports Server (NTRS)
Bridges, James (Technical Monitor); Panda, Jayanta
2005-01-01
Significant advancement has been made in the last few years in identifying noise sources in high-speed jets via direct correlation measurements. In this technique, turbulent fluctuations in the flow are correlated with far-field acoustic signatures. In the 1970s there was a surge of work using mostly intrusive probes, and a few using Laser Doppler Velocimetry, to measure turbulent fluctuations. The later experiments established "shear noise" as the primary source for the shallow-angle noise. Various interpretations and criticisms from this time are described in the review. Recent progress in the molecular Rayleigh scattering based technique has provided a completely non-intrusive means of measuring density and velocity fluctuations. This has brought a renewed interest in correlation measurements. We have performed five different sets of experiments in single-stream jets of different Mach number, temperature ratio and nozzle configurations. The present paper summarizes the correlation data from these works.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baier, S.; Rochet, A.; Hofmann, G.
2015-06-15
We report on a new modular setup on a silicon-based microreactor designed for correlative spectroscopic, scattering, and analytic on-line gas investigations for in situ studies of heterogeneous catalysts. The silicon microreactor allows a combination of synchrotron radiation based techniques (e.g., X-ray diffraction and X-ray absorption spectroscopy) as well as infrared thermography and Raman spectroscopy. Catalytic performance can be determined simultaneously by on-line product analysis using mass spectrometry. We present the design of the reactor, the experimental setup, and as a first example for an in situ study, the catalytic partial oxidation of methane showing the applicability of this reactor for in situ studies.
NASA Astrophysics Data System (ADS)
Roubidoux, J. A.; Jackson, J. E.; Lasseigne, A. N.; Mishra, B.; Olson, D. L.
2010-02-01
This paper correlates nonlinear material properties to nondestructive electronic measurements by using wave analysis techniques (e.g., perturbation methods) and incorporating higher-order phenomena. The correlations suggest that nondestructive electronic property measurements and practices can be used to assess thin films, surface layers, and other advanced materials that exhibit modified behaviors based on their space-charge interfacial behavior.
Classifying Correlation Matrices into Relatively Homogeneous Subgroups: A Cluster Analytic Approach
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Chan, Wai
2005-01-01
Researchers are becoming interested in combining meta-analytic techniques and structural equation modeling to test theoretical models from a pool of studies. Most existing procedures are based on the assumption that all correlation matrices are homogeneous. Few studies have addressed what the next step should be when studies being analyzed are…
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
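The spectral-decorrelation stage can be illustrated with a principal-components rotation on synthetic band data; the spatial wavelet coder is omitted, and the band statistics are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "multispectral" pixels: six bands sharing a common component,
# giving the strong inter-band correlation typical of remote sensing data.
n_pixels, n_bands = 4000, 6
base = rng.standard_normal(n_pixels)
bands = np.column_stack([base + 0.3 * rng.standard_normal(n_pixels)
                         for _ in range(n_bands)])

# Spectral decorrelation by principal components: rotating into the
# eigenbasis of the band covariance leaves the components uncorrelated,
# with most variance concentrated in the leading component, so a spatial
# coder can spend its bit budget where it matters.
cov = np.cov(bands, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
components = (bands - bands.mean(axis=0)) @ eigvecs

ratio = eigvals[-1] / eigvals.sum()             # top component's share
comp_cov = np.cov(components, rowvar=False)
max_offdiag = np.max(np.abs(comp_cov - np.diag(np.diag(comp_cov))))
```

After the rotation, nearly all the variance sits in one component and the cross-component covariances vanish, which is the property any of the compared decorrelation transforms (PCA, spectral wavelets) aims to approximate.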
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
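A minimal version of the variance-decomposition idea can be run with ordinary least squares on synthetic predictors at a "local" and a "landscape" scale that share a common component; all data below are simulated, and the two-scale setup stands in for the paper's three scales:

```python
import numpy as np

rng = np.random.default_rng(4)

def r2(X, y):
    """Explained variance (R^2) of an ordinary least-squares fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Hypothetical predictors at two scales sharing a common component,
# which creates the cross-scale correlation of interest.
n = 2000
shared = rng.standard_normal(n)
local = shared + 0.8 * rng.standard_normal(n)       # fine-scale habitat
landscape = shared + 0.8 * rng.standard_normal(n)   # broad-scale habitat
y = local + landscape + rng.standard_normal(n)      # e.g., nest density

# Variance decomposition from three fits: pure single-scale components
# plus a shared component that quantifies the cross-scale correlation.
r_both = r2(np.column_stack([local, landscape]), y)
pure_local = r_both - r2(landscape.reshape(-1, 1), y)
pure_landscape = r_both - r2(local.reshape(-1, 1), y)
shared_component = r_both - pure_local - pure_landscape
```

Even though the two predictors are only moderately correlated, the shared component claims a large slice of the explained variance, mirroring the paper's finding that cross-scale correlations can dominate a multiscale analysis.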
Detection of physiological noise in resting state fMRI using machine learning.
Ash, Tom; Suckling, John; Walter, Martin; Ooi, Cinly; Tempelmann, Claus; Carpenter, Adrian; Williams, Guy
2013-04-01
We present a technique for predicting cardiac and respiratory phase on a time-point-by-time-point basis from fMRI image data. These predictions have utility in attempts to detrend effects of the physiological cycles from fMRI image data. We demonstrate the technique both in the case where it can be trained on a subject's own data, and when it cannot. The prediction scheme uses a multiclass support vector machine algorithm. Predictions are demonstrated to have a close fit to recorded physiological phase, with median Pearson correlation scores between recorded and predicted values of 0.99 for the best case scenario (cardiac cycle trained on a subject's own data) down to 0.83 for the worst case scenario (respiratory predictions trained on group data), as compared to a random-chance correlation score of 0.70. When predictions were used with RETROICOR--a popular physiological noise removal tool--the effects were compared to those obtained using recorded phase values. Using Fourier transforms and seed-based correlation analysis, RETROICOR is shown to produce similar effects whether recorded physiological phase values are used or they are predicted using this technique. This was seen by similar levels of noise reduction in the same regions of the Fourier spectra, and changes in seed-based correlation scores in similar regions of the brain. This technique has a use in situations where data from direct monitoring of the cardiac and respiratory cycles are incomplete or absent, but researchers still wish to reduce this source of noise in the image data. Copyright © 2011 Wiley Periodicals, Inc.
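The detrending side can be sketched with a RETROICOR-style regression: low-order Fourier terms of a (recorded or, as here, predicted) phase series are fit to a voxel time series and removed. The phase model and amplitudes below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

# A voxel time series contaminated by cardiac-phase-locked fluctuations.
n = 400
phase = np.cumsum(rng.uniform(0.08, 0.12, n))   # monotone cardiac phase
voxel = (0.5 * np.sin(phase) + 0.3 * np.cos(2 * phase)
         + 0.1 * rng.standard_normal(n))

# RETROICOR-style nuisance regressors: first- and second-order Fourier
# terms of the phase, plus an intercept.
X = np.column_stack([np.ones(n),
                     np.sin(phase), np.cos(phase),
                     np.sin(2 * phase), np.cos(2 * phase)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta   # physiological component regressed out
```

Because the correction only needs the phase values themselves, a sufficiently accurate phase predictor can substitute for the physiological recordings, which is the point of the paper.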
The trophic classification of lakes using ERTS multispectral scanner data
NASA Technical Reports Server (NTRS)
Blackwell, R. J.; Boland, D. H.
1975-01-01
Lake classification methods based on the use of ERTS data are described. Preliminary classification results obtained by multispectral and digital image processing techniques indicate satisfactory correlation between ERTS data and EPA-supplied water analysis. Techniques for determining lake trophic levels using ERTS data are examined, and data obtained for 20 lakes are discussed.
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
- and Scene-Guided Integration of TLS and Photogrammetric Point Clouds for Landslide Monitoring
NASA Astrophysics Data System (ADS)
Zieher, T.; Toschi, I.; Remondino, F.; Rutzinger, M.; Kofler, Ch.; Mejia-Aguilar, A.; Schlögel, R.
2018-05-01
Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions, have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming at improving the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules involving local morphometric features computed from eigenvectors, elevation and the agreement of the individual point clouds are used to choose, within voxels of selectable size, which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.
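The displacement-vector step can be illustrated with phase correlation between two shaded-relief-like images, one a known circular shift of the other. The data are synthetic, and the paper's image-correlation implementation may differ:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two synthetic "shaded reliefs": the later epoch is a known circular
# shift of the earlier one, mimicking landslide displacement.
size = 128
relief = rng.standard_normal((size, size))
kernel = np.ones((5, 5)) / 25.0   # mild smoothing for spatial structure
relief = np.fft.ifft2(np.fft.fft2(relief) *
                      np.fft.fft2(kernel, (size, size))).real
dy, dx = 6, -3
later = np.roll(relief, (dy, dx), axis=(0, 1))

# Phase correlation: the normalised cross-power spectrum inverse-
# transforms to a sharp peak at the displacement.
F1, F2 = np.fft.fft2(relief), np.fft.fft2(later)
cross = F2 * np.conj(F1)
pc = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
peak = np.unravel_index(int(np.argmax(pc)), pc.shape)
shift = [int(p) if p < size // 2 else int(p) - size for p in peak]
```

Run per tile over the multi-temporal shaded reliefs, the recovered shifts become the displacement vectors; real data would additionally need sub-pixel peak interpolation and a correlation-strength threshold.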
Monitoring damage growth in titanium matrix composites using acoustic emission
NASA Technical Reports Server (NTRS)
Bakuckas, J. G., Jr.; Prosser, W. H.; Johnson, W. S.
1993-01-01
The application of the acoustic emission (AE) technique to locate and monitor damage growth in titanium matrix composites (TMC) was investigated. Damage growth was studied using several optical techniques including a long focal length, high magnification microscope system with image acquisition capabilities. Fracture surface examinations were conducted using a scanning electron microscope (SEM). The AE technique was used to locate damage based on the arrival times of AE events between two sensors. Using model specimens exhibiting a dominant failure mechanism, correlations were established between the observed damage growth mechanisms and the AE results in terms of event amplitude. These correlations were used to monitor the damage growth process in laminates exhibiting multiple modes of damage. Results revealed that the AE technique is a viable and effective tool to monitor damage growth in TMC.
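The two-sensor location step reduces to a one-line inversion of the arrival-time difference. The sensor spacing, wave speed, and source position below are invented for the sketch:

```python
# Two AE sensors a known distance apart; the event location follows from
# the arrival-time difference.  All numbers here are assumed values.
spacing = 200.0    # distance between the two AE sensors, mm
v = 5.0            # wave speed in the laminate, mm/us
x_true = 65.0      # event position measured from sensor 1, mm

t1 = x_true / v              # arrival time at sensor 1
t2 = (spacing - x_true) / v  # arrival time at sensor 2
dt = t1 - t2                 # measured arrival-time difference

# Invert dt = (2*x - spacing) / v for the source position:
x_est = (spacing + v * dt) / 2.0
```

In practice `dt` comes from the recorded AE waveforms (e.g., threshold crossings or cross-correlation), and the estimate degrades with dispersion and timing error, but the geometry is exactly this.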
Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) for spaceborne measurements of CO
NASA Astrophysics Data System (ADS)
Johnson, Brian R.; Kampe, Thomas U.; Cook, William B.; Miecznik, Grzegorz; Novelli, Paul C.; Snell, Hilary E.; Turner-Valle, Jennifer A.
2003-11-01
An instrument concept for an Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) has been developed for measuring tropospheric carbon monoxide (CO) from space. The concept is based upon a correlation technique similar in nature to multi-order Fabry-Perot (FP) interferometer or gas-filter radiometer techniques, which simultaneously measure atmospheric emission from several infrared vibration-rotation lines of CO. Correlation techniques provide a multiplex advantage for increased throughput, along with the high spectral resolution and selectivity necessary for profiling tropospheric CO. Use of unconventional multilayer interference filter designs improves CO spectral-line correlation compared with the traditional FP multi-order technique, approaching the theoretical performance of gas-filter correlation radiometry. In this implementation, however, the gas cell is replaced with a simple, robust solid interference filter. In addition to measuring CO, the correlation-filter technique can be applied to measurements of other important gases such as carbon dioxide, nitrous oxide, and methane. Imaging the scene onto a 2-D detector array enables a limited range of spectral sampling owing to the field-angle dependence of the filter transmission function. An innovative anamorphic optical system provides a relatively large instrument field of view for imaging along the orthogonal direction across the detector array. An important advantage of the IMOFPS concept is that it is a small, low-mass, high-spectral-resolution spectrometer with no moving parts. A small correlation spectrometer like IMOFPS would be well suited for global observations of CO2, CO, and CH4 from low Earth orbit, or for regional observations from geostationary orbit. A prototype instrument is in development for flight demonstration on an airborne platform, with potential applications to atmospheric chemistry, wildfire and biomass-burning monitoring, and chemical dispersion monitoring.
Image correlation nondestructive evaluation of impact damage in a glass fiber composite
NASA Technical Reports Server (NTRS)
Russell, Samuel S.
1990-01-01
Presented in viewgraph format, digital image correlation, damage in fibrous composites, and damaged coupons (cross-ply Scotchply Gl-Ep laminate) are outlined. It was concluded that the image correlation accuracy was 0.03 percent; strains can be processed through the Tsai-Hill failure criterion to qualify the damage; a statistical database must be generated to evaluate the certainty of the damage estimate; size effects need consideration; and better numerical techniques are needed.
Discriminating Induced-Microearthquakes Using New Seismic Features
NASA Astrophysics Data System (ADS)
Mousavi, S. M.; Horton, S.
2016-12-01
We studied characteristics of induced microearthquakes on the basis of waveforms recorded on a limited number of surface receivers, using machine-learning techniques. Forty features in the time, frequency, and time-frequency domains were measured on each waveform, and several techniques, such as correlation-based feature selection, artificial neural networks (ANNs), logistic regression (LR), and X-means clustering, were used as research tools to explore the relationship between these seismic features and source parameters. The results show that spectral features have the highest correlation to source depth. Two new measurements developed as seismic features for this study, spectral centroids and 2D cross-correlations in the time-frequency domain, performed better than the common seismic measurements. These features can be used by machine-learning techniques for efficient automatic classification of low-energy signals recorded at one or more seismic stations. We applied the technique to 440 microearthquakes (-1.7…). Reference: Mousavi, S.M., S.P. Horton, C.A. Langston, and B. Samei (2016), Seismic features and automatic discrimination of deep and shallow induced-microearthquakes using neural network and logistic regression, Geophys. J. Int., doi: 10.1093/gji/ggw258.
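For illustration, the spectral-centroid feature can be sketched as the amplitude-weighted mean frequency of a trace's spectrum. This is a minimal sketch; the function name and the FFT-based formulation are assumptions, not the authors' exact measurement:

```python
import numpy as np

def spectral_centroid(trace, fs):
    """Amplitude-weighted mean frequency (Hz) of a seismic trace's spectrum."""
    spec = np.abs(np.fft.rfft(trace))                 # amplitude spectrum
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)   # frequency axis
    return float(np.sum(freqs * spec) / np.sum(spec))
```

In this picture, depth-dependent differences in high-frequency content along the propagation path show up directly as a shift in this single scalar feature.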
Joint temporal density measurements for two-photon state characterization.
Kuzucu, Onur; Wong, Franco N C; Kurimura, Sunao; Tovstonog, Sergey
2008-10-10
We demonstrate a technique for characterizing two-photon quantum states based on joint temporal correlation measurements using time-resolved single-photon detection by femtosecond up-conversion. We measure for the first time the joint temporal density of a two-photon entangled state, showing clearly the time anticorrelation of the coincident-frequency entangled photon pair generated by ultrafast spontaneous parametric down-conversion under extended phase-matching conditions. The new technique enables us to manipulate the frequency entanglement by varying the down-conversion pump bandwidth to produce a nearly unentangled two-photon state that is expected to yield a heralded single-photon state with a purity of 0.88. The time-domain correlation technique complements existing frequency-domain measurement methods for a more complete characterization of photonic entanglement.
NASA Astrophysics Data System (ADS)
Riasati, Vahid R.
2016-05-01
In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigen-features are then combined and reconstructed into a composite filter, which is used for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation-energy metrics and are presented in this work. The inverse-transformed eigen-bases of the current technique may be thought of as injected sparsity, minimizing the data needed to represent the skeletal structural information associated with the set of targets under consideration.
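The eigen-feature truncation can be sketched in PCA style: decompose the training data, keep the leading eigenvectors, and recombine them into a composite filter scored by normalized correlation. All names are illustrative and the simple summation is an assumption, not the paper's implementation:

```python
import numpy as np

def composite_filter(train_imgs, k=3):
    """Combine the top-k eigenvectors of the training data into one filter."""
    X = np.stack([im.ravel() for im in train_imgs])   # samples x pixels
    # right singular vectors of X are the eigenvectors of X^T X
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k].copy()
    mean_dir = X.mean(axis=0)
    for i in range(len(basis)):                       # resolve SVD sign ambiguity
        if basis[i] @ mean_dir < 0:
            basis[i] = -basis[i]
    return basis.sum(axis=0).reshape(train_imgs[0].shape)

def peak_correlation(filt, patch):
    """Normalized correlation score between the filter and a scene patch."""
    f = (filt - filt.mean()).ravel()
    p = (patch - patch.mean()).ravel()
    return float(f @ p / (np.linalg.norm(f) * np.linalg.norm(p) + 1e-12))
```

A same-class target patch should score markedly higher than background clutter, which is the detection criterion the peak-correlation metric formalizes.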
Pänkäälä, Mikko; Paasio, Ari
2014-01-01
Both respiratory and cardiac motions reduce the quality and consistency of medical imaging, particularly in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition on the respiratory phase and cardiac contractions throughout the imaging procedure. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in a dual gating technique. Electrocardiography (ECG), 3-axis accelerometer, and respiration-belt data from ten healthy volunteers were processed and analyzed. The SCG provides accelerometer-derived respiration (ADR) data and accurate information about quiescent phases within the cardiac cycle; correct information about the status of the ventricles and atria improves the estimate of these quiescent phases. The correlation of the ADR signals with the reference respiration belt was assessed using the Pearson correlation, and high linear correlation was observed between the accelerometer-based measurements and the reference methods (ECG and respiration belt). Above all, owing to its simplicity, the proposed method has high potential for dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563
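The Pearson correlation used to compare the ADR trace with the respiration-belt reference reduces to a few lines (a generic sketch; the signal names and simulated breathing rate below are illustrative):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally sampled signals."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

A value near 1 indicates that the accelerometer-derived trace tracks the belt's respiratory phase closely enough to drive the gate.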
ERIC Educational Resources Information Center
Boucher, Victor J.
2008-01-01
Purpose: The objective was to identify acoustic correlates of laryngeal muscle fatigue in conditions of vocal effort. Method: In a previous study, a technique of electromyography (EMG) served to define physiological signs of "voice fatigue" in laryngeal muscles involved in voicing. These signs correspond to spectral changes in contraction…
Measuring Brain Connectivity: Diffusion Tensor Imaging Validates Resting State Temporal Correlations
Skudlarski, Pawel; Jagannathan, Kanchana; Calhoun, Vince D.; Hampson, Michelle; Skudlarska, Beata A.; Pearlson, Godfrey
2015-01-01
Diffusion tensor imaging (DTI) and resting state temporal correlations (RSTC) are two leading techniques for investigating the connectivity of the human brain. They have been widely used to investigate the strength of anatomical and functional connections between distant brain regions in healthy subjects, and in clinical populations. Though they are both based on magnetic resonance imaging (MRI) they have not yet been compared directly. In this work both techniques were employed to create global connectivity matrices covering the whole brain gray matter. This allowed for direct comparisons between functional connectivity measured by RSTC with anatomical connectivity quantified using DTI tractography. We found that connectivity matrices obtained using both techniques showed significant agreement. Connectivity maps created for a priori defined anatomical regions showed significant correlation, and furthermore agreement was especially high in regions showing strong overall connectivity, such as those belonging to the default mode network. Direct comparison between functional RSTC and anatomical DTI connectivity, presented here for the first time, links two powerful approaches for investigating brain connectivity and shows their strong agreement. It provides a crucial multi-modal validation for resting state correlations as representing neuronal connectivity. The combination of both techniques presented here allows for further combining them to provide richer representation of brain connectivity both in the healthy brain and in clinical conditions. PMID:18771736
Hierarchical clustering of EMD based interest points for road sign detection
NASA Astrophysics Data System (ADS)
Khan, Jesmin; Bhuiyan, Sharif; Adhami, Reza
2014-04-01
This paper presents an automatic road traffic sign detection and recognition system based on hierarchical clustering of interest points and joint transform correlation. The proposed algorithm consists of three stages: interest-point detection, clustering of those points, and similarity search. In the first stage, discriminative, rotation- and scale-invariant interest points are selected from the image edges based on the 1-D empirical mode decomposition (EMD). We propose a two-step unsupervised clustering technique, which is adaptive and based on two criteria. The detected points are initially clustered based on stable local features related to brightness and color, extracted using Gabor filters. Points belonging to each partition are then reclustered using a position feature, depending on the dispersion of the points in the initial cluster. This two-step hierarchical clustering yields the candidate road signs, or regions of interest (ROIs). Finally, a fringe-adjusted joint transform correlation (JTC) technique is used to match unknown signs against the known reference road signs stored in the database. The presented framework provides a novel way to detect road signs in natural scenes, and the results demonstrate the efficacy of the proposed technique, which yields a very low false-hit rate.
Non-invasive diagnostics of the maxillary and frontal sinuses based on diode laser gas spectroscopy.
Lewander, Märta; Lindberg, Sven; Svensson, Tomas; Siemund, Roger; Svanberg, Katarina; Svanberg, Sune
2012-03-01
Suspected, but objectively absent, rhinosinusitis constitutes a major cause of visits to the doctor, high health care costs, and the over-prescription of antibiotics, contributing to the serious problem of resistant bacteria. This situation is largely due to a lack of reliable and widely applicable diagnostic methods. A novel method for the diagnosis of rhinosinusitis based on non-intrusive diode laser gas spectroscopy is presented. The technique is based on light absorption by free gas (oxygen and water vapour) inside the sinuses, and has the potential to be a complementary diagnostic tool in primary health care. The method was evaluated on 40 patients with suspected sinus problems, referred to the diagnostic radiology clinic for low-dose computed tomography (CT), which was used as the reference technique. The data obtained with the new laser-based method correlated well with the grading of opacification and ventilation using CT. The sensitivity and specificity were estimated to be 93% and 61%, respectively, for the maxillary sinuses, and 94% and 86%, respectively, for the frontal sinuses. Good reproducibility was shown. The laser-based technique presents real-time clinical data that correlate well to CT findings, while being non-intrusive and avoiding the use of ionizing radiation.
NASA Astrophysics Data System (ADS)
Corciulo, M.; Roux, P.; Campillo, M.; Dubucq, D.
2010-12-01
Passive imaging from noise cross-correlation is a well-established analysis at continental and regional scales, whereas its use at the local scale for seismic exploration purposes is still uncertain. The development of passive imaging by cross-correlation analysis is based on the extraction of the Green's function from seismic noise data. In a field that is completely random in time and space, cross-correlation retrieves the complete Green's function whatever the complexity of the medium. At the exploration scale and at frequencies above 2 Hz, however, the noise sources are not ideally distributed around the stations, which strongly affects the extraction of direct arrivals from the noise cross-correlation process. To overcome this problem, the coda waves extracted from noise correlation can be useful. Coda waves describe long, scattered paths that sample the medium in different ways, making them sensitive to weak velocity variations without depending on the noise-source distribution. Indeed, scatterers in the medium behave as a set of secondary noise sources, which randomizes the spatial distribution of noise sources contributing to the coda waves in the correlation process. We developed a new technique to measure weak velocity changes based on the computation of local phase variations (instantaneous phase variation, or IPV) of the cross-correlated signals. This newly developed technique draws on the doublet and stretching techniques classically used to monitor weak velocity variations from coda waves. We apply IPV to data acquired in North America (Canada) on a 1-km-square seismic network of 397 stations. The data used to study temporal variations are cross-correlated signals computed on 10 minutes of ambient noise in the 2-5 Hz frequency band. As the data set was acquired over five days, about 660 files are processed to perform a complete temporal analysis for each station pair.
The IPV estimates the phase shift over the entire signal length without any assumption about the medium velocity. The instantaneous phase is computed using the Hilbert transform of the signal. For each station pair, we measure the phase difference between successive correlation functions calculated from 10 minutes of ambient noise. We then fit the instantaneous phase shift with a first-order polynomial; the velocity variation corresponds to the slope of this fit. Compared with other techniques, the advantage of IPV is speed: it efficiently measures velocity variations over large data sets. Both experimental results and numerical tests on synthetic signals will be presented to assess the reliability of the IPV technique, with comparisons to the doublet and stretching methods.
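A minimal sketch of the IPV measurement as described: take the instantaneous phase of each correlation function via the Hilbert transform, difference them, and fit a first-order polynomial whose slope tracks the velocity change. Function and variable names are assumptions, not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert

def ipv_phase_rate(c_ref, c_cur, fs):
    """Rate of instantaneous-phase drift (rad/s) between two correlation
    functions; for a uniform stretch of the coda it is proportional to
    the relative velocity change."""
    phi_ref = np.unwrap(np.angle(hilbert(c_ref)))   # instantaneous phase
    phi_cur = np.unwrap(np.angle(hilbert(c_cur)))
    t = np.arange(len(c_ref)) / fs
    slope, _ = np.polyfit(t, phi_cur - phi_ref, 1)  # first-order polynomial fit
    return float(slope)
```

Because only a single linear fit per signal pair is needed, this is far cheaper than resampling-based stretching searches, which is the speed advantage claimed above.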
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. Algorithms are devised for processing-procedure selection, SAR correlation-function implementation, use of multiple array processors, corner turning, variable-reference-length azimuth processing, and range-migration handling. The Interim Digital Processor (IDP), originally implemented to handle Seasat SAR data, has been adapted for the SIR-B and offers a resolution of 100 km using a processing procedure based on the fast-Fourier-transform fast-correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-85.
Determination of high temperature strains using a PC based vision system
NASA Astrophysics Data System (ADS)
McNeill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.
1992-09-01
With the widespread availability of video digitizers and inexpensive personal computers, the use of computer vision as an experimental tool is becoming commonplace. These systems are used to make a wide variety of measurements, ranging from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure full-field displacements and gradients of the surface of an object subjected to a driving force. The technique has shown its utility in measuring the deformation and movement of objects ranging from simple translation to fluid velocity profiles to crack-tip deformation of solid rocket fuel. It has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC-based Sub-Pixel Digital Image Correlation system has yielded an accurate, easy-to-use system for measuring surface displacements and gradients. Experiments have shown the system to be viable for measuring thermal strain.
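The sub-pixel idea can be illustrated in one dimension: an integer-pixel normalized-correlation search followed by a three-point parabolic fit of the correlation peak. This is a generic sketch of the approach, with illustrative names and window sizes, not the authors' implementation:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def subpixel_shift_1d(ref, cur, win=8, search=4):
    """Estimate a 1-D displacement to sub-pixel precision."""
    c0 = len(ref) // 2
    template = ref[c0 - win:c0 + win]
    scores = np.array([zncc(template, cur[c0 - win + d:c0 + win + d])
                       for d in range(-search, search + 1)])
    i = int(np.argmax(scores))
    if 0 < i < len(scores) - 1:
        # fit a parabola through the peak and its two neighbors
        denom = scores[i - 1] - 2 * scores[i] + scores[i + 1]
        delta = 0.5 * (scores[i - 1] - scores[i + 1]) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    return (i - search) + delta          # displacement in pixels
```

The parabolic refinement is what pushes the accuracy below one pixel; full DIC extends the same idea to 2-D windows and adds deformation gradients.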
Brain imaging and cognitive dysfunctions in Huntington's disease
Montoya, Alonso; Price, Bruce H.; Menear, Matthew; Lepage, Martin
2006-01-01
Recent decades have seen tremendous growth in our understanding of the cognitive dysfunctions observed in Huntington's disease (HD). Advances in neuroimaging have contributed greatly to this growth. We reviewed the role that structural and functional neuroimaging techniques have played in elucidating the cerebral bases of the cognitive deficits associated with HD. We conducted a computer-based search using PubMed and PsycINFO databases to retrieve studies of patients with HD published between 1965 and December 2004 that reported measures on cognitive tasks and used neuroimaging techniques. Structural neuroimaging has provided important evidence of morphological brain changes in HD. Striatal and cortical atrophy are the most common findings, and they correlate with cognitive deficits in attention, working memory and executive functions. Functional studies have also demonstrated correlations between striatal dysfunction and cognitive performance. Striatal hypoperfusion and decreased glucose utilization correlate with executive dysfunction. Hypometabolism also occurs throughout the cerebral cortex and correlates with performance on recognition memory, language and perceptual tests. Measures of presynaptic and postsynaptic dopamine biochemistry have also correlated with measurements of episodic memory, speed of processing and executive functioning. Aided by the results of numerous neuroimaging studies, it is becoming increasingly clear that cognitive deficits in HD involve abnormal connectivity between the basal ganglia and cortical areas. In the future, neuroimaging techniques may shed the most light on the pathophysiology of HD by defining neurodegenerative disease phenotypes as a valuable tool for knowing when patients become “symptomatic,” having been in a gene-positive presymptomatic state, and as a biomarker in following the disease, thereby providing a prospect for improved patient care. PMID:16496032
NASA Astrophysics Data System (ADS)
Noda, Isao
2014-07-01
A comprehensive survey of new and noteworthy developments that have advanced the frontiers of 2D correlation spectroscopy during the last four years is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy; significant conceptual developments in the field; data pretreatment methods and other pertinent topics; and patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, and correlation under multiple perturbation effects; orthogonal sample design; predicting 2D correlation spectra; manipulating and comparing 2D spectra; correlation strategies based on segmented data blocks, such as moving-window analysis; features such as determination of sequential order and enhanced spectral resolution; statistical 2D spectroscopy using covariance and other statistical metrics; hetero-correlation analysis; and the sample-sample correlation technique. Data pretreatment operations prior to 2D correlation analysis are discussed, including correction for physical effects, background and baseline subtraction, selection of the reference spectrum, normalization and scaling of data, derivative spectra and deconvolution techniques, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak-position shift phenomena, variable sampling increments, computation and software, and display schemes such as color-coded formats, slice and power spectra, and tabulation.
Geographically correlated errors observed from a laser-based short-arc technique
NASA Astrophysics Data System (ADS)
Bonnefond, P.; Exertier, P.; Barlier, F.
1999-07-01
The laser-based short-arc technique has been developed in order to avoid local errors which affect the dynamical orbit computation, such as those due to mismodeling in the geopotential. It is based on a geometric method and consists in fitting short arcs (about 4000 km), issued from a global orbit, with satellite laser ranging tracking measurements from a ground station network. Ninety-two TOPEX/Poseidon (T/P) cycles of laser-based short-arc orbits have then been compared to JGM-2 and JGM-3 T/P orbits computed by the Precise Orbit Determination (POD) teams (Service d'Orbitographie Doris/Centre National d'Etudes Spatiales and Goddard Space Flight Center/NASA) over two areas: (1) the Mediterranean area and (2) a part of the Pacific (including California and Hawaii) called hereafter the U.S. area. Geographically correlated orbit errors in these areas are clearly evidenced: for example, -2.6 cm and +0.7 cm for the Mediterranean and U.S. areas, respectively, relative to JGM-3 orbits. However, geographically correlated errors (GCE) which are commonly linked to errors in the gravity model, can also be due to systematic errors in the reference frame and/or to biases in the tracking measurements. The short-arc technique being very sensitive to such error sources, our analysis however demonstrates that the induced geographical systematic effects are at the level of 1-2 cm on the radial orbit component. Results are also compared with those obtained with the GPS-based reduced dynamic technique. The time-dependent part of GCE has also been studied. Over 6 years of T/P data, coherent signals in the radial component of T/P Precise Orbit Ephemeris (POE) are clearly evidenced with a time period of about 6 months. In addition, impact of time varying-error sources coming from the reference frame and the tracking data accuracy has been analyzed, showing a possible linear trend of about 0.5-1 mm/yr in the radial component of T/P POE.
Stabilization techniques for reactive aggregate in soil-cement base course : technical summary.
DOT National Transportation Integrated Search
2003-01-01
The objectives of this research are 1) to identify the mineralogical properties of soil-cement bases which have heaved or can potentially heave, 2) to simulate expansion of cement-stabilized soil in the laboratory, 3) to correlate expansion with the ...
Carrier-phase multipath corrections for GPS-based satellite attitude determination
NASA Technical Reports Server (NTRS)
Axelrad, A.; Reichert, P.
2001-01-01
This paper demonstrates the high degree of spatial repeatability of these errors for a spacecraft environment and describes a correction technique, termed the sky map method, which exploits the spatial correlation to correct measurements and improve the accuracy of GPS-based attitude solutions.
Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites
NASA Technical Reports Server (NTRS)
Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.
2010-01-01
This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for identifying the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The proposed method is based on identifying the crack tip location using Digital Image Correlation and calculating the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the proposed methodology yields crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation-based technique can be used to extract crack resistance curves in compact compression tests, for which FEM-based techniques are inadequate.
A combined method for correlative 3D imaging of biological samples from macro to nano scale
NASA Astrophysics Data System (ADS)
Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Antonopoulos, Georgios C.; Knudsen, Lars; Wrede, Christoph; Izykowski, Nicole; Grothausmann, Roman; Jonigk, Danny; Ochs, Matthias; Ripken, Tammo; Kühnel, Mark P.; Meyer, Heiko
2016-10-01
Correlative analysis requires examination of a specimen from macro to nano scale as well as applicability of analytical methods ranging from morphological to molecular. Accomplishing this with one and the same sample is laborious at best, due to deformation and biodegradation during measurements or intermediary preparation steps. Furthermore, data alignment using differing imaging techniques turns out to be a complex task, which considerably complicates the interconnection of results. We present correlative imaging of the accessory rat lung lobe by combining a modified Scanning Laser Optical Tomography (SLOT) setup with a specially developed sample preparation method (CRISTAL). CRISTAL is a resin-based embedding method that optically clears the specimen while allowing sectioning and preventing degradation. We applied and correlated SLOT with Multi Photon Microscopy, histological and immunofluorescence analysis as well as Transmission Electron Microscopy, all in the same sample. Thus, combining CRISTAL with SLOT enables the correlative utilization of a vast variety of imaging techniques.
Feasibility study consisting of a review of contour generation methods from stereograms
NASA Technical Reports Server (NTRS)
Kim, C. J.; Wyant, J. C.
1980-01-01
A review of techniques for obtaining contour information from stereo pairs is given. Photogrammetric principles, including a description of stereoscopic vision, are presented. Conventional contour generation methods, such as the photogrammetric plotting technique, the electronic correlator, and the digital correlator, are described. Coherent optical techniques for contour generation are discussed and compared to the electronic correlator. The optical techniques are divided into two categories: (1) image-plane operation and (2) frequency-plane operation. Image-plane correlators are further divided into three categories: (1) image-to-image correlators, (2) interferometric correlators, and (3) positive-negative transparencies. Frequency-plane correlators are divided into two categories: (1) correlation of Fourier transforms and (2) filtering techniques.
NASA Astrophysics Data System (ADS)
Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.
Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, digital morphing has been applied manually, which is time consuming. To avoid human intervention, an automatic morphing procedure is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, an automatic morphing technique integrating a genetic algorithm with the feature-based image metamorphosis technique was used to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd over the eastern US on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to the data collected at 00:00 and 04:00 am; these images were also manually morphed. Images at 01:00, 02:00, and 03:00 am were interpolated by both GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing: the correlation coefficients between the manually morphed images and the originals are 0.905, 0.900, and 0.905 for 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients based on the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: remote sensing, image registration, hydrology, genetic algorithm, morphing, NEXRAD.
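As a point of reference, the naive gap-filler that morphing improves on is a plain cross-dissolve between the bracketing frames, and the correlation coefficient used for scoring is a one-liner. Both are generic sketches, not the GRAM implementation:

```python
import numpy as np

def cross_dissolve(frame_a, frame_b, alpha):
    """Linear interpolation between two rain-rate fields (no warping)."""
    return (1 - alpha) * frame_a + alpha * frame_b

def corr_coeff(est, truth):
    """Correlation coefficient between an interpolated frame and truth."""
    e = est.ravel() - est.mean()
    t = truth.ravel() - truth.mean()
    return float(e @ t / (np.linalg.norm(e) * np.linalg.norm(t) + 1e-12))
```

A cross-dissolve fades rain cells in and out in place, whereas morphing moves them along the storm track; scoring both against the withheld radar frames is exactly the comparison reported above.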
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
Nanostructure studies of strongly correlated materials.
Wei, Jiang; Natelson, Douglas
2011-09-01
Strongly correlated materials exhibit an amazing variety of phenomena, including metal-insulator transitions, colossal magnetoresistance, and high temperature superconductivity, as strong electron-electron and electron-phonon couplings lead to competing correlated ground states. Recently, researchers have begun to apply nanostructure-based techniques to this class of materials, examining electronic transport properties on previously inaccessible length scales, and applying perturbations to drive systems out of equilibrium. We review progress in this area, particularly emphasizing work in transition metal oxides (Fe3O4, VO2), manganites, and high temperature cuprate superconductors. We conclude that such nanostructure-based studies have strong potential to reveal new information about the rich physics at work in these materials.
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof of concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolen, James; Harris, Philip; Marzani, Simone
Here, we explore the scale-dependence and correlations of jet substructure observables to improve upon existing techniques in the identification of highly Lorentz-boosted objects. Modified observables are designed to remove correlations from existing theoretically well-understood observables, providing practical advantages for experimental measurements and searches for new phenomena. We study such observables in W jet tagging and provide recommendations for observables based on considerations beyond signal and background efficiencies.
Techniques for noise removal and registration of TIMS data
Hummer-Miller, S.
1990-01-01
Extracting subtle differences from highly correlated thermal infrared aircraft data is possible with appropriate noise filters, constructed and applied in the spatial frequency domain. This paper discusses a heuristic approach to designing noise filters for removing high- and low-spatial-frequency striping and banding. Techniques for registering thermal infrared aircraft data to a topographic base using Thematic Mapper data are presented. The noise removal and registration techniques are applied to TIMS thermal infrared aircraft data.
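A minimal frequency-domain destriping filter in this spirit (a generic sketch, not the author's actual filter design) notches the spectral column responsible for horizontal striping:

```python
import numpy as np

def destripe(img):
    """Suppress horizontal striping by notching the zero-horizontal-frequency
    column of the 2-D spectrum (a crude spatial-frequency-domain noise filter)."""
    F = np.fft.fft2(img)
    F[1:, 0] = 0.0          # kill row-to-row offsets (keep the DC term)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
stripes = np.linspace(-1.0, 1.0, 32)[:, None] * np.ones((1, 32))
noisy = clean + stripes        # banded image: each row offset differently
restored = destripe(noisy)
```

A practical filter would use tapered notches rather than hard zeroing to avoid ringing, which is where the heuristic design discussed in the paper comes in.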
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
An improved method of fitting Prony series to data on viscoelastic materials involves the use of least-squares optimization techniques. The method based on optimization techniques yields closer correlation with data than the traditional method. It involves no assumptions regarding the gamma'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially-constrained-optimization techniques.
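The fitting idea can be illustrated with a simpler variant than the partially constrained optimization described above: if the relaxation times are fixed in advance (a common simplification, assumed here), the Prony coefficients of G(t) = G_inf + sum_i g_i exp(-t/tau_i) follow from linear least squares.

```python
import numpy as np

def fit_prony(t, g, taus):
    """Least-squares fit of a Prony series with fixed relaxation times tau_i,
    which makes the problem linear in the coefficients."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coeffs  # [G_inf, g_1, g_2, ...]

t = np.linspace(0.0, 10.0, 200)
g = 1.0 + 2.0 * np.exp(-t / 0.5) + 0.5 * np.exp(-t / 3.0)  # synthetic relaxation data
coeffs = fit_prony(t, g, taus=[0.5, 3.0])
```

Treating the tau_i themselves as design variables, as the paper does, turns this into the nonlinear constrained-optimization problem it describes.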
Saager, Rolf B; Balu, Mihaela; Crosignani, Viera; Sharif, Ata; Durkin, Anthony J; Kelly, Kristen M; Tromberg, Bruce J
2015-06-01
The combined use of nonlinear optical microscopy and broadband reflectance techniques to assess melanin concentration and distribution thickness in vivo over the full range of Fitzpatrick skin types is presented. Twelve patients were measured using multiphoton microscopy (MPM) and spatial frequency domain spectroscopy (SFDS) on both dorsal forearm and volar arm, which are generally sun-exposed and non-sun-exposed areas, respectively. Both MPM and SFDS measured melanin volume fractions between (skin type I non-sun-exposed) and 20% (skin type VI sun exposed). MPM measured epidermal (anatomical) thickness values ~30-65 μm, while SFDS measured melanin distribution thickness based on diffuse optical path length. There was a strong correlation between melanin concentration and melanin distribution (epidermal) thickness measurements obtained using the two techniques. While SFDS does not have the ability to match the spatial resolution of MPM, this study demonstrates that melanin content as quantified using SFDS is linearly correlated with epidermal melanin as measured using MPM (R² = 0.8895). SFDS melanin distribution thickness is correlated to MPM values (R² = 0.8131). These techniques can be used individually and/or in combination to advance our understanding and guide therapies for pigmentation-related conditions as well as light-based treatments across a full range of skin types.
Fiber fault location utilizing traffic signal in optical network.
Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi
2013-10-07
We propose and experimentally demonstrate a method for fault location in optical communication network. This method utilizes the traffic signal transmitted across the network as probe signal, and then locates the fault by correlation technique. Compared with conventional techniques, our method has a simple structure and low operation expenditure, because no additional device is used, such as light source, modulator and signal generator. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse ranging technique. Moreover, signal extraction process can improve the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and detection range of over 23 km with -8-dBm mean launched power in optical network based on synchronous digital hierarchy protocols.
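The core of the location step can be sketched as follows. All numbers here are illustrative assumptions (sampling rate, group velocity, reflection strength), not the paper's experimental parameters: the lag of the cross-correlation peak between the launched traffic-like signal and its reflection gives the round-trip delay, hence the fault distance.

```python
import numpy as np

def locate_fault(probe, echo, fs, v_group):
    """Estimate one-way fault distance from the lag of the cross-correlation
    peak between the launched signal and its reflection."""
    corr = np.correlate(echo, probe, mode="full")
    lag = np.argmax(corr) - (len(probe) - 1)   # round-trip delay in samples
    delay = lag / fs
    return v_group * delay / 2.0               # halve for one-way distance

fs = 1e9                       # 1 GS/s sampling (assumed)
v_group = 2e8                  # ~ c/n in silica fibre, m/s
rng = np.random.default_rng(1)
probe = rng.standard_normal(4096)              # traffic-like broadband signal
delay_samples = 500                            # 0.5 us round trip -> 50 m fault
echo = np.zeros_like(probe)
echo[delay_samples:] = 0.1 * probe[:-delay_samples]   # weak reflection
echo += 0.01 * rng.standard_normal(echo.size)         # detector noise
dist = locate_fault(probe, echo, fs, v_group)
```

Because the range resolution is set by the signal bandwidth rather than a probe pulse width, this is how correlation detection escapes the pulse-ranging trade-off mentioned in the abstract.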
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (i.e., comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
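Of the four algorithms, phase correlation is the most compact to illustrate. A minimal 2-D sketch (synthetic images and integer shifts only; a clinical system would add subpixel interpolation and rotation handling) recovers a translation from the peak of the normalized cross-power spectrum:

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer (dy, dx) translation between two images from the
    inverse FFT of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    peak = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    # wrap shifts larger than half the image back to negative values
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))   # simulated misalignment
dy, dx = phase_correlation(ref, moved)
```

Whitening the spectrum is what makes the method robust to the intensity differences between digitally reconstructed radiographs and treatment-room x-rays.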
Consideration of correlativity between litho and etching shape
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Mito, Hiroaki; Shinoda, Shinichi; Toyoda, Yasutaka
2012-03-01
We developed an effective method for evaluating the correlation between the shapes of litho and etch patterns. The purpose of the method is to index the post-etch pattern shape on the wafer against the pattern shape produced on the wafer by the lithography process. The method therefore measures the characteristics of the wafer pattern shape after lithography and can predict hotspot pattern shapes after the etching process. The method adopts a metrology management system based on DBM (Design Based Metrology), that is, highly accurate contouring created by an edge detection algorithm in a wafer CD-SEM. Currently, as semiconductor manufacturing moves toward ever smaller feature sizes, more aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and lithography management, and this has a big impact on the semiconductor market. Two-dimensional quantification of wafer pattern shape is important as an optimal solution to these problems. Although one-dimensional shape measurement has been performed with conventional techniques, two-dimensional shape management is needed in the mass-production line under the influence of RET. We developed a technique for analyzing the distribution of shape-edge performance as the shape-management technique. In this study, we conducted experiments on a pattern correlation method (Measurement Based Contouring) as a two-dimensional litho and etch evaluation technique; that is, we observed the identical position on the wafer after litho and after etch, making it possible to analyze the edge variability of the same position with high precision.
Modified signed-digit arithmetic based on redundant bit representation.
Huang, H; Itoh, M; Yatagai, T
1994-09-10
Fully parallel modified signed-digit arithmetic operations are realized based on a proposed redundant bit representation of the digits. A new truth-table minimization technique based on redundant-bit-representation coding is presented. It is shown that only 34 minterms are needed to implement one-step modified signed-digit addition and subtraction with this new representation. Two optical implementation schemes, correlation and matrix multiplication, are described. Experimental demonstrations of the correlation architecture are presented. Both architectures use fixed minterm masks for arbitrary-length operands, taking full advantage of the parallelism of the modified signed-digit number system and of optics.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
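The principal component analysis step can be sketched with a plain SVD (a generic illustration on synthetic features, not the paper's classifier pipeline):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # centre the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in reduced space

rng = np.random.default_rng(3)
# 200 samples, 10 features, variance concentrated in 2 latent directions
latent = rng.standard_normal((200, 2)) * np.array([5.0, 2.0])
mix = rng.standard_normal((2, 10))
X = latent @ mix + 0.01 * rng.standard_normal((200, 10))
Z = pca_reduce(X, 2)
```

Sweeping `n_components` and re-measuring classifier accuracy is exactly the experiment the paper reports across its three reduction techniques.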
Study on fast measurement of sugar content of yogurt using Vis/NIR spectroscopy techniques
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli
2006-09-01
In order to measure the sugar content of yogurt rapidly, a fast measurement technique using Vis/NIR spectroscopy was established. Twenty-five samples selected separately from five different brands of yogurt were measured by Vis/NIR spectroscopy. The sugar content of the yogurt at the positions scanned by the spectrometer was measured with a sugar content meter. A mathematical model relating sugar content to the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS). The correlation coefficient of sugar content based on the PLS model is more than 0.894, with a standard error of calibration (SEC) of 0.356 and a standard error of prediction (SEP) of 0.389. In quantitative prediction of the sugar content of 35 yogurt samples from the 5 brands, the correlation coefficient between predicted and measured values is more than 0.934. The results show good to excellent prediction performance, and the Vis/NIR spectroscopy technique had significantly greater accuracy for determining sugar content. It was concluded that the Vis/NIR measurement technique is a reliable means of fast measurement of the sugar content of yogurt, and a new method for this measurement was established.
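The PLS calibration can be sketched with a minimal NIPALS PLS1 implementation (synthetic low-rank "spectra" here, not the yogurt data; a real calibration would also hold out samples for SEP estimation):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: regression vector mapping centred spectra to y."""
    xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w = w / np.linalg.norm(w)        # weight vector
        t = Xr @ w                       # scores
        p = Xr.T @ t / (t @ t)           # X loadings
        qk = (yr @ t) / (t @ t)          # y loading
        Xr = Xr - np.outer(t, p)         # deflate
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.inv(P.T @ W) @ q   # regression coefficients
    return B, xm, ym

def pls1_predict(X, B, xm, ym):
    return (X - xm) @ B + ym

rng = np.random.default_rng(7)
T = rng.standard_normal((60, 3))
X = T @ rng.standard_normal((3, 40))       # synthetic "spectra"
y = T @ np.array([1.0, 2.0, 3.0])          # synthetic "sugar content"
B, xm, ym = pls1_fit(X, y, n_components=3)
pred = pls1_predict(X, B, xm, ym)
```

PLS is preferred over ordinary regression here because adjacent Vis/NIR wavelengths are strongly collinear, which NIPALS handles by building orthogonal score directions.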
Advances in Magnetic Resonance Imaging of the Skull Base
Kirsch, Claudia F.E.
2014-01-01
Introduction Over the past 20 years, magnetic resonance imaging (MRI) has advanced due to new techniques involving increased magnetic field strength and developments in coils and pulse sequences. These advances allow increased opportunity to delineate the complex skull base anatomy and may guide the diagnosis and treatment of the myriad of pathologies that can affect the skull base. Objectives The objective of this article is to provide a brief background of the development of MRI and illustrate advances in skull base imaging, including techniques that allow improved conspicuity, characterization, and correlative physiologic assessment of skull base pathologies. Data Synthesis Specific radiographic illustrations of increased skull base conspicuity including the lower cranial nerves, vessels, foramina, cerebrospinal fluid (CSF) leaks, and effacement of endolymph are provided. In addition, MRIs demonstrating characterization of skull base lesions, such as recurrent cholesteatoma versus granulation tissue or abscess versus tumor, are also provided as well as correlative clinical findings in CSF flow studies in a patient pre- and post-suboccipital decompression for a Chiari I malformation. Conclusions This article illustrates MRI radiographic advances over the past 20 years, which have improved clinicians' ability to diagnose, define, and hopefully improve the treatment and outcomes of patients with underlying skull base pathologies. PMID:25992137
Gilbert, Fabian; Böhm, Dirk; Eden, Lars; Schmalzl, Jonas; Meffert, Rainer H; Köstler, Herbert; Weng, Andreas M; Ziegler, Dirk
2016-08-22
The Goutallier Classification is a semiquantitative classification system for determining the amount of fatty degeneration in rotator cuff muscles. Although initially proposed for axial computed tomography scans, it is currently applied to magnetic resonance imaging (MRI) scans. Its role in clinical use is controversial, as the reliability of the classification has been shown to be inconsistent. The purpose of this study was to compare the semiquantitative MRI-based Goutallier Classification applied by 5 different raters to experimental MR spectroscopic quantitative fat measurement, in order to determine the correlation between this classification system and the true extent of fatty degeneration shown by spectroscopy. MRI scans of 42 patients with rotator cuff tears were examined by 5 shoulder surgeons and were graded according to the MRI-based Goutallier Classification proposed by Fuchs et al. Additionally, the fat/water ratio was measured with MR spectroscopy using the experimental SPLASH technique. The semiquantitative grading according to the Goutallier Classification was statistically correlated with the quantitatively measured fat/water ratio using Spearman's rank correlation. Statistical analysis of the data revealed only fair correlation between the Goutallier Classification system and the quantitative fat/water ratio, with R = 0.35 (p < 0.05). By dichotomizing the scale the correlation was 0.72. The interobserver and intraobserver reliabilities were substantial, with R = 0.62 and R = 0.74 (p < 0.01). The correlation between the semiquantitative MRI-based Goutallier Classification system and MR spectroscopic fat measurement is weak. As an adequate estimation of fatty degeneration based on standard MRI may not be possible, quantitative methods need to be considered in order to increase diagnostic safety and thus provide patients with ideal care in regard to the amount of fatty degeneration.
Spectroscopic MR measurement may increase the accuracy of the Goutallier classification and thus improve the prediction of clinical results after rotator cuff repair. However, these techniques are currently only available in an experimental setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Mary A.; Tangyunyong, Paiboon; Cole, Edward I.
2016-01-14
Laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically-active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically-active defect behavior including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the pn junction. The cluster included an open core dislocation. Even though LEDs with few dark AVM defect signals did not correlate strongly with power loss, direct association between increased open core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].
High-throughput electrical characterization for robust overlay lithography control
NASA Astrophysics Data System (ADS)
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
2017-03-01
Realizing sensitive, high throughput and robust overlay measurement is a challenge in current 14nm and advanced upcoming nodes with the transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviation in overlay has significant impact on reliability and yield [1]. An exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specification [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques to meet these stringent processing requirements due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical measurement based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance where minima and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating the electrical measurement based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for simultaneously accessing overlay as well as process window and margins from a robust, high throughput electrical measurement approach.
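The parabolic-fit extraction of overlay can be illustrated with synthetic data (all values invented; the real macro measures resistance across structures with programmed misalignments): the vertex of the fitted parabola recovers the overlay error.

```python
import numpy as np

# Resistance vs. intentional misalignment: the minimum of the fitted
# parabola recovers the unknown overlay shift.
offsets = np.array([-30., -20., -10., 0., 10., 20., 30.])    # nm, programmed
true_overlay = 4.0                                           # nm, to be found
resistance = 100.0 + 0.05 * (offsets - true_overlay) ** 2    # synthetic ohms
a, b, c = np.polyfit(offsets, resistance, 2)                 # a*x^2 + b*x + c
overlay = -b / (2.0 * a)                                     # vertex location
```

With noisy measurements the same fit also yields curvature and residuals, which is where the process-window characterization mentioned above comes from.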
Security of statistical data bases: invasion of privacy through attribute correlational modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palley, M.A.
This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.
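ACM's core step, regressing a confidential attribute on non-confidential ones, can be sketched as follows (toy data and attribute names invented; a real attack would first assemble the synthetic database from perturbed aggregate query responses):

```python
import numpy as np

# Synthetic "database": age and years-of-service are queryable,
# salary is the confidential attribute the attacker wants.
rng = np.random.default_rng(4)
age = rng.uniform(25, 65, 300)
years = age - 22 + rng.normal(0, 2, 300)
salary = 20_000 + 1_500 * years + rng.normal(0, 5_000, 300)

# Fit the regression model on the (synthetic) database ...
A = np.column_stack([np.ones_like(age), age, years])
coef, *_ = np.linalg.lstsq(A, salary, rcond=None)

# ... then estimate a known individual's confidential value.
target = np.array([1.0, 40.0, 18.0])   # intercept, age 40, 18 years service
estimate = target @ coef
```

The defense implication is the paper's point: even legitimate aggregate queries can leak enough correlational structure to narrow a confidential value substantially.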
NASA Astrophysics Data System (ADS)
Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.
2017-02-01
We present a three-step approach based on the commercial VIAPIX® module for road traffic sign recognition and identification. Firstly, detection in a scene of all objects having characteristics of traffic signs is performed. This is followed by a first-level recognition based on correlation, which consists in comparing each detected object with a set of reference images from a database. Finally, a second level of identification allows us to confirm or correct the previous identification. In this study, we perform a correlation-based analysis by combining and adapting the Vander Lugt correlator with the nonlinear joint transform correlator (JTC). Of particular significance, this approach permits a reliable decision to be made on road traffic sign identification. We further discuss a robust scheme allowing us to track a detected road traffic sign in a video sequence for the purpose of increasing the decision performance of our system. This approach can have broad practical applications in the maintenance and rehabilitation of transportation infrastructure, or for driver assistance.
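The first-level recognition amounts to scoring a detected patch against each reference image. A minimal digital analogue of the optical correlators (invented toy database, not the VIAPIX pipeline) uses zero-mean normalized cross-correlation:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation score of equal-size patches."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(5)
# toy reference database of sign templates (random stand-ins)
database = {name: rng.random((16, 16)) for name in ["stop", "yield", "limit50"]}
detected = database["yield"] + 0.05 * rng.standard_normal((16, 16))  # noisy view
best = max(database, key=lambda name: ncc(detected, database[name]))
```

The Vander Lugt and JTC correlators perform essentially this comparison optically, in parallel over the whole reference set.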
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Ohto, Tatsuhiko; Usui, Kota; Hasegawa, Taisuke; Bonn, Mischa; Nagata, Yuki
2015-09-28
Interfacial water structures have been studied intensively by probing the O-H stretch mode of water molecules using sum-frequency generation (SFG) spectroscopy. This surface-specific technique is finding increasingly widespread use, and accordingly, computational approaches to calculate SFG spectra using molecular dynamics (MD) trajectories of interfacial water molecules have been developed and employed to correlate specific spectral signatures with distinct interfacial water structures. Such simulations typically require relatively long (several nanoseconds) MD trajectories to allow reliable calculation of the SFG response functions through the dipole moment-polarizability time correlation function. These long trajectories limit the use of computationally expensive MD techniques such as ab initio MD and centroid MD simulations. Here, we present an efficient algorithm determining the SFG response from the surface-specific velocity-velocity correlation function (ssVVCF). This ssVVCF formalism allows us to calculate SFG spectra using a MD trajectory of only ∼100 ps, resulting in the substantial reduction of the computational costs, by almost an order of magnitude. We demonstrate that the O-H stretch SFG spectra at the water-air interface calculated by using the ssVVCF formalism well reproduce those calculated by using the dipole moment-polarizability time correlation function. Furthermore, we applied this ssVVCF technique for computing the SFG spectra from the ab initio MD trajectories with various density functionals. We report that the SFG responses computed from both ab initio MD simulations and MD simulations with an ab initio based force field model do not show a positive feature in its imaginary component at 3100 cm⁻¹.
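The spirit of correlation-function-based spectra can be shown with a scalar toy version: the power spectrum of a velocity autocorrelation function peaks at the underlying vibrational frequency. This is a generic VACF sketch on a synthetic signal, not the surface-specific ssVVCF formalism itself.

```python
import numpy as np

def vacf_spectrum(v, dt):
    """Power spectrum of the velocity autocorrelation function; peaks mark
    vibrational frequencies (the idea behind VVCF-based spectra)."""
    v = v - v.mean()
    n = len(v)
    acf = np.correlate(v, v, mode="full")[n - 1:] / n   # one-sided VACF
    spec = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(len(acf), dt)
    return freqs, spec

dt = 0.01
t = np.arange(0.0, 50.0, dt)
rng = np.random.default_rng(6)
v = np.cos(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)
freqs, spec = vacf_spectrum(v, dt)
peak_freq = freqs[np.argmax(spec)]
```

Because velocities converge faster than dipole-polarizability products, correlation functions of this kind need much shorter trajectories, which is the source of the ∼order-of-magnitude saving quoted above.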
A technique for estimating dry deposition velocities based on similarity with latent heat flux
NASA Astrophysics Data System (ADS)
Pleim, Jonathan E.; Finkelstein, Peter L.; Clarke, John F.; Ellestad, Thomas G.
Field measurements of chemical dry deposition are needed to assess impacts and trends of airborne contaminants on the exposure of crops and unmanaged ecosystems as well as for the development and evaluation of air quality models. However, accurate measurements of dry deposition velocities require expensive eddy correlation measurements and can only be practically made for a few chemical species such as O3 and CO2. On the other hand, operational dry deposition measurements such as those used in large area networks involve relatively inexpensive standard meteorological and chemical measurements but rely on less accurate deposition velocity models. This paper describes an intermediate technique which can give accurate estimates of dry deposition velocity for chemical species whose deposition is dominated by stomatal uptake, such as O3 and SO2. This method can give results that are nearly the quality of eddy correlation measurements of trace gas fluxes at much lower cost. The concept is that bulk stomatal conductance can be accurately estimated from measurements of latent heat flux combined with standard meteorological measurements of humidity, temperature, and wind speed. The technique is tested using data from a field experiment where high quality eddy correlation measurements were made over soybeans. Over a four month period, which covered the entire growth cycle, this technique showed very good agreement with eddy correlation measurements for O3 deposition velocity.
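The chain of inference can be sketched numerically. All values and the resistance formulation below are illustrative assumptions (textbook-style bulk relations, not the paper's parameterization or soybean data): latent heat flux gives a bulk stomatal conductance for water vapour, which is scaled to ozone by the diffusivity ratio and combined with aerodynamic and boundary-layer resistances.

```python
# Hedged numerical sketch of the similarity technique.
LAMBDA = 2.45e6        # J/kg, latent heat of vaporization
RHO_AIR = 1.2          # kg/m3, air density
D_RATIO_O3 = 0.61      # approx. ratio D_O3 / D_H2O

def stomatal_conductance(le_flux, q_sat_leaf, q_air):
    """Bulk conductance for water vapour (m/s) from latent heat flux (W/m2)
    and the leaf-to-air specific humidity difference (kg/kg)."""
    return le_flux / (LAMBDA * RHO_AIR * (q_sat_leaf - q_air))

def deposition_velocity(g_sw, r_a, r_b):
    """Ozone deposition velocity (m/s), assuming stomatal uptake dominates;
    r_a, r_b are aerodynamic and quasi-laminar resistances (s/m)."""
    r_s = 1.0 / (D_RATIO_O3 * g_sw)    # ozone stomatal resistance
    return 1.0 / (r_a + r_b + r_s)

g_sw = stomatal_conductance(le_flux=300.0, q_sat_leaf=0.020, q_air=0.010)
v_d = deposition_velocity(g_sw, r_a=30.0, r_b=20.0)   # ~0.005 m/s here
```

The appeal of the method is visible in the inputs: everything above comes from a latent heat flux and routine meteorology, with no fast ozone sensor required.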
Infrared thermal imaging of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Watt, David; Mchugh, John
1990-01-01
A technique for analyzing infrared atmospheric images to obtain cross-wind measurement is presented. The technique is based on Taylor's frozen turbulence hypothesis and uses cross-correlation of successive images to obtain a measure of the cross-wind velocity in a localized focal region. The technique is appealing because it can possibly be combined with other IR forward look capabilities and may provide information about turbulence intensity. The current research effort, its theoretical basis, and its applicability to windshear detection are described.
Reproducibility of telomere length assessment: an international collaborative study.
Martin-Ruiz, Carmen M; Baird, Duncan; Roger, Laureline; Boukamp, Petra; Krunic, Damir; Cawthon, Richard; Dokter, Martin M; van der Harst, Pim; Bekaert, Sofie; de Meyer, Tim; Roos, Goran; Svenson, Ulrika; Codd, Veryan; Samani, Nilesh J; McGlynn, Liane; Shiels, Paul G; Pooley, Karen A; Dunning, Alison M; Cooper, Rachel; Wong, Andrew; Kingston, Andrew; von Zglinicki, Thomas
2015-10-01
Telomere length is a putative biomarker of ageing, morbidity and mortality. Its application is hampered by lack of widely applicable reference ranges and uncertainty regarding the present limits of measurement reproducibility within and between laboratories. We instigated an international collaborative study of telomere length assessment: 10 different laboratories, employing 3 different techniques [Southern blotting, single telomere length analysis (STELA) and real-time quantitative PCR (qPCR)] performed two rounds of fully blinded measurements on 10 human DNA samples per round to enable unbiased assessment of intra- and inter-batch variation between laboratories and techniques. Absolute results from different laboratories differed widely and could thus not be compared directly, but rankings of relative telomere lengths were highly correlated (correlation coefficients of 0.63-0.99). Intra-technique correlations were similar for Southern blotting and qPCR and were stronger than inter-technique ones. However, inter-laboratory coefficients of variation (CVs) averaged about 10% for Southern blotting and STELA and more than 20% for qPCR. This difference was compensated for by a higher dynamic range for the qPCR method as shown by equal variance after z-scoring. Technical variation per laboratory, measured as median of intra- and inter-batch CVs, ranged from 1.4% to 9.5%, with differences between laboratories only marginally significant (P = 0.06). Gel-based and PCR-based techniques were not different in accuracy. Intra- and inter-laboratory technical variation severely limits the usefulness of data pooling and excludes sharing of reference ranges between laboratories. We propose to establish a common set of physical telomere length standards to improve comparability of telomere length estimates between laboratories. © The Author 2014. Published by Oxford University Press on behalf of the International Epidemiological Association.
Bounding the Set of Classical Correlations of a Many-Body System
NASA Astrophysics Data System (ADS)
Fadel, Matteo; Tura, Jordi
2017-12-01
We present a method to certify the presence of Bell correlations in experimentally observed statistics, and to obtain new Bell inequalities. Our approach is based on relaxing the conditions defining the set of correlations obeying a local hidden variable model, yielding a convergent hierarchy of semidefinite programs (SDPs). Because the size of these SDPs is independent of the number of parties involved, this technique allows us to characterize correlations in many-body systems. As an example, we illustrate our method with the experimental data presented in Science 352, 441 (2016), 10.1126/science.aad8665.
NASA Astrophysics Data System (ADS)
Huang, D.; Wang, G.
2014-12-01
Stochastic simulation of spatially distributed ground-motion time histories is important for performance-based earthquake design of geographically distributed systems. In this study, we develop a novel technique to stochastically simulate regionalized ground-motion time histories using wavelet packet analysis. First, a transient acceleration time history is characterized by the wavelet-packet parameters proposed by Yamamoto and Baker (2013). These parameters fully characterize ground-motion time histories in terms of energy content, time- and frequency-domain characteristics, and time-frequency nonstationarity. This study further investigates the spatial cross-correlations of wavelet-packet parameters based on a geostatistical analysis of 1500 regionalized ground-motion records from eight well-recorded earthquakes in California, Mexico, Japan and Taiwan. The linear model of coregionalization (LMC) is used to develop a permissible spatial cross-correlation model for each parameter group. The geostatistical analysis of ground-motion data from different regions reveals a significant dependence of the LMC structure on regional site conditions, which can be characterized by the correlation range of Vs30 in each region. In general, the spatial correlation and cross-correlation of wavelet-packet parameters are stronger if the site condition is more homogeneous. Using the region-specific spatial cross-correlation model and a cokriging technique, wavelet-packet parameters at unmeasured locations can be optimally estimated, and regionalized ground-motion time histories can be synthesized. Case studies and blind tests demonstrate that the simulated ground motions generally agree well with the actual recorded data when the influence of regional site conditions is considered. The developed method has great potential for use in computational seismic analysis and loss estimation at a regional scale.
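The abstract's central geostatistical idea, spatial correlation that decays with separation distance at a rate set by a region's site homogeneity, can be sketched with a simple exponential correlation model. This is a generic stand-in, not the paper's fitted LMC; the ranges and distances below are invented:

```python
import numpy as np

def exp_correlation(h, a):
    """Exponential spatial correlation model rho(h) = exp(-3h / a),
    where h is separation distance (km) and a is the "range" at which
    correlation decays to about 0.05."""
    return np.exp(-3.0 * np.asarray(h, dtype=float) / a)

# Two stations 10 km apart: a homogeneous region (long Vs30 correlation
# range, here 40 km) keeps ground-motion parameters well correlated,
# while a heterogeneous region (10 km range) does not.
rho_homogeneous = exp_correlation(10.0, 40.0)
rho_heterogeneous = exp_correlation(10.0, 10.0)
```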
Electric vehicle chassis dynamometer test methods at JPL and their correlation to track tests
NASA Technical Reports Server (NTRS)
Marte, J.; Bryant, J.
1983-01-01
Early in its electric vehicle (EV) test program, JPL recognized that EV test procedures were too vague and too loosely defined to permit much meaningful data to be obtained from the testing. Therefore, JPL adopted more stringent test procedures and chose the chassis dynamometer rather than the track as its principal test technique. Through the years, test procedures continued to evolve towards a methodology based on chassis dynamometers which would exhibit good correlation with track testing. Based on comparative dynamometer and track test results on the ETV-1 vehicle, the test methods discussed in this report demonstrate a means by which excellent track-to-dynamometer correlation can be obtained.
$t\bar{t}$ Spin Correlations at D0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Yvonne
2013-01-01
The heaviest known elementary particle today, the top quark, was discovered in 1995 by the CDF and D0 collaborations at the Tevatron collider at Fermilab. Its high mass and short lifetime, shorter than the timescale for hadronization, make the top quark a special particle to study. Because of the short lifetime, the top quark's spin information is preserved in its decay products. In this article we discuss studies of ttbar spin correlations at D0, testing the full chain from production to decay. In particular, we present a measurement using angular information and an analysis using a matrix-element-based technique. The application of the matrix-element-based technique to the ttbar dilepton and semileptonic final states resulted in the first evidence for non-vanishing ttbar spin correlations.
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing: it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking, and weighted stacking based on the conventional correlation function, both produce false events caused by such noise. Wavelet transforms and high-order statistics are very useful methods in modern signal processing. Multiresolution analysis in wavelet theory can decompose a signal on different scales, and high-order correlation functions can suppress correlated noise, against which the conventional correlation function is of no use. Based on the theory of wavelet transforms and high-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights that are calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
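The core of correlation-weighted stacking can be illustrated with a plain time-domain, second-order version: each NMO-corrected trace is weighted by its correlation with a pilot trace before summation. This is a deliberately simplified stand-in for the paper's wavelet-domain, high-order HOCWS method, and the synthetic gather below is invented:

```python
import numpy as np

def weighted_stack(gather):
    """Stack traces in a CMP gather, weighting each trace by its
    (non-negative) correlation with the pilot (mean) trace.
    gather has shape (n_traces, n_samples), assumed NMO-corrected."""
    pilot = gather.mean(axis=0)
    weights = np.array([max(np.corrcoef(trace, pilot)[0, 1], 0.0)
                        for trace in gather])
    weights /= weights.sum()
    return weights @ gather

# Synthetic gather: one wavelet buried in independent noise on 12 traces.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
wavelet = np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
gather = wavelet + rng.normal(0.0, 0.5, (12, 200))
stacked = weighted_stack(gather)

# Stacking suppresses the noise relative to any single trace.
noise_single = (gather[0] - wavelet).std()
noise_stacked = (stacked - wavelet).std()
```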
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shcheslavskiy, V. I.; Institute of Biomedical Technologies, Nizhny Novgorod State Medical Academy, Minin and Pozharsky Square, 10/1, Nizhny Novgorod 603005; Neubauer, A.
We present a lifetime imaging technique that simultaneously records fluorescence and phosphorescence lifetime images in confocal laser scanning systems. It is based on modulating a high-frequency pulsed laser synchronously with the pixel clock of the scanner, and recording the fluorescence and phosphorescence signals with a multidimensional time-correlated single-photon-counting board. We demonstrate the technique by recording fluorescence/phosphorescence lifetime images of human embryonic kidney cells under different environmental conditions.
The Cloud Detection and Ultraviolet Monitoring Experiment (CLUE)
NASA Technical Reports Server (NTRS)
Barbier, Louis M.; Loh, Eugene C.; Krizmanic, John F.; Sokolsky, Pierre; Streitmatter, Robert E.
2004-01-01
In this paper we describe a new balloon instrument - CLUE - which is designed to monitor ultraviolet (UV) nightglow levels and determine cloud cover and cloud heights with a CO2 slicing technique. The CO2 slicing technique is based on the MODIS instrument on NASA's Aqua and Terra spacecraft. CLUE will provide higher spatial resolution (0.5 km) and correlations between the UV signal and the cloud cover.
NASA Astrophysics Data System (ADS)
Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan
2014-03-01
We previously proposed a novel image-based quality assessment technique [1] to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on the designated quality, disregarding the other qualities. A range slider above the images allowed observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders accorded completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
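Agreement between observer and algorithmic ranking orders of the six ROI images can be quantified with a rank correlation. A small sketch (the rankings below are hypothetical, not the study's data):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks
    (exact when there are no ties)."""
    rank_a = np.argsort(np.argsort(a))
    rank_b = np.argsort(np.argsort(b))
    return np.corrcoef(rank_a, rank_b)[0, 1]

# Hypothetical ranking task: the algorithm orders six ROI images 1..6;
# the averaged observer ranking swaps two adjacent images.
algorithmic = np.array([1, 2, 3, 4, 5, 6])
observer = np.array([1, 2, 4, 3, 5, 6])
rho = spearman(algorithmic, observer)
```

One adjacent swap among six items still gives a rank correlation above 0.94, consistent with the "strongly correlated" finding.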
Fourier-Mellin moment-based intertwining map for image encryption
NASA Astrophysics Data System (ADS)
Kaur, Manjit; Kumar, Vijay
2018-03-01
In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity (UACI) and number of pixel change rate (NPCR). The simulation results reveal that the proposed technique provides a high level of security and robustness against various types of attacks.
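Two of the cited diffusion metrics, NPCR and UACI, have standard definitions for 8-bit cipher images and are easy to compute. A sketch using random images as a stand-in for actual cipher output:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """Number of Pixel Change Rate (NPCR) and Unified Average Changing
    Intensity (UACI), in percent, between two 8-bit cipher images."""
    a = c1.astype(np.int16)
    b = c2.astype(np.int16)
    npcr = 100.0 * np.mean(a != b)
    uaci = 100.0 * np.mean(np.abs(a - b) / 255.0)
    return npcr, uaci

# For two independent uniformly random 8-bit images, NPCR is expected
# near 100 * (1 - 1/256) ~ 99.61 % and UACI near 33.46 %, the values a
# good cipher should approach.
rng = np.random.default_rng(2)
img1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
img2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
npcr, uaci = npcr_uaci(img1, img2)
```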
A cross correlation PIV technique using electro-optical image separation
NASA Astrophysics Data System (ADS)
Wirth, M.; Baritaud, T. A.
1996-11-01
A new approach for two-dimensional flow field investigation by PIV has been developed for measurements with high spatial resolution and without the well-known directional ambiguity. This feature of the technique is especially important for measurements in flows with reversal regions or strong turbulent motion, as in in-cylinder engine measurements. The major aim of the work was to achieve the benefits of cross-correlation PIV image evaluation at reasonable cost, using common single-wavelength, double-pulsed laser systems of the kind mainly used for PIV experiments. The technique is based on polarization rotation of the light scattered by the seeding particles by means of a ferroelectric liquid crystal half-wave plate (FLC). Measurement samples from low-turbulence jets and the flow in the wake of a cylinder are presented.
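The cross-correlation evaluation underlying this class of PIV techniques pairs two interrogation windows and locates the correlation peak, whose position gives the displacement without directional ambiguity. A minimal FFT-based sketch on synthetic data (the window size and shift below are arbitrary):

```python
import numpy as np

def displacement(win1, win2):
    """Integer-pixel displacement between two interrogation windows,
    located at the peak of their FFT-based cross-correlation."""
    a = win1 - win1.mean()
    b = win2 - win2.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed shifts.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, corr.shape))

# Synthetic particle image displaced by (3, -2) pixels between exposures.
rng = np.random.default_rng(3)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, (3, -2), axis=(0, 1))
dy, dx = displacement(frame1, frame2)
```

Because the two exposures are distinct frames, the sign of the shift is recovered directly, which is the ambiguity that single-frame autocorrelation PIV cannot resolve.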
Quantitative analysis and feature recognition in 3-D microstructural data sets
NASA Astrophysics Data System (ADS)
Lewis, A. C.; Suh, C.; Stukowski, M.; Geltmacher, A. B.; Spanos, G.; Rajan, K.
2006-12-01
A three-dimensional (3-D) reconstruction of an austenitic stainless-steel microstructure was used as input for an image-based finite-element model to simulate the anisotropic elastic mechanical response of the microstructure. The quantitative data-mining and data-warehousing techniques used to correlate regions of high stress with critical microstructural features are discussed. Initial analysis of elastic stresses near grain boundaries due to mechanical loading revealed low overall correlation with their location in the microstructure. However, the use of data-mining and feature-tracking techniques to identify high-stress outliers revealed that many of these high-stress points are generated near grain boundaries and grain edges (triple junctions). These techniques also allowed differentiation between high stresses due to the boundary conditions of the reconstructed finite volume and those due to 3-D microstructural features.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series, or between equivalent structures such as ordered data sets. The proposed method is based on the sliding-window technique, defines a new type of correlation measure, and can be applied to time series from all domains of science and technology, whether experimental or simulated. A specific parameter that characterizes the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the time series under investigation. We apply this method to the study of several daily currency exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
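The proposed scheme — compute a characterizing parameter per sliding window, then cross-correlate the parameter series — can be sketched generically. Here local volatility (the window standard deviation) stands in for the paper's Hurst exponent and intermittency parameter, and the two series are synthetic:

```python
import numpy as np

def window_parameter(series, width, step, func):
    """Compute a characterising parameter (func) over sliding windows."""
    return np.array([func(series[i:i + width])
                     for i in range(0, len(series) - width + 1, step)])

rng = np.random.default_rng(4)
sigma = np.where(np.arange(2000) < 1000, 1.0, 3.0)  # common volatility regime
shocks = rng.normal(0.0, 1.0, 2000) * sigma          # shared driver
x = shocks + rng.normal(0.0, 0.3, 2000)              # series 1
y = shocks + rng.normal(0.0, 0.3, 2000)              # series 2

# Characterising parameter per window: local volatility.
vol_x = window_parameter(x, 100, 100, np.std)
vol_y = window_parameter(y, 100, 100, np.std)

# Cross-correlating the window-parameter series reveals the shared
# volatility regime shared by the two series.
param_corr = np.corrcoef(vol_x, vol_y)[0, 1]
```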
NASA Astrophysics Data System (ADS)
Hast, J.; Okkonen, M.; Heikkinen, H.; Krehut, L.; Myllylä, R.
2006-06-01
A self-mixing interferometer is proposed to measure nanometre-scale optical path length changes in the interferometer's external cavity. As a light source, the developed technique uses a blue-emitting GaN laser diode. An external reflector, a silicon mirror driven by a piezo nanopositioner, is used to produce an interference signal, which is detected with the monitor photodiode of the laser diode. Changing the optical path length of the external cavity introduces a phase difference into the interference signal. This phase difference is detected using a signal processing algorithm based on Pearson's correlation coefficient and cubic spline interpolation techniques. The results show that the average deviation between the measured and actual displacements of the silicon mirror is 3.1 nm in the 0-110 nm displacement range. Moreover, the measured displacements follow the actual displacement of the silicon mirror linearly. Finally, the paper considers the effects of the temperature and current stability of the laser diode, as well as dispersion effects in the external cavity of the interferometer. These reduce the sensor's measurement accuracy, especially in long-term measurements.
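The phase-detection idea — score trial phases with Pearson's correlation coefficient against a model fringe, then refine the peak by cubic-spline interpolation — can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm; the fringe model, noise level, and grid sizes are assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Simulated interference signal with an unknown phase and a little noise.
rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0 * np.pi, 500)
true_phase = 1.23
signal = np.cos(t + true_phase) + rng.normal(0.0, 0.05, t.size)

# Coarse scan: Pearson correlation against a model fringe at trial phases.
trial = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
scores = np.array([np.corrcoef(signal, np.cos(t + p))[0, 1] for p in trial])

# Refine the correlation peak with cubic-spline interpolation.
spline = CubicSpline(trial, scores)
fine = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
phase_est = fine[np.argmax(spline(fine))]
```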
Wire Crimp Termination Verification Using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Perey, Daniel F.; Cramer, K. Elliott; Yost, William T.
2007-01-01
The development of a new ultrasonic measurement technique to quantitatively assess wire crimp terminations is discussed. The amplitude change of a compressional ultrasonic wave propagating through the junction of a crimp termination and wire is shown to correlate with the results of a destructive pull test, which is a standard for assessing crimp wire junction quality. Various crimp junction pathologies such as undercrimping, missing wire strands, incomplete wire insertion, partial insulation removal, and incorrect wire gauge are ultrasonically tested, and their results are correlated with pull tests. Results show that the nondestructive ultrasonic measurement technique consistently predicts good crimps, as evidenced by destructive testing, when ultrasonic transmission is above a certain threshold amplitude level. A physics-based model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model agrees with the ultrasonic measurements to within 6%. A prototype instrument for applying this technique while wire crimps are installed is also presented. The instrument is based on a two-jaw crimp tool suitable for butt-splice connections. Finally, an approach for application to multipin indenter-type crimps is discussed.
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participating in the educational intervention of learning the Triangle Technique.
Trout, Andrew T; Batie, Matthew R; Gupta, Anita; Sheridan, Rachel M; Tiao, Gregory M; Towbin, Alexander J
2017-11-01
Radiogenomics promises to identify tumour imaging features indicative of genomic or proteomic aberrations that can be therapeutically targeted, allowing precision personalised therapy. An accurate radiological-pathological correlation is critical to the process of radiogenomic characterisation of tumours. An accurate correlation, however, is difficult to achieve with current pathological sectioning techniques, which result in sectioning in non-standard planes. The purpose of this work is to present a technique to standardise hepatic sectioning and thereby facilitate radiological-pathological correlation. We describe a process in which three-dimensional (3D)-printed specimen boxes based on preoperative cross-sectional imaging (CT and MRI) can be used to facilitate pathological sectioning in standard planes immediately on hepatic resection, enabling improved tumour mapping. We have applied this process in 13 patients undergoing hepatectomy and have observed close correlation between imaging and gross pathology in patients with both unifocal and multifocal tumours. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Pernet, Cyril R.; Wilcox, Rand; Rousselet, Guillaume A.
2012-01-01
Pearson’s correlation measures the strength of the association between two variables. The technique is, however, restricted to linear associations and is overly sensitive to outliers. Indeed, a single outlier can result in a highly inaccurate summary of the data. Yet, it remains the most commonly used measure of association in psychology research. Here we describe a free Matlab(R)-based toolbox (http://sourceforge.net/projects/robustcorrtool/) that computes robust measures of association between two or more random variables: the percentage-bend correlation and skipped-correlations. After illustrating how to use the toolbox, we show that robust methods, where outliers are down-weighted or removed and accounted for in significance testing, provide better estimates of the true association with accurate false positive control and without loss of power. The different correlation methods were tested with normal data and normal data contaminated with marginal or bivariate outliers. We report estimates of effect size, false positive rate and power, and advise on which technique to use depending on the data at hand. PMID:23335907
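The outlier sensitivity the authors describe is easy to reproduce: one extreme point can destroy Pearson's r, while an outlier-resistant estimate recovers the true association. The sketch below uses a crude univariate median/MAD screen as a simplified stand-in for the toolbox's skipped correlation (which detects outliers in the bivariate space); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 50)
y = 0.8 * x + rng.normal(0.0, 0.3, 50)   # genuine positive association

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def robust_corr(a, b, z=2.5):
    """Outlier-resistant correlation: drop points whose robust z-score
    (median/MAD) exceeds z in either variable, then apply Pearson.
    A simplified univariate stand-in for a skipped correlation."""
    def inlier(v):
        med = np.median(v)
        mad = np.median(np.abs(v - med))
        return np.abs(v - med) <= z * 1.4826 * mad
    keep = inlier(a) & inlier(b)
    return pearson(a[keep], b[keep])

r_clean = pearson(x, y)
x_bad = np.append(x, 10.0)               # one extreme bivariate outlier
y_bad = np.append(y, -10.0)
r_contaminated = pearson(x_bad, y_bad)   # collapses despite 50 good points
r_robust = robust_corr(x_bad, y_bad)     # close to the uncontaminated value
```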
NASA Astrophysics Data System (ADS)
Gkillas (Gillas), Konstantinos; Vortelinos, Dimitrios I.; Saha, Shrabani
2018-02-01
This paper investigates the properties of realized volatility and correlation series in the Indian stock market, employing daily data on five stock indices from January 2, 2006 to November 30, 2014, converted to monthly frequency. Using a non-parametric estimation technique, the properties examined include normality, long memory, asymmetries, jumps, and heterogeneity. Realized volatility is a useful technique that provides a relatively accurate measure of volatility based on the actual variance, which is beneficial for asset management, in particular for non-speculative funds. The results show that the realized volatility and correlation series are not normally distributed, with some evidence of persistence. Asymmetries are also evident in both volatilities and correlations. Both the jump and heterogeneity properties are significant, with the former more significant than the latter. The findings show that the properties of volatilities and correlations in the Indian stock market are similar to those observed in developed stock markets, such as that of the United States, where speculative trading is more prevalent.
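A common non-parametric construction of monthly realized volatility — the square root of the sum of squared daily returns within each month — can be sketched as follows. This is a generic estimator on synthetic returns, not necessarily the paper's exact specification:

```python
import numpy as np

def realized_volatility(returns, month_index):
    """Monthly realized volatility: sqrt of the sum of squared daily
    returns within each month."""
    return np.array([np.sqrt(np.sum(returns[month_index == m] ** 2))
                     for m in np.unique(month_index)])

# Synthetic year: ~21 trading days per month, volatility doubling mid-year.
rng = np.random.default_rng(7)
month_index = np.repeat(np.arange(12), 21)
daily_sigma = np.where(month_index < 6, 0.01, 0.02)
daily_returns = rng.normal(0.0, daily_sigma)
rv = realized_volatility(daily_returns, month_index)
```

The estimator requires no distributional assumptions, which is why the series it produces can then be tested for normality, persistence, and jumps as the paper does.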
Mapping brain activity in gradient-echo functional MRI using principal component analysis
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Singh, Manbir; Don, Manuel
1997-05-01
The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. The technique is well suited to EPI image sequences, where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first is a conventional technique in which a single image sequence with alternating on and off stages is subjected to principal component analysis. The second is a PCA-based approach called the common spatial factor analysis (CSF) technique. As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI sequence. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that the PCA and CSF methods have good potential for detecting true stimulus-correlated changes in the presence of other interfering signals.
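The first method's core step — PCA of an on/off image sequence, without using the paradigm timing — can be sketched on synthetic data. The dimensions, block design, and activation amplitude below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
n_time, n_voxels = 120, 500
paradigm = np.tile(np.repeat([0.0, 1.0], 10), 6)      # on/off block design

# Synthetic gradient-echo series: 50 "activated" voxels follow the
# paradigm; the remaining voxels contain only noise.
data = rng.normal(0.0, 1.0, (n_time, n_voxels))
data[:, :50] += 3.0 * paradigm[:, None]

# PCA via SVD of the time-centred data. The decomposition itself uses
# no knowledge of the stimulus timing.
centred = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
first_component_timecourse = u[:, 0]

# The leading component recovers the stimulus time course (up to sign).
match = abs(np.corrcoef(first_component_timecourse, paradigm)[0, 1])
```

The spatial loadings `vt[0]` would, in the same way, highlight which voxels carry the activation.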
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state-based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In these computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
Oxygen measurement by multimode diode lasers employing gas correlation spectroscopy.
Lou, Xiutao; Somesfalean, Gabriel; Chen, Bin; Zhang, Zhiguo
2009-02-10
Multimode diode laser (MDL)-based correlation spectroscopy (COSPEC) was used to measure oxygen in ambient air, employing a diode laser (DL) whose emission spectrum overlaps the oxygen absorption lines of the A band. A sensitivity of 700 ppm m was achieved with good accuracy (2%) and linearity (R(2)=0.999). For comparison, measurements of ambient oxygen were also performed by the tunable DL absorption spectroscopy (TDLAS) technique employing a vertical cavity surface emitting laser. We demonstrate that, despite slightly degraded sensitivity, the MDL-based COSPEC oxygen sensor has the advantages of high stability, low cost, ease of use, and relaxed requirements in component selection and instrument buildup compared with the TDLAS-based instrument.
Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D
2016-06-01
A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features of both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high-resolution 3D μCT datasets. Once registered, qualitative and quantitative histomorphometric bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D-printed titanium lattice implants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonigan, Andrew M.; Arutt, Charles N.; Parma, Edward J.
For this research, a bipolar-transistor-based sensor technique has been used to compare silicon displacement damage from known and unknown neutron energy spectra generated in nuclear reactor and high-energy-density physics environments. The technique has been shown to yield 1-MeV(Si) equivalent neutron fluence measurements comparable to traditional neutron activation dosimetry. This study significantly extends previous results by evaluating three types of bipolar devices utilized as displacement damage sensors at a nuclear research reactor and at a Pelletron particle accelerator. Ionizing dose effects are compensated for via comparisons with 10-keV x-ray and/or cobalt-60 gamma ray irradiations. Non-ionizing energy loss calculations adequately approximate the correlations between particle-device responses and provide evidence for the use of one particle type to screen the sensitivity of the other.
Dynamic multifactor clustering of financial networks
NASA Astrophysics Data System (ADS)
Ross, Gordon J.
2014-02-01
We investigate the tendency for financial instruments to form clusters when there are multiple factors influencing the correlation structure. Specifically, we consider a stock portfolio which contains companies from different industrial sectors, located in several different countries. Both sector membership and geography combine to create a complex clustering structure where companies seem to first be divided based on sector, with geographical subclusters emerging within each industrial sector. We argue that standard techniques for detecting overlapping clusters and communities are not able to capture this type of structure and show how robust regression techniques can instead be used to remove the influence of both sector and geography from the correlation matrix separately. Our analysis reveals that prior to the 2008 financial crisis, companies did not tend to form clusters based on geography. This changed immediately following the crisis, with geography becoming a more important determinant of clustering structure.
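The idea of removing a factor's influence before examining the correlation structure can be sketched by regressing each stock's returns on the sector factor and correlating the residuals. Ordinary least squares stands in here for the robust regression the paper advocates, and the factor structure is synthetic:

```python
import numpy as np

rng = np.random.default_rng(9)
n_days, n_stocks = 500, 8
sector = rng.normal(0.0, 1.0, n_days)        # factor shared by all stocks
geography = rng.normal(0.0, 1.0, n_days)     # factor shared by stocks 4-7

returns = np.empty((n_days, n_stocks))
for j in range(n_stocks):
    geo = geography if j >= 4 else 0.0
    returns[:, j] = sector + 0.7 * geo + rng.normal(0.0, 0.5, n_days)

def residualize(r, factor):
    """Remove a common factor by per-stock regression on it (plain OLS
    as a stand-in for robust regression)."""
    x = np.column_stack([np.ones(len(factor)), factor])
    beta, *_ = np.linalg.lstsq(x, r, rcond=None)
    return r - x @ beta

residuals = residualize(returns, sector)
raw_corr = np.corrcoef(returns.T)
res_corr = np.corrcoef(residuals.T)

# Before: every pair is correlated through the sector factor.
# After: only the geographical subcluster (stocks 4-7) stays correlated.
cross_pair = res_corr[0, 5]
within_pair = res_corr[4, 5]
```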
Proton magnetic resonance spectroscopy for assessment of human body composition.
Kamba, M; Kimura, K; Koda, M; Ogawa, T
2001-02-01
The usefulness of magnetic resonance spectroscopy (MRS)-based techniques for assessment of human body composition has not been established. We compared a proton MRS-based technique with the total body water (TBW) method to determine the usefulness of the former for assessment of human body composition. Proton magnetic resonance spectra of the chest-to-abdomen, abdomen-to-pelvis, and pelvis-to-thigh regions were obtained from 16 volunteers using a single free-induction-decay measurement with a clinical magnetic resonance system operating at 1.5 T. The MRS-derived metabolite ratio was determined as the ratio of the fat methyl and methylene proton resonance to the water proton resonance. The peak areas for the chest-to-abdomen and pelvis-to-thigh regions were normalized to an external reference (approximately 2200 g benzene) and a weighted average of the MRS-derived metabolite ratios for the two regions was calculated. TBW for each subject was determined by the deuterium oxide dilution technique. The MRS-derived metabolite ratios were significantly correlated with the ratio of body fat to lean body mass estimated by TBW. The MRS-derived metabolite ratio for the abdomen-to-pelvis region correlated best with the ratio of body fat to lean body mass on simple regression analysis (r = 0.918). The MRS-derived metabolite ratios for the abdomen-to-pelvis and pelvis-to-thigh regions were selected for a multivariate regression model (R = 0.947, adjusted R(2) = 0.881). This MRS-based technique is sufficiently accurate for assessment of human body composition.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on an experimental within design with 32 cells and 33 participants.
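The naive percentile bootstrap mentioned above can be sketched in a few lines. This is an illustrative stand-in only (the paper's BCa interval adds bias and acceleration corrections, not shown here), and the function name, sample data, and parameters are all invented for the example:

```python
import numpy as np

def percentile_ci(data, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Naive percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# toy sample standing in for per-cell experimental measurements
x = np.random.default_rng(42).normal(10.0, 2.0, 60)
lo, hi = percentile_ci(x)
print(lo < x.mean() < hi)  # True: the interval brackets the sample mean
```

Because the bootstrap distribution of the mean is centered on the sample mean, the percentile interval is easy to reason about; the BCa variant adjusts the quantile cutoffs when that distribution is biased or skewed.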
Saager, Rolf B.; Balu, Mihaela; Crosignani, Viera; Sharif, Ata; Durkin, Anthony J.; Kelly, Kristen M.; Tromberg, Bruce J.
2015-01-01
The combined use of nonlinear optical microscopy and broadband reflectance techniques to assess melanin concentration and distribution thickness in vivo over the full range of Fitzpatrick skin types is presented. Twelve patients were measured using multiphoton microscopy (MPM) and spatial frequency domain spectroscopy (SFDS) on both dorsal forearm and volar arm, which are generally sun-exposed and non-sun-exposed areas, respectively. Both MPM and SFDS measured melanin volume fractions between ∼5% (skin type I non-sun-exposed) and 20% (skin type VI sun exposed). MPM measured epidermal (anatomical) thickness values of ∼30–65 μm, while SFDS measured melanin distribution thickness based on diffuse optical path length. There was a strong correlation between melanin concentration and melanin distribution (epidermal) thickness measurements obtained using the two techniques. While SFDS does not have the ability to match the spatial resolution of MPM, this study demonstrates that melanin content as quantified using SFDS is linearly correlated with epidermal melanin as measured using MPM (R² = 0.8895). SFDS melanin distribution thickness is correlated to MPM values (R² = 0.8131). These techniques can be used individually and/or in combination to advance our understanding and guide therapies for pigmentation-related conditions as well as light-based treatments across a full range of skin types. PMID:26065839
NASA Astrophysics Data System (ADS)
Bugaychuk, Svitlana A.; Gnatovskyy, Vladimir O.; Sidorenko, Andrey V.; Pryadko, Igor I.; Negriyko, Anatoliy M.
2015-11-01
A new approach to the correlation technique, based on multiple periodic structures that create a controllable angular spectrum, is proposed and investigated both theoretically and experimentally. The transformation of an initial laser beam occurs through the action of consecutive phase periodic structures, which may differ in their parameters. After the Fourier transformation of the complex diffraction field, the output diffraction orders change both in intensity and in spatial position. The output angular spectrum is controlled simply by adjusting the parameters of the periodic structures. We investigate several simple examples of such control.
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, which is based on ghost imaging but uses a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, this scheme obtains the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam simultaneously. It therefore produces a more colorful image of higher quality than conventional pseudo color coding techniques. More importantly, and in significant contrast to those techniques, the image with different colors can be obtained without changing the light source or the spatial filter.
Near-surface tomography of southern California from noise cross-correlation H/V measurements
NASA Astrophysics Data System (ADS)
Muir, J. B.; Tsai, V. C.
2016-12-01
The development of noise cross-correlation techniques constitutes one of the major advances in observational seismology in the past 15 years. The first data derived from noise cross correlations were surface wave phase velocities, but as the technique matures many more observables of noise cross-correlations are being used in seismic studies. One such observable is the horizontal-to-vertical amplitude ratio (H/V) of noise cross-correlations, which we interpret in terms of Rayleigh wave ellipticity. We have inverted the H/V of Rayleigh waves observed in noise cross-correlation signals to develop a 3D tomogram of Southern California. This technique has recently been employed (e.g. Lin et al. 2014) on a continental scale, using data from the Transportable Array in the period range of 8-24 s. The finer inter-station spacing of the Southern California Seismic Network (SCSN) allows retrieval of high signal-to-noise ratio Rayleigh waves at periods as low as 2 s, significantly improving the vertical resolution of the resulting tomography. In addition, horizontal resolution is naturally improved by increased station density. This study constitutes a useful addition to traditional phase-velocity based tomographic inversions due to the localized sensitivity of H/V measurements to the near surface of the measurement station site. The continuous data of 222 permanent broadband stations of the SCSN were used in production of noise cross-correlation waveforms, resulting in a spatially dense set of measurements for the Southern California region in the 2-15 s period band. Tectonic sub-regions including the LA Basin and Salton Trough are clearly visible due to their high short-period H/V ratios, whilst the Transverse and Peninsular ranges exhibit low H/V at all periods.
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1995-01-01
Particle Image Velocimetry provides a means of measuring the instantaneous 2-component velocity field across a planar region of a seeded flowfield. In this work only two-camera, single-exposure images are considered, where both cameras have the same view of the illumination plane. Two competing techniques which yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple-image data: cross-correlation and particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. The correlation technique requires identification of the correlation peak on the correlation plane corresponding to the average displacement of particles across the subregion. Noise on the images and particle dropout contribute to spurious peaks on the correlation plane, leading to misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak on the correlation plane, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus velocity.
The advantage of this technique is the improved spatial resolution which is available from the particle tracking operation. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two staged approach offers a velocimetric technique capable of measuring particle velocities with high spatial resolution over a broad range of seeding densities.
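The correlation stage described above hinges on locating the displacement peak on the correlation plane of an interrogation window pair. A minimal FFT-based sketch of that step (not the fuzzy-logic peak selection the paper develops; the synthetic particle positions and shift are invented for illustration):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the mean particle displacement between two interrogation
    windows from the peak of their circular cross-correlation plane."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # cross-correlation via FFT: peak lies at the dominant displacement
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - np.array(corr.shape) // 2  # (dy, dx) in pixels

# synthetic window: point "particles", second exposure shifted by (3, -2)
frame = np.zeros((32, 32))
for y, x in [(9, 10), (12, 20), (18, 8), (20, 22), (14, 14), (22, 12)]:
    frame[y, x] = 1.0
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(window_displacement(frame, shifted))  # recovers the (3, -2) shift
```

With real images, noise and particle dropout produce competing secondary peaks, which is exactly where the fuzzy-logic peak identification above takes over from the plain argmax used in this sketch.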
Anomalous Quantum Correlations of Squeezed Light
NASA Astrophysics Data System (ADS)
Kühn, B.; Vogel, W.; Mraz, M.; Köhnke, S.; Hage, B.
2017-04-01
Three different noise moments of field strength, intensity, and their correlations are measured simultaneously. For this purpose a homodyne cross-correlation measurement [1] is implemented by superimposing the signal field and a weak local oscillator on an unbalanced beam splitter. The relevant information is obtained via the intensity noise correlation of the output modes. Detection details such as quantum efficiencies or uncorrelated dark noise are irrelevant to our technique. Previously inaccessible insight into the quantumness of a squeezed signal field is retrieved from the anomalous moment, which correlates field strength with intensity noise. A classical inequality including this moment is violated for almost all signal phases. No prior knowledge of quantum theory is required, as our analysis is based solely on classical physics.
Intracellular applications of fluorescence correlation spectroscopy: prospects for neuroscience.
Kim, Sally A; Schwille, Petra
2003-10-01
Based on time-averaging fluctuation analysis of small fluorescent molecular ensembles in equilibrium, fluorescence correlation spectroscopy has recently been applied to investigate processes in the intracellular milieu. The exquisite sensitivity of fluorescence correlation spectroscopy provides access to a multitude of measurement parameters (rates of diffusion, local concentration, states of aggregation and molecular interactions) in real time with fast temporal and high spatial resolution. The introduction of dual-color cross-correlation, imaging, two-photon excitation, and coincidence analysis coupled with fluorescence correlation spectroscopy has expanded the utility of the technique to encompass a wide range of promising applications in living cells that may provide unprecedented insight into understanding the molecular mechanisms of intracellular neurobiological processes.
Ackerman, Joshua T.; Eagles-Smith, Collin A.
2010-01-01
Floating bird eggs to estimate their age is a widely used technique, but few studies have examined its accuracy throughout incubation. We assessed egg flotation for estimating hatch date, day of incubation, and the embryo's developmental age in eggs of the American Avocet (Recurvirostra americana), Black-necked Stilt (Himantopus mexicanus), and Forster's Tern (Sterna forsteri). Predicted hatch dates based on egg flotation during our first visit to a nest were highly correlated with actual hatch dates (r = 0.99) and accurate within 2.3 ± 1.7 (SD) days. Age estimates based on flotation were correlated with both day of incubation (r = 0.96) and the embryo's developmental age (r = 0.86) and accurate within 1.3 ± 1.6 days and 1.9 ± 1.6 days, respectively. However, the technique's accuracy varied substantially throughout incubation. Flotation overestimated the embryo's developmental age between 3 and 9 days, underestimated age between 12 and 21 days, and was most accurate between 0 and 3 days and 9 and 12 days. Age estimates based on egg flotation were generally accurate within 3 days until day 15 but later in incubation were biased progressively lower. Egg flotation was inaccurate and overestimated embryo age in abandoned nests (mean error: 7.5 ± 6.0 days). The embryo's developmental age and day of incubation were highly correlated (r = 0.94), differed by 2.1 ± 1.6 days, and resulted in similar assessments of the egg-flotation technique. Floating every egg in the clutch and refloating eggs at subsequent visits to a nest can refine age estimates.
Characterizing and estimating noise in InSAR and InSAR time series with MODIS
Barnhart, William D.; Lohman, Rowena B.
2013-01-01
InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibit seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precisions desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisitions times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short-spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
A Method For The Verification Of Wire Crimp Compression Using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Cramer, K. E.; Perey, Daniel F.; Yost, William t.
2010-01-01
The development of a new ultrasonic measurement technique to assess quantitatively wire crimp terminations is discussed. The amplitude change of a compressional ultrasonic wave propagating at right angles to the wire axis and through the junction of a crimp termination is shown to correlate with the results of a destructive pull test, which is a standard for assessing crimp wire junction quality. To demonstrate the technique, the case of incomplete compression of crimped connections is ultrasonically tested, and the results are correlated with pull tests. Results show that the nondestructive ultrasonic measurement technique consistently predicts good crimps when the ultrasonic transmission is above a certain threshold amplitude level. A quantitative measure of the quality of the crimped connection based on the ultrasonic energy transmitted is shown to respond accurately to crimp quality. A wave propagation model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model is in agreement within 6% of the ultrasonic measurements. A prototype instrument for applying this technique while wire crimps are installed is also presented. The instrument is based on a two-jaw type crimp tool suitable for butt-splice type connections. A comparison of the results of two different instruments is presented and shows reproducibility between instruments within a 95% confidence bound.
An analysis of a digital variant of the Trail Making Test using machine learning techniques.
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. This paper introduces a novel digital version of the TMT and introduces a machine learning based approach to assess its capabilities. Using digital Trail Making Test (dTMT) data collected from (N = 54) older adult participants as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. Predicted TMT scores correlate well with clinical digital test scores (r = 0.98) and paper time to completion scores (r = 0.65). Predicted TICS exhibited a small correlation with clinically derived TICS scores (r = 0.12 Part A, r = 0.10 Part B). Predicted FAB scores exhibited a small correlation with clinically derived FAB scores (r = 0.13 Part A, r = 0.29 for Part B). Digitally derived features were also used to predict diagnosis (AUC of 0.65). Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT's additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone.
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preprocessing and the local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique among TFRs. Using the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.
Dai, Yunchao; Nasir, Mubasher; Zhang, Yulin; Gao, Jiakai; Lv, Yamin; Lv, Jialong
2018-01-01
Several predictive models and methods have been used for heavy metals bioavailability, but there is no universally accepted approach for evaluating the bioavailability of arsenic (As) in soil. The technique of diffusive gradients in thin-films (DGT) is a promising tool, but there is considerable debate with respect to its suitability. The DGT method was compared with other traditional chemical extraction techniques (soil solution, NaHCO₃, NH₄Cl, HCl, and total As methods) for estimating As bioavailability in soil, based on a greenhouse experiment using Brassica chinensis grown in various soils from 15 provinces in China. In addition, we assessed whether these methods are independent of soil properties. The correlations between plant and soil As concentration measured with traditional extraction techniques were pH and iron oxide (Fe-ox) dependent, indicating that these methods are influenced by soil properties. In contrast, DGT measurements were independent of soil properties and also showed a better correlation coefficient than the traditional techniques. Thus, the DGT technique is superior to traditional techniques and should be preferred for evaluating As bioavailability in different types of soils.
2014-01-01
Background: Inflammatory mediators can serve as biomarkers for the monitoring of the disease progression or prognosis in many conditions. In the present study we introduce an adaptation of a membrane-based technique in which the level of up to 40 cytokines and chemokines can be determined in both human and rodent blood in a semi-quantitative way. The planar assay was modified using the LI-COR® detection system (fluorescence-based) rather than chemiluminescence, and semi-quantitative outcomes were achieved by normalizing the outcomes using the automated exposure settings of the Odyssey readout device. The results were compared to the gold standard assay, namely ELISA. Results: The improved planar assay allowed the detection of a considerably higher number of analytes (n = 30 and n = 5 for fluorescent and chemiluminescent detection, respectively). The improved planar method showed high sensitivity up to 17 pg/ml and a linear correlation of the normalized fluorescence intensity with the results from the ELISA (r = 0.91). Conclusions: The results show that the membrane-based technique is a semi-quantitative assay that correlates satisfactorily to the gold standard when enhanced by the use of fluorescence and subsequent semi-quantitative analysis. This promising technique can be used to investigate inflammatory profiles in multiple conditions, particularly in studies with constraints in sample sizes and/or budget. PMID:25022797
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
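A first-order Sobol index measures the share of output variance explained by a single input, S_i = V(E[Y|X_i]) / V(Y). A minimal Monte Carlo sketch using the common pick-and-freeze estimator (illustrative only; the test function, sample size, and estimator choice are assumptions, not this paper's exact method):

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, rng=None):
    """Monte Carlo estimate of first-order Sobol indices for f on [0,1]^d
    via the pick-and-freeze (Saltelli-style) estimator."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    var = yA.var()
    s = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # freeze coordinate i from sample A
        s[i] = np.mean(yA * (f(ABi) - f(B))) / var
    return s

# additive test: Y = X1 + 2*X2, so S1 = Var(X1)/Var(Y) = 0.2, S2 = 0.8
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
print(first_order_sobol(f, 2, rng=1))  # ≈ [0.2, 0.8]
```

For Y = X1 + 2·X2 with independent uniforms, Var(Y) = (1 + 4)/12, so the analytic indices 0.2 and 0.8 give a direct check on the estimator.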
Life-assessment technique for nuclear power plant cables
NASA Astrophysics Data System (ADS)
Bartoníček, B.; Hnát, V.; Plaček, V.
1998-06-01
The condition of polymer-based cable material is best characterized by measuring the elongation at break of its insulating materials. However, it is often not possible to take samples large enough for measurement with a tensile testing machine. The problem has been conveniently solved by utilizing the differential scanning calorimetry technique. From the tested cable, several microsamples are taken and the oxidation induction time (OIT) is determined. For each cable subject to lifetime assessment, the correlation of OIT with elongation at break and the correlation of elongation at break with cable service time have to be established. A reliable assessment of the cable lifetime depends on the accuracy of these correlations. Consequently, synergistic effects that are well known at this time, namely dose-rate effects and effects resulting from the different sequence of applying radiation and elevated temperature, must be taken into account.
NASA Astrophysics Data System (ADS)
Acquisti, Claudia; Allegrini, Paolo; Bogani, Patrizia; Buiatti, Marcello; Catanese, Elena; Fronzoni, Leone; Grigolini, Paolo; Mersi, Giuseppe; Palatella, Luigi
2004-04-01
We investigate a possible way to connect the presence of Low-Complexity Sequences (LCS) in DNA genomes and the nonstationary properties of base correlations. Under the hypothesis that these variations signal a change in DNA function, we use a new technique, called the Non-Stationarity Entropic Index (NSEI) method, and we prove that this technique is an efficient way to detect functional changes with respect to a random baseline. The remarkable aspect is that NSEI requires no training data or fitting parameters, the only arbitrariness being the choice of a marker in the sequence. We make this choice on the basis of biological information about LCS distributions in genomes. We show that there exists a correlation between changes in the amount of LCS and the ratio of long- to short-range correlation.
Image correlation method for DNA sequence alignment.
Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván
2012-01-01
The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as an object-recognition-in-a-scene problem. Query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one million base pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
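The core idea — one fixed gray pixel per base, then an object-in-scene correlation search — can be sketched digitally as below. The gray levels, sequences, and normalized-correlation scoring are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

# hypothetical gray level per base (the paper's exact values are not given)
GRAY = {'A': 0.25, 'C': 0.50, 'G': 0.75, 'T': 1.00}

def encode(seq, width):
    """Code a sequence as a 2-D image: one fixed gray pixel per base,
    row-major, zero-padded to a full rectangle."""
    vals = [GRAY[b] for b in seq]
    rows = -(-len(vals) // width)          # ceiling division
    img = np.zeros(rows * width)
    img[:len(vals)] = vals
    return img.reshape(rows, width)

def best_match(scene, query):
    """Slide the query's pixel stream over the scene's and return the
    offset with the highest normalized correlation (object in scene)."""
    s, q = scene.ravel(), query.ravel()
    q0 = q - q.mean()
    best, best_off = -2.0, 0
    for i in range(len(s) - len(q) + 1):
        w0 = s[i:i + len(q)] - s[i:i + len(q)].mean()
        r = np.dot(w0, q0) / (np.linalg.norm(w0) * np.linalg.norm(q0) + 1e-12)
        if r > best:
            best, best_off = r, i
    return best_off

scene = encode("GGGGACGTACGTGGGG", 4)    # database "scene"
query = encode("ACGTACGT", 4)            # query "object"
print(best_match(scene, query))  # 4: the query aligns at base offset 4
```

Normalization matters here: a raw dot product would be inflated by bright (G/T-rich) regions of the scene, whereas the zero-mean normalized score peaks at 1.0 only where the window actually matches the query.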
Moura, Felipe Arruda; van Emmerik, Richard E A; Santana, Juliana Exel; Martins, Luiz Eduardo Barreto; Barros, Ricardo Machado Leite de; Cunha, Sergio Augusto
2016-12-01
The purpose of this study was to investigate the coordination between the spreads of the two teams during football matches using cross-correlation and vector coding techniques. Using a video-based tracking system, we obtained the trajectories of 257 players during 10 matches. Team spread was calculated as a function of time. For a general description of coordination, we calculated the cross-correlation between the signals. Vector coding was used to identify the coordination patterns between teams during offensive sequences that ended in shots on goal or defensive tackles. Cross-correlation showed that opposing teams have a tendency to present in-phase coordination with a short time lag. During offensive sequences, vector coding results showed that, although in-phase coordination dominated, other patterns were observed. We verified that during the early stages, offensive sequences ending in shots on goal present greater anti-phase and attacking-team-phase periods compared to sequences ending in tackles. Results suggest that the attacking team may seek to behave contrary to its opponent (or may lead the opponent's behaviour) at the beginning of the attacking play, with regard to the distribution strategy, to increase the chances of a shot on goal. The techniques allowed detection of the coordination patterns between teams, providing additional information about football dynamics and players' interaction.
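The cross-correlation step — estimating the lead/lag between the two teams' spread signals — can be sketched as below. The synthetic spread signals and the 5-sample delay are invented for illustration:

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two time series over lags in
    [-max_lag, max_lag]; the argmax lag indicates which signal leads."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(len(lags))
    for j, k in enumerate(lags):
        if k >= 0:
            r[j] = np.mean(x[:len(x) - k] * y[k:])
        else:
            r[j] = np.mean(x[-k:] * y[:len(y) + k])
    return lags, r

# synthetic team spreads: team B's spread follows team A's by 5 samples
t = np.linspace(0, 20, 400)
a = np.sin(t) + 0.1 * np.sin(3 * t)
b = np.roll(a, 5)
lags, r = lagged_xcorr(a, b, 20)
print(lags[np.argmax(r)])  # 5: B lags A by 5 samples
```

A near-zero peak lag with high correlation is the in-phase pattern reported above; a clearly nonzero peak lag would indicate one team leading the other's spread.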
Gas detection by correlation spectroscopy employing a multimode diode laser.
Lou, Xiutao; Somesfalean, Gabriel; Zhang, Zhiguo
2008-05-01
A gas sensor based on the gas-correlation technique has been developed using a multimode diode laser (MDL) in a dual-beam detection scheme. Measurement of CO₂ mixed with CO as an interfering gas is successfully demonstrated using a 1570 nm tunable MDL. Despite overlapping absorption spectra and occasional mode hops, the interfering signals can be effectively excluded by a statistical procedure including correlation analysis and outlier identification. The gas concentration is retrieved from several pair-correlated signals by a linear-regression scheme, yielding a reliable and accurate measurement. This demonstrates the utility of unsophisticated MDLs as novel light sources for gas detection applications.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in the identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which the fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumptions regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches, which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Wire Crimp Connectors Verification using Ultrasonic Inspection
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Perey, Daniel F.; Yost, William T.
2007-01-01
The development of a new ultrasonic measurement technique to quantitatively assess wire crimp connections is discussed. The amplitude change of a compressional ultrasonic wave propagating through the junction of a crimp connector and wire is shown to correlate with the results of a destructive pull test, which previously has been used to assess crimp wire junction quality. Various crimp junction pathologies (missing wire strands, incorrect wire gauge, incomplete wire insertion in connector) are ultrasonically tested, and their results are correlated with pull tests. Results show that the ultrasonic measurement technique consistently (as evidenced with pull-testing data) predicts good crimps when ultrasonic transmission is above a certain threshold amplitude level. A physics-based model, solved by finite element analysis, describes the compressional ultrasonic wave propagation through the junction during the crimping process. This model agrees with the ultrasonic measurements to within 6%. A prototype instrument for applying the technique while wire crimps are installed is also presented.
Wang, Xueju; Pan, Zhipeng; Fan, Feifei; ...
2015-09-10
We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources resulting from the transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. As a result, the DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at the nanometer length scales.
Feature-extracted joint transform correlation.
Alam, M S
1995-12-10
A new technique for real-time optical character recognition that uses a joint transform correlator is proposed. This technique employs feature-extracted patterns for the reference image to detect a wide range of characters in one step. The proposed technique significantly enhances the processing speed when compared with the presently available joint transform correlator architectures and shows feasibility for multichannel joint transform correlation.
Nearest neighbor imputation using spatial–temporal correlations in wireless sensor networks
Li, YuanYuan; Parker, Lynne E.
2016-01-01
Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network’s performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. 
Our experimental results show that our proposed K-NN imputation method has competitive accuracy with state-of-the-art Expectation-Maximization (EM) techniques, while using much simpler computational techniques, thus making it suitable for use in resource-constrained WSNs. PMID:28435414
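The weighted nearest-neighbour imputation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kd-tree search is replaced by a linear scan, and the weighting scheme (down-weighting each dimension by its measured fraction of missing data) is a plausible reading of the abstract.

```python
import numpy as np

def weighted_distance(a, b, missing_frac):
    """Euclidean distance over co-observed dimensions, with each dimension
    down-weighted by its measured fraction of missing data."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if not mask.any():
        return np.inf
    w = 1.0 - missing_frac[mask]
    return np.sqrt(np.sum(w * (a[mask] - b[mask]) ** 2))

def nn_impute(readings, missing_frac):
    """Replace each NaN with the value from the nearest node (by the
    weighted distance) that actually observed that dimension."""
    filled = readings.copy()
    n = len(readings)
    for i in range(n):
        for d in np.where(np.isnan(readings[i]))[0]:
            candidates = [j for j in range(n)
                          if j != i and not np.isnan(readings[j, d])]
            if not candidates:
                continue  # no node observed this dimension
            j = min(candidates,
                    key=lambda j: weighted_distance(readings[i], readings[j],
                                                    missing_frac))
            filled[i, d] = readings[j, d]
    return filled
```

A real deployment would use the kd-tree described in the abstract to avoid the O(n) scan per missing value.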
Xiao, Xia; Lei, Kin Fong; Huang, Chia-Hao
2015-01-01
Cell migration is a cellular response involved in various biological processes such as cancer metastasis, which is the primary cause of death for cancer patients. Quantitative investigation of the correlation between cell migration and extracellular stimulation is essential for developing effective therapeutic strategies for controlling invasive cancer cells. The conventional method of determining cell migration rate, based on comparison of successive images, may not be an objective approach. In this work, a microfluidic chip embedded with measurement electrodes has been developed to quantitatively monitor cell migration activity based on the impedimetric measurement technique. A no-damage wound was constructed by a microfluidic phenomenon, and cell migration activity under the stimulation of a cytokine (interleukin-6) and an anti-cancer drug (doxorubicin) was investigated. Impedance measurement was performed concurrently during the cell migration process. The impedance change was directly correlated with the cell migration activity; therefore, the migration rate could be calculated. In addition, a good match was found between the impedance measurement and conventional imaging analysis, but the impedimetric measurement technique provides an objective and quantitative measurement. Based on our technique, cell migration rates were calculated to be 8.5, 19.1, and 34.9 μm/h under the stimulation of cytokine at concentrations of 0 (control), 5, and 10 ng/ml. This technique has high potential to be developed into a powerful analytical platform for cancer research. PMID:26180566
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which was developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
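The retrieval-style steps the SCCS technique adapts (tokenization, modelling a response as a multiset of tokens, similarity evaluation) can be illustrated with a hedged sketch; the tokenizer and overlap score below are generic choices for illustration, not the authors' exact scheme.

```python
import re
from collections import Counter

def tokenize(equation):
    """Split an equation string into number, symbol and operator tokens."""
    return re.findall(r"\d+|[a-zA-Z]+|[^\s\w]", equation)

def multiset_similarity(eq_a, eq_b):
    """Score two responses by the overlap of their token multisets,
    normalised to [0, 1] by the larger multiset's size."""
    a, b = Counter(tokenize(eq_a)), Counter(tokenize(eq_b))
    common = sum((a & b).values())  # multiset intersection size
    return common / max(sum(a.values()), sum(b.values()))
```

A structurally identical pair scores 1.0, while mathematically equivalent but structurally different equations (e.g. "2x = 4" vs "x = 2") score lower, which is the distinction between structural and mathematical equivalence the abstract draws.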
Ultrasound Imaging Velocimetry: a review
NASA Astrophysics Data System (ADS)
Poelma, Christian
2017-01-01
Whole-field velocity measurement techniques based on ultrasound imaging (a.k.a. `ultrasound imaging velocimetry' or `echo-PIV') have received significant attention from the fluid mechanics community in the last decade, in particular because of their ability to obtain velocity fields in flows that elude characterisation by conventional optical methods. In this review, an overview is given of the history, typical components and challenges of these techniques. The basic principles of ultrasound image formation are summarised, as well as various techniques to estimate flow velocities; the emphasis is on correlation-based techniques. Examples are given for a wide range of applications, including in vivo cardiovascular flow measurements, the characterisation of sediment transport and the characterisation of complex non-Newtonian fluids. To conclude, future opportunities are identified. These encompass not just optimisation of the accuracy and dynamic range, but also extension to other application areas.
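The correlation step at the core of echo-PIV can be shown with a toy sketch (an illustration of standard PIV practice, not the review's implementation): the displacement between two interrogation windows of successive ultrasound images is taken as the peak of their cross-correlation, computed efficiently with FFTs.

```python
import numpy as np

def displacement(win_a, win_b):
    """Return the (dy, dx) shift of win_b's content relative to win_a,
    from the peak of their circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation via the Fourier transform (correlation theorem).
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))
```

Dividing the shift by the inter-frame time gives the local velocity; real implementations add sub-pixel peak fitting and window deformation.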
Thinking outside the ROCs: Designing decorrelated taggers (DDT) for jet substructure
Dolen, James; Harris, Philip; Marzani, Simone; ...
2016-05-26
Here, we explore the scale-dependence and correlations of jet substructure observables to improve upon existing techniques in the identification of highly Lorentz-boosted objects. Modified observables are designed to remove correlations from existing theoretically well-understood observables, providing practical advantages for experimental measurements and searches for new phenomena. We study such observables in W jet tagging and provide recommendations for observables based on considerations beyond signal and background efficiencies.
Link Correlated Military Data for Better Decision Support
2011-06-01
automatically translated into URI-based links, thus can greatly reduce manpower cost on software development. 3 Linked Data Technique Tim Berners-Lee ...Linked Data - while Linked Data is usually considered as part of the Semantic Web, or "the Semantic Web done right" as described by Tim himself - has been...Required data of automatic link construction mechanism on more kinds of correlations. References [1] T. Berners-Lee, "The next Web of open, linked data
CORRELATIONS IN LIGHT FROM A LASER AT THRESHOLD,
Temporal correlations in the electromagnetic field radiated by a laser in the threshold region of oscillation (from one tenth of threshold intensity...to ten times threshold) were measured by photoelectron counting techniques. The experimental results were compared with theoretical predictions based...shows that the intensity fluctuations at about one tenth threshold are nearly those of a Gaussian field and continuously approach those of a constant amplitude field as the intensity is increased. (Author)
Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.
Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander
2015-01-01
Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison with our first analyses.
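As a simplified stand-in for the windowed, Fourier-based coherence calculation the abstract describes, the sketch below scores sliding windows by the Pearson correlation of the ABP and ICP traces and derives an SCP-like index. The window length, step and threshold are illustrative parameters, not the optimized values from the study.

```python
import numpy as np

def windowed_correlation(abp, icp, win, step):
    """Pearson correlation of ABP vs ICP in sliding windows."""
    out = []
    for start in range(0, len(abp) - win + 1, step):
        a = abp[start:start + win]
        b = icp[start:start + win]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(out)

def selected_positive(corrs, threshold=0.8):
    """Fraction of windows with strong positive correlation, an
    SCP-like index (the threshold is an illustrative parameter)."""
    return np.mean(corrs > threshold)
```

Parameter optimization as described in the study would then sweep `win`, `step` and `threshold` and pick the combination whose index best correlates with patient outcome.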
Stability and phylogenetic correlation in gut microbiota: lessons from ants and apes.
Sanders, Jon G; Powell, Scott; Kronauer, Daniel J C; Vasconcelos, Heraldo L; Frederickson, Megan E; Pierce, Naomi E
2014-03-01
Correlation between gut microbiota and host phylogeny could reflect codiversification over shared evolutionary history or a selective environment that is more similar in related hosts. These alternatives imply substantial differences in the relationship between host and symbiont, but can they be distinguished based on patterns in the community data themselves? We explored patterns of phylogenetic correlation in the distribution of gut bacteria among species of turtle ants (genus Cephalotes), which host a dense gut microbial community. We used 16S rRNA pyrosequencing from 25 Cephalotes species to show that their gut community is remarkably stable, from the colony to the genus level. Despite this overall similarity, the existing differences among species' microbiota significantly correlated with host phylogeny. We introduced a novel analytical technique to test whether these phylogenetic correlations are derived from recent bacterial evolution, as would be expected in the case of codiversification, or from broader shifts more likely to reflect environmental filters imposed by factors such as diet or habitat. We also tested this technique on a published data set of ape microbiota, confirming earlier results while revealing previously undescribed patterns of phylogenetic correlation. Our results indicated a high degree of partner fidelity in the Cephalotes microbiota, suggesting that vertical transmission of the entire community could play an important role in the evolution and maintenance of the association. As additional comparative microbiota data become available, the techniques presented here can be used to explore trends in the evolution of host-associated microbial communities. © 2014 John Wiley & Sons Ltd.
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Cavitation in liquid cryogens. 2: Hydrofoil
NASA Technical Reports Server (NTRS)
Hord, J.
1973-01-01
Boundary layer principles, along with two-phase concepts, are used to improve existing correlative theory for developed cavity data. Details concerning cavity instrumentation, data analysis, correlative techniques, and experimental and theoretical aspects of a cavitating hydrofoil are given. Both desinent and thermodynamic data, using liquid hydrogen and liquid nitrogen, are reported. The thermodynamic data indicated that stable thermodynamic equilibrium exists throughout the vaporous cryogen cavities. The improved correlative formulas were used to evaluate these data. A new correlating parameter, based on consideration of mass-limiting two-phase flow flux across the cavity interface, is proposed. This correlating parameter appears attractive for future correlative and predictive applications. Agreement between theory and experiment is discussed, and directions for future analysis are suggested. The front half of the cavities developed on the hydrofoil may be considered parabolically shaped.
Colorstratigraphy; A New Stratigraphic Correlation Technique
NASA Astrophysics Data System (ADS)
Nanayakkara, N. U.; Ranasinghage, P. N.; Priyantha, C.; Abillapitiya, T.
2016-12-01
Here we introduce a novel stratigraphic technique, namely colorstratigraphy, for correlating sedimentary sequences. Minihagalkanda is an approximately 1 km long, amphitheater-like sedimentary terrain situated on the southeastern coast of Sri Lanka. It has Miocene sedimentary sequences, separated into 10-12 m high hillocks by erosion and bounded by an approximately 30 m high escarpment. Sandstone, yellowish sandy clay, and greenish silty clay sequences are capped by a 4-5 m limestone bed in these hillocks, but not at the boundary escarpment. Stratigraphic profiles at two hillocks and the boundary escarpment, separated from each other by 200-300 m, were selected to test the new colorstratigraphic correlation technique. Color reflectance (DSR) was measured on four samples in each sequence at every profile; hence, altogether 36 reflectance measurements were taken using a Minolta 2500D hand-held color spectrophotometer. The first derivative of the reflectance spectrum (dR/dλ) defines the "spectral shape" of the sample. Therefore, DSR data (360-740 nm) measured at 10 nm resolution were used to calculate a center-weighted first-derivative spectrum, consisting of 39 channels, for each reflectance sample. The particle size of each sequence was measured at all three profiles using a laser particle size analyzer to verify the stratigraphic correlation. The mean reflectance spectrum for each sequence at all three profiles was plotted on the same graph for comparison; the same was done for the grain-size spectra. Discriminant function analysis was performed separately on the DSR data and the grain-size data, using a number assigned to each sedimentary sequence as the grouping variable. The color spectra of the sandstone, yellowish sandy clay, and greenish silty clay sequences at all three profiles match perfectly, showing clear stratigraphic correlation among these three stratigraphic profiles. Matching grain-size distribution curves of the three sequences at the three profiles verify the stratigraphic correlation.
Perfect 100% discrimination of the three sequences with the color reflectance data confirms the accuracy of the correlation. Similar 100% discrimination with the grain-size data further verifies the results. Therefore, colorstratigraphy based on DSR can be introduced as a quick and easy technique for stratigraphic correlation of sedimentary sequences.
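The derivative-spectrum computation underlying the correlation can be sketched as follows; np.gradient's centered differences stand in for the authors' center-weighted first derivative, and the correlation-based matching score is an illustrative choice.

```python
import numpy as np

# 360-740 nm sampled every 10 nm: 39 reflectance channels per sample.
wavelengths = np.arange(360, 750, 10)

def derivative_spectrum(reflectance):
    """Centered first-derivative spectrum dR/d(lambda) of one sample,
    which captures the 'spectral shape' rather than absolute brightness."""
    return np.gradient(reflectance, wavelengths)

def spectra_match(r_a, r_b):
    """Correlate two derivative spectra: a simple score for matching
    a sequence at one profile against a sequence at another."""
    return np.corrcoef(derivative_spectrum(r_a),
                       derivative_spectrum(r_b))[0, 1]
```

Because the derivative removes the constant brightness offset, two samples of the same lithology measured under different conditions can still match closely.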
The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Yan, Tsun-Yee
2000-01-01
This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal-plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of iteratively updating the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh
2017-01-01
The motion generated at the capturing time of the electroencephalography (EEG) signal leads to artifacts, which may reduce the quality of the obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and the wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and reduce the filter computation time under a highly noisy environment. This new approach to CCA is based on the Gaussian elimination method, which is used for calculating the correlation coefficients using the backslash operation, and is designed for EEG signal motion artifact removal. Gaussian elimination is used for solving the linear equations to calculate eigenvalues, which reduces the computation cost of the CCA method. This novel proposed method is tested against currently available artifact removal techniques using EEMD-CCA and the wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency metrics such as del signal-to-noise ratio (DSNR), lambda (λ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate the suitability of the proposed algorithm for use as a supplement to algorithms currently in use.
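The computational point of the abstract, replacing explicit matrix inversion with Gaussian elimination (MATLAB's "backslash" operation), can be illustrated in NumPy. This is a generic least-squares example of that substitution, not the paper's full CCA pipeline: np.linalg.solve uses an LU (Gaussian-elimination) factorisation rather than forming an inverse, which is cheaper and numerically better behaved.

```python
import numpy as np

def backslash_coeffs(X, y):
    """Solve the normal equations X^T X w = X^T y by Gaussian
    elimination (LU) instead of computing (X^T X)^-1 explicitly."""
    return np.linalg.solve(X.T @ X, X.T @ y)
```

In the CCA setting, the same solve-instead-of-invert substitution is applied to the covariance systems from which the canonical correlation coefficients are obtained.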
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Khan, Ajmal
1993-01-01
Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of an SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.
NASA Technical Reports Server (NTRS)
Clem, Michelle M.; Abdul-Aziz, Ali; Woike, Mark R.; Fralick, Gustave C.
2015-01-01
The modern turbine engine operates in a harsh environment at high speeds and is repeatedly exposed to combined high mechanical and thermal loads. The cumulative effects of these external forces lead to high stresses and strains on the engine components, such as the rotating turbine disks, which may eventually lead to a catastrophic failure if left undetected. The operating environment makes it difficult to use conventional strain gauges; therefore, non-contact strain measurement techniques are of interest to NASA and the turbine engine community. This presentation describes one such approach: the use of cross-correlation analysis to measure the strain experienced by the engine turbine disk, with the goal of assessing potential faults and damage.
NASA Astrophysics Data System (ADS)
Berthias, F.; Feketeová, L.; Della Negra, R.; Dupasquier, T.; Fillol, R.; Abdoul-Carime, H.; Farizon, B.; Farizon, M.; Märk, T. D.
2018-01-01
The combination of the Dispositif d'Irradiation d'Agrégats Moléculaire with the correlated ion and neutral time of flight-velocity map imaging technique provides a new way to explore processes occurring subsequent to the excitation of charged nano-systems. The present contribution describes in detail the methods developed for the quantitative measurement of branching ratios and cross sections for collision-induced dissociation processes of water cluster nano-systems. These methods are based on measurements of the detection efficiency of neutral fragments produced in these dissociation reactions. Moreover, measured detection efficiencies are used here to extract the number of neutral fragments produced for a given charged fragment.
NASA Technical Reports Server (NTRS)
Gangwani, S. T.
1985-01-01
A reliable, operational rotor aeroelastic analysis that correctly predicts the vibration levels of a helicopter is utilized to test various unsteady aerodynamics models, with the objective of improving the correlation between test and theory. This analysis, the Rotor Aeroelastic Vibration (RAVIB) computer program, is based on a frequency-domain forced-response analysis which utilizes transfer matrix techniques to model helicopter/rotor dynamic systems of varying degrees of complexity. The results for the AH-1G helicopter rotor were compared with flight test data during high-speed operation, and they indicated a reasonably good correlation for the beamwise and chordwise blade bending moments, but for torsional moments the correlation was poor. As a result, a new aerodynamics model based on unstalled synthesized data derived from large-amplitude oscillating airfoil experiments was developed and tested.
NASA Astrophysics Data System (ADS)
Pacheco-Vega, Arturo
2016-09-01
In this work a new set of correlation equations is developed and introduced to accurately describe the thermal performance of compact heat exchangers with possible condensation. The feasible operating conditions for the thermal system correspond to dry-surface, dropwise condensation, and film condensation. Using a prescribed form for each condition, a global regression analysis for the best-fit correlation to experimental data is carried out with a simulated annealing optimization technique. The experimental data were taken from the literature and algorithmically classified into three groups (related to the possible operating conditions) with a previously introduced Gaussian-mixture-based methodology. Prior to their use in the analysis, the correct data classification was assessed and confirmed via artificial neural networks. Predictions from the correlations obtained for the different conditions are within the uncertainty of the experiments and substantially more accurate than those commonly used.
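The regression step can be sketched as a generic simulated annealing fit of a correlation's coefficients to data; the model form, cooling schedule, step size and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_sa(x, y, model, n_params, steps=20000, seed=0):
    """Minimise the sum of squared errors of model(x, params) by
    simulated annealing with a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    params = rng.standard_normal(n_params)
    cost = np.sum((model(x, params) - y) ** 2)
    for k in range(steps):
        temp = 1.0 * (1.0 - k / steps) + 1e-6   # linear cooling
        cand = params + 0.1 * rng.standard_normal(n_params)
        c = np.sum((model(x, cand) - y) ** 2)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            params, cost = cand, c
    return params
```

In the paper's setting, one such fit would be run per operating regime, each with its own prescribed correlation form.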
Guarnieri, Adriano; Moreno-Montañés, Javier; Sabater, Alfonso L; Gosende-Chico, Inmaculada; Bonet-Farriol, Elvira
2013-11-01
Purpose: To analyze the changes in incision sizes after implantation of a toric intraocular lens (IOL) using 2 methods. Setting: Department of Ophthalmology, Clínica Universidad de Navarra, Pamplona, Spain. Design: Prospective case series. Methods: Coaxial phacoemulsification and IOL implantation through a 2.2 mm clear corneal incision using a cartridge injector were performed. Wound-assisted or cartridge-insertion techniques were used to implant the IOLs. The results were analyzed according to IOL spherical and cylindrical powers. Corneal hysteresis (CH) and the corneal resistance factor (CRF) were measured and evaluated based on the changes in incision size. Results: Incision size increased in 30 (41.7%) of 72 eyes in the wound-assisted group and 71 (98.6%) of 72 eyes in the cartridge-insertion group. The mean incision size after IOL implantation was 2.27 mm ± 0.06 (SD) and 2.37 ± 0.05 mm, respectively (P<.01). The final incision size and IOL spherical power were correlated significantly in the wound-assisted technique group (P=.02) and the cartridge-insertion technique group (P=.03); IOL toricity was not (P=.19 and P=.28, respectively). The CH and CRF values were not correlated with the final incision size. Conclusions: The final incision size and the changes in incision size after IOL implantation were greater with the cartridge-insertion technique than with the wound-assisted technique. The increase was related to IOL spherical power in both groups but not to IOL toricity. Corneal biomechanical properties were not correlated with the final incision size. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.
Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh
2016-06-01
Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for each corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape (based on BIND) and Gene Ontology (based on molecular function) visual methods; R software version 3.2 and Bioconductor were used to perform these analyses. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-coefficient results disagree with the visualization, which may be due to the small number of data.
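The two-step construction described above (score all gene pairs, then threshold) can be sketched as follows; the gene names, data and threshold are invented for illustration, and ties are ignored in the Spearman ranking for brevity.

```python
import numpy as np

def spearman(u, v):
    """Spearman rank correlation: Pearson correlation of the ranks
    (ties ignored for brevity)."""
    ru = np.argsort(np.argsort(u))
    rv = np.argsort(np.argsort(v))
    return np.corrcoef(ru, rv)[0, 1]

def coexpression_edges(expr, names, threshold=0.9, score=spearman):
    """Step 1: score every gene pair. Step 2: keep pairs whose
    correlation magnitude exceeds the chosen threshold."""
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(score(expr[i], expr[j])) > threshold:
                edges.append((names[i], names[j]))
    return edges
```

Swapping `score` for a Pearson or Blomqvist implementation reproduces the comparison the study performs across correlation measures.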
Head ballistocardiogram based on wireless multi-location sensors.
Onizuka, Kohei; Sodini, Charles G
2015-08-01
Recently, a wearable BCG monitoring technique based on an accelerometer worn at the ear was demonstrated as a replacement for a conventional bulky BCG acquisition system. In this work, a multi-location wireless vital signs monitor was developed, and at least two common acceleration vectors correlating with sitting BCG were found in the supine position by using the head PPG signal as a reference for eight healthy human subjects. The head-side amplitude in the supine position is roughly proportional to the sitting amplitude, which is in turn proportional to the stroke volume. Signal-processing techniques to identify J-waves in subjects with small-amplitude signals were also developed based on the two common vectors at the head side and top.
Wass, Sam V
2014-08-01
Convergent research points to the importance of studying the ontogenesis of sustained attention during the early years of life, but little research hitherto has compared and contrasted different techniques available for measuring sustained attention. Here, we compare methods that have been used to assess one parameter of sustained attention, namely infants' peak look duration to novel stimuli. Our focus was to assess whether individual differences in peak look duration are stable across different measurement techniques. In a single cohort of 42 typically developing 11-month-old infants we assessed peak look duration using six different measurement paradigms (four screen-based, two naturalistic). Zero-order correlations suggested that individual differences in peak look duration were stable across all four screen-based paradigms, but no correlations were found between peak look durations observed on the screen-based and the naturalistic paradigms. A factor analysis conducted on the dependent variable of peak look duration identified two factors. All four screen-based tasks loaded onto the first factor, but the two naturalistic tasks did not relate, and mapped onto a different factor. Our results question how individual differences observed on screen-based tasks manifest in more ecologically valid contexts. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Cell refractive index for cell biology and disease diagnosis: past, present and future.
Liu, P Y; Chin, L K; Ser, W; Chen, H F; Hsieh, C-M; Lee, C-H; Sung, K-B; Ayi, T C; Yap, P H; Liedberg, B; Wang, K; Bourouina, T; Leprince-Wang, Y
2016-02-21
Cell refractive index is a key biophysical parameter, which has been extensively studied. It is correlated with other cell biophysical properties including mechanical, electrical and optical properties, and not only represents the intracellular mass and concentration of a cell, but also provides important insight for various biological models. Measurement techniques developed earlier only measure the effective refractive index of a cell or a cell suspension, providing limited information on cell refractive index and hence hindering its in-depth analysis and correlation. Recently, the emergence of microfluidic, photonic and imaging technologies has enabled the manipulation of single cells and the measurement of the 3D refractive index of a single cell at sub-micron resolution, providing powerful tools to study cells based on refractive index. In this review, we provide an overview of cell refractive index models and measurement techniques, including microfluidic chip-based techniques, for the last 50 years, present the applications and significance of cell refractive index in cell biology, hematology, and pathology, and discuss future research trends in the field, including 3D imaging methods, integration with microfluidics and potential applications in new and breakthrough research areas.
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
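Phase correlation, the operation the ensemble method above builds on, recovers a rigid displacement from the peak of the inverse-transformed, magnitude-normalized cross-power spectrum. A minimal single-pair sketch for illustration (the authors' method additionally ensembles correlation planes over many image pairs and applies a dynamic optimal filter, neither of which is reproduced here; the function name is an assumption):

```python
import numpy as np

def phase_correlate(a, b):
    """Integer-pixel shift of image b relative to a, read off the peak
    of the whitened (phase-only) cross-power spectrum."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12               # discard magnitude, keep phase
    plane = np.fft.ifft2(R).real         # correlation plane
    peak = np.unravel_index(np.argmax(plane), plane.shape)
    # map the wrapped peak location to a signed shift
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, plane.shape))
```

Because whitening removes amplitude information, the peak is sharp even for low-contrast images, which is why phase correlation is a common starting point for correlation-based velocimetry.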
NASA Astrophysics Data System (ADS)
Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.
2014-05-01
Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
Correlation of gravestone decay and air quality 1960-2010
NASA Astrophysics Data System (ADS)
Mooers, H. D.; Carlson, M. J.; Harrison, R. M.; Inkpen, R. J.; Loeffler, S.
2017-03-01
Evaluation of spatial and temporal variability in surface recession of lead-lettered Carrara marble gravestones provides a quantitative measure of acid flux to the stone surfaces and is closely related to local land use and air quality. Correlation of stone decay, land use, and air quality is evaluated for the period after 1960, when reliable estimates of atmospheric pollution are available. Gravestone decay and SO2 measurements are interpolated spatially using deterministic and geostatistical techniques. A general lack of spatial correlation was identified, and therefore a land-use-based technique for correlating stone decay and air quality is employed. Decadally averaged stone decay is highly correlated with land use averaged spatially over an optimum radius of ≈7 km, even though air quality, determined from records of the UK monitoring network, is not highly correlated with gravestone decay. The relationships among stone decay, air quality, and land use are complicated by the relatively low spatial density of both gravestone decay and air quality data and by the fact that air quality data are available only as annual averages, so seasonal dependence cannot be evaluated. However, acid deposition calculated from gravestone decay suggests that the deposition efficiency of SO2 has increased appreciably since 1980, indicating an increase in the SO2 oxidation process, possibly related to reactions with ammonia.
Lindquist, Martin A.; Xu, Yuting; Nebel, Mary Beth; Caffo, Brian S.
2014-01-01
To date, most functional Magnetic Resonance Imaging (fMRI) studies have assumed that the functional connectivity (FC) between time series from distinct brain regions is constant across time. However, recently, there has been increased interest in quantifying possible dynamic changes in FC during fMRI experiments, as it is thought this may provide insight into the fundamental workings of brain networks. In this work we focus on the specific problem of estimating the dynamic behavior of pair-wise correlations between time courses extracted from two different regions of the brain. We critique the commonly used sliding-windows technique, and discuss some alternative methods used to model volatility in the finance literature that could also prove useful in the neuroimaging setting. In particular, we focus on the Dynamic Conditional Correlation (DCC) model, which provides a model-based approach towards estimating dynamic correlations. We investigate the properties of several techniques in a series of simulation studies and find that DCC achieves the best overall balance between sensitivity and specificity in detecting dynamic changes in correlations. We also investigate its scalability beyond the bivariate case to demonstrate its utility for studying dynamic correlations between more than two brain regions. Finally, we illustrate its performance in an application to test-retest resting state fMRI data. PMID:24993894
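The sliding-window estimator critiqued above is simple to state: recompute the Pearson correlation of the two time courses inside a window that slides along them. A minimal sketch (the window length and toy series are arbitrary choices; the DCC alternative the abstract favors requires fitting GARCH-type variance models and is not reproduced here):

```python
import numpy as np

def sliding_window_corr(x, y, window):
    """Pearson correlation of x and y in each overlapping window:
    the 'sliding windows' estimate of dynamic functional connectivity."""
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])
```

The choice of window length is the estimator's main weakness: short windows are noisy, long windows smooth over genuine dynamics, which motivates the model-based alternatives discussed in the abstract.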
Bredael, Gerard M; Bowers, Niya; Boulineau, Fabien; Hahn, David
2014-07-01
The ability to predict in vivo response of an oral dosage form based on an in vitro technique has been a sought after goal of the pharmaceutical scientist. Dissolution testing that demonstrates discrimination to various critical formulations or process attributes provides a sensitive quality check that may be representative or may be overpredictive of potential in vivo changes. Dissolution methodology with an established in vitro-in vivo relationship or correlation may provide the desired in vivo predictability. To establish this in vitro-in vivo link, a clinical study must be performed. In this article, recommendations are given in the selection of batches for the clinical study followed by potential outcome scenarios. The investigation of a Level C in vitro-in vivo correlation (IVIVC), which is the most common correlation for immediate-release oral dosage forms, is presented. Lastly, an IVIVC case study involving a biopharmaceutical classification system class IV compound is presented encompassing this strategy and techniques. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Finite-Difference Time-Domain Analysis of Tapered Photonic Crystal Fiber
NASA Astrophysics Data System (ADS)
Ali, M. I. Md; Sanusidin, S. N.; Yusof, M. H. M.
2018-03-01
This paper describes the simulation of a tapered single-mode LMA-8 photonic crystal fiber (PCF) based on the correlation of the scattering pattern at a wavelength of 1.55 μm, analysis of the transmission spectrum over the wavelength range of 1.0 to 2.5 μm, and correlation of the transmission spectrum with the refractive index change in the photonic crystal holes for taper sizes of 0.1 to 1.0, using Optiwave simulation software. The main objective is to simulate the tapered LMA-8 PCF with the Finite-Difference Time-Domain (FDTD) technique for sensing applications, improving the capabilities of the PCF without collapsing the crystal holes. The FDTD outputs used are the scattering pattern and the transverse transmission, and principal component analysis (PCA) is used as a mathematical tool to model the resulting data in MathCad software. The simulation results showed no obvious correlation in the scattering pattern at a wavelength of 1.55 μm, a correlation between taper size and transverse transmission, and a parabolic relationship involving the refractive index changes inside the crystal structure.
Kempen, Paul J; Kircher, Moritz F; de la Zerda, Adam; Zavaleta, Cristina L; Jokerst, Jesse V; Mellinghoff, Ingo K; Gambhir, Sanjiv S; Sinclair, Robert
2015-01-01
The growing use of nanoparticles in biomedical applications, including cancer diagnosis and treatment, demands the capability to exactly locate them within complex biological systems. In this work a correlative optical and scanning electron microscopy technique was developed to locate and observe multi-modal gold core nanoparticle accumulation in brain tumor models. Entire brain sections from mice containing orthotopic brain tumors injected intravenously with nanoparticles were imaged using both optical microscopy to identify the brain tumor, and scanning electron microscopy to identify the individual nanoparticles. Gold-based nanoparticles were readily identified in the scanning electron microscope using backscattered electron imaging as bright spots against a darker background. This information was then correlated to determine the exact location of the nanoparticles within the brain tissue. The nanoparticles were located only in areas that contained tumor cells, and not in the surrounding healthy brain tissue. This correlative technique provides a powerful method to relate the macro- and micro-scale features visible in light microscopy with the nanoscale features resolvable in scanning electron microscopy. Copyright © 2014 Elsevier Ltd. All rights reserved.
Alizai, Hamza; Nardo, Lorenzo; Karampinos, Dimitrios C; Joseph, Gabby B; Yap, Samuel P; Baum, Thomas; Krug, Roland; Majumdar, Sharmila; Link, Thomas M
2012-07-01
The goal of this study was to compare the semi-quantitative Goutallier classification for fat infiltration with quantitative fat-fraction derived from a magnetic resonance imaging (MRI) chemical shift-based water/fat separation technique. Sixty-two women (age 61 ± 6 years), 27 of whom had diabetes, underwent MRI of the calf using a T1-weighted fast spin-echo sequence and a six-echo spoiled gradient-echo sequence at 3 T. Water/fat images and fat fraction maps were reconstructed using the IDEAL algorithm with T2* correction and a multi-peak model for the fat spectrum. Two radiologists scored fat infiltration on the T1-weighted images using the Goutallier classification in six muscle compartments. Spearman correlations between the Goutallier grades and the fat fraction were calculated; in addition, intra-observer and inter-observer agreement were calculated. A significant correlation between the clinical grading and the fat fraction values was found for all muscle compartments (P < 0.0001, R values ranging from 0.79 to 0.88). Goutallier grades 0-4 had a fat fraction ranging from 3.5 to 19%. Intra-observer and inter-observer agreement values of 0.83 and 0.81 were calculated for the semi-quantitative grading. Semi-quantitative grading of intramuscular fat and quantitative fat fraction were significantly correlated and both techniques had excellent reproducibility. However, the clinical grading was found to overestimate muscle fat. Fat infiltration of muscle commonly occurs in many metabolic and neuromuscular diseases. • Image-based semi-quantitative classifications for assessing fat infiltration are not well validated. • Quantitative MRI techniques provide an accurate assessment of muscle fat.
Image correlation microscopy for uniform illumination.
Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L
2010-01-01
Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
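The quantity at the heart of the method — the temporal autocorrelation of intensity fluctuations, whose decay time reflects diffusion — can be estimated directly from a wide-field image stack. A hedged sketch (normalization conventions vary across the image correlation literature; here fluctuations are taken about each pixel's temporal mean and normalized by the squared mean intensity, and the function name is an assumption):

```python
import numpy as np

def temporal_acf(stack, max_lag):
    """Temporal autocorrelation G(tau) of a (frames, height, width)
    time-lapse stack, averaged over pixels:
        G(tau) = <dI(t) dI(t + tau)> / <I>^2,
    where dI are fluctuations about each pixel's temporal mean."""
    mean_img = stack.mean(axis=0)
    d = stack - mean_img                       # intensity fluctuations
    denom = (mean_img ** 2).mean()
    T = stack.shape[0]
    return np.array([(d[:T - tau] * d[tau:]).mean() / denom
                     for tau in range(max_lag + 1)])
```

Fitting the decay of G(tau) against the feature size obtained from the spatial autocorrelation function then yields the diffusion coefficient, as described in the abstract.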
Covalent bond orders and atomic valences from correlated wavefunctions
NASA Astrophysics Data System (ADS)
Ángyán, János G.; Rosta, Edina; Surján, Péter R.
1999-01-01
A comparison is made between two alternative definitions for covalent bond orders: one derived from the exchange part of the two-particle density matrix and the other expressed as the correlation of fluctuations (covariance) of the number of electrons between the atomic centers. Although these definitions lead to identical formulae for mono-determinantal SCF wavefunctions, they predict different bond orders for correlated wavefunctions. It is shown that, in this case, the fluctuation-based definition leads to slightly lower values of the bond order than does the exchange-based definition, provided one uses an appropriate space-partitioning technique like that of Bader's topological theory of atoms in a molecule; however, use of Mulliken partitioning in this context leads to unphysical behaviour. The example of H2 is discussed in detail.
Ritchie, David W; Kozakov, Dima; Vajda, Sandor
2008-09-01
Predicting how proteins interact at the molecular level is a computationally intensive task. Many protein docking algorithms begin by using fast Fourier transform (FFT) correlation techniques to find putative rigid body docking orientations. Most such approaches use 3D Cartesian grids and are therefore limited to computing three dimensional (3D) translational correlations. However, translational FFTs can speed up the calculation in only three of the six rigid body degrees of freedom, and they cannot easily incorporate prior knowledge about a complex to focus and hence further accelerate the calculation. Furthermore, several groups have developed multi-term interaction potentials and others use multi-copy approaches to simulate protein flexibility, which both add to the computational cost of FFT-based docking algorithms. Hence there is a need to develop more powerful and more versatile FFT docking techniques. This article presents a closed-form 6D spherical polar Fourier correlation expression from which arbitrary multi-dimensional multi-property multi-resolution FFT correlations may be generated. The approach is demonstrated by calculating 1D, 3D and 5D rotational correlations of 3D shape and electrostatic expansions up to polynomial order L=30 on a 2 GB personal computer. As expected, 3D correlations are found to be considerably faster than 1D correlations but, surprisingly, 5D correlations are often slower than 3D correlations. Nonetheless, we show that 5D correlations will be advantageous when calculating multi-term knowledge-based interaction potentials. When docking the 84 complexes of the Protein Docking Benchmark, blind 3D shape plus electrostatic correlations take around 30 minutes on a contemporary personal computer and find acceptable solutions within the top 20 in 16 cases. Applying a simple angular constraint to focus the calculation around the receptor binding site produces acceptable solutions within the top 20 in 28 cases.
Further constraining the search to the ligand binding site gives up to 48 solutions within the top 20, with calculation times of just a few minutes per complex. Hence the approach described provides a practical and fast tool for rigid body protein-protein docking, especially when prior knowledge about one or both binding sites is available.
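The speed-up these docking codes exploit rests on the correlation theorem: a circular cross-correlation of length N costs O(N log N) via the FFT instead of O(N²) directly. A one-dimensional sketch for illustration only (real docking codes apply the same identity on 3D translational grids or, as in this article, to rotational coordinates via spherical polar Fourier expansions, which is beyond this fragment):

```python
import numpy as np

def fft_correlate(f, g):
    """Circular cross-correlation c[t] = sum_x f[x] * g[(x + t) mod N]
    of two real signals, computed in O(N log N) via the correlation
    theorem: c = IFFT(conj(FFT(f)) * FFT(g))."""
    return np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)).real
```

Scoring all N relative offsets therefore costs little more than three FFTs, which is why the expensive part of FFT docking shifts to the degrees of freedom that the transform cannot cover.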
Farnebo, S; Winbladh, A; Zettersten, E K; Sandström, P; Gullstrand, P; Samuelsson, A; Theodorson, E; Sjöberg, F
2010-01-01
Delayed detection of ischemia is one of the most feared postoperative complications. Early detection of impaired blood flow and close monitoring of the organ-specific metabolic status may therefore be critical for the surgical outcome. Urea clearance is a new technique for continuous monitoring of alterations in blood flow and metabolic markers with acceptable temporal characteristics. We compare this new microdialysis technique with the established microdialysis ethanol technique for assessing hepatic blood flow. Six pigs were used in a liver ischemia/reperfusion injury model. Microdialysis catheters were placed in liver segment IV, and all circulation was stopped for 80 min, followed by reperfusion for 220 min. Urea and ethanol clearance was calculated from the dialysate and correlated with metabolic changes. A laser Doppler probe was used as a reference for restoration of blood flow. Both urea and ethanol clearance reproducibly depicted changes in liver blood flow in relation to metabolic changes and laser Doppler measurements. The two techniques were highly correlated, both overall and during the reperfusion phase (r = 0.8), and the changes were paralleled by altered perfusion as recorded by laser Doppler. Copyright © 2010 S. Karger AG, Basel.
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are poorly suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the features of this application: rail images are unimodal and the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
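Methods of this family share one structure: scan every gray level and keep the threshold that maximizes some criterion computed from the image histogram. MWOC's specific objective (object correlation weighted by the proportion of thresholded defects) is not reproduced here; the sketch below shows the shared scan structure with Otsu's between-class variance, one of the baselines the paper compares against, plugged in as the criterion. The toy image and helper names are assumptions.

```python
import numpy as np

def scan_threshold(image, criterion):
    """Exhaustive gray-level scan shared by Otsu, maximum-correlation
    and MWOC-style methods: return the threshold in [1, 255] that
    maximizes the supplied criterion on the normalized histogram.
    `image` is expected to be a uint8 array."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    return 1 + int(np.argmax([criterion(hist, t) for t in range(1, 256)]))

def between_class_variance(hist, t):
    """Otsu's criterion for a split at level t: w0 * w1 * (mu0 - mu1)^2."""
    levels = np.arange(hist.size)
    w0, w1 = hist[:t].sum(), hist[t:].sum()
    if w0 == 0.0 or w1 == 0.0:
        return 0.0
    mu0 = (levels[:t] * hist[:t]).sum() / w0
    mu1 = (levels[t:] * hist[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2
```

Swapping in a different `criterion` function is all it takes to turn the same scan into maximum-entropy, valley-emphasis, or an MWOC-style objective.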
Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses
Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.
2017-01-01
Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. 
Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude effects, respectively, in resting state connectivity analyses. We illustrate these concepts using realistic simulated resting state FMRI data and in vivo data acquired in healthy subjects and patients with bipolar disorder and schizophrenia. PMID:28348512
Chaotic CDMA watermarking algorithm for digital image in FRFT domain
NASA Astrophysics Data System (ADS)
Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng
2007-11-01
A digital image watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented, utilizing a chaotic CDMA technique. As a popular and typical transmission technique, CDMA has many advantages, such as privacy, anti-jamming capability and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-code used in a traditional CDMA system. The watermarking data is divided into segments, each of which corresponds to a different chaotic QOC and is modulated into the CDMA watermarking data embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermarking segment by calculating correlation coefficients between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three aspects: imperceptibility, robustness against attacks, and security.
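The embed/detect cycle described above — chaotic spreading codes modulating watermark segments, recovered by correlation — can be illustrated with a stripped-down spread-spectrum sketch. This is a hedged illustration only: it uses a plain logistic map rather than the paper's super-hybrid map, works on a 1-D host signal instead of FRFT coefficients of an image, and assumes the original host is available at the detector; all names and parameters are assumptions.

```python
import numpy as np

def logistic_codes(n_codes, length, x0=0.37, r=3.99):
    """Spreading codes from the logistic map x <- r*x*(1-x),
    binarized to +/-1 about 0.5 (a stand-in for the paper's
    chaotic quasi-orthogonal codes)."""
    x, seq = x0, []
    for _ in range(n_codes * length):
        x = r * x * (1 - x)
        seq.append(1.0 if x > 0.5 else -1.0)
    return np.array(seq).reshape(n_codes, length)

def embed(host, bits, codes, strength=2.0):
    """Spread-spectrum embedding: add each bit's signed code to the host."""
    marked = host.astype(float).copy()
    for b, c in zip(bits, codes):
        marked += strength * (1 if b else -1) * c
    return marked

def detect(marked, host, codes):
    """Recover bits from the sign of the residual's correlation with each code."""
    residual = marked - host
    return [int(residual @ c > 0) for c in codes]
```

Because the codes are nearly orthogonal, each correlation isolates one segment of the watermark, which is the property the paper exploits in the FRFT domain.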
Ultrasound elastography: principles, techniques, and clinical applications.
Dewall, Ryan J
2013-01-01
Ultrasound elastography is an emerging set of imaging modalities used to image tissue elasticity, often referred to as virtual palpation. These techniques have proven effective in detecting and assessing many different pathologies, because tissue mechanical changes often correlate with tissue pathological changes. This article reviews the principles of ultrasound elastography, many of the ultrasound-based techniques, and popular clinical applications. Originally, elastography was a technique that imaged tissue strain by comparing pre- and postcompression ultrasound images. However, new techniques have been developed that use different excitation methods such as external vibration or acoustic radiation force. Some techniques track transient phenomena such as shear waves to quantitatively measure tissue elasticity. Clinical use of elastography is increasing, with applications including lesion detection and classification, fibrosis staging, treatment monitoring, vascular imaging, and musculoskeletal applications.
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using the regression analysis to generate the regression equation used to obtain the ATI; the ATI versus rain volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
Technical Note: Detection of gas bubble leakage via correlation of water column multibeam images
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Papenberg, C.
2011-07-01
Hydroacoustic detection of natural gas release from the seafloor has been conducted in the past by using singlebeam echosounders. In contrast, modern multibeam swath mapping systems allow much wider coverage, higher resolution, and offer 3-D spatial correlation. However, up to the present, the extremely high data rate hampers water column backscatter investigations. More sophisticated visualization and processing techniques for water column backscatter analysis are still under development. We here present such water column backscattering data gathered with a 50 kHz prototype multibeam system. Water column backscattering data are presented as videoframes grabbed over 75 s and as a "re-sorted" singlebeam presentation. Thus, individual gas bubbles rising from the 24 m deep seafloor clearly emerge in the acoustic images, and rise velocities can be determined. A sophisticated processing scheme is introduced to identify those rising gas bubbles in the hydroacoustic data. It applies a cross-correlation technique similar to that used in Particle Imaging Velocimetry (PIV) to the acoustic backscatter images. Temporal and spatial drift patterns of the bubbles are assessed and match measured and theoretical rise patterns very well. The application of this processing scheme to our field data gives impressive results with respect to unambiguous bubble detection and remote bubble rise velocimetry. The method can identify and exclude the main driver for misinterpretations, i.e. fish-mediated echoes. Even though image-based cross-correlation techniques are well known in the field of fluid mechanics for high-resolution and non-invasive current flow field analysis, this technique was never applied in the proposed sense as an acoustic bubble detector.
Technical Note: Detection of gas bubble leakage via correlation of water column multibeam images
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Papenberg, C.
2012-03-01
Hydroacoustic detection of natural gas release from the seafloor has been conducted in the past by using singlebeam echosounders. In contrast, modern multibeam swath mapping systems allow much wider coverage, higher resolution, and offer 3-D spatial correlation. Up to the present, the extremely high data rate hampers water column backscatter investigations and more sophisticated visualization and processing techniques are needed. Here, we present water column backscatter data acquired with a 50 kHz prototype multibeam system over a period of 75 seconds. Data are displayed both as swath images and as a "re-sorted" singlebeam presentation. Thus, individual and/or groups of gas bubbles rising from the 24 m deep seafloor clearly emerge in the acoustic images, making it possible to estimate rise velocities. A sophisticated processing scheme is introduced to identify those rising gas bubbles in the hydroacoustic data. We apply a cross-correlation technique adapted from particle imaging velocimetry (PIV) to the acoustic backscatter images. Temporal and spatial drift patterns of the bubbles are assessed and are shown to match measured and theoretical rise patterns very well. The application of this processing to our field data gives clear results with respect to unambiguous bubble detection and remote bubble rise velocimetry. The method can identify and exclude the main source of misinterpretations, i.e. fish-mediated echoes. Although image-based cross-correlation techniques are well known in the field of fluid mechanics for high-resolution and non-invasive current flow field analysis, we present the first application of this technique as an acoustic bubble detector.
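The PIV-style step (finding the frame-to-frame shift that maximizes cross-correlation between successive backscatter images) can be sketched in a few lines. This is a minimal pure-Python illustration; the frame size, intensity values, and bubble position are invented, and a real implementation would use FFT-based correlation on full swath images.

```python
def cross_correlate_shift(frame1, frame2, max_shift=3):
    """Return the (dy, dx) shift maximizing the cross-correlation between
    two small intensity frames (lists of lists), the PIV-style step above."""
    best, best_shift = None, (0, 0)
    rows, cols = len(frame1), len(frame1[0])
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < rows and 0 <= x2 < cols:
                        score += frame1[y][x] * frame2[y2][x2]
            if best is None or score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# A bright "bubble" at row 4 in frame 1 has risen two rows by frame 2:
f1 = [[0.0] * 6 for _ in range(6)]
f2 = [[0.0] * 6 for _ in range(6)]
f1[4][2] = 1.0
f2[2][2] = 1.0
dy, dx = cross_correlate_shift(f1, f2)  # two pixels upward, no lateral drift
```

Dividing the recovered pixel shift by the frame interval and multiplying by the pixel footprint would yield the bubble rise velocity.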
Efficient live face detection to counter spoof attack in face recognition systems
NASA Astrophysics Data System (ADS)
Biswas, Bikram Kumar; Alam, Mohammad S.
2015-03-01
Face recognition is a critical tool used in almost all major biometrics-based security systems. But recognition, authentication and liveness detection of the face of an actual user are a major challenge, because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of the energies of selective high-frequency bands of the average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
NASA Astrophysics Data System (ADS)
Suresh, Pooja
2014-05-01
Alloy identification of oil-borne wear debris captured on chip detectors, filters and magnetic plugs allows the machinery maintainer to assess the health of the engine or gearbox and identify specific component damage. Today, such identification can be achieved in real time using portable, at-line laser-induced breakdown spectroscopy (LIBS) and X-ray fluorescence (XRF) instruments. Both techniques can be utilized in various industries including aviation, marine, railways, heavy diesel and other industrial machinery with, however, some substantial differences in application and instrument performance. In this work, the performances of a LIBS and an XRF instrument are compared based on measurements of a wide range of typical aerospace alloys including steels, titanium, aluminum and nickel alloys. Measurement results were analyzed with a staged correlation technique specifically developed for the purposes of this study: identifying the particle alloy composition using a pre-recorded library of spectral signatures. The analysis is performed in two stages: first, the base element of the alloy is determined by correlation with the stored elemental spectra and then, the alloy is identified by matching the particle's spectral signature using parametric correlation against the stored spectra of all alloys that have the same base element. The correlation analysis has achieved highly repeatable discrimination between alloys of similar composition. Portable LIBS demonstrates higher detection accuracy and better identification of alloys comprising lighter elements as compared to that of the portable XRF system, and reveals a significant reduction in the analysis time over XRF.
Epileptic seizure detection in EEG signal using machine learning techniques.
Jaiswal, Abeg Kumar; Banka, Haider
2018-03-01
Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which aids the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis kernel. All the experiments have been carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification have been conducted. The classification accuracy was evaluated using tenfold cross validation. The classification results of the proposed approaches have been compared with the results of some existing techniques proposed in the literature to establish the claim.
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
NASA Astrophysics Data System (ADS)
Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan
2010-08-01
Collaborative filtering is one of the most successful recommendation techniques, which can effectively predict the possible future likes of users based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach is to use the correlation between the ratings that two users give to a set of objects, such as the Cosine index and the Pearson correlation coefficient. However, the costs of computing these kinds of indices are relatively high, making them impractical to apply in very large systems. To solve this problem, in this paper we introduce six local-structure-based similarity indices and compare their performances with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data is dense, the structure-based indices perform competitively with the Cosine index, while having lower computational complexity. Furthermore, when the data is sparse, the structure-based indices give even better results than the Cosine index.
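The two benchmark indices mentioned above can be computed as follows (the Pearson coefficient is just the cosine of the mean-centered vectors). The rating vectors are invented for illustration; the paper's local-structure-based indices are not shown here.

```python
import math

def cosine_similarity(u, v):
    """Cosine index between two users' rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def pearson_similarity(u, v):
    """Pearson correlation coefficient: cosine of mean-centered ratings."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return cosine_similarity([a - mu for a in u], [b - mv for b in v])

# Hypothetical ratings two users gave to the same five objects:
alice = [5, 3, 4, 4, 1]
bob = [4, 2, 4, 5, 1]
cos = cosine_similarity(alice, bob)
pea = pearson_similarity(alice, bob)
```

Both indices require a pass over all co-rated objects per user pair, which is the computational cost the local-structure-based indices are designed to avoid.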
An Analysis of a Digital Variant of the Trail Making Test Using Machine Learning Techniques
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
BACKGROUND The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. OBJECTIVE This paper introduces a novel digital version of the TMT and introduces a machine learning based approach to assess its capabilities. METHODS Using digital Trail Making Test (dTMT) data collected from (N=54) older adult participants as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. RESULTS Predicted TMT scores correlate well with clinical digital test scores (r=0.98) and paper time to completion scores (r=0.65). Predicted TICS exhibited a small correlation with clinically-derived TICS scores (r=0.12 Part A, r=0.10 Part B). Predicted FAB scores exhibited a small correlation with clinically-derived FAB scores (r=0.13 Part A, r=0.29 for Part B). Digitally-derived features were also used to predict diagnosis (AUC of 0.65). CONCLUSION Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT’s additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone. PMID:27886019
NASA Astrophysics Data System (ADS)
Hirt, Christian; Rexer, Moritz; Claessens, Sten; Rummel, Reiner
2017-10-01
Comparisons between high-degree models of the Earth's topographic and gravitational potential may give insight into the quality and resolution of the source data sets, provide feedback on the modelling techniques and help to better understand the gravity field composition. Degree correlations (cross-correlation coefficients) or reduction rates (quantifying the amount of topographic signal contained in the gravitational potential) are indicators used in a number of contemporary studies. However, depending on the modelling techniques and underlying levels of approximation, the correlation at high degrees may vary significantly, as do the conclusions drawn. The present paper addresses this problem by attempting to provide a guide on global correlation measures with particular emphasis on approximation effects and variants of topographic potential modelling. We investigate and discuss the impact of different effects (e.g., truncation of series expansions of the topographic potential, mass compression, ellipsoidal versus spherical approximation, ellipsoidal harmonic coefficient (EHC) versus spherical harmonic coefficient (SHC) representation) on correlation measures. Our study demonstrates that the correlation coefficients are realistic only when the model's harmonic coefficients of a given degree are largely independent of the coefficients of other degrees, permitting degree-wise evaluations. This is the case, e.g., when both models are represented in terms of SHCs and spherical approximation (i.e. spherical arrangement of field-generating masses). Alternatively, a representation in ellipsoidal harmonics can be combined with ellipsoidal approximation. The usual ellipsoidal approximation level (i.e. ellipsoidal mass arrangement) is shown to bias correlation coefficients when SHCs are used. Importantly, gravity models from the International Centre for Global Earth Models (ICGEM) are inherently based on this approximation level.
A transformation is presented that enables a transformation of ICGEM geopotential models from ellipsoidal to spherical approximation. The transformation is applied to generate a spherical transform of EGM2008 (sphEGM2008) that can meaningfully be correlated degree-wise with the topographic potential. We exploit this new technique and compare a number of models of topographic potential constituents (e.g., potential implied by land topography, ocean water masses) based on the Earth2014 global relief model and a mass-layer forward modelling technique with sphEGM2008. Different to previous findings, our results show very significant short-scale correlation between Earth's gravitational potential and the potential generated by Earth's land topography (correlation +0.92, and 60% of EGM2008 signals are delivered through the forward modelling). Our tests reveal that the potential generated by Earth's oceans water masses is largely unrelated to the geopotential at short scales, suggesting that altimetry-derived gravity and/or bathymetric data sets are significantly underpowered at 5 arc-min scales. We further decompose the topographic potential into the Bouguer shell and terrain correction and show that they are responsible for about 20 and 25% of EGM2008 short-scale signals, respectively. As a general conclusion, the paper shows the importance of using compatible models in topographic/gravitational potential comparisons and recommends the use of SHCs together with spherical approximation or EHCs with ellipsoidal approximation in order to avoid biases in the correlation measures.
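The degree correlation used throughout the comparison above is a per-degree Pearson-style coefficient over the harmonic coefficients. The sketch below uses a standard definition and invented degree-3 coefficients; it is not the paper's code or data.

```python
import math

def degree_correlation(c1, s1, c2, s2):
    """Cross-correlation coefficient of two models at a single degree n:
    r_n = sum_m (C1*C2 + S1*S2)
          / sqrt(sum_m (C1^2 + S1^2) * sum_m (C2^2 + S2^2)),
    with the coefficient lists indexed by order m (standard definition)."""
    num = (sum(a * b for a, b in zip(c1, c2))
           + sum(a * b for a, b in zip(s1, s2)))
    p1 = sum(a * a for a in c1) + sum(a * a for a in s1)
    p2 = sum(a * a for a in c2) + sum(a * a for a in s2)
    return num / math.sqrt(p1 * p2)

# Illustrative degree-3 SHCs of a gravitational and a topographic model:
r3 = degree_correlation([1.0, 0.5, -0.2, 0.1], [0.0, 0.3, 0.2, -0.1],
                        [0.9, 0.6, -0.1, 0.2], [0.0, 0.2, 0.3, -0.2])
```

The caveat in the abstract is that this per-degree quotient is only meaningful when both coefficient sets use a compatible representation and approximation level (SHCs with spherical approximation, or EHCs with ellipsoidal approximation); otherwise signal leaks between degrees and biases r_n.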
USDA-ARS?s Scientific Manuscript database
This study examined the sterol compositions of 102 dinoflagellates (including several previously unexamined species) using clustering techniques as a means of determining the relatedness of the organisms. In addition, dinoflagellate sterol-based relationships were compared statistically to dinoflag...
Creation of hybrid optoelectronic systems for document identification
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Voronyak, Taras I.; Kulynych, Yaroslav P.; Maksymenko, Olexander P.; Pogan, Ignat Y.
2001-06-01
Use of security devices based on a joint transform correlator (JTC) architecture for identification of credit cards and other products is very promising. The experimental demonstration of the random phase encoding technique for security verification shows that hybrid JTCs can be successfully utilized. The random phase encoding technique provides a very high level of protection for the products and items to be identified. However, realizing this technique requires overcoming certain practical problems. To solve some of these problems and simultaneously improve the security of documents and other products, we propose using a transformed phase mask (TPM) as an input object in an optical correlator. This mask is synthesized from a random binary pattern (RBP), which is directly used to fabricate a reference phase mask (RPM). To obtain the TPM, we first divide the RBP into several parts (say, K parts) of arbitrary shape and then fabricate the TPM from this transformed RBP. The fabricated TPM can be bonded as an optical mark to any product or item to be identified. If the RPM and the TPM are placed at the optical correlator input, the first diffracted order of the output correlation signal contains K narrow autocorrelation peaks. The distances between the peaks and the peaks' intensities can be treated as the terms of the identification feature vector (FV) for TPM identification.
Invariant domain watermarking using heaviside function of order alpha and fractional Gaussian field.
Abbasi, Almas; Woo, Chaw Seng; Ibrahim, Rabha Waell; Islam, Saeed
2015-01-01
Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST invariant domain watermarking technique based on fractional calculus. We have constructed a domain using Heaviside function of order alpha (HFOA). The HFOA models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We have also constructed a fractional variance formula using fractional Gaussian field. A cross correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore the proposed method enables blind watermark detection where the original image is not required during the watermark detection thereby making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness. PMID:25884854
In vivo correlation mapping microscopy
NASA Astrophysics Data System (ADS)
McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh; Leahy, Martin
2016-04-01
To facilitate regular assessment of the microcirculation in vivo, noninvasive imaging techniques such as nailfold capillaroscopy are required in clinics. Recently, a correlation mapping technique has been applied to optical coherence tomography (OCT), which extends the capabilities of OCT to microcirculation morphology imaging. This technique, known as correlation mapping optical coherence tomography, has been shown to extract parameters, such as capillary density and vessel diameter, and key clinical markers associated with early changes in microvascular diseases. However, OCT has limited spatial resolution in both the transverse and depth directions. Here, we extend this correlation mapping technique to other microscopy modalities, including confocal microscopy, and take advantage of the higher spatial resolution offered by these modalities. The technique is achieved as a processing step on microscopy images and does not require any modification to the microscope hardware. Results are presented which show that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution in both the transverse and depth directions.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Examination of different drug-drug interactions can be done via the drug synergy score, which requires efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques individually do not provide significant accuracy in predicting the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. assigning larger weights to models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms others in terms of accuracy, root mean square error and coefficient of correlation.
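The biased weighted aggregation described above can be sketched as follows. The prediction values and accuracy scores are invented stand-ins; the paper's four selected models (Random forest, GFS.GCCL, ANFIS, DENFIS) are represented here only by their output vectors.

```python
def weighted_ensemble(predictions, accuracies):
    """Biased weighted aggregation: each model's predictions are weighted
    in proportion to its (hypothetical) validation accuracy."""
    total = sum(accuracies)
    weights = [a / total for a in accuracies]
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

# Hypothetical synergy scores for two drug pairs, predicted by four models:
preds = [[10.0, 20.0], [12.0, 18.0], [11.0, 22.0], [9.0, 21.0]]
accs = [0.90, 0.80, 0.85, 0.75]  # illustrative per-model accuracies
ensemble = weighted_ensemble(preds, accs)
```

Because the weights are normalized, each ensembled score is a convex combination of the individual predictions and therefore stays within their range.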
NASA Astrophysics Data System (ADS)
Hammud, Hassan H.; Ghannoum, Amer; Masoud, Mamdouh S.
2006-02-01
Sixteen Schiff bases obtained from the condensation of benzaldehyde or salicylaldehyde with various amines (aniline, 4-carboxyaniline, phenylhydrazine, 2,4-dinitrophenylhydrazine, ethylenediamine, hydrazine, o-phenylenediamine and 2,6-pyridinediamine) are studied with UV-vis spectroscopy to observe the effect of solvents, substituents and other structural factors on the spectra. The bands involving different electronic transitions are interpreted. Computerized analysis and multiple regression techniques were applied to calculate the regression and correlation coefficients based on the equation that relates peak position λmax to the solvent parameters that depend on the H-bonding ability, refractive index and dielectric constant of solvents.
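The multiple regression of peak position against solvent parameters can be sketched via the normal equations. Everything here is illustrative: the solvent parameter values and λmax readings are invented, and only two of the three solvent parameters mentioned above are used.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def multiple_regression(X, y):
    """Least-squares fit y ~ b0 + b1*x1 + ... via the normal equations."""
    Xd = [[1.0] + row for row in X]  # prepend intercept column
    k = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xd, y)) for i in range(k)]
    return gauss_solve(XtX, Xty)

# Hypothetical lambda_max (nm) vs. solvent refractive index and dielectric
# constant for four solvents:
X = [[1.33, 78.4], [1.36, 24.5], [1.44, 46.7], [1.50, 2.3]]
y = [348.0, 352.0, 356.0, 341.0]
b0, b1, b2 = multiple_regression(X, y)
```

The fitted coefficients quantify how strongly each solvent parameter shifts λmax, which is the role the regression and correlation coefficients play in the analysis above.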
Road sign recognition using Viapix module and correlation
NASA Astrophysics Data System (ADS)
Ouerhani, Y.; Desthieux, M.; Alfalou, A.
2015-03-01
In this paper, we propose and validate a new system for surveying road assets, focusing on vertical road signs. The approach combines road sign detection, recognition and identification using data provided by sensors. It relies on panoramic views provided by the innovative device VIAPIX®, developed by our company ACTRIS, together with an optimized correlation technique for road sign recognition and identification in the images. The results obtained show the benefit of using panoramic views over images provided by a single camera.
Study of Vis/NIR spectroscopy measurement on acidity of yogurt
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Wu, Di; Li, Xiaoli
2006-09-01
A fast method for measuring the pH of yogurt using Vis/NIR spectroscopy techniques was established in order to assess the acidity of yogurt rapidly. 27 samples selected separately from five different brands of yogurt were measured by Vis/NIR spectroscopy. The pH of the yogurt at the positions scanned was measured with a pH meter. A mathematical model relating pH to the Vis/NIR spectral measurements was established and developed based on partial least squares (PLS) using The Unscrambler v9.2. Then 25 unknown samples from the 5 brands were predicted using this model. The results show that the correlation coefficient of pH based on the PLS model is more than 0.890, with a standard error of calibration (SEC) of 0.037 and a standard error of prediction (SEP) of 0.043. In predicting the pH of the 25 yogurt samples from 5 different brands, the correlation coefficient between the predicted and measured values is more than 0.918. These results show good to excellent prediction performance. The Vis/NIR spectroscopy technique had significantly greater accuracy for determining pH. It was concluded that the Vis/NIR measurement technique can be used to measure the pH of yogurt rapidly and accurately, establishing a new method for the measurement of yogurt pH.
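The SEC/SEP figures of merit quoted above are root-mean-square errors of the calibration and prediction sets. The sketch below uses one common convention (division by n) and invented predicted/measured pH pairs, not the paper's data.

```python
import math

def standard_error(predicted, measured):
    """RMS error between model-predicted and reference values, the form of
    SEC (calibration set) and SEP (prediction set) figures of merit."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2
                         for p, m in zip(predicted, measured)) / n)

# Hypothetical PLS-predicted vs. meter-measured pH for five samples:
pred = [4.12, 4.30, 4.05, 4.48, 4.21]
meas = [4.10, 4.35, 4.01, 4.50, 4.25]
sep = standard_error(pred, meas)
```

Some texts divide by n-1 (or subtract the bias first); whichever convention is used, a SEP of ~0.04 pH units against a pH range of several tenths indicates a usable calibration.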
Molloi, Sabee; Ding, Huanjun; Feig, Stephen
2015-01-01
Purpose The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-mean segmentation and spectral material decomposition. Materials and Methods Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated using assessments by 10 radiologist readers, standard histogram thresholding, a fuzzy C-mean algorithm and spectral material decomposition. The breast density correlation between left and right breasts was used to assess the precision of these techniques in measuring breast composition relative to dual-energy material decomposition. Results In comparison to the other techniques, the results of breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-mean algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as reflected in the better correlation of breast density measurements from the right and left breasts. PMID:26031229
Electrical test prediction using hybrid metrology and machine learning
NASA Astrophysics Data System (ADS)
Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti
2017-03-01
Electrical test measurement in the back-end of line (BEOL) is crucial for wafer and die sorting as well as comparing intended process splits. Any in-line, nondestructive technique in the process flow to accurately predict these measurements can significantly improve mean-time-to-detect (MTTD) of defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called OCD (Optical Critical Dimension)), which can solve for multiple profile parameters such as metal line height or sidewall angle and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information, since such techniques can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows for OCD measurement directly of the electrically-testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically-testable device structures are demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements as well as X-ray fluorescence (XRF) metrology is used.
Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. Using both types of solution, namely fast reference-based machine learning on non-OCD-compatible test structures and hybrid metrology combining OCD with XRF technology, improvement in BEOL cycle-time learning can be accomplished through improved prediction capability.
2013-05-01
Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique
NASA Technical Reports Server (NTRS)
Ray, Terrill W.; Anderson, Don L.
1994-01-01
There is increasing use of statistical correlations between geophysical fields and between geochemical and geophysical fields in attempts to understand how the Earth works. Typically, such correlations have been based on spherical harmonic expansions. The expression of functions on the sphere as spherical harmonic series has many pitfalls, especially if the data are nonuniformly and/or sparsely sampled. Many of the difficulties involved in the use of spherical harmonic expansion techniques can be avoided through the use of spatial domain correlations, but this introduces other complications, such as the choice of a sampling lattice. Additionally, many geophysical and geochemical fields fail to satisfy the assumptions of standard statistical significance tests. This is especially problematic when the data values to be correlated with a geophysical field were collected at sample locations which themselves correlate with that field. This paper examines many correlations which have been claimed in the past between geochemistry and mantle tomography and between hotspot, ridge, and slab locations and tomography using both spherical harmonic coefficient correlations and spatial domain correlations. No conclusively significant correlations are found between isotopic geochemistry and mantle tomography. The Crough and Jurdy (short) hotspot location list shows statistically significant correlation with lowermost mantle tomography for degree 2 of the spherical harmonic expansion, but there are no statistically significant correlations in the spatial case. The Vogt (long) hotspot location list does not correlate with tomography anywhere in the mantle using either technique. Both hotspot lists show a strong correlation between hotspot locations and geoid highs when spatially correlated, but no correlations are revealed by spherical harmonic techniques. 
Ridge locations do not show any statistically significant correlations with tomography, slab locations, or the geoid; the strongest correlation is with lowermost mantle tomography, which is probably spurious. The most striking correlations are between mantle tomography and post-Pangean subducted slabs. The integrated locations of slabs correlate strongly with fast areas near the transition zone and the core-mantle boundary and with slow regions from 1022-1248 km depth. This seems to be consistent with the 'avalanching' downwellings which have been indicated by models of the mantle which include an endothermic phase transition at the 670-km discontinuity, although this is not a unique interpretation. Taken as a whole, these results suggest that slabs and associated cold downwellings are the dominant feature of mantle convection. Hotspot locations are no better correlated with lower mantle tomography than are ridge locations.
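The degree-wise comparison of spherical harmonic expansions used throughout this abstract can be sketched with the standard per-degree correlation coefficient. The coefficient layout below (one array of 2l+1 real coefficients per degree) is an illustrative assumption, not the authors' data format.

```python
import numpy as np

def degree_correlation(a, b):
    """Per-degree correlation of two real spherical-harmonic coefficient sets.

    `a` and `b` map degree l to a 1-D array of the 2l+1 real
    coefficients for that degree (orders m = -l..l). Returns r_l for
    each degree present in both sets: the normalized inner product of
    the two coefficient vectors at that degree.
    """
    r = {}
    for l in sorted(set(a) & set(b)):
        x, y = np.asarray(a[l], float), np.asarray(b[l], float)
        denom = np.sqrt((x * x).sum() * (y * y).sum())
        r[l] = float((x * y).sum() / denom) if denom > 0 else 0.0
    return r
```

Identical degree-2 patterns give r_2 = 1 regardless of amplitude, while patterns confined to different orders are uncorrelated, which is why a single strong degree (like the degree-2 hotspot result above) can stand out even when other degrees show nothing.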
Parallel image logical operations using cross correlation
NASA Technical Reports Server (NTRS)
Strong, J. P., III
1972-01-01
Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.
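As a minimal digital analogue of this idea (not the Levialdi algorithm itself), occurrences of a binary pattern can be counted with a single cross-correlation: encoding both image and template as +/-1 makes the correlation equal the template size exactly at perfect matches.

```python
import numpy as np
from scipy.signal import correlate2d

def count_pattern(image, template):
    """Count exact occurrences of a binary template in a binary image.

    Both arrays are mapped from {0,1} to {-1,+1}; the sliding inner
    product then equals template.size if and only if every pixel in the
    window matches the template.
    """
    img = 2.0 * np.asarray(image, float) - 1.0
    tpl = 2.0 * np.asarray(template, float) - 1.0
    corr = correlate2d(img, tpl, mode="valid")
    return int(np.sum(np.isclose(corr, tpl.size)))
```

Because the whole count comes from one correlation, the operation is inherently parallel, which is the property the optical implementations above exploit.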
Adapted all-numerical correlator for face recognition applications
NASA Astrophysics Data System (ADS)
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
2013-03-01
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One key aim of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). The resulting saturation effect degrades the correlator's decision performance when filters contain up to nine references. Further, an optimization based on a segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while the size of the learning database is minimized.
NASA Astrophysics Data System (ADS)
Chen, Jianbo; Wang, Yue; Rong, Lixin; Wang, Jingjuan
2018-07-01
IR, Raman and other separation-free and label-free spectroscopic techniques are promising methods for the rapid and low-cost quality control of complex mixtures such as foods and herbs. However, as the overlapped signals from different ingredients usually make it difficult to extract useful information, chemometric tools are often needed to find the spectral features of interest. With designed perturbations, two-dimensional correlation spectroscopy (2DCOS) is a powerful technique for resolving overlapped spectral bands and enhancing the apparent spectral resolution. In this research, integrative two-dimensional correlation spectroscopy (i2DCOS) is defined for the first time to overcome some disadvantages of synchronous and asynchronous correlation spectra for identification. The integrative 2D correlation spectra weight the asynchronous cross peaks by the corresponding synchronous cross peaks, which combines the signal-to-noise advantage of synchronous correlation spectra with the spectral-resolution advantage of asynchronous correlation spectra. The feasibility of the integrative 2D correlation spectra for the quality control of complex mixtures is examined through the identification of adulterated Fritillariae Bulbus powders. Compared with model-based pattern recognition and multivariate calibration methods, i2DCOS provides intuitive identification results and does not require a large number of samples. The results show the potential of i2DCOS for the intuitive quality control of herbs and other complex mixtures, especially when the number of samples is small.
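Noda's synchronous and asynchronous spectra underlying this approach can be sketched as below. Taking the elementwise product of the two as the sync-weighted asynchronous spectrum is one plausible reading of the i2DCOS definition, not necessarily the authors' exact formula.

```python
import numpy as np

def two_d_cos(X):
    """Synchronous and asynchronous 2D correlation spectra (Noda's rules).

    X: (n, p) array of n perturbation-ordered spectra over p channels.
    The synchronous spectrum is the covariance of the mean-centered
    (dynamic) spectra; the asynchronous spectrum inserts the
    Hilbert-Noda transformation matrix N between them.
    """
    X = np.asarray(X, float)
    n = X.shape[0]
    D = X - X.mean(axis=0)                      # dynamic spectra
    sync = D.T @ D / (n - 1)
    idx = np.arange(n)
    diff = idx[None, :] - idx[:, None]          # k - j
    with np.errstate(divide="ignore"):
        N = np.where(diff == 0, 0.0, 1.0 / (np.pi * diff))
    async_ = D.T @ (N @ D) / (n - 1)
    return sync, async_

def integrative_2dcos(X):
    """Asynchronous cross peaks weighted elementwise by the synchronous
    intensities (assumed realization of the i2DCOS idea)."""
    sync, async_ = two_d_cos(X)
    return sync * async_
```

The defining symmetries (synchronous spectrum symmetric, asynchronous antisymmetric) follow directly from N being antisymmetric, and are a quick sanity check on any implementation.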
Improvement of the accuracy of noise measurements by the two-amplifier correlation method.
Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P
2013-10-01
We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
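The core idea, that averaging the cross-spectrum of two channels rejects each channel's own amplifier noise while retaining the common DUT noise, can be demonstrated with a toy simulation (segment counts and noise levels are illustrative, not the paper's setup).

```python
import numpy as np

def cross_psd(x, y, nseg, nfft):
    """Averaged cross power spectral density over nseg segments of
    length nfft (rectangular window; one-sided scaling omitted)."""
    acc = np.zeros(nfft, complex)
    for i in range(nseg):
        X = np.fft.fft(x[i * nfft:(i + 1) * nfft])
        Y = np.fft.fft(y[i * nfft:(i + 1) * nfft])
        acc += X * np.conj(Y)
    return acc / (nseg * nfft)

rng = np.random.default_rng(1)
nseg, nfft = 400, 256
n = nseg * nfft
dut = rng.normal(0.0, 1.0, n)            # noise of the device under test (PSD = 1)
ch1 = dut + rng.normal(0.0, 2.0, n)      # independent amplifier noise, channel 1
ch2 = dut + rng.normal(0.0, 2.0, n)      # independent amplifier noise, channel 2
S12 = cross_psd(ch1, ch2, nseg, nfft).real.mean()   # converges to the DUT PSD alone
S11 = cross_psd(ch1, ch1, nseg, nfft).real.mean()   # DUT plus amplifier noise
```

With enough averaged segments S12 approaches the DUT level (1.0 here) even though each single channel is dominated by amplifier noise (S11 near 5.0), which is exactly the accuracy gain the cross-correlation technique buys.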
CCFpams: Atmospheric stellar parameters from cross-correlation functions
NASA Astrophysics Data System (ADS)
Malavolta, Luca; Lovis, Christophe; Pepe, Francesco; Sneden, Christopher; Udry, Stephane
2017-07-01
CCFpams allows the measurement of stellar temperature, metallicity and gravity within a few seconds and in a completely automated fashion. Rather than performing comparisons with spectral libraries, the technique is based on the determination of several cross-correlation functions (CCFs) obtained by including spectral features with different sensitivity to the photospheric parameters. Literature stellar parameters of high signal-to-noise (SNR) and high-resolution HARPS spectra of FGK Main Sequence stars are used to calibrate the stellar parameters as a function of CCF areas.
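A toy, pixel-lag version of the mask-based CCF can illustrate the principle; the real technique works in velocity space with weighted line masks, so everything below is a simplified sketch.

```python
import numpy as np

def ccf(spectrum, mask_positions, lags):
    """Pixel-lag cross-correlation of a 1-D spectrum with a binary line
    mask: for each lag, sum the flux falling on the shifted line
    positions. Absorption lines give a CCF minimum at the true shift."""
    spectrum = np.asarray(spectrum, float)
    mask_positions = np.asarray(mask_positions)
    out = []
    for lag in lags:
        idx = mask_positions + lag
        idx = idx[(idx >= 0) & (idx < spectrum.size)]
        out.append(spectrum[idx].sum())
    return np.array(out)

# toy spectrum: flat continuum with Gaussian absorption lines shifted by +3 px
x = np.arange(400)
lines = np.array([60, 150, 230, 320])
flux = np.ones(400)
for c in lines + 3:
    flux -= 0.5 * np.exp(-0.5 * ((x - c) / 2.0) ** 2)
lags = np.arange(-10, 11)
vals = ccf(flux, lines, lags)
shift = lags[np.argmin(vals)]     # deepest CCF at the true 3-px shift
```

Building several CCFs from masks restricted to lines of different parameter sensitivity, and calibrating their areas against known stars, is the step CCFpams adds on top of this basic construction.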
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
The current reduced-order thermal model for cryogenic propellant tanks is based on correlations built for flat plates, collected in the 1950s. The use of these correlations suffers from inaccurate geometry representation, inaccurate gravity orientation, an ambiguous length scale, and a lack of detailed validation. The work presented under this task uses the first-principles-based Computational Fluid Dynamics (CFD) technique to compute heat transfer from the tank wall to the cryogenic fluids, and extracts and correlates the equivalent heat transfer coefficient to support the reduced-order thermal model. The CFD tool was first validated against available experimental data and commonly used correlations for natural convection along a vertically heated wall. Good agreement between the present predictions and experimental data was found for flows in laminar as well as turbulent regimes. The convective heat transfer between the tank wall and the cryogenic propellant, and that between the tank wall and the ullage gas, were then simulated. The results showed that commonly used heat transfer correlations for either a vertical or a horizontal plate overpredict the heat transfer rate for the cryogenic tank, in some cases by as much as one order of magnitude. A characteristic length scale has been defined that can collapse the heat transfer coefficients for different fill levels onto a single curve. This curve can be used in the reduced-order heat transfer model analysis.
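For concreteness, a widely used flat-plate correlation of the kind the abstract compares against is the Churchill-Chu expression for natural convection on a vertical plate (quoted from memory; check a heat-transfer reference before relying on the constants).

```python
def churchill_chu_nu(ra, pr):
    """Mean Nusselt number for natural convection on a vertical flat
    plate (Churchill-Chu form, valid over a wide Rayleigh-number range).

    ra: Rayleigh number based on plate height L.
    pr: Prandtl number of the fluid.
    The heat transfer coefficient then follows as h = Nu * k / L,
    which is where the ambiguous choice of length scale L enters.
    """
    f = (1.0 + (0.492 / pr) ** (9.0 / 16.0)) ** (8.0 / 27.0)
    return (0.825 + 0.387 * ra ** (1.0 / 6.0) / f) ** 2
```

The abstract's point is that applying a correlation like this to a closed, curved, partially filled tank can miss the true coefficient by up to an order of magnitude, hence the CFD-derived replacement curve.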
[The links between neuropsychology and neurophysiology].
Stolarska-Weryńska, Urszula; Biedroń, Agnieszka; Kaciński, Marek
2016-01-01
The aim of the study was to establish the current scope of knowledge regarding associations between neurophysiological functioning, neuropsychology and psychotherapy. A systematic review was performed of 93 publications from Science Server, which contains the collections of Elsevier, Springer Journals, SCI-Ex/ICM, MEDLINE/PubMed, and SCOPUS. The works were selected based on the following key words: 'neuropsychology, neurocognitive correlates, electrodermal response, event related potential, EEG, pupillography, electromyography', from papers published between 2004 and 2015. Current reports on the use of neurophysiological methods in psychology can be divided into two areas: experimental research, and research on the practical use of conditioning techniques and biofeedback in the treatment of somatic disease. Among the experimental research, the following have been distinguished: research based on the startle reflex, physiological reactions to novelty, stress, type and amount of cognitive load, and physiological correlates of emotion; research on the neurophysiological correlates of mental disorders, mostly mood and anxiety disorders; and neurocognitive correlates of memory, attention, learning and intelligence. Among papers regarding the use of neurophysiological methods in psychology, two types are the most frequent: those on the mechanisms of biofeedback, related mainly to neurofeedback, a quickly expanding method of treating various attention and mental disorders, and research on the use of conditioning techniques in the treatment of mental disorders, especially depression and anxiety. A special place among all the above is taken by research on the electrophysiological correlates of psychotherapy, which aims to differentiate the efficacy of various psychotherapeutic schools (the largest number of publications concern the efficacy of cognitive-behavioral psychotherapy) in patients of different age groups and with different diagnoses.
Feature-based alert correlation in security systems using self organizing maps
NASA Astrophysics Data System (ADS)
Kumar, Munesh; Siddique, Shoaib; Noor, Humera
2009-04-01
The security of networks has been an important concern for any organization. This is especially important for the defense sector, as gaining unauthorized access to an organization's sensitive information is a prime goal of cyber criminals. Many network security techniques, such as firewalls and VPN concentrators, are deployed at the perimeter of the network to deal with attacks originating from outside it. But any vulnerability that allows the network's perimeter defense to be penetrated can expose the entire network. To deal with such vulnerabilities, a system has evolved whose purpose is to generate an alert for any malicious activity triggered against the network and its resources, termed an Intrusion Detection System (IDS). Traditional IDSs still have some deficiencies, such as generating a large number of alerts containing both true and false positives. By automatically classifying (correlating) various alerts, a high-level analysis of the security status of the network can be obtained, and the job of the network security administrator becomes much easier. In this paper we propose to utilize Self-Organizing Maps (SOM), an artificial neural network, for correlating large numbers of logged intrusion alerts based on generic features such as source/destination IP address, port number, and signature ID. The different ways in which alerts can be correlated by artificial intelligence techniques are also discussed. We show that the strategy described in the paper improves the efficiency of the IDS by better correlating the alerts, leading to reduced false positives and increased competence of the network administrator.
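A minimal SOM of the kind proposed can be sketched in a few lines; the grid size, learning-rate schedule, and the idea of feeding numerically encoded alert features (IP, port, signature ID) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=2000, seed=0):
    """Train a minimal Self-Organizing Map on rows of `data`.

    Each iteration picks a random sample, finds its best-matching unit
    (BMU), and pulls the BMU and its grid neighbors toward the sample,
    with the neighborhood radius and learning rate decaying linearly.
    Alerts that map to the same or nearby nodes can then be treated as
    correlated.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    sigma0, lr0 = max(h, w) / 2.0, 0.5
    for t in range(iters):
        x = data[rng.integers(len(data))]
        winner = np.argmin(((weights - x) ** 2).sum(axis=1))
        frac = t / iters
        sigma = sigma0 * (1.0 - frac) + 0.5
        lr = lr0 * (1.0 - frac) + 0.01
        d2 = ((coords - coords[winner]) ** 2).sum(axis=1)
        g = np.exp(-d2 / (2.0 * sigma ** 2))     # neighborhood function
        weights += lr * g[:, None] * (x - weights)
    return weights

def bmu(weights, x):
    """Index of the best-matching unit for feature vector x."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

After training, alerts from the same attack pattern cluster onto the same region of the map, which is the unsupervised correlation step the paper builds on.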
Microprocessors as a tool in determining correlation between sferics and tornado genesis: an update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, D.R.
1980-09-01
Sferics - atmospheric electromagnetic radiation - can be directly correlated, it is believed, to the genesis of tornadoes and other severe weather. Sferics are generated by lightning and other atmospheric disturbances that are not yet entirely understood. The recording and analysis of the patterns in which sferics events occur, it is hoped, will lead to accurate real-time prediction of tornadoes and other severe weather. Collection of the tremendous amount of sferics data generated by one storm system becomes cumbersome when correlation between at least two stations is necessary for triangulation. Microprocessor-based computing systems have made the task of data collection and manipulation inexpensive and manageable. The original paper on this subject, delivered at MAECON '78, dealt with hardware interfacing. Presented were hardware and software tradeoffs, as well as design and construction techniques to yield a cost-effective system. This updated paper presents an overview of where the data comes from, how it is collected, and some current manipulation and interpretation techniques used.
Correlation based efficient face recognition and color change detection
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.
2013-01-01
Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.
Two-Way Gene Interaction From Microarray Data Based on Correlation Methods
Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh
2016-01-01
Background: High-throughput techniques for monitoring various aspects of gene activity have generated a massive explosion of data for studying gene networks. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. Objectives: The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. Materials and Methods: In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes related to venous thrombosis disease, built a matrix whose entries represent the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and with Gene Ontology molecular-function visual methods; R software version 3.2 and Bioconductor were used to perform these analyses. Results: Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Conclusions: Some results of the correlation coefficients do not agree with the visualization. The reason may be the small number of data. PMID:27621916
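The two-step procedure (score all gene pairs, then threshold) can be sketched as follows; the threshold value is an illustrative assumption, and Blomqvist's measure is omitted because it has no standard SciPy implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def coexpression_network(expr, threshold=0.8, method="pearson"):
    """Build a co-expression adjacency matrix in two steps.

    Step 1: score every gene pair with Pearson's or Spearman's
    correlation. Step 2: connect pairs whose absolute score exceeds
    the threshold. `expr` is a (samples, genes) expression matrix.
    Returns the correlation matrix and the boolean adjacency matrix.
    """
    if method == "pearson":
        c = np.corrcoef(expr, rowvar=False)
    else:
        c, _ = spearmanr(expr)
    adj = np.abs(c) >= threshold
    np.fill_diagonal(adj, False)      # no self-edges
    return c, adj
```

With rank-based scoring, monotone but nonlinear co-regulation still produces an edge, which is the usual motivation for comparing Spearman against Pearson as the abstract does.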
Processing techniques for correlation of LDA and thermocouple signals
NASA Astrophysics Data System (ADS)
Nina, M. N. R.; Pita, G. P. A.
1986-11-01
A technique was developed to enable the evaluation of the correlation between velocity and temperature, with a laser Doppler anemometer (LDA) as the source of velocity signals and a fine-wire thermocouple as that of flow temperature. The discontinuous nature of LDA signals requires a special correlation technique, in particular when few seeding particles are present in the flow. The thermocouple signal was analog-compensated in frequency, and the effect of the value of the time constant on the velocity-temperature correlation was studied.
Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.
Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur
2016-07-25
In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.
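For the uncorrelated-channel case, gamma-gamma irradiance samples are simply the product of two unit-mean gamma variates; the alpha and beta values below are illustrative, not the paper's link parameters.

```python
import numpy as np

def gamma_gamma_samples(alpha, beta, n, rng):
    """Normalized irradiance samples from the gamma-gamma turbulence
    model: I = X * Y with X ~ Gamma(alpha, scale=1/alpha) and
    Y ~ Gamma(beta, scale=1/beta), so that E[I] = 1. X and Y model
    large- and small-scale scintillation respectively."""
    x = rng.gamma(alpha, 1.0 / alpha, n)
    y = rng.gamma(beta, 1.0 / beta, n)
    return x * y

rng = np.random.default_rng(2)
alpha, beta = 4.0, 1.9
I = gamma_gamma_samples(alpha, beta, 200_000, rng)
# scintillation index: sigma_I^2 = 1/alpha + 1/beta + 1/(alpha*beta)
si_theory = 1.0 / alpha + 1.0 / beta + 1.0 / (alpha * beta)
```

Such Monte Carlo samples are a standard way to cross-check closed-form ABER expressions like the ones derived in the paper; modeling the partially correlated channels would additionally require drawing correlated gamma variates.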
NASA Technical Reports Server (NTRS)
Hewes, C. R.; Bosshart, P. W.; Eversole, W. L.; Dewit, M.; Buss, D. D.
1976-01-01
Two CCD techniques were discussed for performing an N-point sampled data correlation between an input signal and an electronically programmable reference function. The design and experimental performance of an implementation of the direct time correlator utilizing two analog CCDs and MOS multipliers on a single IC were evaluated. The performance of a CCD implementation of the chirp z transform was described, and the design of a new CCD integrated circuit for performing correlation by multiplication in the frequency domain was presented. This chip provides a discrete Fourier transform (DFT) or inverse DFT, multipliers, and complete support circuitry for the CCD CZT. The two correlation techniques are compared.
Fahimian, Benjamin; Yu, Victoria; Horst, Kathleen; Xing, Lei; Hristov, Dimitre
2013-12-01
External beam radiation therapy (EBRT) provides a non-invasive treatment alternative for accelerated partial breast irradiation (APBI); however, limitations in the achievable dose conformity of current EBRT techniques have been correlated with reported toxicity. To enhance the conformity of EBRT APBI, a technique for conventional LINACs is developed which, through combined motion of the couch, intensity-modulated delivery, and a prone breast setup, enables wide-angular coronal arc irradiation of the ipsilateral breast without irradiating through the thorax and contralateral breast. A couch trajectory optimization technique was developed to determine trajectories that concurrently avoid collision with the LINAC and maintain the target within the MLC apertures. Inverse treatment planning was performed along the derived trajectory. The technique was experimentally implemented by programming the Varian TrueBeam™ STx in Developer Mode. The dosimetric accuracy of the delivery was evaluated by ion chamber and film measurements in phantom. The resulting optimized trajectory was shown to be necessarily non-isocentric and to contain both translations and rotations of the couch. Film measurements resulted in 93% of the points in the measured two-dimensional dose maps passing the 3%/3 mm Gamma criterion. Preliminary treatment plan comparison to 5-field 3D-conformal, IMRT, and VMAT plans demonstrated enhancement in conformity and reduction of the normal-tissue V50% and V100% parameters that have been correlated with EBRT toxicity. The feasibility of wide-angular intensity-modulated partial breast irradiation using motion of the couch has been demonstrated experimentally on a standard LINAC for the first time. For patients eligible for a prone setup, the technique may enable improvement of dose conformity and of the associated dose-volume parameters correlated with toxicity.
Parween, Shahila; Nahar, Pradip
2013-10-15
In this communication, we report an ELISA technique on an activated polypropylene microtest plate (APPµTP) as an illustrative example of a low-cost diagnostic assay. The activated test zone in APPµTP binds a capture biomolecule through covalent linkage, thereby eliminating the non-specific binding often prevalent in absorption-based techniques. The efficacy of APPµTP is demonstrated by detecting human immunoglobulin G (IgG), human immunoglobulin E (IgE) and Aspergillus fumigatus antibody in patients' sera. Detection is done by taking an image of the assay solution with a desktop scanner and analyzing the color of the image. Human IgE quantification by color saturation in the image-based assay shows excellent correlation with the absorbance-based assay (Pearson correlation coefficient, r=0.992). The significance of the relationship is seen from its p value of 4.087e-11. The performance of APPµTP is also checked against the microtiter plate and paper-based ELISA. APPµTP can quantify an analyte as precisely as a microtiter plate, with insignificant non-specific binding, a necessary prerequisite for an ELISA assay. In contrast, paper ELISA shows high non-specific binding in control sera (false positives). Finally, we have carried out the ELISA steps on APPµTP using ultrasound waves in a sonicator bath, and the results show that even in 8 min it can convincingly differentiate a test sample from a control sample. In short, the spectrophotometer-free, image-based miniaturized ELISA on APPµTP is precise, reliable, rapid, and sensitive, and could be a good substitute for the conventional immunoassay procedures widely used in clinical and research laboratories.
Techniques for measuring arrival times of pulsar signals 1: DSN observations from 1968 to 1980
NASA Technical Reports Server (NTRS)
Downs, G. S.; Reichley, P. E.
1980-01-01
Techniques used in the ground-based observations of pulsars are described, many of them applicable in a navigation scheme. The arrival times of the pulses intercepting Earth are measured at time intervals from a few days to a few months. Low-noise, wide-band receivers amplify signals intercepted by 26 m, 34 m, and 64 m antennas. Digital recordings of total received signal power versus time are cross-correlated with the appropriate pulse template.
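The final step, cross-correlating a folded power profile with a pulse template to read off the arrival phase, can be sketched as a circular correlation via the FFT (a simplified stand-in for the actual timing procedure).

```python
import numpy as np

def pulse_phase(profile, template):
    """Estimate the pulse arrival phase (in bins) of a folded profile.

    Both inputs are periodic over one pulse period, so the circular
    cross-correlation is computed via the FFT; the lag that maximizes
    it is the template shift that best aligns with the profile.
    """
    P = np.fft.rfft(profile)
    T = np.fft.rfft(template)
    xcorr = np.fft.irfft(P * np.conj(T), n=len(profile))
    return int(np.argmax(xcorr))

# toy test: Gaussian pulse template; profile is the template shifted by
# 37 bins with a little radiometer noise added
n = 256
x = np.arange(n)
template = np.exp(-0.5 * ((x - 50) / 4.0) ** 2)
rng = np.random.default_rng(3)
profile = np.roll(template, 37) + rng.normal(0, 0.01, n)
shift = pulse_phase(profile, template)   # recovers the 37-bin shift
```

In practice the integer-bin estimate is refined to sub-bin precision (e.g. by fitting the correlation peak), since timing accuracy well below one profile bin is what makes pulsar navigation feasible.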
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979 to 1985, thereby confounding the total counts. Data-recording problems and missing data hamper the coding of the data and their use with modern analytical techniques. Coefficients of variation in counts among years averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, nonparametric rank correlation trend analysis, and moving averages.
Quinone-based stable isotope probing for assessment of 13C substrate-utilizing bacteria
NASA Astrophysics Data System (ADS)
Kunihiro, Tadao; Katayama, Arata; Demachi, Toyoko; Veuger, Bart; Boschker, Henricus T. S.; van Oevelen, Dick
2015-04-01
In this study, we attempted to establish a quinone-based stable-isotope probing (SIP) technique to link substrate-utilizing bacterial groups to chemotaxonomic groups in the bacterial community. To identify metabolically active bacterial groups in various environments, SIP techniques combined with biomarkers have been widely utilized as an attractive method for environmental study. Quantitative approaches to the SIP technique have a unique advantage in assessing substrate incorporation into bacteria. As the most widely used quantitative approach, the SIP technique based on phospholipid-derived fatty acids (PLFA) has been applied to simultaneously assess the substrate-incorporation rate into bacteria and the microbial community structure. This approach is powerful for estimating the incorporation rate because of the high sensitivity afforded by detection with a gas chromatograph-combustion interface-isotope ratio mass spectrometer (GC-c-IRMS). However, its phylogenetic resolution is limited by the specificity of a compound-specific marker. We focused on respiratory quinones as biomarkers. Our previous study found a good correlation between concentrations of bacteria-specific PLFAs and quinones over several orders of magnitude in various marine sediments, and the quinone method has a higher resolution (bacterial phylum level) for resolving differences in bacterial community composition than the bacterial PLFA method. Therefore, respiratory quinones are potentially good biomarkers for quantitative approaches to the SIP technique. The LC-APCI-MS method, a molecular-mass-based detection method for quinones, was developed and provides useful structural information for identifying quinone molecular species in environmental samples. LC-MS/MS on a hybrid triple quadrupole/linear ion trap, which enables compounds to be simultaneously identified and quantified in a single analysis, can detect high-molecular-mass compounds together with their isotope ions.
Use of LC-MS/MS allows us to develop quinone-SIP based on molecular-mass differences due to 13C abundance in the quinone. In this study, we verified the stable carbon isotope composition of quinones against the bulk stable carbon isotope composition of bacterial cultures. The results indicated a good correlation between the two. However, our measurement conditions for the detection of quinone isotope ions incurred an underestimation of the 13C abundance in the quinone. The quinone-SIP technique therefore needs further optimization of the LC-MS/MS measurement conditions.
Jain, Rahi; Venkatasubramanian, Padma
2014-01-01
Quality Ayurvedic herbal medicines are potential, low-cost solutions for addressing the contemporary healthcare needs of both the Indian and the global community. Correlating Ayurvedic herbal preparations with modern processing principles (MPPs) can help develop new, and use appropriate, technology for scaling up production of the medicines, which is necessary to meet the growing demand. Understanding the fundamental Ayurvedic principles behind formulation and processing is also important for improving the dosage forms. Even though the Ayurvedic industry has adopted technologies from the food, chemical and pharmaceutical industries, there has been no systematic study to correlate the traditional and modern processing methods. This study is an attempt to provide a possible correlation between Ayurvedic processing methods and MPPs. A systematic literature review was performed to identify the Ayurvedic processing methods, collecting information on medicine preparation methods from English editions of classical Ayurveda texts. The correlation between traditional methods and MPPs was based on the techniques used in Ayurvedic drug processing. It was observed that Ayurvedic medicine preparation involves two major types of process, namely extraction and separation. Extraction uses membrane-rupturing and solute-diffusion principles, while separation uses volatility, adsorption, and size-exclusion principles. The study provides systematic documentation of the methods used in Ayurveda for herbal drug preparation, along with their interpretation in terms of MPPs. This is a first step toward improving or replacing traditional techniques. New or existing technologies can then be used to improve the dosage forms and scale up production while maintaining Ayurvedic principles, as in the traditional techniques.
NASA Astrophysics Data System (ADS)
Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.
2015-03-01
Results of denoising based on the discrete cosine transform (DCT) for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising-efficiency results are fitted to these statistics, and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
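The hard-thresholding mechanism the prediction is built on can be sketched with a single global 2-D DCT (the conventional DCT filter works blockwise on 8x8 blocks; the global transform just keeps the example short).

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(noisy, sigma, k=2.7):
    """Hard-threshold DCT denoising sketch for additive Gaussian noise.

    With an orthonormal DCT, white noise of standard deviation sigma
    stays white in the transform domain, so coefficients with magnitude
    below k*sigma are treated as pure noise and zeroed; the DC term is
    always kept.
    """
    c = dctn(noisy, norm="ortho")
    mask = np.abs(c) >= k * sigma
    mask[0, 0] = True
    return idctn(c * mask, norm="ortho")

rng = np.random.default_rng(4)
clean = np.fromfunction(lambda i, j: np.sin(i / 8.0) + np.cos(j / 8.0), (64, 64))
noisy = clean + rng.normal(0, 0.2, clean.shape)
den = dct_denoise(noisy, 0.2)
```

The abstract's statistics-based prediction exploits exactly this picture: how much an image benefits from thresholding depends on how its DCT coefficient magnitudes are distributed relative to the noise level.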
David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García
2017-01-06
Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the most affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually takes weeks or even months. Using non-invasive techniques, such as image processing, it is possible to analyse the ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value of each pixel of the image. These synthetic frequencies are calculated using the model of energy density of an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, yielding the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. Segmentation of these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained with the methodology against the real segments. The proposed technique is compared with two benchmark algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds.
The methodology presents better segmentation results than the benchmark algorithms, in less computational time and without the need for an initial condition.
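Otsu's threshold, used above for binarization, selects the gray level that maximizes the between-class variance of the image histogram. A minimal NumPy sketch of the standard method (not the paper's implementation):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold maximizing between-class variance.

    `gray` is an array of uint8 intensities; the returned level t means
    "background <= t < foreground" in the usual Otsu convention.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative first moment
    mu_t = mu[-1]                        # global mean intensity
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)   # undefined at omega = 0 or 1
    return int(np.argmax(sigma_b2))
```

For a clearly bimodal image the maximizer falls between the two modes, which is what makes it suitable for separating contrast levels before contour extraction.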
Separations and characterizations of fractions from Mayan, Heavy Arabian, and Hondo crude oils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kircher, C.C.
1991-01-01
This paper reports on Mayan, Heavy Arabian, and Hondo crude oil resids separated with a modified, extended ASTM D2007 procedure. The fractions obtained have been characterized with various analytical techniques. Chemical properties, hydrodesulfurization, and hydrodemetallation activities of the resids have been correlated with the chemical properties of the separated fractions. Many correlations were indicative of the overall bulk properties of the resids and the broad chemical classes obtained from the separation schemes. Other correlations reflected the unique chemical nature of each crude oil resid. Some potentially important correlations were found between hydrodesulfurization activity and sulfur concentration in polars and asphaltenes, and between hydrodemetallation activity and nitrogen concentration in the acids and bases fractions.
Techniques for measurement of thoracoabdominal asynchrony
NASA Technical Reports Server (NTRS)
Prisk, G. Kim; Hammer, J.; Newth, Christopher J L.
2002-01-01
Respiratory motion measured by respiratory inductance plethysmography often deviates from the sinusoidal pattern assumed in the traditional Lissajous figure (loop) analysis used to determine thoraco-abdominal asynchrony, or phase angle phi. We investigated six different time-domain methods of measuring phi, using simulated data with sinusoidal and triangular waveforms, phase shifts of 0-135 degrees, and 10% noise. The techniques were then used on data from 11 lightly anesthetized rhesus monkeys (Macaca mulatta; 7.6 +/- 0.8 kg; 5.7 +/- 0.5 years old), instrumented with a respiratory inductive plethysmograph and subjected to increasing levels of inspiratory resistive loading ranging from 5 to 1,000 cmH2O.L(-1).sec(-1). The best results were obtained from cross-correlation and maximum linear correlation, with errors less than approximately 5 degrees from the actual phase angle in the simulated data. The worst performance was produced by the loop analysis, which in some cases was in error by more than 30 degrees. Compared to correlation, the other analysis techniques performed at an intermediate level. Maximum linear correlation and cross-correlation produced similar results on the data collected from monkeys (SD of the difference, 4.1 degrees), but all other techniques had a high SD of the difference compared to the correlation techniques. We conclude that phase angles are best measured using cross-correlation or maximum linear correlation, techniques that are independent of waveform shape and robust in the presence of noise. Copyright 2002 Wiley-Liss, Inc.
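The cross-correlation phase measurement that performed best above can be sketched in a few lines: find the lag maximizing the cross-correlation of the two signals and convert it to degrees of the breathing cycle. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def phase_angle_xcorr(rc, ab, period_samples):
    """Phase angle (degrees) between ribcage and abdomen signals.

    Uses the lag of the cross-correlation peak, so the estimate does not
    assume a sinusoidal waveform. A positive angle means `ab` lags `rc`.
    """
    rc = rc - rc.mean()
    ab = ab - ab.mean()
    n = len(rc)
    lags = np.arange(-n + 1, n)
    xc = np.correlate(ab, rc, mode='full')   # xc[k] = sum_n ab[n+k] * rc[n]
    lag = lags[np.argmax(xc)]
    return 360.0 * lag / period_samples
```

For a 36-degree shift on a 100-sample period, the peak lag is 10 samples, independent of whether the waveform is sinusoidal or triangular.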
Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission
NASA Astrophysics Data System (ADS)
Hampton, Jesse Clay
The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission (AE) monitoring, based on laboratory-scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, crack-slip and crack-normal direction vectors, and the relative volumetric deformation of an individual microcrack. Differentiation between individual microcrack analysis and AE cloud-based techniques is studied in an effort to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimations from combinations of individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud-based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations, including multi-block systems. Laboratory hydraulic fracturing was performed with sample sizes ranging from 15 x 15 x 25 cm3 to 30 x 30 x 25 cm3, in both unconfined and true-triaxially confined stress states, using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of the AE response throughout fracture interactions. Investigations at differing scales showed the usefulness of individual microcrack characterization as well as DFN and cloud-based techniques. Individual microcrack characterization weighting cloud-based techniques correlated well with post-test damage evaluations.
Stanford automatic photogrammetry research
NASA Technical Reports Server (NTRS)
Quam, L. H.; Hannah, M. J.
1974-01-01
A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.
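Correlation matching of small areas, as described, amounts to sliding a template from one image along the candidate positions in the other and keeping the position with the highest normalized correlation. A generic zero-normalized cross-correlation (ZNCC) sketch, not the Stanford implementation; the function names and search loop are illustrative:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches (in [-1, 1])."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_along_row(left, right, row, col, half, max_disp):
    """Find the disparity maximizing ZNCC for the patch centered at (row, col).

    Searches leftward in `right` (the usual stereo convention), returning
    the best disparity and its correlation score.
    """
    tpl = left[row-half:row+half+1, col-half:col+half+1]
    best_d, best_s = 0, -2.0
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:          # candidate window would leave the image
            break
        cand = right[row-half:row+half+1, c-half:c+half+1]
        s = zncc(tpl, cand)
        if s > best_s:
            best_s, best_d = s, d
    return best_d, best_s
```

Repeating this over a grid of points yields the disparity map from which a 3-dimensional surface model can be triangulated.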
The Mathematical Analysis of Style: A Correlation-Based Approach.
ERIC Educational Resources Information Center
Oppenheim, Rosa
1988-01-01
Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…
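As a hedged illustration of the autoregressive modelling described, the simplest member of the ARIMA family, an AR(1) model of a sentence-length series, can be fitted by ordinary least squares. The function name and fitting approach are mine, not the article's (a full ARIMA fit would use maximum likelihood):

```python
import numpy as np

def fit_ar1(x):
    """Least-squares fit of x[t] = c + phi * x[t-1] + e[t] (ARIMA(1,0,0) sketch).

    Returns the intercept c and autoregressive coefficient phi; phi near 0
    means successive sentence lengths are uncorrelated, phi > 0 means long
    sentences tend to follow long ones.
    """
    x = np.asarray(x, float)
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    c, phi = coef
    return float(c), float(phi)
```

Comparing fitted coefficients across texts is one way such a model characterizes style: two works by the same author should yield similar (c, phi) pairs.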
Predicting the Emplacement of Improvised Explosive Devices: An Innovative Solution
ERIC Educational Resources Information Center
Lerner, Warren D.
2013-01-01
In this quantitative correlational study, simulated data were employed to examine artificial-intelligence techniques or, more specifically, artificial neural networks, as they relate to the location prediction of improvised explosive devices (IEDs). An ANN model was developed to predict IED placement, based upon terrain features and objects…
Dörlich, René M; Chen, Qing; Niklas Hedde, Per; Schuster, Vittoria; Hippler, Marc; Wesslowski, Janine; Davidson, Gary; Nienhaus, G Ulrich
2015-05-07
Cellular communication in multi-cellular organisms is mediated to a large extent by a multitude of cell-surface receptors that bind specific ligands. An in-depth understanding of cell signaling networks requires quantitative information on ligand-receptor interactions within living systems. In principle, fluorescence correlation spectroscopy (FCS) based methods can provide such data, but live-cell applications have proven extremely challenging. Here, we have developed an integrated dual-color dual-focus line-scanning fluorescence correlation spectroscopy (2c2f lsFCS) technique that greatly facilitates live-cell and tissue experiments. Absolute ligand and receptor concentrations and their diffusion coefficients within the cell membrane can be quantified without the need to perform additional calibration experiments. We also determine the concentration of ligands diffusing in the medium outside the cell within the same experiment by using a raster image correlation spectroscopy (RICS) based analysis. We have applied this robust technique to study the binding of two Wnt antagonists, Dickkopf1 and Dickkopf2 (Dkk1/2), to their cognate receptor, low-density lipoprotein receptor-related protein 6 (LRP6), in the plasma membrane of living HEK293T cells. We obtained significantly lower affinities than previously reported from in vitro studies, underscoring the need to measure such data on living cells or tissues.
Bilung, Lesley Maurice; Tahar, Ahmad Syatir; Yunos, Nur Emyliana; Apun, Kasing; Lim, Yvonne Ai-Lian; Nillian, Elexson; Hashim, Hashimatul Fatma
2017-01-01
Cryptosporidiosis and cyclosporiasis are caused by waterborne coccidian protozoan parasites of the genera Cryptosporidium and Cyclospora, respectively. This study was conducted to detect Cryptosporidium and Cyclospora oocysts in environmental water abstracted by drinking water treatment plants and used for recreational activities in Sarawak, Malaysia. Water samples (12 each) were collected from Sungai Sarawak Kanan in Bau and Sungai Sarawak Kiri in Batu Kitang, respectively. In addition, 6 water samples each were collected from Ranchan Recreational Park and the UNIMAS Lake at Universiti Malaysia Sarawak, Kota Samarahan, respectively. Water physicochemical parameters were also recorded. All samples were concentrated by the iron sulfate flocculation method followed by the sucrose flotation technique. Cryptosporidium and Cyclospora were detected by the modified Ziehl-Neelsen technique. Correlation of the parasites' distribution with the water physicochemical parameters was analysed using bivariate Pearson correlation. Of the 24 total samples of environmental water abstracted by drinking water treatment plants, all (24/24; 100%) were positive for Cryptosporidium, and only 2 samples (2/24; 8.33%) were positive for Cyclospora. Of the 12 total samples of water used for recreational activities, 4 samples (4/12; 33%) were positive for Cryptosporidium, while 2 samples (2/12; 17%) were positive for Cyclospora. Cryptosporidium oocysts were negatively correlated with dissolved oxygen (DO).
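The bivariate Pearson correlation used for the parasite-parameter analysis has a direct closed form; a minimal sketch (variable names are illustrative, and the data below are synthetic, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm * xm).sum() * (ym * ym).sum()))
```

A value near -1, as for oocyst counts against dissolved oxygen here, indicates a strong inverse linear relationship.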
Lange, Daniel; Helck, Andreas; Rominger, Axel; Crispin, Alexander; Meiser, Bruno; Werner, Jens; Fischereder, Michael; Stangl, Manfred; Habicht, Antje
2018-07-01
Renal function of potential living kidney donors is routinely assessed with scintigraphy. Kidney anatomy is evaluated by imaging techniques such as magnetic resonance imaging (MRI). We evaluated whether MRI-based renal volumetry is a good predictor of kidney function pre- and postdonation. We retrospectively analyzed the renal volume (RV) in MRIs of 100 living kidney donors. RV was correlated with the tubular excretion rate (TER) from MAG3 scintigraphy, a measured creatinine clearance (CrCl), and the estimated glomerular filtration rate (eGFR) by the Cockcroft-Gault (CG), CKD-EPI, and modification of diet in renal disease (MDRD) formulas pre- and postdonation during a follow-up of 3 years. RV correlated significantly with the TER (total: r = 0.6735, P < 0.0001). The correlation between RV and renal function was highest for eGFR by CG (r = 0.5595, P < 0.0001), in comparison with CrCl, MDRD-GFR, and CKD-EPI-GFR predonation. RV correlated significantly with CG-GFR postdonation and predicted CG-GFR until 3 years after donation. MRI renal volumetry might be an alternative technique for the evaluation of split renal function and the prediction of renal function postdonation in living kidney donors. © 2018 Steunstichting ESOT.
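The Cockcroft-Gault estimate used above has a simple closed form: estimated creatinine clearance in mL/min is (140 - age) x weight / (72 x serum creatinine), multiplied by 0.85 for women. This is the standard published formula; the parameter names are mine.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula.

    Serum creatinine is in mg/dL; the 0.85 factor applies to female patients.
    """
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl
```

For example, a 40-year-old, 72-kg male with serum creatinine 1.0 mg/dL gets an estimate of 100 mL/min.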
NASA Astrophysics Data System (ADS)
Wu, Ya-Ting; Wong, Wai-Ki; Leung, Shu-Hung; Zhu, Yue-Sheng
This paper presents the performance analysis of a De-correlated Modified Code Tracking Loop (D-MCTL) for synchronous direct-sequence code-division multiple-access (DS-CDMA) systems in a multiuser environment. Previous studies have shown that the imbalance of multiple access interference (MAI) in the time-lead and time-lag portions of the signal causes tracking bias or instability in traditional correlating tracking loops like the delay lock loop (DLL) or the modified code tracking loop (MCTL). In this paper, we exploit the de-correlating technique to combat the MAI at the on-time code position of the MCTL. Unlike applying the same technique to the DLL, which requires an extensive search algorithm to compensate for the noise imbalance and may introduce a small tracking bias under low signal-to-noise ratio (SNR), the proposed D-MCTL has much lower computational complexity and exhibits zero tracking bias over the whole range of SNR, regardless of the number of interfering users. Furthermore, performance analysis and simulations based on Gold codes show that the proposed scheme has better mean square tracking error, mean-time-to-lose-lock and near-far resistance than the other tracking schemes, including traditional DLL (T-DLL), traditional MCTL (T-MCTL) and modified de-correlated DLL (MD-DLL).
SC-GRAPPA: Self-constraint noniterative GRAPPA reconstruction with closed-form solution.
Ding, Yu; Xue, Hui; Ahmad, Rizwan; Ting, Samuel T; Simonetti, Orlando P
2012-12-01
Parallel MRI (pMRI) reconstruction techniques are commonly used to reduce scan time by undersampling the k-space data. GRAPPA, a k-space based pMRI technique, is widely used clinically because of its robustness. In GRAPPA, the missing k-space data are estimated by solving a set of linear equations; however, this set of equations does not take advantage of the correlations within the missing k-space data. All k-space data in a neighborhood acquired from a phased-array coil are correlated. The correlation can be estimated easily as a self-constraint condition, and formulated as an extra set of linear equations to improve the performance of GRAPPA. The authors propose a modified k-space based pMRI technique called self-constraint GRAPPA (SC-GRAPPA) which combines the linear equations of GRAPPA with these extra equations to solve for the missing k-space data. Since SC-GRAPPA utilizes a least-squares solution of the linear equations, it has a closed-form solution that does not require an iterative solver. The SC-GRAPPA equation was derived by incorporating GRAPPA as a prior estimate. SC-GRAPPA was tested in a uniform phantom and two normal volunteers. MR real-time cardiac cine images with acceleration rate 5 and 6 were reconstructed using GRAPPA and SC-GRAPPA. SC-GRAPPA showed a significantly lower artifact level, and a greater than 10% overall signal-to-noise ratio (SNR) gain over GRAPPA, with more significant SNR gain observed in low-SNR regions of the images. SC-GRAPPA offers improved pMRI reconstruction, and is expected to benefit clinical imaging applications in the future.
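SC-GRAPPA's closed-form combination of the GRAPPA equations with the extra self-constraint equations is, at its core, a stacked linear least-squares problem. A generic sketch of that operation (the matrix names and scalar weight are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def stacked_least_squares(A1, b1, A2, b2, w=1.0):
    """Closed-form least-squares solution of two stacked linear systems.

    A1 x = b1 plays the role of the GRAPPA equations and A2 x = b2 the
    extra self-constraint set; `w` weights the constraint block. Because
    the solution comes from lstsq, no iterative solver is needed.
    """
    A = np.vstack([A1, w * A2])
    b = np.concatenate([b1, w * b2])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

When both blocks are consistent with the same underlying unknowns, the stacked solve recovers them exactly; with noisy data the second block acts as a regularizing constraint.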
Status in calculating electronic excited states in transition metal oxides from first principles.
Bendavid, Leah Isseroff; Carter, Emily Ann
2014-01-01
Characterization of excitations in transition metal oxides is a crucial step in the development of these materials for photonic and optoelectronic applications. However, many transition metal oxides are considered to be strongly correlated materials, and their complex electronic structure is challenging to model with many established quantum mechanical techniques. We review state-of-the-art first-principles methods to calculate charged and neutral excited states in extended materials, and discuss their application to transition metal oxides. We briefly discuss developments in density functional theory (DFT) to calculate fundamental band gaps, and introduce time-dependent DFT, which can model neutral excitations. Charged excitations can be described within the framework of many-body perturbation theory based on Green's functions techniques, which predominantly employs the GW approximation to the self-energy to facilitate a feasible solution to the quasiparticle equations. We review the various implementations of the GW approximation and evaluate each approach in its calculation of fundamental band gaps of many transition metal oxides. We also briefly review the related Bethe-Salpeter equation (BSE), which introduces an electron-hole interaction between GW-derived quasiparticles to describe accurately neutral excitations. Embedded correlated wavefunction theory is another framework used to model localized neutral or charged excitations in extended materials. Here, the electronic structure of a small cluster is modeled within correlated wavefunction theory, while its coupling to its environment is represented by an embedding potential. We review a number of techniques to represent this background potential, including electrostatic representations and electron density-based methods, and evaluate their application to transition metal oxides.
Digital education and dynamic assessment of tongue diagnosis based on Mashup technique.
Tsai, Chin-Chuan; Lo, Yen-Cheng; Chiang, John Y; Sainbuyan, Natsagdorj
2017-01-24
To assess digital education and dynamic assessment of tongue diagnosis based on the Mashup technique (DEDATD), according to a specific user's answering pattern, and to provide pertinent information tailored to the user's specific needs, supplemented by teaching materials constantly updated through the Mashup technique. Fifty-four undergraduate students were tested with the DEDATD developed. The efficacy of the DEDATD was evaluated based on pre- and post-test performance, with interleaving training sessions targeting the weaknesses of the student under test. The t-test demonstrated that a significant difference was reached in scores gained during pre- and post-test sessions, and a positive correlation between scores gained and length of time spent on learning, while there were no significant differences between gender and post-test score, or between the students' year in school and the progress in score gained. The DEDATD, coupled with the Mashup technique, could provide updated materials filtered from diverse sources located across the network. The dynamic assessment could address each individual learner's needs by offering custom-made learning materials. The DEDATD represents a considerable improvement over traditional teaching methods.
NASA Astrophysics Data System (ADS)
Ren, Silin; Jin, Xiao; Chan, Chung; Jian, Yiqiang; Mulnix, Tim; Liu, Chi; Carson, Richard E
2017-06-01
Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies, without the help of external motion tracking systems. Due to the greatly increased image noise in gated reconstructions, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the Centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data, and investigated its performance by comparing with an external system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motions in superior-inferior (SI) and anterior-posterior (AP) directions were first determined by computing the centroid of all radioactive events during each short time frame with further processing. The Anzai belt system was employed to record respiratory motion in all studies. COD traces in both SI and AP directions were first compared with Anzai traces by computing the Pearson correlation coefficients. Then, respiratory gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of displacements of organ locations in all directions and COD information, continuous 3D internal organ motion in SI and AP directions was calculated based on COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with that based on Anzai, and without motion correction. Data-driven COD traces showed a good correlation with Anzai in both SI and AP directions for the majority of studies, with correlation coefficients ranging from 63% to 89%. 
Based on the determined respiratory displacements of pancreas between end-expiration and end-inspiration from gated reconstructions, there was no significant difference between COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded comparable results to that based on Anzai respiratory traces, in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image quality improvement compared with reconstructions with no motion correction, and gave comparable results to the Anzai-based method.
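The centroid-of-distribution (COD) surrogate described above reduces, per time frame, to the mean position of the radioactive events in that frame. A minimal sketch, assuming the list-mode events have already been reduced to per-event (SI, AP) coordinates and assigned to short frames (the array layout is an assumption, not the MOLAR format):

```python
import numpy as np

def centroid_of_distribution(events, frame_ids, n_frames):
    """Per-frame centroid (COD) of list-mode event coordinates.

    `events` is an (N, 2) array of (SI, AP) positions and `frame_ids`
    assigns each event to a short time frame; the returned (n_frames, 2)
    trace serves as a data-driven respiratory surrogate.
    """
    cod = np.zeros((n_frames, 2))
    for f in range(n_frames):
        sel = events[frame_ids == f]
        cod[f] = sel.mean(axis=0)
    return cod
```

Because each frame averages many events, the centroid trace is far less noisy than any individual event position and tracks a periodic displacement of the activity distribution well.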
NASA Technical Reports Server (NTRS)
Clem, Michelle M.; Woike, Mark; Abdul-Aziz, Ali
2013-01-01
The Aeronautical Sciences Project under NASA's Fundamental Aeronautics Program is extremely interested in the development of fault detection technologies, such as optical surface measurements in the internal parts of a flow path, for in situ health monitoring of gas turbine engines. In situ health monitoring has the potential to detect flaws, i.e., cracks in key components such as engine turbine disks, before the flaws lead to catastrophic failure. In the present study, a cross-correlation imaging technique is investigated in a proof-of-concept study as a possible optical technique to measure the radial growth and strain field on an already cracked sub-scale turbine engine disk under loaded conditions in the NASA Glenn Research Center's High Precision Rotordynamics Laboratory. The optical strain measurement technique under investigation offers potential fault detection using an applied background consisting of a high-contrast random speckle pattern, imaged under unloaded and loaded conditions with a CCD camera. Spinning the cracked disk at high speeds induces an external load, resulting in a radial growth of the disk of approximately 50.8 μm in the flawed region and, hence, a localized strain field. When imaged under static conditions, the disk will appear shifted. The resulting background displacements between the two images are then measured using the two-dimensional cross-correlation algorithms implemented in standard Particle Image Velocimetry (PIV) software to track the disk growth, which facilitates calculation of the localized strain field. In order to develop and validate this optical strain measurement technique, an initial proof-of-concept experiment is carried out in a controlled environment. Using PIV optimization principles and guidelines, three potential backgrounds, for future use on the rotating disk, are developed and investigated in the controlled experiment.
A range of known shifts is induced on the backgrounds; reference and data images are acquired before and after the induced shift, respectively, and the images are processed using the cross-correlation algorithms in order to determine the background displacements. The effectiveness of each background at resolving the known shift is evaluated and discussed in order to choose the most suitable background to be implemented on a rotating disk in the Rotordynamics Lab. Although testing on the rotating disk has not yet been performed, the driving principles behind the development of the present optical technique are based upon critical aspects of the future experiment, such as the amount of expected radial growth, disk analysis, and experimental design, and are therefore addressed in the paper.
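The core displacement measurement, cross-correlating a reference image against a shifted image and locating the correlation peak, can be sketched with an FFT. This is a generic integer-pixel version (real PIV software interrogates many subwindows and refines the peak to sub-pixel precision); the function name is illustrative:

```python
import numpy as np

def integer_shift_xcorr(ref, moved):
    """Recover the integer (dy, dx) translation of `moved` relative to `ref`.

    Computes the circular cross-correlation via the FFT and takes the
    location of its peak, wrapping indices past the half-size back to
    negative shifts.
    """
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

On a high-contrast random speckle pattern the correlation peak is sharp and unique, which is exactly why such backgrounds are favored for this kind of tracking.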
Campos, C F F; Paiva, D D; Perazzo, H; Moreira, P S; Areco, L F F; Terra, C; Perez, R; Figueiredo, F A F
2014-03-01
Hepatic fibrosis staging is based on semiquantitative scores. Digital image analysis (DIA) appears more accurate because fibrosis is quantified on a continuous scale. However, high cost, lack of standardization and worldwide unavailability restrict its use in clinical practice. We developed an inexpensive and widely available DIA technique for fibrosis quantification in hepatitis C, and here we evaluate its reproducibility and correlation with semiquantitative scores, and determine the fibrosis percentages associated with septal fibrosis and cirrhosis. 282 needle biopsies staged by the Ishak and METAVIR scores were included. Images of trichrome-stained sections were captured and processed using Adobe(®) Photoshop(®) CS3 and Adobe(®) Bridge(®) software. The percentage of fibrosis (fibrosis index) was determined by the ratio between the fibrosis area and the total sample area, expressed in pixels and calculated in an automated way. An excellent correlation between the DIA fibrosis index and the Ishak and METAVIR scores was observed (Spearman's r = 0.95 and 0.92; P < 0.001, respectively). Excellent intra-observer reproducibility was observed in a randomly chosen subset of 39 biopsies, with an intraclass correlation index of 0.99 (95% CI, 0.95-0.99). The best cut-offs associated with septal fibrosis and cirrhosis were 6% (AUROC 0.97; 95% CI, 0.95-0.99) and 27% (AUROC 1.0; 95% CI, 0.99-1), respectively. This new DIA technique had a high correlation with semiquantitative scores in hepatitis C. The method is reproducible, inexpensive and available worldwide, allowing its use in clinical practice. The incorporation of the DIA technique provides a more complete evaluation of fibrosis, adding quantification to architectural patterns. © 2013 John Wiley & Sons Ltd.
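The fibrosis index described, the fibrosis area over the total sample area in pixels, reduces to a mask ratio once the stained regions have been segmented. A sketch assuming binary masks (the mask names are illustrative; the segmentation step itself is not shown):

```python
import numpy as np

def fibrosis_index(fibrosis_mask, tissue_mask):
    """Percentage of fibrosis: fibrosis-positive pixels over total tissue pixels.

    Both inputs are boolean arrays of the same shape; fibrosis pixels are
    only counted where they lie within the tissue sample.
    """
    fib = np.count_nonzero(fibrosis_mask & tissue_mask)
    tot = np.count_nonzero(tissue_mask)
    return 100.0 * fib / tot
```

Against the cut-offs reported above, an index over 6% would suggest septal fibrosis and over 27% cirrhosis.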
Chen, Y. M.; Lin, P.; He, Y.; He, J. Q.; Zhang, J.; Li, X. L.
2016-01-01
A novel strategy based on near-infrared hyperspectral imaging techniques and chemometrics was explored for rapidly quantifying the collision strength index of ethylene-vinyl acetate copolymer (EVAC) coverings in the field. The reflectance spectral data of the EVAC coverings were obtained using a near-infrared hyperspectral imager. Collision analysis equipment was employed to measure the collision intensity of the EVAC materials. Preprocessing algorithms were first applied before calibration. The random frog and successive projection (SP) algorithms were applied to extract the fingerprint wavebands. A correlation model between the significant spectral curves, which reflect the cross-linking attributes of the inner organic molecules, and the degree of collision strength was set up using the support vector machine regression (SVMR) approach. The SP-SVMR model attained a residual predictive deviation of 3.074, squared correlation coefficients of 93.48% and 93.05%, and root mean square errors of 1.963 and 2.091 for the calibration and validation sets, respectively, exhibiting the best forecast performance. The results indicated that integrating near-infrared hyperspectral imaging techniques with chemometrics can be used to rapidly determine the degree of collision strength of EVAC.
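The residual predictive deviation (RPD) reported for the SP-SVMR model is a standard chemometrics figure of merit: the standard deviation of the reference values divided by the root mean square error of prediction. A small sketch (the function name is mine):

```python
import numpy as np

def rpd(y_true, y_pred):
    """Residual predictive deviation: SD of reference values over RMSEP.

    Values around 3, as reported for the SP-SVMR model above, are usually
    taken to indicate a model usable for quantitative prediction.
    """
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return y_true.std(ddof=1) / rmsep
```

Because the RPD normalizes the prediction error by the spread of the reference data, it is comparable across properties measured on different scales.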
Nurses' knowledge of inhaler technique in the inpatient hospital setting.
De Tratto, Katie; Gomez, Christy; Ryan, Catherine J; Bracken, Nina; Steffen, Alana; Corbridge, Susan J
2014-01-01
High rates of inhaler misuse in patients with chronic obstructive pulmonary disease and asthma contribute to hospital readmissions and increased healthcare costs. The purpose of this study was to examine inpatient staff nurses' self-perception of their knowledge of proper inhaler technique compared with demonstrated technique, and the frequency with which they provide patients with inhaler technique teaching during hospitalization and at discharge. This was a prospective, descriptive study conducted at a 495-bed urban academic medical center in the Midwest United States, with a convenience sample of 100 nurses working on inpatient medical units. Participants completed a 5-item, 4-point Likert-scale survey evaluating self-perception of inhaler technique knowledge, frequency of providing patient education, and responsibility for providing education. Participants demonstrated inhaler technique to the investigators using both a metered dose inhaler (MDI) and a Diskus device inhaler, and performance was measured via a validated checklist. Overall misuse rates were high for both MDI and Diskus devices. There was poor correlation between perceived ability and investigator-measured performance of inhaler technique. Frequency of education during hospitalization and at discharge was related to measured level of performance for the Diskus device but not for the MDI. Nurses are a key component of patient education in the hospital; however, nursing staff lack adequate knowledge of inhaler technique. Identifying gaps in nursing knowledge regarding proper inhaler technique and patient education about proper inhaler technique is important to design interventions that may positively impact patient outcomes. Interventions could include one-on-one education, Web-based education, unit-based education, or hospital-wide competency-based education. All should include return demonstration of appropriate technique.
Operating Spin Echo in the Quantum Regime for an Atomic-Ensemble Quantum Memory
NASA Astrophysics Data System (ADS)
Rui, Jun; Jiang, Yan; Yang, Sheng-Jun; Zhao, Bo; Bao, Xiao-Hui; Pan, Jian-Wei
2015-09-01
Spin echo is a powerful technique for extending atomic or nuclear coherence times by overcoming dephasing due to inhomogeneous broadening. However, the feasibility of applying this technique to an ensemble-based quantum memory at the single-quanta level has been disputed. In this experimental study, we find that noise due to imperfections of the rephasing pulses has both an intense superradiant part and a weak isotropic part. By properly arranging the beam directions and optimizing the pulse fidelities, we successfully operate the spin echo technique in the quantum regime, observing nonclassical photon-photon correlations as well as quantum behavior of the retrieved photons. Our work demonstrates, for the first time, the feasibility of harnessing the spin echo method to extend the lifetime of ensemble-based quantum memories at the single-quanta level.
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
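The inverted-threshold variant mentioned above can be sketched with a standard 4 × 4 Bayer dither. The matrix, the mid-gray input and the two-channel setup are illustrative assumptions, not the paper's calibrated phosphor model; the point is that inverting the threshold matrix for one channel drives the quantization errors toward negative correlation:

```python
import numpy as np

# Standard 4x4 Bayer ordered-dither threshold matrix, normalized to [0, 1).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither(channel, thresholds):
    """Binarize a [0,1] channel against a tiled threshold matrix."""
    h, w = channel.shape
    th, tw = thresholds.shape
    tiled = np.tile(thresholds, (h // th + 1, w // tw + 1))[:h, :w]
    return (channel > tiled).astype(float)

img = np.full((8, 8), 0.5)                 # mid-gray in two color channels
chan_a = ordered_dither(img, BAYER4)       # normal thresholds
chan_b = ordered_dither(img, 1.0 - BAYER4) # inverted thresholds for one channel

err_a, err_b = chan_a - img, chan_b - img
corr = np.corrcoef(err_a.ravel(), err_b.ravel())[0, 1]
print(corr)   # strongly negative: the channels' errors anticorrelate
```

With anticorrelated per-channel errors, the summed (luminance-like) error partially cancels while the chromatic error grows, which is the trade-off the abstract describes.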
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were performed with simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we are the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
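The multiple-linear-regression prediction at the core of the proposal can be sketched as follows: one sensed quantity is predicted from others gathered at the same node, rather than from time alone. The sensor quantities, coefficients and noise level below are invented for illustration:

```python
import numpy as np

# Synthetic "sensor" data: humidity depends linearly on temperature and light
# plus noise. Coefficients (80, -1.5, 0.01) are arbitrary illustrative values.
rng = np.random.default_rng(0)
n = 200
temp = 20 + 5 * rng.random(n)
light = 300 + 100 * rng.random(n)
humidity = 80 - 1.5 * temp + 0.01 * light + rng.normal(0, 0.2, n)

# Multiple linear regression via ordinary least squares.
X = np.column_stack([np.ones(n), temp, light])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, humidity, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - humidity) ** 2))
print(coef, rmse)   # coefficients recovered near (80, -1.5, 0.01); RMSE near noise level
```

A node could transmit only readings whose predicted value deviates beyond a threshold, which is the energy-saving idea the abstract describes.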
Engineering solar cells based on correlative X-ray microscopy
Stuckelberger, Michael; West, Bradley; Nietzold, Tara; ...
2017-05-01
In situ and operando measurement techniques combined with nanoscale resolution have proven invaluable in multiple fields of study. We argue that evaluating device performance as well as material behavior by correlative X-ray microscopy with <100 nm resolution can radically change the approach for optimizing absorbers, interfaces and full devices in solar cell research. Here, we thoroughly discuss the measurement technique of X-ray beam induced current and point out fundamental differences between measurements of wafer-based silicon and thin-film solar cells. Based on reports from recent years, we showcase the potential of X-ray microscopy measurements in combination with in situ and operando approaches throughout the solar cell lifecycle: from the growth of individual layers to performance under operating conditions and degradation mechanisms. Enabled by new developments in synchrotron beamlines, the combination of high spatial resolution with high brilliance and a safe working distance allows for the insertion of measurement equipment that can pave the way for a new class of experiments. Applied to photovoltaics research, we highlight today's opportunities and challenges in the field of nanoscale X-ray microscopy and give an outlook on future developments.
A study of hierarchical structure on South China industrial electricity-consumption correlation
NASA Astrophysics Data System (ADS)
Yao, Can-Zhong; Lin, Ji-Nan; Liu, Xiao-Feng
2016-02-01
Based on industrial electricity-consumption data of five southern provinces of China from 2005 to 2013, we study the industrial correlation mechanism with MST (minimal spanning tree) and HT (hierarchical tree) models. First, we comparatively analyze the industrial electricity-consumption correlation structure in the pre-crisis and after-crisis periods using the MST model and a Bootstrap technique for statistical reliability testing of links. The results show that the industrial electricity-consumption trees of all five southern provinces in both periods are chain-like, and that the "center-periphery structure" of these chain-like trees is consistent with industrial specialization in classical industrial chain theory. Additionally, the industrial structure of some provinces was reorganized and transferred between the pre-crisis and after-crisis periods. Further, comparative analysis with hierarchical trees and the Bootstrap technique demonstrates that, for both GD and overall NF, the industrial electricity-consumption correlation is non-significantly clustered in the pre-crisis period but becomes significantly clustered in the after-crisis period. We therefore propose that, from the perspective of electricity consumption, these industrial structures are moving toward optimized organization and global correlation. Finally, analysis of the distance between HTs verifies that industrial reorganization and development may strengthen market integration, coordination and correlation of industrial production. Except for GZ, the four other provinces show a shorter distance of industrial electricity-consumption correlation in the after-crisis period, revealing better regional specialization and integration.
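The MST construction over correlation distances can be sketched as follows, using the Mantegna distance d = sqrt(2(1 − ρ)) that is standard in such studies. The four synthetic "industry" series below stand in for the electricity-consumption data, which is not reproduced here:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Four synthetic series: two track a common factor closely, two only loosely.
rng = np.random.default_rng(1)
base = rng.normal(size=120)
series = np.array([base + rng.normal(scale=s, size=120)
                   for s in (0.2, 0.3, 1.5, 2.0)])

rho = np.corrcoef(series)          # correlation between "industries"
dist = np.sqrt(2 * (1 - rho))      # Mantegna correlation distance
np.fill_diagonal(dist, 0)          # no self-edges

mst = minimum_spanning_tree(dist)  # keeps the n-1 strongest (shortest) links
edges = np.transpose(mst.nonzero())
print(edges, mst.sum())            # retained edges and total tree length
```

For n series the MST keeps exactly n − 1 edges; the shape of the tree (chain-like versus star-like) is what the abstract interprets economically.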
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, S.; Labanca, I.; Rech, I.
2014-10-15
Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis, which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA-based correlator able to perform 32 auto/cross-correlations simultaneously over a lag time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high-throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.
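What such a correlator computes in hardware is, in essence, the normalized intensity autocorrelation g2(tau) of a photon-count trace. A minimal software sketch (the multi-tau binning scheme of real hardware correlators is omitted, and the trace and lag range are illustrative):

```python
import numpy as np

def autocorrelation(counts, max_lag):
    """Normalized intensity autocorrelation g2(k) = <n(t) n(t+k)> / <n>^2."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    n = len(counts)
    return np.array([np.mean(counts[:n - k] * counts[k:]) / mean**2
                     for k in range(1, max_lag + 1)])

# Uncorrelated shot noise: g2 should be ~1 at every nonzero lag.
rng = np.random.default_rng(2)
trace = rng.poisson(5, size=10000)
g2 = autocorrelation(trace, 10)
print(g2)
```

For diffusing fluorophores, g2 would instead decay from a value above 1 toward 1, and the decay time carries the diffusion information; afterpulsing and crosstalk distort exactly this short-lag region, which is why the abstract investigates them.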
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Novel techniques for quantification of correlation between primary liquid jet breakup and downstream spray characteristics
AFRL-AFOSR-JP-TR-2016-0084
2016-05-08
Several liquid-fuelled combustion systems, such as liquid propellant rocket engines and gas turbines, ...
Opto-electronic characterization of third-generation solar cells.
Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat
2018-01-01
We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.
Linewidth measurements of tunable diode lasers using heterodyne and etalon techniques
NASA Technical Reports Server (NTRS)
Reid, J.; Cassidy, D. T.; Menzies, R. T.
1982-01-01
Measurements of the linewidths of Pb-salt diode lasers operating in the 8- and 9-micron region are reported. The linewidths of the 9-micron lasers were determined by conventional heterodyne techniques, while for the 8-micron lasers a new technique based on a Fabry-Perot etalon was used. The new technique avoids the complexity and limited wavelength range of the heterodyne measurements and can be used for any tunable laser. The linewidths observed varied from 0.6 to more than 500-MHz FWHM. The linewidth was found to vary dramatically from device to device, to depend strongly on junction temperature and injection current, and to be correlated with vibrations caused by operation of a closed-cycle refrigerator.
Biostatistics Series Module 6: Correlation and Linear Regression.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
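The quantities defined above are linked: the least-squares slope is b = r·(s_y/s_x) and the intercept is a = ȳ − b·x̄. A short numerical check with invented data:

```python
import numpy as np

# Small invented data set for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

r = np.corrcoef(x, y)[0, 1]             # Pearson's correlation coefficient
b = r * y.std(ddof=1) / x.std(ddof=1)   # slope: b = r * (s_y / s_x)
a = y.mean() - b * x.mean()             # intercept: a = mean(y) - b * mean(x)
print(r, a, b)                          # slope 1.95, intercept 0.15, r ~ 0.999
```

The same slope follows from b = S_xy / S_xx, and r² here (~0.998) is the coefficient of determination described in the text.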
NASA Technical Reports Server (NTRS)
Zimmerli, Gregory A.; Goldburg, Walter I.
2002-01-01
A novel technique for characterizing turbulent flows was developed and tested at the NASA Glenn Research Center. The work is being done in collaboration with the University of Pittsburgh, through a grant from the NASA Microgravity Fluid Physics Program. The technique we are using, Homodyne Correlation Spectroscopy (HCS), is a laser-light-scattering technique that measures the Doppler frequency shift of light scattered from microscopic particles in the fluid flow. Whereas Laser Doppler Velocimetry gives a local (single-point) measurement of the fluid velocity, the HCS technique measures correlations between fluid velocities at two separate points in the flow at the same instant of time. Velocity correlations in the flow field are of fundamental interest to turbulence researchers and are of practical importance in many engineering applications, such as aeronautics.
A method to classify schizophrenia using inter-task spatial correlations of functional brain images.
Michael, Andrew M; Calhoun, Vince D; Andreasen, Nancy C; Baum, Stefi A
2008-01-01
The clinical heterogeneity of schizophrenia (scz) and the overlap of self-reported and observed symptoms with other mental disorders make its diagnosis a difficult task. At present no laboratory-based or image-based diagnostic tool for scz exists, and such tools are desired to support existing methods for more precise diagnosis. Functional magnetic resonance imaging (fMRI) is currently employed to identify and correlate cognitive processes related to scz and its symptoms. Fusion of multiple fMRI tasks that probe different cognitive processes may help to better understand hidden networks of this complex disorder. In this paper we utilize three different fMRI tasks and introduce an approach to classify subjects based on inter-task spatial correlations of brain activation. The technique was applied to groups of patients and controls, and its validity was checked with the leave-one-out method. We show that the classification rate increases when information from multiple tasks is combined.
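A hedged sketch of the idea: each subject is summarized by the pairwise spatial correlations between three task activation maps, and a classifier is validated leave-one-out. The synthetic "subjects" and the nearest-class-mean classifier are illustrative stand-ins, not the paper's exact method:

```python
import numpy as np

def inter_task_features(maps):
    """maps: (3, n_voxels) activation maps -> 3 pairwise spatial correlations."""
    c = np.corrcoef(maps)
    return np.array([c[0, 1], c[0, 2], c[1, 2]])

def loo_nearest_mean(features, labels):
    """Leave-one-out accuracy of a nearest-class-mean classifier."""
    hits = 0
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i
        dists = {cls: np.linalg.norm(
                     features[i] - features[train][labels[train] == cls].mean(axis=0))
                 for cls in np.unique(labels[train])}
        hits += min(dists, key=dists.get) == labels[i]
    return hits / len(labels)

# Two synthetic groups whose task maps share a common component to
# different degrees, giving distinct inter-task correlation profiles.
rng = np.random.default_rng(3)

def subject(noise):
    shared = rng.normal(size=500)       # activation shared across tasks
    return np.array([shared + rng.normal(scale=noise, size=500) for _ in range(3)])

feats = np.array([inter_task_features(subject(0.5)) for _ in range(10)] +
                 [inter_task_features(subject(3.0)) for _ in range(10)])
labels = np.array([0] * 10 + [1] * 10)
acc = loo_nearest_mean(feats, labels)
print(acc)   # high accuracy on this well-separated toy data
```

The feature vector is deliberately tiny (three numbers per subject), which is what makes leave-one-out validation with small cohorts feasible.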
Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data
NASA Astrophysics Data System (ADS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masui, K. W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; Yadav, J.
2017-02-01
We present the first application of a new foreground removal pipeline to the current leading H I intensity mapping data set, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h-field data of the GBT observations previously presented in Masui et al. and Switzer et al., covering about 41 deg2 at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (FASTICA), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that FASTICA is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which FASTICA, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine GBT-WiggleZ cross-correlation measurements similar to those obtained by the singular value decomposition (SVD) method, and confirm that foreground subtraction with FASTICA is robust against 21 cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and FASTICA are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping data sets.
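The ICA step can be illustrated on toy data: a bright, non-Gaussian "foreground" mixed smoothly across frequency channels is identified and subtracted, leaving a faint residual. This uses scikit-learn's FastICA rather than the pipeline's own FASTICA implementation, and all amplitudes and shapes are invented:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_freq, n_pix = 32, 2000
foreground = np.exp(rng.normal(size=n_pix))           # bright, highly non-Gaussian
spectrum = np.linspace(1.0, 2.0, n_freq)[:, None]     # smooth frequency dependence
signal = 0.01 * rng.normal(size=(n_freq, n_pix))      # faint signal of interest
maps = spectrum * foreground[None, :] + signal        # (n_freq, n_pix) data cube

# One ICA component suffices here because the toy foreground spans a
# single spectral mode; real pipelines fit and remove several modes.
ica = FastICA(n_components=1, random_state=0)
sources = ica.fit_transform(maps.T)                   # per-pixel foreground amplitude
cleaned = (maps.T - ica.inverse_transform(sources)).T # subtract reconstruction
print(maps.std(), cleaned.std())                      # residual far below foreground level
```

The non-Gaussianity of the foreground is what ICA exploits, matching the abstract's point that FASTICA separates diffuse and point-source emission through their probability distributions while leaving Gaussian instrumental noise untouched.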
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1980-01-01
A technique for fitting a straight line to a collection of data points is presented, along with the relationships between the slopes and correlation coefficients, and between the corresponding standard deviations and the correlation coefficient.
Di Lascio, Nicole; Bruno, Rosa Maria; Stea, Francesco; Bianchini, Elisabetta; Gemignani, Vincenzo; Ghiadoni, Lorenzo; Faita, Francesco
2014-01-01
Carotid pulse wave velocity (PWV) is considered a surrogate marker for carotid stiffness, and its assessment is increasingly being used in clinical practice. However, at present its estimation needs specific equipment and a moderate level of technical expertise; moreover, it is based on a mathematical model. The aim of this study was to validate a new system for non-invasive and model-free carotid PWV assessment based on accelerometric sensors by comparison with currently used techniques. Accelerometric PWV (accPWV) values were obtained in 97 volunteers free of cardiovascular disease (age 24-85 years) and compared with standard ultrasound-based carotid stiffness parameters, such as carotid PWV (cPWV), relative distension (relD) and distensibility coefficient (DC). Moreover, the comparison between accPWV measurements and carotid-femoral PWV (cfPWV) was performed. Accelerometric PWV evaluations showed a significant correlation with cPWV measurements (R = 0.67), relD values (R = 0.66) and DC assessments (R = 0.64). These values were also significantly correlated with cfPWV evaluations (R = 0.46). In addition, the first-attempt success rate was 76.8%. The accelerometric system allows a simple and quick local carotid stiffness evaluation, and the values obtained with this system are significantly correlated with known carotid stiffness biomarkers. Therefore, the presented device could provide a concrete opportunity for an easy carotid stiffness evaluation even in clinical practice.
NASA Astrophysics Data System (ADS)
Outerbridge, Gregory John, II
Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped, optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.
Respiration-induced movement correlation for synchronous noninvasive renal cancer surgery.
Abhilash, Rakkunedeth H; Chauhan, Sunita
2012-07-01
Noninvasive surgery (NIS), such as high-intensity focused ultrasound (HIFU)-based ablation or radiosurgery, is used for treating tumors and cancers in various parts of the body. The soft tissue targets (usually organs) deform and move as a result of physiological processes such as respiration. Moreover, other deformations induced during surgery by changes in patient position, changes in physical properties caused by repeated exposures and uncertainties resulting from cavitation also occur. In this paper, we present a correlation-based movement prediction technique to address respiration-induced movement of the urological organs while targeting through extracorporeal trans-abdominal route access. Among other organs, kidneys are worst affected during respiratory cycles, with significant three-dimensional displacements observed on the order of 20 mm. Remote access to renal targets such as renal carcinomas and cysts during noninvasive surgery, therefore, requires a tightly controlled real-time motion tracking and quantitative estimate for compensation routine to synchronize the energy source(s) for precise energy delivery to the intended regions. The correlation model finds a mapping between the movement patterns of external skin markers placed on the abdominal access window and the internal movement of the targeted kidney. The coarse estimate of position is then fine-tuned using the Adaptive Neuro-Fuzzy Inference System (ANFIS), thereby achieving a nonlinear mapping. The technical issues involved in this tracking scheme are threefold: the model must have sufficient accuracy in mapping the movement pattern; there must be an image-based tracking scheme to provide the organ position within allowable system latency; and the processing delay resulting from modeling and tracking must be within the achievable prediction horizon to accommodate the latency in the therapeutic delivery system. 
The concept was tested on ultrasound image sequences collected from 20 healthy volunteers. The results indicate that the modeling technique can be practically integrated into an image-guided noninvasive robotic surgical system with an indicative targeting accuracy of more than 94%. A comparative analysis showed the superiority of this technique over conventional linear mapping and model-free blind search techniques.
NASA Astrophysics Data System (ADS)
Tinianov, Brandon D.; Nakagawa, Masami; Muñoz, David R.
2006-02-01
This article describes a novel technique for the measurement of the thermal conductivity of low-density (12-18 kg/m³) fiberglass insulation and other related fibrous insulation materials using a noninvasive acoustic apparatus. The experimental method is an extension of earlier acoustic methods based upon the evaluation of the propagation constant from the acoustic pressure transfer function across the test material. To accomplish this, an analytical model is employed that describes the behavior of sound waves at the outlet of a baffled waveguide. The model accounts for the behavior of the mixed-impedance interface introduced by the test material. Current results show that the technique is stable for a broad range of absorber thicknesses and densities. Experimental results obtained in the laboratory show excellent correlation between the thermal conductivity and both the real and imaginary components of the propagation constant. Correlation of calculated propagation constant magnitude versus measured thermal conductivity gave an R² of 0.94 for the bulk density range (12-18 kg/m³) typical of manufactured fiberglass batt materials. As an improvement over earlier acoustic techniques, measurement is now possible in noisy manufacturing environments with a moving test material. Given the promise of such highly correlated measurements in a robust method, the acoustic technique is well suited to continuously measure the thermal conductivity of the material during its production, replacing current expensive off-line methods. Test cycle time is reduced from hours to seconds.
New technique for real-time distortion-invariant multiobject recognition and classification
NASA Astrophysics Data System (ADS)
Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan
2001-04-01
A real-time hybrid distortion-invariant OPR system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. A wavelet transform technique was used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing ability and multithread programming technology were used to perform high-speed parallel multitask processing and speed up the post-processing rate for ROIs. The reference filter library was constructed for distorted versions of the 3D object model images based on distortion parameter tolerance measures such as rotation, azimuth and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing and post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique and multithread programming technology, a high probability of recognition and a high recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.
Soncin, Rafael; Mezêncio, Bruno; Ferreira, Jacielle Carolina; Rodrigues, Sara Andrade; Huebner, Rudolf; Serrão, Julio Cerca; Szmuchrowski, Leszek
2017-06-01
The aim of this study was to propose a new force parameter, associated with swimmers' technique and performance. Twelve swimmers performed five repetitions of 25 m sprint crawl and a tethered swimming test with maximal effort. The parameters calculated were: the mean swimming velocity for crawl sprint, the mean propulsive force of the tethered swimming test as well as an oscillation parameter calculated from force fluctuation. The oscillation parameter evaluates the force variation around the mean force during the tethered test as a measure of swimming technique. Two parameters showed significant correlations with swimming velocity: the mean force during the tethered swimming (r = 0.85) and the product of the mean force square root and the oscillation (r = 0.86). However, the intercept coefficient was significantly different from zero only for the mean force, suggesting that although the correlation coefficient of the parameters was similar, part of the mean velocity magnitude that was not associated with the mean force was associated with the product of the mean force square root and the oscillation. Thus, force fluctuation during tethered swimming can be used as a quantitative index of swimmers' technique.
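The oscillation idea above can be sketched numerically. The exact definition of the paper's parameter is not reproduced here; the stand-in below uses the RMS deviation of force around its mean, normalized by the mean, on a hypothetical tethered-swimming force trace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tethered-swimming force trace (N) at 100 Hz: mean propulsive
# force plus stroke-by-stroke fluctuation; amplitudes are illustrative.
t = np.arange(0, 30, 0.01)
force = 120.0 + 35.0 * np.sin(2 * np.pi * 0.7 * t) + rng.normal(0, 5.0, t.size)

mean_force = force.mean()
# Stand-in oscillation measure: normalized RMS fluctuation around the mean
oscillation = np.sqrt(np.mean((force - mean_force) ** 2)) / mean_force

# Composite index discussed in the abstract: sqrt(mean force) times oscillation
index = np.sqrt(mean_force) * oscillation
print(f"mean force = {mean_force:.1f} N, oscillation = {oscillation:.3f}")
```

The abstract's point is that this composite index, unlike mean force alone, regresses on velocity with an intercept near zero.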
Detecting Spatial Patterns in Biological Array Experiments
ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.
2005-01-01
Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
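The detection idea can be sketched with a 2-D discrete Fourier transform on a simulated 384-well (16 x 24) plate. The alternating-column error below (e.g. a miscalibrated dispense head) and its magnitude are hypothetical; this is not the authors' software.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 384-well plate: random background plus a hypothetical
# systematic error on every other column.
plate = rng.normal(0.0, 1.0, (16, 24))
plate += 2.0 * (np.arange(24) % 2)          # stripe on odd columns

# 2-D DFT amplitude spectrum; remove the DC term so only spatial
# structure remains.
spec = np.abs(np.fft.fft2(plate - plate.mean()))
spec[0, 0] = 0.0

# A period-2 column pattern concentrates power at column frequency 12
# (the Nyquist bin along the 24-well axis), far above the noise floor.
row_f, col_f = np.unravel_index(np.argmax(spec), spec.shape)
print(f"dominant spatial frequency: row {row_f}, column {col_f}")
```

A statistical test, as in the paper, would then compare the peak amplitude against the distribution expected for a spatially random plate.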
Roles of Engineering Correlations in Hypersonic Entry Boundary Layer Transition Prediction
NASA Technical Reports Server (NTRS)
Campbell, Charles H.; Anderson, Brian P.; King, Rudolph A.; Kegerise, Michael A.; Berry, Scott A.; Horvath, Thomas J.
2010-01-01
Efforts to design and operate hypersonic entry vehicles are constrained by many considerations that involve all aspects of an entry vehicle system. One of the more significant physical phenomena that affect entry trajectory and thermal protection system design is the occurrence of boundary layer transition from a laminar to a turbulent state. During the Space Shuttle Return To Flight activity following the loss of Columbia and her crew of seven, NASA's entry aerothermodynamics community implemented an engineering-correlation-based framework for the prediction of boundary layer transition on the Orbiter. The methodology for this implementation relies upon correlation techniques similar to those that have been in use for several decades. What makes the Orbiter boundary layer transition correlation implementation unique is that a statistically significant data set was acquired in multiple ground test facilities, flight data exist to assist in establishing a better correlation, and the framework was founded upon state-of-the-art chemical nonequilibrium Navier-Stokes flow field simulations. Recent entry flight testing performed with the Orbiter Discovery now provides a means to validate this engineering correlation approach to higher confidence. These results only serve to reinforce the essential role that engineering correlations currently play in the design and operation of entry vehicles. The framework of information related to the Orbiter empirical boundary layer transition prediction capability will be utilized to establish a fresh perspective on this role, and to discuss the characteristics which are desirable in a next-generation advancement. The paper will review the experimental facilities and techniques that were utilized to implement the Orbiter RTF BLT Vsn 2 prediction capability.
Statistically significant results for multiple engineering correlations from a ground testing campaign will be reviewed in order to describe why only certain correlations were selected for complete implementation to support the Shuttle Program. Historical Orbiter flight data on early boundary layer transition due to protruding gap fillers will be described in relation to the selected empirical correlations. In addition, Orbiter entry flight testing results from the BLT Flight Experiment will be discussed in relation to these correlations. Applicability of such correlations to the entry design problem will be reviewed, and finally a perspective on the desirable characteristics for a next generation capability based on high fidelity physical models will be provided.
Myer, Gregory D.; Ford, Kevin R.; Khoury, Jane; Succop, Paul; Hewett, Timothy E.
2012-01-01
Background Prospective measures of high knee abduction moment (KAM) during landing identify female athletes at high risk for anterior cruciate ligament injury. Laboratory-based measurements demonstrate 90% accuracy in prediction of high KAM. Clinic-based prediction algorithms that employ correlates derived from laboratory-based measurements also demonstrate high accuracy for prediction of high KAM mechanics during landing. Hypotheses Clinic-based measures derived from highly predictive laboratory-based models are valid for the accurate prediction of high KAM status, and simultaneous measurements using laboratory-based and clinic-based techniques highly correlate. Study Design Cohort study (diagnosis); Level of evidence, 2. Methods One hundred female athletes (basketball, soccer, volleyball players) were tested using laboratory-based measures to confirm the validity of identified laboratory-based correlate variables to clinic-based measures included in a prediction algorithm to determine high KAM status. To analyze selected clinic-based surrogate predictors, another cohort of 20 female athletes was simultaneously tested with both clinic-based and laboratory-based measures. Results The prediction model (odds ratio: 95% confidence interval), derived from laboratory-based surrogates including (1) knee valgus motion (1.59: 1.17-2.16 cm), (2) knee flexion range of motion (0.94: 0.89°-1.00°), (3) body mass (0.98: 0.94-1.03 kg), (4) tibia length (1.55: 1.20-2.07 cm), and (5) quadriceps-to-hamstrings ratio (1.70: 0.48%-6.0%), predicted high KAM status with 84% sensitivity and 67% specificity (P < .001). Clinic-based techniques that used a calibrated physician’s scale, a standard measuring tape, standard camcorder, ImageJ software, and an isokinetic dynamometer showed high correlation (knee valgus motion, r = .87; knee flexion range of motion, r = .95; and tibia length, r = .98) to simultaneous laboratory-based measurements. 
Body mass and quadriceps-to-hamstrings ratio were included in both methodologies and therefore had r values of 1.0. Conclusion Clinically obtainable measures of increased knee valgus, knee flexion range of motion, body mass, tibia length, and quadriceps-to-hamstrings ratio predict high KAM status in female athletes with high sensitivity and specificity. Female athletes who demonstrate high KAM landing mechanics are at increased risk for anterior cruciate ligament injury and are more likely to benefit from neuromuscular training targeted to this risk factor. Use of the developed clinic-based assessment tool may facilitate high-risk athletes’ entry into appropriate interventions that will have greater potential to reduce their injury risk. PMID:20595554
A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing
NASA Astrophysics Data System (ADS)
Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz
2018-06-01
Today, in the world of information communication, voice information has a particular importance. One way to preserve voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaotic maps, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated based on various measures including signal to noise ratio, peak signal to noise ratio, correlation coefficient, signal similarity and signal frequency content. The results demonstrate the applicability of the proposed method to secure and fast encryption of voice files.
NASA Technical Reports Server (NTRS)
2005-01-01
A new all-electronic Particle Image Velocimetry technique that can efficiently map high speed gas flows has been developed in-house at the NASA Lewis Research Center. Particle Image Velocimetry is an optical technique for measuring the instantaneous two component velocity field across a planar region of a seeded flow field. A pulsed laser light sheet is used to illuminate the seed particles entrained in the flow field at two instances in time. One or more charge-coupled device (CCD) cameras can be used to record the instantaneous positions of particles. Using the time between light sheet pulses and determining either the individual particle displacements or the average displacement of particles over a small subregion of the recorded image enables the calculation of the fluid velocity. Fuzzy logic minimizes the required operator intervention in identifying particles and computing velocity. Using two cameras that have the same view of the illumination plane yields two single-exposure image frames. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple image frame data: (1) cross-correlation and (2) particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. For the correlation technique, the correlation peak corresponding to the average displacement of particles across the subregion must be identified. Noise on the images and particle dropout result in misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements.
In this work, fuzzy logic techniques are used to determine the true correlation displacement peak even when it is not the maximum peak, hence maximizing the information recovery from the correlation operation, maintaining the number of independent measurements, and minimizing the number of spurious velocity vectors. Correlation peaks are correctly identified in both high and low seed density cases. The correlation velocity vector map can then be used as a guide for the particle-tracking operation. Again fuzzy logic techniques are used, this time to identify the correct particle image pairings between exposures to determine particle displacements, and thus the velocity. Combining these two techniques makes use of the higher spatial resolution available from the particle tracking. Particle tracking alone may not be possible in the high seed density images typically required for achieving good results from the correlation technique. This two-staged velocimetric technique can measure particle velocities with high spatial resolution over a broad range of seeding densities.
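The correlation step at the heart of this method can be sketched on a synthetic interrogation window: seed particles in a frame, shift them by a known displacement, and locate the cross-correlation peak. The window size, particle count and displacement below are illustrative assumptions, and no fuzzy-logic peak validation is attempted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 32x32 interrogation window with ~40 particle images
win = 32
frame_a = np.zeros((win, win))
ys, xs = rng.integers(4, win - 4, (2, 40))
frame_a[ys, xs] = 1.0

# Second exposure: every particle moved by a known displacement
dy, dx = 3, 5                               # true displacement (pixels)
frame_b = np.roll(frame_a, (dy, dx), axis=(0, 1))

# FFT-based circular cross-correlation; the peak sits at the mean
# particle displacement across the subregion.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
print(f"estimated displacement: ({peak_y}, {peak_x}) pixels")
```

Dividing the peak displacement by the inter-pulse time gives the subregion's average velocity; the fuzzy-logic stage described above is what rescues cases where this peak is not the global maximum.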
Heddam, Salim
2014-01-01
In this study, we present application of an artificial intelligence (AI) technique model called dynamic evolving neural-fuzzy inference system (DENFIS) based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, a one year period from 1 January 2009 to 30 December 2009, of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) station at Klamath River at Miller Island Boat Ramp, OR, USA, were used for model development. Two DENFIS-based models are presented and compared. The two DENFIS systems are: (1) offline-based system named DENFIS-OF, and (2) online-based system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square errors (RMSE), mean absolute error (MAE), Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural networks, MLPNN) methods. This study demonstrates that DENFIS-ON investigated herein outperforms all the proposed techniques for DO modelling.
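The four skill scores used to rank the models are standard and compact to compute. The observed/modelled dissolved-oxygen values below are made up for illustration, not USGS data.

```python
import numpy as np

# Illustrative observed vs. modelled dissolved-oxygen values (mg/L)
obs = np.array([8.1, 7.9, 8.4, 9.0, 8.7, 8.2])
mod = np.array([8.0, 8.1, 8.3, 8.8, 8.9, 8.1])

rmse = np.sqrt(np.mean((mod - obs) ** 2))          # root mean square error
mae = np.mean(np.abs(mod - obs))                   # mean absolute error
# Willmott index of agreement d (1 = perfect agreement)
d = 1.0 - np.sum((mod - obs) ** 2) / np.sum(
    (np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2
)
cc = np.corrcoef(obs, mod)[0, 1]                   # correlation coefficient
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  d={d:.3f}  CC={cc:.3f}")
```

Model selection in the study follows exactly this pattern: the variant with the lowest RMSE and highest CC (here, DENFIS-ON) wins.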
Tipu, Hamid Nawaz; Bashir, Muhammad Mukarram; Noman, Muhammad
2016-10-01
Serology and DNA techniques are employed for Human Leukocyte Antigen (HLA) typing in different transplant centers. Results may not always correlate well and may need retyping with a different technique. All patients (with aplastic anemia, thalassemia, or immunodeficiency) and their donors requiring HLA typing for bone marrow transplant were enrolled in the study. Serological HLA typing was done by complement-dependent lymphocytotoxicity, while DNA-based typing was done with sequence-specific primers (SSP). Serology identified 167 HLA A and 165 HLA B antigens, while SSP in the same samples identified 181 HLA A and 184 HLA B alleles. A11 and B51 were the commonest antigens/alleles by both methods. There were a total of 21 misreads and 32 dropouts on serology for both HLA A and B loci, with HLA A32, B52 and B61 being the most ambiguous antigens. Inherent limitations of serological techniques warrant careful interpretation or the use of DNA-based methods for resolution of ambiguous typing.
Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin
2017-11-01
Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D) ] based on a large ensemble of CFB trackers. The ensemble [of size O(2 D ) ] is organized over a binary tree (depth D ), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique--a modified version of the Monte Carlo method--is briefly described. The uncertainty analysis of the flood stage upstream from a bridge starts with a description of the hydraulic model. This model concept is based on the HEC-RAS model, developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section contains the characteristics of the basic variables, including a specification of their statistics (means and variances). Next, the problem of correlated variables is discussed and assumptions concerning correlation among the basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
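The LHS idea itself is simple to sketch: split each variable's range into n equal-probability strata, place one sample in each stratum, and permute the strata independently per variable. The code below is a generic sketch, not the UNCSAM package, and the Manning roughness range is a hypothetical basic variable.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """Draw n samples in k dimensions on the unit cube; each variable's
    [0, 1) range is split into n equal strata, each hit exactly once."""
    samples = np.empty((n, k))
    for j in range(k):
        jittered = (np.arange(n) + rng.random(n)) / n   # one point per stratum
        samples[:, j] = rng.permutation(jittered)       # decouple the variables
    return samples

rng = np.random.default_rng(4)
pts = latin_hypercube(10, 2, rng)

# Map the unit-cube design onto physical ranges, e.g. a hypothetical
# Manning roughness coefficient in [0.02, 0.05]
roughness = 0.02 + pts[:, 0] * 0.03
print(roughness.round(4))
```

Compared with plain Monte Carlo, the stratification guarantees the tails of each input distribution are sampled even with few runs, which is why LHS suits expensive hydraulic models.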
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images, with a small training set generated from less than 10% of the population. The approach is based on histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest, depicting the left and right ventricles, that exhibits the greatest extent of cardiac motion. This region is then correlated, using feature vector correlation techniques, with the homologous region of the training dataset that best matches the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image.
The proposed approach was tested on a population of 100 patient datasets and was validated against ground truth regions of interest manually annotated by experts on the test images. The tool successfully identified a mask around the LV and RV and, furthermore, the minimal region of interest around the LV that fully enclosed the left ventricle in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 +/- 0.09.
Lian, Yanyun; Song, Zhijian
2014-01-01
Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods are sufficiently robust, reliable and efficient for clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible, objective results close to manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, vertical and then horizontal sliding windows were applied: two windows, one in the left and one in the right part of the brain image, moved simultaneously pixel by pixel while the correlation coefficient between them was calculated. The pair of windows with the minimal correlation coefficient was selected; the window with the larger average gray value indicates the location of the tumor, and the pixel with the largest gray value within it is taken as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square centered at the locating point with a side length of 10 pixels, and threshold segmentation and morphological operations were used to acquire the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. On average, the ratio of correct location was 93.4% for 575 slices containing tumor, the Dice similarity coefficient was 0.77 per scan, and the time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient proves a new and effective feature for tumor location.
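The symmetry idea can be sketched in one dimension (the paper works on 2-D slices): a profile that is mirror-symmetric except for a bright lesion on one side. Sliding windows over the left profile and its right counterpart are compared; windows overlapping the asymmetry show the lowest correlation. All signal parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-D intensity profiles of the two brain halves:
# nearly symmetric, plus a bright "tumor" on the right side only.
n = 128
left = rng.normal(1.0, 0.05, n) + np.sin(np.linspace(0, 3, n))
right = left + rng.normal(0.0, 0.05, n)      # near-mirror counterpart
right[60:75] += 2.0                          # asymmetric lesion

# Slide paired windows and record the correlation coefficient of each pair
win = 15
scores = []
for i in range(n - win):
    a, b = left[i:i + win], right[i:i + win]
    scores.append(np.corrcoef(a, b)[0, 1])

# The minimum-correlation window flags the asymmetric (tumor) region
tumor_start = int(np.argmin(scores))
print(f"lowest-correlation window starts near index {tumor_start}")
```

In the full method the same comparison runs over 2-D windows in both directions, and the lower-correlation side with the higher mean gray value is taken as the tumor side.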
Digital speckle correlation for nondestructive testing of corrosion
NASA Astrophysics Data System (ADS)
Paiva, Raul D., Jr.; Soga, Diogo; Muramatsu, Mikiya; Hogert, Elsa N.; Landau, Monica R.; Ruiz Gale, Maria F.; Gaggioli, Nestor G.
1999-07-01
This paper describes the use of optical speckle pattern correlation to detect and analyze metallic corrosion phenomena, and shows the experimental set-up used. We present new results in the characterization of the corrosion process using a model based on electroerosion phenomena. We also provide valuable information about surface microrelief changes, which is useful in numerous engineering applications. The results show that our technique offers new possibilities for the analysis of corrosion and oxidation processes, particularly in real time.
Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells
NASA Astrophysics Data System (ADS)
Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke
2011-06-01
Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset can, however, not provide information on whether a correlation is truly disease-specific with respect to the disease samples and not a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species were measured by mass spectrometry in order to evaluate lipid remodeling as a result to a combination of perturbation of cells inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
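The distinction between Pearson and partial correlation that underlies a GGM can be sketched on synthetic "lipid levels": a shared driver Z makes X and Y strongly Pearson-correlated, but the partial correlation, read off the inverse covariance (precision) matrix, removes the indirect association. The data-generating model below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic lipid levels: Z drives both X and Y, so their Pearson
# correlation is strong but entirely indirect.
n = 2000
z = rng.normal(size=n)
x = z + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)
data = np.vstack([x, y, z])

pearson_xy = np.corrcoef(data)[0, 1]

# GGM building block: partial correlation from the precision matrix,
# rho_ij = -P_ij / sqrt(P_ii * P_jj)
prec = np.linalg.inv(np.cov(data))
partial_xy = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
print(f"Pearson r(x,y) = {pearson_xy:.2f}, partial r(x,y|z) = {partial_xy:.2f}")
```

The differential GGM in the paper goes one step further, keeping only those partial correlations that differ between disease and control conditions.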
Eye Movement Correlates of Acquired Central Dyslexia
ERIC Educational Resources Information Center
Schattka, Kerstin I.; Radach, Ralph; Huber, Walter
2010-01-01
Based on recent progress in theory and measurement techniques, the analysis of eye movements has become one of the major methodological tools in experimental reading research. Our work uses this approach to advance the understanding of impaired information processing in acquired central dyslexia of stroke patients with aphasia. Up to now there has…
NASA Astrophysics Data System (ADS)
Jing, Ya-Bing; Liu, Chang-Wen; Bi, Feng-Rong; Bi, Xiao-Yang; Wang, Xia; Shao, Kang
2017-07-01
Vibration-based techniques are rarely applied directly to diesel engine fault diagnosis, because the surface vibration signals of diesel engines exhibit complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, the fractal correlation dimension, wavelet energy and entropy, as features reflecting the engine's fractal and energy characteristics, are extracted from the decomposed signals through analyzing vibration acceleration signals derived from the cylinder head in seven different states of the valve train. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and generalizes better in small-sample recognition. Besides, the fractal correlation dimension and the wavelet energy and entropy, as special features of the diesel engine vibration signal, are considered as input vectors of the FastICA-SVM classifier and produce excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
Fluorescence correlation spectroscopy: principles and applications.
Bacia, Kirsten; Haustein, Elke; Schwille, Petra
2014-07-01
Fluorescence correlation spectroscopy (FCS) is used to study the movements and the interactions of biomolecules at extremely dilute concentrations, yielding results with good spatial and temporal resolutions. Using a number of technical developments, FCS has become a versatile technique that can be used to study a variety of sample types and can be advantageously combined with other methods. Unlike other fluorescence-based techniques, the analysis of FCS data is not based on the average intensity of the fluorescence emission but examines the minute intensity fluctuations caused by spontaneous deviations from the mean at thermal equilibrium. These fluctuations can result from variations in local concentrations owing to molecular mobility or from characteristic intermolecular or intramolecular reactions of fluorescently labeled biomolecules present at low concentrations. Here, we provide a basic introduction to FCS, including its technical development and theoretical basis, experimental setup of an FCS system, adjustment of a setup, data acquisition, and analysis of FCS measurements. Finally, the application of FCS to the study of lipid bilayer membranes and to living cells is discussed. © 2014 Cold Spring Harbor Laboratory Press.
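The core FCS computation is the normalized autocorrelation of the intensity fluctuations, G(tau) = <dF(t) dF(t+tau)> / <F>^2. The sketch below applies it to a synthetic trace (AR(1) fluctuations on a mean count rate), which is an illustrative stand-in for detector data, not a physical diffusion model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic fluorescence trace: mean count rate plus correlated
# fluctuations with a characteristic decay time set by phi.
n = 100_000
mean_rate = 100.0
phi = 0.95
noise = rng.normal(0, 1.0, n)
fluct = np.empty(n)
fluct[0] = 0.0
for t in range(1, n):
    fluct[t] = phi * fluct[t - 1] + noise[t]
trace = mean_rate + fluct

# Normalized fluctuation autocorrelation G(tau)
dF = trace - trace.mean()
def G(tau):
    return np.mean(dF[:n - tau] * dF[tau:]) / trace.mean() ** 2

g = np.array([G(tau) for tau in (0, 5, 20, 80)])
print(g)
```

As the abstract emphasizes, the analysis uses only these minute fluctuations, not the average intensity; in a real experiment the decay of G(tau) is then fitted with a diffusion model to extract concentrations and diffusion times.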
Predicting chroma from luma with frequency domain intra prediction
NASA Astrophysics Data System (ADS)
Egge, Nathan E.; Valin, Jean-Marc
2015-03-01
This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that, while RGB to YUV color conversion decorrelates the color planes globally across an image, some correlation remains locally at the block level. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V). In codecs that use lapped transforms this is not possible, since transform support extends across the block boundaries and thus neighboring blocks are unavailable during intra-prediction. We design a frequency domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and that works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment comparing these two techniques inside the experimental Daala video codec shows the lower-complexity algorithm to be a better chroma predictor.
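The spatial linear model that the paper adapts to the frequency domain can be sketched directly: fit alpha and beta from neighboring reconstructed pixels, then predict the chroma block as alpha * luma + beta. The neighbor values and the underlying linear relation below are synthetic illustrations, not Daala's frequency-domain predictor.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical reconstructed neighbor pixels (the causal border of a block):
# chroma locally tracks luma linearly, plus a little noise.
luma_nb = rng.uniform(0, 255, 64)
chroma_nb = 0.4 * luma_nb + 30 + rng.normal(0, 2, 64)

# Least-squares fit of the local model chroma ~= alpha * luma + beta
A = np.vstack([luma_nb, np.ones_like(luma_nb)]).T
(alpha, beta), *_ = np.linalg.lstsq(A, chroma_nb, rcond=None)

# Predict the current block's chroma from its reconstructed luma
luma_block = rng.uniform(0, 255, (8, 8))
chroma_pred = alpha * luma_block + beta
print(f"alpha = {alpha:.2f}, beta = {beta:.1f}")
```

The frequency-domain variant in the paper avoids needing these spatial neighbors at all, which is what makes it compatible with lapped transforms.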
Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi
2018-01-01
The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system for microbes. In order to enhance the utility of the system, it is favorable to know the diffusion coefficients of nutrients such as sugars in the culture medium beforehand. We have, therefore, built a simple and convenient experimental system that uses agar gel to observe diffusion. Next, we performed computer simulations, based on random-walk concepts, of the experimental diffusion system and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules that are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
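The random-walk idea can be sketched in one dimension: the mean squared displacement of many walkers grows as MSD = 2 D t, so the diffusion coefficient can be read off a straight-line fit. The step size, time step and D value below are illustrative, not the paper's agar-gel parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

# 1-D random walk: Gaussian steps with variance 2*D*dt reproduce
# diffusion with coefficient D_true.
D_true = 0.5          # diffusion coefficient (arbitrary units^2 per step)
dt = 1.0
n_walkers, n_steps = 5000, 200
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_walkers, n_steps))
paths = np.cumsum(steps, axis=1)

# Mean squared displacement vs. time, then D from the slope of MSD = 2 D t
t = dt * np.arange(1, n_steps + 1)
msd = np.mean(paths ** 2, axis=0)
D_est = np.polyfit(t, msd, 1)[0] / 2.0
print(f"estimated D = {D_est:.3f} (true {D_true})")
```

The paper's correlation formulas play the role of this fit: they map an observable of the diffusion front in agar gel back onto D.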
Design of infrasound-detection system via adaptive LMSTDE algorithm
NASA Technical Reports Server (NTRS)
Khalaf, C. S.; Stoughton, J. W.
1984-01-01
A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasound. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
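The GCC with a Roth weighting can be sketched in a few lines: a noisy copy of a signal arrives at a second sensor with a known sample delay, and the peak of the weighted cross-correlation recovers that delay. The signals, noise level and delay are illustrative, not the thesis's infrasound settings.

```python
import numpy as np

rng = np.random.default_rng(10)

# Signal at sensor 1, and a delayed noisy copy at sensor 2
n = 4096
delay = 37
s = rng.normal(size=n)
x1 = s + 0.2 * rng.normal(size=n)
x2 = np.roll(s, delay) + 0.2 * rng.normal(size=n)

# Generalized cross-correlation with Roth weighting: divide the
# cross-spectrum by the power spectrum of the reference channel,
# which sharpens the correlation peak.
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
roth = (X1.conj() * X2) / np.maximum(np.abs(X1) ** 2, 1e-12)
cc = np.fft.ifft(roth).real

est = int(np.argmax(cc))
if est > n // 2:
    est -= n                        # unwrap negative circular lags
print(f"estimated delay = {est} samples")
```

The adaptive LMS approach in the thesis converges to an equivalent estimate in the time domain, without computing the spectra explicitly.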
Acoustic Location of Lightning Using Interferometric Techniques
NASA Astrophysics Data System (ADS)
Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.
2013-12-01
Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations that compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer three microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning.
Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
On using the Hilbert transform for blind identification of complex modes: A practical approach
NASA Astrophysics Data System (ADS)
Antunes, Jose; Debut, Vincent; Piteau, Philippe; Delaune, Xavier; Borsoi, Laurent
2018-01-01
The modal identification of dynamical systems under operational conditions, when subjected to wide-band unmeasured excitations, is today a viable alternative to more traditional modal identification approaches based on processing sets of measured FRFs or impulse responses. Among current techniques for performing operational modal identification, the so-called blind identification methods are the subject of considerable investigation. In particular, the SOBI (Second-Order Blind Identification) method was found to be quite efficient. SOBI was originally developed for systems with normal modes. To address systems with complex modes, various extension approaches have been proposed, in particular: (a) Using a first-order state-space formulation for the system dynamics; (b) Building complex analytic signals from the measured responses using the Hilbert transform. In this paper we further explore the latter option, which is conceptually interesting while preserving the model order and size. Focus is on applicability of the SOBI technique for extracting the modal responses from analytic signals built from a set of vibratory responses. The novelty of this work is to propose a straightforward computational procedure for obtaining the complex cross-correlation response matrix to be used for the modal identification procedure. After clarifying subtle aspects of the general theoretical framework, we demonstrate that the correlation matrix of the analytic responses can be computed through a Hilbert transform of the real correlation matrix, so that the actual time-domain responses are no longer required for modal identification purposes. 
The numerical validation of the proposed technique is presented based on time-domain simulations of a conceptual physical multi-modal system, designed to display modes ranging from normal to highly complex, while keeping modal damping low and nearly independent of the modal complexity, and which can prove very interesting in test bench applications. Numerical results for complex modal identifications are presented, and the quality of the identified modal matrix and modal responses, extracted using the complex SOBI technique and implementing the proposed formulation, is assessed.
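The analytic-signal construction at the heart of this approach can be sketched as follows. The two-mode toy system, sensor count, and mixing matrix below are illustrative only, not the paper's benchmark; the point is that the Hilbert transform turns real multi-channel responses into complex analytic signals whose zero-lag correlation matrix is the Hermitian matrix used by complex SOBI.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 4000)

# Two decaying sinusoidal "modal responses" mixed onto five sensors.
modes = np.vstack([np.exp(-0.1 * t) * np.sin(2 * np.pi * 3.0 * t),
                   np.exp(-0.2 * t) * np.sin(2 * np.pi * 7.0 * t + 0.8)])
mixing = rng.normal(size=(5, 2))
x = mixing @ modes + 0.01 * rng.normal(size=(5, t.size))

xa = hilbert(x, axis=1)              # analytic (complex) signals per channel
R = (xa @ xa.conj().T) / t.size      # complex zero-lag correlation matrix

herm_err = np.max(np.abs(R - R.conj().T))   # should be ~ machine precision
```

The matrix R is Hermitian with a real, non-negative diagonal, as required for the joint-diagonalization step of SOBI.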
De Crop, An; Bacher, Klaus; Van Hoof, Tom; Smeets, Peter V; Smet, Barbara S; Vergauwen, Merel; Kiendys, Urszula; Duyck, Philippe; Verstraete, Koenraad; D'Herde, Katharina; Thierens, Hubert
2012-01-01
To determine the correlation between the clinical and physical image quality of chest images by using cadavers embalmed with the Thiel technique and a contrast-detail phantom. The use of human cadavers fulfilled the requirements of the institutional ethics committee. Clinical image quality was assessed by using three human cadavers embalmed with the Thiel technique, which results in excellent preservation of the flexibility and plasticity of organs and tissues. As a result, lungs can be inflated during image acquisition to simulate the pulmonary anatomy seen on a chest radiograph. Both contrast-detail phantom images and chest images of the Thiel-embalmed bodies were acquired with an amorphous silicon flat-panel detector. Tube voltage (70, 81, 90, 100, 113, 125 kVp), copper filtration (0.1, 0.2, 0.3 mm Cu), and exposure settings (200, 280, 400, 560, 800 speed class) were altered to simulate different quality levels. Four experienced radiologists assessed the image quality by using a visual grading analysis (VGA) technique based on European Quality Criteria for Chest Radiology. The phantom images were scored manually and automatically with use of dedicated software, both resulting in an inverse image quality figure (IQF). Spearman rank correlations between inverse IQFs and VGA scores were calculated. A statistically significant correlation (r = 0.80, P < .01) was observed between the VGA scores and the manually obtained inverse IQFs. Comparison of the VGA scores and the automated evaluated phantom images showed an even better correlation (r = 0.92, P < .001). The results support the value of contrast-detail phantom analysis for evaluating clinical image quality in chest radiography. © RSNA, 2011.
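The Spearman rank correlation between inverse IQFs and VGA scores can be computed as sketched below. The score values are invented for illustration; only the procedure mirrors the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical VGA scores and inverse image quality figures (IQFs)
# for six exposure settings; the numbers are made up.
vga_scores   = np.array([2.1, 2.4, 2.9, 3.2, 3.6, 3.9])
inverse_iqfs = np.array([0.41, 0.44, 0.52, 0.55, 0.63, 0.70])

rho, p_value = spearmanr(vga_scores, inverse_iqfs)
```

Because Spearman correlation operates on ranks, a perfectly monotone relationship (as in this toy data) yields rho = 1 even when the relationship is nonlinear, which suits ordinal VGA scores.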
Almessiere, M A; Altuwiriqi, R; Gondal, M A; AlDakheel, R K; Alotaibi, H F
2018-08-01
In this work, we analysed human fingernails of people who suffer from vitamin D deficiency using the laser-induced breakdown spectroscopy (LIBS) and inductively coupled plasma atomic emission spectroscopy (ICP-AES) techniques. The measurements were conducted on 71 nail samples collected randomly from volunteers of different genders and ages ranging between 20 and 50 years. The main aim of this study is to find the correlation between vitamin D deficiency and the intensity of some dominant lines in the LIBS spectra. A LIBS spectrum consists of dominant lines of fifteen elements, including calcium, magnesium, sodium, potassium, titanium, iron, chlorine, sulphur, copper, chromium, zinc, nitrogen, phosphorus, and oxygen. By recording the spectrum in specific ranges and focusing on calcium, magnesium, sodium, and potassium, we found a correlation between the intensity of the potassium (K) lines at 766.5 and 769.9 nm and vitamin D level in both age groups (20 and 25 years old), with weak correlation for the calcium (Ca), magnesium (Mg), and sodium (Na) lines. To verify the validity of the LIBS results, we analysed the nail samples with ICP-AES, a standard analytical technique. The elements detected with our LIBS technique are in good agreement with those identified by ICP-AES. From the health and physiological perspectives, the LIBS system used for spectral analysis in this work is appropriate for diagnostic purposes, such as finding the correlation between vitamin D deficiency and potassium content, especially for hypertensive patients who simultaneously take potassium-based medication and vitamin D supplements. Copyright © 2018 Elsevier B.V. All rights reserved.
Method of predicting the mean lung dose based on a patient's anatomy and dose-volume histograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zawadzka, Anna, E-mail: a.zawadzka@zfm.coi.pl; Nesteruk, Marta; Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich
The aim of this study was to propose a method to predict the minimum achievable mean lung dose (MLD) and corresponding dosimetric parameters for organs at risk (OARs) based on individual patient anatomy. For each patient, the dose for 36 equidistant individual multileaf collimator shaped fields was calculated in the treatment planning system (TPS). Based on these dose matrices, the MLD for each patient was predicted by the in-house DosePredictor software, in which the solution of linear equations was implemented. The software prediction results were validated based on 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans previously prepared for 16 patients with stage III non-small-cell lung cancer (NSCLC). For each patient, dosimetric parameters derived from the plans and the results calculated by DosePredictor were compared. The MLD, the maximum dose to the spinal cord (Dmax,cord) and the mean esophageal dose (MED) were analyzed. There was a strong correlation between the MLD calculated by DosePredictor and that obtained in treatment plans regardless of the technique used; the correlation coefficient was 0.96 for both 3D-CRT and VMAT. Similarly, MED correlations of 0.98 and 0.96 were obtained for 3D-CRT and VMAT plans, respectively. The maximum dose to the spinal cord was not predicted as well: the correlation coefficient was 0.30 for 3D-CRT and 0.61 for VMAT. The presented method allows the minimum MLD and corresponding dosimetric parameters for OARs to be predicted without the necessity of plan preparation. The method can serve as a guide during the treatment planning process, for example as initial constraints in VMAT optimization, and allows the probability of lung pneumonitis to be predicted.
Novel Tool for Complete Digitization of Paper Electrocardiography Data.
Ravichandran, Lakshminarayan; Harless, Chris; Shah, Amit J; Wick, Carson A; McClellan, James H; Tridandapani, Srini
We present a Matlab-based tool to convert electrocardiography (ECG) information from paper charts into digital ECG signals. The tool can be used for long-term retrospective studies of cardiac patients to study evolving features with prognostic value. To perform the conversion, we: 1) detect the graphical grid on ECG charts using grayscale thresholding; 2) digitize the ECG signal based on its contour using a column-wise pixel scan; and 3) use template-based optical character recognition to extract patient demographic information from the paper ECG in order to interface the data with the patients' medical records. To validate the digitization technique, we: 1) compute the correlation between the original digital signals and the signals digitized from paper ECG; and 2) measure and compare clinically significant ECG parameters from both the paper-based ECG signals and the digitized ECG. The validation demonstrates a correlation value of 0.85-0.9 between the digital ECG signal and the signal digitized from the paper ECG. There is a high correlation in the clinical parameters between the ECG information from the paper charts and the digitized signal, with intra-observer and inter-observer correlations of 0.8-0.9 (p < 0.05), and kappa statistics ranging from 0.85 (inter-observer) to 1.00 (intra-observer). The important features of the ECG signal, especially the QRST complex and the associated intervals, are preserved by obtaining the contour from the paper ECG. The differences between the measures of clinically important features extracted from the original signal and the reconstructed signal are insignificant, highlighting the accuracy of this technique. By using this type of ECG digitization tool to carry out retrospective studies on large databases that rely on paper ECG records, studies of emerging ECG features can be performed.
In addition, this tool can be used to potentially integrate digitized ECG information with digital ECG analysis programs and with the patient's electronic medical record.
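The column-wise pixel scan in step 2 can be sketched as follows on a synthetic "paper" image: a known trace is drawn as dark pixels on a light background, the image is thresholded, and the signal is recovered by taking the row of the first dark pixel in each column. The image size and trace shape are invented for illustration.

```python
import numpy as np

height, width = 200, 500
image = np.full((height, width), 255, dtype=np.uint8)   # white page

# Draw a known sinusoidal "ECG" trace as black pixels, one per column.
cols = np.arange(width)
trace_rows = (100 + 60 * np.sin(2 * np.pi * cols / 125)).astype(int)
image[trace_rows, cols] = 0

binary = image < 128                     # grayscale thresholding
recovered = np.argmax(binary, axis=0)    # first dark pixel in each column
```

On real scans the grid lines must be suppressed first (the abstract's step 1), since otherwise the column scan would lock onto grid pixels rather than the trace contour.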
NASA Astrophysics Data System (ADS)
Abdolahad
2015-01-01
Cancerous transformation may be dependent on correlation between electrical disruptions in the cell membrane and mechanical disruptions of cytoskeleton structures. Silicon nanotube (SiNT)-based electrical probes, as ultra-accurate signal recorders with subcellular resolution, may create many opportunities for fundamental biological research and biomedical applications. Here, we used this technology to electrically monitor cellular mechanosensing. The SiNT probe was combined with an electrically activated glass micropipette aspiration system to achieve a new cancer diagnostic technique that is based on real-time correlation between mechanical and electrical behaviour of single cells. Our studies demonstrated marked changes in the electrical response following increases in the mechanical aspiration force in healthy cells. In contrast, such responses were extremely weak for malignant cells. Confocal microscopy results showed the impact of actin microfilament remodelling on the reduction of the electrical response for aspirated cancer cells, due to the significant role of actin in modulating the ion channel activity in the cell membrane. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr06102k
NASA Astrophysics Data System (ADS)
Gómez-García, C.; Brenguier, F.; Boué, P.; Shapiro, N. M.; Droznin, D. V.; Droznina, S. Ya; Senyukov, S. L.; Gordeev, E. I.
2018-05-01
Continuous noise-based monitoring of seismic velocity changes provides insights into volcanic unrest, earthquake mechanisms and fluid injection in the sub-surface. The standard monitoring approach relies on measuring travel time changes of late coda arrivals between daily and reference noise cross-correlations, usually chosen as stacks of daily cross-correlations. The main assumption of this method is that the shape of the noise correlations does not change over time or, in other terms, that the ambient-noise sources are stationary through time. These conditions are not fulfilled when a strong episodic source of noise, such as a volcanic tremor for example, perturbs the reconstructed Green's function. In this paper we propose a general formulation for retrieving continuous time series of noise-based seismic velocity changes without the requirement of any arbitrary reference cross-correlation function. Instead, we measure the changes between all possible pairs of daily cross-correlations and invert them using different smoothing parameters to obtain the final velocity change curve. We perform synthetic tests in order to establish a general framework for future applications of this technique. In particular, we study the reliability of velocity change measurements versus the stability of noise cross-correlation functions. We apply this approach to a complex dataset of noise cross-correlations at Klyuchevskoy volcanic group (Kamchatka), hampered by loss of data and the presence of highly non-stationary seismic tremors.
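The paper's contribution is the inversion over all pairs of daily correlations; as a simpler illustration of the underlying measurement, here is a basic stretching estimate of the velocity change between two correlation functions (dv/v = -eps). The waveform and stretch value are synthetic.

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid):
    """Grid-search the stretch factor eps that best maps the current
    waveform onto the reference: cur(t) ~ ref(t*(1+eps))."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, ref)
        cc = np.corrcoef(stretched, cur)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

t = np.linspace(0.0, 20.0, 4000)
ref = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.05 * t)   # mock coda waveform
true_eps = 0.004                                        # 0.4% stretch
cur = np.interp(t * (1.0 + true_eps), t, ref)           # synthetic "current" day

eps_grid = np.linspace(-0.01, 0.01, 201)
eps_hat, cc = stretching_dvv(ref, cur, t, eps_grid)
```

When the noise sources are non-stationary, as at Klyuchevskoy, the correlation waveforms themselves change shape, which is why the paper replaces the single reference with measurements between all pairs of days followed by an inversion.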
Further Progress in Noise Source Identification in High Speed Jets via Causality Principle
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.; Elam, K. A.
2004-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p′, measured at far-field locations, were correlated with each of the density ρ, axial velocity u, radial velocity v, ρuu and ρvv fluctuations measured at various points in fully expanded, unheated plumes of Mach number 0.95, 1.4 and 1.8. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering (Seasholtz, Panda, and Elam, AIAA Paper 2002-0827). The technique uses a continuous-wave, narrow-linewidth laser, a Fabry-Perot interferometer and photon-counting electronics. The far-field sound pressure fluctuations at 30° to the jet axis provided the highest correlation coefficients with all flow variables. The correlation coefficients decreased sharply with increased microphone polar angle, and beyond about 60° all correlations mostly fell below the experimental noise floor. Among all correlations, ⟨ρuu; p′⟩ showed the highest values. Interestingly, , in all respects, were very similar to
Structure-Based Characterization of Multiprotein Complexes
Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J.
2014-01-01
Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. PMID:24954616
NASA Astrophysics Data System (ADS)
Ideris, N.; Ahmad, A. L.; Ooi, B. S.; Low, S. C.
2018-05-01
Microporous PVDF membranes were used as protein capture matrices in immunoassays. Because the most common labels in immunoassays are detected based on a colour change, an understanding of how protein concentration varies on different PVDF surfaces is needed. Herein, the correlation between membrane pore size and protein adsorption was systematically investigated. Five different PVDF membrane morphologies were prepared, and FTIR/ATR was employed to accurately quantify the surface protein concentration on membranes with small pore sizes. SigmaPlot® was used to find a suitable curve fit for protein adsorption versus membrane pore size, with a high correlation coefficient, R², of 0.9971.
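The curve-fitting step can be sketched as below. Both the data values and the saturating exponential form are invented for illustration; the paper does not state its fitted model, only that a fit with R² = 0.9971 was found.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pore sizes (um) and surface protein concentrations (a.u.).
pore_size  = np.array([0.1, 0.22, 0.45, 0.8, 1.2])
adsorption = np.array([12.5, 11.0, 8.5, 5.8, 3.7])

def model(d, a, b):
    """Assumed decaying-exponential adsorption vs. pore size."""
    return a * np.exp(-b * d)

params, _ = curve_fit(model, pore_size, adsorption, p0=(10.0, 1.0))

# Coefficient of determination for the fit.
pred = model(pore_size, *params)
ss_res = np.sum((adsorption - pred) ** 2)
ss_tot = np.sum((adsorption - adsorption.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```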
Extracting the sovereigns’ CDS market hierarchy: A correlation-filtering approach
NASA Astrophysics Data System (ADS)
León, Carlos; Leiton, Karen; Pérez, Jhonatan
2014-12-01
This paper employs correlation-into-distance mapping techniques and a minimal spanning tree-based correlation-filtering methodology on 36 sovereign CDS spread time-series in order to identify the sovereigns’ informational hierarchy. The resulting hierarchy (i) concurs with sovereigns’ eigenvector centrality; (ii) confirms the importance of geographical and credit rating clustering; (iii) identifies Russia, Turkey and Brazil as regional benchmarks; (iv) reveals the idiosyncratic nature of Japan and United States; (v) confirms that a small set of common factors affects the system; (vi) suggests that lower-medium grade rated sovereigns are the most influential, but also the most prone to contagion; and (vii) suggests the existence of a “Latin American common factor”.
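The correlation-into-distance mapping and MST filtering can be sketched as follows. The standard metric d = sqrt(2(1 - ρ)) maps correlation 1 to distance 0 and correlation -1 to distance 2; the six synthetic series below stand in for the 36 CDS spread series.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
returns = rng.normal(size=(500, 6))       # 6 synthetic return series
returns[:, 1] += 0.8 * returns[:, 0]      # make series 0 and 1 related

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))         # correlation-into-distance mapping
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist).toarray()
n_edges = np.count_nonzero(mst)           # an MST on N nodes has N-1 edges
```

Because the pair (0, 1) has by construction the smallest distance, its edge necessarily survives the filtering, which is how the method exposes the informational hierarchy.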
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance in terms of perception; in other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for clustering data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
Quantum-optical coherence tomography with classical light.
Lavoie, J; Kaltenbaek, R; Resch, K J
2009-03-02
Quantum-optical coherence tomography (Q-OCT) is an interferometric technique for axial imaging offering several advantages over conventional methods. Chirped-pulse interferometry (CPI) was recently demonstrated to exhibit all of the benefits of the quantum interferometer upon which Q-OCT is based. Here we use CPI to measure axial interferograms to profile a sample accruing the important benefits of Q-OCT, including automatic dispersion cancellation, but with 10 million times higher signal. Our technique solves the artifact problem in Q-OCT and highlights the power of classical correlation in optical imaging.
López, Lissett; Venteo, Angel; Aguirre, Enara; García, Marga; Rodríguez, Majosé; Amusátegui, Inmaculada; Tesouro, Miguel A; Vela, Carmen; Sainz, Angel; Rueda, Paloma
2007-11-01
An indirect enzyme-linked immunosorbent assay (ELISA) based on the baculovirus recombinant P30 protein of Ehrlichia canis and the 1BH4 anti-canine IgG monoclonal antibody was developed and evaluated by examining a panel of 98 positive and 157 negative sera, using the indirect fluorescent antibody (IFA) test as the reference technique. The P30-based ELISA appeared to be sensitive and specific (77.55% and 95.54%, respectively) when qualitative results (positive/negative) were compared with those of the IFA test; the coefficient of correlation (R) between the 2 tests was 0.833. Furthermore, it was possible to establish a mathematical formula for use in comparing the results of both techniques. These results indicate that the recombinant P30 antigen-based ELISA is a suitable alternative to the IFA test for simple, consistent, and rapid serodiagnosis of canine ehrlichiosis. Moreover, the use of this recombinant protein as antigen offers a great advantage for antigen preparation in comparison with other techniques in which the whole E. canis organism is used as antigen.
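As a sanity check of the reported operating characteristics: with 98 IFA-positive and 157 IFA-negative sera, 76 true positives and 150 true negatives reproduce the stated sensitivity and specificity. The counts 76 and 150 are inferred from the percentages, not stated in the abstract.

```python
# Sensitivity = TP / (all reference-positive); specificity = TN / (all
# reference-negative), with the IFA test as the reference standard.
true_positives, positives = 76, 98
true_negatives, negatives = 150, 157

sensitivity = true_positives / positives
specificity = true_negatives / negatives
```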
Lambrechts, T; Papantoniou, I; Sonnaert, M; Schrooten, J; Aerts, J-M
2014-10-01
Online and non-invasive quantification of critical tissue engineering (TE) construct quality attributes in TE bioreactors is indispensable for the cost-effective up-scaling and automation of cellular construct manufacturing. However, appropriate monitoring techniques for cellular constructs in bioreactors are still lacking. This study presents a generic and robust approach to determine the cell number and metabolic activity of cell-based TE constructs in perfusion bioreactors from single oxygen sensor data in dynamic perfusion conditions. A data-based mechanistic modeling technique was used that is able to correlate the number of cells within the scaffold (R² = 0.80) and the metabolic activity of the cells (R² = 0.82) to the dynamics of the oxygen response to step changes in the perfusion rate. This generic non-destructive measurement technique is effective for a large range of cells, from as low as 1.0 × 10⁵ cells to potentially multiple millions of cells, and can open up new possibilities for effective bioprocess monitoring. © 2014 Wiley Periodicals, Inc.
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies of heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities among the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
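The two ingredients, an STA/LTA energy ratio per trace and the cross-correlation of those ratios across traces, can be sketched as follows. The traces, window lengths and event parameters are synthetic stand-ins, not the paper's values.

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Running short-term-average / long-term-average energy ratio."""
    energy = trace ** 2
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)

rng = np.random.default_rng(4)
n, onset = 5000, 3000

# Two noisy traces sharing a burst of energy starting at the same sample.
trace1 = rng.normal(scale=0.2, size=n)
trace1[onset:onset + 200] += rng.normal(scale=2.0, size=200)
trace2 = rng.normal(scale=0.2, size=n)
trace2[onset:onset + 200] += rng.normal(scale=2.0, size=200)

ratio1 = sta_lta(trace1, n_sta=50, n_lta=1000)
ratio2 = sta_lta(trace2, n_sta=50, n_lta=1000)

detected = int(np.argmax(ratio1))               # strongest STA/LTA peak
similarity = np.corrcoef(ratio1, ratio2)[0, 1]  # coherence across traces
```

A high similarity between the energy-ratio series of different stations is what distinguishes a regional event (common to all stations) from a local disturbance seen by only a few.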
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Madhavi Z.; Glasgow, David C.; Tschaplinski, Timothy J.
The black cottonwood poplar (Populus trichocarpa) leaf ionome (inorganic trace elements and mineral nutrients) is an important aspect for determining the physiological and developmental processes contributing to biomass production. A number of techniques are used to measure the ionome, yet characterizing the leaf spatial heterogeneity remains a challenge, especially in solid samples. Laser-induced breakdown spectroscopy (LIBS) has been used to determine the elemental composition of leaves and is able to raster across solid matrixes at 10 μm resolution. Here, we evaluate the use of LIBS for solid-sample leaf elemental characterization in relation to neutron activation analysis (NAA), a laboratory-based technique used by the National Institute of Standards and Technology (NIST) to certify trace elements in candidate reference materials, including plant leaf matrices. NAA data were correlated to the LIBS spectra to achieve quantification of the elements or ions present within poplar leaves. The regression coefficients of calibration and validation using multivariate analysis (MVA) methodology for six out of seven elements vary between 0.810 and 0.998. LIBS and NAA data are presented for calcium, magnesium, manganese, aluminum, copper, and potassium. Chlorine was also detected but did not show good correlation between the LIBS and NAA techniques. This research shows that LIBS can be used as a fast, high-spatial-resolution technique to quantify elements as part of large-scale field phenotyping projects.
NASA Astrophysics Data System (ADS)
Martin, Madhavi Z.; Glasgow, David C.; Tschaplinski, Timothy J.; Tuskan, Gerald A.; Gunter, Lee E.; Engle, Nancy L.; Wymore, Ann M.; Weston, David J.
2017-12-01
The black cottonwood poplar (Populus trichocarpa) leaf ionome (inorganic trace elements and mineral nutrients) is an important aspect for determining the physiological and developmental processes contributing to biomass production. A number of techniques are used to measure the ionome, yet characterizing the leaf spatial heterogeneity remains a challenge, especially in solid samples. Laser-induced breakdown spectroscopy (LIBS) has been used to determine the elemental composition of leaves and is able to raster across solid matrixes at 10 μm resolution. Here, we evaluate the use of LIBS for solid-sample leaf elemental characterization in relation to neutron activation analysis (NAA), a laboratory-based technique used by the National Institute of Standards and Technology (NIST) to certify trace elements in candidate reference materials, including plant leaf matrices. NAA data were correlated to the LIBS spectra to achieve quantification of the elements or ions present within poplar leaves. The regression coefficients of calibration and validation using multivariate analysis (MVA) methodology for six out of seven elements vary between 0.810 and 0.998. LIBS and NAA data are presented for calcium, magnesium, manganese, aluminum, copper, and potassium. Chlorine was also detected but did not show good correlation between the LIBS and NAA techniques. This research shows that LIBS can be used as a fast, high-spatial-resolution technique to quantify elements as part of large-scale field phenotyping projects.
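The calibration step, regressing element concentrations on spectral-line intensities, can be sketched with a toy ordinary least-squares model. Everything below is synthetic, and the paper's actual MVA methodology (likely a latent-variable method such as PLS) is only stood in for by plain multivariate regression.

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_lines = 40, 6

# Synthetic "spectral line intensities" and concentrations derived from
# them, with a few informative lines and a little measurement noise.
intensities = rng.normal(size=(n_samples, n_lines))
true_coefs = np.array([2.0, 0.0, -1.5, 0.0, 0.5, 0.0])
concentration = intensities @ true_coefs + 0.05 * rng.normal(size=n_samples)

X = np.column_stack([np.ones(n_samples), intensities])   # add intercept
coefs, *_ = np.linalg.lstsq(X, concentration, rcond=None)

pred = X @ coefs
r = np.corrcoef(pred, concentration)[0, 1]   # calibration coefficient
```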
A New Technique to Observe ENSO Activity via Ground-Based GPS Receivers
NASA Astrophysics Data System (ADS)
Suparta, Wayan; Iskandar, Ahmad; Singh, Mandeep Singh Jit
In an attempt to study the effects of global climate change in the tropics and to improve global climate models, this paper aims to detect ENSO events, especially the El Niño phase, using ground-based GPS receivers. Precipitable water vapor (PWV) obtained from Global Positioning System (GPS) meteorology measurements, together with the sea surface temperature anomaly (SSTa), is used to examine the response to El Niño activity. Data gathered from four selected stations over Southeast Asia, namely PIMO (Philippines), KUAL (Malaysia), NTUS (Singapore) and BAKO (Indonesia), for the years 2009/2010 were processed. A strong correlation was observed for the PIMO station, with a correlation coefficient of -0.90, significant at the 99% confidence level. In general, the relationship between GPS PWV and SSTa at all stations on a weekly basis showed a negative correlation, indicating that PWV followed a decreasing trend during the El Niño event. This decreasing trend in PWV is caused by a dry season that affects the GPS signals through ocean-atmosphere coupling. Based on these promising results, we propose that ground-based GPS receivers can be used to monitor ENSO activity, a prospective method previously unexplored.
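The station-by-station comparison above reduces to a Pearson correlation between two weekly time series (PWV and SSTa). A minimal sketch, with a hypothetical function name and without the significance test the study also applies:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length time series,
    # e.g. weekly GPS PWV and SSTa; -1 indicates perfect anticorrelation
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))
```

A value near -0.90, as reported for PIMO, would indicate that PWV falls as the SST anomaly rises during El Niño.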
Levecke, Bruno; De Wilde, Nathalie; Vandenhoute, Els; Vercruysse, Jozef
2009-01-01
Background Soil-transmitted helminths, such as Trichuris trichiura, are of major concern in public health. Current efforts to control these helminth infections involve periodic mass treatment in endemic areas. Since these large-scale interventions are likely to intensify, monitoring the drug efficacy will become indispensable. However, studies comparing detection techniques based on sensitivity, fecal egg counts (FEC), feasibility for mass diagnosis and drug efficacy estimates are scarce. Methodology/Principal Findings In the present study, the ether-based concentration, the Parasep Solvent Free (SF), the McMaster and the FLOTAC techniques were compared based on both validity and feasibility for the detection of Trichuris eggs in 100 fecal samples of nonhuman primates. In addition, the drug efficacy estimates of the quantitative techniques were examined using a statistical simulation. Trichuris eggs were found in 47% of the samples. FLOTAC was the most sensitive technique (100%), followed by the Parasep SF (83.0% [95% confidence interval (CI): 82.4–83.6%]) and the ether-based concentration technique (76.6% [95% CI: 75.8–77.3%]). McMaster was the least sensitive (61.7% [95% CI: 60.7–62.6%]) and failed to detect low FEC. The quantitative comparison revealed a positive correlation between the four techniques (Rs = 0.85–0.93; p<0.0001). However, the ether-based concentration technique and the Parasep SF detected significantly fewer eggs than both the McMaster and the FLOTAC (p<0.0083). Overall, the McMaster was the most feasible technique (3.9 min/sample for preparing, reading and cleaning of the apparatus), followed by the ether-based concentration technique (7.7 min/sample) and the FLOTAC (9.8 min/sample). Parasep SF was the least feasible (17.7 min/sample). The simulation revealed that the sensitivity is less important for monitoring drug efficacy and that both FLOTAC and McMaster were reliable estimators.
Conclusions/Significance The results of this study demonstrated that McMaster is a promising technique when making use of FEC to monitor drug efficacy in Trichuris. PMID:19172171
Schroeder, Janina; Peterschroeder, Andreas; Vaske, Bernhard; Butz, Thomas; Barth, Peter; Oldenburg, Olaf; Bitter, Thomas; Burchert, Wolfgang; Horstkotte, Dieter; Langer, Christoph
2009-11-01
In humans with normal hearts, multi-slice computed tomography (MSCT) based volumetry was shown to correlate well with the gold standard, cardiac magnetic resonance imaging (CMR). We correlated both techniques in patients with various degrees of heart failure and reduced ejection fraction (HFREF) resulting from cardiac dilatation. Twenty-four patients with a left ventricular end-diastolic volume (LV-EDV) of ≥ 150 ml measured by angiography underwent MSCT and CMR scanning for left and right ventricular (LV, RV) volumetry. MSCT-based short cardiac axis views were obtained beginning at the cardiac base and advancing to the apex. These were reconstructed in 20 different time windows of the RR-interval (0-95%), serving for identification of end-diastole (ED) and end-systole (ES) and for planimetry. ED and ES volumes and the ejection fraction (EF) were calculated for LV and RV. MSCT-based volumetry was compared with CMR. MSCT-based LV volumetry correlates significantly with CMR as follows: LV-EDV r = 0.94, LV-ESV r = 0.98 and LV-EF r = 0.93, but significantly overestimates LV-EDV and LV-ESV and underestimates EF (P < 0.0001). MSCT-based RV volumetry correlates significantly with CMR as follows: RV-EDV r = 0.79, RV-ESV r = 0.78 and RV-EF r = 0.73, but again significantly overestimates RV-EDV and RV-ESV and underestimates RV-EF (P < 0.0001). When compared with CMR, a continuous overestimation of volumes and underestimation of EF needs to be considered when applying MSCT in HFREF patients.
Handwriting: Feature Correlation Analysis for Biometric Hashes
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Steinmetz, Ralf
2004-12-01
In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2002-06-01
Effective suppression of speckle noise content in interferometric data images can help improve the accuracy and resolution of results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that other algorithms tend to fade.
Deformation Measurement In The Hayward Fault Zone Using Partially Correlated Persistent Scatterers
NASA Astrophysics Data System (ADS)
Lien, J.; Zebker, H. A.
2013-12-01
Interferometric synthetic aperture radar (InSAR) is an effective tool for measuring temporal changes in the Earth's surface. By combining SAR phase data collected at varying times and orbit geometries, with InSAR we can produce high accuracy, wide coverage images of crustal deformation fields. Changes in the radar imaging geometry, scatterer positions, or scattering behavior between radar passes cause the measured radar return to differ, leading to a decorrelation phase term that obscures the deformation signal and prevents the use of large baseline data. Here we present a new physically-based method of modeling decorrelation from the subset of pixels with the highest intrinsic signal-to-noise ratio, the so-called persistent scatterers (PS). This more complete formulation, which includes both phase and amplitude scintillations, better describes the scattering behavior of partially correlated PS pixels and leads to a more reliable selection algorithm. The new method identifies PS pixels using maximum likelihood signal-to-clutter ratio (SCR) estimation based on the joint interferometric stack phase-amplitude distribution. Our PS selection method is unique in that it considers both phase and amplitude; accounts for correlation between all possible pairs of interferometric observations; and models the effect of spatial and temporal baselines on the stack. We use the resulting maximum likelihood SCR estimate as a criterion for PS selection. We implement the partially correlated persistent scatterer technique to analyze a stack of C-band European Remote Sensing (ERS-1/2) interferometric radar data imaging the Hayward Fault Zone from 1995 to 2000. We show that our technique achieves a better trade-off between PS pixel selection accuracy and network density compared to other PS identification methods, particularly in areas of natural terrain. We then present deformation measurements obtained by the selected PS network.
Our results demonstrate that the partially correlated persistent scatterer technique can attain accurate deformation measurements even in areas that suffer decorrelation due to natural terrain. The accuracy of phase unwrapping and subsequent deformation estimation on the spatially sparse PS network depends on both pixel selection accuracy and the density of the network. We find that many additional pixels can be added to the PS list if we are able to correctly identify and add those in which the scattering mechanism exhibits partial, rather than complete, correlation across all radar scenes.
Using Dispersed Modes During Model Correlation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Hathcock, Megan L.
2017-01-01
The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease its overall cost. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using commercial model correlation software. In the three examples shown in this paper, this dispersion-based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
Three-dimensional mapping of microcircuit correlation structure
Cotton, R. James; Froudarakis, Emmanouil; Storer, Patrick; Saggau, Peter; Tolias, Andreas S.
2013-01-01
Great progress has been made toward understanding the properties of single neurons, yet the principles underlying interactions between neurons remain poorly understood. Given that connectivity in the neocortex is locally dense through both horizontal and vertical connections, it is of particular importance to characterize the activity structure of local populations of neurons arranged in three dimensions. However, techniques for simultaneously measuring microcircuit activity are lacking. We developed an in vivo 3D high-speed, random-access two-photon microscope that is capable of simultaneous 3D motion tracking. This allows imaging from hundreds of neurons at several hundred Hz, while monitoring tissue movement. Given that motion will induce common artifacts across the population, accurate motion tracking is absolutely necessary for studying population activity with random-access based imaging methods. We demonstrate the potential of this imaging technique by measuring the correlation structure of large populations of nearby neurons in the mouse visual cortex, and find that the microcircuit correlation structure is stimulus-dependent. Three-dimensional random access multiphoton imaging with concurrent motion tracking provides a novel, powerful method to characterize the microcircuit activity in vivo. PMID:24133414
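Once simultaneous traces from the imaged population are in hand, the "correlation structure" measured above is the matrix of pairwise Pearson correlations between neurons. A minimal sketch (the function name is an assumption, not the authors' pipeline, which also handles motion correction and stimulus conditioning):

```python
import numpy as np

def correlation_structure(activity):
    """Pairwise correlation structure of a recorded population.

    activity : (n_neurons, n_frames) array of activity traces.
    Returns the (n_neurons, n_neurons) matrix of Pearson correlations.
    """
    return np.corrcoef(activity)
```

Comparing this matrix across stimulus conditions is how a stimulus-dependent correlation structure, as reported here, would be detected.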
The coordination of ploidy and cell size differs between cell layers in leaves
Katagiri, Yohei; Hasegawa, Junko; Fujikura, Ushio; Hoshino, Rina; Matsunaga, Sachihiro; Tsukaya, Hirokazu
2016-01-01
Growth and developmental processes are occasionally accompanied by multiple rounds of DNA replication, known as endoreduplication. Coordination between endoreduplication and cell size regulation often plays a crucial role in proper organogenesis and cell differentiation. Here, we report that the level of correlation between ploidy and cell volume is different in the outer and inner cell layers of leaves of Arabidopsis thaliana using a novel imaging technique. Although there is a well-known, strong correlation between ploidy and cell volume in pavement cells of the epidermis, this correlation was extremely weak in palisade mesophyll cells. Induction of epidermis cell identity based on the expression of the homeobox gene ATML1 in mesophyll cells enhanced the level of correlation between ploidy and cell volume to near that of wild-type epidermal cells. We therefore propose that the correlation between ploidy and cell volume is regulated by cell identity. PMID:26903507
NASA Astrophysics Data System (ADS)
Biswas, Sayan; Qiao, Li
2017-03-01
A detailed statistical assessment of seedless velocity measurement using Schlieren Image Velocimetry (SIV) was performed using the open-source Robust Phase Correlation (RPC) algorithm. A well-known flow field, an axisymmetric turbulent helium jet, was analyzed in the near and intermediate regions (0 ≤ x/d ≤ 20) for two different Reynolds numbers, Re_d = 11,000 and Re_d = 22,000, using schlieren with a horizontal knife-edge, schlieren with a vertical knife-edge and the shadowgraph technique, and the resulting velocity fields from the SIV techniques were compared to traditional Particle Image Velocimetry (PIV) measurements. A novel, inexpensive, easy-to-set-up two-camera SIV technique was demonstrated to measure a high-velocity turbulent jet, with jet exit velocities of 304 m/s (Mach = 0.3) and 611 m/s (Mach = 0.6), respectively. Several image restoration and enhancement techniques were tested to improve the signal-to-noise ratio (SNR) in schlieren and shadowgraph images. Processing and post-processing parameters for the SIV techniques were examined in detail. A quantitative comparison between the self-seeded SIV techniques and traditional PIV was made using correlation statistics. While the resulting flow fields from schlieren with a horizontal knife-edge and shadowgraph showed excellent agreement with PIV measurements, schlieren with a vertical knife-edge performed poorly. The performance of spatial cross-correlations at different jet locations using the SIV techniques and PIV was evaluated. Turbulence quantities such as turbulence intensity, mean velocity fields and Reynolds shear stress heavily influenced the spatial correlations and correlation-plane SNR. Several performance metrics, such as the primary peak ratio (PPR), peak to correlation energy (PCE), and the probability distributions of signal and noise, were used to compare the capability and potential of the different SIV techniques.
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value of the correlation image. The shift resolution is given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption, we implement a subpixel shifting method based on the FFT: subpixel shifts of the original images can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is very time consuming, because evaluating each candidate shift requires new calculations. The algorithm, however, is highly parallelizable and very well suited to high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining to subpixel precision using the technique described above; we consider this a `brute force' method. We present a benchmark of the algorithm consisting of a first pixel-resolution approach followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in a few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
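The `brute force' method described above — integer-pixel registration from the FFT cross-correlation peak, then refinement by multiplying the DFT by linear phase ramps — can be sketched in a few lines of CPU code; the GPU versions benchmarked here parallelize the same search loop. Function names and the grid-search parameters are illustrative assumptions:

```python
import numpy as np

def cross_corr_peak(a, b):
    # integer-pixel shift (to apply to b so it aligns with a), from the
    # peak of the FFT-based cross correlation
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = c.shape
    return (iy if iy <= ny // 2 else iy - ny,
            ix if ix <= nx // 2 else ix - nx)

def subpixel_shift(img, dy, dx):
    # shift an image by a (possibly fractional) amount by multiplying its
    # DFT by a linear phase ramp
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def register_subpixel(a, b, step=0.1, span=1.0):
    # coarse integer estimate, then brute-force refinement on a subpixel grid
    dy0, dx0 = cross_corr_peak(a, b)
    best, best_score = (dy0, dx0), -np.inf
    for ddy in np.arange(-span, span + step / 2, step):
        for ddx in np.arange(-span, span + step / 2, step):
            score = np.sum(a * subpixel_shift(b, dy0 + ddy, dx0 + ddx))
            if score > best_score:
                best_score, best = score, (dy0 + ddy, dx0 + ddx)
    return best
```

Each candidate shift costs a fresh FFT pair, which is why the search is expensive and why the independent candidates map naturally onto GPU cores.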
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG sources images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. Integration of the FDD and CWT methods overcomes their individual deficiencies and takes advantage of their unique capabilities. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, in contrast, operates in the time-frequency domain, decomposing a signal at different frequencies, and determines the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long- or short-duration records. For this purpose, the modified Morlet wavelet having two free parameters is used. The optimum values of these two parameters are obtained by employing a technique which minimizes the entropy of the wavelet coefficients matrix. The capabilities of the novel FDD-Wavelet method in the system identification of various dynamic systems with regular or irregular distribution of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
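The damping-estimation half of this approach rests on the fact that, at a modal frequency, the wavelet-transform amplitude of a decaying response falls off as exp(-ζωₙt), so the damping ratio can be read from the slope of the log-amplitude. Below is a single-scale sketch of that idea using a standard Morlet wavelet rather than the paper's modified two-parameter wavelet, and without the CACF averaging — both are simplifications, and all names are assumptions:

```python
import numpy as np

def damping_from_cwt(signal, fs, f_mode, omega0=6.0):
    """Estimate a modal damping ratio from the decay of the Morlet wavelet
    amplitude at the modal frequency f_mode (single scale, illustrative)."""
    n = len(signal)
    tw = (np.arange(n) - n // 2) / fs
    scale = omega0 / (2 * np.pi * f_mode)   # scale whose center frequency is f_mode
    wavelet = np.exp(1j * omega0 * tw / scale - (tw / scale) ** 2 / 2)
    amp = np.abs(np.convolve(signal, np.conj(wavelet[::-1]), mode='same'))
    t = np.arange(n) / fs
    half = int(4 * scale * fs)              # discard wavelet edge effects
    t, amp = t[half:n - half], amp[half:n - half]
    sel = amp > amp.max() * 0.05            # discard the decayed, noisy tail
    slope = np.polyfit(t[sel], np.log(amp[sel]), 1)[0]
    return -slope / (2 * np.pi * f_mode)    # amp ~ exp(-zeta * 2*pi*f_mode * t)
```

The paper's entropy-minimized modified Morlet parameters refine exactly this trade-off between time and frequency resolution.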
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stützer, Kristin; Haase, Robert; Exner, Florian
2016-09-15
Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.
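Of the three evaluation techniques, the landmark-based one is the simplest to sketch: map each annotated landmark through the DIR displacement field and report the mean residual distance to its annotated target. The names and the displacement-function interface below are assumptions, not the study's software:

```python
import numpy as np

def mean_landmark_error(moving_pts, fixed_pts, displacement):
    """Mean Euclidean distance between DIR-mapped landmarks and targets.

    moving_pts, fixed_pts : (n, 3) arrays of corresponding landmarks [mm]
    displacement : callable mapping a 3D point to its DIR displacement vector
    """
    mapped = moving_pts + np.apply_along_axis(displacement, 1, moving_pts)
    return np.mean(np.linalg.norm(mapped - fixed_pts, axis=1))
```

An average of 2.9 mm from this metric, as reported above, is on the order of the CT voxel spacing.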
NASA Astrophysics Data System (ADS)
Harding, J. W.; Small, J. W.; James, D. A.
2007-12-01
Recent analysis of elite-level half-pipe snowboard competition has revealed a number of sport-specific key performance variables (KPVs) that correlate well with score. Information on these variables is difficult to acquire and analyse, relying on collection and labour-intensive manual post-processing of video data. This paper presents the use of inertial sensors as a user-friendly alternative and subsequently implements signal processing routines to ultimately provide automated, sport-specific feedback to coaches and athletes. The author has recently shown that the KPVs of total air-time (TAT) and average degree of rotation (ADR) achieved during elite half-pipe snowboarding competition show strong correlation with an athlete's subjectively judged score. Utilising Micro-Electro-Mechanical System (MEMS) sensors (tri-axial accelerometers), this paper demonstrates that air-time (AT) achieved during half-pipe snowboarding can be detected and calculated accurately using basic signal processing techniques. Characterisation of the variations in aerial acrobatic manoeuvres and the associated calculation of the exact degree of rotation (DR) achieved is a likely extension of this research. The technique developed used a two-pass method to detect locations of half-pipe snowboard runs using power density in the frequency domain and subsequently utilises a threshold-based search algorithm in the time domain to calculate air-times associated with individual aerial acrobatic manoeuvres. This technique correctly identified the air-times of 100 percent of aerial acrobatic manoeuvres within each half-pipe snowboarding run (n = 92 aerial acrobatic manoeuvres from 4 subjects) and displayed a very strong correlation with a video based reference standard for air-time calculation (r = 0.78 +/- 0.08; p value < 0.0001; SEE = 0.08 ×/÷ 1.16; mean bias = -0.03 +/- 0.02s) (value +/- or ×/÷ 95% CL).
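The second pass of the technique — a threshold-based search in the time domain — amounts to finding runs of samples where the accelerometer magnitude drops toward 0 g (free fall) and converting their lengths to seconds. A minimal sketch with assumed parameter values, not the author's implementation:

```python
import numpy as np

def detect_air_times(accel_mag, fs, threshold=0.3, min_samples=5):
    """Threshold-based search for free-fall windows in an |acceleration| trace.

    accel_mag : acceleration magnitude in g (near 0 g while airborne)
    fs        : sampling rate in Hz
    Returns a list of air-times in seconds, one per detected manoeuvre.
    """
    airborne = np.asarray(accel_mag) < threshold
    air_times, run = [], 0
    for flag in airborne:
        if flag:
            run += 1
        elif run:
            if run >= min_samples:        # ignore sub-window glitches
                air_times.append(run / fs)
            run = 0
    if run >= min_samples:
        air_times.append(run / fs)
    return air_times
```

The first pass (run detection via frequency-domain power density) would restrict where this search is applied.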
Abdel Rahman, Afaf S; Fahim, Nehal M A; El Sayed, Abeer A; El Hady, Soha A R; Ahmad, Yasser S
2005-01-01
Renal transplantation, in most countries, is based on human leukocyte antigen (HLA) matching of the donor kidney with the recipient. Traditional human leukocyte antigen matching is based on defining human leukocyte antigen specificities by antibodies utilizing cytotoxicity crossmatch techniques. Newer techniques have emerged, which challenge the accuracy of serological typing and crossmatching. We compared the results of the standard complement-dependent cytotoxicity crossmatch (CDCXM) with the anti-human globulin augmented cytotoxicity (AHG-CDC), and flow cytometry crossmatch (FCXM) for the detection of anti-HLA antibodies in 150 pre-transplant patients. The development of post-transplantation sensitization was screened utilizing these three techniques within two weeks post-operative and correlated with rejection episodes. Comparison between the results of CDCXM and AHG-CDC in 150 recipients revealed no significant correlation (P>0.05). When comparing these results with that of FCXM in 50 recipients, a significant correlation was shown (P<0.05). Relative to CDCXM, the sensitivity of AHG-CDC was 100%, specificity 97.4%, positive predictive value 92.3%, and negative predictive value 100%. On the other hand, the sensitivity of FCXM was 100%, specificity 76.3%, positive predictive value 57.1%, and negative predictive value 100%. According to the results of CDCXM, AHG-CDC, and FCXM, no difference was detected between pre- and post-transplant anti-HLA sensitization within two weeks after the operation. Patients with negative cytotoxicity crossmatch (CDCXM and AHG-CDC) and positive FCXM may have an increased risk of early graft loss, which may represent a relative contraindication to transplantation. Given the important theoretical advantages of FCXM over the CDCXM, further testing of the clinical relevance is warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristensen, L.; Dons, T.; Schioler, P.
1995-11-01
Correlation of wireline log data from the North Sea chalk reservoirs is frequently hampered by rather subtle log patterns in the chalk section due to the apparent monotonous nature of the chalk sediments, which may lead to ambiguous correlations. This study deals with a correlation technique based on an integration of biostratigraphic data, seismic interpretation, and wireline log correlation; this technique aims at producing a consistent reservoir subdivision that honors both the well data and the seismic data. This multidisciplinary approach has been used to subdivide and correlate the Maastrichtian chalk in the Dan field. The biostratigraphic subdivision is based on a new detailed dinoflagellate study of core samples from eight wells. Integrating the biostratigraphic results with three-dimensional seismic data allows recognition of four stratigraphic units within the Maastrichtian, bounded by assumed chronostratigraphic horizons. This subdivision is further refined by adding a seismic horizon and four horizons from wireline log correlations, establishing a total of nine reservoir units. The approximate chronostratigraphic nature of these units provides an improved interpretation of the depositional and structural patterns in this area. The three upper reservoir units pinch out and disappear in a northeasterly direction across the field. We interpret this stratal pattern as reflecting a relative sea level fall or regional basinal subsidence during the latest Maastrichtian, possibly combined with local synsedimentary uplift due to salt tectonics. Isochore maps indicate that the underlying six non-wedging units are unaffected by salt tectonics.
Early Oscillation Detection for Hybrid DC/DC Converter Fault Diagnosis
NASA Technical Reports Server (NTRS)
Wang, Bright L.
2011-01-01
This paper describes a novel fault detection technique for hybrid DC/DC converter oscillation diagnosis. The technique is based on the principles of feedback control loop oscillation and RF signal modulation, and is realized using signal spectral analysis. Real-circuit simulation and analytical study reveal critical factors of the oscillation and indicate significant correlations between the spectral analysis method and the gain/phase margin method. A stability diagnosis index (SDI) is developed as a quantitative measure to accurately assign a degree of stability to the DC/DC converter. This technique is capable of detecting oscillation at an early stage without interfering with the DC/DC converter's normal operation and without the limitations of probing the converter.
Comparison of two target classification techniques
NASA Astrophysics Data System (ADS)
Chen, J. S.; Walton, E. K.
1986-01-01
Radar target classification techniques based on backscatter measurements in the resonance region (1.0-20.0 MHz) are discussed. Attention is given to two novel methods currently being tested at the radar range of Ohio State University: (1) the nearest neighbor (NN) algorithm for determining the radar cross section (RCS) magnitude and range-corrected phase at various operating frequencies; and (2) an inverse Fourier transformation of the complex multifrequency radar returns to the time domain, followed by cross-correlation analysis. The performance of the two techniques is compared as a function of signal-to-error noise ratio for different types of processing, and the results of the comparison are discussed in detail.
Opto-electronic characterization of third-generation solar cells
Jenatsch, Sandra
2018-01-01
We present an overview of opto-electronic characterization techniques for solar cells, including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived from charge drift-diffusion simulations of solar cells with common performance limitations. We investigate how nonidealities such as charge-injection barriers, traps and low mobilities manifest themselves in each of the studied characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce the measured results of 9 different experimental techniques, and parameter correlation is minimized by combining the various techniques, thereby identifying a route to comprehensive and accurate parameter extraction. PMID:29707069
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit-plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure and, hence, their higher pel correlation leads to a greater removal of image redundancy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hongchang, E-mail: hongchang.wang@diamond.ac.uk; Kashyap, Yogesh; Sawhney, Kawal
2016-03-21
X-ray dark-field contrast tomography can provide important supplementary information about the interior of a sample beyond conventional absorption tomography. Recently, the X-ray speckle-based technique has been proposed to provide qualitative two-dimensional dark-field imaging with a simple experimental arrangement. In this letter, we deduce a relationship between the second moment of the scattering-angle distribution and the cross-correlation degradation of the speckle, and establish a quantitative basis for X-ray dark-field tomography using a single-directional speckle scanning technique. In addition, the phase contrast images can be simultaneously retrieved, permitting tomographic reconstruction, which yields enhanced contrast in weakly absorbing materials. Such a complementary tomography technique allows systematic investigation of complex samples containing both soft and hard materials.
Potential digitization/compression techniques for Shuttle video
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B. H.
1978-01-01
The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
Thermo-mechanical toner transfer for high-quality digital image correlation speckle patterns
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Zappa, Emanuele; Matta, Fabio; Sutton, Michael A.
2015-12-01
The accuracy and spatial resolution of full-field deformation measurements performed through digital image correlation are greatly affected by the frequency content of the speckle pattern, which can be effectively controlled using particles with well-defined and consistent shape, size and spacing. This paper introduces a novel toner-transfer technique to impress a well-defined and repeatable speckle pattern on plane and curved surfaces of metallic and cement composite specimens. The speckle pattern is numerically designed, printed on paper using a standard laser printer, and transferred onto the measurement surface via a thermo-mechanical process. The tuning procedure to compensate for the difference between designed and toner-transferred actual speckle size is presented. Based on this evidence, the applicability of the technique is discussed with respect to surface material, dimensions and geometry. Proof of concept of the proposed toner-transfer technique is then demonstrated for the case of a quenched and partitioned welded steel plate subjected to uniaxial tensile loading, and for an aluminum plate exposed to temperatures up to 70% of the melting point of aluminum and past the melting point of typical printer toner powder.
Yang, M. H.; Li, J. H.; Liu, B. X.
2016-01-01
Based on a newly constructed n-body potential for the Ni-Ti-Mo system, molecular dynamics and Monte Carlo simulations predict an energetically favored glass-formation region and an optimal composition sub-region with the highest glass-forming ability. To compare the production techniques of liquid melt quenching (LMQ) and solid-state amorphization (SSA), the inherent hierarchical structure and its effect on mechanical properties were clarified via atomistic simulations. It is revealed that the two production techniques exhibit no pronounced differences in local atomic structure and mechanical behavior, although LMQ produces a relatively more ordered structure and a higher intrinsic strength. Meanwhile, the dominant short-range-order clusters of Ni-Ti-Mo metallic glasses obtained by LMQ and SSA are found to be similar. By analyzing the structural evolution under uniaxial tensile deformation, it is concluded that the gradual collapse of the spatial structure network is intimately correlated with the mechanical response of metallic glasses and acts as a structural signature of the initiation and propagation of shear bands. PMID:27418115
NASA Astrophysics Data System (ADS)
Clarke, James; Cheng, Kwan; Shindell, Orrin; Wang, Exing
We have designed and constructed a high-throughput electrofusion chamber and an incubator to fabricate giant unilamellar vesicles (GUVs) consisting of high-melting lipids, low-melting lipids, cholesterol, and both ordered- and disordered-phase-sensitive fluorescent probes (DiIC12, dehydroergosterol and BODIPY-Cholesterol). GUVs were formed in a three-stage pulse-sequence electrofusion process with voltages ranging from 50 mVpp to 2.2 Vpp and frequencies from 5 Hz to 10 Hz. Steady-state and time-correlated single-photon counting (TCSPC) fluorescence lifetime (FLIM) based confocal and/or multi-photon microscopic techniques were used to characterize phase-separated lipid domains in GUVs. Confocal imaging measures the probe concentration and the chemical environment of the system. TCSPC techniques determine the chemical environment through the perturbation of the fluorescence lifetimes of the probes in the system. These techniques will be applied to investigate protein-lipid interactions involving domain formation. Specifically, the mechanisms governing lipid domain formation in the above systems, which mimic the lipid rafts in cells, will be explored. Murchison Fellowship at Trinity University.
García-Montes, José M; Cangas, Adolfo; Pérez-Alvarez, M; Fidalgo, Angel M; Gutiérrez, Olga
2006-09-01
This study examines the relationship between a predisposition to hallucinations and meta-cognitive variables and thought-control techniques, controlling for the possible effect of anxiety. In order to do so, we start out with the hypothesis that anxiety does not, in itself, explain the association between meta-cognitions and a predisposition to auditory and visual hallucinations. A within-participants correlational design was employed. Four psychometric tests relating to predisposition to hallucinations, anxiety, meta-cognitions and thought-control techniques were administered to 150 participants. It was found that, after controlling for participants' anxiety levels, the 'loss of cognitive confidence' factor predicted the score on the scale of predisposition to both auditory and visual hallucinations. Thought-control strategies based on worry were also found to be predictive of a greater predisposition to hallucinations, regardless of whether or not participants' anxiety level was controlled. Meta-cognitive variables of cognitive confidence and thought control through worry are positively associated with a predisposition to hallucinations. The correlational nature of the design does not allow inferences about causal relationships.
NASA Technical Reports Server (NTRS)
Vazirani, P.
1995-01-01
The process of combining telemetry signals received at multiple antennas, commonly referred to as arraying, can be used to improve communication link performance in the Deep Space Network (DSN). By coherently adding telemetry from multiple receiving sites, arraying produces an enhancement in signal-to-noise ratio (SNR) over that achievable with any single antenna in the array. A number of different techniques for arraying have been proposed and their performances analyzed in past literature. These analyses have compared different arraying schemes under the assumption that the signals contain additive white Gaussian noise (AWGN) and that the noise observed at distinct antennas is independent. In situations where an unwanted background body is visible to multiple antennas in the array, however, the assumption of independent noises is no longer applicable. A planet with significant radiation emissions in the frequency band of interest can be one such source of correlated noise. For example, during much of Galileo's tour of Jupiter, the planet will contribute significantly to the total system noise at various ground stations. This article analyzes the effects of correlated noise on two arraying schemes currently being considered for DSN applications: full-spectrum combining (FSC) and complex-symbol combining (CSC). A framework is presented for characterizing the correlated noise based on physical parameters, and the impact of the noise correlation on the array performance is assessed for each scheme.
Yu, Xue; Lee, Elaine Yuen Phin; Lai, Vincent; Chan, Queenie
2014-07-01
To evaluate the correlation between standardized uptake value (SUV) (tissue metabolism) and apparent diffusion coefficient (ADC) (water diffusivity) in peritoneal metastases. Patients with peritoneal dissemination detected on (18)F-fluorodeoxyglucose positron emission tomography combined with computed tomography (FDG-PET/CT) were prospectively recruited for MRI examinations with informed consent and the study was approved by the local Institutional Review Board. FDG-PET/CT, diffusion-weighted imaging (DWI), MRI, and DWI/MRI images were independently reviewed by two radiologists based on visual analysis. SUVmax/SUVmean and ADCmin/ADCmean were obtained manually by drawing ROIs over the peritoneal metastases on FDG-PET/CT and DWI, respectively. Diagnostic characteristics of each technique were evaluated. Pearson's coefficient and McNemar and Kappa tests were used for statistical analysis. Eight patients were recruited for this prospective study and 34 peritoneal metastases were evaluated. ADCmean was significantly and negatively correlated with SUVmax (r = -0.528, P = 0.001) and SUVmean (r = -0.548, P = 0.001). ADCmin had similar correlation with SUVmax (r = -0.508, P = 0.002) and SUVmean (r = -0.513, P = 0.002). DWI/MRI had high diagnostic performance (accuracy = 98%) comparable to FDG-PET/CT, in peritoneal metastasis detection. Kappa values were excellent for all techniques. There was a significant inverse correlation between SUV and ADC. © 2013 Wiley Periodicals, Inc.
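The reported SUV-ADC relationship is a plain Pearson product-moment correlation; a minimal sketch with purely synthetic numbers (not the study's data) shows the computation and the expected negative sign when higher metabolism pairs with lower diffusivity:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

# illustrative values only, not the study's measurements: higher SUV
# paired with lower ADC yields a negative coefficient, as reported
suv = [12.1, 9.4, 7.8, 6.2, 4.9, 3.5]
adc = [0.71, 0.85, 0.92, 1.10, 1.24, 1.41]
r = pearson_r(suv, adc)
```

With these monotone synthetic values the coefficient is strongly negative, mirroring the inverse correlation described in the abstract (which reports a weaker r of about -0.5 on real lesions).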
Deville, Sarah; Penjweini, Rozhin; Smisdom, Nick; Notelaers, Kristof; Nelissen, Inge; Hooyberghs, Jef; Ameloot, Marcel
2015-10-01
Novel insights in nanoparticle (NP) uptake routes of cells, their intracellular trafficking and subcellular targeting can be obtained through the investigation of their temporal and spatial behavior. In this work, we present the application of image (cross-) correlation spectroscopy (IC(C)S) and single particle tracking (SPT) to monitor the intracellular dynamics of polystyrene (PS) NPs in the human lung carcinoma A549 cell line. The ensemble kinetic behavior of NPs inside the cell was characterized by temporal and spatiotemporal image correlation spectroscopy (TICS and STICS). Moreover, a more direct interpretation of the diffusion and flow detected in the NP motion was obtained by SPT by monitoring individual NPs. Both techniques demonstrate that the PS NP transport in A549 cells is mainly dependent on microtubule-assisted transport. By applying spatiotemporal image cross-correlation spectroscopy (STICCS), the correlated motions of NPs with the early endosomes, late endosomes and lysosomes are identified. PS NPs were equally distributed among the endolysosomal compartment during the time interval of the experiments. The cotransport of the NPs with the lysosomes is significantly larger compared to the other cell organelles. In the present study we show that the complementarity of ICS-based techniques and SPT enables a consistent elaborate model of the complex behavior of NPs inside biological systems. Copyright © 2015 Elsevier B.V. All rights reserved.
Alpha trimmed correlation for touchless finger image mosaicing
NASA Astrophysics Data System (ADS)
Rao, Shishir P.; Rajendran, Rahul; Agaian, Sos S.; Mulawka, Marzena Mary Ann
2016-05-01
In this paper, a novel technique to mosaic multiview contactless finger images is presented. The technique makes use of different correlation methods, such as the alpha-trimmed correlation, Pearson's correlation [1], Kendall's correlation [2], and Spearman's correlation [2], to combine multiple views of the finger. The key contributions of the algorithm are that it: 1) stitches images more accurately, 2) provides better image fusion, 3) has a better visual effect on the overall image, and 4) is more reliable. Extensive computer simulations show that the proposed method produces stitched images better than or comparable to several state-of-the-art methods, such as those presented by Feng Liu [3], K Choi [4], H Choi [5], and G Parziale [6]. In addition, we compare various correlation techniques with the correlation method mentioned in [3] and analyze the output. In the future, this method can be extended to obtain a 3D model of the finger from multiple views, and to help generate scenic panoramic images and underwater 360-degree panoramas.
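The abstract does not define the alpha-trimmed correlation in detail; one plausible sketch, assuming it discards the alpha fraction of sample pairs with the largest deviations before computing a normalized correlation (a guess at the robustifying idea, not the paper's exact formula), is:

```python
import numpy as np

def alpha_trimmed_corr(a, b, alpha=0.1):
    """Normalized correlation computed after discarding the alpha
    fraction of sample pairs with the largest joint deviation from the
    means. Assumed formulation for illustration only."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    dev = np.abs(a - a.mean()) + np.abs(b - b.mean())
    keep = np.argsort(dev)[: int(round(len(a) * (1 - alpha)))]
    ak = a[keep] - a[keep].mean()
    bk = b[keep] - b[keep].mean()
    denom = np.sqrt((ak**2).sum() * (bk**2).sum())
    return float((ak * bk).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = x.copy()
y[:5] += 50.0  # a few gross outliers, e.g. specular highlights
r_plain = float(np.corrcoef(x, y)[0, 1])
r_trim = alpha_trimmed_corr(x, y, alpha=0.05)
```

Trimming the outlier pairs restores a near-perfect score where the plain coefficient collapses, which is the kind of robustness a stitching score needs on noisy finger imagery.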
Bairy, Santhosh Kumar; Suneel Kumar, B V S; Bhalla, Joseph Uday Tej; Pramod, A B; Ravikumar, Muttineni
2009-04-01
c-Src kinase plays an important role in cell growth and differentiation, and its inhibitors can be useful for the treatment of various diseases, including cancer, osteoporosis, and metastatic bone disease. Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were carried out on quinazoline derivatives inhibiting c-Src kinase. Molecular field analysis (MFA) models with four different alignment techniques, namely GLIDE, GOLD, LIGANDFIT and least-squares based methods, were developed. The GLIDE-based MFA model showed the best results (leave-one-out cross-validation correlation coefficient r(2)(cv) = 0.923 and non-cross-validation correlation coefficient r(2) = 0.958) when compared with the other models. These results help us understand the nature of the descriptors required for the activity of these compounds and thereby provide guidelines for designing novel and potent c-Src kinase inhibitors.
Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary
2015-02-07
FINCH holographic fluorescence microscopy creates high-resolution, super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system is that it allows simultaneous real-time capture of widefield and holographic images, or of confocal and confocal holographic images, for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution.
NASA Astrophysics Data System (ADS)
Waugh, Rachael C.; Dulieu-Barton, Janice M.; Quinn, S.
2015-03-01
Thermoelastic stress analysis (TSA) is an established active thermographic approach which uses the thermoelastic effect to correlate the temperature change that occurs as a material is subjected to elastic cyclic loading with the sum of the principal stresses on the surface of the component. Digital image correlation (DIC) tracks features on the surface of a material to establish the displacement field of a component subjected to load, which can then be used to calculate the strain field. The application of both DIC and TSA to a composite plate representative of aircraft secondary structure, subjected to resonant-frequency loading using a portable loading device (i.e. 'remote loading'), is described. Laboratory-based loading for TSA and DIC is typically imparted using a test machine; in the current work, however, a vibration loading system is used to excite the component of interest at its resonant frequency, which enables TSA and DIC to be carried out. The accuracy of the measurements made under remote loading with both optical techniques is discussed, and the data are compared to extract complementary information from the two techniques. This work forms a step towards a combined strain-based non-destructive evaluation procedure able to identify and quantify the effect of defects more fully, particularly when examining component performance in service applications.
Ariza, O; Gilchrist, S; Widmer, R P; Guy, P; Ferguson, S J; Cripton, P A; Helgason, B
2015-01-21
Current screening techniques based on areal bone mineral density (aBMD) measurements are unable to identify the majority of people who sustain hip fractures. Biomechanical examination of such events may help determine what predisposes a hip to be susceptible to fracture. Recently, drop-tower simulations of in-vitro sideways falls have allowed the study of the mechanical response of the proximal human femur at realistic impact speeds. This technique has created an opportunity to validate explicit finite element (FE) models against dynamic test data. This study compared the outcomes of 15 human femoral specimens fractured using a drop tower with complementary specimen-specific explicit FE analysis. The correlation coefficient and root mean square error (RMSE) were moderate for the whole-bone stiffness comparison (R(2) = 0.3476 and 22.85%, respectively). No correlation was found between the experimentally and computationally predicted peak forces; however, the energy absorption comparison produced moderate correlation and RMSE (R(2) = 0.4781 and 29.14%, respectively). By comparing predicted strain maps to high-speed video data, we demonstrated the ability of the FE models to detect vulnerable portions of the bones. Based on our observations, we conclude that there is a need to extend the current apparent-level material models for bone to cover higher strain rates than previously tested experimentally. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vardi, Roni; Goldental, Amir; Sardi, Shira; Sheinin, Anton; Kanter, Ido
2016-11-01
The increasing number of recording electrodes enhances the capability of capturing the network’s cooperative activity, however, using too many monitors might alter the properties of the measured neural network and induce noise. Using a technique that merges simultaneous multi-patch-clamp and multi-electrode array recordings of neural networks in-vitro, we show that the membrane potential of a single neuron is a reliable and super-sensitive probe for monitoring such cooperative activities and their detailed rhythms. Specifically, the membrane potential and the spiking activity of a single neuron are either highly correlated or highly anti-correlated with the time-dependent macroscopic activity of the entire network. This surprising observation also sheds light on the cooperative origin of neuronal burst in cultured networks. Our findings present an alternative flexible approach to the technique based on a massive tiling of networks by large-scale arrays of electrodes to monitor their activity.
Exploitation of ERTS-1 imagery utilizing snow enhancement techniques
NASA Technical Reports Server (NTRS)
Wobber, F. J.; Martin, K. R.
1973-01-01
Photogeological analysis of ERTS-simulation and ERTS-1 imagery of snow-covered terrain within the ERAP Feather River site and the New England (ERTS) test area provided new fracture detail which does not appear on available geological maps. Comparative analysis of snow-free ERTS-1 images demonstrated that MSS Bands 5 and 7 supply the greatest amount of geological fracture detail. Interpretation of the first snow-covered ERTS-1 images, in correlation with ground snow-depth data, indicates that a heavy blanket of snow (more than 9 inches) accentuates major structural features, while a light "dusting" (less than 1 inch) accentuates more subtle topographic expressions. An effective mail-based method for acquiring timely ground-truth (snow-depth) information was established; it provides a ready correlation of fracture detail with snow depth so as to establish the working limits of the technique. The method is both efficient and inexpensive compared with the cost of similarly scaled direct field observations.
Polynuclear aromatic hydrocarbon analysis using the synchronous scanning luminoscope
NASA Astrophysics Data System (ADS)
Hyfantis, George J., Jr.; Teglas, Matthew S.; Wilbourn, Robert G.
2001-02-01
The Synchronous Scanning Luminoscope (SSL) is a field-portable, synchronous luminescence spectrofluorometer that was developed for on-site analysis of contaminated soil and ground water. The SSL is capable of quantitative analysis of total polynuclear aromatic hydrocarbons (PAHs) using phosphorescence and fluorescence techniques, with a high correlation to laboratory data as illustrated by this study. The SSL is also capable of generating benzo(a)pyrene equivalency results, based on seven carcinogenic PAHs and Navy risk numbers, again with a high correlation to laboratory data. These techniques allow rapid field assessments of total PAHs and benzo(a)pyrene-equivalent concentrations. The Luminoscope is capable of detecting total PAHs down to the parts-per-billion range. This paper describes standard field methods for using the SSL and presents the results of field/laboratory testing of PAHs. SSL results from two different hazardous waste sites are discussed.
3D displacement field measurement with correlation based on the micro-geometrical surface texture
NASA Astrophysics Data System (ADS)
Bubaker-Isheil, Halima; Serri, Jérôme; Fontaine, Jean-François
2011-07-01
Image correlation methods are widely used in experimental mechanics to obtain displacement field measurements. Currently, these methods are applied using digital images of the initial and deformed surfaces sprayed with black or white paint. Speckle patterns are then captured and the correlation is performed with a high degree of accuracy, to an order of 0.01 pixels. In 3D, however, stereo-correlation leads to a lower degree of accuracy. Correlation techniques are based on the search for a sub-image (or pattern) displacement field. The work presented in this paper introduces a new correlation-based approach for 3D displacement field measurement that uses an additional 3D laser scanner and a CMM (coordinate measuring machine). Unlike most existing methods that require the presence of markers on the observed object (such as black speckle, grids or random patterns), this approach relies solely on micro-geometrical surface textures such as waviness, roughness and aperiodic random defects. The latter are assumed to remain sufficiently small, thus providing an adequate estimate of the particle displacement. The proposed approach can be used in a wide range of applications, such as sheet metal forming with large strains. The method proceeds by first obtaining point clouds using the 3D laser scanner mounted on a CMM. These points are used to create 2D maps that are then correlated. In this respect, various criteria have been investigated for creating maps consisting of patterns that facilitate the correlation procedure. Once the maps are created, the correlation between the two configurations (initial and moved) is carried out using traditional methods developed for field measurements. Measurement validation was conducted through experiments in 2D and 3D, with good results for rigid-body displacements in 2D and 3D and for 2D rotations.
Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.
Ji, L; Danuser, G
2005-12-01
We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
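The key step, summing correlation score functions over several consecutive frame pairs before locating the global maximum, can be sketched in one dimension as follows (synthetic signals; window sizes, shifts and noise level are illustrative):

```python
import numpy as np

def corr_scores(block, search, max_shift):
    """Normalized correlation of `block` against `search` at every
    candidate shift in [-max_shift, max_shift]."""
    n = len(block)
    b = block - block.mean()
    scores = []
    for s in range(2 * max_shift + 1):
        seg = search[s: s + n]
        segc = seg - seg.mean()
        den = np.sqrt((b**2).sum() * (segc**2).sum())
        scores.append(float(b @ segc / den) if den else 0.0)
    return np.array(scores)

def multiframe_shift(pairs, max_shift):
    """Integrate (sum) the score functions over all frame pairs, then
    take the global maximum: the multi-frame idea described above."""
    total = sum(corr_scores(b, s, max_shift) for b, s in pairs)
    return int(np.argmax(total)) - max_shift

rng = np.random.default_rng(1)
true_shift, max_shift = 2, 5
pairs = []
for _ in range(6):
    f = rng.standard_normal(40)
    # next frame: same signal shifted by true_shift, heavily noised
    g = np.roll(f, true_shift) + 1.5 * rng.standard_normal(40)
    pairs.append((f[10:30], g[10 - max_shift: 30 + max_shift]))
est = multiframe_shift(pairs, max_shift)
```

With noise stronger than the signal, a single frame pair can produce a spurious peak; summing six score functions makes the true shift the clear global maximum, which is the robustness the method reports for weak fluorescent signals.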
Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data
NASA Technical Reports Server (NTRS)
Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.
2016-01-01
We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15-h and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations can be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point-source contamination using an independent component analysis technique (fastica), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and their cross-correlation with the galaxy survey data. We show that fastica is a reliable tool for subtracting diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which fastica, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we obtain GBT-WiggleZ cross-correlation measurements similar to those of the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21cm signal loss, as seen from the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods for investigating the foregrounds and noise systematics present in intensity mapping datasets.
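As an illustration of the principle behind fastica-style component separation (recovering mixed sources by their non-Gaussianity), the sketch below runs scikit-learn's FastICA on toy mixtures; the sources, mixing matrix and dimensions are invented for the example and have no connection to the GBT data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
# Two non-Gaussian "foreground-like" sources: a sawtooth ramp and a
# heavy-tailed (Laplace-distributed) signal.
s1 = (t % 1.0) - 0.5
s2 = rng.laplace(size=t.size)
S = np.c_[s1, s2]
A = np.array([[1.0, 0.4], [0.6, 1.0], [0.3, 0.8]])  # mix into 3 observed channels
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)  # estimated sources, up to sign/scale/order
```

ICA recovers the sources only up to permutation, sign and scale, which is why component-subtraction pipelines must decide which recovered components correspond to foregrounds before removing them.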
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted-correlation properties, which are highly desired in radar and communication systems. The waveform design is based on minimizing the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems using the majorization-minimization technique, one of the basic techniques for addressing large-scale and/or non-convex optimization problems. In designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form that allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems: they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
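The ISL objective itself is simple to state: it is the energy in the non-zero-lag aperiodic autocorrelation sidelobes. The NumPy sketch below evaluates that metric (not the authors' minimization algorithm) on a classical Frank code versus a random-phase unimodular sequence; the choice of length-16 sequences is illustrative:

```python
import numpy as np

def isl(x):
    """Integrated sidelobe level: one-sided energy of the aperiodic
    autocorrelation at non-zero lags (|r_{-k}| = |r_k| for the other side)."""
    N = len(x)
    r = np.correlate(x, x, mode="full")  # lags -(N-1) .. (N-1); lag 0 at index N-1
    return float(np.sum(np.abs(r[N:]) ** 2))

# Frank code of length 16 (L = 4): x[l*L + m] = exp(2*pi*i*l*m/L), a
# classical unimodular sequence with low autocorrelation sidelobes.
frank = np.array([np.exp(2j * np.pi * l * m / 4)
                  for l in range(4) for m in range(4)])

# Random-phase unimodular sequence of the same length for comparison.
rng = np.random.default_rng(0)
rand = np.exp(2j * np.pi * rng.random(16))
```

For unimodular sequences the mainlobe value r_0 equals the code length N, so minimizing ISL trades only sidelobe energy, which is why the quartic formulations in the abstract arise from |r_k|^2 terms.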
General Analytical Schemes for the Characterization of Pectin-Based Edible Gelled Systems
Haghighi, Maryam; Rezaei, Karamatollah
2012-01-01
Pectin-based gelled systems have gained increasing attention in the design of newly developed food products. For this reason, the characterization of such formulations is necessary in order to present scientific data and to introduce an appropriate finished product to the industry. Various analytical techniques are available for the evaluation of systems formulated on the basis of pectin and of the designed gel. In this paper, general analytical approaches for the characterization of pectin-based gelled systems were categorized into several subsections, including physicochemical analysis, visual observation, textural/rheological measurement, microstructural image characterization, and psychorheological evaluation. Three-dimensional trials to assess correlations among microstructure, texture, and taste were also discussed. Practical examples of advanced objective techniques, including experimental setups for small- and large-deformation rheological measurements and microstructural image analysis, were presented in more detail. PMID:22645484
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and for characterizing the in-service mechanical behaviour of components. The fast growth in recent years of image-processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in how phenomena are sensed. Hence, several widely applicable optical approaches are playing a significant role in supporting experiments. The current review describes advanced image-based methods for structural integrity monitoring, focusing on Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (shearography). These non-contact, full-field techniques rely on intensive image-processing methods to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
Ultrasound based computer-aided-diagnosis of kidneys for pediatric hydronephrosis
NASA Astrophysics Data System (ADS)
Cerrolaza, Juan J.; Peters, Craig A.; Martin, Aaron D.; Myers, Emmarie; Safdar, Nabile; Linguraru, Marius G.
2014-03-01
Ultrasound is the mainstay of imaging for pediatric hydronephrosis, though its potential as a diagnostic tool is limited by subjective assessment and lack of correlation with renal function. Therefore, all cases showing signs of hydronephrosis undergo further invasive studies, such as diuretic renography, in order to assess the actual renal function. Under the hypothesis that renal morphology is correlated with renal function, a new ultrasound-based computer-aided diagnosis (CAD) tool for pediatric hydronephrosis is presented. From 2D ultrasound, a novel set of morphological features of the renal collecting systems and the parenchyma is automatically extracted using image analysis techniques. From the original set of features, including size, geometric and curvature descriptors, a subset of ten features is selected as predictive variables by combining a feature selection technique with area-under-the-curve filtering. Using the washout half time (T1/2) as an indicator of renal obstruction, two groups are defined: cases whose T1/2 is above 30 minutes are considered severe, while the rest fall in the safety zone, where diuretic renography could be avoided. Two classification techniques are evaluated (logistic regression and support vector machines). Adjusting the probability decision thresholds to operate at the point of maximum sensitivity, i.e., preventing any severe case from being misclassified, specificities of 53% and 75% are achieved for the logistic regression and support vector machine classifiers, respectively. The proposed CAD system establishes a link between non-invasive, non-ionizing imaging and renal function, limiting the need for invasive and ionizing diuretic renography.
Relation between hardness and ultrasonic velocity on pipeline steel welded joints
NASA Astrophysics Data System (ADS)
Carreón, H.; Barrera, G.; Natividad, C.; Salazar, M.; Contreras, A.
2016-04-01
In general, ultrasonic techniques have been used to determine the mechanical properties of materials based on their relationship with metallurgical characteristics. In this work, the relationship between ultrasonic wave velocity, hardness and the microstructure of steel pipeline welded joints is investigated. Measurements of ultrasonic wave velocity were made as a function of the location across the weld. Hardness measurements were performed in an attempt to correlate them with the ultrasonic response. In addition, the coarse, dendritic grain structure of the weld material is extremely and unpredictably anisotropic. Thus, owing to the acoustic anisotropy of the crystals, the weld material of the studied joints is anisotropic too. Such a structure is no longer direction-independent with respect to ultrasonic wave propagation; the ultrasonic beam deflects and redirects, and the wave front becomes distorted. The use of conventional ultrasonic testing techniques with fixed beam angles is therefore very limited, and the application of ultrasonic phased array techniques becomes desirable. This technique is proposed to assist pipeline operators in estimating hardness from ultrasonic measurements, in order to evaluate the susceptibility to sulphide stress cracking and hydrogen-induced cracking due to hard spots in in-service steel pipeline welded joints. Sound wave velocity and hardness measurements were carried out on a steel welded joint. Particular values of the ultrasound velocity were found to correspond to each region of the weld: weld bead, fusion zone, heat-affected zone and base metal. These results were correlated with electron microscopy observations of the microstructure and with sectorial scan views of the welded joints obtained by ultrasonic phased array.
An analytical and experimental evaluation of a Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. A.; Cosby, R. M.
1976-01-01
Line-focusing Fresnel lenses with application potential in the 200 to 370 C range were evaluated analytically and experimentally. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56-cm-wide, f/1.0 lens. A Sun-tracking heliostat provided a non-moving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile-spreading problem and should enable improved analytical-experimental correlation.
[Development of operation patient security detection system].
Geng, Shu-Qin; Tao, Ren-Hai; Zhao, Chao; Wei, Qun
2008-11-01
This paper describes a patient security detection system developed with two-dimensional bar codes, wireless communication and removable storage techniques. Based on the system, nurses and related personnel check the codes of patients awaiting operation in order to prevent errors. Tests show that the system is effective, and its objectivity and timeliness are more scientific and reliable than the traditional methods currently used in domestic hospitals.
Mercurio, Meagan D; Smith, Paul A
2008-07-23
Quantification of red grape tannin and red wine tannin using the methyl cellulose precipitable (MCP) tannin assay and the Adams-Harbertson (A-H) tannin assay was investigated. The study allowed direct comparison of the repeatability of the assays and assessment of other practical considerations such as time efficiency, ease of practice, and throughput, and it assessed the relationships between tannin quantification by the two analytical techniques. A strong correlation between the two analytical techniques was observed when quantifying grape tannin (r(2) = 0.96), and a good correlation was observed for wine tannins (r(2) = 0.80). However, significant differences in the reported tannin values for the analytical techniques were observed (approximately 3-fold). To explore potential reasons for the difference, investigations were undertaken to determine how several variables influenced the final tannin quantification for both assays. These variables included differences in the amount of tannin precipitated (monitored by HPLC), assay matrix variables, and the monomers used to report the final values. The relationship between tannin quantification and wine astringency was assessed for the MCP and A-H tannin assays, and both showed strong correlations with perceived wine astringency (r(2) = 0.83 and r(2) = 0.90, respectively). The work described here gives guidance to those wanting to understand how the values from the two assays relate; however, a conclusive explanation for the differences in values between the MCP and A-H tannin assays remains unclear, and further work in this area is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang
In implementation of nuclear safeguards, many different techniques are used to monitor the operation of nuclear facilities and to safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers and digital seals to open-source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or only loose correlations, it could be beneficial to analyze them together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning-based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential value in nuclear safeguards.
Hoyos-Arbeláez, Jorge; Vázquez, Mario; Contreras-Calderón, José
2017-04-15
The growing interest in functional foods has led to the use of analytical techniques to quantify some of their properties, among which is the antioxidant capacity (AC). To identify and quantify this capacity, techniques based on the capture of synthetic radicals, monitored by UV-vis spectrophotometry, are commonly used. Electrochemical techniques are emerging as alternatives, given some of the disadvantages of spectrophotometric methods, such as the use of expensive, environmentally unfriendly reagents, undefined reaction times, long sample pretreatment, and low precision and sensitivity. This review focuses on the four most commonly used electrochemical techniques (cyclic voltammetry, differential pulse voltammetry, square wave voltammetry and chronoamperometry). Some of their applications to determining AC in foods and beverages are presented, as well as the reported correlations between spectrophotometric and electrochemical techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon
2017-06-01
The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The Partial Derivatives (PaD) technique and Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters, e.g., it was revealed that the trend of Katt with average pore water velocity was positive. The resulting correlations, despite being developed based on a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated well against subsets of the experimental data and was compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strengths.
Efficient seeding and defragmentation of curvature streamlines for colonic polyp detection
NASA Astrophysics Data System (ADS)
Zhao, Lingxiao; Botha, Charl P.; Truyen, Roel; Vos, Frans M.; Post, Frits H.
2008-03-01
Many computer aided diagnosis (CAD) schemes have been developed for colon cancer detection using Virtual Colonoscopy (VC). In earlier work, we developed an automatic polyp detection method integrating flow visualization techniques, which forms part of the CAD functionality of an existing Virtual Colonoscopy pipeline. Curvature streamlines were used to characterize polyp surface shape. Features derived from curvature streamlines correlated highly with true polyp detections. During testing with a large number of patient data sets, we found that the correlation between streamline features and true polyps could be affected by noise and by our streamline generation technique. The seeding and spacing constraints and CT noise could lead to streamline fragmentation, which reduced the discriminating power of our streamline features. In this paper, we present two major improvements to our curvature streamline generation. First, we adapted our streamline seeding strategy to the local surface properties and made the streamline generation faster. It generates a significantly smaller number of seeds but still results in a comparable and suitable streamline distribution. Second, based on our observation that longer streamlines are better surface shape descriptors, we improved our streamline tracing algorithm to produce longer streamlines. Our improved techniques are more efficient and also guide the streamline geometry to correspond better to colonic surface shape. These two adaptations support a robust and high correlation between our streamline features and true positive detections and lead to better polyp detection results.
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
NASA Technical Reports Server (NTRS)
Clem, Michelle M.; Woike, Mark R.
2013-01-01
The Aeronautical Sciences Project under NASA's Fundamental Aeronautics Program is extremely interested in the development of novel measurement technologies, such as optical surface measurements in the internal parts of a flow path, for in situ health monitoring of gas turbine engines. In situ health monitoring has the potential to detect flaws, i.e., cracks in key components such as engine turbine disks, before the flaws lead to catastrophic failure. In the present study, a cross-correlation imaging technique is investigated in a proof-of-concept study as a possible optical technique to measure the radial growth and strain field on an already cracked sub-scale turbine engine disk under loaded conditions in the NASA Glenn Research Center's High Precision Rotordynamics Laboratory. The optical strain measurement technique under investigation offers potential fault detection using an applied high-contrast random speckle pattern and imaging the pattern under unloaded and loaded conditions with a CCD camera. Spinning the cracked disk at high speeds induces an external load, resulting in a radial growth of the disk of approximately 50.0 μm in the flawed region and hence a localized strain field. When imaging the cracked disk under static conditions, the disk will be undistorted; however, during rotation the cracked region will grow radially, thus causing the applied particle pattern to be 'shifted'. The resulting particle displacements between the two images will then be measured using the two-dimensional cross-correlation algorithms implemented in standard Particle Image Velocimetry (PIV) software to track the disk growth, which facilitates calculation of the localized strain field. In order to develop and validate this optical strain measurement technique, an initial proof-of-concept experiment is carried out in a controlled environment.
Using PIV optimization principles and guidelines, three potential speckle patterns, for future use on the rotating disk, are developed and investigated in the controlled experiment. A range of known shifts are induced on the patterns; reference and data images are acquired before and after the induced shift, respectively, and the images are processed using the cross-correlation algorithms in order to determine the particle displacements. The effectiveness of each pattern at resolving the known shift is evaluated and discussed in order to choose the most suitable pattern to be implemented onto a rotating disk in the Rotordynamics Lab. Although testing on the rotating disk has not yet been performed, the driving principles behind the development of the present optical technique are based upon critical aspects of the future experiment, such as the amount of expected radial growth, disk analysis, and experimental design and are therefore addressed in the paper.
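The displacement-by-cross-correlation step at the heart of PIV processing can be sketched with an FFT-based correlation that recovers a known integer-pixel shift of a synthetic speckle pattern. This is a simplified stand-in for the commercial PIV software referenced above, assuming periodic (wrap-around) shifts and whole-pixel displacements:

```python
import numpy as np

def fft_xcorr_shift(ref, data):
    """Estimate the integer-pixel displacement between two images from the
    peak of their FFT-based cross-correlation."""
    R = np.fft.fft2(ref - ref.mean())
    D = np.fft.fft2(data - data.mean())
    corr = np.fft.ifft2(R.conj() * D).real  # circular cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above the midpoint to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Real PIV processing refines this with interrogation windows and sub-pixel peak fitting, but the correlation peak location is the displacement estimate in both cases.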
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Strothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant but unknown lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
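A minimal sketch of the proposed measure, assuming a simple grid search over trial lags; the Monte Carlo error treatment and significance test described above are omitted:

```python
import numpy as np

def rms_nn_difference(template, target, lag=0.0):
    """RMS difference between each target point (shifted by `lag`) and its
    nearest neighbor in the template series; lower means better alignment."""
    shifted = np.asarray(target, float) + lag
    t = np.sort(np.asarray(template, float))
    idx = np.searchsorted(t, shifted)
    idx_lo = np.clip(idx - 1, 0, len(t) - 1)
    idx_hi = np.clip(idx, 0, len(t) - 1)
    d = np.minimum(np.abs(shifted - t[idx_lo]), np.abs(shifted - t[idx_hi]))
    return np.sqrt(np.mean(d ** 2))

def best_lag(template, target, lags):
    """Scan candidate lead/lag values; return the one minimizing the measure."""
    scores = [rms_nn_difference(template, target, lag) for lag in lags]
    return lags[int(np.argmin(scores))]
```

Replacing the mean of squares with a median of absolute differences gives the robust variant mentioned in the abstract.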
NASA Astrophysics Data System (ADS)
Taira, T.; Kato, A.
2013-12-01
A high-resolution Vp/Vs ratio estimate is one of the key parameters for understanding spatial variations of composition and physical state within the Earth. Lin and Shearer (2007, BSSA) recently developed a methodology to obtain local Vp/Vs ratios in individual similar-earthquake clusters, based on P- and S-wave differential times. A waveform cross-correlation approach is typically employed to measure those differential times for pairs of seismograms from similar-earthquake clusters, in narrow time windows around the direct P and S waves. This approach effectively collects P- and S-wave differential times, but it requires robust P- and S-wave time windows that are extracted based on either manually or automatically picked P- and S-phases. We present another technique to estimate P- and S-wave differential times by exploiting the temporal behavior of delayed time as a function of elapsed time on the seismograms, using a moving-window cross-correlation analysis (e.g., Snieder, 2002, Phys. Rev. E; Niu et al. 2003, Nature). Our approach is based on the principle that the delayed time for the direct S wave differs from that for the direct P wave. For two seismograms from a pair of similar earthquakes aligned on the direct P waves, the delayed times become zero around the direct P wave. In contrast, delayed times obtained from time windows including the direct S wave have non-zero values. Our approach, in principle, is capable of measuring both P- and S-wave differential times from single-component seismograms. In an ideal case, the temporal evolution of delayed time becomes a step function with its discontinuity at the onset of the direct S wave. The offset in the resulting step function is the S-wave differential time relative to the P-wave differential time, as the two waveforms are aligned on the direct P wave.
We apply our moving-window cross-correlation technique to two different data sets collected at: 1) the Wakayama district, Japan, and 2) the Geysers geothermal field, California. Both target areas are characterized by earthquake swarms that provide a number of similar-event clusters. We use the following automated procedure to systematically analyze the two data sets: 1) identification of the direct P arrivals using an Akaike Information Criterion based phase-picking algorithm introduced by Zhang and Thurber (2003, BSSA); 2) waveform alignment on the P wave with a waveform cross-correlation to obtain the P-wave differential time; 3) moving-time-window analysis to estimate the S-wave differential time. Kato et al. (2010, GRL) estimated the Vp/Vs ratios for a few similar-earthquake clusters from the Wakayama data set with a conventional approach to obtaining differential times. We find that the resulting Vp/Vs ratios from our approach for the same earthquake clusters are comparable with those obtained by Kato et al. (2010, GRL). We show that the moving-window cross-correlation technique effectively measures both P- and S-wave differential times for seismograms in which clear P and S phases are not observed. We will show spatial distributions of Vp/Vs ratios in our two target areas.
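The moving-window idea (delay as a function of elapsed time, stepping from about zero around the direct P wave to the S-wave differential time) can be sketched on synthetic single-component traces; the window length, step and lag range below are arbitrary illustration choices, not values from the study:

```python
import numpy as np

def window_delay(a, b, start, wlen, max_lag):
    """Delay (in samples) maximizing the normalized cross-correlation of one
    window of trace `b` against the window of trace `a` starting at `start`."""
    wa = a[start:start + wlen]
    best, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        wb = b[start + lag:start + lag + wlen]
        c = np.dot(wa - wa.mean(), wb - wb.mean())
        c /= (np.std(wa) * np.std(wb) * wlen + 1e-12)
        if c > best:
            best, best_lag = c, lag
    return best_lag

def delay_curve(a, b, wlen=50, step=25, max_lag=10):
    """Delay as a function of elapsed time: the step in this curve at the
    S onset gives the S-wave differential time (P-aligned traces)."""
    starts = range(max_lag, len(a) - wlen - max_lag, step)
    return [(s, window_delay(a, b, s, wlen, max_lag)) for s in starts]
```

On P-aligned traces, windows around the P arrival return a delay near zero and windows spanning the S arrival return the S-P differential delay, which reproduces the step-function behavior described above.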
Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang
2018-04-17
Digital cross-correlation is central to many applications, including but not limited to digital image processing, satellite navigation and remote sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we present a high-throughput digital cross-correlator, capable of processing a 1-bit digitized stream at a rate of up to 2 GHz simultaneously on 64 channels, i.e., approximately 4 trillion correlation and accumulation operations per second. In order to achieve higher throughput, we have focused on frequency-based partitioning of our design and tried to minimize and localize high-frequency operations. This correlator is designed for a Passive Millimeter Wave Imager intended for the detection of contraband items concealed on the human body. The goals are to increase the system bandwidth, achieve video-rate imaging, improve sensitivity and reduce the size. The design methodology is detailed in subsequent sections, elaborating the techniques enabling high throughput. The design is verified for a Xilinx Kintex UltraScale device in simulation, and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput compared to our baseline design, while the correlator successfully meets the functional requirements.
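The arithmetic that makes 1-bit correlation cheap in hardware (an XOR plus a popcount instead of multipliers) can be sketched in a few lines. This is only a software illustration of the principle, not the FPGA design described in the paper:

```python
import numpy as np

def one_bit_quantize(x):
    """1-bit digitization: keep only the sign of each sample (1 if >= 0)."""
    return (np.asarray(x) >= 0).astype(np.uint8)

def one_bit_correlation(a_bits, b_bits):
    """Correlation via XOR-and-count: (#agreements - #disagreements) / N.
    In hardware this maps to an XOR, a popcount and an accumulator per lag."""
    disagree = np.count_nonzero(a_bits ^ b_bits)
    n = a_bits.size
    return (n - 2 * disagree) / n
```

For jointly Gaussian inputs, this hard-limited correlation relates to the true analog correlation through the Van Vleck arcsine law, which is how 1-bit correlators recover calibrated results despite the coarse quantization.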
Cross-correlations between West Texas Intermediate crude oil and the stock markets of the BRIC
NASA Astrophysics Data System (ADS)
Ma, Feng; Wei, Yu; Huang, Dengshi; Zhao, Lin
2013-11-01
In this paper, we investigate the cross-correlation properties between West Texas Intermediate (WTI) crude oil and the stock markets of the BRIC. We apply not only the qualitative cross-correlation test but also the quantitative MF-DXA analysis, confirming the cross-correlations between WTI crude oil and the stock markets of the BRIC (Brazil, Russia, India and China), which are strongly multifractal, more so in the short term than in the long term. Furthermore, based on the multifractal spectrum, we find that the multifractality strength between WTI crude oil and the Chinese stock market is greater than that of the other pairs. Using the Iraq war (Mar 20, 2003) and the financial crisis of 2008 as break points, we divide the sample period into four segments to study the degree of multifractality (ΔH) and market efficiency (and risk). Finally, we employ a rolling-window technique to calculate the time-varying efficiency index (EI), from which changes in the stock markets can readily be observed. We also explore the relationship between the bivariate cross-correlation exponents (Hxy(q)) and the generalized Hurst exponents.
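A common building block of such analyses is the detrended cross-correlation coefficient at a given window scale. The sketch below is a minimal single-scale version in the spirit of DCCA; MF-DXA additionally varies a moment order q and the scale, which is omitted here:

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient at one window scale:
    covariance of locally detrended integrated profiles, normalized by the
    two detrended-fluctuation variances."""
    x = np.cumsum(np.asarray(x, float) - np.mean(x))  # integrated profiles
    y = np.cumsum(np.asarray(y, float) - np.mean(y))
    n = (len(x) // scale) * scale
    t = np.arange(scale)
    fxx = fyy = fxy = 0.0
    for s in range(0, n, scale):
        rx = x[s:s + scale] - np.polyval(np.polyfit(t, x[s:s + scale], 1), t)
        ry = y[s:s + scale] - np.polyval(np.polyfit(t, y[s:s + scale], 1), t)
        fxx += np.mean(rx * rx)
        fyy += np.mean(ry * ry)
        fxy += np.mean(rx * ry)
    return fxy / np.sqrt(fxx * fyy)
```

Repeating this over a range of scales, and generalizing the averaged fluctuations to q-th moments, yields the scale-dependent exponents Hxy(q) that MF-DXA estimates.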
Analysis of correlation structures in the Synechocystis PCC6803 genome.
Wu, Zuo-Bing
2014-12-01
Transfer of nucleotide strings in the Synechocystis sp. PCC6803 genome is investigated to exhibit periodic and non-periodic correlation structures by using the recurrence plot method and the phase space reconstruction technique. The periodic correlation structures are generated by periodic transfer of several substrings in long periodic or non-periodic nucleotide strings embedded in the coding regions of genes. The non-periodic correlation structures are generated by non-periodic transfer of several substrings covering or overlapping with the coding regions of genes. In both periodic and non-periodic transfer, gaps divide the long nucleotide strings into substrings and prevent their global transfer. Most of the gaps are either the replacement of one base or the insertion/reduction of one base. In the reconstructed phase space, the points generated from two or three steps of the continuous iterative transfer via the second maximal distance can be fitted by two lines. This partly reveals an intrinsic dynamics in the transfer of nucleotide strings. Comparison of relative positions and lengths shows that the substrings concerned with the non-periodic correlation structures are almost identical to the mobile elements annotated in the genome. The mobile elements are thus endowed with the basic results on the correlation structures. Copyright © 2014 Elsevier Ltd. All rights reserved.
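The recurrence plot used here can be sketched for a delay-embedded series in a few lines; the embedding dimension, delay and distance threshold below are illustrative choices, not the values used in the study:

```python
import numpy as np

def recurrence_plot(series, dim=3, delay=1, eps=0.5):
    """Binary recurrence matrix of a delay-embedded series: R[i, j] = 1 when
    the embedded states at times i and j lie within distance eps."""
    x = np.asarray(series, float)
    m = len(x) - (dim - 1) * delay          # number of embedded states
    emb = np.column_stack([x[i * delay:i * delay + m] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return (d <= eps).astype(np.uint8)
```

Periodic structure in the underlying series shows up as diagonal lines parallel to the main diagonal at offsets equal to the period, which is the signature exploited when reading correlation structures off a recurrence plot.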
NASA Astrophysics Data System (ADS)
Garza, Alejandro J.
Perhaps the most important approximations to the electronic structure problem in quantum chemistry are those based on coupled cluster and density functional theories. Coupled cluster theory has been called the "gold standard" of quantum chemistry due to the high accuracy that it achieves for weakly correlated systems. Kohn-Sham density functionals based on semilocal approximations are, without a doubt, the most widely used methods in chemistry and material science because of their high accuracy/cost ratio. The root of the success of coupled cluster and density functionals is their ability to efficiently describe the dynamic part of the electron correlation. However, both traditional coupled cluster and density functional approximations may fail catastrophically when substantial static correlation is present. This severely limits the applicability of these methods to a plethora of important chemical and physical problems such as the description of bond breaking, transition states, transition metal-, lanthanide- and actinide-containing compounds, and superconductivity. In an attempt to tackle this problem, nonstandard (single-reference) coupled cluster-based techniques that aim to describe static correlation have been recently developed: pair coupled cluster doubles (pCCD) and singlet-paired coupled cluster doubles (CCD0). The ability to describe static correlation in pCCD and CCD0 comes, however, at the expense of important amounts of dynamic correlation, so that the high accuracy of standard coupled cluster becomes unattainable. Thus, the reliable and efficient description of static and dynamic correlation in a simultaneous manner remains an open problem for quantum chemistry and many-body theory in general. In this thesis, different ways to combine pCCD and CCD0 with density functionals in order to describe static and dynamic correlation simultaneously (and efficiently) are explored.
The combination of wavefunction and density functional methods has a long history in quantum chemistry (practical implementations have appeared in the literature since the 1970s). However, such techniques have not achieved widespread use due to problems such as double counting of correlation and the symmetry dilemma: the fact that wavefunction methods respect the symmetries of the Hamiltonian, while modern functionals are designed to work with broken-symmetry densities. Here, particular mathematical features of pCCD and CCD0 are exploited to avoid these problems in an efficient manner. The two resulting families of approximations, denoted pCCD+DFT and CCD0+DFT, are shown to be able to describe static and dynamic correlation in standard benchmark calculations. Furthermore, it is also shown that CCD0+DFT lends itself to combination with correlation from the direct random phase approximation (dRPA). Inclusion of dRPA in the long range via the technique of range separation allows for the description of dispersion, the remaining part of the correlation. Thus, when combined with dRPA, CCD0+DFT can account for all three types of electron correlation that are necessary to accurately describe molecular systems. Lastly, applications of CCD0+DFT to actinide chemistry are considered in this work. The accuracy of CCD0+DFT for predicting equilibrium geometries and vibrational frequencies of actinide molecules and ions is assessed and compared to that of well-established quantum chemical methods. For this purpose, the f0 actinyl series (UO2(2+), NpO2(3+), PuO2(4+)) and the isoelectronic NUN, as well as thorium (ThO, ThO2+) and nobelium (NoO, NoO2) oxides, are studied. It is shown that the CCD0+DFT description of these species agrees with available experimental data and is comparable with the results given by the highest-level calculations that are possible for such heavy compounds, while being at least an order of magnitude lower in computational cost.
Needlet estimation of cross-correlation between CMB lensing maps and LSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bianchini, Federico; Renzi, Alessandro; Marinucci, Domenico, E-mail: fbianchini@sissa.it, E-mail: renzi@mat.uniroma2.it, E-mail: marinucc@mat.uniroma2.it
In this paper we develop a novel needlet-based estimator to investigate the cross-correlation between cosmic microwave background (CMB) lensing maps and large-scale structure (LSS) data. We compare this estimator with its harmonic counterpart and, in particular, we analyze the bias effects of different forms of masking. In order to address this bias, we also implement a MASTER-like technique in the needlet case. The resulting estimator turns out to have an extremely good signal-to-noise performance. Our analysis aims at expanding and optimizing the operating domains in CMB-LSS cross-correlation studies, similarly to CMB needlet data analysis. It is motivated especially by next-generation experiments (such as Euclid) which will allow us to derive much tighter constraints on cosmological and astrophysical parameters through cross-correlation measurements between CMB and LSS.
Correlation Filters for Detection of Cellular Nuclei in Histopathology Images.
Ahmad, Asif; Asif, Amina; Rajpoot, Nasir; Arif, Muhammad; Minhas, Fayyaz Ul Amir Afsar
2017-11-21
Nuclei detection in histology images is an essential part of computer-aided diagnosis of cancers and tumors. It is a challenging task due to the diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications, but their strength had not been explored in the medical imaging domain until now. Our experimental results show that the proposed scheme achieves state-of-the-art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features, as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient and do not require specialized or expensive computing hardware. A cloud-based web server for the proposed method and its Python implementation can be accessed at the following URL: http://faculty.pieas.edu.pk/fayyaz/software.html#corehist
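As a sketch of the correlation-filter idea (a plain zero-normalized cross-correlation detector run on a synthetic image; this is not the paper's kernelized large-margin filter, just the underlying matching principle):

```python
import numpy as np

def ncc_detect(image, template):
    """Slide `template` over `image` and return the zero-normalized
    cross-correlation (ZNCC) response map; the peak marks the best match."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    resp = np.zeros((H - th + 1, W - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tnorm
            resp[i, j] = (wc * t).sum() / denom if denom > 0 else 0.0
    return resp

# Synthetic "nucleus": a Gaussian blob placed at center (40, 25) in noise.
rng = np.random.default_rng(0)
img = 0.05 * rng.standard_normal((64, 64))
yy, xx = np.mgrid[-4:5, -4:5]
blob = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
img[36:45, 21:30] += blob
resp = ncc_detect(img, blob)
peak = np.unravel_index(np.argmax(resp), resp.shape)  # top-left of match
```

In the paper's setting, the filter itself is learned from annotated nuclei rather than being a fixed template, and the response map is thresholded to yield detections.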
Patra, Abhilash; Jana, Subrata; Samal, Prasanjit
2018-04-07
The construction of meta generalized gradient approximations based on the density matrix expansion (DME) is considered one of the most accurate techniques for designing semilocal exchange energy functionals in the two-dimensional density functional formalism. The exchange holes modeled using the DME possess unique features that make them superior entities. Parameterized semilocal exchange energy functionals based on the DME are proposed. Different forms of the momentum and flexible parameters are used to subsume the non-uniform effects of the density in the newly constructed semilocal functionals. In addition to the exchange functionals, a suitable correlation functional is also constructed by building on the local correlation functional developed for the 2D homogeneous electron gas. Non-local effects are induced into the correlation functional through a parametric form of one of the newly constructed exchange energy functionals. The proposed functionals are applied to parabolic quantum dots with varying numbers of confined electrons and confinement strengths. The results obtained with the aforementioned functionals are quite satisfactory, indicating their suitability for two-dimensional quantum systems.
Bhatti, Mehwish Saba; Tang, Tong Boon; Chen, Hui Cheng
2018-04-09
In this study, we report a new technique based on laser speckle flowgraphy to record the ocular blood flow in rabbits under deep anesthesia, and propose parameters to characterize retinal ischemia. We applied the proposed technique to study the correlation of blood flow between the eyes of normal non-anesthetized animals, and to characterize the occlusion of the internal carotid artery (ICA) and external carotid artery (ECA). We established a correlation in blood flow between the eyes of non-anesthetized animals, and derived two new parameters, namely the laterality index and the vascular perfusion estimate (VPE). Our experimental results from 16 eyes (of 13 New Zealand white rabbits) showed a reduction in ocular blood flow with a significant decrease in the VPE after occlusion of the ECA (p < 0.001). A low/minimal effect on blood flow was observed with occlusion of the ICA. In conclusion, we demonstrated a means for real-time measurement of ocular blood flow in rabbits under deep anesthesia using laser speckle flowgraphy, with the VPE as an indicator of successful occlusion. The proposed technique might be applicable in quantifying the efficacy of new drugs and interventions for the treatment of retinal ischemia.
Shin, Junseob; Chen, Yu; Malhi, Harshawn; Chen, Frank; Yen, Jesse
2018-05-01
Degradation of image contrast caused by phase aberration, off-axis clutter, and reverberation clutter remains one of the most important problems in abdominal ultrasound imaging. Multiphase apodization with cross-correlation (MPAX) is a novel beamforming technique that enhances ultrasound image contrast by adaptively suppressing unwanted acoustic clutter. MPAX employs multiple pairs of complementary sinusoidal phase apodizations to intentionally introduce grating lobes that can be used to derive a weighting matrix, which mostly preserves the on-axis signals from tissue but reduces acoustic clutter contributions when multiplied with the beamformed radio-frequency (RF) signals. In this paper, in vivo performance of the MPAX technique was evaluated in abdominal ultrasound using data sets obtained from 10 human subjects referred for abdominal ultrasound at the USC Keck School of Medicine. Improvement in image contrast was quantified, first, by the contrast-to-noise ratio (CNR) and, second, by the rating of two experienced radiologists. The MPAX technique was evaluated for longitudinal and transverse views of the abdominal aorta, the inferior vena cava, the gallbladder, and the portal vein. Our in vivo results and analyses demonstrate the feasibility of the MPAX technique in enhancing image contrast in abdominal ultrasound and show potential for creating high contrast ultrasound images with improved target detectability and diagnostic confidence.
Correlation analysis of the physiological factors controlling fundamental voice frequency.
Atkinson, J E
1978-01-01
A technique has been developed to obtain a quantitative measure of correlation between electromyographic (EMG) activity of various laryngeal muscles, subglottal air pressure, and the fundamental frequency of vibration of the vocal folds (Fo). Data were collected and analyzed on one subject, a native speaker of American English. The results show that an analysis of this type can provide a useful measure of correlation between the physiological and acoustical events in speech and, furthermore, can yield detailed insights into the organization and nature of the speech production process. In particular, based on these results, a model is suggested of Fo control involving laryngeal state functions that seems to agree with present knowledge of laryngeal control and experimental evidence.
Electro-optic modulation for high-speed characterization of entangled photon pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukens, Joseph M.; Odele, Ogaga D.; Leaird, Daniel E.
In this study, we demonstrate a new biphoton manipulation and characterization technique based on electro-optic intensity modulation and time shifting. By applying fast modulation signals with a sharply peaked cross-correlation to each photon from an entangled pair, it is possible to measure temporal correlations with significantly higher precision than that attainable using standard single-photon detection. Low-duty-cycle pulses and maximal-length sequences are considered as modulation functions, reducing the time spread in our correlation measurement by a factor of five compared to our detector jitter. With state-of-the-art electro-optic components, we expect the potential to surpass the speed of any single-photon detectors currently available.
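The maximal-length sequences mentioned above are useful precisely because their circular autocorrelation is two-valued: a sharp peak at zero lag and a flat floor everywhere else. A toy illustration with a length-7 m-sequence (the experiment's sequences are far longer; the polynomial and seed here are arbitrary choices):

```python
import numpy as np

def m_sequence(period=7, seed=(1, 0, 0)):
    """Maximal-length sequence from the primitive polynomial x^3 + x + 1,
    i.e. the recurrence a[n] = a[n-2] XOR a[n-3]."""
    a = list(seed)
    while len(a) < period:
        a.append(a[-2] ^ a[-3])
    return np.array(a)

bits = m_sequence()            # [1, 0, 0, 1, 0, 1, 1]
chips = 1 - 2 * bits           # map {0, 1} -> {+1, -1}

# Circular autocorrelation: N at zero lag, -1 at every other lag --
# the "sharply peaked cross-correlation" the modulation scheme relies on.
R = np.array([np.dot(chips, np.roll(chips, k)) for k in range(7)])
```

Because the off-peak level is constant, correlating the two modulated photon streams concentrates the timing information into a single narrow correlation peak.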
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of mobile stations (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To reflect the actual situation, we consider the condition in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method outperforms both the MUSIC algorithm and the cross-correlation algorithm applied over the whole interval.
Active ultrasonic cross-correlation flowmeters for mixed-phase pipe flows
NASA Astrophysics Data System (ADS)
Sheen, S. H.; Raptis, A. C.
Two ultrasonic flowmeters which employ the active cross-correlation technique and use a simple clamp-on transducer arrangement are discussed. The flowmeter for solid/liquid flows was tested over a wide range of coal concentrations in water and oil. The measured velocity, based on the peak position of the cross-correlation function, is consistently higher by about 15% than the average velocity measured by flow diversion. The difference results mainly from the flow velocity profiles and the transit-time probability distribution. The flowmeter that measures particle velocity in a solid/gas flow requires an acoustic decoupling arrangement between the two sensing stations; the measured velocity is mainly associated with the particles near the wall. The performance of both flowmeters is presented.
Structure-based characterization of multiprotein complexes.
Wiederstein, Markus; Gruber, Markus; Frank, Karl; Melo, Francisco; Sippl, Manfred J
2014-07-08
Multiprotein complexes govern virtually all cellular processes. Their 3D structures provide important clues to their biological roles, especially through structural correlations among protein molecules and complexes. The detection of such correlations generally requires comprehensive searches in databases of known protein structures by means of appropriate structure-matching techniques. Here, we present a high-speed structure search engine capable of instantly matching large protein oligomers against the complete and up-to-date database of biologically functional assemblies of protein molecules. We use this tool to reveal unseen structural correlations on the level of protein quaternary structure and demonstrate its general usefulness for efficiently exploring complex structural relationships among known protein assemblies. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA). Finally, we derive a necessary and sufficient condition under which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound on the ergodic sum capacity is tight and that the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
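The water-filling step inside an IWFA can be illustrated by its single-link building block: given eigenvalue gains of a channel and a power budget, allocate p_i = max(0, mu - 1/g_i) with the water level mu found by bisection. The gains and budget below are arbitrary illustrative values, not a full per-BS iterative scheme:

```python
import numpy as np

def water_fill(gains, p_total, tol=1e-10):
    """Water-filling power allocation: p_i = max(0, mu - 1/g_i) with
    sum(p) = p_total, solved by bisection on the water level mu."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / g.min()   # hi guarantees sum(p) >= p_total
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# Strong eigenmodes get more power; very weak ones may get none at all.
p = water_fill([2.0, 1.0, 0.1], p_total=1.0)   # ~[0.75, 0.25, 0.0]
```

An iterative multi-BS version would repeat this allocation per BS, treating the other BSs' current allocations as fixed, until the powers converge.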
Processing methods for photoacoustic Doppler flowmetry with a clinical ultrasound scanner
NASA Astrophysics Data System (ADS)
Bücking, Thore M.; van den Berg, Pim J.; Balabani, Stavroula; Steenbergen, Wiendelt; Beard, Paul C.; Brunker, Joanna
2018-02-01
Photoacoustic flowmetry (PAF) based on time-domain cross correlation of photoacoustic signals is a promising technique for deep tissue measurement of blood flow velocity. Signal processing has previously been developed for single element transducers. Here, the processing methods for acoustic resolution PAF using a clinical ultrasound transducer array are developed and validated using a 64-element transducer array with a -6 dB detection band of 11 to 17 MHz. Measurements were performed on a flow phantom consisting of a tube (580 μm inner diameter) perfused with human blood flowing at physiological speeds ranging from 3 to 25 mm/s. The processing pipeline comprised: image reconstruction, filtering, displacement detection, and masking. High-pass filtering and background subtraction were found to be key preprocessing steps to enable accurate flow velocity estimates, which were calculated using a cross-correlation based method. In addition, the regions of interest in the calculated velocity maps were defined using a masking approach based on the amplitude of the cross-correlation functions. These developments enabled blood flow measurements using a transducer array, bringing PAF one step closer to clinical applicability.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with the sampling method. PMID:23750314
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
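A minimal sketch of the kind of statistical processing of photon clicks the abstract refers to: estimating the zero-delay second-order correlation g2(0) from binned counts at two detectors. This is a generic Hanbury Brown-Twiss-style estimator, not the cascaded-formalism Monte Carlo itself, and the independent Poissonian toy streams mimic a coherent source, for which g2(0) is close to 1:

```python
import numpy as np

def g2_zero_delay(counts_a, counts_b):
    """Normalized zero-delay correlation g2(0) = <n_a n_b> / (<n_a><n_b>)
    from per-bin photon counts at two detectors."""
    na = np.asarray(counts_a, dtype=float)
    nb = np.asarray(counts_b, dtype=float)
    return (na * nb).mean() / (na.mean() * nb.mean())

rng = np.random.default_rng(4)
# Independent Poissonian click streams (coherent-like light): g2(0) -> 1.
a = rng.poisson(0.1, size=200_000)
b = rng.poisson(0.1, size=200_000)
g2 = g2_zero_delay(a, b)
```

In the frequency-resolved setting, the same estimator would be applied to click streams that have first been filtered to photons of prescribed energy.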
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
Temperature of the plasmasphere from Van Allen Probes HOPE
NASA Astrophysics Data System (ADS)
Genestreti, K. J.; Goldstein, J.; Corley, G. D.; Farner, W.; Kistler, L. M.; Larsen, B. A.; Mouikis, C. G.; Ramnarace, C.; Skoug, R. M.; Turner, N. E.
2017-01-01
We introduce two novel techniques for estimating temperatures of very low energy space plasmas using, primarily, in situ data from an electrostatic analyzer mounted on a charged and moving spacecraft. The techniques are used to estimate proton temperatures during intervals where the bulk of the ion plasma is well below the energy bandpass of the analyzer. Both techniques assume that the plasma may be described by a one-dimensional E×B drifting Maxwellian and that the potential field and motion of the spacecraft may be accounted for in the simplest possible manner, i.e., by a linear shift of coordinates. The first technique involves the application of a constrained theoretical fit to a measured distribution function. The second technique involves the comparison of total and partial-energy number densities. Both techniques are applied to Van Allen Probes Helium, Oxygen, Proton, and Electron (HOPE) observations of the proton component of the plasmasphere during two orbits on 15 January 2013. We find that the temperatures calculated from these two order-of-magnitude-type techniques are in good agreement with typical ranges of the plasmaspheric temperature calculated using retarding potential analyzer-based measurements, generally between 0.2 and 2 eV (2000-20,000 K). We also find that the temperature is correlated with L shell and hot plasma density and is negatively correlated with the cold plasma density. We posit that the latter of these three relationships may be indicative of collisional or wave-driven heating of the plasmasphere in the ring current overlap region. We note that these techniques may be easily applied to similar data sets or used for a variety of purposes.
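The second technique can be sketched as follows: for a one-dimensional Maxwellian, the fraction of the population above the analyzer's lower energy bound is a monotonic function of temperature, so the measured partial-to-total density ratio can be inverted for T by bisection. The 0.5 eV bound and 1 eV temperature below are illustrative numbers (and the spacecraft potential and ram shifts are ignored here for simplicity), not HOPE instrument values:

```python
import math

def partial_fraction(T_eV, e_min_eV):
    """Fraction of a 1-D Maxwellian population with energy above e_min:
    P(E > e_min) = erfc(sqrt(e_min / T)), temperatures and energies in eV."""
    return math.erfc(math.sqrt(e_min_eV / T_eV))

def invert_temperature(ratio, e_min_eV, lo=0.01, hi=100.0, tol=1e-10):
    """Recover T from the measured partial-to-total density ratio by
    bisection (partial_fraction increases monotonically with T)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if partial_fraction(mid, e_min_eV) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Forward-model a 1 eV plasmasphere seen above a 0.5 eV bound, then invert.
r = partial_fraction(1.0, 0.5)
T_est = invert_temperature(r, 0.5)
```

In practice the ratio would come from integrating measured fluxes over the full and partial energy ranges, with the coordinate shift for spacecraft charging and motion applied first.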
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.; Harrison, F. Wallace; Obland, Michael D.; Ismail, Syed
2014-01-01
Global atmospheric carbon dioxide (CO2) measurements through the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) Decadal Survey recommended space mission are critical for improving our understanding of CO2 sources and sinks. IM-CW (Intensity Modulated Continuous Wave) lidar techniques are investigated as a means of facilitating CO2 measurements from space to meet the ASCENDS science requirements. In previous laboratory and flight experiments we have successfully used linear swept frequency modulation to discriminate surface lidar returns from intermediate aerosol and cloud contamination. Furthermore, high accuracy and precision ranging to the surface as well as to the top of intermediate clouds, which is a requirement for the inversion of the CO2 column-mixing ratio from the instrument optical depth measurements, has been demonstrated with the linear swept frequency modulation technique. We are concurrently investigating advanced techniques to help improve the auto-correlation properties of the transmitted waveform implemented through physical hardware to make cloud rejection more robust in special restricted scenarios. Several different carrier based modulation techniques are compared including orthogonal linear swept, orthogonal non-linear swept, and Binary Phase Shift Keying (BPSK). Techniques are investigated that reduce or eliminate sidelobes. These techniques have excellent auto-correlation properties while possessing a finite bandwidth (by way of a new cyclic digital filter), which will reduce bias error in the presence of multiple scatterers. Our analyses show that the studied modulation techniques can increase the accuracy of CO2 column measurements from space. A comparison of various properties such as signal to noise ratio (SNR) and time-bandwidth product are discussed.
Correlation mapping microscopy
NASA Astrophysics Data System (ADS)
McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh M.; Leahy, Martin J.
2015-03-01
Changes in the microcirculation are associated with conditions such as Raynaud's disease. Current modalities used to assess the microcirculation such as nailfold capillaroscopy are limited due to their depth ambiguity. A correlation mapping technique was recently developed to extend the capabilities of Optical Coherence Tomography to generate depth resolved images of the microcirculation. Here we present the extension of this technique to microscopy modalities, including confocal microscopy. It is shown that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution.
Eiber, Matthias; Martinez-Möller, Axel; Souvatzoglou, Michael; Holzapfel, Konstantin; Pickhard, Anja; Löffelbein, Dennys; Santi, Ivan; Rummeny, Ernst J; Ziegler, Sibylle; Schwaiger, Markus; Nekolla, Stephan G; Beer, Ambros J
2011-09-01
In this study, the potential contribution of Dixon-based MR imaging with a rapid low-resolution breath-hold sequence, which is a technique used for MR-based attenuation correction (AC) for MR/positron emission tomography (PET), was evaluated for anatomical correlation of PET-positive lesions on a 3T clinical scanner compared to low-dose CT. This technique is also used in a recently installed fully integrated whole-body MR/PET system. Thirty-five patients routinely scheduled for oncological staging underwent (18)F-fluorodeoxyglucose (FDG) PET/CT and a 2-point Dixon 3-D volumetric interpolated breath-hold examination (VIBE) T1-weighted MR sequence on the same day. Two PET data sets reconstructed using attenuation maps from low-dose CT (PET(AC_CT)) or simulated MR-based segmentation (PET(AC_MR)) were evaluated for focal PET-positive lesions. The certainty for the correlation with anatomical structures was judged in the low-dose CT and Dixon-based MRI on a 4-point scale (0-3). In addition, the standardized uptake values (SUVs) for PET(AC_CT) and PET(AC_MR) were compared. Statistically, no significant difference could be found concerning anatomical localization for all 81 PET-positive lesions in low-dose CT compared to Dixon-based MR (mean 2.51 ± 0.85 and 2.37 ± 0.87, respectively; p = 0.1909). CT tended to be superior for small lymph nodes, bone metastases and pulmonary nodules, while Dixon-based MR proved advantageous for soft tissue pathologies like head/neck tumours and liver metastases. For the PET(AC_CT)- and PET(AC_MR)-based SUVs (mean 6.36 ± 4.47 and 6.31 ± 4.52, respectively) a nearly complete concordance with a highly significant correlation was found (r = 0.9975, p < 0.0001). Dixon-based MR imaging for MR AC allows for anatomical allocation of PET-positive lesions similar to low-dose CT in conventional PET/CT. Thus, this approach appears to be useful for future MR/PET for body regions not fully covered by diagnostic MRI due to potential time constraints.
A new slit lamp-based technique for anterior chamber angle estimation.
Gispets, Joan; Cardona, Genís; Tomàs, Núria; Fusté, Cèlia; Binns, Alison; Fortes, Miguel A
2014-06-01
To design and test a new noninvasive method for anterior chamber angle (ACA) estimation based on the slit lamp that is accessible to all eye-care professionals. A new technique (slit lamp anterior chamber estimation [SLACE]) that aims to overcome some of the limitations of the van Herick procedure was designed. The technique, which only requires a slit lamp, was applied to estimate the ACA of 50 participants (100 eyes) using two different slit lamp models, and results were compared with gonioscopy as the clinical standard. The Spearman nonparametric correlation between ACA values as determined by gonioscopy and SLACE were 0.81 (p < 0.001) and 0.79 (p < 0.001) for each slit lamp. Sensitivity values of 100 and 87.5% and specificity values of 75 and 81.2%, depending on the slit lamp used, were obtained for the SLACE technique as compared with gonioscopy (Spaeth classification). The SLACE technique, when compared with gonioscopy, displayed good accuracy in the detection of narrow angles, and it may be useful for eye-care clinicians without access to expensive alternative equipment or those who cannot perform gonioscopy because of legal constraints regarding the use of diagnostic drugs.
Image-based topology for sensor gridlocking and association
NASA Astrophysics Data System (ADS)
Stanek, Clay J.; Javidi, Bahram; Yanni, Philip
2002-07-01
Correlation engines have been evolving since the implementation of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction in navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle equates the risk minimization statement to the comparison of ratios of a priori probability distributions to thresholds. The binary decision problem phrased in terms of likelihood ratios is also known as the famed Neyman-Pearson hypothesis test. Using another restatement of the principle for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities. For these situations, a common algorithm used is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined with the hope of reducing the positional uncertainty of a target or object identified by an existing track from the information of several fused/correlated tracks. 
Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to locate the best bias error vector estimate from a set of correlated/fused track pairs. Here, we will introduce a new approach to this longstanding problem by adapting many of the familiar concepts from pattern recognition, ones certainly familiar to target recognition applications. Furthermore, we will show how this technique can lend itself to specialized processing, such as that available through an optical or hybrid correlator.
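The ambiguity-resolving assignment step mentioned above can be sketched as a brute-force version of what the Munkres or JVC algorithms solve efficiently: pick the track-to-track pairing that minimizes total association cost. The cost matrix below is a hypothetical statistical-distance matrix invented for illustration.

```python
from itertools import permutations

def best_association(cost):
    """Exhaustive minimum-cost track-to-track assignment; cost[i][j] is the
    distance between local track i and remote track j. Munkres/JVC solve
    the same problem in polynomial time rather than O(n!)."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

# Hypothetical 3x3 distance matrix between two sensors' track sets
cost = [[1.0, 9.0, 7.0],
        [8.0, 2.0, 6.0],
        [5.0, 4.0, 3.0]]
assignment, total = best_association(cost)  # pairs track i with assignment[i]
```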
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also to other areas of research that give rise to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
NASA Astrophysics Data System (ADS)
Khonina, S. N.; Karpeev, S. V.; Paranin, V. D.
2018-06-01
A technique for simultaneous detection of individual vortex states of the beams propagating in a randomly inhomogeneous medium is proposed. The developed optical system relies on the correlation method that is invariant to the beam wandering. The intensity distribution formed at the optical system output does not require digital processing. The proposed technique based on a multi-order phase diffractive optical element (DOE) is studied numerically and experimentally. The developed detection technique is used for the analysis of Laguerre-Gaussian vortex beams propagating under conditions of intense absorption, reflection, and scattering in transparent and opaque microparticles in aqueous suspensions. The performed experimental studies confirm the relevance of the vortex phase dependence of a laser beam under conditions of significant absorption, reflection, and scattering of the light.
Fluorescence hyperspectral imaging technique for foreign substance detection on fresh-cut lettuce.
Mo, Changyeun; Kim, Giyoung; Kim, Moon S; Lim, Jongguk; Cho, Hyunjeong; Barnaby, Jinyoung Yang; Cho, Byoung-Kwan
2017-09-01
Non-destructive methods based on fluorescence hyperspectral imaging (HSI) techniques were developed to detect worms on fresh-cut lettuce. The optimal wavebands for detecting the worms were investigated using one-way ANOVA and correlation analyses. The worm detection imaging algorithm, RSI-I (492-626)/492, provided a prediction accuracy of 99.0%. The fluorescence HSI techniques indicated that the spectral images with a pixel size of 1 × 1 mm had the best classification accuracy for worms. The overall results demonstrate that fluorescence HSI techniques have the potential to detect worms on fresh-cut lettuce. In the future, we will focus on developing a multi-spectral imaging system to detect foreign substances such as worms, slugs and earthworms on fresh-cut lettuce. © 2017 Society of Chemical Industry.
Closed-loop, pilot/vehicle analysis of the approach and landing task
NASA Technical Reports Server (NTRS)
Schmidt, D. K.; Anderson, M. R.
1985-01-01
Optimal-control-theoretic modeling and frequency-domain analysis is the methodology proposed to evaluate analytically the handling qualities of higher-order manually controlled dynamic systems. Fundamental to the methodology is evaluating the interplay between pilot workload and closed-loop pilot/vehicle performance and stability robustness. The model-based metric for pilot workload is the required pilot phase compensation. Pilot/vehicle performance and loop stability is then evaluated using frequency-domain techniques. When these techniques were applied to the flight-test data for thirty-two highly-augmented fighter configurations, strong correlation was obtained between the analytical and experimental results.
Progress in speckle-shift strain measurement
NASA Technical Reports Server (NTRS)
Lant, Christian T.; Barranger, John P.; Oberle, Lawrence G.; Greer, Lawrence C., III
1991-01-01
The Instrumentation and Control Technology Division of the Lewis Research Center has been developing an in-house capability to make one dimensional and two dimensional optical strain measurements on high temperature test specimens. The measurements are based on a two-beam speckle-shift technique. The development of composite materials for use in high temperature applications is generating interest in using the speckle-shift technique to measure strains on small diameter fibers and wires of various compositions. The results of preliminary speckle correlation tests on wire and fiber specimens are covered, and the advanced system currently under development is described.
Subspace techniques to remove artifacts from EEG: a quantitative analysis.
Teixeira, A R; Tome, A M; Lang, E W; Martins da Silva, A
2008-01-01
In this work we discuss and apply projective subspace techniques to both multichannel as well as single-channel recordings. The single-channel approach is based on singular spectrum analysis (SSA) and the multichannel approach uses the extended infomax algorithm which is implemented in the open-source toolbox EEGLAB. Both approaches are evaluated using artificial mixtures of a set of selected EEG signals. The latter were selected visually to contain as the dominant activity one of the characteristic bands of an electroencephalogram (EEG). The evaluation is performed both in the time and frequency domain by using correlation coefficients and the coherence function, respectively.
The analysis of cable forces based on natural frequency
NASA Astrophysics Data System (ADS)
Suangga, Made; Hidayat, Irpan; Juliastuti; Bontan, Darwin Julius
2017-12-01
A cable is a flexible structural member that is effective at resisting tensile forces. Cables are used in a variety of structures that employ their unique characteristics to create efficient tension members. The condition of the cable forces in a cable-supported structure is an important indication of whether the structure is in good condition. Several methods have been developed to measure cable forces on site. The vibration technique, which uses the correlation between natural frequency and cable force, is a simple method to determine in situ cable forces; however, the method needs accurate information on the boundary conditions, cable mass, and cable length. The natural frequency of the cable is determined by applying the FFT (Fast Fourier Transform) technique to the acceleration record of the cable. Based on the natural frequency obtained, the cable forces can then be determined analytically or by a finite element program. This research focuses on the vibration technique for determining cable forces, on understanding the effect of the physical parameters of the cable, and on modelling techniques relating the natural frequency to the cable forces.
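The frequency-to-force correlation can be sketched with the classic taut-string approximation, f_n = (n / 2L) · sqrt(T / m), inverted for the tension T; this is an illustrative simplification that ignores bending stiffness and sag (hence the abstract's caveats about boundary conditions), and the numbers below are hypothetical.

```python
import math

def cable_tension(freq_hz, mode_n, length_m, mass_per_len):
    """Taut-string estimate: f_n = (n / (2 L)) * sqrt(T / m)  =>  solve for T."""
    return 4.0 * mass_per_len * (length_m * freq_hz / mode_n) ** 2

# Hypothetical 100 m cable, 50 kg/m, fundamental (n = 1) measured at 1.0 Hz
T = cable_tension(1.0, 1, 100.0, 50.0)             # 2.0e6 N
f_check = (1 / (2 * 100.0)) * math.sqrt(T / 50.0)  # recovers 1.0 Hz
```

In practice the measured frequency would come from an FFT peak of the acceleration record, and a finite element model would replace this closed form when boundary conditions matter.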
NASA Astrophysics Data System (ADS)
Schmidt-Bocking, Horst
2008-05-01
The correlated many-particle dynamics in Coulombic systems, which is one of the unsolved fundamental problems in AMO physics, can now be experimentally approached with so far unprecedented completeness and precision. The recent development of the COLTRIMS technique (COLd Target Recoil Ion Momentum Spectroscopy) provides a coincident multi-fragment imaging technique for eV and sub-eV fragment detection. In its completeness it is as powerful as the bubble chamber in high-energy physics. In recent benchmark experiments, quasi-snapshots (of durations as short as an attosecond) of the correlated dynamics between electrons and nuclei have been made for atomic and molecular objects. This new imaging technique has opened a powerful observation window into the hidden world of many-particle dynamics. Recent multiple-ionization studies will be presented and the observation of correlated electron pairs will be discussed.
High-Speed Linear Raman Spectroscopy for Instability Analysis of a Bluff Body Flame
NASA Technical Reports Server (NTRS)
Kojima, Jun; Fischer, David
2013-01-01
We report a high-speed laser diagnostics technique based on point-wise linear Raman spectroscopy for measuring the frequency content of a CH4-air premixed flame stabilized behind a circular bluff body. The technique, which primarily employs a Nd:YLF pulsed laser and a fast image-intensified CCD camera, successfully measures the time evolution of scalar parameters (N2, O2, CH4, and H2O) in the vortex-induced flame instability at a data rate of 1 kHz. Oscillation of the V-shaped flame front is quantified through frequency analysis of the combustion species data and their correlations. This technique promises to be a useful diagnostics tool for combustion instability studies.
NASA Technical Reports Server (NTRS)
Bortner, M. H.; Alyea, F. N.; Grenda, R. N.; Liebling, G. R.; Levy, G. M.
1973-01-01
The feasibility of measuring atmospheric carbon monoxide from a remote platform using the correlation interferometry technique was considered. It has been determined that CO data can be obtained with an accuracy of 10 percent using this technique on the first overtone band of CO at 2.3 μm. That band has been found to be much more suitable than the stronger fundamental band at 4.6 μm. Calculations for both wavelengths are presented which illustrate the effects of atmospheric temperature profiles, inversion layers, ground temperature and emissivity, CO profile, reflectivity, and atmospheric pressure. The applicable radiative transfer theory on which these calculations are based is described together with the principles of the technique.
Fercher, A; Hitzenberger, C; Sticker, M; Zawadzki, R; Karamata, B; Lasser, T
2001-12-03
Dispersive samples introduce a wavelength-dependent phase distortion to the probe beam. This leads to a noticeable loss of depth resolution in high-resolution OCT using broadband light sources. The standard technique to avoid this consequence is to balance the dispersion of the sample by arranging a dispersive material in the reference arm. However, the impact of dispersion is depth dependent. A corresponding depth-dependent dispersion balancing technique is difficult to implement. Here we present a numerical dispersion compensation technique for Partial Coherence Interferometry (PCI) and Optical Coherence Tomography (OCT) based on numerical correlation of the depth scan signal with a depth-variant kernel. It can be used a posteriori and provides depth-dependent dispersion compensation. Examples of dispersion-compensated depth scan signals obtained from microscope cover glasses are presented.
Monitoring temperatures in coal conversion and combustion processes via ultrasound
NASA Astrophysics Data System (ADS)
Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.
1980-02-01
The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of the ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. Based on the theoretical study, acoustic techniques to measure temperature in reactors and combustors are feasible.
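The correlation-based delay estimation at the heart of such pulse-echo thermometry can be sketched as follows: the lag that maximizes the cross-correlation between the transmitted pulse and the received echo gives the propagation delay, from which temperature follows via the known temperature dependence of sound speed. This noise-free, brute-force sketch stands in for the generalized correlation techniques the report analyzes; all values are illustrative.

```python
def estimate_delay(pulse, echo):
    """Return the lag at which the cross-correlation of pulse and echo peaks."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(echo) - len(pulse) + 1):
        v = sum(p * echo[lag + i] for i, p in enumerate(pulse))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

pulse = [0.0, 1.0, 2.0, 1.0, 0.0]     # transmitted ultrasonic pulse
echo = [0.0] * 40
for i, p in enumerate(pulse):
    echo[23 + i] += 0.5 * p           # attenuated echo delayed by 23 samples

delay = estimate_delay(pulse, echo)   # -> 23
```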
Functional overestimation due to spatial smoothing of fMRI data.
Liu, Peng; Calhoun, Vince; Chen, Zikuan
2017-11-01
Pearson correlation (simply correlation) is a basic technique for neuroimage function analysis. It has been observed that spatial smoothing may cause functional overestimation, a phenomenon that has remained incompletely understood. Herein, we present a theoretical explanation from the perspective of correlation scale invariance. For a task-evoked spatiotemporal functional dataset, we can extract the functional spatial map by calculating the temporal correlations (tcorr) of voxel timecourses against the task timecourse. From the relationship between image noise level (changed through spatial smoothing) and the tcorr map calculation, we show that spatial smoothing causes a noise reduction, which in turn smooths the tcorr map and leads to a spatial expansion of the estimated neuroactivity blob. Through numerical simulations and subject experiments, we show that spatial smoothing of fMRI data may overestimate activation spots in the correlation functional map. Our results suggest a small spatial smoothing kernel (full width at half maximum (FWHM) of no more than two voxels) in fMRI data processing for correlation-based functional mapping. COMPARISON WITH EXISTING METHODS: In the extreme of noiselessness, the scale-invariance property of correlation defines a meaningless binary tcorr map. In reality, a functional activity blob in a tcorr map takes its shape from the effect of image noise on the correlative responses. We may reduce the data noise level by smoothing, which in turn imposes a smoothing effect on the correlation. This logic allows us to understand the noise dependence and the smoothing effect of correlation-based fMRI data analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
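The overestimation mechanism can be sketched in a toy 1D simulation (not the authors' simulation; the block design, noise level, and boxcar kernel are arbitrary assumptions, seeded for reproducibility): smoothing mixes a task-driven timecourse into a truly inactive neighboring voxel, inflating that voxel's tcorr with the task timecourse.

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
T = 200
task = [1.0 if (t // 20) % 2 else 0.0 for t in range(T)]  # block-design task

def noise():
    return random.gauss(0.0, 0.5)

active = [task[t] + noise() for t in range(T)]   # voxel inside the true blob
neighbor = [noise() for _ in range(T)]           # adjacent, truly inactive
other = [noise() for _ in range(T)]              # voxel on the far side

# A 3-voxel boxcar smooth mixes the task-driven signal into the neighbor
smoothed_neighbor = [(a + b + c) / 3.0
                     for a, b, c in zip(active, neighbor, other)]

tcorr_raw = abs(pearson(neighbor, task))
tcorr_smoothed = abs(pearson(smoothed_neighbor, task))  # inflated by smoothing
```

The inactive neighbor's tcorr rises after smoothing even though its own signal contains no task response, which is exactly the blob expansion the abstract describes.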
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, B.
2015-06-15
Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: 1) Understand the fundamentals of real-time imaging and tracking techniques; 2) learn about emerging techniques in the field of real-time tracking; 3) distinguish between the advantages and disadvantages of different tracking modalities; 4) understand the role of real-time tracking techniques within the clinical delivery workflow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, D.
2015-06-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berbeco, R.
2015-06-15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keall, P.
2015-06-15
MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Willis, Amber B; Walters, Lynda H; Crane, D Russell
2014-07-01
This exploratory, observational study was designed to reveal descriptive information regarding therapists' actual practices with preschool- and school-aged children in a single session of family therapy and to investigate change mechanisms in family play therapy that have been proposed to make this approach effective. A purposive sample of 30 families receiving family therapy was recruited and video-taped during a family session where at least one child between the ages of 4 and 12 was present. Following the session, the therapist and parent(s) completed questionnaires while one of the children (aged 4-12) was interviewed. Session recordings were coded, minute-by-minute, for participant talk time, visual aids or props used, and therapy technique type (e.g., play-based/activity vs. talk-only techniques). Hierarchical regression and canonical correlational analyses revealed evidence supporting the theory that play-based techniques promote young children's participation, enhance the quality of the child-therapist relationship, and build positive emotional experiences in family therapy. © 2013 American Association for Marriage and Family Therapy.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
Study of photon correlation techniques for processing of laser velocimeter signals
NASA Technical Reports Server (NTRS)
Mayo, W. T., Jr.
1977-01-01
The objective was to provide the theory and a system design for a new type of photon counting processor for low-level dual-scatter laser velocimeter (LV) signals which would be capable of both the first-order measurements of mean flow and turbulence intensity and also the second-order time statistics: cross correlation, auto correlation, and related spectra. A general Poisson process model for low-level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher-order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.
NASA Astrophysics Data System (ADS)
Flannery, D.; Keller, P.; Cartwright, S.; Loomis, J.
1987-06-01
Attractive correlation system performance is potentially achievable by using magneto-optic spatial light modulators (SLMs) to implement binary phase-only reference filters at high rates, provided the correlation performance of such reduced-information-content filters is adequate for the application. In the case studied here, the desired filter impulse response is a rectangular shape, which cannot be achieved with the usual binary phase-only filter formulation. The correlation application problem is described, and techniques for synthesizing improved filter impulse responses are considered. A compromise solution involves cascading a fixed amplitude-only weighting mask with the binary phase-only SLM. Based on the simulations presented, this approach provides improved impulse responses and good correlation performance, while retaining the critical feature of real-time variation of the size, shape, and orientation of the rectangle by electronic programming of the phase pattern in the SLM. Simulations indicate that, for at least one very challenging input scene clutter situation, these filters provide a higher correlation signal-to-noise ratio than does "ideal" correlation, i.e., using a perfect rectangle filter response.
Understanding Zeeman EIT Noise Correlation Spectra in Buffered Rb Vapor
NASA Astrophysics Data System (ADS)
O'Leary, Shannon; Zheng, Aojie; Crescimanno, Michael
2014-05-01
Noise correlation spectroscopy on systems manifesting Electromagnetically Induced Transparency (EIT) holds promise as a simple, robust method for performing high-resolution spectroscopy used in applications such as EIT-based atomic magnetometry and clocks. During laser light's propagation through a resonant medium, interaction with the medium converts laser phase noise into intensity noise. While this noise conversion can diminish the precision of EIT applications, noise correlation techniques transform the noise into a useful spectroscopic tool that can improve the application's precision. Using a single diode laser with large phase noise, we examine laser intensity noise and noise correlations from Zeeman EIT in a buffered Rb vapor. Of particular interest is a narrow noise correlation feature, resonant with EIT, that has been shown in earlier work to be power-broadening resistant at low powers. We report here on our recent experimental work and complementary theoretical modeling on EIT noise spectra, including a study of power broadening of the narrow noise correlation feature. Understanding the nature of the noise correlation spectrum is essential for optimizing EIT-noise applications.
Ultrasonic nondestructive evaluation, microstructure, and mechanical property interrelations
NASA Technical Reports Server (NTRS)
Vary, A.
1984-01-01
Ultrasonic techniques for mechanical property characterizations are reviewed and conceptual models are advanced for explaining and interpreting the empirically based results. At present, the technology is generally empirically based and is emerging from the research laboratory. Advancement of the technology will require establishment of theoretical foundations for the experimentally observed interrelations among ultrasonic measurements, mechanical properties, and microstructure. Conceptual models are applied to ultrasonic assessment of fracture toughness to illustrate an approach for predicting correlations found among ultrasonic measurements, microstructure, and mechanical properties.
A real-time spectroscopic sensor for monitoring laser welding processes.
Sibillano, Teresa; Ancona, Antonio; Berardi, Vincenzo; Lugarà, Pietro Mario
2009-01-01
In this paper we report on the development of a sensor for real-time monitoring of laser welding processes based on spectroscopic techniques. The system acquires the optical spectra emitted by the laser-generated plasma plume and uses them in an on-line algorithm that both calculates the plasma electron temperature and analyzes the correlations between selected spectral lines. The sensor has been patented and is currently available on the market.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
In-vivo study of blood flow in capillaries using μPIV method
NASA Astrophysics Data System (ADS)
Kurochkin, Maxim A.; Fedosov, Ivan V.; Tuchin, Valery V.
2014-01-01
A digital optical system for intravital capillaroscopy has been developed. It implements a particle image velocimetry (PIV) based approach for measuring red blood cell velocity in individual capillaries of the human nailfold. We propose a digital real-time stabilization technique to compensate for the effect of involuntary finger movements on the measurements. The image stabilization algorithm is based on correlation-based feature tracking. The efficiency of the designed image stabilization algorithm was demonstrated experimentally.
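The core operation behind correlation-based stabilization of this kind can be sketched as follows: the rigid shift between two frames is estimated from the peak of their circular cross-correlation, computed via FFT. This is an illustrative sketch on synthetic data, not the authors' implementation.

```python
import numpy as np

def estimate_shift(ref, img):
    # circular cross-correlation via FFT; peak location gives the shift of img
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.real(np.fft.ifft2(f))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices to signed shifts (wrap-around correction)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(frame, (3, -5), axis=(0, 1))   # simulated involuntary movement
print(estimate_shift(frame, moved))             # recovers the (3, -5) shift
```

In a stabilization loop, the recovered shift would be applied in reverse to each incoming frame before the PIV correlation step.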
Siegel, Nisan; Storrie, Brian; Bruce, Marc
2016-01-01
FINCH holographic fluorescence microscopy creates high-resolution, super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system is the simultaneous real-time capture of widefield and holographic images, or confocal and confocal holographic images, for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution. PMID:26839443
Experimental evaluation of tailored chordwise deformable box beam and correlation with theory
NASA Technical Reports Server (NTRS)
Rehfield, Lawrence W.; Zischka, Peter J.; Chang, Stephen; Fentress, Michael L.; Ambur, Damodar R.
1993-01-01
This paper describes an experimental methodology, based on a flexible sling support and load application system, created to evaluate a box beam that incorporates elastic tailoring technology. The design technique used here for elastically tailoring the composite box beam structure is to produce exaggerated chordwise camber deformation of substantial magnitude, so as to be of practical use in new composite aircraft wings. Traditional methods, such as a four-point bend test that applies a constant bending moment through rigid fixtures, inhibit the designed chordwise deformation; hence the need for the new test method. The experimental results for global camber and spanwise bending compliances correlate well with theoretical predictions based on a beam-like model.
Relating chamber measurements to eddy correlation measurements of methane flux
R.J. Clement; S.B. Verma; E.S. Verry
1995-01-01
Methane fluxes were measured using eddy correlation and chamber techniques during 1991 and 1992 at a peatland in north central Minnesota. Comparisons of the two techniques were made using averages of methane flux data over 1-week periods. The seasonal patterns of fluxes measured by the two techniques compared well. Chamber flux in 1991 was about 1.8 mg m...
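The eddy-correlation (eddy-covariance) estimate referred to above is, at its core, the time-averaged covariance of vertical wind speed and gas concentration fluctuations, F = mean(w′c′). The sketch below illustrates this on purely synthetic data; the correlation coefficient and units are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# synthetic high-frequency series: vertical wind w (m/s) and a gas
# concentration c that is partly correlated with w (invented numbers)
w = rng.normal(0.0, 0.3, n)
c = 5.0 + 0.2 * w + rng.normal(0.0, 0.1, n)

# subtract the means to obtain fluctuations w', c'
w_prime = w - w.mean()
c_prime = c - c.mean()

flux = np.mean(w_prime * c_prime)   # eddy-covariance flux estimate
print(flux)                         # approaches 0.2 * var(w) for this series
```

In practice the means are removed by block averaging or detrending over each averaging period (e.g., 30 min), and coordinate rotation and density corrections are applied before the covariance is interpreted as a flux.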
Hoffmann, Alexandra; Bleser, Gabriele
2017-01-01
Background Chronic stress has been shown to be associated with disease. This link is not only direct but also indirect, through harmful health behavior such as smoking or changing eating habits. The recent mHealth trend offers a new and promising approach to support the adoption and maintenance of appropriate stress management techniques. However, only a few studies have dealt with the inclusion of evidence-based content within stress management apps for mobile phones. Objective The aim of this study was to evaluate stress management apps on the basis of a new taxonomy of effective emotion-focused stress management techniques and an established taxonomy of behavior change techniques. Methods Two trained and independent raters evaluated 62 free apps found in Google Play with regard to 26 behavior change and 15 emotion-focused stress management techniques in October 2015. Results The apps included an average of 4.3 behavior change techniques (SD 4.2) and 2.8 emotion-focused stress management techniques (SD 2.6). The behavior change technique score and stress management technique score were highly correlated (r=.82, P=.01). Conclusions The broad variation of stress management strategies found in this sample of apps is in line with that found in conventional stress management interventions and self-help literature. Moreover, this study provides a first step toward more detailed and standardized taxonomies, which can be used to investigate evidence-based content in stress management interventions and enable greater comparability between different intervention types. PMID:28232299
Coupling Analysis of Heat Island Effects, Vegetation Coverage and Urban Flood in Wuhan
NASA Astrophysics Data System (ADS)
Liu, Y.; Liu, Q.; Fan, W.; Wang, G.
2018-04-01
This study rests on satellite imagery, remote sensing, and geographic information system (GIS) techniques, with comprehensive spectral analysis and visual interpretation as the main methods. Using GF-1 and Landsat8 remote sensing imagery of Wuhan as the data source, we extract the vegetation distribution, a map of relative urban heat island intensity, and the urban flood submergence range. Based on the extracted information, spatial analysis and regression analysis reveal correlations among the heat island effect, vegetation coverage, and urban flooding. The results show a high degree of overlap between urban heat island areas and urban flood areas. Heat island areas are dominated by buildings with little vegetation cover, which may be one cause of the localized heavy rainstorms. Furthermore, urban heat island intensity is negatively correlated with vegetation coverage, so vegetation can alleviate the heat island effect to a certain extent. New industrial zones and commercial areas under construction throughout the city, whose land surfaces are bare or sparsely vegetated, can therefore easily form new heat islands.
Shi, Yuanyuan; Qiu, Juan; Li, Rendong; Shen, Qiang; Huang, Duan
2017-01-01
Schistosomiasis japonica is an infectious disease caused by Schistosoma japonicum, and it remains endemic in China. Flooding is the main hazard factor, as it causes the spread of Oncomelania hupensis, the only intermediate host of Schistosoma japonicum, thereby triggering schistosomiasis outbreaks. Based on multi-source real-time remote sensing data, we used remote sensing (RS) technology, especially synthetic aperture radar (SAR), and geographic information system (GIS) techniques to carry out warning research on potential snail habitats within the snail dispersal range following flooding. Our research results demonstrated: (1) SAR data from Sentinel-1A before and during a flood were used to identify submerged areas rapidly and effectively; (2) the likelihood of snail survival was positively correlated with the clay proportion, core area standard deviation, and ditch length but negatively correlated with the wetness index, NDVI (normalized difference vegetation index), elevation, woodland area, and construction land area; (3) the snail habitats were most abundant near rivers and ditches in paddy fields; (4) the rivers and paddy irrigation ditches in the submerged areas must be the focus of mitigation efforts following future floods. PMID:28867814
Multiple speckle illumination for optical-resolution photoacoustic imaging
NASA Astrophysics Data System (ADS)
Poisson, Florian; Stasio, Nicolino; Moser, Christophe; Psaltis, Demetri; Bossy, Emmanuel
2017-03-01
Optical-resolution photoacoustic microscopy offers exquisite and specific contrast to optical absorption. Conventional approaches generally involve raster scanning a focused spot over the sample. Here, we demonstrate that a full-field illumination approach with multiple speckle illumination can also provide diffraction-limited optical-resolution photoacoustic images. Two proof-of-concept approaches are demonstrated with micro-structured test samples. The first follows the principle of correlation/ghost imaging,1, 2 and is based on cross-correlating photoacoustic signals under multiple speckle illumination with known speckle patterns measured during a calibration step. The second is a speckle scanning microscopy technique, which adapts the approach proposed in fluorescence microscopy by Bertolotti et al.:3 in our work, spatially unresolved photoacoustic measurements are performed for various translations of unknown speckle patterns. A phase-retrieval algorithm reconstructs the object from the modulus of its Fourier transform yielded by the measurements. Because speckle patterns naturally appear in many situations, including propagation through biological tissue or multimode fibers (for which focusing light is very demanding, if not impossible), speckle-illumination-based photoacoustic microscopy provides a powerful framework for the development of novel reconstruction approaches, well suited to compressed sensing.2
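The first approach, correlation/ghost imaging, can be sketched in a few lines: the object is recovered by correlating the fluctuations of a spatially unresolved ("bucket") signal with the known speckle patterns. The sketch below uses random synthetic patterns and a toy object; it illustrates the principle only, not the authors' photoacoustic setup.

```python
import numpy as np

rng = np.random.default_rng(2)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                          # simple absorbing test object

n_patterns = 20_000
patterns = rng.random((n_patterns, 16, 16))    # known "speckle" illumination

# spatially unresolved signals: one number per illumination pattern
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# ghost-imaging reconstruction: correlate bucket fluctuations with the
# fluctuations of each pattern, pixel by pixel
recon = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=(0, 0)) / n_patterns

# pixels inside the object carry the largest reconstructed values
print(recon[8, 8] > recon[0, 0])
```

The reconstruction converges slowly (contrast scales like 1/sqrt(N patterns)), which is why the abstract points to compressed-sensing approaches as a natural refinement.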
Wen, Weiping; Kalkan, Erol
2017-01-01
Deconvolution and cross‐correlation techniques are used for system identification of a 20‐story steel, moment‐resisting frame building in downtown Anchorage, Alaska. This regular‐plan midrise structure is instrumented with a 32‐channel accelerometer array at 10 levels. The impulse response functions (IRFs) and correlation functions (CFs) are computed based on waveforms recorded from ambient vibrations and five local and regional earthquakes. The earthquakes occurred from 2005 to 2014 with moment magnitudes between 4.7 and 6.2 over a range of azimuths at epicenter distances of 13.3–183 km. The building’s fundamental frequencies and mode shapes are determined using a complex mode indicator function based on singular value decomposition of multiple reference frequency‐response functions. The traveling waves, identified in IRFs with a virtual source at the roof, and CFs are used to estimate the intrinsic attenuation associated with the fundamental modes and the shear‐wave velocity in the building. Although the cross correlation of the waveforms at various levels with the corresponding waveform at the first floor yields more complicated wave propagation than the deconvolution with a virtual source at the roof, the shear‐wave velocities identified by both techniques are consistent—the largest difference in average values is within 8%. The median shear‐wave velocity from the IRFs of five earthquakes is 191 m/s for the east–west (E‐W), 205 m/s for the north–south (N‐S), and 176 m/s for the torsional responses. The building’s average intrinsic‐damping ratio is estimated to be 3.7% and 3.4% in the 0.2–1 Hz frequency band for the E‐W and N‐S directions, respectively. These results are intended to serve as a reference for the undamaged condition of the building, which may be used for tracking changes in structural integrity during and after future earthquakes.
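The deconvolution step described above can be sketched as a regularized spectral division: the impulse response between a receiver and the reference ("virtual source") waveform is IFFT( X(f)·conj(R(f)) / (|R(f)|² + ε) ), and the lag of its peak gives the wave travel time. The sketch below uses synthetic 1-D signals with an invented delay; it is not the authors' processing chain.

```python
import numpy as np

def deconvolve(x, ref, eps=1e-3):
    # regularized spectral division: IRF of x with respect to ref
    X, R = np.fft.rfft(x), np.fft.rfft(ref)
    return np.fft.irfft(X * np.conj(R) / (np.abs(R) ** 2 + eps), n=len(x))

rng = np.random.default_rng(3)
ref = rng.normal(size=1024)     # "roof" waveform acting as the virtual source
delay = 7                        # invented travel time, in samples
x = np.roll(ref, delay)          # waveform at a lower level, delayed copy

irf = deconvolve(x, ref)
print(np.argmax(irf))            # peak lag recovers the travel-time delay
```

With real accelerometer data, the travel time between floors divided into the inter-floor distance yields the shear-wave velocity estimates quoted in the abstract.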
Söhn, Matthias; Alber, Markus; Yan, Di
2007-09-01
The variability of dose-volume histogram (DVH) shapes in a patient population can be quantified using principal component analysis (PCA). We applied this to rectal DVHs of prostate cancer patients and investigated the correlation of the PCA parameters with late bleeding. PCA was applied to the rectal wall DVHs of 262 patients, who had been treated with a four-field box, conformal adaptive radiotherapy technique. The correlated changes in the DVH pattern were revealed as "eigenmodes," which were ordered by their importance to represent data set variability. Each DVH is uniquely characterized by its principal components (PCs). The correlation of the first three PCs and chronic rectal bleeding of Grade 2 or greater was investigated with uni- and multivariate logistic regression analyses. Rectal wall DVHs in four-field conformal RT can primarily be represented by the first two or three PCs, which describe approximately 94% or 96% of the DVH shape variability, respectively. The first eigenmode models the total irradiated rectal volume; thus, PC1 correlates to the mean dose. Mode 2 describes the interpatient differences of the relative rectal volume in the two- or four-field overlap region. Mode 3 reveals correlations of volumes with intermediate doses (approximately 40–45 Gy) and volumes with doses >70 Gy; thus, PC3 is associated with the maximal dose. According to univariate logistic regression analysis, only PC2 correlated significantly with toxicity. However, multivariate logistic regression analysis with the first two or three PCs revealed an increased probability of bleeding for DVHs with more than one large PC. PCA can reveal the correlation structure of DVHs for a patient population as imposed by the treatment technique and provide information about its relationship to toxicity. It proves useful for augmenting normal tissue complication probability modeling approaches.
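The PCA described above reduces to a standard recipe: sample each patient's cumulative DVH on a common dose grid, remove the population mean, and take an SVD; the right singular vectors are the eigenmodes and the scaled left singular vectors are the per-patient PCs. The sketch below uses synthetic sigmoid-shaped curves in place of patient data, with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
dose = np.linspace(0, 80, 81)                 # common dose grid (Gy)
n_patients = 100

# synthetic cumulative DVHs: sigmoids with random midpoints and widths
mid = rng.normal(45, 5, n_patients)[:, None]
width = rng.normal(8, 1, n_patients)[:, None]
dvh = 1.0 / (1.0 + np.exp((dose - mid) / width))   # volume fraction vs dose

# PCA via SVD of the mean-centered DVH matrix (patients x dose bins)
centered = dvh - dvh.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

pcs = U * s                        # principal components, one row per patient
eigenmodes = Vt                    # DVH-shaped eigenmodes, one per row
explained = s**2 / np.sum(s**2)    # fraction of variance per eigenmode
print(explained[:3].sum())         # first few modes capture most variability
```

Here, too, a few modes capture nearly all the shape variability, mirroring the ~94-96% figure the abstract reports for the first two or three PCs of the real cohort.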
Correlation of Three Techniques for Determining Soil Permeability
ERIC Educational Resources Information Center
Winneberger, John T.
1974-01-01
Discusses problems of acquiring adequate results when measuring for soil permeability. Correlates three relatively simple techniques that could be helpful to the inexperienced technician dealing with septic tank practices. An appendix includes procedures for valid percolation tests. (MLB)
Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser
Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...
2015-11-27
In this study, hard X-ray free electron lasers allow for the first time to access dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptional large transverse coherence of the X-ray pulses and the high time-averaged flux promises to reach time and length scales that have not been accessible up to now with storage ring based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications for sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.
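The basic XPCS observable extracted in such studies is the intensity autocorrelation g2(τ) = ⟨I(t)I(t+τ)⟩ / ⟨I⟩², computed per speckle pixel over a sequence of frames and averaged. The sketch below evaluates it on a synthetic, exponentially correlated intensity series; the process parameters are invented for the example.

```python
import numpy as np

def g2(frames, max_tau=200):
    # frames: (n_times, n_pixels) intensity series
    # returns the pixel-averaged autocorrelation g2 for lags 1..max_tau-1
    mean_i = frames.mean(axis=0)
    out = []
    for tau in range(1, max_tau):
        num = (frames[:-tau] * frames[tau:]).mean(axis=0)
        out.append((num / mean_i**2).mean())
    return np.array(out)

rng = np.random.default_rng(5)
n_t, n_px, rho = 5000, 50, 0.9
x = np.zeros((n_t, n_px))
for t in range(1, n_t):                     # AR(1): exponential correlation
    x[t] = rho * x[t - 1] + rng.normal(0, 1, n_px)
frames = 1.0 + 0.1 * x                      # positive synthetic "intensities"

corr = g2(frames)
print(corr[0] > corr[50])                   # correlation decays with lag
```

The decay time of g2 gives the sample's relaxation time; the single-shot scheme in the abstract makes this measurable even with the fluctuating pulse-to-pulse intensity of a SASE source.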