Agrawal, Yuvraj; Desai, Aravind; Mehta, Jaysheel
2011-12-01
We aimed to quantify the severity of hallux valgus based on the lateral sesamoid position and to establish a correlation of our simple assessment method with the conventional radiological assessments. We reviewed 122 dorsoplantar weight-bearing radiographs of feet. The intermetatarsal and hallux valgus angles were measured by the conventional methods, and the position of the lateral sesamoid in relation to the first metatarsal neck was assessed by our new and simple method. Significant correlations were noted between the intermetatarsal angle and the lateral sesamoid position (Rho 0.74, p < 0.0001) and between the lateral sesamoid position and the hallux valgus angle (Rho 0.56, p < 0.0001). Similar trends were noted across the different grades of severity of hallux valgus with all three methods of assessment. Our method of assessing hallux valgus deformity based on the lateral sesamoid position is simple, less time-consuming and has a statistically significant correlation with the established conventional radiological measurements. Copyright © 2011 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
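The grading comparison above rests on Spearman's rank correlation (Rho). As a minimal illustration of how such a coefficient is computed, the sketch below correlates a hypothetical sesamoid-position grade with hypothetical intermetatarsal angles; the data are invented for illustration and are not the study's measurements.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the (tie-averaged) ranks."""
    def ranks(a):
        a = np.asarray(a, dtype=float)
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average the ranks of tied values
            tied = a == v
            r[tied] = r[tied].mean()
        return r

    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical readings: sesamoid position grade vs. intermetatarsal angle (deg)
sesamoid_grade = [1, 1, 2, 2, 3, 3, 4, 4]
ima_degrees = [8, 9, 10, 12, 13, 15, 16, 18]
rho = spearman_rho(sesamoid_grade, ima_degrees)
```

Because the grade is monotone in the angle here, rho comes out close to 1; ties in the grade keep it just below.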
AlBarakati, SF; Kula, KS; Ghoneima, AA
2012-01-01
Objective: The aim of this study was to assess the reliability and reproducibility of angular and linear measurements of conventional and digital cephalometric methods. Methods: A total of 13 landmarks and 16 skeletal and dental parameters were defined and measured on pre-treatment cephalometric radiographs of 30 patients. The conventional and digital tracings and measurements were performed twice by the same examiner, with a 6-week interval between measurements. The reliability within each method was determined using Pearson's correlation coefficient (r²). The reproducibility between methods was calculated by paired t-test. The level of statistical significance was set at p < 0.05. Results: All measurements for each method had an r² above 0.90 (strong correlation) except maxillary length, which had a correlation of 0.82 for conventional tracing. Significant differences between the two methods were observed in most angular and linear measurements, except for the ANB angle (p = 0.5), angle of convexity (p = 0.09), anterior cranial base (p = 0.3) and lower anterior facial height (p = 0.6). Conclusion: In general, both conventional and digital cephalometric analysis are highly reliable. Although the reproducibility of the two methods showed some statistically significant differences, most differences were not clinically significant. PMID:22184624
Nauleau, Pierre; Apostolakis, Iason; McGarry, Matthew; Konofagou, Elisa
2018-05-29
The stiffness of the arteries is known to be an indicator of the progression of various cardiovascular diseases. Clinically, the pulse wave velocity (PWV) is used as a surrogate for arterial stiffness. Pulse wave imaging (PWI) is a non-invasive, ultrasound-based imaging technique capable of mapping the motion of the vessel walls, allowing the local assessment of arterial properties. Conventionally, a distinctive feature of the displacement wave (e.g. the 50% upstroke) is tracked across the map to estimate the PWV. However, the presence of reflections, such as those generated at the carotid bifurcation, can bias the PWV estimation. In this paper, we propose a two-step cross-correlation based method to characterize arteries using the information available in the PWI spatio-temporal map. First, the area under the cross-correlation curve is proposed as an index for locating the regions of different properties. Second, a local peak of the cross-correlation function is tracked to obtain a less biased estimate of the PWV. Three series of experiments were conducted in phantoms to evaluate the capabilities of the proposed method compared with the conventional method. In the ideal case of a homogeneous phantom, the two methods performed similarly and correctly estimated the PWV. In the presence of reflections, the proposed method provided a more accurate estimate than conventional processing: e.g. for the soft phantom, biases of -0.27 and -0.71 m·s⁻¹ were observed. In a third series of experiments, the correlation-based method was able to locate two regions of different properties with an error smaller than 1 mm. It also provided more accurate PWV estimates than conventional processing (biases: -0.12 versus -0.26 m·s⁻¹). Finally, the in vivo feasibility of the proposed method was demonstrated in eleven healthy subjects. The results indicate that the correlation-based method might be less precise in vivo but more accurate than the conventional method.
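The conventional PWV estimator this paper compares against — tracking the 50% upstroke of the displacement wave across lateral positions — can be sketched as follows on a synthetic spatio-temporal map. The frame rate, lateral pitch, and wave shape are assumed values, not the study's acquisition settings.

```python
import numpy as np

fs = 10_000.0              # frame rate (Hz) -- assumed acquisition parameter
dx = 0.2e-3                # lateral pitch between wall positions (m)
true_pwv = 5.0             # wave speed used to synthesise the map (m/s)

t = np.arange(0, 0.02, 1.0 / fs)
x = np.arange(64) * dx
# Spatio-temporal displacement map: a Gaussian pulse travelling along the wall
maps = np.exp(-((t[None, :] - 0.005 - x[:, None] / true_pwv) ** 2)
              / (2 * 0.0005 ** 2))

def upstroke_time(w, t, level=0.5):
    """Time at which the waveform first reaches `level` of its maximum,
    with linear interpolation between frames."""
    thr = level * w.max()
    i = int(np.argmax(w >= thr))
    if i == 0:
        return t[0]
    f = (thr - w[i - 1]) / (w[i] - w[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])

# Track the 50% upstroke across positions, regress arrival time on distance:
# the PWV is the reciprocal of the fitted slope.
arrivals = np.array([upstroke_time(maps[k], t) for k in range(len(x))])
slope = np.polyfit(x, arrivals, 1)[0]   # seconds per metre
pwv_est = 1.0 / slope                   # m/s
```

On this clean synthetic map the estimate recovers the true speed; the paper's point is that reflected waves distort the upstroke times and bias exactly this regression.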
Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi
2011-01-01
AQCEL enables automatic reconstruction of single-photon emission computed tomography (SPECT) images without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained with the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemispheres of the 12 patients after ACZ loading calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). In the quantitative estimation with ACZ, the mean CBF values of the 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995.
While comparing the 24 3DSRT regions of the 12 patients, the squares of the correlation coefficient between the AQCEL and conventional methods were 0.973 and 0.986 for the normal and affected sides at rest, respectively, and 0.977 and 0.984 for the normal and affected sides after ACZ loading, respectively. The quality of images reconstructed using the application software AQCEL was superior to that obtained using the conventional method after ACZ loading, and high quantitative correlations were shown at rest and after ACZ loading. This software can be applied in clinical practice and is a useful tool for improving reproducibility and throughput.
Student Orientations to Independent Learning.
ERIC Educational Resources Information Center
Jones, Alice; Jones, Douglas
1996-01-01
A study investigated the relationship of 46 college students' preferred teaching method (conventional lecture versus independent study package) and their own approaches to study (surface, deep, achieving). Results indicated that while students preferred the conventional lecture method, preference did not correlate with their study approach and…
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists of searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. By tracking possible changes of correlation sign, this generalization can identify transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
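A toy numerical sketch of the penalized weighted least-squares idea described above: with correlated noise, weighting by the inverse of a banded (here tridiagonal) covariance stays computationally tractable while using the correlation structure that the conventional independence assumption discards. The system matrix, correlation strength, and penalty are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 16, 8
A = rng.standard_normal((n_meas, n_pix))      # toy system (projection) matrix
x_true = rng.random(n_pix)

# Banded covariance: unit variance, correlation 0.4 between neighbouring
# measurements -- a 1-D (e.g. axial) correlation model with assumed values.
Sigma = (np.eye(n_meas)
         + 0.4 * (np.eye(n_meas, k=1) + np.eye(n_meas, k=-1)))
y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), 1e-4 * Sigma)

beta = 1e-6                                   # small quadratic penalty
W = np.linalg.inv(Sigma)                      # weighting = inverse covariance
# Penalized weighted least squares, with and without the correlation model
x_wls = np.linalg.solve(A.T @ W @ A + beta * np.eye(n_pix), A.T @ W @ y)
x_ols = np.linalg.solve(A.T @ A + beta * np.eye(n_pix), A.T @ y)
```

Both solves cost the same order of work here; the point of the paper's low-dimensional correlation models is that W stays cheap to apply even at realistic sizes.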
New method: calculation of magnification factor from an intracardiac marker.
Cha, S D; Incarvito, J; Maranhao, V
1983-01-01
In order to calculate a magnification factor (MF), an intracardiac marker (pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. By applying the Pythagorean theorem and trigonometry, a new formula was developed (formula; see text). In an experimental study, the MF by the intracardiac markers was 0.71 +/- 0.15 (M +/- SD) and that by the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF by the intracardiac markers was 0.77 +/- 0.06 and that by the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and that its results are comparable to those of the conventional grid method at mid-chest level.
[Nitrogen status diagnosis of rice by using a digital camera].
Jia, Liang-Liang; Fan, Ming-Sheng; Zhang, Fu-Suo; Chen, Xin-Ping; Lü, Shi-Hua; Sun, Yan-Ming
2009-08-01
In the present research, a field experiment with different N application rates was conducted to study the possibility of using visible-band color analysis to monitor the N status of a rice canopy. The correlations between the visible-band color intensities of rice canopy images acquired with a digital camera and conventional nitrogen status diagnosis parameters (leaf SPAD chlorophyll meter readings, total N content, aboveground biomass and N uptake) were studied. The results showed that the red color intensity (R), green color intensity (G) and normalized redness intensity (NRI) had significant inverse linear correlations with the conventional N diagnosis parameters. The correlation coefficients (r) were from -0.561 to -0.714 for the red band (R), from -0.452 to -0.505 for the green band (G), and from -0.541 to -0.817 for the normalized redness intensity (NRI). The normalized greenness intensity (NGI), by contrast, showed a significant positive correlation with the conventional N parameters, with correlation coefficients (r) from 0.505 to 0.559. Compared with SPAD readings, the normalized redness intensity (NRI), with absolute r values of 0.541-0.780 against the conventional N parameters, could better express the N status of rice. The digital image color analysis method thus shows potential for use in rice N status diagnosis.
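As an illustration of the color indices involved, the sketch below computes R, G, NRI and NGI from an RGB canopy image. Normalising the channel means is one common definition; whether the paper normalises per pixel or per image mean is an assumption here, and the patch values are invented.

```python
import numpy as np

def color_indices(rgb):
    """Mean channel intensities plus normalized redness/greenness indices.

    NRI = R / (R + G + B) and NGI = G / (R + G + B), computed here from the
    channel means of the canopy image."""
    r, g, b = (float(rgb[..., k].mean()) for k in range(3))
    total = r + g + b
    return {"R": r, "G": g, "NRI": r / total, "NGI": g / total}

# Hypothetical 4x4 canopy patch: mostly green with a reddish component
img = np.zeros((4, 4, 3))
img[..., 0] = 60.0    # red channel
img[..., 1] = 120.0   # green channel
img[..., 2] = 40.0    # blue channel
idx = color_indices(img)
```

In the study's setting, indices like these would then be regressed against SPAD readings or total N content across plots.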
[Evaluation of Wits appraisal with superimposition method].
Xu, T; Ahn, J; Baumrind, S
1999-07-01
To compare the conventional Wits appraisal with a superimposed Wits appraisal in evaluating the change in sagittal jaw relationship between pre- and post-orthodontic treatment. The sample consisted of pre- and post-treatment lateral head films from 48 cases. Computerized digitizing was used to locate the cephalometric landmarks and to measure the conventional Wits value, the superimposed Wits value and the ANB angle. Correlation analysis among these three measures was performed with the SAS statistical package. The change in ANB angle had a higher correlation with the change in the superimposed Wits than with that of the conventional Wits; the r-value was as high as 0.849 (P < 0.001). The superimposed Wits appraisal reflects the change in sagittal jaw relationship more objectively than the conventional one.
Hans-Erik Andersen; Stephen E. Reutebuch; Robert J. McGaughey
2006-01-01
Tree height is an important variable in forest inventory programs but is typically time-consuming and costly to measure in the field using conventional techniques. Airborne light detection and ranging (LIDAR) provides individual tree height measurements that are highly correlated with field-derived measurements, but the imprecision of conventional field techniques does...
Bioactive Compounds in Potato Tubers: Effects of Farming System, Cooking Method, and Flesh Color
Czerko, Zbigniew; Zarzyńska, Krystyna; Borowska-Komenda, Monika
2016-01-01
We investigated the effect of cultivation system (conventional or organic), cooking method, and flesh color on the contents of ascorbic acid (AA) and total phenolics (TPs), and on total antioxidant activity (Trolox equivalents, TE) in Solanum tuberosum (potato) tubers. The research material, consisting of 4 potato cultivars, was grown in experimental fields, using organic and conventional systems, at the experimental station in 2012 and 2013. The analysis showed that organically grown potatoes with creamy, light yellow, and yellow flesh had significantly higher TPs than did potatoes grown conventionally. Flesh color and cooking method also affected AA. The greatest losses of AA occurred in yellow-fleshed potatoes grown conventionally and cooked in the microwave; such losses were not observed in potatoes grown organically. A dry cooking method (baking in a microwave) increased the TP contents in potatoes by about 30%, regardless of the flesh color and the production system. TE was significantly higher in organically grown potatoes (raw and cooked in a steamer) than in conventionally grown potatoes. TE and AA contents showed a significant positive correlation, but only in potatoes from the organic system (R² = 0.686). By contrast, the positive correlation between TE and TPs was observed regardless of the production system. Therefore, we have identified the effects of farming system, cooking method, and flesh color on the contents of bioactive compounds in potato tubers. PMID:27139188
The Use of an Intra-Articular Depth Guide in the Measurement of Partial Thickness Rotator Cuff Tears
Carroll, Michael J.; More, Kristie D.; Sohmer, Stephen; Nelson, Atiba A.; Sciore, Paul; Boorman, Richard; Hollinshead, Robert; Lo, Ian K. Y.
2013-01-01
Purpose. The purpose of this study was to compare the accuracy of the conventional method for determining the percentage of partial thickness rotator cuff tears to a method using an intra-articular depth guide. The clinical utility of the intra-articular depth guide was also examined. Methods. Partial rotator cuff tears were created in cadaveric shoulders. Exposed footprint, total tendon thickness, and percentage of tendon thickness torn were determined using both techniques. The results from the conventional and intra-articular depth guide methods were correlated with the true anatomic measurements. Thirty-two patients were evaluated in the clinical study. Results. Estimates of total tendon thickness (r = 0.41, P = 0.31) or percentage of thickness tears (r = 0.67, P = 0.07) using the conventional method did not correlate well with true tendon thickness. Using the intra-articular depth guide, estimates of exposed footprint (r = 0.92, P = 0.001), total tendon thickness (r = 0.96, P = 0.0001), and percentage of tendon thickness torn (r = 0.88, P = 0.004) correlated with true anatomic measurements. Seven of 32 patients had their treatment plan altered based on the measurements made by the intra-articular depth guide. Conclusions. The intra-articular depth guide appeared to better correlate with true anatomic measurements. It may be useful during the evaluation and development of treatment plans for partial thickness articular surface rotator cuff tears. PMID:23533789
Manna, Debashree; Kesharwani, Manoj K; Sylvetsky, Nitai; Martin, Jan M L
2017-07-11
Benchmark ab initio energies for the BEGDB and WATER27 data sets have been re-examined at the MP2 and CCSD(T) levels with both conventional and explicitly correlated (F12) approaches. The basis set convergence of both conventional and explicitly correlated methods has been investigated in detail, both with and without counterpoise corrections. For the MP2 and CCSD-MP2 contributions, the explicitly correlated methods converge much more rapidly with the basis set than the conventional ones. However, conventional, orbital-based calculations are preferred for the calculation of the (T) term, since it does not benefit from F12. For the CCSD-MP2 term, CCSD(F12*) converges somewhat faster with the basis set than CCSD-F12b. The performance of various DFT methods is also evaluated for the BEGDB data set; the results show that Head-Gordon's ωB97X-V and ωB97M-V functionals outperform all other DFT functionals. Counterpoise-corrected DSD-PBEP86 and raw DSD-PBEPBE-NL also perform well and are close to MP2 results. In the WATER27 data set, the anionic (deprotonated) water clusters exhibit unacceptably slow basis set convergence with the regular cc-pVnZ-F12 basis sets, which have only diffuse s and p functions. To overcome this, we have constructed modified basis sets, denoted aug-cc-pVnZ-F12 or aVnZ-F12, which have been augmented with diffuse functions on the higher angular momenta. The calculated final dissociation energies for the BEGDB and WATER27 data sets are available in the Supporting Information. Our best calculated dissociation energies can be reproduced through an n-body expansion, provided one pushes to the basis set and electron correlation limit for the two-body term; for the three-body term, post-MP2 contributions (particularly CCSD-MP2) are important for capturing three-body dispersion effects. Terms beyond four-body can be adequately captured at the MP2-F12 level.
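Basis-set-limit estimates of the kind discussed above are commonly obtained by two-point extrapolation. The sketch below implements the standard correlation-energy formula E(n) = E_CBS + A/n³; the energies and cardinal numbers are hypothetical, and the paper's actual extrapolation protocol may differ (the exponent and pairing are method- and basis-dependent).

```python
def cbs_two_point(e_small, n_small, e_large, n_large):
    """Two-point complete-basis-set extrapolation assuming
    E(n) = E_CBS + A / n**3, solved exactly from two cardinal numbers."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# Hypothetical MP2 correlation energies (hartree) in n=3 (TZ) and n=4 (QZ) sets
e_cbs = cbs_two_point(-0.300, 3, -0.310, 4)
```

The extrapolated value lies below the largest-basis result, as expected for a monotonically converging correlation energy.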
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
PMID:28902154
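The generalised cross-correlation framework referred to above can be sketched as follows. This toy version uses a regularised PHAT-style prefilter rather than the paper's modified ML prefilter (which weights bins by estimated coherence), and the sampling rate, delay, and signals are invented.

```python
import numpy as np

def gcc_delay(x1, x2, fs, eps=1e-8):
    """Generalised cross-correlation delay estimate with a regularised
    PHAT-style prefilter 1/(|G|+eps). Returns tau > 0 when x2 lags x1."""
    n = 2 * len(x1)                             # zero-pad to avoid wrap-around
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    G = X2 * np.conj(X1)                        # cross-spectrum
    r = np.fft.irfft(G / (np.abs(G) + eps), n)  # prefiltered correlation
    lags = np.arange(n)
    lags[lags >= n // 2] -= n                   # map FFT indices to signed lags
    return lags[int(np.argmax(r))] / fs

fs = 8000.0
rng = np.random.default_rng(1)
noise = rng.standard_normal(4096 + 37)          # leak-like broadband source
x1 = noise[37:]                                 # sensor nearer the leak
x2 = noise[:-37]                                # same signal, 37 samples later
tau = gcc_delay(x1, x2, fs)
```

Given the estimated tau and a propagation speed c for a single pipe type, the leak distance from sensor 1 follows from d1 = (D - c*tau)/2 for sensors a distance D apart; the paper generalises this to connected pipes of different types.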
A NEW METHOD OF SWEAT TESTING: THE CF QUANTUM® SWEAT TEST
Rock, Michael J.; Makholm, Linda; Eickhoff, Jens
2015-01-01
Background: Conventional methods of sweat testing are time-consuming and involve many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and tests the diagnostic accuracy and analytic validity of the CFQT. Methods: Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test were plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficients of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results: The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97–0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94–100%) and 96% (95% confidence interval: 89–99%), respectively. In one center of this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher with the CFQT method (16.5%) than with conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions: The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724
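Sensitivity and specificity figures like those quoted above come from a standard confusion-matrix calculation, sketched below on an invented cohort (the counts are illustrative, not the study's data).

```python
def sensitivity_specificity(results):
    """Sensitivity and specificity from (test_positive, has_disease) pairs."""
    tp = sum(1 for test, disease in results if test and disease)
    fn = sum(1 for test, disease in results if not test and disease)
    tn = sum(1 for test, disease in results if not test and not disease)
    fp = sum(1 for test, disease in results if test and not disease)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: every CF patient flagged, 3 of 78 controls false-positive
cohort = ([(True, True)] * 60
          + [(False, False)] * 75
          + [(True, False)] * 3)
sens, spec = sensitivity_specificity(cohort)
```

The confidence intervals reported in the abstract would then be computed from these proportions (e.g. by an exact binomial method).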
Speckle-field propagation in 'frozen' turbulence: brightness function approach
NASA Astrophysics Data System (ADS)
Dudorov, Vadim V.; Vorontsov, Mikhail A.; Kolosov, Valeriy V.
2006-08-01
Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.
Development of ocular viscosity characterization method.
Shu-Hao Lu; Guo-Zhen Chen; Leung, Stanley Y Y; Lam, David C C
2016-08-01
Glaucoma is the second leading cause of blindness. Irreversible and progressive optic nerve damage results when the intraocular pressure (IOP) exceeds 21 mmHg. The elevated IOP is attributed to blocked fluid drainage from the eye. Methods to measure the IOP are widely available, but methods to measure the viscous response to blocked drainage have yet to be developed. An indentation method to characterize ocular flow is developed in this study. Analysis of the load-relaxation data from indentation tests on drainage-controlled porcine eyes showed that blocked drainage is correlated with increases in ocular viscosity. The successful correlation of ocular viscosity with drainage suggests that ocular viscosity may be further developed as a new diagnostic parameter for the assessment of normal-tension glaucoma, where nerve damage occurs without noticeable IOP elevation, and as a diagnostic parameter complementary to conventional IOP in conventional diagnosis.
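A minimal sketch of extracting a characteristic relaxation time from indentation load-relaxation data, assuming a single-exponential (Maxwell-type) decay. The curve and parameters below are synthetic, and the study's viscosity characterization is likely more elaborate than this.

```python
import numpy as np

def relaxation_time(t, load, keep=0.05):
    """Fit load(t) = L_inf + A*exp(-t/tau) by a log-linear fit after
    subtracting the long-time plateau; returns tau."""
    decay = load - load[-1]          # plateau estimated from the last sample
    mask = decay > keep * decay[0]   # keep the early, clearly decaying part
    slope = np.polyfit(t[mask], np.log(decay[mask]), 1)[0]
    return -1.0 / slope

# Synthetic load-relaxation curve with assumed parameters (load in N, time in s)
t = np.linspace(0.0, 10.0, 500)
load = 2.0 + 3.0 * np.exp(-t / 1.5)
tau = relaxation_time(t, load)
```

A longer tau would correspond to a more viscous response, which is the direction the study associates with blocked drainage.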
Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping
2018-01-01
Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied a beneficial approach of microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to its hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium- and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery and reproducibility. There was a significant correlation in milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can be a substitute for the conventional method.
A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network
Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing
2018-01-01
Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate the GPS code/Doppler from the correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed in this article to recognize the GPS acquisition correlation envelope. The proposed method is a double-dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second. To reduce the parameter search space, BCNN detects the candidate envelope containing the auto-correlation peak in the first dwell, compressing the initial search space to 1/1023 of its size. Although a long integration is used in the second dwell, the acquisition computation overhead remains low owing to the compressed search space. Overall, the total computation overhead of the proposed method is only 1/5 that of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves a 2 dB improvement over MAX/TC under the same specification. PMID:29747373
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. It also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
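The distance-based detectors discussed above can be sketched generically. The following is a minimal illustration, not the authors' MED implementation: it flags a water-quality sample whose distance from the mean of a trailing baseline window exceeds a multiple of the baseline's spread. The window size and threshold `k` are arbitrary choices here.

```python
import numpy as np

def distance_alarm(series, win=20, k=3.0):
    # Generic sketch of a distance-threshold event detector: flag a sample
    # when its distance from the trailing-window mean exceeds k baseline
    # standard deviations (the small epsilon avoids division-free zero std).
    alarms = np.zeros(len(series), dtype=bool)
    for t in range(win, len(series)):
        base = series[t - win:t]
        alarms[t] = abs(series[t] - base.mean()) > k * (base.std() + 1e-12)
    return alarms
```

A detector of this shape readily catches sudden spike-like variation, which is consistent with the study's observation that MED/LPF-style methods favor spikes over gradual real-world contamination signatures.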
Gender-Specific Correlates of Complementary and Alternative Medicine Use for Knee Osteoarthritis
Yang, Shibing; Eaton, Charles B.; McAlindon, Timothy; Lapane, Kate L.
2012-01-01
Background: Knee osteoarthritis (OA) increases healthcare use and cost. Women have higher pain and lower quality of life measures compared to men even after accounting for differences in age, body mass index (BMI), and radiographic OA severity. Our objective was to describe gender-specific correlates of complementary and alternative medicine (CAM) use among persons with radiographically confirmed knee OA. Methods: Using data from the Osteoarthritis Initiative, 2,679 women and men with radiographic tibiofemoral OA in at least one knee were identified. Treatment approaches were classified as current CAM therapy (alternative medical systems, mind-body interventions, manipulation and body-based methods, energy therapies, and three types of biologically based therapies) or conventional medication use (over-the-counter or prescription). Gender-specific multivariable logistic regression models identified sociodemographic and clinical/functional correlates of CAM use. Results: CAM use, either alone (23.9% women, 21.9% men) or with conventional medications (27.3% women, 19.0% men), was common. Glucosamine use (27.2% women, 28.2% men) and chondroitin sulfate use (24.8% women; 25.7% men) did not differ by gender. Compared to men, women were more likely to report use of mind-body interventions (14.1% vs. 5.7%), topical agents (16.1% vs. 9.5%), and concurrent CAM strategies (18.0% vs. 9.9%). Higher quality of life measures and physical function indices in women were inversely associated with any therapy, and higher pain scores were positively associated with conventional medication use. History of hip replacement was a strong correlate of conventional medication use in women but not in men. Conclusions: Women were more likely than men to use CAM alone or concomitantly with conventional medications. PMID:22946630
Clustering Coefficients for Correlation Networks.
Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu
2018-01-01
Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly correlated with, and may therefore be confounded by, the node's connectivity. The proposed methods are expected to help us to understand clustering and the lack thereof in correlational brain networks, such as those derived from functional time series and across-participant correlation in neuroanatomical properties. PMID:29599714
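The pseudo-correlation correction at the heart of the proposal can be illustrated with the standard first-order partial correlation formula. This is a simplified sketch, not the authors' exact estimator: it scores each neighbor pair (j, k) of a focal node i by the product of the two edge correlations and the pair's partial correlation given i, so correlation induced indirectly through i is discounted.

```python
import numpy as np

def partial_corr(r_jk, r_ij, r_ik):
    # Correlation between neighbors j and k after removing the linear
    # effect of the focal node i (first-order partial correlation).
    return (r_jk - r_ij * r_ik) / np.sqrt((1 - r_ij**2) * (1 - r_ik**2))

def local_clustering(C, i):
    # Hypothetical local coefficient (illustration only): average over
    # neighbor pairs (j, k) of the product of the two edge correlations
    # and the partial correlation of (j, k) given i.
    n = C.shape[0]
    others = [j for j in range(n) if j != i]
    total, count = 0.0, 0
    for a in range(len(others)):
        for b in range(a + 1, len(others)):
            j, k = others[a], others[b]
            total += C[i, j] * C[i, k] * partial_corr(C[j, k], C[i, j], C[i, k])
            count += 1
    return total / count
```

If two neighbors correlate only because both correlate with the focal node, the partial correlation (and hence the triangle's contribution) collapses toward zero, which is exactly the pseudo-correlation problem the abstract describes.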
Simultaneous 3D MR elastography of the in vivo mouse brain
NASA Astrophysics Data System (ADS)
Kearney, Steven P.; Majumdar, Shreyan; Royston, Thomas J.; Klatt, Dieter
2017-10-01
The feasibility of sample interval modulation (SLIM) magnetic resonance elastography (MRE) for the in vivo mouse brain is assessed, and an alternative SLIM-MRE encoding method is introduced. In SLIM-MRE, the phase accumulation for each motion direction is encoded simultaneously by varying either the start time of the motion encoding gradient (MEG), SLIM-phase constant (SLIM-PC), or the initial phase of the MEG, SLIM-phase varying (SLIM-PV). SLIM-PC provides gradient moment nulling, but the mutual gradient shift necessitates an increased echo time (TE). SLIM-PV requires no increased TE, but exhibits non-uniform flow compensation. Comparison was made to conventional MRE using six C57BL/6 mice. For SLIM-PC, the Spearman's rank correlation to conventional MRE for the shear storage and loss modulus images was 80% and 76%, respectively; likewise for SLIM-PV, 73% and 69%. The Wilcoxon rank sum test showed no statistically significant differences between the spatially averaged shear moduli derived from the conventional MRE, SLIM-PC, and SLIM-PV acquisitions. Both SLIM approaches were comparable to conventional MRE scans, with Spearman's rank correlations of 69%-80% and a threefold reduction in scan time. The SLIM-PC method had the best correlation, and SLIM-PV may be a useful tool in experimental conditions where both measurement time and T2 relaxation are critical.
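The kind of voxel-wise comparison reported above, Spearman's rank correlation between conventional and SLIM-derived modulus maps, can be reproduced on synthetic data. The arrays below are hypothetical stand-ins for the real elastograms, and the rank correlation is implemented directly with numpy (no tie handling, which is adequate for continuous-valued data):

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rank correlation: Pearson correlation of the
    # rank-transformed data (double argsort yields 0-based ranks).
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
conventional = rng.normal(5.0, 1.0, 500)           # hypothetical shear moduli (kPa)
slim = conventional + rng.normal(0.0, 0.5, 500)    # correlated SLIM-derived values
rho = spearman_rho(conventional, slim)
```

Because rank correlation is invariant to monotone rescaling, it is a natural choice for comparing modulus maps whose absolute calibration may differ between acquisitions.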
The Role of Conventional Ultrasound in the Assessment of Thyroid Nodule in Erbil City
ERIC Educational Resources Information Center
Musa, Sarbast Ismail; Hanary, Salah Mohammad
2016-01-01
Background: Nodular thyroid disease is relatively common although thyroid cancer is rare. The aim of this study is to evaluate the advantage and reliability of conventional ultrasound in correlating sonographic characteristics of thyroid nodule with US-FNAC guided result as a diagnostic aid in thyroid nodule. Method: 111 patients were examined by…
Feasibility study consisting of a review of contour generation methods from stereograms
NASA Technical Reports Server (NTRS)
Kim, C. J.; Wyant, J. C.
1980-01-01
A review of techniques for obtaining contour information from stereo pairs is given. Photogrammetric principles, including a description of stereoscopic vision, are presented. Conventional contour generation methods, such as the photogrammetric plotting technique, the electronic correlator, and the digital correlator, are described. Coherent optical techniques for contour generation are discussed and compared to the electronic correlator. The optical techniques are divided into two categories: (1) image plane operation and (2) frequency plane operation. The image plane correlators are further divided into three categories: (1) image-to-image correlators, (2) interferometric correlators, and (3) positive-negative transparencies. The frequency plane correlators are divided into two categories: (1) correlation of Fourier transforms and (2) filtering techniques.
Optical calculation of correlation filters for a robotic vision system
NASA Technical Reports Server (NTRS)
Knopp, Jerome
1989-01-01
A method is presented for designing optical correlation filters based on measuring three intensity patterns: the Fourier transform of a filter object, a reference wave, and the interference pattern produced by the sum of the object transform and the reference. The method can produce a filter that is well matched to the object, its transforming optical system, and the spatial light modulator used in the correlator input plane. A computer simulation was presented to demonstrate the approach for the special case of a conventional binary phase-only filter. The simulation produced a workable filter with a sharp correlation peak.
NASA Astrophysics Data System (ADS)
Ja'fari, Ahmad; Hamidzadeh Moghadam, Rasoul
2012-10-01
Routine core analysis provides useful information for petrophysical study of hydrocarbon reservoirs. Effective porosity and fluid conductivity (permeability) can be obtained from core analysis in the laboratory. However, coring hydrocarbon-bearing intervals and analyzing the obtained cores in the laboratory is expensive and time-consuming. In this study, an improved method is proposed for making a quantitative correlation between porosity and permeability obtained from cores and conventional well log data by integrating different artificial intelligence systems. The proposed method combines the results of adaptive neuro-fuzzy inference system (ANFIS) and neural network (NN) algorithms for overall estimation of core data from conventional well log data, multiplying the output of each algorithm by a weight factor. Simple averaging and weighted averaging were used to determine the weight factors; in the weighted averaging method, a genetic algorithm (GA) determines the weight factors. The overall algorithm was applied in one of SW Iran's oil fields with two cored wells. One-third of the data were used as the test dataset and the rest for training the networks. Results show that the output of the GA averaging method provided the best mean square error and also the best correlation coefficient with real core data.
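The weighted-averaging step can be sketched as follows. This is an illustration only: a plain grid search over a single weight stands in for the genetic algorithm the authors use, and `pred_a`/`pred_b` are hypothetical names for the ANFIS and NN outputs.

```python
import numpy as np

def combine(pred_a, pred_b, target, n_grid=101):
    # Weighted averaging of two estimators' outputs. The paper tunes the
    # weights with a genetic algorithm; here a simple grid search over
    # w in [0, 1] minimizes the mean square error against the target.
    best_w, best_mse = 0.0, np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        mse = np.mean((w * pred_a + (1.0 - w) * pred_b - target) ** 2)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w, best_mse
```

In the study's setting the target would be the measured core porosity or permeability on the training wells, and the fitted weight would then be applied to the held-out test data.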
A new method of sweat testing: the CF Quantum®sweat test.
Rock, Michael J; Makholm, Linda; Eickhoff, Jens
2014-09-01
Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test. This study tests the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and Bland-Altman plot. Sensitivity and specificity were calculated as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference of the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF was 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center in this three center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher in the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
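The sensitivity and specificity figures quoted above follow from the standard 2x2 confusion-table definitions. A minimal sketch, with hypothetical boolean arrays marking test positivity and true disease status:

```python
import numpy as np

def diagnostic_accuracy(test_positive, has_disease):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    tp = np.sum(test_positive & has_disease)
    fn = np.sum(~test_positive & has_disease)
    tn = np.sum(~test_positive & ~has_disease)
    fp = np.sum(test_positive & ~has_disease)
    return tp / (tp + fn), tn / (tn + fp)
```

The confidence intervals reported in the abstract would additionally require an exact or approximate binomial interval around each proportion.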
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both produce false events caused by this noise. Wavelet transforms and higher-order statistics are very useful tools in modern signal processing: the multiresolution analysis of wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise against which the conventional correlation function is ineffective. Based on the theory of wavelet transforms and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction using weights calculated from higher-order correlation statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and compressing correlated random noise.
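Correlation-based weighted stacking can be sketched in simplified form. The illustration below uses plain time-domain, conventional-correlation weights; the paper's HOCWS method instead computes the weights from higher-order statistics in the wavelet domain, which this sketch deliberately omits.

```python
import numpy as np

def weighted_stack(gather):
    # gather: (n_traces, n_samples) NMO-corrected CMP gather. Weight each
    # trace by its zero-lag normalized correlation with the pilot (mean)
    # trace, suppress anti-correlated traces, then form the weighted stack.
    pilot = gather.mean(axis=0)
    w = np.array([np.dot(t, pilot) / (np.linalg.norm(t) * np.linalg.norm(pilot) + 1e-12)
                  for t in gather])
    w = np.clip(w, 0.0, None)          # discard negative (anti-correlated) weights
    return (w[:, None] * gather).sum(axis=0) / (w.sum() + 1e-12)
```

Traces that disagree with the pilot receive low weight, which is the mechanism by which weighted stacking suppresses noisy traces relative to a plain average.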
Sow, Doudou; Dieng, Yémou; Haouchine, Djamal; Niang, Khadim; Niang, Thiane; Sylla, Khadime; Tine, Roger Clément; Ndiaye, Magatte; Ndiaye, Jean Louis; Faye, Babacar; Faye, Omar; Gaye, Oumar; Dieng, Thérèse; Izri, Arezki
2017-09-01
In the context of controlling intestinal parasites, accurate diagnosis is essential. Our objective was to evaluate the performance of new diagnostic kits compared to conventional microscopic methods in identifying intestinal parasites. Faeces collected in a rural area of Senegal were subjected to several detection techniques. The sensitivity, specificity, and positive and negative predictive values of the new diagnostic techniques were compared to the conventional merthiolate-iodine-formalin, conventional Bailenger, and modified Ritchie methods. Furthermore, the kappa coefficient was calculated to evaluate the correlation between the new kits and modified Ritchie. Of the 117 patients examined, 102 presented with a parasite, a prevalence of 87.1%. The Fumouze techniques proved as effective as the conventional methods in detecting flagellates and helminths, with sensitivities ranging from 97 to 100%. However, conventional techniques were slightly more sensitive in identifying Endolimax nana and Blastocystis hominis. The correlation was nearly perfect (k = 0.83 and 1) between Bailenger Fumouze and modified Ritchie and between Iodesine Fumouze and modified Ritchie in identifying helminths, while it was only acceptable (k = 0.27 and 0.28) in identifying B. hominis. The modified Ritchie technique routinely used in our laboratory remains a good diagnostic tool. However, the kit techniques were advantageous when reading the pellet after concentration, and the Colour KOP staining was a considerable aid in the diagnosis of vegetative forms. It would therefore be interesting to determine the cost of a stool test using the Fumouze kit techniques to establish the most cost-effective approach.
Comparison between Bactec Peds Plus F Broth and Conventional Medium for Vitreous Culture.
Tabatabaei, Seyed Ali; Tabatabaei, Seyed Mehdi; Soleimani, Mohammad; Hejrati, Bahman; Mirshahi, Ahmad; Khadabandeh, Alireza; Ahmadraji, Aliasghar; Valipour, Niloofar
2018-05-10
To evaluate the yield of Bactec Peds Plus F broth for vitreous sample culture in cases of infectious endophthalmitis, in comparison to conventional media. Consecutive cases of clinically suspected endophthalmitis were prospectively enrolled in this study. Cultures of the vitreous sample were performed in both Bactec Peds Plus F broth and conventional media. Forty eyes of 40 patients clinically suspected of infectious endophthalmitis of different etiologies were enrolled. The positive culture yield was 50% and 35% in Bactec Peds Plus F broth and conventional media, respectively (p = 0.07). The result in the Bactec group did not differ significantly among patients with a history of intravitreal antibiotic injection (p > 0.05) (Table 2). However, results of the conventional method were significantly more often negative in the prior intravitreal antibiotic injection group (p = 0.02). There was no correlation between the method of vitreous sampling and either culture method. Although the difference between the two culture methods was not statistically significant in this study, Bactec Peds Plus F broth showed a higher positive culture yield in patients with a history of intravitreal antibiotic injection.
Pickup, William; Bremer, Phil; Peng, Mei
2018-03-01
The extensive time and cost associated with conventional sensory profiling methods has spurred sensory researchers to develop rapid method alternatives, such as Napping® with Ultra-Flash Profiling (UFP). Napping®-UFP generates sensory maps by requiring untrained panellists to separate samples based on perceived sensory similarities. Evaluations of this method have been restricted to manufactured/formulated food models, and predominantly structured on comparisons against the conventional descriptive method. The present study aims to extend the validation of Napping®-UFP (N = 72) to natural biological products, and to evaluate this method against Descriptive Analysis (DA; N = 8) with physicochemical measurements as an additional evaluative criterion. The results revealed that sample configurations generated by DA and Napping®-UFP were not significantly correlated (RV = 0.425, P = 0.077); however, both were correlated with the product map generated from the instrumental measures (P < 0.05). The findings also showed that sample characterisations from DA and Napping®-UFP were driven by different sensory attributes, indicating potential structural differences between the two methods in configuring samples. Overall, these findings lend support to the extended use of Napping®-UFP for evaluations of natural biological products. Although DA was shown to be the better method for establishing sensory-instrumental relationships, Napping®-UFP exhibited strengths in generating informative sample configurations based on holistic perception of products. © 2017 Society of Chemical Industry.
Kim, Sung Jae; Kim, Sung Hwan; Kim, Young Hwan; Chun, Yong Min
2015-01-01
The authors have observed a failure to achieve secure fixation in elderly patients when inserting a half-pin at the anteromedial surface of the tibia. The purpose of this study was to compare two methods for inserting a half-pin at tibia diaphysis in elderly patients. Twenty cadaveric tibias were divided into Group C or V. A half-pin was inserted into the tibias of Group C via the conventional method, from the anteromedial surface to the interosseous border of the tibia diaphysis, and into the tibias of Group V via the vertical method, from the anterior border to the posterior surface at the same level. The maximum insertion torque was measured during the bicortical insertion with a torque driver. The thickness of the cortex was measured by micro-computed tomography. The relationship between the thickness of the cortex engaged and the insertion torque was investigated. The maximum insertion torque and the thickness of the cortex were significantly higher in Group V than Group C. Both groups exhibited a statistically significant linear correlation between torque and thickness by Spearman's rank correlation analysis. Half-pins inserted by the vertical method achieved purchase of more cortex than those inserted by the conventional method. Considering that cortical thickness and insertion torque in Group V were significantly greater than those in Group C, we suggest that the vertical method of half-pin insertion may be an alternative to the conventional method in elderly patients.
Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy
NASA Astrophysics Data System (ADS)
Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi
2008-05-01
We performed electroencephalography (EEG) on six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which the EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between the level of consciousness as measured by the conventional method and the mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
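A minimal sketch of time-dependent pattern entropy, under stated assumptions: the signal is binarized around its median and non-overlapping windows are used, whereas the paper's exact binarization rule and window scheme may differ. Each window's entropy is the Shannon entropy of the short binary words it contains.

```python
import numpy as np

def pattern_entropy(signal, word_len=3, win=256):
    # Binarize around the median, then take the Shannon entropy (bits) of
    # length-`word_len` binary words within each non-overlapping window.
    bits = (np.asarray(signal) > np.median(signal)).astype(int)
    ents = []
    for start in range(0, len(bits) - win + 1, win):
        w = bits[start:start + win]
        words = np.array([w[i:i + word_len] for i in range(len(w) - word_len + 1)])
        _, counts = np.unique(words, axis=0, return_counts=True)
        p = counts / counts.sum()
        ents.append(float(-(p * np.log2(p)).sum()))
    return np.array(ents)
```

Irregular, noise-like activity uses many word patterns and yields entropy near the `word_len`-bit maximum, while highly regular activity concentrates on a few patterns and yields low entropy, matching the reported decline of mean entropy as sleep deepens.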
Ground-Cover Measurements: Assessing Correlation Among Aerial and Ground-Based Methods
NASA Astrophysics Data System (ADS)
Booth, D. Terrance; Cox, Samuel E.; Meikle, Tim; Zuuring, Hans R.
2008-12-01
Wyoming’s Green Mountain Common Allotment is public land providing livestock forage, wildlife habitat, and unfenced solitude, among other ecological services. It is also the center of ongoing debate over USDI Bureau of Land Management’s (BLM) adjudication of land uses. Monitoring resource use is a BLM responsibility, but conventional monitoring is inadequate for the vast areas encompassed in this and other public-land units. New monitoring methods are needed that will reduce monitoring costs, along with an understanding of data-set relationships among old and new methods. This study compared two conventional methods with two remote sensing methods using images captured from two meters and 100 meters above ground level from a camera stand (a ground, image-based method) and a light airplane (an aerial, image-based method). Image analysis used SamplePoint or VegMeasure software. Aerial methods allowed for increased sampling intensity at low cost relative to the time and travel required by ground methods. Costs to acquire the aerial imagery and measure ground cover on 162 aerial samples representing 9000 ha were less than $3000. The four highest correlations among data sets for bare ground, the ground-cover characteristic yielding the highest correlations (r), ranged from 0.76 to 0.85 and included ground with ground, ground with aerial, and aerial with aerial data-set associations. We conclude that our aerial surveys are a cost-effective monitoring method, that ground with aerial data-set correlations can equal or exceed those among ground-based data sets, and that bare ground should continue to be investigated and tested as a key indicator of rangeland health.
Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.
Ji, L; Danuser, G
2005-12-01
We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
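The multi-frame integration idea, accumulating correlation scores over several consecutive frame pairs before committing to a displacement, can be sketched in 1D. This is an illustration of the principle, not the published FSM tracker: function names and the normalized-overlap score are choices made here.

```python
import numpy as np

def score(a, b, d):
    # Normalized overlap correlation assuming frame b is frame a displaced
    # by d samples (positive d = content shifted toward higher indices).
    n = len(a)
    if d >= 0:
        x, y = a[:n - d], b[d:]
    else:
        x, y = a[-d:], b[:n + d]
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def integrated_shift(frames, max_shift):
    # Accumulate the correlation score function over all consecutive frame
    # pairs, then pick the globally best displacement. Summing first makes
    # the maximum far more robust than deciding from any single pair.
    shifts = np.arange(-max_shift, max_shift + 1)
    total = np.zeros(len(shifts))
    for a, b in zip(frames[:-1], frames[1:]):
        total += [score(a, b, int(d)) for d in shifts]
    return int(shifts[np.argmax(total)])
```

With weak, noise-perturbed signals each per-pair score function is shallow and ambiguous, but because the true displacement contributes coherently to every pair while noise does not, the summed score develops a clear global maximum.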
Space shuttle nonmetallic materials age life prediction
NASA Technical Reports Server (NTRS)
Mendenhall, G. D.; Hassell, J. A.; Nathan, R. A.
1975-01-01
The chemiluminescence from samples of polybutadiene, Viton, Teflon, Silicone, PL 731 Adhesive, and SP 296 Boron-Epoxy composite was measured at temperatures from 25 to 150 C. Excellent correlations were obtained between chemiluminescence and temperature. These correlations serve to validate accelerated aging tests (at elevated temperatures) designed to predict service life at lower temperatures. In most cases, smooth or linear correlations were obtained between chemiluminescence and physical properties of purified polymer gums, including the tensile strength, viscosity, and loss tangent. The latter is a complex function of certain polymer properties. Data were obtained with far greater ease by the chemiluminescence technique than by the conventional methods of study. The chemiluminescence from the Teflon (Halon) samples was discovered to arise from trace amounts of impurities, which were undetectable by conventional, destructive analysis of the sample.
Hall, Val; O’Neill, G. L.; Magee, J. T.; Duerden, B. I.
1999-01-01
Identification of Actinomyces spp. by conventional phenotypic methods is notoriously difficult and unreliable. Recently, the application of chemotaxonomic and molecular methods has clarified the taxonomy of the group and has led to the recognition of several new species. A practical and discriminatory identification method is now needed for routine identification of clinical isolates. Amplified 16S ribosomal DNA restriction analysis (ARDRA) was applied to reference strains (n = 27) and clinical isolates (n = 36) of Actinomyces spp. and other gram-positive rods. Clinical strains were identified initially to the species level by conventional biochemical tests. However, given the low degree of confidence in conventional methods, the findings obtained by ARDRA were also compared with those obtained by pyrolysis-mass spectrometry. The ARDRA profiles generated by the combination of HaeIII and HpaII endonuclease digestion differentiated all reference strains to the species or subspecies level. The profiles correlated well with the findings obtained by pyrolysis-mass spectrometry and by conventional tests and enabled the identification of 31 of 36 clinical isolates to the species level. ARDRA was shown to be a simple, rapid, cost-effective, and highly discriminatory method for routine identification of Actinomyces spp. of clinical origin. PMID:10364594
Density scaling for multiplets
NASA Astrophysics Data System (ADS)
Nagy, Á.
2011-02-01
Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain, and the correlation coefficient between the time series of different brain areas is the most popular measure used to quantify it. In practice, however, the correlation coefficient assumes temporally independent data, whereas brain time series can exhibit significant temporal auto-correlation. We propose a widely applicable method for correcting this auto-correlation. We considered the two types of time series models most commonly used in neuroscience studies: (1) the auto-regressive-moving-average model and (2) a nonlinear dynamical system model with noisy fluctuations, and derived the asymptotic distribution of the correlation coefficient for each; the two distributions share a unified expression. In numerical experiments, our method robustly controls the type I error while maintaining sufficient statistical power to detect true correlation, where existing methods measuring (linear and nonlinear) association fail. Applied to a real dataset, our method yields a more robust functional network and higher classification accuracy than conventional methods. In this work, we have thus proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity; empirical results favor its use in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
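A generic version of such a correction deflates the sample size by a Bartlett-type factor built from the two series' estimated auto-correlations before testing the Pearson coefficient. The sketch below illustrates that general idea only; it is not the authors' exact asymptotic formula, and the lag cutoff is an assumed default.

```python
import numpy as np
from math import erf, sqrt, atanh

def autocorr(x, max_lag):
    """Sample auto-correlation of x at lags 1..max_lag."""
    x = x - x.mean()
    denom = (x * x).sum()
    return np.array([(x[:len(x) - k] * x[k:]).sum() / denom
                     for k in range( 1, max_lag + 1)])

def corrected_corr_test(x, y, max_lag=None):
    """Pearson r with a p-value corrected for temporal auto-correlation
    via an effective sample size (variance-inflation correction)."""
    n = len(x)
    if max_lag is None:
        max_lag = n // 5
    r = np.corrcoef(x, y)[0, 1]
    # Variance inflation from the product of the two auto-correlations.
    rho = (autocorr(x, max_lag) * autocorr(y, max_lag)).sum()
    n_eff = max(n / (1 + 2 * rho), 3)
    z = atanh(r) * sqrt(n_eff - 3)            # Fisher z statistic
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return r, p
```

For strongly auto-correlated AR(1) series, `n_eff` can be an order of magnitude smaller than `n`, which is exactly why the naive independent-samples p-value inflates the type I error.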
Liquid-based cytology for primary cervical cancer screening: a multi-centre study
Monsonego, J; Autillo-Touati, A; Bergeron, C; Dachez, R; Liaras, J; Saurel, J; Zerat, L; Chatelain, P; Mottot, C
2001-01-01
The aim of this six-centre, split-sample study was to compare ThinPrep fluid-based cytology to the conventional Papanicolaou smear. Six cytopathology laboratories and 35 gynaecologists participated. 5428 patients met the inclusion criteria (age > 18 years old, intact cervix, informed consent). Each cervical sample was used first to prepare a conventional Pap smear, then the sampling device was rinsed into a PreservCyt vial, and a ThinPrep slide was made. Screening of slide pairs was blinded (n = 5428). All non-negative concordant cases (n = 101), all non-concordant cases (n = 206), and a 5% random sample of concordant negative cases (n = 272) underwent review by one independent pathologist then by the panel of 6 investigators. Initial (blinded) screening results for ThinPrep and conventional smears were correlated. Initial diagnoses were correlated with consensus cytological diagnoses. Differences in disease detection were evaluated using McNemar's test. On initial screening, 29% more ASCUS cases and 39% more low-grade squamous intraepithelial lesions (LSIL) and more severe lesions (LSIL+) were detected on the ThinPrep slides than on the conventional smears (P = 0.001), including 50% more LSIL and 18% more high-grade SIL (HSIL). The ASCUS:SIL ratio was lower for the ThinPrep method (115:132 = 0.87:1) than for the conventional smear method (89:94 = 0.95:1). The same trend was observed for the ASCUS/AGUS:LSIL ratio. Independent and consensus review confirmed 145 LSIL+ diagnoses; of these, 18% more had been detected initially on the ThinPrep slides than on the conventional smears (P = 0.041). The ThinPrep Pap Test is more accurate than the conventional Pap test and has the potential to optimize the effectiveness of primary cervical cancer screening. © 2001 Cancer Research Campaign http://www.bjcancer.com PMID:11161401
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels2) of the tympanic membrane perforation and T is the total area (in pixels2) for the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
Bardakci, Hasmet; Altıntaş, Garip; Çiçek, Omer Faruk; Kervan, Umit; Yilmaz, Sevinc; Kaplan, Sadi; Birincioglu, Cemal Levent
2013-05-01
To compare the international normalised ratio (INR) value of patients evaluated using the CoaguChek XS versus conventional laboratory methods, in the period after open-heart surgery for mechanical valve replacement until a therapeutic range is achieved using vitamin K antagonists (VKA) together with low molecular weight heparin (LMWH). One hundred and five patients undergoing open-heart surgery for mechanical valve replacement were enrolled. Blood samples were collected from patients before surgery, and on the second and fifth postoperative days, simultaneously for both the point of care device and conventional laboratory techniques. Patients were administered VKA together with LMWH at therapeutic doses (enoxaparin 100 IU/kg twice daily) subcutaneously, until an effective range was achieved on approximately the fifth day after surgery. The mean INR values using the CoaguChek XS preoperatively and on the second and fifth days postoperatively were 1.20 (SD ± 0.09), 1.82 (SD ± 0.45), and 2.55 (SD ± 0.55), respectively. Corresponding results obtained using conventional laboratory techniques were 1.18 (SD ± 0.1), 1.81 (SD ± 0.43), and 2.51 (SD ± 0.58). The correlation coefficient was r = 0.77 preoperatively, r = 0.981 on postoperative day 2, and r = 0.983 on postoperative day 5. Results using the CoaguChek XS Handheld Coagulation Analyzer correlated strongly with conventional laboratory methods, in the bridging period between open-heart surgery for mechanical valve replacement and the achievement of a therapeutic range on warfarin and LMWH. © 2013 Wiley Periodicals, Inc.
Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette
2017-12-01
J-difference editing is often used to select resonances of compounds with coupled spins in ¹H-MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In-vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and by alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In-vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
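The alignment criterion — maximizing the normalized scalar product between a reference spectrum and a phase- and frequency-shifted candidate — can be illustrated with a brute-force grid search. This is our simplification for illustration (the published method optimizes the same objective more efficiently), and all variable names are ours.

```python
import numpy as np

def align_spectrum(ref, spec, freq_shifts, phases):
    """Align complex spectrum `spec` to real reference `ref` by maximizing
    the normalized scalar product (correlation over the spectral region)
    across a grid of candidate frequency shifts (in points) and
    zero-order phases (in radians)."""
    best, best_params = -np.inf, (0, 0.0)
    for df in freq_shifts:
        shifted = np.roll(spec, df)
        for ph in phases:
            cand = np.real(shifted * np.exp(1j * ph))
            score = np.dot(cand, ref) / (np.linalg.norm(cand) * np.linalg.norm(ref))
            if score > best:
                best, best_params = score, (df, ph)
    df, ph = best_params
    return np.real(np.roll(spec, df) * np.exp(1j * ph)), best_params
```

Applied to a complex Lorentzian line that has been shifted and de-phased, the search recovers the applied shift and phase up to the grid resolution; the phase is identifiable because mis-phasing mixes dispersive lineshape into the real part, lowering the correlation.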
Maciejewska-Paszek, Izabela; Grochowska-Niedworok, Elżbieta; Siwiec, Andrzej; Gruenpeter, Anna; Dul, Lechosław; Irzyniec, Tomasz
2017-04-01
Objective To assess possible changes in leptin and ghrelin secretion due to etanercept in juvenile idiopathic arthritis (JIA). Methods 50 patients with JIA and 16 age-matched controls were enrolled into this prospective, cross-sectional study. Serum leptin, total and acyl ghrelin were measured in addition to white blood cell (WBC) and lymphocyte counts. Results 25 patients received etanercept and 25 conventional therapies (including methotrexate) for JIA. There was no difference between treatment and control groups in leptin or ghrelin levels and no evidence of a relationship between leptin and ghrelin in patients with JIA. In all children with JIA there was a correlation between leptin and body mass index (BMI). However, compared with children in the conventional treatment group, children in the etanercept group showed a positive correlation between total ghrelin and BMI and those with a low BMI showed a negative correlation between acyl ghrelin and BMI. Conclusion No differences in leptin and ghrelin concentrations were found when patients with JIA and controls were compared or when patients who received etanercept were compared with those who received conventional treatment for JIA.
Chiu, Chih-Hao; Lei, Kin Fong; Yeh, Wen-Ling; Chen, Poyu; Chan, Yi-Sheng; Hsu, Kuo-Yao; Chen, Alvin Chao-Yu
2017-10-16
Local injections of anesthetics, NSAIDs, and corticosteroids for tendinopathies are used empirically, and they are believed to have some cytotoxicity toward tenocytes; the maximal-efficacy dosages of local injections should therefore be determined. A commercial 2D microfluidic xCELLigence system has been developed to detect real-time cellular proliferation and cellular responses to different stimuli, and has been used in several biomedical applications. The purpose of this study was to determine whether human tenocytes can proliferate successfully inside the xCELLigence system and whether the results correlate highly with those of conventional cell culture methods under the same conditions. First-passage human tenocytes were seeded in the xCELLigence system and in conventional 24-well plates. Ketorolac tromethamine, bupivacaine, methylprednisolone, and betamethasone at different concentrations (100%, 50%, and 10% of the clinical dosage) were applied in both systems. Gene expression of type I collagen, type III collagen, tenascin-C, decorin, and scleraxis was compared between the two systems. Human tenocytes proliferated in both the xCELLigence and conventional cell culture systems. The cytotoxicity of each drug was dose-dependent in both systems, with significant differences between groups. All four drugs had comparable cytotoxicity at 100% concentration. At 50% concentration, betamethasone had relatively decreased cytotoxicity in xCELLigence but not in conventional culture. At 10% concentration, betamethasone had the least cytotoxicity. A strong positive correlation was found between the xCELLigence cell index and the WST-1 assay result (Pearson's correlation [r] = 0.914). Positive correlations in gene expression between tenocytes in xCELLigence and conventional culture were also observed. Type I collagen: [r] = 0.823; type III collagen: [r] = 0.899; tenascin-C: [r] = 0.917; decorin: [r] = 0.874; and scleraxis: [r] = 0.965.
Human tenocytes could proliferate inside the xCELLigence system, and their responses varied when exposed to different concentrations of ketorolac tromethamine, bupivacaine, methylprednisolone, and betamethasone. Cell proliferation and gene expression of tenocytes in the xCELLigence and conventional culture systems were strongly correlated. The xCELLigence system may replace conventional cell culture, making real-time monitoring of tenocyte proliferation possible.
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Fan, Yiren; Liu, Jianyu; Zhang, Li; Han, Yujiao; Xing, Donghui
2017-10-01
Permeability is an important parameter in formation evaluation since it controls fluid transport in porous rocks. However, it is challenging to compute the permeability of bioclastic limestone reservoirs by conventional methods linking petrophysical and geophysical data, due to the complex pore distributions. A new method is presented to estimate permeability based on laboratory and downhole nuclear magnetic resonance (NMR) measurements. We divide the pore space into four intervals by the inflection points between the pore radius and the transversal relaxation time. Relationships between permeability and the percentages of the different pore intervals are examined to identify the factors influencing fluid transport. Furthermore, an empirical model, which takes into account the pore size distributions, is presented to compute the permeability. Results from 212 core samples show that the accuracy of permeability calculation is improved to 0.803, from 0.542 (SDR model), 0.507 (TIM model), and 0.455 (conventional porosity-permeability regressions). To enhance the precision of downhole application of the new model, we developed a fluid correction algorithm to construct the water spectrum of in-situ NMR data, aiming to eliminate the influence of oil on the magnetization. The results reveal that permeability is positively correlated with the percentages of mega-pores and macro-pores, but negatively correlated with the percentage of micro-pores; poor correlation is observed between permeability and the percentage of meso-pores. NMR magnetizations and T2 spectra after the fluid correction agree well with laboratory results for samples saturated with water. Field application indicates that the improved method performs better than conventional models such as the Schlumberger-Doll Research equation, the Timur-Coates equation, and porosity-permeability regressions.
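For context, the two conventional NMR permeability models the new method is benchmarked against have simple closed forms. The sketch below uses common textbook defaults for the prefactors and exponents; these constants are lithology-dependent and must be calibrated per formation, so treat the values as assumptions.

```python
def sdr_permeability(phi, t2_lm, c=4.0):
    """Schlumberger-Doll Research (SDR) NMR permeability estimate:
    k = c * phi**4 * t2_lm**2, with porosity `phi` (fraction) and the
    logarithmic-mean T2 `t2_lm` (ms). c = 4 is a common sandstone
    default; carbonates such as bioclastic limestones need calibration."""
    return c * phi ** 4 * t2_lm ** 2

def timur_coates_permeability(phi, ffi, bvi, c=10.0):
    """Timur-Coates NMR permeability estimate:
    k = (100 * phi / c)**4 * (ffi / bvi)**2, with free-fluid index
    `ffi` and bound-volume irreducible `bvi` (fractions)."""
    return (100.0 * phi / c) ** 4 * (ffi / bvi) ** 2
```

For example, a 20%-porosity rock with a 50 ms logarithmic-mean T2 gives an SDR estimate of 16 (in the units implied by the calibrated constant).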
Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamiya, Muneaki; Hirata, So; Valiev, Marat
2008-02-19
Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly-interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n² with respect to the number of subunits n when n is small and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate and also more efficient than conventional correlation methods uncorrected for BSSE.
Speeding up local correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kats, Daniel
2014-12-28
We present two techniques that can substantially speed up the local correlation methods. The first one allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to virtual space. The second one introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of local MP2 method is faster and requires less memory than the highly optimized variants of conventional algorithms.
NASA Astrophysics Data System (ADS)
Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.
2014-08-01
We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.
Meyer, Georg F; Spray, Amy; Fairlie, Jo E; Uomini, Natalie T
2014-01-01
Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional TransCranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas, CWG, and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high resolution imaging techniques.
MRI Volume Fusion Based on 3D Shearlet Decompositions.
Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong
2014-01-01
Many MRI scans can now provide 3D volume data with different contrasts, and observers may want to view the various contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of inter-frame correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional fusion methods based on the 2D wavelet and DT CWT as well as those based on the 3D wavelet and DT CWT.
Prevalence and correlates of e-cigarette perceptions and trial among Mexican adolescents
Thrasher, James F.; Abad-Vivero, Erika N.; Barrientos-Gutíerrez, Inti; Pérez-Hernández, Rosaura; Reynales-Shigematsu, Luz Miriam; Mejía, Raúl; Arillo-Santillán, Edna; Hernández-Ávila, Mauricio; Sargent, James D.
2016-01-01
PURPOSE Assess the prevalence and correlates of e-cigarette perceptions and trial among adolescents in Mexico, where e-cigarettes are banned. METHODS Cross-sectional data were collected in 2015 from a representative sample of middle school students (n=10,146). Prevalence of e-cigarette awareness, relative harm, and trial were estimated, adjusting for sampling weights and school-level clustering. Multilevel logistic regression models adjusted for school-level clustering to assess correlates of e-cigarette awareness and trial. Finally, students who had tried only e-cigarettes were compared with students who had tried: 1) conventional cigarettes only; 2) both e-cigarettes and conventional cigarettes (dual triers); 3) neither cigarette type (never triers). RESULTS 51% of students had heard about e-cigarettes, 19% believed e-cigarettes were less harmful than conventional cigarettes, and 10% had tried them. Independent correlates of e-cigarette awareness and trial included established risk factors for smoking, as well as technophilia (i.e., use of more media technologies) and greater Internet tobacco advertising exposure. Exclusive e-cigarette triers (4%) had significantly higher technophilia, bedroom Internet access, and Internet tobacco advertising exposure compared to conventional cigarette triers (19%) and never triers (71%), but not compared to dual triers (6%), even though dual triers had significantly stronger conventional cigarette risk factors. CONCLUSIONS This study suggests that adolescent e-cigarette awareness and use is high in Mexico, in spite of its e-cigarette ban. A significant number of medium-risk youth have tried e-cigarettes only, suggesting that e-cigarettes could lead to more intensive substance use. Strategies to reduce e-cigarette use should consider reducing exposures to Internet marketing. PMID:26903433
Method of measuring interface area of activated carbons in condensed phase
NASA Astrophysics Data System (ADS)
Dmitriyev, D. S.; Agafonov, D. V.; Kiseleva, E. A.; Mikryukova, M. A.
2018-01-01
In this work, we investigated the correlation between the heat of wetting of super-capacitor electrode material (activated carbon) with condensed phases (electrolytes based on homologous series of phosphoric acid esters) and the capacity of the supercapacitor. The surface area of the electrode-electrolyte interface was calculated according to the obtained correlations using the conventional formula for calculating the capacitance of a capacitor.
Seok, Junhee; Seon Kang, Yeong
2015-01-01
Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities, estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which are easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method that provides stable estimates of the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with the true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study of diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
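The conventional calculation the paper improves upon — the plug-in (maximum-likelihood) estimator built from observed joint frequencies — is straightforward to write down. The sketch below is for context only; the paper's stabilized estimator is not reproduced here.

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(x, y):
    """Plug-in estimate of mutual information (in nats) between two
    discrete sequences: I(X;Y) = sum_{a,b} p(a,b) log[p(a,b)/(p(a)p(b))],
    with all probabilities estimated from observed frequencies. This is
    the estimator that becomes unstable when the number of categories is
    large relative to the sample size."""
    n = len(x)
    pxy = Counter(zip(x, y))          # joint frequencies
    px, py = Counter(x), Counter(y)   # marginal frequencies
    # p(a,b)/(p(a)p(b)) simplifies to c*n / (count_a * count_b).
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())
```

For two perfectly dependent uniform binary variables this returns log 2 ≈ 0.693 nats, and for independent variables it returns 0 — but with many sparsely observed categories the same formula systematically overestimates the true value, which is the problem the proposed method addresses.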
Fujiwara, Yasuhiro; Maruyama, Hirotoshi; Toyomaru, Kanako; Nishizaka, Yuri; Fukamatsu, Masahiro
2018-06-01
Magnetic resonance imaging (MRI) is widely used to detect carotid atherosclerotic plaques. Although it is important to evaluate vulnerable carotid plaques containing lipids and intra-plaque hemorrhages (IPHs) using T1-weighted images, the image contrast changes depending on the imaging settings. Moreover, to distinguish between a thrombus and a hemorrhage, it is useful to evaluate the iron content of the plaque using both T1-weighted and T2*-weighted images. Therefore, a quantitative evaluation of carotid atherosclerotic plaques using T1 and T2* values may be necessary for the accurate evaluation of plaque components. The purpose of this study was to determine whether the multi-echo phase-sensitive inversion recovery (mPSIR) sequence can improve T1 contrast while simultaneously providing accurate T1 and T2* values of an IPH. T1 and T2* values measured using mPSIR were compared to values from conventional methods in phantom and in vivo studies. In the phantom study, the T1 and T2* values estimated using mPSIR were linearly correlated with those of conventional methods. In the in vivo study, mPSIR demonstrated higher T1 contrast between the IPH phantom and sternocleidomastoid muscle than the conventional method. Moreover, the T1 and T2* values of the blood vessel wall and sternocleidomastoid muscle estimated using mPSIR were correlated with values measured by conventional methods and with previously reported values. The mPSIR sequence improved T1 contrast while simultaneously providing accurate T1 and T2* values of the neck region. Although further study is required to evaluate its clinical utility, mPSIR may improve carotid atherosclerotic plaque detection and provide detailed information about plaque components.
Visualizing Similarity of Appearance by Arrangement of Cards
Nakatsuji, Nao; Ihara, Hisayasu; Seno, Takeharu; Ito, Hiroshi
2016-01-01
This study proposes a novel method to extract the configuration of the psychological space by directly measuring subjects' similarity rating without computational work. Although multidimensional scaling (MDS) is well-known as a conventional method for extracting the psychological space, the method requires many pairwise evaluations. The times taken for evaluations increase in proportion to the square of the number of objects in MDS. The proposed method asks subjects to arrange cards on a poster sheet according to the degree of similarity of the objects. To compare the performance of the proposed method with the conventional one, we developed similarity maps of typefaces through the proposed method and through non-metric MDS. We calculated the trace correlation coefficient among all combinations of the configuration for both methods to evaluate the degree of similarity in the obtained configurations. The threshold value of trace correlation coefficient for statistically discriminating similar configuration was decided based on random data. The ratio of the trace correlation coefficient exceeding the threshold value was 62.0% so that the configurations of the typefaces obtained by the proposed method closely resembled those obtained by non-metric MDS. The required duration for the proposed method was approximately one third of the non-metric MDS's duration. In addition, all distances between objects in all the data for both methods were calculated. The frequency for the short distance in the proposed method was lower than that of the non-metric MDS so that a relatively small difference was likely to be emphasized among objects in the configuration by the proposed method. The card arrangement method we here propose, thus serves as a easier and time-saving tool to obtain psychological structures in the fields related to similarity of appearance. PMID:27242611
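Comparing two spatial configurations of the same objects can be illustrated by correlating their inter-object distances. This is a simplified stand-in for the trace correlation coefficient the study actually used, with illustrative coordinates only:

```python
from math import dist, sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def config_similarity(cfg_a, cfg_b):
    """Correlate the pairwise inter-object distances of two 2-D
    configurations (e.g., a card arrangement vs. an MDS solution)."""
    d_a = [dist(p, q) for i, p in enumerate(cfg_a) for q in cfg_a[i + 1:]]
    d_b = [dist(p, q) for i, p in enumerate(cfg_b) for q in cfg_b[i + 1:]]
    return pearson(d_a, d_b)

# A configuration compared with a translated copy of itself: r = 1
card_layout = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0), (5.0, 5.0)]
shifted = [(x + 1.0, y - 2.0) for x, y in card_layout]
print(round(config_similarity(card_layout, shifted), 6))  # → 1.0
```

Unlike this distance-correlation proxy, the trace correlation also accounts for rotation and reflection between configurations, which is why the authors used it for the typeface maps.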
Method for measuring radial impurity emission profiles using correlations of line integrated signals
NASA Astrophysics Data System (ADS)
Kuldkepp, M.; Brunsell, P. R.; Drake, J.; Menmuir, S.; Rachlew, E.
2006-04-01
A method of determining radial impurity emission profiles is outlined. The method uses correlations between line integrated signals and is based on the assumption of cylindrically symmetric fluctuations. Measurements at the reversed field pinch EXTRAP T2R show that emission from impurities expected to be close to the edge clearly differs, in both raw and analyzed data, from emission from impurities expected to be more central. Best fitting of experimental data to simulated correlation coefficients yields emission profiles that are remarkably close to those determined using more conventional techniques. The radial extension of the fluctuations is small enough for the method to be used, and bandpass filtered signals indicate that fluctuations below 10 kHz are cylindrically symmetric. The novel method is not sensitive to vessel window attenuation or wall reflections and can therefore complement the standard methods in the impurity emission reconstruction procedure.
Konishi, Takahiro; Nakajima, Kenichi; Okuda, Koichi; Yoneyama, Hiroto; Matsuo, Shinro; Shibutani, Takayuki; Onoguchi, Masahisa; Kinuya, Seigo
2017-07-01
Although IQ-single-photon emission computed tomography (SPECT) provides rapid acquisition and attenuation-corrected images, the unique technology may create a characteristic distribution different from that of conventional imaging. This study aimed to compare the diagnostic performance of IQ-SPECT using Japanese normal databases (NDBs) with that of conventional SPECT for thallium-201 (201Tl) myocardial perfusion imaging (MPI). A total of 36 patients underwent 1-day 201Tl adenosine stress-rest MPI. Images were acquired with IQ-SPECT at approximately one-quarter of the standard time of conventional SPECT. Projection data acquired with the IQ-SPECT system were reconstructed via an ordered subset conjugate gradient minimizer method with or without scatter and attenuation correction (SCAC). Projection data obtained using the conventional SPECT were reconstructed via a filtered back projection method without SCAC. The summed stress score (SSS) was calculated using NDBs created by the Japanese Society of Nuclear Medicine working group, and scores were compared between IQ-SPECT and conventional SPECT using the acquisition condition-matched NDBs. The diagnostic performance of the methods for the detection of coronary artery disease was also compared. SSSs were 6.6 ± 8.2 for conventional SPECT, 6.6 ± 9.4 for IQ-SPECT without SCAC, and 6.5 ± 9.7 for IQ-SPECT with SCAC (p = n.s. for each comparison). The SSS showed a strong positive correlation between conventional SPECT and IQ-SPECT (r = 0.921, p < 0.0001), and the correlation between IQ-SPECT with and without SCAC was also good (r = 0.907, p < 0.0001). Regarding diagnostic performance, the sensitivity, specificity, and accuracy were 80.8, 78.9, and 79.4%, respectively, for conventional SPECT; 80.8, 80.3, and 82.0%, respectively, for IQ-SPECT without SCAC; and 88.5, 86.8, and 87.3%, respectively, for IQ-SPECT with SCAC.
The areas under the curve obtained via receiver operating characteristic analysis were 0.77, 0.80, and 0.86 for conventional SPECT, IQ-SPECT without SCAC, and IQ-SPECT with SCAC, respectively (p = n.s. for each comparison). When appropriate NDBs were used, the diagnostic performance of 201Tl IQ-SPECT was comparable with that of the conventional system, regardless of the different characteristics of myocardial accumulation in the conventional system.
Comparison of Silent and Conventional MR Imaging for the Evaluation of Myelination in Children
Matsuo-Hagiyama, Chisato; Watanabe, Yoshiyuki; Tanaka, Hisashi; Takahashi, Hiroto; Arisawa, Atsuko; Yoshioka, Eri; Nabatame, Shin; Nakano, Sayaka; Tomiyama, Noriyuki
2017-01-01
Purpose: Silent magnetic resonance imaging (MRI) scans produce reduced acoustic noise and are considered more gentle for sedated children. The aim of this study was to compare the validity of T1- (T1W) and T2-weighted (T2W) silent sequences for myelination assessment in children with conventional spin-echo sequences. Materials and Methods: A total of 30 children (21 boys, 9 girls; age range: 1–83 months, mean age: 35.5 months, median age: 28.5 months) were examined using both silent and spin-echo sequences. Acoustic noise levels were analyzed and compared. The degree of myelination was qualitatively assessed via consensus, and T1W and T2W signal intensities were quantitatively measured by percent contrast. Results: Acoustic noise levels were significantly lower during silent sequences than during conventional sequences (P < 0.0001 for both T1W and T2W). Inter-method comparison indicated overall good to excellent agreement (T1W and T2W images, κ = 0.76 and 0.80, respectively); however, agreement was poor for cerebellar myelination on T1W images (κ = 0.14). The percent contrast of silent and conventional MRI sequences had a strong correlation (T1W, correlation coefficient [CC] = 0.76; T1W excluding the middle cerebellar peduncle, CC = 0.82; T2W, CC = 0.91). Conclusions: For brain MRI, silent sequences significantly reduced acoustic noise and provided diagnostic image quality for myelination evaluations; however, the two methods differed with respect to cerebellar delineation on T1W sequences. PMID:27795484
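The percent-contrast measure used above for the quantitative comparison can be sketched as follows. The exact regions of interest and normalization used in the study are not spelled out in the abstract, so the common (Sa − Sb)/Sb form is assumed here with invented signal intensities:

```python
def percent_contrast(s_a, s_b):
    """Percent contrast between two regional signal intensities,
    assumed to follow the common (Sa - Sb) / Sb * 100 form; the
    study's exact region definitions are not given in the abstract."""
    return (s_a - s_b) / s_b * 100.0

# Illustrative: a structure 20% brighter than its reference region
print(percent_contrast(120.0, 100.0))  # → 20.0
```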
Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin
2018-01-04
In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment-trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. Copyright © 2018 Montesinos-Lopez et al.
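The item-based collaborative filtering idea can be sketched briefly: a missing line-by-item cell is predicted from the same line's scores on other items, weighted by item-item similarity. This is a minimal cosine-similarity sketch with invented numbers, not the authors' full IBCF pipeline:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den

def ibcf_predict(matrix, line, item):
    """Predict matrix[line][item] from the line's scores on the other
    items, weighted by item-item cosine similarity. Rows are lines,
    columns are environment-trait combinations; None marks the
    missing cell. Similarities use only rows where both items are
    observed."""
    n_items = len(matrix[0])
    num = den = 0.0
    for j in range(n_items):
        if j == item or matrix[line][j] is None:
            continue
        col_i = [r[item] for r in matrix if r[item] is not None and r[j] is not None]
        col_j = [r[j] for r in matrix if r[item] is not None and r[j] is not None]
        s = cosine(col_i, col_j)
        num += s * matrix[line][j]
        den += abs(s)
    return num / den

# Toy data: 3 lines x 3 environment-trait columns, one missing cell
grain = [
    [2.0, 2.1, 1.9],
    [3.0, 3.2, 2.8],
    [4.0, None, 4.1],  # predict this line's missing score
]
print(round(ibcf_predict(grain, 2, 1), 2))
```

Because the columns here are strongly correlated, the prediction lands close to the line's observed scores (about 4.05), mirroring the abstract's point that IBCF works well when item correlations are high.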
Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi
2017-11-05
We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT), as well as their combinations, automatically by means of the tensor contraction engine, a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein revealed that the DC-based correlation methods provide reliable correlation energies at significantly less computational cost than the conventional implementations. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Lowdermilk, Warren H; Grele, Milton D
1949-01-01
A heat transfer investigation, an extension of a previously reported NACA investigation, was conducted with air flowing through an electrically heated Inconel tube with a rounded entrance, an inside diameter of 0.402 inch, and a length of 24 inches, over a range of conditions including Reynolds numbers up to 500,000, average surface temperatures up to 2050 degrees R, and heat-flux densities up to 150,000 Btu per hour per square foot. Conventional methods of correlating heat-transfer data, wherein properties of the air were evaluated at the average bulk, film, and surface temperatures, resulted in reductions of Nusselt number of about 38, 46, and 53 percent, respectively, for an increase in surface temperature from 605 degrees to 2050 degrees R at constant Reynolds number. A modified correlation method, in which the properties of air were based on the surface temperature and the Reynolds number was modified by substituting the product of the density at the inside tube wall and the bulk velocity for the conventional mass flow per unit cross-sectional area, resulted in a satisfactory correlation of the data over the extended ranges of conditions investigated.
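The structure of such a correlation can be sketched numerically. The Dittus-Boelter form (Nu = 0.023 Re^0.8 Pr^0.4) and all property values below are illustrative assumptions, not taken from the NACA report; the sketch only shows how substituting wall density times bulk velocity for the conventional mass flux changes the Reynolds number fed into the correlation:

```python
def nusselt_turbulent(re, pr):
    """Conventional turbulent pipe-flow correlation in the
    Dittus-Boelter form Nu = 0.023 Re^0.8 Pr^0.4 (an assumed stand-in
    for the report's correlation)."""
    return 0.023 * re ** 0.8 * pr ** 0.4

def reynolds_modified(density_wall, bulk_velocity, diameter, viscosity):
    """Modified Reynolds number in the spirit of the abstract: the
    product of wall density and bulk velocity replaces the
    conventional mass flux G = m_dot / A."""
    return density_wall * bulk_velocity * diameter / viscosity

# Illustrative values only (SI units): hot-wall air density is much
# lower than bulk density, so the modified Re is reduced accordingly.
re = reynolds_modified(density_wall=0.4, bulk_velocity=150.0,
                       diameter=0.0102, viscosity=3.0e-5)
print(round(re), round(nusselt_turbulent(re, 0.68), 1))
```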
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014 software, 3D Systems, Rock Hill, SC, USA) on digital models and with a digital calliper (Tuten, Germany) on plaster study models. Both measurements were repeated twice to validate intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t-test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972; p < 0.01) between all digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of arch width (90-96%) and of arch length and space analysis (95-99%) was also distinguished using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software can precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
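The Bland-Altman analysis used above reduces to computing the bias (mean difference between methods) and the 95% limits of agreement. A minimal sketch with made-up arch-width values in millimetres:

```python
def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement (bias ± 1.96 SD of the
    paired differences) between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired measurements (mm); not data from the study
digital = [35.2, 36.1, 34.8, 37.0, 35.5]
manual = [35.0, 36.3, 34.9, 36.8, 35.6]
bias, lower, upper = bland_altman(digital, manual)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```

Agreement is judged by whether the limits of agreement are clinically acceptable, not by the correlation coefficient alone, which is why the study reports both.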
Point-point and point-line moving-window correlation spectroscopy and its applications
NASA Astrophysics Data System (ADS)
Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu
2008-07-01
In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced to perform moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details of and reconstitute the entire process, whilst P-L correlation can provide the general features of the processes concerned. The phase transition behavior of dimyristoylphosphotidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using conventional displays of FTIR spectra.
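The core quantity behind a point-point correlation can be sketched as the synchronous 2D-correlation intensity between two spectral variables over a window of perturbation-ordered spectra. This is a minimal illustration of that covariance form, not the authors' full MW2D algorithm:

```python
def pp_correlation(spectra, i, j):
    """Synchronous correlation intensity between spectral points i
    and j over a window of perturbation-ordered spectra (rows =
    perturbation steps, columns = spectral points), after
    mean-centring each variable."""
    yi = [s[i] for s in spectra]
    yj = [s[j] for s in spectra]
    mi, mj = sum(yi) / len(yi), sum(yj) / len(yj)
    return sum((a - mi) * (b - mj) for a, b in zip(yi, yj)) / (len(yi) - 1)

# Two bands rising together under the perturbation → positive value
window = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
print(pp_correlation(window, 0, 1))  # → 2.0
```

In a moving-window analysis, this quantity is recomputed as the window slides along the perturbation axis, which is what localizes the 56 °C transition.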
Illés, Tamás; Somoskeöy, Szabolcs
2013-06-01
A new concept of vertebra vectors based on spinal three-dimensional (3D) reconstructions of images from the EOS system, a new low-dose X-ray imaging device, was recently proposed to facilitate interpretation of EOS 3D data, especially with regard to horizontal plane images. This retrospective study was aimed at the evaluation of the spinal layout visualized by EOS 3D and vertebra vectors before and after surgical correction, the comparison of scoliotic spine measurement values based on 3D vertebra vectors with measurements using conventional two-dimensional (2D) methods, and an evaluation of horizontal plane vector parameters for their relationship with the magnitude of scoliotic deformity. 95 patients with adolescent idiopathic scoliosis operated on according to the Cotrel-Dubousset principle underwent EOS X-ray examinations pre- and postoperatively, followed by 3D reconstructions and generation of vertebra vectors in a calibrated coordinate system to calculate vector coordinates and parameters, as published earlier. Differences between values from conventional 2D Cobb methods and methods based on vertebra vectors were evaluated by a means-comparison t-test, and the relationship of corresponding parameters was analysed by bivariate correlation. The relationship of horizontal plane vector parameters with the magnitude of scoliotic deformities and the results of surgical correction was analysed by Pearson correlation and linear regression. In comparison to manual 2D methods, a very close relationship was detectable in vertebra vector-based curvature data for coronal curves (preop r = 0.950, postop r = 0.935) and thoracic kyphosis (preop r = 0.893, postop r = 0.896), while the small difference found in L1-L5 lordosis values (preop r = 0.763, postop r = 0.809) was shown to be strongly related to the magnitude of the corresponding L5 wedge.
The correlation analysis revealed a strong correlation between the magnitude of scoliosis and the lateral translation of the apical vertebra in the horizontal plane, represented by the horizontal plane coordinates of the terminal and initial points of the apical vertebra vectors (r = 0.701; r = 0.667). A less strong correlation was detected between the axial rotation of the apical vertebrae and the magnitudes of the frontal curves (r = 0.459). Vertebra vectors provide a key opportunity to visualize spinal deformities in all three planes simultaneously. Measurement methods based on vertebra vectors proved to be just as accurate and reliable as conventional measurement methods for coronal and sagittal plane parameters. In addition, the horizontal plane display of the curves can be studied using the same vertebra vectors. Based on the vertebra vector data, in the surgical treatment of spinal deformities, reduction of the lateral translation of the vertebrae appears to matter more for the result of the surgical correction than correction of the axial rotation.
Extinction of the soleus H reflex induced by conditioning stimulus given after test stimulus.
Hiraoka, Koichi
2002-02-01
To quantify the extinction of the soleus H reflex induced by a conditioning stimulus above the motor threshold applied to the posterior tibial nerve 10-12 ms after a test stimulus (S2 method). Ten healthy subjects participated. The sizes of extinction induced by a test stimulus above the motor threshold (conventional method) and by the S2 method were measured. The size of the conditioned H reflex decreased as the intensity of the S2 conditioning stimulus increased. The decrease was less than that induced by the conventional method. The difference between the two methods correlated highly with the amount of orthodromically activated recurrent inhibition. When the S2 conditioning stimulus evoked an M wave that was roughly half of the maximum M wave, the decrease in the size of the conditioned H reflex depended on the size of the unconditioned H reflex. The S2 method allows us to observe extinction without changing the intensity of the test stimulus. The amount of extinction depends partially on the size of the unconditioned H reflex. The difference in the sizes of extinction between the S2 and conventional methods should relate to recurrent inhibition.
Tuning time-frequency methods for the detection of metered HF speech
NASA Astrophysics Data System (ADS)
Nelson, Douglas J.; Smith, Lawrence H.
2002-12-01
Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign, and other codes are frequently metered. In tactical communications and air traffic control, location, heading, and identification codes may be metered. Moreover, metering may be expected to survive even in HF communications, which are corrupted by noise, interference, and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe time-frequency methods which have been adapted successfully to the problems of mitigating HF signal conditions and detecting metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, in addressing the problem of HF single sideband (SSB) communications, the problems of carrier mistuning, interfering signals such as manual Morse, and fast automatic gain control (AGC) must be addressed. We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and to effectively invert the fast automatic gain function.
Susceptibility of green and conventional building materials to microbial growth.
Mensah-Attipoe, J; Reponen, T; Salmela, A; Veijalainen, A-M; Pasanen, P
2015-06-01
Green building materials are becoming more popular. However, little is known about their ability to support or limit microbial growth. The growth of fungi was evaluated on five building materials. Two green, two conventional building materials and wood as a positive control were selected. The materials were inoculated with Aspergillus versicolor, Cladosporium cladosporioides and Penicillium brevicompactum, in the absence and presence of house dust. Microbial growth was assessed at four different time points by cultivation and determining fungal biomass using the N-acetylhexosaminidase (NAHA) enzyme assay. No clear differences were seen between green and conventional building materials in their susceptibility to support microbial growth. The presence of dust, an external source of nutrients, promoted growth of all the fungal species similarly on green and conventional materials. The results also showed a correlation coefficient ranging from 0.81 to 0.88 between NAHA activity and culturable counts. The results suggest that the growth of microbes on a material surface depends on the availability of organic matter rather than the classification of the material as green or conventional. NAHA activity and culturability correlated well indicating that the two methods used in the experiments gave similar trends for the growth of fungi on material surfaces. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography
Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji
2013-01-01
OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thickness. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The relative error of the number of segments method was significantly greater than those of semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method.
Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418
Electronic Cigarette Use by College Students
Sutfin, Erin L.; McCoy, Thomas P.; Morrell, Holly E. R.; Hoeppner, Bettina B.; Wolfson, Mark
2013-01-01
Background Electronic cigarettes, or e-cigarettes, are battery-operated devices that deliver nicotine via inhaled vapor. There is considerable controversy about the disease risk and toxicity of e-cigarettes, and empirical evidence on short- and long-term health effects is minimal. Limited data on e-cigarette use and its correlates exist, and to our knowledge, no prevalence rates among U.S. college students have been reported. This study aimed to estimate the prevalence of e-cigarette use and identify correlates of use among a large, multi-institution, random sample of college students. Methods 4,444 students from 8 colleges in North Carolina completed a Web-based survey in fall 2009. Results Ever use of e-cigarettes was reported by 4.9% of students, with 1.5% reporting past-month use. Correlates of ever use included male gender, Hispanic or “Other race” (compared to non-Hispanic Whites), Greek affiliation, conventional cigarette smoking, and e-cigarette harm perceptions. Although e-cigarette use was more common among conventional cigarette smokers, 12% of ever e-cigarette users had never smoked a conventional cigarette. Among current cigarette smokers, e-cigarette use was negatively associated with lack of knowledge about e-cigarette harm, but was not associated with intentions to quit. Conclusions Although e-cigarette use was more common among conventional cigarette smokers, it was not exclusive to them. E-cigarette use was not associated with intentions to quit smoking among a sub-sample of conventional cigarette smokers. Unlike older, more established cigarette smokers, college students do not appear to be motivated to use e-cigarettes by the desire to quit cigarette smoking. PMID:23746429
Muguregowda, Honnegowda Thittamaranahalli; Kumar, Pramod; Govindarama, Padmanabha Udupa Echalasara
2018-01-01
BACKGROUND Malondialdehyde (MDA) is an oxidant that causes damage to membranes, DNA, proteins, and lipids at the cellular level. Antioxidants minimize the effects of oxidants and thus help in the formation of healthy granulation tissue with higher levels of hydroxyproline and total protein. This study compared the effect of limited access dressing (LAD) with conventional closed dressing biochemically and histopathologically. METHODS Seventy-two burn patients aged 12-65 years with a mean wound size of 14 cm2 were divided into a LAD group (n=37) and a conventional dressing group (n=35). Various biochemical parameters were measured in granulation tissue, and histopathological analysis of the granulation tissue was also performed. RESULTS The LAD group showed a significant increase in hydroxyproline, total protein, GSH, and GPx and a decrease in MDA levels compared to the conventional dressing group. A significant negative correlation between GSH and MDA was noted in the LAD group, but not in the conventional dressing group. Likewise, a significant negative correlation between GPx and MDA was noticed in the LAD group but not in the conventional dressing group. Histologically, after 10 days the LAD group showed fewer inflammatory cells, increased and well-organized extracellular matrix deposition, and more angiogenesis, with significant differences between the groups. CONCLUSION Our study showed a significant reduction in the oxidative stress biomarker MDA and increases in hydroxyproline, total protein, antioxidants, the amount of ECM deposition, and the number of blood vessels, along with a decrease in the amount of inflammatory cells and necrotic tissue, in the LAD group, indicating better healing of burn wounds. PMID:29651393
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images, and since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way, while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques, including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Thelin, Eric P.; Nelson, David W.; Ghatan, Per Hamid; Bellander, Bo-Michael
2014-01-01
Background: Neuro-intensive care following traumatic brain injury (TBI) is focused on preventing secondary insults that may lead to irreversible brain damage. Microdialysis (MD) is used to detect deranged cerebral metabolism. The clinical usefulness of MD is dependent on the regional localization of the MD catheter. The aim of this study was to analyze a new method of continuous cerebrospinal fluid (CSF) monitoring using the MD technique. The method was validated using conventional laboratory analysis of CSF samples. MD-CSF and regional MD-Brain samples were correlated to patient outcome. Materials and Methods: A total of 14 patients suffering from severe TBI were analyzed. They were monitored using (1) an MD catheter (CMA64-iView, n = 7448 MD samples) located in a CSF-pump connected to the ventricular drain and (2) an intraparenchymal MD catheter (CMA70, n = 8358 MD samples). CSF-lactate and CSF-glucose levels were monitored and compared to MD-CSF samples. MD-CSF and MD-Brain parameters were correlated to favorable (Glasgow Outcome Score extended, GOSe 6–8) and unfavorable (GOSe 1–5) outcome. Results: Levels of glucose and lactate acquired with the CSF-MD technique could be correlated to conventional levels. The median MD recovery using the CMA64 catheter in CSF was 0.98 and 0.97 for glucose and lactate, respectively. Median MD-CSF (CMA64) lactate (p = 0.0057) and pyruvate (p = 0.0011) levels were significantly lower in the favorable outcome group compared to the unfavorable group. No significant difference in outcome was found using the lactate:pyruvate ratio (LPR) or any of the regional MD-Brain monitoring in our analyzed cohort. Conclusion: This new technique of global MD-CSF monitoring correlates with conventional CSF levels of glucose and lactate, and the MD recovery is higher than previously described.
Increase in lactate and pyruvate, without any effect on the LPR, correlates to unfavorable outcome, perhaps related to the presence of erythrocytes in the CSF. PMID:25228896
Brew, Christopher J; Simpson, Philip M; Whitehouse, Sarah L; Donnelly, William; Crawford, Ross W; Hubble, Matthew J W
2012-04-01
We describe a scaling method for templating digital radiographs using conventional acetate templates independent of template magnification without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%; range, 117%-123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative total hip arthroplasty radiographs were then measured and compared with the actual size. The mean absolute accuracy was within 0.5% of actual head size (range, 0%-3%) with a mean absolute difference of 0.16 mm (range, 0-1 mm; SD, 0.26 mm). Intraclass correlation coefficient showed excellent reliability for both interobserver and intraobserver measurements with intraclass correlation coefficient scores of 0.993 (95% CI, 0.988-0.996) for interobserver measurements and intraobserver measurements ranging between 0.990 and 0.993 (95% CI, 0.980-0.997). Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Huerta, F. V.; Granados, I.; Aguirre, J.; Carrera, R. Á.
2017-12-01
Nowadays, in the hydrocarbon industry, there is a need to optimize and reduce exploration costs in the different types of reservoirs, motivating the specialized community to search for and develop alternative geophysical exploration methods. This study shows the reflection response obtained from a shale gas/oil deposit through the method of seismic interferometry of ambient vibrations in combination with Wavelet analysis and conventional seismic reflection techniques (CMP & NMO). The method generates seismic responses from virtual sources through the cross-correlation of records of Ambient Seismic Vibrations (ASV) collected at different receivers. The seismic response obtained is interpreted as the response that would be measured at one of the receivers from a virtual source at the other. The acquisition of ASV records was performed in northern Mexico using semi-rectangular arrays of multi-component geophones with an instrumental response of 10 Hz. The in-line distance between geophones was 40 m and the cross-line distance 280 m; the sampling interval during data collection was 2 ms and the total duration of the records was 6 hours. The results show the reflection response of two lines in the in-line direction and two in the cross-line direction, for which the continuity of coherent events has been identified and interpreted as reflectors. There is certainty that the identified events correspond to reflections because time-frequency analysis performed with the Wavelet Transform has allowed us to identify the frequency band in which body waves are present. In addition, the CMP and NMO techniques have allowed us to emphasize and correct the reflection response obtained during the correlation processes in the frequency band of interest. 
The processing and analysis of ASV records through the seismic interferometry method have yielded promising results from the cross-correlation process in combination with Wavelet analysis and conventional seismic reflection techniques. It was therefore possible to recover the seismic response for each analyzed source-receiver pair, yielding the reflection response of each analyzed seismic line.
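The virtual-source construction described above can be illustrated on synthetic data. The following is a minimal sketch, not the authors' processing chain, and all names and parameters are hypothetical: cross-correlating windowed segments of two receiver records and stacking the results recovers the propagation delay between them.

```python
import numpy as np

def virtual_source_response(rec_a, rec_b, win_len):
    """Approximate the inter-receiver response by cross-correlating
    windowed segments of two ambient-noise records and stacking;
    the stack peaks at the propagation delay between the receivers."""
    n_win = len(rec_a) // win_len
    stack = np.zeros(2 * win_len - 1)
    for i in range(n_win):
        a = rec_a[i * win_len:(i + 1) * win_len]
        b = rec_b[i * win_len:(i + 1) * win_len]
        stack += np.correlate(b, a, mode="full")
    return stack / n_win

# Synthetic test: receiver B records the same noise as A, delayed.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)
delay = 25
resp = virtual_source_response(noise, np.roll(noise, delay), win_len=500)
lag = int(np.argmax(resp)) - (500 - 1)
print(lag)  # 25, the imposed delay
```

In the field setting, the stacked correlation would be band-limited and further processed (CMP sorting, NMO correction), but the core idea is the same: the cross-correlation turns one receiver into a virtual source for the other.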
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
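The detrended cross-correlation coefficient that DPXA builds on can be sketched in a few lines. This is a minimal single-scale DCCA sketch assuming the standard profile-and-detrend construction, not the authors' DPXA code, and the test series are hypothetical:

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Single-scale detrended cross-correlation (DCCA) coefficient:
    integrate the series into profiles, remove a linear trend in each
    box, and normalise the detrended covariance by the detrended
    variances."""
    X = np.cumsum(x - x.mean())
    Y = np.cumsum(y - y.mean())
    t = np.arange(scale)
    f_xy = f_xx = f_yy = 0.0
    for i in range(len(x) // scale):
        xs = X[i * scale:(i + 1) * scale]
        ys = Y[i * scale:(i + 1) * scale]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # box residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

# Two series driven by a common factor are strongly cross-correlated.
rng = np.random.default_rng(1)
common = rng.standard_normal(5000)
x = common + 0.3 * rng.standard_normal(5000)
y = common + 0.3 * rng.standard_normal(5000)
rho = dcca_coefficient(x, y, scale=50)
print(round(rho, 2))  # close to 1
```

The DPXA extension described in the abstract would additionally regress out the records of the common external forces before computing the detrended covariances, isolating the intrinsic cross-correlation.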
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to the fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Also, our algorithm has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
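The conventional short-term-average / long-term-average (STA/LTA) trigger that the authors extend can be sketched as follows. This is a simplified single-trace version with hypothetical window lengths; the paper's contribution, cross-correlating these energy ratios across traces, is not reproduced here:

```python
import numpy as np

def sta_lta(trace, n_sta=20, n_lta=200):
    """Running short-term-average / long-term-average energy ratio:
    STA over the n_sta samples ending at i, LTA over the n_lta
    samples immediately preceding that window."""
    e = np.concatenate(([0.0], np.cumsum(trace.astype(float) ** 2)))
    ratio = np.zeros(len(trace))
    for i in range(n_sta + n_lta - 1, len(trace)):
        sta = (e[i + 1] - e[i + 1 - n_sta]) / n_sta
        lta = (e[i + 1 - n_sta] - e[i + 1 - n_sta - n_lta]) / n_lta
        ratio[i] = sta / lta
    return ratio

# Noise trace with a weak burst standing in for a microseismic event.
rng = np.random.default_rng(2)
trace = rng.standard_normal(2000)
trace[1200:1240] += 5.0 * rng.standard_normal(40)
r = sta_lta(trace)
idx = int(np.argmax(r))
print(idx)  # the trigger peaks inside the event window
```

A detection is declared where the ratio exceeds a threshold; at low signal-to-noise ratio that threshold crossing becomes unreliable on any single trace, which motivates correlating the ratio curves across stations.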
Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.
Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G
2011-05-01
Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images by the associated respiratory phase or displacement into a predefined number of bins, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase or displacement based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement and phase based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire data set using Dijkstra's shortest path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts with both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times. 
Optimized sorting using anatomic similarity significantly reduces 4D CT motion artifacts compared to conventional phase or displacement based sorting. This improved sorting algorithm is a straightforward extension of the two most common 4D CT sorting algorithms.
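The candidate-selection step can be illustrated with a simplified sketch. Because one candidate is chosen per bed position and only adjacent positions interact, the graph is layered and the shortest-path search reduces to dynamic programming over interface correlations. All shapes and data below are hypothetical, and whole candidate slices stand in for the bordering interface slices of the real method:

```python
import numpy as np

def corr2(a, b):
    """Two-dimensional correlation coefficient between two slices."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def select_candidates(candidates):
    """Pick one candidate slice per bed position so that the summed
    correlation across adjacent positions is maximal; on this layered
    graph the shortest-path search reduces to dynamic programming."""
    n_pos = len(candidates)
    best = [np.zeros(len(c)) for c in candidates]   # best score so far
    back = [np.zeros(len(c), dtype=int) for c in candidates]
    for p in range(1, n_pos):
        for j, cur in enumerate(candidates[p]):
            scores = [best[p - 1][i] + corr2(prev, cur)
                      for i, prev in enumerate(candidates[p - 1])]
            back[p][j] = int(np.argmax(scores))
            best[p][j] = max(scores)
    path = [int(np.argmax(best[-1]))]                # backtrack
    for p in range(n_pos - 1, 0, -1):
        path.append(int(back[p][path[-1]]))
    return path[::-1]

# Consistent candidates share anatomy (base); inconsistent ones do not.
rng = np.random.default_rng(3)
base = rng.standard_normal((16, 16))
good = lambda: base + 0.1 * rng.standard_normal((16, 16))
bad = lambda: rng.standard_normal((16, 16))
candidates = [[bad(), good()], [good(), bad()], [bad(), good()]]
path = select_candidates(candidates)
print(path)  # [1, 0, 1]: the anatomically consistent chain
```

The chosen path threads the anatomically similar candidates through every couch transition, which is exactly what suppresses the stitching artifacts.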
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
NASA Astrophysics Data System (ADS)
Warsito, W.; Noorhamdani, A. S.; Suratmo; Dwi Sapri, R.; Alkaroma, D.; Azhar, A. Z.
2018-04-01
A simple method has been used for the synthesis of a benzimidazole derivative from citronellal in kaffir lime oil under microwave irradiation. The compound was also synthesized by conventional heating for comparison. In addition, microwave-assisted synthesis was compared between dichloromethane and methanol solvents, with reaction times varied from 30 to 70 minutes for microwave irradiation and from 4 to 12 h for conventional heating. The synthesized 2-citronellyl benzimidazole compound was characterised by FT-IR, GC-MS, and 1H and 13C NMR spectroscopy. Conventional and microwave-assisted syntheses were compared through the correlation of reaction time with percentage yield. The optimum reaction times of microwave-assisted and conventional synthesis using dichloromethane solvent were 60 minutes (yield 19.23%) and 8 hours (yield 11.54%), respectively. Microwave-assisted synthesis thus showed a 157.81-fold enhancement over conventional heating, whereas with methanol solvent the yield tended to increase linearly with time but reached only 0.77 times that obtained with dichloromethane solvent.
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from only the received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method extracts more stable source signals with a valid ordering through iterative reordering of the extracted mixing matrix to reconstruct finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems in complex structures, an experiment has been carried out with a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
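The reordering step, matching separated components to reference measurements by correlation magnitude and fixing the arbitrary sign of each component, can be sketched as below. This is a greedy stand-in for the authors' iterative procedure; the ICA separation itself is omitted and the signals are hypothetical:

```python
import numpy as np

def reorder_by_reference(separated, references):
    """Greedily match each reference signal to the separated component
    with the largest absolute correlation, flipping the component's
    arbitrary sign when the correlation is negative."""
    n = len(references)
    corr = np.corrcoef(np.vstack([references, separated]))[:n, n:]
    out = np.empty_like(separated)
    used = set()
    for i in range(n):
        j = max((k for k in range(n) if k not in used),
                key=lambda k: abs(corr[i, k]))
        used.add(j)
        out[i] = separated[j] * np.sign(corr[i, j])
    return out

# Separated components come back permuted and with a flipped sign.
t = np.linspace(0.0, 1.0, 1000)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sign(np.sin(2 * np.pi * 3 * t))
refs = np.vstack([s1, s2])
separated = np.vstack([-s2, s1])      # wrong order, wrong sign
fixed = reorder_by_reference(separated, refs)
print(np.allclose(fixed, refs))  # True
```

In the paper's setting the references are signals measured on or near the physical sources, so the recovered ordering carries physical meaning rather than the arbitrary permutation ICA would otherwise return.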
Ashraf-Khorassani, M; Yan, Q; Akin, A; Riley, F; Aurigemma, C; Taylor, L T
2015-10-30
Method development for normal phase flash liquid chromatography traditionally employs preliminary screening using thin layer chromatography (TLC) with conventional solvents on bare silica. Extension to green flash chromatography via correlation of TLC migration results, with conventional polar/nonpolar liquid mixtures, and packed column supercritical fluid chromatography (SFC) retention times, via gradient elution on bare silica with a suite of carbon dioxide mobile phase modifiers, is reported. Feasibility of TLC/SFC correlation is individually described for eight ternary mixtures for a total of 24 neutral analytes. The experimental criteria for TLC/SFC correlation was assumed to be as follows: SFC/UV/MS retention (tR) increases among each of the three resolved mixture components; while, TLC migration (Rf) decreases among the same resolved mixture components. Successful correlation of TLC to SFC was observed for most of the polar organic solvents tested, with the best results observed via SFC on bare silica with methanol as the CO2 modifier and TLC on bare silica with a methanol/dichloromethane mixture. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Kaiyu; Zhang, Zhiyong; Ding, Xiaoyan; Tian, Fang; Huang, Yuqing; Chen, Zhong; Fu, Riqiang
2018-02-01
The feasibility of using the spin-echo based diagonal peak suppression method in solid-state MAS NMR homonuclear chemical shift correlation experiments is demonstrated. A complete phase cycling is designed in such a way that in the indirect dimension only the spin diffused signals are evolved, while all signals not involved in polarization transfer are refocused for cancellation. A data processing procedure is further introduced to reconstruct this acquired spectrum into a conventional two-dimensional homonuclear chemical shift correlation spectrum. A uniformly 13C, 15N labeled Fmoc-valine sample and the transmembrane domain of a human protein, LR11 (sorLA), in native Escherichia coli membranes have been used to illustrate the capability of the proposed method in comparison with standard 13C-13C chemical shift correlation experiments.
Wijsman, Liselotte Willemijn; Cachucho, Ricardo; Hoevenaar-Blom, Marieke Peternella; Mooijaart, Simon Pieter; Richard, Edo
2017-01-01
Background Smartphone-assisted technologies potentially provide the opportunity for large-scale, long-term, repeated monitoring of cognitive functioning at home. Objective The aim of this proof-of-principle study was to evaluate the feasibility and validity of performing cognitive tests in people at increased risk of dementia using smartphone-based technology during a 6 months follow-up period. Methods We used the smartphone-based app iVitality to evaluate five cognitive tests based on conventional neuropsychological tests (Memory-Word, Trail Making, Stroop, Reaction Time, and Letter-N-Back) in healthy adults. Feasibility was tested by studying adherence of all participants to perform smartphone-based cognitive tests. Validity was studied by assessing the correlation between conventional neuropsychological tests and smartphone-based cognitive tests and by studying the effect of repeated testing. Results We included 151 participants (mean age in years=57.3, standard deviation=5.3). Mean adherence to assigned smartphone tests during 6 months was 60% (SD 24.7). There was moderate correlation between the firstly made smartphone-based test and the conventional test for the Stroop test and the Trail Making test with Spearman ρ=.3-.5 (P<.001). Correlation increased for both tests when comparing the conventional test with the mean score of all attempts a participant had made, with the highest correlation for Stroop panel 3 (ρ=.62, P<.001). Performance on the Stroop and the Trail Making tests improved over time suggesting a learning effect, but the scores on the Letter-N-back, the Memory-Word, and the Reaction Time tests remained stable. Conclusions Repeated smartphone-assisted cognitive testing is feasible with reasonable adherence and moderate relative validity for the Stroop and the Trail Making tests compared with conventional neuropsychological tests. Smartphone-based cognitive testing seems promising for large-scale data-collection in population studies. PMID:28546139
Effects of organic and conventional cultivation methods on composition of eggplant fruits.
Raigón, María D; Rodríguez-Burruezo, Adrián; Prohens, Jaime
2010-06-09
Organic food is associated by the general public with improved nutritional properties, and this has led to increasing demand for organic vegetables. The effects of organic and conventional cultivation methods on dry matter, protein, minerals, and total phenolic content has been studied for two successive years in two landraces and one commercial hybrid of eggplant. In the first year, organically produced eggplants had higher mean contents (expressed on a fresh weight basis) of K (196 vs 171 mg 100 g(-1)), Ca (11.1 vs 8.7 mg 100 g(-1)), Mg (6.0 vs 4.6 mg 100 g(-1)), and total phenolics (49.8 vs 38.2 mg 100 g(-1)) than conventionally grown eggplants. In the second year, in which matched plots having a history of organic management were cultivated following organic or conventional fertilization practices, organically produced eggplants still had higher contents of K (272 vs 249 mg 100 g(-1)) and Mg (8.8 vs 7.6), as well as of Cu (0.079 vs 0.065 mg 100 g(-1)), than conventionally fertilized eggplants. Conventionally cultivated eggplants had a higher polyphenol oxidase activity than organically cultivated ones (3.19 vs 2.17 enzyme activity units), although no differences in browning were observed. Important differences in mineral concentrations between years were detected, which resulted in many correlations among mineral contents being significant. The first component of the principal component analysis separates the eggplants according to year, whereas the second component separates them according to the cultivation method (organic or conventional). Overall, the results show that organic management and fertilization have a positive effect on the accumulation of certain beneficial minerals and phenolic compounds in eggplant and that organically and conventionally produced eggplants might be distinguished according to their composition profiles.
Statistical significance of seasonal warming/cooling trends
NASA Astrophysics Data System (ADS)
Ludescher, Josef; Bunde, Armin; Schellnhuber, Hans Joachim
2017-04-01
The question whether a seasonal climate trend (e.g., the increase of summer temperatures in Antarctica in the last decades) is of anthropogenic or natural origin is of great importance for mitigation and adaptation measures alike. The conventional significance analysis assumes that (i) the seasonal climate trends can be quantified by linear regression, (ii) the different seasonal records can be treated as independent records, and (iii) the persistence in each of these seasonal records can be characterized by short-term memory described by an autoregressive process of first order. Here we show that assumption (ii) is not valid, due to strong intraannual correlations that couple the different seasons. We also show that, even in the absence of correlations, for Gaussian white noise, the conventional analysis leads to a strong overestimation of the significance of the seasonal trends, because multiple testing has not been taken into account. In addition, when the data exhibit long-term memory (which is the case in most climate records), assumption (iii) leads to a further overestimation of the trend significance. Combining Monte Carlo simulations with the Holm-Bonferroni method, we demonstrate how to obtain reliable estimates of the significance of the seasonal climate trends in long-term correlated records. For an illustration, we apply our method to representative temperature records from West Antarctica, which is one of the fastest-warming places on Earth and belongs to the crucial tipping elements in the Earth system.
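The multiple-testing correction named above is mechanical enough to sketch directly. A minimal Holm-Bonferroni implementation with hypothetical seasonal p-values (the Monte Carlo estimation of those p-values under long-term memory is the paper's harder part and is not reproduced):

```python
import numpy as np

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down procedure: test the k-th smallest
    p-value against alpha / (m - k) and stop at the first failure.
    Returns a boolean mask of rejected null hypotheses."""
    p = np.asarray(p_values, float)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for k, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - k):
            reject[idx] = True
        else:
            break
    return reject

# Four seasonal trend tests; a plain per-test alpha would reject three.
p_seasons = [0.004, 0.03, 0.012, 0.2]
mask = holm_bonferroni(p_seasons)
print(mask.tolist())  # [True, False, True, False]
```

The step-down thresholds control the family-wise error rate across the four seasons, which is exactly the multiple-testing issue the conventional season-by-season analysis ignores.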
Dynamic volume vs respiratory correlated 4DCT for motion assessment in radiation therapy simulation.
Coolens, Catherine; Bracken, John; Driscoll, Brandon; Hope, Andrew; Jaffray, David
2012-05-01
Conventional (i.e., respiratory-correlated) 4DCT exploits the repetitive nature of breathing to provide an estimate of motion; however, it has limitations due to binning artifacts and irregular breathing in actual patient breathing patterns. The aim of this work was to evaluate the accuracy and image quality of a dynamic volume CT approach (4D(vol)) using a 320-slice CT scanner to minimize these limitations, wherein entire image volumes are acquired dynamically without couch movement. This was compared to the conventional respiratory-correlated 4DCT approach (RCCT). 4D(vol) CT was performed and characterized on an in-house, programmable respiratory motion phantom containing multiple geometric and morphological "tumor" objects over a range of regular and irregular patient breathing traces obtained from 3D fluoroscopy, and compared to RCCT. The accuracy of volumetric capture and breathing displacement was evaluated and compared with the ground truth values and with the results reported using RCCT. A motion model was investigated to validate the number of motion samples needed to obtain accurate motion probability density functions (PDF). The impact of 4D image quality on this accuracy was then investigated. Dose measurements using volumetric and conventional scan techniques were also performed and compared. Both conventional and dynamic volume 4DCT methods were capable of estimating the programmed displacement of sinusoidal motion, but patient breathing is known not to be regular, and obvious differences were seen for realistic, irregular motion. The mean RCCT amplitude error averaged 4 mm (max. 7.8 mm), whereas the 4D(vol) CT error stayed below 0.5 mm. Similarly, the average absolute volume error was lower with 4D(vol) CT. 
Under irregular breathing, the 4D(vol) CT method provides a close description of the motion PDF (cross-correlation 0.99) and is able to track each object, whereas the RCCT method results in a significantly different PDF from the ground truth, especially for smaller tumors (cross-correlation ranging between 0.04 and 0.69). For the protocols studied, the dose measurements were higher in the 4D(vol) CT method (40%), but it was shown that significant mAs reductions can be achieved by a factor of 4-5 while maintaining image quality and accuracy. 4D(vol) CT using a scanner with a large cone-angle is a promising alternative for improving the accuracy with which respiration-induced motion can be characterized, particularly for patients with irregular breathing motion. This approach also generates 4DCT image data with a reduced total scan time compared to a RCCT scan, without the need for image binning or external respiration signals within the 16 cm scan length. Scan dose can be made comparable to RCCT by optimization of the scan parameters. In addition, it provides the possibility of measuring breathing motion for more than one breathing cycle to assess stability and obtain a more accurate motion PDF, which is currently not feasible with the conventional RCCT approach.
2013-01-01
Background Molecular imaging using magnetic nanoparticles (MNPs)—magnetic particle imaging (MPI)—has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is required to obtain a defined image, sophisticated hardware is required. Therefore, it is desirable to realize excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in a time domain and by applying MNP samples made from biocompatible ferucarbotran that have adjusted particle diameters. Methods The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method that calculates the image intensity from correlation information between the magnetization signal generated from MNPs and the system function was attempted, and the obtained image quality was compared with that using the prototype in terms of image resolution and image artifacts. Results MNP samples obtained by adjusting ferucarbotran showed superior properties to conventional ferucarbotran samples, and numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because the proposed method theoretically introduces image blurring, an additional algorithm will be required to improve performance. Conclusions MNP samples obtained by adjusting ferucarbotran showed magnetizing properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917
Tsochatzis, Emmanouil D; Bladenopoulos, Konstantinos; Papageorgiou, Maria
2012-06-01
Tocotrienols and tocopherols (tocols) are important phytochemical compounds with antioxidant activity and potential benefits for human health. Among cereals, barley is a good source of tocols. In the present study the effect of two cultivation methods, organic and conventional, on the tocol content in 12 Greek barley varieties was investigated. A validated reverse phase high-performance liquid chromatography method (RP-HPLC) with fluorescence detection (excitation at 292 nm, emission at 335 nm) was applied along with direct solvent extraction with acetonitrile at a 1:30 (w/v) sample/solvent ratio for tocol quantification. The results showed statistically significant differences (P < 0.05) between the two cultivation methods (except for δ-tocopherol) as well as among varieties. In the case of organic cultivation the four homologues of tocotrienol (α-, β + γ- and δ-) increased, by 3.05-37.14% for α-tocotrienol, 15.51-41.09% for (β + γ)-tocotrienol and 30.45-196.61% for δ-tocotrienol, while those of tocopherol (α- and β + γ- but not δ-) decreased, by 5.90-36.34% for α-tocopherol and 2.84-46.49% for (β + γ)-tocopherol. A simple correlation analysis between tocols revealed a good correlation between (β + γ)-tocotrienol and δ-tocotrienol. Although there was a significant decrease in the important α-tocopherol in the varieties studied under organic cultivation, there was an overall increase in tocotrienol content. The cultivation method (organic or conventional) had an important effect on tocotrienol and tocopherol concentrations in barley. An overall increase in total tocol content and a clear increment in the tocotrienol/tocopherol ratio were observed. Copyright © 2012 Society of Chemical Industry.
Borland, Ron; Balmford, James; Hitchman, Sara C.; Cummings, K. Michael; Driezen, Pete; Thompson, Mary E.
2017-01-01
Introduction: The rapid rise in electronic cigarettes (ECs) globally has stimulated much debate about the relative risk and public health impact of this new emerging product category as compared to conventional cigarettes. The sale and marketing of ECs containing nicotine are banned in many countries (eg, Australia) but are allowed in others (eg, United Kingdom). This study examined prevalence and correlates of the belief that ECs are a lot less harmful than conventional cigarettes under the different regulatory environments in Australia (ie, more restrictive) and the United Kingdom (ie, less restrictive). Methods: Australian and UK data from the 2013 survey of the International Tobacco Control Four-Country project were analyzed. Results: More UK than Australian respondents (58.5% vs. 35.2%) believed that ECs are a lot less harmful than conventional cigarettes but more respondents in Australia than in the United Kingdom selected “Don’t Know” (36.5% vs. 17.1%). The proportion that responded “A little less, equally or more harmful” did not differ between countries. Correlates of the belief that ECs are “A lot less harmful” differed between countries, while correlates of “Don’t Know” response did not differ. Conclusions: Consistent with the less restrictive regulatory environment affecting the sale and marketing of ECs, smokers and recent ex-smokers in the United Kingdom were more likely to believe ECs were less harmful relative to conventional cigarettes compared to those in Australia. Implications: What this study adds: Among smokers and ex-smokers, this study found that the belief that ECs are (a lot) less harmful than conventional cigarettes was considerably higher in the United Kingdom than in Australia in 2013. 
The finding is consistent with the less restrictive regulatory environment for ECs in the United Kingdom, suggesting that the regulatory framework for ECs adopted by a country can affect smokers’ perceptions about the relative harmfulness of ECs, the group that stands to gain the most from having an accurate belief about the relative harms of ECs. PMID:27190403
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique, which is optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
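The intuition behind background subtraction under correlated noise can be sketched numerically. Everything below is our own toy model, not the paper's: a slow drift shared by a "signal" partition and a "background" partition cancels when the two are subtracted, so the small parameter survives.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 0.01          # small parameter to estimate (hypothetical value)
n = 10000

# correlated noise: a slowly drifting offset shared by both partitions
drift = np.cumsum(rng.normal(0, 0.001, n))
white = lambda: rng.normal(0, 0.05, n)

signal_part = d + drift + white()      # partition carrying the parameter
background = drift + white()           # partition carrying only the noise

naive_est = signal_part.mean()                  # conventional estimate, drift-biased
bgsub_est = (signal_part - background).mean()   # background subtraction
# subtraction cancels the shared drift, so only the white noise limits precision
```

The naive mean inherits the full variance of the random-walk drift, while the subtracted estimate is limited only by the uncorrelated noise floor.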
Matsui, Shogo; Kajikawa, Masato; Maruhashi, Tatsuya; Hashimoto, Haruki; Kihara, Yasuki; Chayama, Kazuaki; Goto, Chikara; Aibara, Yoshiki; Yusoff, Farina Mohamad; Kishimoto, Shinji; Nakashima, Ayumu; Noma, Kensuke; Kawaguchi, Tomohiro; Matsumoto, Takeo; Higashi, Yukihito
2018-05-04
Measurement of flow-mediated vasodilation (FMD) is an established method for assessing endothelial function. Measurement of FMD is useful for showing the relationship between atherosclerosis and endothelial function, mechanisms of endothelial dysfunction, and clinical implications including effects of interventions and cardiovascular events. To shorten and simplify the measurement of FMD, we have developed a novel technique named short time FMD (stFMD). We investigated the validity of stFMD for assessment of endothelial function compared with conventional FMD. We evaluated stFMD and conventional FMD in 82 subjects including patients with atherosclerotic risk factors and cardiovascular disease (66 men and 16 women, 57 ± 16 years). Both stFMD and conventional FMD were significantly correlated with age, systolic blood pressure, diastolic blood pressure and baseline brachial artery diameter. In addition, stFMD was significantly correlated with conventional FMD (r = 0.76, P < 0.001). Bland-Altman plot analysis showed good agreement between stFMD and conventional FMD. Moreover, stFMD in the at risk group and that in the cardiovascular disease group were significantly lower than that in the no risk group (4.6 ± 2.3% and 4.4 ± 2.2% vs. 7.3 ± 1.9%, P < 0.001, respectively). Optimal cutoff value of stFMD for diagnosing atherosclerosis was 7.0% (sensitivity of 71.0% and specificity of 85.0%). These findings suggest that measurement of stFMD, a novel and simple method, is useful for assessing endothelial function. Measurement of stFMD may be suitable for screening of atherosclerosis when repeated measurements of vascular function are required and when performing a clinical trial using a large population. URL for Clinical Trial: http://UMIN; Registration Number for Clinical Trial: UMIN000025458. Copyright © 2017 Elsevier B.V. All rights reserved.
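The two agreement statistics used above, Pearson correlation and Bland-Altman limits of agreement, can be sketched in a few lines. The measurement values below are invented for illustration, not study data:

```python
import numpy as np

# hypothetical FMD measurements (%) from the two techniques for a few subjects
st_fmd   = np.array([4.1, 6.8, 3.5, 7.2, 5.0, 2.9, 6.1, 4.8])
conv_fmd = np.array([4.4, 7.0, 3.2, 7.5, 4.6, 3.1, 6.4, 5.1])

# Pearson correlation between the two methods
r = np.corrcoef(st_fmd, conv_fmd)[0, 1]

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
diff = st_fmd - conv_fmd
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
```

A high `r` alone does not establish agreement; the Bland-Altman limits show how far an individual stFMD reading may plausibly deviate from the conventional one.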
Lee, Jong Hwa; Kim, Sang Beom; Lee, Kyeong Woo; Lee, Sook Joung; Park, Hyuntae; Kim, Dong Won
2017-09-01
The use of whole-body vibration (WBV) therapy has recently been applied and investigated as a rehabilitation method for subacute stroke patients. We aimed to evaluate the effects of WBV therapy on recovery of balance in subacute stroke patients who were unable to gain sitting balance. The conventional rehabilitation group (CG) received conventional physical therapy, including sitting balance training by a physical therapist, for 30 min per session, twice a day, five days a week, for two weeks. The whole-body vibration group (VG) received one daily session of conventional physical therapy and, in place of the second session, WBV therapy for 30 min a day, five days a week, for two weeks. Fifteen patients in the CG and 15 patients in the VG completed the two-week therapy. After the two weeks, both groups showed functional improvement. Patients in the VG improved in functional ambulation category, Berg balance scale, and trunk impairment scale scores. However, no statistically significant correlations between the therapeutic methods and outcomes were observed in either group. Our results suggest that WBV therapy led to improvement in balance recovery for subacute stroke patients. Because WBV therapy was as effective as conventional physical therapy, it can be considered a clinical method to improve the sitting balance of subacute stroke patients.
Center index method-an alternative for wear measurements with radiostereometry (RSA).
Dahl, Jon; Figved, Wender; Snorrason, Finnur; Nordsletten, Lars; Röhrl, Stephan M
2013-03-01
Radiostereometry (RSA) is considered the most precise and accurate method for wear measurements in total hip replacement. Post-operative stereoradiographs have so far been necessary for wear measurement; hence, the use of RSA has been limited to studies planned for RSA measurements. We compared a new RSA method for wear measurements that does not require previous radiographs with conventional RSA. Instead of comparing present stereoradiographs with post-operative ones, we developed a method for calculating the post-operative position of the center of the femoral head on the present examination and using this as the index measurement. We compared this alternative method to conventional RSA in 27 hips in an ongoing RSA study. We found a high degree of agreement between the methods for both mean proximal (1.19 mm vs. 1.14 mm) and mean 3D wear (1.52 mm vs. 1.44 mm) after 10 years. Intraclass correlation coefficients (ICC) were 0.958 and 0.955, respectively (p<0.001 for both ICCs). The results were also within the limits of agreement when plotted subject-by-subject in a Bland-Altman plot. Our alternative method for wear measurements with RSA offers results comparable to conventional RSA measurements. It allows precise wear measurements without previous radiological examinations. Copyright © 2012 Orthopaedic Research Society.
Optical efficiency of solar concentrators by a reverse optical path method.
Parretta, A; Antonini, A; Milan, E; Stefancich, M; Martinelli, G; Armani, M
2008-09-15
A method for the optical characterization of a solar concentrator, based on the reverse illumination by a Lambertian source and measurement of intensity of light projected on a far screen, has been developed. It is shown that the projected light intensity is simply correlated to the angle-resolved efficiency of a concentrator, conventionally obtained by a direct illumination procedure. The method has been applied by simulating simple reflective nonimaging and Fresnel lens concentrators.
A computerized scheme for lung nodule detection in multiprojection chest radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Wei; Li Qiang; Boyce, Sarah J.
2012-04-15
Purpose: Our previous study indicated that multiprojection chest radiography could significantly improve radiologists' performance for lung nodule detection in clinical practice. In this study, the authors further verify that multiprojection chest radiography can greatly improve the performance of a computer-aided diagnostic (CAD) scheme. Methods: Our database consisted of 59 subjects, including 43 subjects with 45 nodules and 16 subjects without nodules. The 45 nodules included 7 real and 38 simulated ones. The authors developed a conventional CAD scheme and a new fusion CAD scheme to detect lung nodules. The conventional CAD scheme consisted of four steps for (1) identification of initial nodule candidates inside lungs, (2) nodule candidate segmentation based on dynamic programming, (3) extraction of 33 features from nodule candidates, and (4) false positive reduction using a piecewise linear classifier. The conventional CAD scheme processed each of the three projection images of a subject independently and discarded the correlation information between the three images. The fusion CAD scheme included the four steps in the conventional CAD scheme and two additional steps for (5) registration of all candidates in the three images of a subject, and (6) integration of correlation information between the registered candidates in the three images. The integration step retained all candidates detected at least twice in the three images of a subject and removed those detected only once in the three images as false positives. A leave-one-subject-out testing method was used for evaluation of the performance levels of the two CAD schemes. Results: At the sensitivities of 70%, 65%, and 60%, our conventional CAD scheme reported 14.7, 11.3, and 8.6 false positives per image, respectively, whereas our fusion CAD scheme reported 3.9, 1.9, and 1.2 false positives per image, and 5.5, 2.8, and 1.7 false positives per patient, respectively. The low performance of the conventional CAD scheme may be attributed to the high noise level in chest radiography, and the small size and low contrast of most nodules. Conclusions: This study indicated that the fusion of correlation information in multiprojection chest radiography can markedly improve the performance of a CAD scheme for lung nodule detection.
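The integration step described above, keeping only candidates detected in at least two of the three projections, is a simple voting rule. A minimal sketch follows; the candidate coordinates and distance tolerance are invented, and real inter-projection registration is omitted:

```python
import numpy as np

# Toy fusion step: a candidate is kept only if a registered match (within
# `tol` pixels) also appears in at least one other projection, i.e. it is
# detected in >= 2 of the 3 projections overall.
tol = 5.0
projections = [
    [(100.0, 120.0), (40.0, 60.0)],   # projection 1 candidates (x, y)
    [(102.0, 118.0), (200.0, 30.0)],  # projection 2
    [(99.0, 121.0)],                  # projection 3
]

def fuse(projections, tol):
    kept = []
    for i, plist in enumerate(projections):
        for c in plist:
            hits = sum(
                any(np.hypot(c[0] - d[0], c[1] - d[1]) <= tol for d in other)
                for j, other in enumerate(projections) if j != i
            )
            if hits >= 1:  # matched in at least one other projection
                kept.append(c)
    return kept

kept = fuse(projections, tol)
# the candidate near (100, 120) appears in all three lists and survives;
# the two singleton detections are removed as false positives
```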
A novel iris patterns matching algorithm of weighted polar frequency correlation
NASA Astrophysics Data System (ADS)
Zhao, Weijie; Jiang, Linhua
2014-11-01
Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; it can also be used for evaluating the similarity of any two images. The WPFC method is a novel matching and evaluating method for iris image matching, completely different from the conventional methods. For instance, the classical John Daugman method of iris recognition uses 2D Gabor wavelets to extract features of an iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. The new method is motivated by the observation that the iris pattern carrying the most information for recognition is the fine structure at high frequency, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and set different weights for different frequencies. We then calculate the correlation of the two iris images in the frequency domain, evaluate the pair by summing the discrete correlation values with regulated weights, and compare the result with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable. Our method provides a new prospect for iris recognition systems.
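The core idea, correlating weighted magnitude spectra and thresholding the score, can be sketched as below. This is our illustrative reading, not the paper's implementation: the polar unwrapping of the iris is omitted, and the radial high-pass weighting is an assumed stand-in for the paper's regulated weights.

```python
import numpy as np

def wpfc_score(img_a, img_b, hp_weight=2.0):
    """Toy weighted correlation of two equal-size images in the frequency
    domain. Radial high-pass weights emphasize fine structure; the iris
    polar-unwrapping step of the paper is omitted here."""
    Fa = np.fft.fftshift(np.fft.fft2(img_a - img_a.mean()))
    Fb = np.fft.fftshift(np.fft.fft2(img_b - img_b.mean()))
    h, w = img_a.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    weights = 1.0 + hp_weight * radius / radius.max()  # boost high frequencies
    a = (np.abs(Fa) * weights).ravel()
    b = (np.abs(Fb) * weights).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))  # normalized correlation score

rng = np.random.default_rng(1)
iris = rng.random((32, 32))
same = iris + rng.normal(0, 0.05, iris.shape)  # noisy capture of the same iris
other = rng.random((32, 32))                   # a different, unrelated pattern
# a decision threshold between the two scores separates match from non-match
```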
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computer-intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations required for convergence of higher-order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length-scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
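The defining construction, integrating a PDF over discrete intervals and computing moments directly from the resulting DPF, can be illustrated with a Gaussian intensity PDF. The distribution and binning below are our assumptions; the radiative-transfer coupling itself is omitted:

```python
import numpy as np
from math import erf

# Discrete probability function (DPF): the integral of the PDF over each bin.
mu, sigma = 5.0, 1.0                    # hypothetical intensity distribution
edges = np.linspace(0.0, 10.0, 101)     # 100 intensity bins
centers = 0.5 * (edges[:-1] + edges[1:])

cdf = lambda x: 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))
dpf = np.array([cdf(b) - cdf(a) for a, b in zip(edges[:-1], edges[1:])])

# moments follow directly from the DPF, with no stochastic sampling needed
mean = np.sum(centers * dpf)
second = np.sum(centers**2 * dpf)       # E[x^2] = mu^2 + sigma^2 = 26 here
```

Because each bin probability is exact (a CDF difference), every moment converges at the same level of discretization, which is the advantage over sampling-based PDF methods noted in the abstract.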
Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc
2015-09-21
In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
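The spectral-correlation step, finding the frequency detuning at which two orthogonally-polarized spectra best overlap, can be sketched with synthetic data. The spectrum length, shift, and noise level are our assumptions, not fiber measurements:

```python
import numpy as np

# Sketch: locate the detuning between two ϕOTDR-like spectra via the peak of
# their cross-correlation; the peak lag maps to the local birefringence.
rng = np.random.default_rng(2)
n, shift = 512, 37                       # true detuning of 37 frequency bins
spectrum_x = rng.random(n)                                 # one polarization axis
spectrum_y = np.roll(spectrum_x, shift) + rng.normal(0, 0.05, n)  # the other

# cross-correlate over all circular lags and take the argmax
lags = np.arange(n)
xc = np.array([np.dot(spectrum_x, np.roll(spectrum_y, -lag)) for lag in lags])
estimated_shift = int(np.argmax(xc))     # recovers the detuning in bins
```

In the actual technique the lag axis is a frequency detuning proportional to the local index difference between the two polarization axes, evaluated independently at each fiber position.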
Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.
2015-01-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614
Counting the peaks in the excitation function for precompound processes
NASA Astrophysics Data System (ADS)
Bonetti, R.; Hussein, M. S.; Mello, P. A.
1983-08-01
The "counting of maxima" method of Brink and Stephen, conventionally used for the extraction of the correlation width of statistical (compound nucleus) reactions, is generalized to include precompound processes as well. It is found that this method supplies an important independent check of the results obtained from autocorrelation studies. An application is made to the reaction 25Mg(3He,p). NUCLEAR REACTIONS: statistical multistep compound processes discussed.
NASA Astrophysics Data System (ADS)
Kähler, Sven; Olsen, Jeppe
2017-11-01
A computational method is presented for systems that require high-level treatments of static and dynamic electron correlation but cannot be treated using conventional complete active space self-consistent field-based methods due to the required size of the active space. Our method introduces an efficient algorithm for perturbative dynamic correlation corrections for compact non-orthogonal MCSCF calculations. In the algorithm, biorthonormal expansions of orbitals and CI wave functions are used to reduce the scaling of the performance-determining step from quadratic to linear in the number of configurations. We describe a hierarchy of configuration spaces that can be chosen for the active space. Potential curves for the nitrogen molecule and the chromium dimer are compared for different configuration spaces. Already the most compact spaces yield qualitatively correct potentials that, with increasing size of the configuration spaces, systematically approach complete active space results.
Fiber fault location utilizing traffic signal in optical network.
Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi
2013-10-07
We propose and experimentally demonstrate a method for fault location in an optical communication network. The method uses the traffic signal transmitted across the network as the probe signal and locates the fault by a correlation technique. Compared with conventional techniques, our method has a simple structure and low operating expenditure, because no additional devices such as a light source, modulator, or signal generator are used. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse ranging techniques. Moreover, the signal extraction process improves the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and a detection range of over 23 km with -8-dBm mean launched power in an optical network based on synchronous digital hierarchy protocols.
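Correlation ranging with a live traffic signal can be sketched as follows. The sample rate, group index, reflection strength, and noise level are assumed values chosen for illustration, not the experiment's parameters:

```python
import numpy as np

# Sketch: locate a reflective fault by cross-correlating the transmitted
# traffic bit stream with the back-reflected signal.
rng = np.random.default_rng(3)
fs = 1.25e9                     # receiver sample rate, samples/s (assumed)
c, n_g = 3e8, 1.468             # vacuum light speed; fiber group index (assumed)
n, delay = 1 << 16, 2000        # capture length; round-trip delay in samples

traffic = rng.integers(0, 2, n).astype(float) * 2 - 1   # +/-1 bit stream
echo = np.zeros(n)
echo[delay:] = 0.1 * traffic[:-delay]                   # weak fault reflection
echo += rng.normal(0, 0.2, n)                           # receiver noise

# FFT-based circular cross-correlation; the strongest lag is the round trip
xc = np.fft.irfft(np.fft.rfft(echo) * np.conj(np.fft.rfft(traffic)))
lag = int(np.argmax(xc[: n // 2]))
fault_distance_m = (lag / fs) * c / (2 * n_g)   # one-way distance to the fault
```

The factor of 2 converts the round-trip delay to a one-way distance; dividing by the group index converts vacuum light speed to propagation speed in the fiber.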
NASA Astrophysics Data System (ADS)
Parracino, Stefano; Richetta, Maria; Gelfusa, Michela; Malizia, Andrea; Bellecci, Carlo; De Leo, Leonardo; Perrimezzi, Carlo; Fin, Alessandro; Forin, Marco; Giappicucci, Francesca; Grion, Massimo; Marchese, Giuseppe; Gaudio, Pasquale
2016-10-01
Urban air pollution causes deleterious effects on human health and the environment. To meet stringent standards imposed by the European Commission, advanced measurement methods are required. Remote sensing techniques, such as light detection and ranging (LiDAR), can be a valuable option for evaluating particulate matter (PM) emitted by vehicles in urban traffic, with high sensitivity and in shorter time intervals. Because air quality problems are not confined to large urban areas, a measurement campaign was performed in a suburban area of Crotone, Italy, using both a compact LiDAR system and conventional instruments for real-time vehicle emissions monitoring along a congested road. First results reported in this paper show a strong dependence between variations of LiDAR backscattering signals and traffic-related air pollution levels. Moreover, time-resolved LiDAR data averaged in limited regions, directly above conventional monitoring stations at the border of an intersection, were found to be linearly correlated with the PM concentration levels, with a correlation coefficient between 0.75 and 0.84.
One device, one equation: the simplest way to objectively evaluate psoriasis severity.
Choi, Jae Woo; Kim, Bo Ri; Choi, Chong Won; Youn, Sang Woong
2015-02-01
The erythema, scale and thickness of psoriasis lesions can be converted to bioengineering parameters. An objective psoriasis severity assessment is advantageous in terms of accuracy and reproducibility over conventional severity assessment. We aimed to formulate an objective psoriasis severity index with a single bioengineering device that could substitute for the conventional subjective Psoriasis Severity Index. A linear regression analysis was performed to derive the formula, with the subjective Psoriasis Severity Index as the dependent variable and various bioengineering parameters determined from 157 psoriasis lesions as independent variables. The construct validity of the objective Psoriasis Severity Index was evaluated with an additional 30 psoriasis lesions through a Pearson correlation analysis. The formula is composed of hue and brightness, which are obtainable with a colorimeter alone. A very strong positive correlation was found between the objective and subjective psoriasis severity indexes. The objective Psoriasis Severity Index is a novel, practical and valid assessment method that can substitute for the conventional one. Combined with subjective area assessment, it could further replace the Psoriasis Area and Severity Index, which is currently the most popular. © 2014 Japanese Dermatological Association.
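Deriving an index by regressing a subjective score on colorimeter readings can be sketched as below. The data, coefficients, and noise level are entirely synthetic; the published formula's coefficients are not reproduced here:

```python
import numpy as np

# Sketch: least-squares fit of a severity index on hue and brightness.
rng = np.random.default_rng(4)
n = 157
hue = rng.uniform(0, 60, n)            # synthetic colorimeter hue values
brightness = rng.uniform(30, 90, n)    # synthetic brightness values
# synthetic "subjective" scores generated from assumed coefficients + noise
severity = 0.08 * hue - 0.05 * brightness + 6.0 + rng.normal(0, 0.3, n)

X = np.column_stack([hue, brightness, np.ones(n)])   # design matrix + intercept
coef, *_ = np.linalg.lstsq(X, severity, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, severity)[0, 1]           # construct-validity check
```

The Pearson `r` between predicted and observed scores plays the role of the construct-validity correlation reported in the abstract.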
A simple model for the dependence on local detonation speed of the product entropy
NASA Astrophysics Data System (ADS)
Hetherington, David C.; Whitworth, Nicholas J.
2012-03-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.
A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)
NASA Astrophysics Data System (ADS)
Hetherington, David
2011-06-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.
NASA Astrophysics Data System (ADS)
Lee, Kang Il
2015-01-01
A new method for measuring the normalized broadband ultrasound attenuation (nBUA) in trabecular bone by using a bidirectional transverse transmission technique was proposed and validated against measurements obtained by using the conventional transverse transmission technique. There was no significant difference between the nBUA measurements obtained for 14 bovine femoral trabecular bone samples by using the bidirectional and the conventional transverse transmission techniques. The nBUA measured by using the two transverse transmission techniques showed strong positive correlations of r = 0.87 to 0.88 with the apparent bone density, consistent with the behavior in human trabecular bone in vitro. We expect that the new method can be usefully applied for improved accuracy and precision in clinical measurements.
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of a complex extended target, which limits the quality of the recovered image; thus, a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to formulate a new optimization problem for extended target imaging. The proven alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm achieves efficient high-resolution imaging of extended targets.
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in the assessment of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
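For context, the conventional spike-triggered average (STA) that such methods are compared against can be sketched as follows. The filter shape, nonlinearity, and all parameter values are our illustrative assumptions; with a zero-mean Gaussian stimulus, the STA recovers the linear kernel up to scale:

```python
import numpy as np

# Sketch of reverse correlation / spike-triggered averaging on synthetic data.
rng = np.random.default_rng(5)
n, k = 200000, 15
stimulus = rng.normal(0, 1, n)                 # zero-mean Gaussian, as STA requires
true_filter = np.exp(-np.arange(k) / 4.0)      # hypothetical linear kernel

# linear-nonlinear style response: filtered drive -> spike probability
drive = np.convolve(stimulus, true_filter, mode="full")[:n]
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))  # logistic nonlinearity (assumed)
spikes = rng.random(n) < p_spike

# spike-triggered average: mean stimulus window preceding each spike
idx = np.flatnonzero(spikes)
idx = idx[idx >= k - 1]
sta = np.mean([stimulus[i - k + 1 : i + 1][::-1] for i in idx], axis=0)

# up to an overall scale factor, the STA should match the true filter
corr = np.corrcoef(sta, true_filter)[0, 1]
```

If the stimulus were non-Gaussian or had a nonzero mean, this estimator would be biased, which is exactly the limitation motivating GLM-based and noniterative alternatives.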
Woynaroski, Tiffany; Oller, D. Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul
2017-01-01
Theory and research suggest that vocal development predicts “useful speech” in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently “in development” and/or only available for research purposes and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. PMID:27459107
Vaccaro, Calogero; Busetto, Roberto; Bernardini, Daniele; Anselmi, Carlo; Zotti, Alessandro
2012-03-01
To evaluate the precision and accuracy of assessing bone mineral density (BMD) by use of mean gray value (MGV) on digitized and digital images of conventional and digital radiographs, respectively, of ex vivo bovine and equine bone specimens, in relation to the gold-standard technique of dual-energy x-ray absorptiometry (DEXA). Samples comprised left and right metatarsal bones from 11 beef cattle and right femurs from 2 horses. Bovine specimens were imaged by use of conventional radiography, whereas equine specimens were imaged by use of computed radiography (digital radiography). Each specimen was subsequently scanned by use of the same DEXA equipment. The BMD values resulting from each DEXA scan were paired with the MGVs obtained by use of software on the corresponding digitized or digital radiographic image. The MGV analysis of digitized and digital x-ray images was a precise (coefficient of variation, 0.1 and 0.09, respectively) and highly accurate method for assessing BMD, compared with DEXA (correlation coefficient, 0.910 and 0.937 for conventional and digital radiography, respectively). The high correlation between MGV and BMD indicated that MGV analysis may be a reliable alternative to DEXA in assessing radiographic bone density. This may provide a new, inexpensive, and readily available estimate of BMD.
Why human milk is more nutritious than cow milk
NASA Astrophysics Data System (ADS)
Voorhoeve, Niels; Allan, Douglas C.; Moret, M. A.; Zebende, G. F.; Phillips, J. C.
2018-05-01
The evolution of milk, the key infant nutrient, is analyzed using a novel thermodynamic molecular method. The method is general, and it has many advantages compared to conventional molecular dynamics simulations. It is much simpler, and it connects amino acid sequences directly to function, often without knowing detailed "folded" globular structures. It emphasizes synchronized critical fluctuations due to long-range correlations in globular curvatures. The titled question has not been answered, or even discussed successfully, by other molecular methods.
NASA Astrophysics Data System (ADS)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Ezoddini Ardakani, Fatemeh; Zangoie Booshehri, Maryam; Banadaki, Seyed Hossein Saeed; Nafisi-Moghadam, Reza
2012-01-01
Background Scaphoid fractures are the most common type of carpal fractures. Objectives The aim of the study was to compare the diagnostic value of panoramic and conventional radiographs of the wrist in scaphoid fractures. Patients and Methods The panoramic and conventional radiographs of 122 patients with acute and chronic wrist trauma were studied. The radiographs were analyzed and examined by two independent observers: one physician radiologist and one maxillofacial radiologist. The final diagnosis was made by an orthopedic specialist. The kappa test was used for statistical calculations, inter- and intra-observer agreement, and correlation between the two techniques. Results Wrist panoramic radiography was more accurate than conventional radiography for ruling out scaphoid fractures. There was agreement in 85% or more of the cases. Agreement values were higher, with better inter- and intra-observer agreement, for panoramic examinations than for conventional radiographic examinations. Conclusion The panoramic examination of the wrist is a useful technique for the diagnosis and follow-up of scaphoid fractures. Its use is recommended as a complement to conventional radiography in cases with inconclusive findings. PMID:23599708
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
Comparative study of Sperm Motility Analysis System and conventional microscopic semen analysis
KOMORI, KAZUHIKO; ISHIJIMA, SUMIO; TANJAPATKUL, PHANU; FUJITA, KAZUTOSHI; MATSUOKA, YASUHIRO; TAKAO, TETSUYA; MIYAGAWA, YASUSHI; TAKADA, SHINGO; OKUYAMA, AKIHIKO
2006-01-01
Background and Aim: Conventional manual sperm analysis still shows variations in structure, process and outcome although World Health Organization (WHO) guidelines present an appropriate method for sperm analysis. In the present study a new system for sperm analysis, Sperm Motility Analysis System (SMAS), was compared with manual semen analysis based on WHO guidelines. Materials and Methods: Samples from 30 infertility patients and 21 healthy volunteers were subjected to manual microscopic analysis and SMAS analysis, simultaneously. We compared these two methods with respect to sperm concentration and percent motility. Results: Sperm concentrations obtained by SMAS (Csmas) and manual microscopic analyses on WHO guidelines (Cwho) were strongly correlated (Cwho = 1.325 × Csmas; r = 0.95, P < 0.001). If we excluded subjects with Csmas values >30 × 10⁶ sperm/mL, the results were more similar (Cwho = 1.022 × Csmas; r = 0.81, P < 0.001). Percent motility obtained by SMAS (Msmas) and manual analysis on WHO guidelines (Mwho) were strongly correlated (Mwho = 1.214 × Msmas; r = 0.89, P < 0.001). Conclusions: The data indicate that the results of SMAS and those of manual microscopic sperm analyses based on WHO guidelines are strongly correlated. SMAS is therefore a promising system for sperm analysis. (Reprod Med Biol 2006; 5: 195–200) PMID:29662398
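The reported relations of the form Cwho = k × Csmas are no-intercept least-squares fits. A minimal sketch of how such a slope is obtained (the paired counts below are invented, not the study data):

```python
import numpy as np

def slope_through_origin(x, y):
    """Least-squares slope k for the no-intercept model y = k * x,
    i.e. k = sum(x*y) / sum(x*x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.dot(x, y) / np.dot(x, x))

# Hypothetical paired concentrations (10^6 sperm/mL): SMAS vs. WHO manual counts.
csmas = np.array([10.0, 20.0, 40.0, 55.0, 80.0])
cwho = 1.3 * csmas + np.array([0.5, -1.0, 2.0, -0.5, 1.0])  # manual counts run high
k = slope_through_origin(csmas, cwho)
```

Here `k` recovers the built-in factor of about 1.3, the same kind of proportionality constant as the 1.325 reported for the full cohort.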
2012-09-01
AWARD NUMBER: W81XWH-11-1-0705. TITLE: The Generation of Novel MR Imaging Techniques to... Only fragments of this abstract survive: using techniques to concurrently stain and three-dimensionally analyze many cell types, the new methods allowed visualization of structures in damaged samples that were not visible with conventional techniques, including structures that are impossible to see with current methods.
Removal of Giardia and Cryptosporidium in drinking water treatment: a pilot-scale study.
Hsu, Bing Mu; Yeh, Hsuan Hsien
2003-03-01
Giardia and Cryptosporidium have emerged as waterborne pathogens of concern for public health. The aim of this study was to examine both parasites in water samples taken from three pilot-scale plant processes located in southern Taiwan, in order to upgrade the current facilities. The three processes were: a conventional process without prechlorination (Process 1), a conventional process plus ozonation and pellet softening (Process 2), and an integrated membrane process (MF plus NF) following a conventional process (Process 3). The detection methods for both parasites were modified from USEPA Methods 1622 and 1623. Results indicated that coagulation, sedimentation, and filtration removed the largest percentage of both protozoan parasites. The pre-ozonation step can destroy both parasites, especially Giardia cysts. The microfiltration systems intercepted Giardia cysts and Cryptosporidium oocysts completely. A significant correlation between water turbidity and Cryptosporidium oocysts was found in this study. Similar results were also found between three sizes of particles (φ = 3-5, 5-8, and 8-10 μm) and Cryptosporidium oocysts.
TORRES, Fernanda Ferrari Esteves; BOSSO-MARTELO, Roberta; ESPIR, Camila Galletti; CIRELLI, Joni Augusto; GUERREIRO-TANOMARU, Juliane Maria; TANOMARU-FILHO, Mario
2017-01-01
Abstract Objective To evaluate the solubility, dimensional stability, filling ability, and volumetric change of root-end filling materials using conventional tests and new micro-CT-based methods. Results The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and, at 30 days, higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). Conclusions ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials, complementing conventional tests. PMID:28877275
Wald, Lawrence L; Polimeni, Jonathan R
2017-07-01
We review the components of time-series noise in fMRI experiments and the effect of image acquisition parameters on the noise. In addition to helping determine the total amount of signal and noise (and thus temporal SNR), the acquisition parameters have been shown to be critical in determining the ratio of thermal to physiologically induced noise components in the time series. Although limited attention has been given to this latter metric, we show that it determines the degree of spatial correlation seen in the time-series noise. The spatial correlations of the physiological noise component are well known, but recent studies have shown that they can lead to a higher than expected false-positive rate in cluster-wise inference based on the parametric statistical methods used by many researchers. Based on an understanding of the effect of acquisition parameters on the noise mixture, we propose several acquisition strategies that might help reduce this elevated false-positive rate, such as moving to high spatial resolution or using highly accelerated acquisitions where thermal sources dominate. We suggest that the spatial noise correlations at the root of the inflated false-positive rate can be limited with these strategies, and that the well-behaved spatial auto-correlation functions (ACFs) assumed by the conventional statistical methods are retained if the high-resolution data are smoothed to conventional resolutions. Copyright © 2017 Elsevier Inc. All rights reserved.
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of the citrus-derived bioactive 5-demethylnobiletin from the demethylation of nobiletin, we compared surface-enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrate that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding
Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro
2015-01-01
Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
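The flavor of such embedding-based interaction measures can be illustrated with a toy delay-embedding cross-prediction. This is a simplified stand-in for the paper's cross-embedding method, in the general spirit of nonlinear state-space reconstruction; the signals, embedding parameters, and weighting rule below are invented for demonstration.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding of a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_map_skill(x, y, dim=3, tau=1, k=4):
    """Predict y from the delay embedding of x with k nearest neighbors;
    the correlation between predicted and actual y measures how much of
    y's dynamics is recoverable from x's reconstructed state space."""
    X = delay_embed(x, dim, tau)
    y = y[(dim - 1) * tau :]
    preds = np.empty(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / (d[nn].min() + 1e-12))
        preds[i] = np.average(y[nn], weights=w)
    return float(np.corrcoef(preds, y)[0, 1])

# Hypothetical pair of signals: x is a lagged, noisy copy of y, so x's
# embedding should predict y well (high cross-mapping skill).
rng = np.random.default_rng(5)
y = np.sin(np.linspace(0, 20 * np.pi, 600))
x = np.roll(y, 2) + 0.05 * rng.normal(size=y.size)
skill = cross_map_skill(x, y)
```

A skill near 1 indicates strong dynamical coupling; contrasting skills in the two directions is what distinguishes this family of methods from symmetric correlation-based analysis.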
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, correct type I error rates, and acceptable coverage rates, regardless of the true random-effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
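The cluster-bootstrap resampling scheme itself is simple. A sketch with an invented clustered dataset, using the pooled sample mean as a stand-in estimator (the paper's actual procedure refits Cox's model on every replicate):

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_bootstrap_se(clusters, estimator, n_boot=500, rng=rng):
    """Cluster-bootstrap standard error of an estimator.

    clusters  : list of per-cluster data arrays (the resampling unit).
    estimator : function mapping a pooled 1-D array to a scalar estimate.
    """
    n = len(clusters)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample whole clusters, with replacement
        pooled = np.concatenate([clusters[i] for i in idx])
        stats.append(estimator(pooled))
    return float(np.std(stats, ddof=1))

# Hypothetical clustered data: 20 clusters of 8 correlated observations each,
# correlation induced by a latent cluster-level random effect.
clusters = [rng.normal(loc=m, scale=1.0, size=8) for m in rng.normal(0, 1, 20)]
se = cluster_bootstrap_se(clusters, np.mean)
```

Because whole clusters are resampled, the between-cluster variability that a naive i.i.d. bootstrap would miss is reflected in `se`; the two-step variant would additionally resample individuals inside each drawn cluster.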
Zhu, Xudong; Arman, Bessembayev; Chu, Ju; Wang, Yonghong; Zhuang, Yingping
2017-05-01
To develop an efficient, cost-effective screening process to improve the production of glucoamylase in Aspergillus niger. The cultivation of A. niger was achieved with well-dispersed morphology in 48-deep-well microtiter plates, which increased the throughput of the samples compared to traditional flask cultivation. There was a close negative correlation between glucoamylase activity and the pH of the fermentation broth. A novel high-throughput analysis method using Methyl Orange was developed. When compared to the conventional analysis method using 4-nitrophenyl α-D-glucopyranoside as substrate, a correlation coefficient of 0.96 was obtained by statistical analysis. Using this novel screening method, we acquired a strain with an activity of 2.2 × 10³ U ml⁻¹, a 70% higher yield of glucoamylase than its parent strain.
Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam
2018-01-01
The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on suppressing the noise or non-speech components of the target air-conduction signal. However, air-conducted speech is hard to differentiate from babble or white noise. To overcome this problem, the proposed algorithm uses bone-conduction speech signals together with soft thresholding based on the Shannon entropy principle and the cross-correlation of the air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed that thresholds the wavelet packet coefficients of the noisy speech using Shannon entropy and cross-correlation with the bone-conduction speech signal. Each threshold is generated by the entropy and cross-correlation approaches in the sub-bands produced by wavelet packet decomposition. Performance was evaluated with objective quality measures: PESQ, RMSE, correlation, and SNR. MATLAB simulations show that the proposed method reduces noise; to verify its feasibility, we compared the air- and bone-conduction speech signals and their spectra. The results confirm the high performance of the proposed method, making it well suited to future applications in communication devices, noisy environments, construction, and military operations.
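The core idea, scaling a sub-band's soft threshold by how strongly the noisy air channel agrees with the noise-robust bone channel, can be sketched as follows. The combination rule and the signals are illustrative assumptions, not the paper's exact formulas, and the wavelet packet decomposition is omitted (a single sub-band is simulated directly):

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft thresholding: shrink coefficients toward zero by thr."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def shannon_entropy(coeffs, eps=1e-12):
    """Shannon entropy of normalized coefficient energies; noise-like
    sub-bands have spread-out energy and hence higher entropy."""
    p = coeffs ** 2 / (np.sum(coeffs ** 2) + eps)
    return float(-np.sum(p * np.log(p + eps)))

def band_threshold(air_band, bone_band, base_thr):
    """Scale a sub-band's threshold by the air/bone cross-correlation:
    bands where the two channels agree are treated as speech and shrunk
    less. (Illustrative rule, not the paper's exact formula.)"""
    c = np.corrcoef(air_band, bone_band)[0, 1]
    return base_thr * (1.0 - max(c, 0.0))

# Hypothetical sub-band: speech present in both channels, noise only in air.
t = np.linspace(0.0, 1.0, 256)
speech = np.sin(2 * np.pi * 8 * t)
air = speech + 0.3 * np.random.default_rng(1).normal(size=t.size)
bone = speech
thr = band_threshold(air, bone, base_thr=0.5)
denoised = soft_threshold(air, thr)
```

Because the bone channel is largely immune to acoustic babble, high air/bone correlation flags a speech-dominated band and the threshold collapses toward zero, preserving the speech coefficients.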
Incoherent Diffractive Imaging via Intensity Correlations of Hard X Rays
NASA Astrophysics Data System (ADS)
Classen, Anton; Ayyer, Kartik; Chapman, Henry N.; Röhlsberger, Ralf; von Zanthier, Joachim
2017-08-01
Established x-ray diffraction methods allow for high-resolution structure determination of crystals, crystallized protein structures, or even single molecules. While these techniques rely on coherent scattering, incoherent processes like fluorescence emission—often the predominant scattering mechanism—are generally considered detrimental for imaging applications. Here, we show that intensity correlations of incoherently scattered x-ray radiation can be used to image the full 3D arrangement of the scattering atoms with significantly higher resolution compared to conventional coherent diffraction imaging and crystallography, including additional three-dimensional information in Fourier space for a single sample orientation. We present a number of properties of incoherent diffractive imaging that are conceptually superior to those of coherent methods.
Characterization of Organic and Conventional Coffee Using Neutron Activation Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. A. De Nadai Fernandes; P. Bode; F. S. Tagliaferro
2000-11-12
Countries importing organic coffee are facing the difficulty of assessing the quality of the product to distinguish original organic coffee from other coffees, thereby eliminating possible fraud. Many analytical methods are matrix sensitive and require matrix-matching reference materials for validation, which are currently nonexistent. This work aims to establish the trace element characterization of organic and conventional Brazilian coffees and to establish correlations with the related soil and the type of fertilizer and agrochemicals applied. It was observed that the variability in element concentrations between the various types of coffee is not so large, which emphasizes the need for analytical methods of high accuracy, reproducibility, and a well-known uncertainty. Moreover, the analyses indicate that sometimes the coffee packages may contain some soil remnants.
Goodwin, Cody R; Sherrod, Stacy D; Marasco, Christina C; Bachmann, Brian O; Schramm-Sapyta, Nicole; Wikswo, John P; McLean, John A
2014-07-01
A metabolic system is composed of inherently interconnected metabolic precursors, intermediates, and products. The analysis of untargeted metabolomics data has conventionally been performed through comparative statistics or multivariate statistical approaches; however, each falls short in representing the related nature of metabolic perturbations. Herein, we describe a complementary method for the analysis of large metabolite inventories using a data-driven approach based upon a self-organizing map algorithm. This workflow allows for the unsupervised clustering, and subsequent prioritization, of correlated features through Gestalt comparisons of metabolic heat maps. We describe this methodology in detail, including a comparison to conventional metabolomics approaches, and demonstrate its application to the analysis of the metabolic repercussions of prolonged cocaine exposure in rat sera profiles.
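A minimal self-organizing map in the spirit of the described workflow can be written directly in NumPy. The feature matrix below is invented (real metabolite inventories would have thousands of features), and the grid size, learning schedule, and neighborhood kernel are generic choices, not the authors' settings:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: returns the trained codebook (a grid of
    prototype vectors) onto which correlated features cluster together."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)             # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood kernel
            nodes += lr * nbh[:, None] * (x - nodes)
    return nodes

# Hypothetical feature matrix: rows are metabolite features (abundance across
# 6 conditions), drawn from two correlated groups plus measurement noise.
rng = np.random.default_rng(2)
profile_a = rng.normal(size=6)
profile_b = rng.normal(size=6)
data = np.vstack([profile_a + 0.1 * rng.normal(size=(10, 6)),
                  profile_b + 0.1 * rng.normal(size=(10, 6))])
codebook = train_som(data)
```

After training, features sharing a perturbation profile map to the same or adjacent grid nodes, which is what makes the heat-map Gestalt comparison possible.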
Relationship among self-esteem, psychological reactance, and other personality variables.
Joubert, C E
1990-06-01
69 women and 42 men responded to the Coopersmith Self-esteem Inventory, the Revised UCLA Loneliness Scale, the Hong Psychological Reactance Scale, and the Famous Sayings test. Also, subjects rated their happiness using a Likert scale. Men scored significantly higher than did women on the UCLA Loneliness, Hostility, and Psychological Reactance measures, and lower on the Conventional Mores and Social Acquiescence measures. Loneliness scores positively correlated with Psychological Reactance scores and negatively with Self-esteem and Conventional Mores scores and with happiness self-ratings for both sexes. Men who scored higher on Psychological Reactance tended to score lower on Conventional Mores. Happiness ratings correlated negatively with Psychological Reactance for all subjects and positively with Conventional Mores for men subjects only. Women's self-esteem scores correlated positively with self-ratings of happiness and negatively with Psychological Reactance, Hostility, and Fear of Failure. Finally, women's Psychological Reactance scores correlated positively with those on Bass's Fear of Failure scale.
Correlative Super-Resolution Microscopy: New Dimensions and New Opportunities.
Hauser, Meghan; Wojcik, Michal; Kim, Doory; Mahmoudi, Morteza; Li, Wan; Xu, Ke
2017-06-14
Correlative microscopy, the integration of two or more microscopy techniques performed on the same sample, produces results that emphasize the strengths of each technique while offsetting their individual weaknesses. Light microscopy has historically been a central method in correlative microscopy due to its widespread availability, compatibility with hydrated and live biological samples, and excellent molecular specificity through fluorescence labeling. However, conventional light microscopy can only achieve a resolution of ∼300 nm, undercutting its advantages in correlations with higher-resolution methods. The rise of super-resolution microscopy (SRM) over the past decade has drastically improved the resolution of light microscopy to ∼10 nm, thus creating exciting new opportunities and challenges for correlative microscopy. Here we review how these challenges are addressed to effectively correlate SRM with other microscopy techniques, including light microscopy, electron microscopy, cryomicroscopy, atomic force microscopy, and various forms of spectroscopy. Though we emphasize biological studies, we also discuss the application of correlative SRM to materials characterization and single-molecule reactions. Finally, we point out current limitations and discuss possible future improvements and advances. We thus demonstrate how a correlative approach adds new dimensions of information and provides new opportunities in the fast-growing field of SRM.
Ramadan, Wijdan H; Khreis, Noura A; Kabbara, Wissam K
2015-01-01
Background The aim of the study was to evaluate the simplicity, safety, patients’ preference, and convenience of insulin administration using the pen device versus the conventional vial/syringe in patients with diabetes. Methods This observational study was conducted in multiple community pharmacies in Lebanon. The investigators interviewed patients with diabetes using an insulin pen or conventional vial/syringe. A total of 74 questionnaires were filled over a period of 6 months. Answers were entered into the Statistical Package for Social Sciences (SPSS) software and an Excel spreadsheet. The t-test, logistic regression analysis, and correlation analysis were used to analyze the results. Results A higher percentage of patients from the insulin pen users group (95.2%) found the method easy to use, compared to only 46.7% of the conventional vial/syringe users group (P 0.001, relative risk [RR]: 2.041, 95% confidence interval [CI]: 1.178–3.535). Moreover, 61.9% of pen users and 26.7% of conventional users could read the scale easily (P 0.037, RR 2.321, 95% CI: 0.940–5.731), while 85.7% of pen users found it more convenient to have shifted to the pen, and 86.7% of the conventional users would want to shift to the pen if it had the same cost. Pain perception was statistically different between the groups: a much higher percentage (76.2%) of pen users reported no pain during injection, compared to only 26.7% of conventional users (P 0.003, RR 2.857, 95% CI: 1.194–6.838). Conclusion The insulin pen was significantly easier to use and less painful than the conventional vial/syringe. Proper education on the methods of administration/storage and disposal of needles/syringes is needed in both groups. PMID:25848231
Directly imaging steeply-dipping fault zones in geothermal fields with multicomponent seismic data
Chen, Ting; Huang, Lianjie
2015-07-30
For characterizing geothermal systems, it is important to have clear images of steeply-dipping fault zones because they may confine the boundaries of geothermal reservoirs and influence hydrothermal flow. Elastic reverse-time migration (ERTM) is the most promising tool for subsurface imaging with multicomponent seismic data. However, conventional ERTM usually generates significant artifacts caused by the cross correlation of undesired wavefields and the polarity reversal of shear waves. In addition, it is difficult for conventional ERTM to directly image steeply-dipping fault zones. We develop a new ERTM imaging method in this paper to reduce these artifacts and directly image steeply-dipping fault zones. In our new ERTM method, forward-propagated source wavefields and backward-propagated receiver wavefields are decomposed into compressional (P) and shear (S) components. Furthermore, each component of these wavefields is separated into left- and right-going, or downgoing and upgoing waves. The cross correlation imaging condition is applied to the separated wavefields along opposite propagation directions. For converted waves (P-to-S or S-to-P), the polarity correction is applied to the separated wavefields based on the analysis of Poynting vectors. Numerical imaging examples of synthetic seismic data demonstrate that our new ERTM method produces high-resolution images of steeply-dipping fault zones.
Blazer, Vicki S.; Macleod, Alexander H; Matsche, Mark A; Yonkos, Lance T
2017-01-01
Intersex in wild fish populations has received considerable attention in the scientific literature and public media. Conventional detection of testicular oocytes (TO), the presence of immature oocytes within the testis of male fish, employs transverse sectioning of excised testis and is lethal. The present study used a non-lethal laparoscopic technique to collect biopsies of testis from black bass, entering the body cavity via the genital pore. Detection of TO was compared between biopsy and conventional methods using 79 smallmouth bass (SMB) Micropterus dolomieu from 8 sites and 68 largemouth bass (LMB) M. salmoides from 4 sites. Both methods performed similarly at sites where TO severity was moderate or high (6 of 8 SMB sites), while transverse sectioning resulted in superior TO detection at sites where severity was low (2 of 8 SMB sites and all 4 LMB sites). In SMB, TO prevalence by the transverse and biopsy methods was strongly correlated across sites (r2 = 0.81), and severity reported by enumeration of TO was moderately correlated across sites (r2 = 0.59). Survival of a subset of LMB (n = 20) to 28 days after laparoscopic surgery was 90%. This research indicates that laparoscopy may be useful for monitoring the prevalence and severity of TO in Micropterus species, particularly when lethal sampling is precluded.
Bourantas, Christos V; Papafaklis, Michail I; Athanasiou, Lambros; Kalatzis, Fanis G; Naka, Katerina K; Siogkas, Panagiotis K; Takahashi, Saeko; Saito, Shigeru; Fotiadis, Dimitrios I; Feldman, Charles L; Stone, Peter H; Michalis, Lampros K
2013-09-01
To develop and validate a new methodology that allows accurate 3-dimensional (3-D) coronary artery reconstruction using standard, simple angiographic and intravascular ultrasound (IVUS) data acquired during routine catheterisation enabling reliable assessment of the endothelial shear stress (ESS) distribution. Twenty-two patients (22 arteries: 7 LAD; 7 LCx; 8 RCA) who underwent angiography and IVUS examination were included. The acquired data were used for 3-D reconstruction using a conventional method and a new methodology that utilised the luminal 3-D centreline to place the detected IVUS borders and anatomical landmarks to estimate their orientation. The local ESS distribution was assessed by computational fluid dynamics. In corresponding consecutive 3 mm segments, lumen, plaque and ESS measurements in the 3-D models derived by the centreline approach were highly correlated to those derived from the conventional method (r > 0.98 for all). The centreline methodology had a 99.5% diagnostic accuracy for identifying segments exposed to low ESS and provided similar estimations to the conventional method for the association between the change in plaque burden and ESS (centreline method: slope = -1.65%/Pa, p = 0.078; conventional method: slope = -1.64%/Pa, p = 0.084; p = 0.69 for the difference between the two methodologies). The centreline methodology provides geometrically correct models and permits reliable ESS computation. The ability to utilise data acquired during routine coronary angiography and IVUS examination will facilitate clinical investigation of the role of local ESS patterns in the natural history of coronary atherosclerosis.
Dicker, Frank; Schnittger, Susanne; Haferlach, Torsten; Kern, Wolfgang; Schoch, Claudia
2006-11-01
Compared with fluorescence in situ hybridization (FISH), conventional metaphase cytogenetics play only a minor prognostic role in chronic lymphocytic leukemia (CLL) so far, due to technical problems resulting from limited proliferation of CLL cells in vitro. Here, we present a simple method for in vitro stimulation of CLL cells that overcomes this limitation. In our unselected patient population, 125 of 132 cases could be successfully stimulated for metaphase generation by culture with the immunostimulatory CpG-oligonucleotide DSP30 plus interleukin 2. Of 125 cases, 101 showed chromosomal aberrations. The aberration rate is comparable to the rate detected by parallel interphase FISH. In 47 patients, conventional cytogenetics detected additional aberrations not detected by FISH analysis. A complex aberrant karyotype, defined as one having at least 3 aberrations, was detected in 30 of 125 patients, compared with only one such case as defined by FISH. Conventional cytogenetics frequently detected balanced and unbalanced translocations. A significant correlation of the poor-prognosis unmutated IgV(H) status with unbalanced translocations and of the likewise poor-prognosis CD38 expression to balanced translocations and complex aberrant karyotype was found. We demonstrate that FISH analysis underestimates the complexity of chromosomal aberrations in CLL. Therefore, conventional cytogenetics may define subgroups of patients with high risk of progression.
de Almeida, Marcio Rodrigues; Futagami, Cristina; Conti, Ana Cláudia de Castro Ferreira; Oltramari-Navarro, Paula Vanessa Pedron; Navarro, Ricardo de Lima
2015-01-01
OBJECTIVE: The aim of the present study was to compare dentoalveolar changes in mandibular arch, regarding transversal measures and buccal bone thickness, in patients undergoing the initial phase of orthodontic treatment with self-ligating or conventional bracket systems. METHODS: A sample of 25 patients requiring orthodontic treatment was assessed based on the bracket type. Group 1 comprised 13 patients bonded with 0.022-in self-ligating brackets (SLB). Group 2 included 12 patients bonded with 0.022-in conventional brackets (CLB). Cone-beam computed tomography (CBCT) scans and a 3D program (Dolphin) assessed changes in transversal width of buccal bone (TWBB) and buccal bone thickness (BBT) before (T1) and 7 months after treatment onset (T2). Measurements on dental casts were performed using a digital caliper. Differences between and within groups were analyzed by Student's t-test; Pearson correlation coefficient was also calculated. RESULTS: Significant mandibular expansion was observed for both groups; however, no significant differences were found between groups. There was significant decrease in mandibular buccal bone thickness and transversal width of buccal bone in both groups. There was no significant correlation between buccal bone thickness and dental arch expansion. CONCLUSIONS: There were no significant differences between self-ligating brackets and conventional brackets systems regarding mandibular arch expansion and changes in buccal bone thickness or transversal width of buccal bone. PMID:26154456
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and various matrix/vector transforms are then used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to that of low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
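The HOSVD used above as a sparsifying transform can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reconstruction pipeline: the tensor size is arbitrary and the factor matrices are kept full (untruncated), so the decomposition is exact.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    # Factor matrices: left singular vectors of each mode-n unfolding
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    # Core tensor: project T onto the factor bases (S = T x_0 U0^T x_1 U1^T ...)
    S = T
    for n, Un in enumerate(U):
        S = np.moveaxis(np.tensordot(Un.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, U

# A small random 3D "dynamic image" tensor (x, y, time)
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
S, U = hosvd(T)

# Reconstruct from the full core and factors; with untruncated orthogonal
# factors the reconstruction is exact
R = S
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
print(np.allclose(R, T))  # → True
```

In a CS setting one would keep only the largest core coefficients, exploiting the spatio-temporal correlations jointly rather than mode by mode.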
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. In order to overcome the restriction of fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes capture settings into account. The method computes the colorimetric values of the measured image in five main steps, including conversion of RGB values to their equivalents under the training settings, via factors derived from an imaging-system model, so as to bridge different settings; and scaling factors applied in the preparation steps for the transformation mapping, to avoid errors caused by the nonlinearity of polynomial mapping across different ranges of illumination levels. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color difference formula, is below 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy across capture settings matches the level achieved by the conventional method for a fixed lighting condition.
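The polynomial-mapping stage common to such characterizations can be illustrated with a least-squares fit from camera RGB to XYZ. This is not the paper's five-step procedure: the second-order feature set and the synthetic linear device model are assumptions made for the sketch.

```python
import numpy as np

def poly_features(rgb):
    # Second-order polynomial expansion of linear RGB (illustrative choice)
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

def fit_characterization(rgb_train, xyz_train):
    # Least-squares fit of the mapping M: features(RGB) -> XYZ
    A = poly_features(rgb_train)
    M, *_ = np.linalg.lstsq(A, xyz_train, rcond=None)
    return M

def predict_xyz(rgb, M):
    return poly_features(rgb) @ M

# Synthetic training data from a made-up linear device matrix
rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, (200, 3))
device = np.array([[0.41, 0.21, 0.02], [0.36, 0.72, 0.12], [0.18, 0.07, 0.95]])
xyz = rgb @ device
M = fit_characterization(rgb, xyz)
err = np.abs(predict_xyz(rgb, M) - xyz).max()
print(err < 1e-6)  # → True (the linear model lies in the polynomial span)
```

A real characterization would additionally rescale RGB by exposure factors before applying the mapping, which is the role the abstract assigns to the capture-setting conversion step.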
NASA Astrophysics Data System (ADS)
Federico, Alejandro; Kaufmann, Guillermo H.
2004-08-01
We evaluate the application of the Wigner-Ville distribution (WVD) to measure phase gradient maps in digital speckle pattern interferometry (DSPI), when the generated correlation fringes present phase discontinuities. The performance of the WVD method is evaluated using computer-simulated fringes. The influence of the filtering process to smooth DSPI fringes and additional drawbacks that emerge when this method is applied are discussed. A comparison with the conventional method based on the continuous wavelet transform in the stationary phase approximation is also presented.
Polechoński, Jacek; Mynarski, Władysław; Nawrocka, Agnieszka
2015-11-01
[Purpose] The objective of this study was to evaluate the usefulness of pedometry and accelerometry in the measurement of the energy expenditures in Nordic walking and conventional walking as diagnostic parameters. [Subjects and Methods] The study included 20 female students (age, 24 ± 2.3 years). The study used three types of measuring devices, namely a heart rate monitor (Polar S610i), a Caltrac accelerometer, and a pedometer (Yamax SW-800). The walking pace at the level of 110 steps/min was determined by using a metronome. [Results] The students who walked with poles covered a distance of 1,000 m at a speed 36.3 sec faster and with 65.5 fewer steps than in conventional walking. Correlation analysis revealed a moderate interrelationship between the results obtained with a pedometer and those obtained with an accelerometer during Nordic walking (r = 0.55) and a high correlation during conventional walking (r = 0.85). [Conclusion] A pedometer and Caltrac accelerometer should not be used as alternative measurement instruments in the comparison of energy expenditure in Nordic walking.
Relationship between Testicular Volume and Conventional or Nonconventional Sperm Parameters
Condorelli, Rosita; Calogero, Aldo E.; La Vignera, Sandro
2013-01-01
Background. Reduced testicular volume (TV) (<12 cm3) is associated with lower testicular function. Several studies have explored the conventional sperm parameters (concentration, motility, and morphology) and the endocrine function (gonadotropin and testosterone serum concentrations) in patients with reduced TV. No other parameters have been examined. Aim. This study aims at evaluating some biofunctional sperm parameters by flow cytometry in the semen of men with reduced TV compared with that of subjects with normal TV. Methods. 78 patients without primary scrotal disease underwent ultrasound evaluation of the testis. They were divided into two groups according to testicular volume: Group A, including 40 patients with normal testicular volume (TV > 15 cm3), and Group B, including 38 patients with reduced testicular volume (TV ≤ 12 cm3). All patients underwent evaluation of serum hormone concentrations and of conventional and biofunctional (flow cytometry) sperm parameters. Results. With regard to biofunctional sperm parameters, all values (mitochondrial membrane potential, phosphatidylserine externalization, chromatin compactness, and DNA fragmentation) were strongly negatively correlated with testicular volume (P < 0.0001). Conclusions. This study shows, for the first time in the literature, that biofunctional sperm parameters worsen, with a nearly linear correlation, as testicular volume decreases. PMID:24089610
Dietary assessment and self-monitoring with nutrition applications for mobile devices.
Lieffers, Jessica R L; Hanning, Rhona M
2012-01-01
Nutrition applications for mobile devices (e.g., personal digital assistants, smartphones) are becoming increasingly accessible and can assist with the difficult task of intake recording for dietary assessment and self-monitoring. This review is a compilation and discussion of research on this tool for dietary intake documentation in healthy populations and those trying to lose weight. The purpose is to compare this tool with conventional methods (e.g., 24-hour recall interviews, paper-based food records). Research databases were searched from January 2000 to April 2011, with the following criteria: healthy or weight loss populations, use of a mobile device nutrition application, and inclusion of at least one of three measures, which were the ability to capture dietary intake in comparison with conventional methods, dietary self-monitoring adherence, and changes in anthropometrics and/or dietary intake. Eighteen studies are discussed. Two application categories were identified: those with which users select food and portion size from databases and those with which users photograph their food. Overall, positive feedback was reported with applications. Both application types had moderate to good correlations for assessing energy and nutrient intakes in comparison with conventional methods. For self-monitoring, applications versus conventional techniques (often paper records) frequently resulted in better self-monitoring adherence, and changes in dietary intake and/or anthropometrics. Nutrition applications for mobile devices have an exciting potential for use in dietetic practice.
Li, Yuelin; Root, James C; Atkinson, Thomas M; Ahles, Tim A
2016-06-01
Patient-reported cognition generally exhibits poor concordance with objectively assessed cognitive performance. In this article, we introduce latent regression Rasch modeling and provide a step-by-step tutorial for applying Rasch methods as an alternative to traditional correlation, to better clarify the relationship between self-report and objective cognitive performance. An example analysis using these methods is also included. An introduction to latent regression Rasch modeling is provided, together with a tutorial on implementing it using the JAGS programming language for obtaining the Bayesian posterior parameter estimates. In an example analysis, data from a longitudinal neurocognitive outcomes study of 132 breast cancer patients and 45 non-cancer matched controls, which included self-report and objective performance measures pre- and post-treatment, were analyzed using both conventional and latent regression Rasch model approaches. Consistent with previous research, conventional correlations between neurocognitive decline and self-reported problems were generally near zero. In contrast, application of latent regression Rasch modeling found statistically reliable associations of objective attention and processing speed measures with self-reported Attention and Memory scores. Latent regression Rasch modeling, together with correlation of specific self-reported cognitive domains with neurocognitive measures, helps to clarify the relationship of self-report with objective performance. While the majority of patients attribute their cognitive difficulties to memory decline, the Rasch modeling suggests the importance of processing speed and initial learning. To encourage the use of this method, a step-by-step guide and programming language for implementation are provided. Implications of this method in cognitive outcomes research are discussed. © The Author 2016. Published by Oxford University Press. All rights reserved.
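The Rasch model at the core of this approach relates a person's latent ability theta and an item's difficulty b through a logistic item response function, P(X = 1) = 1/(1 + exp(-(theta - b))). A minimal sketch follows; the abilities, difficulties, and responses are made-up toy values, and the paper's latent regression extension and JAGS estimation are not reproduced.

```python
import numpy as np

def rasch_prob(theta, b):
    # Rasch model: probability of a correct/endorsed response given
    # person ability theta and item difficulty b
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def log_likelihood(theta, b, x):
    # Log-likelihood of binary responses x (persons x items)
    p = rasch_prob(theta[:, None], b[None, :])
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

theta = np.array([-1.0, 0.0, 1.0])   # toy person abilities
b = np.array([-0.5, 0.5])            # toy item difficulties
x = np.array([[0, 0], [1, 0], [1, 1]])  # toy binary responses
print(rasch_prob(0.5, 0.5))  # → 0.5 (ability equals difficulty)
```

The latent regression variant additionally regresses theta on covariates (here, the objective neurocognitive measures), which is what links self-report scores to objective performance.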
Akay, Erdem; Yilmaz, Cagatay; Kocaman, Esat S; Turkmen, Halit S; Yildiz, Mehmet
2016-09-19
The significance of strain measurement is obvious for the analysis of Fiber-Reinforced Polymer (FRP) composites. Conventional strain measurement methods are sufficient for static testing in general. Nevertheless, if the requirements exceed the capabilities of these conventional methods, more sophisticated techniques are necessary to obtain strain data. Fiber Bragg Grating (FBG) sensors have many advantages for strain measurement over conventional ones. Thus, the present paper suggests a novel method for biaxial strain measurement using embedded FBG sensors during the fatigue testing of FRP composites. Poisson's ratio and its reduction were monitored for each cyclic loading by using embedded FBG sensors for a given specimen and correlated with the fatigue stages determined based on the variations of the applied fatigue loading and temperature due to the autogenous heating to predict an oncoming failure of the continuous fiber-reinforced epoxy matrix composite specimens under fatigue loading. The results show that FBG sensor technology has a remarkable potential for monitoring the evolution of Poisson's ratio on a cycle-by-cycle basis, which can reliably be used towards tracking the fatigue stages of composite for structural health monitoring purposes.
Li, Yang; Yang, Jianyi
2017-04-24
The prediction of protein-ligand binding affinity has recently been improved remarkably by machine-learning-based scoring functions. For example, using a set of simple descriptors representing atomic distance counts, the RF-Score improves the Pearson correlation coefficient to about 0.8 on the core set of the PDBbind 2007 database, significantly higher than the performance of any conventional scoring function on the same benchmark. A few studies have discussed the performance of machine-learning-based methods, but the reason for this improvement remains unclear. In this study, by systematically controlling the structural and sequence similarity between the training and test proteins of the PDBbind benchmark, we demonstrate that protein structural and sequence similarity has a significant impact on machine-learning-based methods. After removal of training proteins that are highly similar to the test proteins, as identified by structure alignment and sequence alignment, machine-learning-based methods trained on the new training sets no longer outperform the conventional scoring functions. In contrast, the performance of conventional functions such as X-Score is relatively stable no matter what training data are used to fit the weights of their energy terms.
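The "atomic distance count" descriptors referred to above tally protein-ligand element-pair contacts within a distance cutoff. A sketch under stated assumptions: the element set is reduced for brevity, the coordinates are a toy two-atom "complex," and the 12 Å cutoff is kept as a parameter rather than asserted as the only valid choice.

```python
import numpy as np
from itertools import product

ELEMENTS = ["C", "N", "O", "S"]  # reduced element set for illustration

def contact_counts(prot_xyz, prot_el, lig_xyz, lig_el, cutoff=12.0):
    # Pairwise distances between every protein atom and every ligand atom
    d = np.linalg.norm(prot_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1)
    feats = {}
    for ep, el in product(ELEMENTS, ELEMENTS):
        # Count contacts for this (protein element, ligand element) pair
        mask = np.outer(np.array(prot_el) == ep, np.array(lig_el) == el)
        feats[(ep, el)] = int((d[mask] < cutoff).sum())
    return feats

# Toy "complex": two protein atoms, two ligand atoms (coordinates in Å)
prot_xyz = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
prot_el = ["C", "N"]
lig_xyz = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
lig_el = ["O", "O"]
f = contact_counts(prot_xyz, prot_el, lig_xyz, lig_el)
print(f[("C", "O")], f[("N", "O")])  # → 2 0
```

A random forest regressor fit on such count vectors is the kind of model whose apparent advantage the study attributes partly to train/test protein similarity.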
Reliable enumeration of malaria parasites in thick blood films using digital image analysis.
Frean, John A
2009-09-23
Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve the performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, the particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image), signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements, namely a digital microscope camera, a personal computer and good-quality staining of slides, are reasonably easy to meet.
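The core of such particle counting is threshold-label-filter. A simplified sketch follows; the abstract's semi-automatic adaptation to parasite size and staining is not modeled, and the threshold and minimum-size values here are arbitrary.

```python
import numpy as np
from collections import deque

def count_particles(image, threshold, min_size=3):
    # Threshold the image, label 4-connected components by BFS,
    # and discard specks smaller than min_size (noise suppression)
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                size, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count

# Synthetic "thick film": two 2x2 blobs (parasites) and one 1-pixel speck
img = np.zeros((10, 10))
img[1:3, 1:3] = 1.0
img[6:8, 6:8] = 1.0
img[0, 9] = 1.0
print(count_particles(img, 0.5))  # → 2
```

Adapting `min_size` from the observed parasite size distribution, as the abstract describes, is what removes the need for per-image operator decisions.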
NASA Astrophysics Data System (ADS)
Yuan, Congcong; Jia, Xiaofeng; Liu, Shishuo; Zhang, Jie
2018-02-01
Accurate characterization of hydraulic fracturing zones is becoming increasingly important in production optimization, since hydraulic fracturing may significantly increase the porosity and permeability of the reservoir. Recently, the feasibility of the reverse time migration (RTM) method has been studied for application to imaging fractures during borehole microseismic monitoring. However, strong low-frequency migration noise, poorly illuminated areas, and low signal-to-noise ratio (SNR) data can degrade the imaging results. To improve the quality of the images, we propose a multi-cross-correlation staining algorithm to incorporate into microseismic reverse time migration for imaging fractures using scattered data. Under the modified RTM method, our results are revealed in two images: one is the improved RTM image using the multi-cross-correlation condition, and the other is an image of the target region using the generalized staining algorithm. The numerical examples show that, compared with conventional RTM, our method can significantly improve the spatial resolution of the images, especially for the image of the target region.
Kjaergaard, Thomas; Baudin, Pablo; Bykov, Dmytro; ...
2016-11-16
Here, we present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller–Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Motion-compensated speckle tracking via particle filtering
NASA Astrophysics Data System (ADS)
Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu
2015-07-01
Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle throughout a region of interest (ROI) superposed with global motion. The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
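The SAD criterion mentioned above amounts to exhaustive block matching: slide a reference block over a search window and keep the displacement with the smallest sum of absolute differences. A minimal NumPy sketch, with the block size and search range chosen arbitrarily:

```python
import numpy as np

def sad_match(ref, cur, top, left, bsize=8, search=4):
    # Exhaustive block matching: find the displacement (dy, dx) that
    # minimizes the sum of absolute differences within a ±search window
    block = ref[top:top + bsize, left:left + bsize]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > cur.shape[0] or x + bsize > cur.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(cur[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

# A speckle-like frame and a copy shifted down 2 pixels, right 1 pixel
rng = np.random.default_rng(2)
ref = rng.uniform(size=(32, 32))
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
print(sad_match(ref, cur, 10, 10))  # → (2, 1)
```

The nested traversal over the whole window is exactly the cost that the particle-filtering approach in the abstract seeks to avoid, by sampling candidate displacements instead of testing all of them.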
A noninvasive method of examination of the hemostasis system.
Kuznik, B I; Fine, I W; Kaminsky, A V
2011-09-01
We propose a noninvasive method for in vivo examination of the hemostasis system based on speckle pattern analysis of coherent light scattering from the skin. We compared the results of measuring basic blood coagulation parameters by conventional invasive and noninvasive methods. A strong correlation was found between the results of measurement of soluble fibrin monomer complexes, international normalized ratio (INR), prothrombin index, and protein C content. The noninvasive method of examination of the hemostatic system enables rough evaluation of the intensity of intravascular coagulation and correction of the dose of indirect anticoagulants to maintain desired values of INR or prothrombin index.
Acoustic emission strand burning technique for motor burning rate prediction
NASA Technical Reports Server (NTRS)
Christensen, W. N.
1978-01-01
An acoustic emission (AE) method is being used to measure the burning rate of solid propellant strands. This method has a precision of 0.5% and excellent burning rate correlation with both subscale and large rocket motors. The AE procedure burns the sample under water and measures the burning rate from the acoustic output. The acoustic signal provides a continuous readout during testing, which allows complete data analysis rather than the start-stop clockwires used by the conventional method. The AE method helps eliminate such problems as inhibiting the sample, pressure increase, and temperature rise during testing.
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
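The motivating observation above, that a correlation coefficient mixes covariance with the two variance terms, is easy to demonstrate numerically. In the sketch below the two synthetic "groups" are illustrative (not ADNI data): group B is group A with both regional signals scaled up threefold, so the covariance differs ninefold while the correlation is unchanged.

```python
import numpy as np

def covariance_and_correlation(x, y):
    # Decompose the correlation into its covariance and variance terms:
    # r = cov(x, y) / sqrt(var(x) * var(y))
    c = np.cov(x, y)
    cov = c[0, 1]
    r = cov / np.sqrt(c[0, 0] * c[1, 1])
    return cov, r

rng = np.random.default_rng(3)
n = 500
a1 = rng.standard_normal(n)
a2 = 0.6 * a1 + 0.8 * rng.standard_normal(n)
b1, b2 = 3 * a1, 3 * a2  # group B: same signals, 3x amplitude

cov_a, r_a = covariance_and_correlation(a1, a2)
cov_b, r_b = covariance_and_correlation(b1, b2)
print(np.isclose(r_a, r_b))  # → True (identical correlation)
print(cov_b / cov_a)         # covariance differs by the variance scaling (9x)
```

This is why a seed-based correlation analysis alone can miss group differences that a framework interrogating covariance and variance separately would detect.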
Incoherent coincidence imaging of space objects
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Gu, Guohua
2016-10-01
Incoherent Coincidence Imaging (ICI), which is based on the second- or higher-order correlation of a fluctuating light field, offers great potential compared with standard conventional imaging. However, the deployment of a reference arm limits its practical application to the detection of space objects. In this article, an optical aperture synthesis with electronically connected single-pixel photo-detectors is proposed to remove the reference arm. The correlation in our proposed method is the second-order correlation between the intensity fluctuations observed by any two detectors. With appropriate locations of the single-pixel detectors, this second-order correlation simplifies to the absolute-square Fourier transform of the source and the unknown object. We demonstrate image recovery with Gerchberg-Saxton-like algorithms and investigate the reconstruction quality of our approach. Numerical experiments show that both binary and gray-scale objects can be recovered. The proposed method provides an effective approach to promote the detection of space objects and perhaps even exo-planets.
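The second-order correlation between intensity fluctuations at two detectors is an ensemble average of the form G2 = <(I1 - <I1>)(I2 - <I2>)>. A toy sketch of that estimator, where the exponential speckle statistics and the noise level are assumptions for illustration:

```python
import numpy as np

def fluctuation_correlation(I1, I2):
    # Second-order correlation of intensity fluctuations between two
    # detectors: G2 = <(I1 - <I1>)(I2 - <I2>)>, averaged over realizations
    d1 = I1 - I1.mean()
    d2 = I2 - I2.mean()
    return (d1 * d2).mean()

rng = np.random.default_rng(4)
n = 100_000
s = rng.exponential(size=n)           # shared speckle intensity fluctuation
I1 = s + 0.1 * rng.standard_normal(n)  # detector 1: field plus sensor noise
I2 = s + 0.1 * rng.standard_normal(n)  # detector 2: same field, independent noise
g = fluctuation_correlation(I1, I2)
print(g)  # close to the speckle variance (about 1); noise terms average out
```

Because independent detector noise cancels in the cross term, the estimate recovers only the shared fluctuation of the field, which is what carries the Fourier-domain object information in the proposed scheme.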
Deng, Jia; Staufenbiel, Sven; Hao, Shilei; Wang, Bochu; Dashevskiy, Andriy; Bodmeier, Roland
2017-06-10
The purpose of this study was to discriminate the release behavior from three differently formulated racecadotril (BCS II) granules and to establish an in vitro-in vivo correlation. Three granule formulations of the lipophilic drug were prepared with equivalent composition but with different manufacturing processes (dry granulation, wet granulation with or without binder). In vitro release of the three granules was investigated using a biphasic dissolution system (phosphate buffer pH 6.8 and octanol) and compared to the conventional single-phase USP II dissolution test performed under sink and non-sink conditions. In vivo studies with each granule formulation were performed in rats. Interestingly, the granule formulations exhibited pronouncedly different behavior in the different dissolution systems depending on different wetting and dissolution conditions. Single-phase USP II dissolution tests lacked discrimination. In contrast, remarkable discrimination between the granule formulations was observed in the octanol phase of the biphasic dissolution system, with a rank order of release from granules prepared by wet granulation with binder > wet granulation without binder > dry granulation. This release order correlated well with the wettability of these granules. An excellent correlation was also established between in vitro release in the octanol phase of the biphasic test and in vivo data (R2 = 0.999). Compared to conventional dissolution methods, the biphasic method provides great potential to discriminate between only minor formulation and process changes within the same dosage form for poorly soluble drugs. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Horikoshi, Ryo; Kobayashi, Yoji; Kageyama, Hiroshi
2013-01-01
Catalysis with transition-metal complexes is a part of the inorganic chemistry curriculum and a challenging topic for upper-level undergraduate and graduate students. A hands-on teaching aid has been developed for use during conventional lectures to help students understand these catalytic reactions. A unique method of illustrating the…
Shamata, Awatif; Thompson, Tim
2018-05-10
Non-contact three-dimensional (3D) surface scanning has been applied in forensic medicine and has been shown to mitigate shortcomings of traditional documentation methods. The aim of this paper is to assess the efficiency of structured light 3D surface scanning in recording traumatic injuries of live cases in clinical forensic medicine. The work was conducted at the Medico-Legal Centre in Benghazi, Libya. A structured light 3D surface scanner and an ordinary digital camera with a close-up lens were used to record the injuries, producing 3D and two-dimensional (2D) documents of the same traumas. Two different types of comparison were performed. Firstly, the 3D wound documents were compared to the 2D documents based on subjective visual assessment. Additionally, 3D wound measurements were compared to conventional measurements to determine whether there was a statistically significant difference between them; for this, the Friedman test was used. The study established that the 3D wound documents had extra features over the 2D documents. Moreover, the 3D scanning method was able to overcome the main deficiencies of digital photography. No statistically significant difference was found between the 3D and conventional wound measurements. Spearman's correlation established a strong, positive correlation between the 3D and conventional measurement methods. Although the 3D surface scanning of the injuries of the live subjects faced some difficulties, the 3D results were appreciated and the validity of 3D measurements based on structured light 3D scanning was established. Further work will be carried out in forensic pathology to scan open injuries with depth information. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
Azzam, Rimon Sobhi; Sallum, Rubens A A; Brandão, Jeovana Ferreira; Navarro-Rodriguez, Tomás; Nasi, Ary
2012-01-01
Esophageal pH monitoring is considered the gold standard for the diagnosis of gastroesophageal acid reflux. However, this method is quite troublesome and considerably limits the patient's routine activities. Wireless pH monitoring was developed to avoid these restrictions. The aim was to compare the first 24 hours of conventional and wireless pH monitoring, positioned 3 cm above the lower esophageal sphincter, in relation to: the occurrence of relevant technical failures, the ability to detect reflux, and the ability to correlate clinical symptoms with reflux. Twenty-five patients referred for esophageal pH monitoring with typical symptoms of gastroesophageal reflux disease were studied prospectively; they underwent clinical interview, endoscopy and esophageal manometry, and were submitted, with a simultaneous initial period, to 24-hour catheter pH monitoring and 48-hour wireless pH monitoring. Early capsule detachment occurred in one (4%) case, and there were no technical failures with the catheter pH monitoring (P = 0.463). Percentages of reflux time (total, upright and supine) were higher with the wireless pH monitoring (P < 0.05). Pathological gastroesophageal reflux occurred in 16 (64%) patients submitted to the catheter and in 19 (76%) submitted to the capsule (P = 0.355). The symptom index was positive in 12 (48%) patients with catheter pH monitoring and in 13 (52%) with wireless pH monitoring (P = 0.777). In conclusion: 1) no significant differences were found between the two methods of pH monitoring (capsule vs catheter) with regard to relevant technical failures; 2) wireless pH monitoring detected higher percentages of reflux time than conventional pH-metry; 3) the two methods were comparable in diagnosing pathological gastroesophageal reflux and in correlating clinical symptoms with gastroesophageal reflux.
Tozer, Daniel J; Schmierer, Klaus; Chard, Declan T; Anderson, Valerie M; Altmann, Daniel R; Miller, David H; Wheeler-Kingshott, Claudia AM
2013-01-01
Background: There are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans. Objective: To determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images. Methods: We obtained conventional PDw and T2w images from 10 patients with relapsing–remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML. Results: Our study’s ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively. Conclusion: ADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished. PMID:23037551
Robust X-ray angular correlations for the study of meso-structures
Lhermitte, Julien R.; Tian, Cheng; Stein, Aaron; ...
2017-05-08
As self-assembling nanomaterials become more sophisticated, it is becoming increasingly important to measure the structural order of finite-sized assemblies of nano-objects. These mesoscale clusters represent an acute challenge to conventional structural probes, owing to the range of implicated size scales (10 nm to several micrometres), the weak scattering signal and the dynamic nature of meso-clusters in native solution environments. The high X-ray flux and coherence of modern synchrotrons present an opportunity to extract structural information from these challenging systems, but conventional ensemble X-ray scattering averages out crucial information about local particle configurations. Conversely, a single meso-cluster scatters too weakly to recover the full diffraction pattern. Using X-ray angular cross-correlation analysis, it is possible to combine multiple noisy measurements to obtain robust structural information. This paper explores the key theoretical limits and experimental challenges that constrain the application of these methods to probing structural order in real nanomaterials. A metric is presented to quantify the signal-to-noise ratio of angular correlations, and it is used to identify several experimental artifacts that arise. In particular, it is found that background scattering, data masking and inter-cluster interference profoundly affect the quality of correlation analyses. A robust workflow is demonstrated for mitigating these effects and extracting reliable angular correlations from realistic experimental data.
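For intuition, the core quantity in angular cross-correlation analysis is the angular autocorrelation of the scattered intensity around a ring of constant q. A minimal sketch (illustrative only; the function name and the synthetic 4-fold-symmetric ring are assumptions, not taken from the paper):

```python
import numpy as np

def angular_autocorrelation(intensity):
    """Angular autocorrelation C(delta) of intensity samples I(phi) on a
    single q-ring, computed via the Wiener-Khinchin theorem (FFT-based)."""
    i = np.asarray(intensity, dtype=float)
    i = i - i.mean()                       # drop the isotropic (mean) term
    f = np.fft.fft(i)
    return np.fft.ifft(f * np.conj(f)).real / len(i)

# A ring with 4-fold symmetry correlates with itself again at delta = 90 deg.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ring = 1.0 + 0.5 * np.cos(4.0 * phi)
corr = angular_autocorrelation(ring)
```

In practice one averages such per-ring correlations over many noisy snapshots; the paper's signal-to-noise metric quantifies when that average becomes reliable.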
Comparison of Amplitude-Integrated EEG and Conventional EEG in a Cohort of Premature Infants.
Meledin, Irina; Abu Tailakh, Muhammad; Gilat, Shlomo; Yogev, Hagai; Golan, Agneta; Novack, Victor; Shany, Eilon
2017-03-01
To compare amplitude-integrated EEG (aEEG) and conventional EEG (EEG) activity in premature neonates. Biweekly aEEG and EEG were simultaneously recorded in a cohort of infants born less than 34 weeks gestation. aEEG recordings were visually assessed for lower and upper border amplitude and bandwidth. EEG recordings were compressed for visual evaluation of continuity and assessed using signal processing software for interburst intervals (IBI) and frequency amplitudes. Ten-minute segments of aEEG and EEG indices were compared using regression analysis. A total of 189 recordings from 67 infants were made, from which 1697 aEEG/EEG pairs of 10-minute segments were assessed. Good concordance was found for visual assessment of continuity between the 2 methods. EEG IBI and alpha and theta frequency amplitudes were negatively correlated with the aEEG lower border, while conceptional age (CA) was positively correlated with the aEEG lower border (P < .001). IBI and all frequency amplitudes were positively correlated with the upper aEEG border (P ≤ .001). CA was negatively correlated with the aEEG span, while IBI and alpha, beta, and theta frequency amplitudes were positively correlated with the aEEG span. Important information is retained and integrated in the transformation of premature neonatal EEG to aEEG. aEEG recordings in high-risk premature neonates reliably reflect EEG background information related to continuity and amplitude.
Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2013-01-01
Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other configurations without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods. PMID:23750314
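The WZ-SI correlation noise is commonly modelled as a zero-mean Laplacian over the frame residual. A minimal pre-estimation sketch (illustrative only; this is the simple offline baseline, not the paper's EP-based on-the-fly method, and the function name and synthetic frames are assumptions):

```python
import numpy as np

def laplacian_alpha(wz_frame, si_frame):
    """Pre-estimate the Laplacian scale alpha of the WZ-SI residual.
    For a zero-mean Laplacian with pdf (alpha/2)exp(-alpha|x|), the
    variance is 2/alpha^2, so alpha = sqrt(2 / var(residual))."""
    residual = np.asarray(wz_frame, float) - np.asarray(si_frame, float)
    return np.sqrt(2.0 / residual.var())

# Synthetic frames: SI plus Laplacian noise with scale b = 0.5 (alpha = 2).
rng = np.random.default_rng(0)
si = rng.integers(0, 256, size=(64, 64)).astype(float)
wz = si + rng.laplace(scale=0.5, size=si.shape)
alpha = laplacian_alpha(wz, si)
```

OTF methods refine such an estimate during decoding instead of fixing it in advance, which is where the complexity/accuracy tradeoff discussed above arises.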
Wavelet-space Correlation Imaging for High-speed MRI without Motion Monitoring or Data Segmentation
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2014-01-01
Purpose This study aims to 1) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and 2) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Methods Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called “wavelet-space correlation imaging”, is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Results Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Conclusion Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. PMID:25470230
Jakeman, K J; Bird, C R; Thorpe, R; Smith, H; Sweet, C
1991-03-01
Fever in influenza results from the release of endogenous pyrogen (EP) following virus-phagocyte interaction, and its level correlates with the differing virulence of virus strains. However, the different levels of fever produced in ferrets by intracardial inoculation of EP, obtained from the interaction of different virus strains with ferret or human phagocytes, did not correlate with the levels of interleukin 1 (IL-1), IL-6 or tumour necrosis factor in the same samples as assayed by conventional in vitro methods. Hence, the EP produced in influenza appears to be distinct from these cytokines.
Mapping brain activity in gradient-echo functional MRI using principal component analysis
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Singh, Manbir; Don, Manuel
1997-05-01
The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus-correlated changes in the presence of other interfering signals.
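A minimal sketch of the PCA step on a (time × voxel) data matrix, showing how the leading component can recover a stimulus time course without any knowledge of the paradigm (function names and the synthetic data are assumptions, not the authors' implementation):

```python
import numpy as np

def pca_components(data, n=1):
    """PCA of a (time x voxel) matrix via SVD: returns the leading
    temporal components (time courses) and the matching spatial maps."""
    x = data - data.mean(axis=0)          # remove each voxel's mean
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u[:, :n] * s[:n], vt[:n]

# Synthetic on/off block paradigm driving a small patch of "active" voxels.
rng = np.random.default_rng(1)
paradigm = np.tile(np.r_[np.zeros(10), np.ones(10)], 4)   # 80 time points
spatial = np.zeros(500); spatial[200:220] = 1.0           # active voxels
data = np.outer(paradigm, spatial) + 0.1 * rng.standard_normal((80, 500))
time_course, maps = pca_components(data)
```

The sign of a principal component is arbitrary, so any comparison with a reference time course should use the magnitude of the correlation.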
NASA Technical Reports Server (NTRS)
Evans, G. F.; Haller, R. G.; Wyrick, P. S.; Parkey, R. W.; Fleckenstein, J. L.; Blomqvist, C. G. (Principal Investigator)
1998-01-01
PURPOSE: To assess correlations between muscle edema on magnetic resonance (MR) images and clinical indexes of muscle injury in delayed-onset muscle soreness (DOMS) produced by submaximal exercise protocols. MATERIALS AND METHODS: Sixteen subjects performed 36 elbow flexions ("biceps curls") at one of two submaximal workloads that emphasized eccentric contractions. Changes in MR imaging findings, plasma levels of creatine kinase, and pain scores were correlated. RESULTS: Both exercise protocols produced DOMS in all subjects. The best correlation was between change in creatine kinase level and volume of muscle edema on MR images, regardless of the workload. Correlations tended to be better with the easier exercise protocol. CONCLUSION: Whereas many previous studies of DOMS focused on intense exercise protocols to ensure positive results, the present investigation showed that submaximal workloads are adequate to produce DOMS and that correlations between conventionally measured indexes of injury may be enhanced at lighter exercise intensities.
Nonlinear dynamic analysis of voices before and after surgical excision of vocal polyps
NASA Astrophysics Data System (ADS)
Zhang, Yu; McGilligan, Clancy; Zhou, Liang; Vig, Mark; Jiang, Jack J.
2004-05-01
Phase space reconstruction, correlation dimension, and second-order entropy, methods from nonlinear dynamics, are used to analyze sustained vowels generated by patients before and after surgical excision of vocal polyps. Two conventional acoustic perturbation parameters, jitter and shimmer, are also employed to analyze voices before and after surgery. Presurgical and postsurgical analyses of jitter, shimmer, correlation dimension, and second-order entropy are statistically compared. Correlation dimension and second-order entropy show a statistically significant decrease after surgery, indicating reduced complexity and higher predictability of postsurgical voice dynamics. There is not a significant postsurgical difference in shimmer, although jitter shows a significant postsurgical decrease. The results suggest that jitter and shimmer should be applied to analyze disordered voices with caution; however, nonlinear dynamic methods may be useful for analyzing abnormal vocal function and quantitatively evaluating the effects of surgical excision of vocal polyps.
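For reference, the two conventional perturbation parameters can be sketched in their simplest "local" form (one of several definitions in use; the exact formulas the authors applied are not stated in the abstract):

```python
import numpy as np

def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, normalised by the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()

def shimmer_local(amplitudes):
    """Local shimmer (%): the same cycle-to-cycle measure applied to
    peak amplitudes instead of periods."""
    a = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / a.mean()
```

Both reduce a cycle-by-cycle sequence to a single percentage, which is why they can miss the kind of nonlinear structure that correlation dimension and entropy capture.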
Harden, Bradley J; Nichols, Scott R; Frueh, Dominique P
2014-09-24
Nuclear magnetic resonance (NMR) studies of larger proteins are hampered by difficulties in assigning NMR resonances. Human intervention is typically required to identify NMR signals in 3D spectra, and subsequent procedures depend on the accuracy of this so-called peak picking. We present a method that provides sequential connectivities through correlation maps constructed with covariance NMR, bypassing the need for preliminary peak picking. We introduce two novel techniques to minimize false correlations and merge the information from all original 3D spectra. First, we take spectral derivatives prior to performing covariance to emphasize coincident peak maxima. Second, we multiply covariance maps calculated with different 3D spectra to destroy erroneous sequential correlations. The maps are easy to use and can readily be generated from conventional triple-resonance experiments. Advantages of the method are demonstrated on a 37 kDa nonribosomal peptide synthetase domain subject to spectral overlap.
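A toy sketch of the two ideas, assuming two spectra that share their row (common) dimension; the derivative-before-covariance step emphasises coincident peak maxima as described, and multiplying maps from different experiments suppresses correlations that appear in only one of them (all names and the synthetic peaks are assumptions):

```python
import numpy as np

def covariance_map(spec_a, spec_b):
    """Covariance-style correlation map between two spectra that share
    their row (common) dimension, after differentiating along that axis
    so that only coincident peak maxima contribute strongly."""
    da = np.gradient(np.asarray(spec_a, float), axis=0)
    db = np.gradient(np.asarray(spec_b, float), axis=0)
    return da.T @ db

# Synthetic example: both spectra carry a peak at the same shared-axis
# position (row 30), in column 3 of spec_a and column 5 of spec_b.
rows = np.arange(64)
peak = np.exp(-0.5 * ((rows - 30) / 2.0) ** 2)
spec_a = np.zeros((64, 8)); spec_a[:, 3] = peak
spec_b = np.zeros((64, 8)); spec_b[:, 5] = peak
cmap = covariance_map(spec_a, spec_b)

# Merging maps from independent experiments by multiplication keeps only
# correlations supported by both.
merged = cmap * cmap
```

The resulting map peaks at the (column, column) pair whose peaks coincide along the shared dimension, which is the sequential-connectivity signal the method exploits.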
Böker, Sarah M.; Bender, Yvonne Y.; Diederichs, Gerd; Fallenberg, Eva M.; Wagner, Moritz; Hamm, Bernd; Makowski, Marcus R.
2017-01-01
Objectives To determine the diagnostic performance of susceptibility-weighted magnetic resonance imaging (SWMR) for the detection of pineal gland calcifications (PGC) compared to conventional magnetic resonance imaging (MRI) sequences, using computed tomography (CT) as a reference standard. Methods 384 patients who received a 1.5 Tesla MRI scan including SWMR sequences and a CT scan of the brain between January 2014 and October 2016 were retrospectively evaluated. 346 patients were included in the analysis, of which 214 showed PGC on CT scans. To assess correlation between imaging modalities, the maximum calcification diameter was used. Sensitivity, specificity and intra- and interobserver reliability were calculated for SWMR and conventional MRI sequences. Results SWMR reached a sensitivity of 95% (95% CI: 91%-97%) and a specificity of 96% (95% CI: 91%-99%) for the detection of PGC, whereas conventional MRI achieved a sensitivity of 43% (95% CI: 36%-50%) and a specificity of 96% (95% CI: 91%-99%). Detection rates for calcifications in SWMR and conventional MRI differed significantly (95% versus 43%, p<0.001). Diameter measurements between SWMR and CT showed a close correlation (R2 = 0.85, p<0.001) with a slight but not significant overestimation of size (SWMR: 6.5 mm ± 2.5; CT: 5.9 mm ± 2.4, p = 0.02). Interobserver agreement for diameter measurements was excellent on SWMR (ICC = 0.984, p < 0.0001). Conclusions Combining SWMR magnitude and phase information enables the accurate detection of PGC and offers a better diagnostic performance than conventional MRI with CT as a reference standard. PMID:28278291
Adetoro, O O
1988-06-01
Multiple exposure photography (MEP), an objective technique, was used in determining the percentage of motile sperms in the semen samples from 41 males being investigated for infertility. This technique was compared with the conventional subjective ordinary microscopy method of spermatozoal motility assessment. A satisfactory correlation was observed in percentage sperm motility assessment using the two methods but the MEP estimation was more consistent and reliable. The value of this technique of sperm motility study in the developing world is discussed.
Multi-ball and one-ball geolocation and location verification
NASA Astrophysics Data System (ADS)
Nelson, D. J.; Townsend, J. L.
2017-05-01
We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds, and the pulses may only be transmitted every six to ten seconds. Several fundamental problems are addressed, including demodulation of AIS/GMSK signals, verification of the emitter location, accurate frequency and delay estimation, and identification of pulse trains from the same emitter. In particular, we present several new correlation methods, including cross-cross correlation, which greatly improves correlation accuracy over conventional methods, and cross-TDOA and cross-FDOA functions that make it possible to estimate time and frequency delay without computing a two-dimensional cross-ambiguity surface. By isolating pulses from the same emitter and accurately tracking the received signal frequency, we are able to accurately estimate the emitter location from the received Doppler characteristics.
Osono, Eiichi; Kobayashi, Eiko; Inoue, Yuki; Honda, Kazumi; Kumagai, Takuya; Negishi, Hideki; Okamatsu, Kentaro; Ichimura, Kyoko; Kamano, Chisako; Suzuki, Fumi; Norose, Yoshihiko; Takahashi, Megumi; Takaku, Shun; Fujioka, Noriaki; Hayama, Naoaki; Takizawa, Hideaki
2014-01-01
A chemiluminescence system, Milliflex Quantum (MFQ), which detects microcolonies, has been used in the pharmaceutical field. In this study, we investigated aquatic bacteria in hemodialysis solutions sampled from bioburden areas in 4 dialysis facilities. Using MFQ, microcolonies could be detected after a short incubation period. The colony count detected with MFQ after a 48-hour incubation was 92% ± 39% of that after the conventionally used 7-14-day incubation period; in addition, the results showed a linear correlation. Moreover, MFQ-based analysis allowed the visualization of damaged cells and of high bacterial density due to an excessive amount of bacteria. These results suggested that MFQ had adequate sensitivity to detect bacteria in dialysis solutions, and that it was useful for validating the conditions of conventional culture methods.
Hsieh, Chung-Bao; Chen, Chung-Jueng; Chen, Teng-Wei; Yu, Jyh-Cherng; Shen, Kuo-Liang; Chang, Tzu-Ming; Liu, Yao-Chi
2004-01-01
AIM: To investigate whether non-invasive real-time Indocyanine green (ICG) clearance is a sensitive index of liver viability in patients before, during, and after liver transplantation. METHODS: Thirteen patients were studied, two before, three during, and eight following liver transplantation, with two patients suffering acute rejection. The conventional invasive ICG clearance test and the ICG pulse spectrophotometry non-invasive real-time ICG clearance test were performed simultaneously. Using linear regression analysis, we tested the correlation between these two methods. The transplantation condition of these patients and serum total bilirubin (T. Bil), alanine aminotransferase (ALT), and platelet count were also evaluated. RESULTS: The correlation between the two methods was excellent (r2 = 0.977). CONCLUSION: ICG pulse spectrophotometry clearance is a quick, non-invasive, and reliable liver function test in transplantation patients. PMID:15285026
Flammability Indices for Refrigerants
NASA Astrophysics Data System (ADS)
Kataoka, Osami
This paper introduces a new index to classify flammable refrigerants. A question about the flammability indices that ASHRAE employs arose from combustion test results for R152a and ammonia. The conventional classification methods of ASHRAE, as well as those of ISO and the Japanese High-Pressure Gas Safety Law, are evaluated to show why they conflict with the test results. The key finding of this paper is that the ratio of stoichiometric concentration to LFL concentration (the R factor) represents the test results most precisely. In addition, it correlates well with other flammability parameters such as flame speed and the pressure rise coefficient. Classification according to this index gives a reasonable flammability ordering of substances including ammonia, R152a and carbon monoxide. The theoretical background for why this index correlates well is also discussed, along with the limitations of the method.
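The R factor itself is a one-line computation. A sketch with illustrative, approximate numbers for propane in air (values assumed for demonstration; they are not taken from the paper):

```python
def r_factor(stoich_vol_pct, lfl_vol_pct):
    """R factor: ratio of a fuel's stoichiometric concentration in air
    to its lower flammability limit (LFL) concentration."""
    return stoich_vol_pct / lfl_vol_pct

# Illustrative (approximate, assumed) values for propane in air:
# stoichiometric concentration ~4.0 vol%, LFL ~2.1 vol%.
r_propane = r_factor(4.0, 2.1)
```

A larger R means the LFL lies further below the stoichiometric (most reactive) mixture, which is the intuition behind using the ratio as a severity ordering.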
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. 
Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
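The full counterpoise correction referred to above can be sketched as a simple energy bookkeeping step (the energies shown are assumed placeholder values, not computed results):

```python
def cp_interaction_energy(e_dimer, e_mono_a, e_mono_b):
    """Full counterpoise (Boys-Bernardi) interaction energy: both monomer
    energies are evaluated in the complete dimer basis (ghost orbitals on
    the partner), so basis-set superposition error largely cancels."""
    return e_dimer - e_mono_a - e_mono_b

# Illustrative (assumed) energies in hartree:
e_int = cp_interaction_energy(-2.001500, -1.000500, -0.999000)
```

Midbond functions slot into this scheme as extra basis functions placed between the monomers, shared by all three calculations, so the subtraction above remains balanced.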
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner (dot) product of segments of two signals. The time lag for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed to estimate signal delay with greater accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which yields the same correlation accuracy as is obtained for complex-valued signals.
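For contrast with the proposed phase-based method, the conventional baseline, integer-lag correlation followed by a common parabolic refinement of the peak, can be sketched as follows (this illustrates the baseline the note improves on, not PBDEM itself; names and the test pulse are assumptions):

```python
import numpy as np

def estimate_delay(x, y):
    """Cross-correlation delay estimate with parabolic sub-sample
    refinement around the correlation peak (illustrative baseline; the
    paper's PBDEM interpolates using the correlation phase instead)."""
    c = np.correlate(y, x, mode="full")
    k = int(np.argmax(c))
    frac = 0.0
    if 0 < k < len(c) - 1:
        # Fit a parabola through the peak and its two neighbours.
        denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
        if denom != 0.0:
            frac = 0.5 * (c[k - 1] - c[k + 1]) / denom
    return (k - (len(x) - 1)) + frac

# A Gaussian pulse delayed by 7 samples.
t = np.arange(200.0)
x = np.exp(-0.5 * ((t - 60.0) / 4.0) ** 2)
y = np.roll(x, 7)
delay = estimate_delay(x, y)
```

Parabolic refinement interpolates the magnitude; the note's point is that interpolating the correlation phase and locating its zero crossing is both more accurate and nearly as cheap.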
Klimowicz, M D; Nizanski, W; Batkowski, F; Savic, M A
2008-07-01
The aim of these experiments was to compare conventional microscopic methods of evaluating pigeon sperm motility and concentration with measurements by computer-assisted sperm analysis (the CASA system). Semen was collected twice a week from two groups of pigeons, each of 40 males (group I: meat-type breed; group II: fancy pigeon), using the lumbo-sacral and cloacal region massage method. Ejaculates collected in each group were diluted 1:100 in BPSE solution and divided into two equal samples. One sample was examined subjectively by microscope and the second was analysed using the CASA system. The sperm concentration was measured by CASA using the anti-collision (AC) system and fluorescent staining (IDENT). There were no significant differences between the methods of evaluating sperm concentration. High positive correlations in both groups were observed between the sperm concentration estimated by Thom counting chamber and AC (r = 0.87 and r = 0.91, respectively), and between the sperm concentration evaluated by Thom counting chamber and IDENT (r = 0.85 and r = 0.90, respectively). The mean values for CASA measurement of the proportion of motile spermatozoa (MOT) and progressive movement (PMOT) were significantly lower than the values estimated subjectively in both groups of pigeons (p ≤ 0.05 and p ≤ 0.01, respectively). Positive correlations in MOT and PMOT were noted between the two methods of evaluation. The CASA system is a rapid, objective and sensitive method for detecting subtle motility characteristics as well as sperm concentration, and is recommended for future research on pigeon semen.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Calibration of Passive Microwave Polarimeters that Use Hybrid Coupler-Based Correlators
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.
2003-01-01
Four calibration algorithms are studied for microwave polarimeters that use hybrid coupler-based correlators: 1) conventional two-look calibration with hot and cold sources, 2) three looks with combinations of hot and cold sources, 3) two-look calibration with a correlated source, and 4) four-look calibration combining methods 2 and 3. The systematic errors are found to depend on the polarimeter component parameters and on the accuracy of the calibration noise temperatures. A case-study radiometer in four different remote sensing scenarios was considered in light of these results. Applications for ocean surface salinity, ocean surface winds, and soil moisture were found to be sensitive to different systematic errors. Finally, a standard uncertainty analysis was performed on the four-look calibration algorithm, which was found to be most sensitive to the correlated calibration source.
Holland, Tanja; Blessing, Daniel; Hellwig, Stephan; Sack, Markus
2013-10-01
Radio frequency impedance spectroscopy (RFIS) is a robust method for the determination of cell biomass during fermentation. RFIS allows non-invasive in-line monitoring of the passive electrical properties of cells in suspension and can distinguish between living and dead cells based on their distinct behavior in an applied radio frequency field. We used continuous in situ RFIS to monitor batch-cultivated plant suspension cell cultures in stirred-tank bioreactors and compared the in-line data to conventional off-line measurements. RFIS-based analysis was more rapid and more accurate than conventional biomass determination, and was sensitive to changes in cell viability. The higher resolution of the in-line measurement revealed subtle changes in cell growth which were not accessible using conventional methods. Thus, RFIS is well suited for correlating such changes with intracellular states and product accumulation, providing unique opportunities for employing systems biotechnology and process analytical technology approaches to increase product yield and quality. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then smooths the mapped images axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images produced by the new method have an improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, and 1.0 × the Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image-feature preservation (i.e., smaller EGGR) than the conventional method.
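The conventional Gaussian-weighted interplane smoothing that the elastic-mapping method improves on can be illustrated with a short sketch (an illustration only, not the authors' code; `sigma` and `radius` are assumed values):

```python
import numpy as np

def axial_gaussian_smooth(volume, sigma=1.0, radius=2):
    """Conventional interplane smoothing: each output plane is a
    Gaussian-weighted average of neighboring planes along the axial
    axis. `volume` has shape (planes, ny, nx); edge planes are
    renormalized over the weights that actually apply."""
    offsets = np.arange(-radius, radius + 1)
    weights = np.exp(-0.5 * (offsets / sigma) ** 2)
    weights /= weights.sum()
    nplanes = volume.shape[0]
    out = np.zeros_like(volume, dtype=float)
    norm = np.zeros(nplanes)
    for k, w in zip(offsets, weights):
        src = slice(max(0, -k), nplanes - max(0, k))
        dst = slice(max(0, k), nplanes - max(0, -k))
        out[dst] += w * volume[src]
        norm[dst] += w
    return out / norm[:, None, None]
```

Smoothing straight across planes like this trades axial resolution for noise; elastic mapping first aligns the anatomy between planes so the averaging blurs less structure.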
Conflict and convention in dynamic networks.
Foley, Michael; Forber, Patrick; Smead, Rory; Riedl, Christoph
2018-03-01
An important way to resolve games of conflict (snowdrift, hawk-dove, chicken) involves adopting a convention: a correlated equilibrium that avoids any conflict between aggressive strategies. Dynamic networks allow individuals to resolve conflict via their network connections rather than changing their strategy. Exploring how behavioural strategies coevolve with social networks reveals new dynamics that can help explain the origins and robustness of conventions. Here, we model the emergence of conventions as correlated equilibria in dynamic networks. Our results show that networks have the tendency to break the symmetry between the two conventional solutions in a strongly biased way. Rather than the correlated equilibrium associated with ownership norms (play aggressive at home, not away), we usually see the opposite host-guest norm (play aggressive away, not at home) evolve on dynamic networks, a phenomenon common to human interaction. We also show that learning to avoid conflict can produce realistic network structures in a way different than preferential attachment models. © 2017 The Author(s).
Daniel, Kaemmerer; Maria, Athelogou; Amelie, Lupp; Isabell, Lenhardt; Stefan, Schulz; Luisa, Peter; Merten, Hommann; Vikas, Prasad; Gerd, Binnig; Paul, Baum Richard
2014-01-01
Background: Manual evaluation of somatostatin receptor (SSTR) immunohistochemistry (IHC) is a time-consuming and cost-intensive procedure. The aim of the study was to compare manual evaluation of SSTR subtype IHC to an automated software-based analysis, and to in-vivo imaging by SSTR-based PET/CT. Methods: We examined 25 gastroenteropancreatic neuroendocrine tumor (GEP-NET) patients and correlated their in-vivo SSTR-PET/CT data (determined by the standardized uptake values SUVmax and SUVmean) with the corresponding ex-vivo IHC data of SSTR subtype (1, 2A, 4, 5) expression. Exactly the same lesions were imaged by PET/CT, resected, and analyzed by IHC in each patient. After manual evaluation, the IHC slides were digitized and automatically evaluated for SSTR expression by Definiens XD software. A virtual IHC score “BB1” was created for comparing the manual and automated analysis of SSTR expression. Results: BB1 showed a significant correlation with the corresponding conventionally determined Her2/neu score of the SSTR subtypes 2A (rs: 0.57), 4 (rs: 0.44) and 5 (rs: 0.43). BB1 of SSTR2A also significantly correlated with the SUVmax (rs: 0.41) and the SUVmean (rs: 0.50). Likewise, a significant correlation was seen between the conventionally evaluated SSTR2A status and the SUVmax (rs: 0.42) and SUVmean (rs: 0.62). Conclusion: Our data demonstrate that the evaluation of the SSTR status by automated analysis (BB1 score), using digitized histopathology slides (“virtual microscopy”), corresponds well with the SSTR2A, 4 and 5 expression as determined by conventional manual histopathology. The BB1 score also exhibited a significant association with the SSTR-PET/CT data, in accordance with the high-affinity profile of the SSTR analogues used for imaging. PMID:25197368
Gültas, Mehmet; Düzgün, Güncel; Herzog, Sebastian; Jäger, Sven Joachim; Meckbach, Cornelia; Wingender, Edgar; Waack, Stephan
2014-04-03
The identification of functionally or structurally important non-conserved residue sites in protein MSAs is an important challenge for understanding the structural basis and molecular mechanism of protein functions. Despite the rich literature on compensatory mutations as well as sequence conservation analysis for the detection of those important residues, previous methods often rely on classical information-theoretic measures. However, these measures usually do not take into account dis/similarities of amino acids which are likely to be crucial for those residues. In this study, we present a new method, the Quantum Coupled Mutation Finder (QCMF) that incorporates significant dis/similar amino acid pair signals in the prediction of functionally or structurally important sites. The result of this study is twofold. First, using the essential sites of two human proteins, namely epidermal growth factor receptor (EGFR) and glucokinase (GCK), we tested the QCMF-method. The QCMF includes two metrics based on quantum Jensen-Shannon divergence to measure both sequence conservation and compensatory mutations. We found that the QCMF reaches an improved performance in identifying essential sites from MSAs of both proteins with a significantly higher Matthews correlation coefficient (MCC) value in comparison to previous methods. Second, using a data set of 153 proteins, we made a pairwise comparison between QCMF and three conventional methods. This comparison study strongly suggests that QCMF complements the conventional methods for the identification of correlated mutations in MSAs. QCMF utilizes the notion of entanglement, which is a major resource of quantum information, to model significant dissimilar and similar amino acid pair signals in the detection of functionally or structurally important sites. 
Our results suggest that, on the one hand, QCMF significantly outperforms the previous method, which mainly focuses on dissimilar amino acid signals, in detecting essential sites in proteins. On the other hand, it is complementary to the existing methods for the identification of correlated mutations. The QCMF method is computationally intensive; to ensure a feasible computation time for the QCMF algorithm, we leveraged the Compute Unified Device Architecture (CUDA). The QCMF server is freely accessible at http://qcmf.informatik.uni-goettingen.de/.
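QCMF's two metrics are built on the quantum Jensen-Shannon divergence. As a minimal illustration of that divergence alone (not the full QCMF scoring), using its standard definition in terms of von Neumann entropy:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i * log2(lambda_i) over the eigenvalues
    of the density matrix; zero eigenvalues contribute nothing."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def qjsd(rho, sigma):
    """Quantum Jensen-Shannon divergence between density matrices:
    QJSD(rho, sigma) = S((rho + sigma)/2) - (S(rho) + S(sigma))/2."""
    return von_neumann_entropy((rho + sigma) / 2) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))
```

For identical states the divergence is 0; for orthogonal pure states it reaches its maximum of 1 bit.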
Optical sampling by laser cavity tuning.
Hochrein, Thomas; Wilk, Rafal; Mei, Michael; Holzwarth, Ronald; Krumbholz, Norman; Koch, Martin
2010-01-18
Most time-resolved optical experiments rely either on external mechanical delay lines or on two synchronized femtosecond lasers to achieve a defined temporal delay between two optical pulses. Here, we present a new method which does not require any external delay lines and uses only a single femtosecond laser. It is based on the cross-correlation of an optical pulse with a subsequent pulse from the same laser. Temporal delay between these two pulses is achieved by varying the repetition rate of the laser. We validate the new scheme by a comparison with a cross-correlation measurement carried out with a conventional mechanical delay line.
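The delay mechanism can be sketched numerically: detuning the repetition rate changes the pulse period, and the k-th subsequent pulse accumulates k times that period difference. The numbers below are illustrative assumptions, not values from the paper:

```python
def pulse_delay(f_rep, delta_f, k=1):
    """Delay accumulated by the k-th subsequent pulse when the
    repetition rate is detuned from f_rep to f_rep + delta_f:
    each period differs by 1/f_rep - 1/(f_rep + delta_f)."""
    return k * (1.0 / f_rep - 1.0 / (f_rep + delta_f))
```

For an assumed 100 MHz laser detuned by 100 Hz, each successive pulse shifts by roughly 10 fs, so fine temporal scans are obtained purely by cavity tuning, without a mechanical delay line.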
Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics
2017-04-27
true at both the item and the scale level. Moreover, the correlation between conventional scores and distance scores for these types of scales...have a perfect negative correlation, r = -1.00. From this perspective, conventional and distance scores are completely redundant. Therefore, we argue... correlation between each respondent's rating profile and the scale key: shape-scores = rx,k. 2. Rating elevation difference, which is computed as the
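The shape score named in the fragment (shape-scores = rx,k) is a Pearson correlation between a respondent's item-rating profile and the scale key. A minimal sketch with hypothetical rating data:

```python
import math

def shape_score(ratings, key):
    """Pearson correlation between a respondent's item-rating profile
    and the scale key: the 'shape' component of profile similarity."""
    n = len(ratings)
    mr = sum(ratings) / n
    mk = sum(key) / n
    cov = sum((r - mr) * (k - mk) for r, k in zip(ratings, key))
    vr = sum((r - mr) ** 2 for r in ratings)
    vk = sum((k - mk) ** 2 for k in key)
    return cov / math.sqrt(vr * vk)
```

A profile that rises and falls with the key scores +1 regardless of its overall level; the elevation component mentioned in the fragment captures that level difference separately.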
Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh
2015-09-01
Considerable efforts have been made to predict seizures. Among these, methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method that is derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variation of the bi-spectrum of different channels of electrocorticography (ECoG) signals is obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since, in this way, the temporal variation of the amount of nonlinear coupling between brain regions, which has not been considered before, is taken into account, the results are more reliable than the conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly overcome a random predictor for all patients.
Kitamura, Kei-Ichiro; Zhu, Xin; Chen, Wenxi; Nemoto, Tetsu
2010-01-01
The conventional zero-heat-flow thermometer, which measures the deep body temperature from the skin surface, is widely used at present. However, this thermometer requires considerable electricity to power the electric heater that compensates for heat loss from the probe; thus, AC power is indispensable for its use. Therefore, this conventional thermometer is inconvenient for unconstrained monitoring. We have developed a new dual-heat-flux method that can measure the deep body temperature from the skin surface without a heater. Our method is convenient for unconstrained and long-term measurement because the instrument is driven by a battery and its design promotes energy conservation. Its probe consists of dual-heat-flow channels with different thermal resistances, and each heat-flow-channel has a pair of IC sensors attached on its top and bottom. The average deep body temperature measurements taken using both the dual-heat-flux and then the zero-heat-flow thermometers from the foreheads of 17 healthy subjects were 37.08 degrees C and 37.02 degrees C, respectively. In addition, the correlation coefficient between the values obtained by the 2 methods was 0.970 (p<0.001). These results show that our method can be used for monitoring the deep body temperature as accurately as the conventional method, and it overcomes the disadvantage of the necessity of AC power supply. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.
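The dual-heat-flux estimate can be reconstructed from the general principle (the notation and formula below are derived here, not quoted from the paper): at steady state, the heat flux through each channel equals the flux from the deep tissue, so the unknown tissue thermal resistance can be eliminated between the two channel equations:

```python
def deep_body_temperature(t1, t2, t3, t4, k):
    """Dual-heat-flux sketch of deep body temperature estimation.

    Channel 1 has bottom (skin-side) temperature t1 and top t2;
    channel 2 has bottom t3 and top t4; k = R1/R2 is the ratio of
    the channels' thermal resistances. Setting the tissue flux equal
    to each channel flux and eliminating the tissue resistance gives:
        Tb = t1 + (t3 - t1)(t1 - t2) / ((t1 - t2) - k(t3 - t4))
    """
    return t1 + (t3 - t1) * (t1 - t2) / ((t1 - t2) - k * (t3 - t4))
```

Because only temperature differences enter, no heater is needed to null the flux, which is what allows battery operation.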
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, correlated topic vector, to model such semantic correlations. Oriented from the correlated topic model, correlated topic vector intends to naturally utilize the correlations among topics, which are seldom considered in the conventional feature encoding, e.g., Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, correlated topic vector inherits the advantages of Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with the deep convolutional neural network (CNN) features and Gibbs sampling solution, correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that correlated topic vector improves significantly the deep CNN features, and outperforms existing Fisher kernel-based features.
Lee, Dong Yeon; Seo, Sang Gyo; Kim, Eo Jin; Kim, Sung Ju; Lee, Kyoung Min; Farber, Daniel C; Chung, Chin Youb; Choi, In Ho
2015-01-01
Radiographic examination is a widely used evaluation method in the orthopedic clinic. However, conventional radiography alone does not reflect the dynamic changes between foot and ankle segments during gait. Multiple 3-dimensional multisegment foot models (3D MFMs) have been introduced to evaluate intersegmental motion of the foot. In this study, we evaluated the correlation between static radiographic indices and intersegmental foot motion indices. One hundred twenty-five females were tested. Static radiographs of full-leg and anteroposterior (AP) and lateral foot views were performed. For hindfoot evaluation, we measured the AP tibiotalar angle (TiTA), talar tilt (TT), calcaneal pitch, lateral tibiocalcaneal angle, and lateral talocalcaneal angle. For the midfoot segment, naviculocuboid overlap and talonavicular coverage angle were calculated. AP and lateral talo-first metatarsal angles and metatarsal stacking angle (MSA) were measured to assess the forefoot. Hallux valgus angle (HVA) and hallux interphalangeal angle were measured. In gait analysis by 3D MFM, intersegmental angle (ISA) measurements of each segment (hallux, forefoot, hindfoot, arch) were recorded. ISAs at midstance phase were most highly correlated with radiography. Significant correlations were observed between ISA measurements using MFM and static radiographic measurements in the same segment. In the hindfoot, the coronal plane ISA was correlated with AP TiTA (P < .001) and TT (P = .018). In the hallux, HVA was strongly correlated with the transverse ISA of the hallux (P < .001). The segmental foot motion indices at midstance phase during gait measured by 3D MFM gait analysis were correlated with the conventional radiographic indices. The observed correlation between MFM measurements at midstance phase during gait and static radiographic measurements supports the fundamental basis for the use of MFM in analysis of dynamic motion of foot segments during gait. © The Author(s) 2014.
Methods of epigenome editing for probing the function of genomic imprinting.
Rienecker, Kira DA; Hill, Matthew J; Isles, Anthony R
2016-10-01
The curious patterns of imprinted gene expression draw interest from several scientific disciplines to the functional consequences of genomic imprinting. Methods of probing the function of imprinting itself have largely been indirect and correlational, relying heavily on conventional transgenics. Recently, the burgeoning field of epigenome editing has provided new tools and suggested strategies for asking causal questions with site specificity. This perspective article aims to outline how these new methods may be applied to questions of functional imprinting and, with this aim in mind, to suggest new dimensions for the expansion of these epigenome-editing tools.
Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel
2015-06-01
This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There was a great difference in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield of both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves.
NASA Astrophysics Data System (ADS)
Park, J.; Yoo, K.
2013-12-01
For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we used a data mining approach to assess groundwater pollution vulnerability in a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing a poor correlation with hydrogeological properties. Among the different data mining methods, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case Base Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree were the best. According to the subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
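Decision-tree rule induction of the kind described reduces, at each node, to choosing the impurity-minimizing threshold on one feature. A minimal single-split sketch with hypothetical data (no real site values or DRASTIC parameters):

```python
def gini(labels):
    """Gini impurity of a set of 0/1 labels (e.g., TCE-sensitive or not)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(x, y):
    """One level of decision-tree induction: pick the threshold on a
    single feature x that minimizes the weighted Gini impurity of the
    resulting split. Returns (threshold, impurity)."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best
```

Applying this recursively to the left and right subsets yields the readable if-then rules that make DT output interpretable without pre-assigned parameter weights.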
Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.
Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi
2007-09-01
A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.
Application of COLD-PCR for improved detection of KRAS mutations in clinical samples.
Zuo, Zhuang; Chen, Su S; Chandra, Pranil K; Galbincea, John M; Soape, Matthew; Doan, Steven; Barkoh, Bedia A; Koeppen, Hartmut; Medeiros, L Jeffrey; Luthra, Rajyalakshmi
2009-08-01
KRAS mutations have been detected in approximately 30% of all human tumors and have been shown to predict response to some targeted therapies. The most common KRAS mutation-detection strategy consists of conventional PCR and direct sequencing. This approach has a 10-20% detection sensitivity, depending on whether pyrosequencing or Sanger sequencing is used. To improve detection sensitivity, we compared our conventional method with the recently described co-amplification at lower denaturation temperature PCR (COLD-PCR) method, which selectively amplifies minority alleles. In COLD-PCR, the critical denaturation temperature is lowered to 80 degrees C (vs 94 degrees C in conventional PCR). The sensitivity of COLD-PCR was determined by assessing serial dilutions. Fifty clinical samples were used, including 20 fresh bone-marrow aspirate specimens and the formalin-fixed paraffin-embedded (FFPE) tissue of 30 solid tumors. Implementation of COLD-PCR was straightforward and required no additional cost for reagents or instruments. The method was specific and reproducible. COLD-PCR successfully detected mutations in all samples that were positive by conventional PCR, and enhanced the mutant-to-wild-type ratio by >4.74-fold, increasing the mutation detection sensitivity to 1.5%. The enhancement of mutation detection by COLD-PCR inversely correlated with the tumor-cell percentage in a sample. In conclusion, we validated the utility and superior sensitivity of COLD-PCR for detecting KRAS mutations in a variety of hematopoietic and solid tumors using either fresh or fixed, paraffin-embedded tissue.
Woynaroski, Tiffany; Oller, D Kimbrough; Keceli-Kaysili, Bahar; Xu, Dongxin; Richards, Jeffrey A; Gilkerson, Jill; Gray, Sharmistha; Yoder, Paul
2017-03-01
Theory and research suggest that vocal development predicts "useful speech" in preschoolers with autism spectrum disorder (ASD), but conventional methods for measurement of vocal development are costly and time consuming. This longitudinal correlational study examines the reliability and validity of several automated indices of vocalization development relative to an index derived from human-coded, conventional communication samples in a sample of preverbal preschoolers with ASD. Automated indices of vocal development were derived using software that is presently "in development" and/or only available for research purposes, and using commercially available Language ENvironment Analysis (LENA) software. Indices of vocal development that could be derived using the software available for research purposes: (a) were highly stable with a single day-long audio recording, (b) predicted future spoken vocabulary to a degree that was nonsignificantly different from the index derived from conventional communication samples, and (c) continued to predict future spoken vocabulary even after controlling for concurrent vocabulary in our sample. The score derived from standard LENA software was similarly stable, but was not significantly correlated with future spoken vocabulary. Findings suggest that automated vocal analysis is a valid and reliable alternative to time-intensive and expensive conventional communication samples for measurement of vocal development of preverbal preschoolers with ASD in research and clinical practice. Autism Res 2017, 10: 508-519. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Correlation among oxygen vacancies in bismuth titanate ferroelectric ceramics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Wei; Chen Kai; Yao Yangyang
2004-11-15
Pure Bi4Ti3O12 ceramics were prepared using the conventional solid-state reaction method and their dielectric properties were investigated. A dielectric loss peak with a relaxation-type characteristic was observed at about 370 K at 100 Hz. This peak was confirmed to be associated with the migration of oxygen vacancies inside the ceramics. Cole-Cole fitting to this peak reveals a strong correlation among oxygen vacancies, and this strong correlation is considered to commonly exist among oxygen vacancies in ferroelectrics. Therefore, the migration of oxygen vacancies in ferroelectric materials would demonstrate a collective behavior instead of an individual one due to this strong correlation. Furthermore, this correlation is in proportion to the concentration and in inverse proportion to the activation energy of the oxygen vacancies. These results could be helpful to the understanding of the fatigue mechanisms in ferroelectric materials.
Newbury, H. John; Possingham, John V.
1977-01-01
Using conventional methods it is impossible to extract RNA as uncomplexed intact molecules from the leaves of grapevines (Vitis vinifera L.) and from a number of woody perennial species that contain high levels of reactive phenolic compounds. A procedure involving the use of high concentrations of the chaotropic agent sodium perchlorate prevents the binding of phenolic compounds to RNA during extraction. Analyses of the phenolics present in plant tissues used in these experiments indicate that there is a poor correlation between the total phenolic content and the complexing of RNA. However, qualitative analyses suggest that proanthocyanidins are involved in the tanning of RNA during conventional extractions. PMID:16660134
Torres, Fernanda Ferrari Esteves; Bosso-Martelo, Roberta; Espir, Camila Galletti; Cirelli, Joni Augusto; Guerreiro-Tanomaru, Juliane Maria; Tanomaru-Filho, Mario
2017-01-01
To evaluate solubility, dimensional stability, filling ability, and volumetric change of root-end filling materials using conventional tests and new micro-CT-based methods. The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility, and at 30 days, higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials, complementing conventional tests.
Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M
2018-02-01
Bruising is a common abusive injury in children, and it is standard practice to image and measure bruises, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. Twenty-four children aged <11 years (mean age of 6.9, range 2.5-10 years) with a bruise were recruited from the community. Demographics and bruise details were recorded. Each bruise was measured in vivo using a paper measuring tape. Standardised conventional and cross-polarized digital images were obtained. The diameters of bruise images were measured by three computer-aided measurement techniques: Image J (segmentation with Simple Interactive Object Extraction, maximum Feret diameter), the 'Circular Selection Tool' (circle diameter), and the Photoshop 'ruler' software (Photoshop diameter). Inter- and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and the relevant intraclass correlation coefficients (ICCs) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05 for all tests. Images were available for 38 bruises in vivo, with 48 bruises visible on cross-polarized imaging and 46 on conventional imaging (some bruises interpreted as being single in vivo appeared to be multiple in digital images). Correlation coefficients were >0.5 for all techniques, with the maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer-aided measurements appeared larger than in vivo.
Inter- and intra-observer agreement was high for all maximum diameter measurements (ICCs > 0.7). Whilst there are minimal differences between measurements of the images obtained, the most consistent results were obtained when conventional images, segmented by Image J software, were measured with a Feret diameter. This is therefore proposed as a standard for future research and forensic practice, with the proviso that all computer-aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
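Image J reports the maximum Feret diameter of a segmented region, which for bruise-sized regions is simply the largest distance between any two foreground pixels of the binary mask. A brute-force sketch of that measurement (an illustration, not the Image J implementation, which uses the convex hull for speed):

```python
import numpy as np

def max_feret_diameter(mask):
    """Maximum Feret diameter of a binary mask: the largest Euclidean
    distance between any two foreground pixel centers. Brute-force
    O(n^2) over foreground pixels; fine for small regions."""
    pts = np.argwhere(mask).astype(float)
    d = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((d ** 2).sum(-1)).max())
```

Multiplying the pixel result by the image scale (mm per pixel, from a ruler in the frame) gives the physical diameter compared against the in vivo tape measurements.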
Relative Displacement Method for Track-Structure Interaction
Ramos, Óscar Ramón; Pantaleón, Marcos J.
2014-01-01
The track-structure interaction effects are usually analysed with conventional FEM programs, where it is difficult to implement the complex track-structure connection behaviour, which is nonlinear, elastic-plastic and depends on the vertical load. The authors developed an alternative analysis method, which they call the relative displacement method. It is based on the calculation of deformation states in single DOF element models that satisfy the boundary conditions. For its solution, an iterative optimisation algorithm is used. This method can be implemented in any programming language or analysis software. A comparison with ABAQUS calculations shows a very good result correlation and compliance with the standard's specifications. PMID:24634610
Clifton, Eric H.; Jaronski, Stefan T.; Hodgson, Erin W.; Gassmann, Aaron J.
2015-01-01
Entomopathogenic fungi (EPF) are widespread in agricultural fields and help suppress crop pests. These natural enemies may be hindered by certain agronomic practices associated with conventional agriculture including the use of pesticides. We tested whether the abundance of EPF differed between organic and conventional fields, and whether specific cropping practices and soil properties were correlated with their abundance. In one year of the survey, soil from organic fields and accompanying margins had significantly more EPF than conventional fields and accompanying margins. Regression analysis revealed that the percentage of silt and the application of organic fertilizer were positively correlated with EPF abundance; but nitrogen concentration, tillage, conventional fields, and margins of conventional fields were negatively correlated with EPF abundance. A greenhouse experiment in which fungicides and herbicides were applied to the soil surface showed no significant effect on EPF. Though organic fields were perceived to be more suitable environments for EPF, abiotic factors and cropping practices such as tillage may have greater impacts on the abundance of EPF. Also, fungicides and herbicides may not be as toxic to soil-borne EPF as originally thought. PMID:26191815
Bullying and cyberbullying among deaf students and their hearing peers: an exploratory study.
Bauman, Sheri; Pero, Heather
2011-01-01
A questionnaire on bullying and cyberbullying was administered to 30 secondary students (Grades 7-12) in a charter school for the Deaf and hard of hearing and a matched group of 22 hearing students in a charter secondary school on the same campus. Because the sample size was small and distributions non-normal, results are primarily descriptive and correlational. No significant differences by hearing status were detected in rates of conventional bullying, cyberbullying, or either form of victimization. Cyberbullying and cybervictimization were strongly correlated, as were conventional bullying and victimization. Moral disengagement was positively correlated only with conventional bullying. Implications for practice and future research are discussed.
Method and apparatus for diagnosis of lead toxicity
Rosen, John F.; Slatkin, Daniel N.; Wielopolski, Lucian
1989-01-01
Improved methods and apparatus for in vivo measurement of the skeletal lead burden of a patient and for diagnosis of lead toxicity are disclosed. The apparatus comprises an x-ray tube emitting soft low-energy x-rays from a silver anode, a polarizer for polarizing the emitted x-rays, and a detector for detecting photons fluoresced from atoms in the patient's tibia upon irradiation by the polarized x-rays. The fluoresced photons are spectrally analyzed to determine their energy distribution. Peaks indicating the presence of lead are identified if the patient has relatively high bone lead content. The data may be compared to data recorded from similar tests on patients who have also undergone the conventional EDTA chelation test, in order to correlate a particular patient's results with the conventionally accepted EDTA chelation test.
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capacity of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
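The ɛ-insensitive loss at the heart of SVR can be illustrated in a few lines of numpy; the residual values and the tube width ɛ = 0.1 below are purely illustrative, not taken from the study:

```python
import numpy as np

def eps_insensitive_loss(residuals, eps=0.1):
    # Vapnik's epsilon-insensitive loss: residuals inside the eps "tube"
    # incur zero cost, so small errors do not drag the fit around.
    return np.maximum(np.abs(residuals) - eps, 0.0)

# The least-modulus loss is the special case eps = 0 (plain absolute error).
residuals = np.array([-0.5, -0.05, 0.0, 0.08, 0.3])
loss = eps_insensitive_loss(residuals, eps=0.1)
# loss -> [0.4, 0.0, 0.0, 0.0, 0.2]
```

Only the two residuals outside the tube contribute, which is what gives SVR its robustness with small training samples.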
Asayama, Kei; Staessen, Jan A; Hayashi, Katsuhisa; Hosaka, Miki; Tatsuta, Nozomi; Kurokawa, Naoyuki; Satoh, Michihiro; Hashimoto, Takanao; Hirose, Takuo; Obara, Taku; Metoki, Hirohito; Inoue, Ryusuke; Kikuya, Masahiro; Ohkubo, Takayoshi; Nakai, Kunihiko; Imai, Yutaka; Satoh, Hiroshi
2012-08-01
Few studies have described home blood pressure (HBP) in young children. Using intrafamilial correlations of blood pressure as the research focus, we assessed the feasibility of HBP monitoring in this age group. We enrolled 382 mothers (mean age 38.8 years) and singletons (7.0 years) in the Tohoku Study of Child Development. We measured their conventional blood pressure (CBP; single reading) at an examination centre. Participants monitored HBP in the morning. We used the OMRON HEM-7080IC for CBP and HBP measurement. In a separate group of 84 children (mean age 7.7 years), we compared blood pressure readings obtained by the OMRON monitor and the Dinamap Pro 100, a device approved by the FDA for use in children. We used correlation coefficients as a measure of intrafamilial aggregation, while accounting for the mothers' age, body mass index, heart rate, and smoking and drinking habits, and the children's age, height, and heart rate. Mother-offspring correlations were closer (P ≤ 0.003) for HBP than CBP for systolic pressure [0.28 (P < 0.0001) vs 0.06 (P = 0.26)] and diastolic pressure [0.28 (P < 0.0001) vs 0.02 (P = 0.65)]. The between-device differences (OMRON minus Dinamap) averaged 7.8 +/- 6.0 mmHg systolic and 5.8 +/- 5.5 mmHg diastolic. HBP monitoring is an easily applicable method to assess intrafamilial blood pressure aggregation in young children and outperforms CBP. Validation protocols for HBP devices in young children need revision, because the Korotkoff method is not practicable at this age and there is no agreed alternative reference method.
Köhn, Andreas
2010-11-07
The coupled-cluster singles and doubles method augmented with single Slater-type correlation factors (CCSD-F12) determined by the cusp conditions (also denoted as the SP ansatz) yields results close to the basis set limit with only small overhead compared to conventional CCSD. Quantitative calculations on many-electron systems, however, require the inclusion of at least the effect of connected triple excitations. In this contribution, the recently proposed [A. Köhn, J. Chem. Phys. 130, 131101 (2009)] extended SP ansatz and its application to the noniterative triples correction CCSD(T) is reviewed. The approach allows explicit correlation to be included in connected triple excitations without introducing additional unknown parameters. The explicit expressions are presented and analyzed, and possible simplifications to arrive at a computationally efficient scheme are suggested. Numerical tests based on an implementation obtained by an automated approach are presented. Using a partial wave expansion for the neon atom, we can show that the proposed ansatz indeed leads to the expected (L(max)+1)(-7) convergence of the noniterative triples correction, where L(max) is the maximum angular momentum in the orbital expansion. Further results are reported for a test set of 29 molecules, employing Peterson's F12-optimized basis sets. We find that the customary approach of using the conventional noniterative triples correction on top of a CCSD-F12 calculation leads to significant basis set errors. This, however, is not always directly visible in total CCSD(T) energies due to fortuitous error compensation. The new approach offers a thoroughly explicitly correlated CCSD(T)-F12 method with improved basis set convergence of the triples contributions to both total and relative energies.
Ben-Haim, Simona; Kacperski, Krzysztof; Hain, Sharon; Van Gramberg, Dean; Hutton, Brian F; Erlandsson, Kjell; Sharir, Tali; Roth, Nathaniel; Waddington, Wendy A; Berman, Daniel S; Ell, Peter J
2010-08-01
We compared simultaneous dual-radionuclide (DR) stress and rest myocardial perfusion imaging (MPI) with a novel solid-state cardiac camera and a conventional SPECT camera with separate stress and rest acquisitions. Of 27 consecutive patients recruited, 24 (64.5+/-11.8 years of age, 16 men) were injected with 74 MBq of (201)Tl (rest) and 250 MBq (99m)Tc-MIBI (stress). Conventional MPI acquisition times for stress and rest are 21 min and 16 min, respectively. Rest (201)Tl for 6 min and simultaneous DR 15-min list mode gated scans were performed on a D-SPECT cardiac scanner. In 11 patients DR D-SPECT was performed first and in 13 patients conventional stress (99m)Tc-MIBI SPECT imaging was performed followed by DR D-SPECT. The DR D-SPECT data were processed using a spill-over and scatter correction method. DR D-SPECT images were compared with rest (201)Tl D-SPECT and with conventional SPECT images by visual analysis employing the 17-segment model and a five-point scale (0 normal, 4 absent) to calculate the summed stress and rest scores. Image quality was assessed on a four-point scale (1 poor, 4 very good) and gut activity was assessed on a four-point scale (0 none, 3 high). Conventional MPI studies were abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies DR D-SPECT MPI showed 113 abnormal segments and conventional MPI showed 93 abnormal segments. In the nine abnormal rest studies DR D-SPECT showed 45 abnormal segments and conventional MPI showed 48 abnormal segments. The summed stress and rest scores on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790 and 0.9694, respectively). The summed scores of rest (201)Tl D-SPECT and DR D-SPECT were also highly correlated (r=0.9968, p<0.0001 for all). In six patients stress perfusion defects were significantly larger on stress DR D-SPECT images, and five of these patients were imaged earlier by D-SPECT than by conventional SPECT. Fast and high-quality simultaneous DR MPI is feasible with D-SPECT in a single imaging session with comparable diagnostic performance and image quality to conventional SPECT and to a separate rest (201)Tl D-SPECT acquisition.
Periodontal inflamed surface area as a novel numerical variable describing periodontal conditions
2017-01-01
Purpose A novel index, the periodontal inflamed surface area (PISA), represents the sum of the periodontal pocket depths at bleeding on probing (BOP)-positive sites. In the present study, we evaluated correlations between PISA and periodontal classifications, and examined PISA as an index integrating the discrete conventional periodontal indexes. Methods This study was a cross-sectional subgroup analysis of data from a prospective cohort study investigating the association between chronic periodontitis and the clinical features of ankylosing spondylitis. Data from 84 patients without systemic diseases (the control group in the previous study) were analyzed in the present study. Results PISA values were positively correlated with conventional periodontal classifications (Spearman correlation coefficient=0.52; P<0.01) and with periodontal indexes, such as BOP and the plaque index (PI) (r=0.94; P<0.01 and r=0.60; P<0.01, respectively; Pearson correlation test). Porphyromonas gingivalis (P. gingivalis) expression and the presence of serum P. gingivalis antibodies were significant factors affecting PISA values in a simple linear regression analysis, together with periodontal classification, PI, bleeding index, and smoking, but not in the multivariate analysis. In the multivariate linear regression analysis, PISA values were positively correlated with the quantity of current smoking, PI, and severity of periodontal disease. Conclusions PISA integrates multiple periodontal indexes, such as probing pocket depth, BOP, and PI, into a numerical variable. PISA is advantageous for quantifying periodontal inflammation and plaque accumulation. PMID:29093989
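As a numerical sketch of the index described above: following the abstract's description, PISA accumulates probing pocket depths over BOP-positive sites only. The site values below are invented for illustration, and the published PISA formula additionally converts depths to root surface areas, a step omitted here:

```python
# Pocket depths (mm) and bleeding-on-probing flags for six sites (illustrative)
depths = [3, 5, 2, 6, 4, 7]
bop    = [0, 1, 0, 1, 1, 0]   # 1 = BOP-positive site

# Sum depths only where bleeding on probing was recorded
pisa = sum(d for d, b in zip(depths, bop) if b)
# pisa -> 15
```

Sites without bleeding contribute nothing, which is what makes PISA a measure of inflamed, rather than total, pocket surface.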
Teschoviruses as Indicators of Porcine Fecal Contamination of Surface Water
Jiménez-Clavero, Miguel Angel; Fernández, Carlos; Ortiz, José Antonio; Pro, Javier; Carbonell, Gregoria; Tarazona, José Vicente; Roblas, Neftalí; Ley, Victoria
2003-01-01
Teschoviruses specifically infect pigs and are shed in pig feces. Hence, their presence in water should indicate contamination with pig fecal residues. To assess this hypothesis, we have developed a real-time reverse transcriptase PCR (RT-PCR) method that allows the quantitative detection of pig teschovirus (PTV) RNA. The method is able to detect 92 fg of PTV RNA per ml of sample. Using this method, we have detected the presence of PTV RNA in water and fecal samples from all pig farms examined (n = 5). Feces from other animal species (cattle, sheep, and goats) were negative in this test. To compare the PTV RNA detection method with conventional chemical determinations currently in use for evaluation of water contamination, we analyzed water samples collected downstream from a pig slurry spillage site. We found a positive correlation between the two types of determinations. The sensitivity of the PTV detection assay was similar to that achieved by unspecific organic matter determination and superior to all other conventional chemical analyses performed. Furthermore, the new method is highly specific, revealing the porcine origin of the contamination, a feature that is lacking in currently available methods for the assessment of water contamination. PMID:14532098
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
Remote sensing of Myriophyllum spicatum L. in a shallow, eutrophic lake
NASA Technical Reports Server (NTRS)
Gustafson, T. D.; Adams, M. S.
1973-01-01
An aerial 35 mm system was used for the acquisition of vertical color and color infrared imagery of the submergent aquatic macrophytes of Lake Wingra, Wisconsin. A method of photographic interpretation of stem density classes is tested for its ability to make standing crop biomass estimates of Myriophyllum spicatum. The results of film image density analysis are significantly correlated with stem densities and standing crop biomass of Myriophyllum and with the biomass of Oedogonium mats. Photographic methods are contrasted with conventional harvest procedures for efficiency and accuracy.
A new modelling approach for zooplankton behaviour
NASA Astrophysics Data System (ADS)
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to live copepods.
Updates on Force Limiting Improvements
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Scharton, Terry
2013-01-01
The conventional force limiting methods currently practiced in deriving force limiting specifications (the simple TDOF model; semi-empirical force limits; apparent mass methods; the impedance method) assume a one-dimensional translational source and load apparent masses. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels, which lead to more realistic force limiting specifications, are discussed.
Johnson, Quentin R; Lindsay, Richard J; Shen, Tongye
2018-02-21
A computational method which extracts the dominant motions from an ensemble of biomolecular conformations via a correlation analysis of residue-residue contacts is presented. The algorithm first renders the structural information into contact matrices, then constructs the collective modes based on the correlated dynamics of a selected set of dynamic contacts. Associated programs can bridge the results for further visualization using graphics software. The aim of this method is to provide an analysis of conformations of biopolymers from the contact viewpoint. It may assist in systematically uncovering conformational switching mechanisms in proteins and biopolymer systems in general through statistical analysis of simulation snapshots. In contrast to conventional correlation analyses of Cartesian coordinates (such as distance covariance analysis and Cartesian principal component analysis), this program also provides an alternative way to locate essential collective motions in general. Herein, we detail the algorithm in a stepwise manner and comment on the importance of the method as applied to decoding allosteric mechanisms. © 2018 Wiley Periodicals, Inc.
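A minimal numpy sketch of the contact-correlation idea (not the authors' released code): snapshots are rendered into binary contact vectors, and collective modes fall out as the leading eigenvectors of the contact covariance matrix. The ensemble here is random, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ensemble: 50 snapshots x 6 residue-residue contacts (1 = contact formed)
contacts = rng.integers(0, 2, size=(50, 6)).astype(float)

# Covariance of the contact time series captures correlated contact dynamics
X = contacts - contacts.mean(axis=0)
cov = X.T @ X / (len(X) - 1)

# Collective modes are the eigenvectors; eigh() returns ascending eigenvalues,
# so the last column is the dominant mode
evals, evecs = np.linalg.eigh(cov)
dominant_mode = evecs[:, -1]
```

In a real analysis the contact vectors would come from simulation snapshots (e.g. a distance cutoff between residue pairs), and the dominant modes would be mapped back onto structures for visualization.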
Complex Correlation Calculation of e-H Total Cross Sections
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Temkin, A.; Fisher, Richard R. (Technical Monitor)
2001-01-01
Calculation of e-H total and elastic partial wave cross sections is being carried out using the complex correlation variational T-matrix method. In this preliminary study, elastic partial wave phase shifts are calculated with correlation functions which are confined to be real. In that case the method reduces to the conventional optical potential approach with two projection operators. The number of terms in the Hylleraas-type wave function for the ¹S phase shifts is 95, while for the ³S it is 56, except for k=0.8 where it is 84. Our results, which are rigorous lower bounds, are given. They are seen to be in general agreement with those of Schwartz, but they are of greater accuracy and outside of his error limits for k=0.3 and 0.4 for ¹S. The main aim of this approach is the application to higher energy scattering. By virtue of the complex correlation functions, the T matrix is not unitary, so that elastic and total scattering cross sections are independent of each other. Our results will be compared specifically with those of Bray and Stelbovics.
Hierarchical multivariate covariance analysis of metabolic connectivity
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-01-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI). PMID:25294129
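The point above about variance terms obscuring covariance can be demonstrated directly: below, two variables share essentially the same covariance with x, but the one with inflated variance shows a much weaker correlation coefficient. The data are synthetic and purely illustrative:

```python
import numpy as np

def corr_terms(x, y):
    # Decompose the correlation coefficient into its covariance and variance terms
    cov = np.cov(x, y, ddof=1)[0, 1]
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    return cov, vx, vy, cov / np.sqrt(vx * vy)

rng = np.random.default_rng(1)
base = rng.normal(size=200)
x = base + 0.1 * rng.normal(size=200)
y_low_var  = base + 0.5 * rng.normal(size=200)   # similar covariance with x
y_high_var = base + 2.0 * rng.normal(size=200)   # same signal, inflated variance

r_low  = corr_terms(x, y_low_var)[3]
r_high = corr_terms(x, y_high_var)[3]
# r_low > r_high, although both y variables share the covariance of `base` with x
```

Interrogating the covariance and variance terms separately, as the proposed framework does, avoids misreading such a drop in correlation as a loss of coupling.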
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator for differentiation, as well as clusters in a correlation network that are not detected with conventional correlation analysis. We developed a stochastic-process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
NASA Astrophysics Data System (ADS)
Tiecher, Tales; Caner, Laurent; Gomes Minella, Jean Paolo; Henrique Ciotti, Lucas; Antônio Bender, Marcos; dos Santos Rheinheimer, Danilo
2014-05-01
Conventional fingerprinting methods based on geochemical composition still require time-consuming and critical preliminary sample preparation. Thus, fingerprinting characteristics that can be measured rapidly and cheaply with minimal sample preparation, such as spectroscopic methods, should be used. The present study aimed to evaluate the contribution of sediment sources in a rural catchment using a conventional method based on geochemical composition and an alternative method based on near-infrared spectroscopy. This study was carried out in a rural catchment with an area of 1.19 km2 located in southern Brazil. The sediment sources evaluated were crop fields (n=20), unpaved roads (n=10) and stream channels (n=10). Thirty suspended sediment samples were collected from eight significant storm runoff events between 2009 and 2011. Source and sediment samples were dried at 50°C and sieved at 63 µm. The total concentrations of Ag, As, B, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, K, La, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, Tl, V and Zn were estimated by ICP-OES after microwave-assisted digestion with concentrated HNO3 and HCl. Total organic carbon (TOC) was estimated by wet oxidation with K2Cr2O7 and H2SO4. The near-infrared spectra were scanned from 4000 to 10000 cm-1 at a resolution of 2 cm-1, with 100 co-added scans per spectrum. The steps used in the conventional method were: i) tracer selection based on the Kruskal-Wallis test, ii) selection of the best set of tracers using discriminant analysis, and finally iii) the use of a mixed linear model to calculate the sediment source contributions.
The steps used in the alternative method were: i) principal component analysis to reduce the number of variables, ii) discriminant analysis to determine the tracer potential of near-infrared spectroscopy, and finally iii) the use of partial least squares regression, based on 48 mixtures of the sediment sources in various weight proportions, to calculate the sediment source contributions. Both the conventional and alternative methods were able to discriminate 100% of the sediment sources. The conventional fingerprinting method gave sediment source contributions of 33±19% from crop fields, 25±13% from unpaved roads and 42±19% from stream channels. The contributions obtained by the alternative fingerprinting method using near-infrared spectroscopy were 71±22% from crop fields, 21±12% from unpaved roads and 14±19% from stream channels. No correlation was observed between the source contributions assessed by the two methods. Nevertheless, the average contribution of the unpaved roads was similar for both methods. The largest difference in the average contributions of crop fields and stream channels estimated by the two methods was due to the similar organic matter content of these two sediment sources, which hampers their discrimination from the near-infrared spectra, in which many bands are highly correlated with TOC levels. Efforts should be made to combine both geochemical composition and near-infrared spectroscopy information into a single estimate of the sediment source contributions.
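The final unmixing step in both workflows reduces to solving for source proportions from tracer signatures. A least-squares sketch of that step (tracer concentrations and proportions are invented; real sediment mixing models add non-negativity constraints and within-source variability):

```python
import numpy as np

# Rows: three tracers; columns: crop fields, unpaved roads, stream channels
A = np.array([[10.0, 40.0, 25.0],
              [ 5.0, 15.0, 30.0],
              [20.0,  8.0, 12.0]])
true_p = np.array([0.5, 0.2, 0.3])        # assumed source proportions
b = A @ true_p                            # tracer signature of the mixed sediment

p, *_ = np.linalg.lstsq(A, b, rcond=None) # least-squares unmixing
p = p / p.sum()                           # renormalise proportions to sum to 1
# p -> [0.5, 0.2, 0.3]
```

With a well-conditioned tracer matrix the proportions are recovered exactly; with noisy field tracers the same system is solved in a constrained, error-weighted form.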
Ramli, Nurul Shazini; Ismail, Patimah; Rahmat, Asmah
2014-01-01
The aim of this study was to examine the effects of extraction methods on the antioxidant capacities of red dragon fruit peel and flesh. Antioxidant capacities were measured using the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical cation assay and the ferric reducing antioxidant power (FRAP) assay. Total phenolic content (TPC) was determined using Folin-Ciocalteu reagent, while quantitative determination of total flavonoid content (TFC) was conducted using the aluminium trichloride colorimetric method. Betacyanin content (BC) was measured by spectrophotometer. Red dragon fruit was extracted using conventional (CV) and ultrasonic-assisted extraction (UE) techniques to determine the most efficient way of extracting its antioxidant components. Results indicated that UE increased TFC, reduced the extraction yield, BC, and TPC, but exhibited the strongest scavenging activity for the peel of red dragon fruit. In contrast, UE reduced BC, TFC, and scavenging activity but increased the yield for the flesh. Nonetheless, UE slightly increased TPC in the flesh. Scavenging activity and reducing power were highly correlated with phenolic and flavonoid compounds. Conversely, scavenging activity and reducing power were weakly correlated with betacyanin content. This work provides scientific evidence for consideration of the type of extraction technique for the peel and flesh of red dragon fruit in applied research and the food industry. PMID:25379555
A Novel Two-Step Hierarchical Quantitative Structure-Activity ...
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by the means of high-throughput screening. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: One group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models t
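The first step of the two-step scheme above, partitioning compounds by whether their IC50 versus LD50 relationship is linear, can be sketched as a residual split around a fitted line. All data below are synthetic, and the second-step descriptor-based QSAR classifier is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(2)
log_ic50 = rng.normal(0.0, 1.0, 60)
# First 30 compounds follow a linear IC50-LD50 trend, the rest do not
log_ld50 = np.where(np.arange(60) < 30,
                    0.8 * log_ic50 + 0.2 * rng.normal(size=60),
                    rng.normal(0.0, 1.0, 60))

# Step 1: fit one global line, then partition compounds by residual magnitude
slope, intercept = np.polyfit(log_ic50, log_ld50, 1)
resid = np.abs(log_ld50 - (slope * log_ic50 + intercept))
linear_group = resid < np.median(resid)
# Step 2 (not shown): train a binary QSAR classifier on chemical descriptors
# to predict `linear_group` membership for new compounds.
```

A median split is used here only for illustration; the actual partitioning criterion and classifier would come from the study's modeling protocol.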
DMirNet: Inferring direct microRNA-mRNA association networks.
Lee, Minsu; Lee, HyungJune
2016-12-05
MicroRNAs (miRNAs) play important regulatory roles in the wide range of biological processes by inducing target mRNA degradation or translational repression. Based on the correlation between expression profiles of a miRNA and its target mRNA, various computational methods have previously been proposed to identify miRNA-mRNA association networks by incorporating the matched miRNA and mRNA expression profiles. However, there remain three major issues to be resolved in the conventional computation approaches for inferring miRNA-mRNA association networks from expression profiles. 1) Inferred correlations from the observed expression profiles using conventional correlation-based methods include numerous erroneous links or over-estimated edge weight due to the transitive information flow among direct associations. 2) Due to the high-dimension-low-sample-size problem on the microarray dataset, it is difficult to obtain an accurate and reliable estimate of the empirical correlations between all pairs of expression profiles. 3) Because the previously proposed computational methods usually suffer from varying performance across different datasets, a more reliable model that guarantees optimal or suboptimal performance across different datasets is highly needed. In this paper, we present DMirNet, a new framework for identifying direct miRNA-mRNA association networks. To tackle the aforementioned issues, DMirNet incorporates 1) three direct correlation estimation methods (namely Corpcor, SPACE, Network deconvolution) to infer direct miRNA-mRNA association networks, 2) the bootstrapping method to fully utilize insufficient training expression profiles, and 3) a rank-based Ensemble aggregation to build a reliable and robust model across different datasets. Our empirical experiments on three datasets demonstrate the combinatorial effects of necessary components in DMirNet. 
Additional performance comparison experiments show that DMirNet outperforms the state-of-the-art ensemble-based model [1], which had shown the best performance across the same three datasets, by a factor of up to 1.29. Further, we identify 43 putative novel multi-cancer-related miRNA-mRNA association relationships from an inferred top-1000 direct miRNA-mRNA association network. We believe that DMirNet is a promising method to identify novel direct miRNA-mRNA relations and to elucidate direct miRNA-mRNA association networks. Since DMirNet infers direct relationships from the observed data, it can contribute to reconstructing various direct regulatory pathways, including, but not limited to, direct miRNA-mRNA association networks.
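The bootstrapping and rank-based ensemble aggregation steps described in this abstract can be sketched as follows. This is a minimal illustration with a plain Pearson estimator standing in for Corpcor, SPACE, and Network deconvolution; all function names are hypothetical, not DMirNet's actual code.

```python
import numpy as np

def bootstrap_scores(X, Y, estimator, n_boot=50, seed=0):
    """Average |association| scores over bootstrap resamples of the samples.
    X: miRNA profiles (n_samples x n_mirna), Y: mRNA profiles (n_samples x n_mrna)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    acc = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample with replacement
        acc += np.abs(estimator(X[idx], Y[idx]))
    return acc / n_boot

def pearson_scores(X, Y):
    """Plain Pearson correlation between every miRNA/mRNA pair."""
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Y - Y.mean(0)) / Y.std(0)
    return Xc.T @ Yc / X.shape[0]

def rank_aggregate(score_matrices):
    """Rank-based ensemble: average the rank of each candidate edge
    across the individual estimators (higher rank = stronger edge)."""
    ranks = []
    for S in score_matrices:
        flat = S.ravel()
        order = flat.argsort().argsort()       # ascending ranks 0..n-1
        ranks.append(order.reshape(S.shape))
    return np.mean(ranks, axis=0)
```

A planted miRNA-to-mRNA link then surfaces as the top-ranked edge of the aggregated matrix.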
Gaussian graphical modeling reveals specific lipid correlations in glioblastoma cells
NASA Astrophysics Data System (ADS)
Mueller, Nikola S.; Krumsiek, Jan; Theis, Fabian J.; Böhm, Christian; Meyer-Bäse, Anke
2011-06-01
Advances in high-throughput measurements of biological specimens necessitate the development of biologically driven computational techniques. To understand the molecular level of many human diseases, such as cancer, lipid quantifications have been shown to offer an excellent opportunity to reveal disease-specific regulations. The data analysis of the cell lipidome, however, remains a challenging task and cannot be accomplished solely based on intuitive reasoning. We have developed a method to identify a lipid correlation network which is entirely disease-specific. A powerful method to correlate experimentally measured lipid levels across the various samples is a Gaussian Graphical Model (GGM), which is based on partial correlation coefficients. In contrast to regular Pearson correlations, partial correlations aim to identify only direct correlations while eliminating indirect associations. Conventional GGM calculations on the entire dataset can, however, not provide information on whether a correlation is truly disease-specific with respect to the disease samples and not a correlation of control samples. Thus, we implemented a novel differential GGM approach unraveling only the disease-specific correlations, and applied it to the lipidome of immortal Glioblastoma tumor cells. A large set of lipid species was measured by mass spectrometry in order to evaluate lipid remodeling in response to a combination of perturbations inducing programmed cell death, while the other perturbations served solely as biological controls. With the differential GGM, we were able to reveal Glioblastoma-specific lipid correlations to advance biomedical research on novel gene therapies.
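The core GGM computation, partial correlations obtained from the inverse covariance (precision) matrix, can be sketched as below. This is a minimal illustration assuming many more samples than lipids so the covariance matrix is invertible; the differential step here simply thresholds the difference between condition-specific matrices, which is a simplification of the authors' approach.

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from samples X (n_samples x n_vars),
    via the inverse of the covariance (precision) matrix:
    rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

def differential_edges(X_disease, X_control, threshold=0.3):
    """Differential GGM sketch: keep only edges whose partial correlation
    differs strongly between disease and control conditions."""
    diff = partial_correlations(X_disease) - partial_correlations(X_control)
    return np.abs(diff) > threshold
```

On synthetic data where one variable is a direct sum of two others, the direct links survive with partial correlations near 1 while indirect associations are suppressed.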
A proposed simple method for measurement in the anterior chamber angle: biometric gonioscopy.
Congdon, N G; Spaeth, G L; Augsburger, J; Klancnik, J; Patel, K; Hunter, D G
1999-11-01
To design a system of gonioscopy that will allow greater interobserver reliability and more clearly defined screening cutoffs for angle closure than current systems while being simple to teach and technologically appropriate for use in rural Asia, where the prevalence of angle-closure glaucoma is highest. Clinic-based validation and interobserver reliability trial. Study 1: 21 patients 18 years of age and older recruited from a university-based specialty glaucoma clinic; study 2: 32 patients 18 years of age and older recruited from the same clinic. In study 1, all participants underwent conventional gonioscopy by an experienced observer (GLS) using the Spaeth system and in the same eye also underwent Scheimpflug photography, ultrasonographic measurement of anterior chamber depth and axial length, automatic refraction, and biometric gonioscopy with measurement of the distance from iris insertion to Schwalbe's line using a reticule based in the slit-lamp ocular. In study 2, all participants underwent both conventional gonioscopy and biometric gonioscopy by an experienced gonioscopist (NGC) and a medical student with no previous training in gonioscopy (JK). Study 1: The association between biometric gonioscopy and conventional gonioscopy, Scheimpflug photography, and other factors known to correlate with the configuration of the angle. Study 2: Interobserver agreement using biometric gonioscopy compared to that obtained with conventional gonioscopy. In study 1, there was an independent, monotonic, statistically significant relationship between biometric gonioscopy and both Spaeth angle (P = 0.001, t test) and Spaeth insertion (P = 0.008, t test) grades. Biometric gonioscopy correctly identified six of six patients with occludable angles according to Spaeth criteria. Biometric gonioscopic grade was also significantly associated with the anterior chamber angle as measured by Scheimpflug photography (P = 0.005, t test). 
In study 2, the intraclass correlation coefficient between graders for biometric gonioscopy (0.97) was higher than for Spaeth angle grade (0.72) or Spaeth insertion grade (0.84). Biometric gonioscopy correlates well with other measures of the anterior chamber angle, shows a higher degree of interobserver reliability than conventional gonioscopy, and can readily be learned by an inexperienced observer.
A novel multi-target regression framework for time-series prediction of drug efficacy.
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-18
Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special in that the samples are few but high-dimensional, which makes it difficult to adopt conventional methods to predict the efficacy of a traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlations in a TCM prescription. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between values at different time steps and add the prediction targets of previous time steps as features when predicting the value at the current time step. Several experiments are conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly manifest the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance and appears to be more suitable for this task.
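The central idea, feeding the targets observed at the previous time step back in as features for the current time step, can be sketched with a plain least-squares regressor standing in for the SVR used in the paper. Function names are illustrative only.

```python
import numpy as np

def make_lagged_features(series, covariates=None):
    """Use the (multi-)target values at time t-1 as extra features when
    predicting the target values at time t. series: (n_times x n_targets)."""
    prev = series[:-1]            # targets of the previous time step
    cur = series[1:]              # targets to predict
    if covariates is not None:
        prev = np.hstack([covariates[1:], prev])
    return prev, cur

def fit_linear(X, Y):
    """Ordinary least squares with an intercept; one model per target."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_linear(W, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ W
```

For a series whose targets evolve affinely in time, the lagged model recovers the dynamics exactly, which is the degenerate sanity case of the framework.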
NASA Astrophysics Data System (ADS)
Fischer, J.; Doolan, C.
2017-12-01
A method to improve the quality of acoustic beamforming in reverberant environments is proposed in this paper. The processing is based on a filtering of the cross-correlation matrix of the microphone signals obtained using a microphone array. The main advantage of the proposed method is that it does not require information about the geometry of the reverberant environment and thus it can be applied to any configuration. The method is applied to the particular example of aeroacoustic testing in a hard-walled low-speed wind tunnel; however, the technique can be used in any reverberant environment. Two test cases demonstrate the technique. The first uses a speaker placed in the hard-walled working section with no wind tunnel flow. In the second test case, an airfoil is placed in a flow and acoustic beamforming maps are obtained. The acoustic maps have been improved, as the reflections observed in the conventional maps have been removed after application of the proposed method.
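The surrounding beamforming machinery can be sketched as below. The paper's reverberation-specific filtering of the cross-correlation (cross-spectral) matrix is replaced here, purely for illustration, by the much simpler and widely used removal of the CSM diagonal; the free-field monopole steering model and all names are assumptions, not the authors' method.

```python
import numpy as np

def steering_vector(mic_positions, focus_point, k):
    """Free-field steering vector for wavenumber k (monopole model)."""
    r = np.linalg.norm(mic_positions - focus_point, axis=1)
    return np.exp(-1j * k * r) / r

def beamform_map(csm, mic_positions, grid_points, k, remove_diagonal=True):
    """Conventional frequency-domain beamforming from a cross-spectral
    matrix (CSM). Zeroing the CSM diagonal is a common filtering step
    that suppresses uncorrelated self-noise on each microphone."""
    C = csm.copy()
    if remove_diagonal:
        np.fill_diagonal(C, 0.0)
    out = np.empty(len(grid_points))
    for i, g in enumerate(grid_points):
        w = steering_vector(mic_positions, g, k)
        w = w / np.linalg.norm(w)
        out[i] = np.real(w.conj() @ C @ w)   # steered response power
    return out
```

A simulated monopole placed on a scan-grid node then produces its map peak at the true source location.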
Quantum imaging with incoherently scattered light from a free-electron laser
NASA Astrophysics Data System (ADS)
Schneider, Raimund; Mehringer, Thomas; Mercurio, Giuseppe; Wenthaus, Lukas; Classen, Anton; Brenner, Günter; Gorobtsov, Oleg; Benz, Adrian; Bhatti, Daniel; Bocklage, Lars; Fischer, Birgit; Lazarev, Sergey; Obukhov, Yuri; Schlage, Kai; Skopintsev, Petr; Wagner, Jochen; Waldmann, Felix; Willing, Svenja; Zaluzhnyy, Ivan; Wurth, Wilfried; Vartanyants, Ivan A.; Röhlsberger, Ralf; von Zanthier, Joachim
2018-02-01
The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. Here we show that higher-order degrees of coherence, displayed in the intensity correlations of incoherently scattered X-rays from an FEL, can be used to image two-dimensional objects with a spatial resolution close to or even below the Abbe limit. This constitutes a new approach towards structure determination based on incoherent processes, including fluorescence emission or wavefront distortions, generally considered detrimental for imaging applications. Our method is an extension of the landmark intensity correlation measurements of Hanbury Brown and Twiss to higher than second order, paving the way towards determination of structure and dynamics of matter in regimes where coherent imaging methods have intrinsic limitations.
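The underlying observable, the normalized second-order intensity correlation introduced by Hanbury Brown and Twiss, can be estimated from repeated intensity frames as sketched below. This is a toy statistical illustration, not the FEL analysis pipeline.

```python
import numpy as np

def g2(intensities):
    """Normalized second-order correlation
    g2(i, j) = <I_i I_j> / (<I_i><I_j>)
    from repeated intensity frames (n_frames x n_pixels)."""
    mean = intensities.mean(axis=0)
    corr = intensities.T @ intensities / intensities.shape[0]
    return corr / np.outer(mean, mean)
```

For chaotic (thermal-like) light the same-pixel value shows the classic bunching signature g2 = 2, while statistically independent pixels give g2 = 1.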
Chatzaraki, Vasiliki; Thali, Michael J; Ampanozi, Garyfalia; Schweitzer, Wolf
2018-06-01
Fatal car-to-pedestrian collisions regularly appear in the forensic pathologist's routine, particularly in places of extended urbanization. Postmortem computed tomography has gained an exceptional role worldwide in supplementing autopsy, giving information that is supplementary or complementary to conventional autopsy. In this retrospective study, a total of 320 findings from both postmortem computed tomography and autopsy were correlated in a series of 21 pedestrians fatally hit by cars and trucks. According to our results, it is best to combine both methods to give well-founded answers to questions pertaining to both collision reconstruction and cause of death.
Opoku-Duah, S.; Donoghue, D.N.M.; Burt, T. P.
2008-01-01
This paper compares evapotranspiration estimates from two complementary satellite sensors – NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and ESA's ENVISAT Advanced Along-Track Scanning Radiometer (AATSR) – over the savannah area of the Volta basin in West Africa. This was achieved through solving for evapotranspiration on the basis of the regional energy balance equation, computationally driven by the Surface Energy Balance Algorithm for Land (SEBAL). The results showed that both sensors are potentially good sources of evapotranspiration estimates over large heterogeneous landscapes. The MODIS sensor measured daily evapotranspiration reasonably well, with a strong spatial correlation (R2=0.71) with Landsat ETM+, but underperformed with deviations up to ∼2.0 mm day-1 when compared with local eddy correlation observations and the Penman-Monteith method, mainly because of scale mismatch. The AATSR sensor produced much poorer correlations (R2=0.13) with Landsat ETM+ and conventional ET methods, also because of differences in atmospheric correction and sensor calibration over land. PMID:27879847
Ultrasonography with color Doppler and power Doppler in the diagnosis of periapical lesions
Goel, Sumit; Nagendrareddy, Suma Gundareddy; Raju, Manthena Srinivasa; Krishnojirao, Dayashankara Rao Jingade; Rastogi, Rajul; Mohan, Ravi Prakash Sasankoti; Gupta, Swati
2011-01-01
Aim: To evaluate the efficacy of ultrasonography (USG) with color Doppler and power Doppler applications over conventional radiography in the diagnosis of periapical lesions. Materials and Methods: Thirty patients having inflammatory periapical lesions of the maxillary or mandibular anterior teeth and requiring endodontic surgery were selected for inclusion in this study. All patients consented to participate in the study. We used conventional periapical radiographs as well as USG with color Doppler and power Doppler for the diagnosis of these lesions. Their diagnostic performances were compared against histopathologic examination. All data were compared and statistically analyzed. Results: USG examination with color Doppler and power Doppler identified 29 (19 cysts and 10 granulomas) of 30 periapical lesions accurately, with a sensitivity of 100% for cysts and 90.91% for granulomas and a specificity of 90.91% for cysts and 100% for granulomas. In comparison, conventional intraoral radiography identified only 21 lesions (sensitivity of 78.9% for cysts and 45.4% for granulomas and specificity of 45.4% for cysts and 78.9% for granulomas). There was definite correlation between the echotexture of the lesions and the histopathological features except in one case. Conclusions: USG imaging with color Doppler and power Doppler is superior to conventional intraoral radiographic methods for diagnosing the nature of periapical lesions in the anterior jaws. This study reveals the potential of USG examination in the study of other jaw lesions. PMID:22223940
Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki
2015-04-30
Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, as is usual in supervised FE, is generally harder than in the two-class problem. Developing unsupervised methods that are independent of sample classification would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE: variational Bayes PCA (VBPCA), which was extended to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples, and significant, negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE were also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data, and outperformed two conventional supervised FE methods on a real data set. Thus, the two methods appear equally suitable for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS. PMID:27420066
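One block's robust phase-elevation ratio estimate can be sketched with iteratively reweighted least squares using Huber-type weights, so that pixels affected by deformation act as down-weighted outliers. This is an illustrative stand-in for the paper's expanded robust least squares; the function name and weighting constants are hypothetical.

```python
import numpy as np

def phase_elevation_ratio(phase, elevation, n_iter=20, c=1.345):
    """Estimate K and b in phase = K*elevation + b with iteratively
    reweighted least squares (Huber weights, MAD scale estimate)."""
    A = np.column_stack([elevation, np.ones_like(elevation)])
    w = np.ones_like(phase)
    for _ in range(n_iter):
        sw = np.sqrt(w)                       # weight rows for lstsq
        sol, *_ = np.linalg.lstsq(A * sw[:, None], phase * sw, rcond=None)
        r = phase - A @ sol
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)  # Huber-type weights
    return sol                                 # (K, b)
```

With a simulated deforming patch superimposed on a linear phase-topography trend, the robust fit recovers the true ratio despite the outliers.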
An efficient ensemble learning method for gene microarray classification.
Osareh, Alireza; Shadgar, Bita
2013-01-01
Gene microarray analysis and classification have proven an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
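A simplified gene filter in the spirit of the fast correlation-based feature selection step can be sketched as follows. For brevity, the symmetric uncertainty measure used in FCBF is replaced by absolute Pearson correlation, and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def corr_feature_select(X, y, redundancy=0.8):
    """Correlation-based filter sketch: rank genes by |corr(gene, label)|,
    then greedily keep a gene only if it is not too correlated with an
    already-selected (more relevant) gene."""
    def acorr(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return abs(np.mean(a * b))
    relevance = np.array([acorr(X[:, j], y) for j in range(X.shape[1])])
    selected = []
    for j in np.argsort(-relevance):          # most relevant genes first
        if all(acorr(X[:, j], X[:, k]) < redundancy for k in selected):
            selected.append(j)
    return selected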
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
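In its simplest form, the speckle tracking step reduces to locating the cross-correlation peak between a reference and a displaced speckle pattern. The sketch below recovers an integer-pixel global shift via FFT cross-correlation; the actual technique tracks subsets with subpixel refinement, so this is only a minimal illustration.

```python
import numpy as np

def track_shift(ref, shifted):
    """Estimate the integer-pixel displacement between two speckle
    patterns by locating the peak of their cross-correlation (computed
    via FFT, assuming periodic boundaries)."""
    f = np.fft.fft2(ref)
    g = np.fft.fft2(shifted)
    xcorr = np.fft.ifft2(np.conj(f) * g).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    n, m = ref.shape
    # wrap displacements into the signed range [-n/2, n/2)
    return ((dy + n // 2) % n - n // 2, (dx + m // 2) % m - m // 2)
```

Rolling a random speckle field by a known amount and tracking it back recovers the shift exactly.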
Hua, Rui; Sun, Su-Qin; Zhou, Qun; Noda, Isao; Wang, Bao-Qin
2003-09-19
Fritillaria is a traditional Chinese herbal medicine with a long history of use for eliminating phlegm and relieving cough in China and some other Asian countries. The objective of this study is to develop a nondestructive and accurate method to discriminate Fritillaria of different geographical origins, which is troublesome with existing analytical methods. We conducted a systematic study on five kinds of Fritillaria by Fourier transform infrared spectroscopy, second-derivative infrared spectroscopy, and two-dimensional (2D) correlation infrared spectroscopy under thermal perturbation. Because Fritillaria contain a large amount of starch, the conventional IR spectra of different Fritillaria show only very limited spectral feature differences. Based on these differences, we can separate different Fritillaria to a limited extent, but this method was deemed not very practical. The second-derivative IR spectra of Fritillaria could enhance spectral resolution, amplify the differences between the IR spectra of different Fritillaria, and reveal some dissimilarity in their starch content when compared with the spectrum of pure starch. Finally, we applied thermal perturbation to Fritillaria and analyzed the resulting spectra by the 2D correlation method to distinguish different Fritillaria easily and clearly. The distinction of very similar Fritillaria was possible because the spectral resolution was greatly enhanced by 2D correlation spectroscopy. In addition, with the dynamic information on molecular structure provided by 2D correlation IR spectra, we studied the differences in the stability of the active components of Fritillaria. The differences were embodied mainly in the intensity ratio of the auto-peak at 985 cm(-1) to other auto-peaks. 2D correlation IR spectroscopy (2D IR) of Fritillaria can thus be a new and powerful method to discriminate Fritillaria.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
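The EB benchmark and the PSI measure referenced above follow standard safety-analysis formulas, sketched below. This is a textbook illustration, not the study's code; `overdispersion` is the negative binomial overdispersion parameter of the safety performance function.

```python
def empirical_bayes(observed, predicted, overdispersion):
    """Classic EB safety estimate: shrink the observed crash count toward
    the safety-performance-function prediction, with shrinkage weight
    w = 1 / (1 + predicted / phi)."""
    w = 1.0 / (1.0 + predicted / overdispersion)
    return w * predicted + (1.0 - w) * observed

def psi(observed, predicted, overdispersion):
    """Potential for Safety Improvement: EB estimate minus the level
    expected for similar sites; positive values flag hotspots."""
    return empirical_bayes(observed, predicted, overdispersion) - predicted
```

For a site predicted at 5 crashes that observed 12 (phi = 2), the EB estimate shrinks to 10 and the PSI of 5 flags the site as a hotspot.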
Epidermal segmentation in high-definition optical coherence tomography.
Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang
2015-01-01
Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed in which the epidermis is segmented in 3 steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for the study and clinical management of skin diseases.
NASA Astrophysics Data System (ADS)
Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan
2014-07-01
A novel method for measuring effective diffusion coefficient of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficients are related to the difference of oxygen concentration across the samples, which can be correlated with the differences of the output voltage of the PEMFC with and without inserting the sample in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
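The measurement principle, inferring the oxygen concentration drop across the sample from the difference in cell output voltage and then applying Fick's first law, can be sketched as follows. This is an idealized illustration assuming Nernstian behaviour of the four-electron oxygen electrode; the constants and function names are assumptions, not the authors' calibration.

```python
import math

R = 8.314      # J/(mol K)
F = 96485.0    # C/mol
T = 298.15     # K

def concentration_from_voltage_drop(c_in, delta_v):
    """Oxygen concentration behind the sample, inferred from the drop in
    cell output voltage via the Nernst relation for the 4-electron oxygen
    reduction: delta_v = (R*T)/(4*F) * ln(c_in / c_out)."""
    return c_in * math.exp(-4 * F * delta_v / (R * T))

def effective_diffusivity(flux, thickness, c_in, c_out):
    """Fick's first law across the porous sample at steady state:
    N = D_eff * (c_in - c_out) / L  =>  D_eff = N * L / (c_in - c_out)."""
    return flux * thickness / (c_in - c_out)
```

A voltage drop of (RT/4F) ln 2, for example, corresponds to the concentration halving across the sample, from which D_eff follows directly at a known oxygen flux.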
NASA Astrophysics Data System (ADS)
Cinar, A. F.; Barhli, S. M.; Hollis, D.; Flansbjer, M.; Tomlinson, R. A.; Marrow, T. J.; Mostafavi, M.
2017-09-01
Digital image correlation has been routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. Accurate segmentation of the crack path is needed to study its interaction with the microstructure and stress fields, and studies of crack behaviour, such as the effect of closure or residual stress in fatigue, require data on its opening displacement. Such information can be obtained from any digital image correlation analysis of cracked components, but its collection by manual methods is quite onerous, particularly for massive amounts of data. We introduce the novel application of Phase Congruency to detect and quantify cracks and their opening. Unlike other crack detection techniques, Phase Congruency does not rely on adjustable threshold values that require user interaction, and so allows large datasets to be treated autonomously. The accuracy of the Phase Congruency based algorithm in detecting cracks is evaluated and compared with conventional methods such as Heaviside function fitting. As Phase Congruency is a displacement-based method, it does not suffer from the noise intensification to which gradient-based methods (e.g. strain thresholding) are susceptible. Its application is demonstrated on experimental data for cracks in quasi-brittle (granitic rock) and ductile (aluminium alloy) materials.
Westhoff, Connie M.; Uy, Jon Michael; Aguad, Maria; Smeland‐Wagman, Robin; Kaufman, Richard M.; Rehm, Heidi L.; Green, Robert C.; Silberstein, Leslie E.
2015-01-01
BACKGROUND There are 346 serologically defined red blood cell (RBC) antigens and 33 serologically defined platelet (PLT) antigens, most of which have known genetic changes in 45 RBC or six PLT genes that correlate with antigen expression. Polymorphic sites associated with antigen expression in the primary literature and reference databases are annotated according to nucleotide positions in cDNA. This makes antigen prediction from next‐generation sequencing data challenging, since it uses genomic coordinates. STUDY DESIGN AND METHODS The conventional cDNA reference sequences for all known RBC and PLT genes that correlate with antigen expression were aligned to the human reference genome. The alignments allowed conversion of conventional cDNA nucleotide positions to the corresponding genomic coordinates. RBC and PLT antigen prediction was then performed using the human reference genome and whole genome sequencing (WGS) data with serologic confirmation. RESULTS Some major differences and alignment issues were found when attempting to convert the conventional cDNA to human reference genome sequences for the following genes: ABO, A4GALT, RHD, RHCE, FUT3, ACKR1 (previously DARC), ACHE, FUT2, CR1, GCNT2, and RHAG. However, it was possible to create usable alignments, which facilitated the prediction of all RBC and PLT antigens with a known molecular basis from WGS data. Traditional serologic typing results for 18 RBC antigens were in agreement with the WGS‐based antigen predictions, providing proof of principle for this approach. CONCLUSION Detailed mapping of conventional cDNA annotated RBC and PLT alleles can enable accurate prediction of RBC and PLT antigens from whole genome sequencing data. PMID:26634332
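The coordinate conversion at the heart of this work, mapping a cDNA position onto the genome through a transcript's exon structure, can be sketched as follows. This is a toy illustration; real transcripts also require UTR offsets, reference-version handling, and alignment-gap awareness, as the abstract notes.

```python
def cdna_to_genomic(cdna_pos, exons, strand="+"):
    """Map a 1-based cDNA nucleotide position to a genomic coordinate,
    given the transcript's exons as (genomic_start, genomic_end) pairs
    (1-based, inclusive, listed in transcript order)."""
    remaining = cdna_pos
    for start, end in exons:
        length = end - start + 1
        if remaining <= length:
            if strand == "+":
                return start + remaining - 1
            return end - remaining + 1   # minus strand runs end -> start
        remaining -= length
    raise ValueError("cDNA position beyond transcript length")
```

For a plus-strand toy transcript with exons (100, 150) and (200, 260), cDNA position 52 lands on the first base of the second exon at genomic position 200.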
Strauss, Rupert W; Muñoz, Beatriz; Jha, Anamika; Ho, Alexander; Cideciyan, Artur V; Kasilian, Melissa L; Wolfson, Yulia; Sadda, SriniVas; West, Sheila; Scholl, Hendrik P N; Michaelides, Michel
2016-08-01
To compare grading results between short-wavelength reduced-illuminance and conventional autofluorescence imaging in Stargardt macular dystrophy. Reliability study. Setting: Moorfields Eye Hospital, London, United Kingdom. Eighteen patients (18 eyes) with Stargardt macular dystrophy. A series of 3 fundus autofluorescence images using 3 different acquisition parameters on a custom-patched device was obtained: (1) 25% laser power and total sensitivity 87; (2) 25% laser power and freely adjusted sensitivity; and (3) 100% laser power and freely adjusted total sensitivity (conventional). The total area of 2 hypoautofluorescent lesion types (definitely decreased autofluorescence and poorly demarcated questionably decreased autofluorescence) was measured. Agreement in grading between the 3 imaging methods was assessed by kappa coefficients (κ) and intraclass correlation coefficients. The mean ± standard deviation area for images acquired with 25% laser power and freely adjusted total sensitivity was 2.04 ± 1.87 mm(2) for definitely decreased autofluorescence (n = 15) and 1.86 ± 2.14 mm(2) for poorly demarcated questionably decreased autofluorescence (n = 12). The intraclass correlation coefficient (95% confidence interval) was 0.964 (0.929, 0.999) for definitely decreased autofluorescence and 0.268 (0.000, 0.730) for poorly demarcated questionably decreased autofluorescence. Short-wavelength reduced-illuminance and conventional fundus autofluorescence imaging showed good concordance in assessing areas of definitely decreased autofluorescence. However, there was significantly higher variability between imaging modalities for assessing areas of poorly demarcated questionably decreased autofluorescence. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
A Calibration Method for Nanowire Biosensors to Suppress Device-to-device Variation
Ishikawa, Fumiaki N.; Curreli, Marco; Chang, Hsiao-Kang; Chen, Po-Chiang; Zhang, Rui; Cote, Richard J.; Thompson, Mark E.; Zhou, Chongwu
2009-01-01
Nanowire/nanotube biosensors have stimulated significant interest; however, the inevitable device-to-device variation in biosensor performance remains a great challenge. We have developed an analytical method to calibrate nanowire biosensor responses that can significantly suppress the device-to-device variation in sensing response. The method is based on our discovery of a strong correlation between the biosensor gate dependence (dIds/dVg) and the absolute response (absolute change in current, ΔI). In2O3 nanowire based biosensors for streptavidin detection were used as the model system. Studying the liquid gate effect and ionic concentration dependence of streptavidin sensing indicates that electrostatic interaction is the dominant mechanism for sensing response. Based on this sensing mechanism and transistor physics, a linear correlation between the absolute sensor response (ΔI) and the gate dependence (dIds/dVg) is predicted and confirmed experimentally. Using this correlation, a calibration method was developed in which the absolute response is divided by dIds/dVg for each device; the calibrated responses from different devices behaved almost identically. Compared to the common normalization method (normalization of the conductance/resistance/current by the initial value), this calibration method proved advantageous using a conventional transistor model. The method presented here substantially suppresses device-to-device variation, allowing the use of nanosensors in large arrays. PMID:19921812
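The calibration step itself is a simple division of each device's absolute response by its gate dependence. A numerical sketch with synthetic numbers (the transconductance range and signal value are assumptions, not measured data):

```python
import numpy as np

# Sketch of the calibration idea: divide each device's absolute response
# (delta I) by its gate dependence (dIds/dVg) so that devices with different
# transconductance give comparable responses. All numbers are synthetic.

rng = np.random.default_rng(0)
n_devices = 5
gm = rng.uniform(0.5, 2.0, n_devices)      # dIds/dVg, varies device to device
true_signal = 3.0                          # common analyte-induced gate shift
delta_I = gm * true_signal                 # absolute response scales with gm

raw_spread = np.std(delta_I) / np.mean(delta_I)        # large relative spread
calibrated = delta_I / gm                              # calibration step
cal_spread = np.std(calibrated) / np.mean(calibrated)  # collapses to ~0
print(raw_spread, cal_spread)
```

In this idealized linear model the calibrated responses are identical across devices; real devices would retain residual, but much reduced, spread.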
Least-squares reverse time migration in elastic media
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Moreover, P- and S-wave separation and polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results demonstrate that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
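At its core, least-squares migration iteratively updates reflectivity m to minimize ||d − Lm||², where L is a linearized (Born) modeling operator and Lᵀ the migration (adjoint) operator. A toy 1-D sketch, with simple convolution standing in for the elastic de-migration operator (this is an illustration of the inversion loop, not the authors' wave-equation implementation):

```python
import numpy as np

# Toy least-squares migration: steepest descent on ||d - L m||^2.
# L is convolution with a wavelet (a stand-in for Born de-migration);
# its adjoint is correlation with the wavelet.

def L(m, w):        # forward (de-migration)
    return np.convolve(m, w, mode="same")

def LT(d, w):       # adjoint (migration)
    return np.convolve(d, w[::-1], mode="same")

w = np.array([0.5, 1.0, 0.5])                 # toy source wavelet
m_true = np.zeros(50); m_true[[15, 30]] = [1.0, -0.7]
d = L(m_true, w)                              # "observed" data

m = np.zeros_like(m_true)
alpha = 0.3                                   # step size, below 2/lambda_max
for _ in range(200):
    m += alpha * LT(d - L(m, w), w)           # gradient update of reflectivity

rel_misfit = np.linalg.norm(d - L(m, w)) / np.linalg.norm(d)
print(rel_misfit)                             # data misfit shrinks with iteration
```

The real elastic LSRTM replaces L and Lᵀ with finite-difference Born modeling and adjoint-state migration, and adds the preconditioning and workflow efficiencies described in the abstract.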
Deng, Jie; Virmani, Sumeet; Young, Joseph; Harris, Kathleen; Yang, Guang-Yu; Rademaker, Alfred; Woloschak, Gayle; Omary, Reed A.; Larson, Andrew C.
2010-01-01
Purpose To test the hypothesis that diffusion-weighted (DW)-PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) MRI provides more accurate liver tumor necrotic fraction (NF) and viable tumor volume (VTV) measurements than conventional DW-SE-EPI (spin echo echo-planar imaging) methods. Materials and Methods Our institutional Animal Care and Use Committee approved all experiments. In six rabbits implanted with 10 VX2 liver tumors, DW-PROPELLER and DW-SE-EPI scans were performed at contiguous axial slice positions covering each tumor volume. Apparent diffusion coefficient maps of each tumor were used to generate spatially resolved tumor viability maps for NF and VTV measurements. We compared NF, whole tumor volume (WTV), and VTV measurements to corresponding reference standard histological measurements based on correlation and concordance coefficients and the Bland–Altman analysis. Results DW-PROPELLER generally improved image quality with less distortion compared to DW-SE-EPI. DW-PROPELLER NF, WTV, and VTV measurements were strongly correlated and satisfactorily concordant with histological measurements. DW-SE-EPI NF measurements were weakly correlated and poorly concordant with histological measurements. Bland–Altman analysis demonstrated that DW-PROPELLER WTV and VTV measurements were less biased from histological measurements than the corresponding DW-SE-EPI measurements. Conclusion DW-PROPELLER MRI can provide spatially resolved liver tumor viability maps for accurate NF and VTV measurements, superior to DW-SE-EPI approaches. DW-PROPELLER measurements may serve as a noninvasive surrogate for pathology, offering the potential for more accurate assessments of therapy response than conventional anatomic size measurements. PMID:18407540
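Deriving NF and VTV from a voxel-wise viability map reduces to thresholding and counting. A minimal sketch (the ADC cutoff and voxel volume are hypothetical; the study derived viability from measured ADC maps):

```python
import numpy as np

# Sketch: turn a per-voxel ADC map into a binary viability map and compute
# necrotic fraction (NF) and viable tumor volume (VTV). The ADC threshold
# and voxel volume are hypothetical, for illustration only.

adc = np.array([[0.8, 1.9, 2.1],
                [1.0, 2.4, 0.9],
                [2.2, 1.1, 2.5]])   # ADC in 1e-3 mm^2/s, synthetic values
voxel_vol_mm3 = 2.0                 # hypothetical voxel volume
necrosis_thresh = 1.5               # hypothetical cutoff: high ADC = necrotic

necrotic = adc > necrosis_thresh
nf = necrotic.mean()                          # necrotic fraction (5/9 here)
vtv = (~necrotic).sum() * voxel_vol_mm3       # viable tumor volume (8.0 mm^3)
print(nf, vtv)
```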
Burns, Angus; Dowling, Adam H; Garvey, Thérèse M; Fleming, Garry J P
2014-10-01
To investigate the inter-examiner variability of contact point displacement measurements (used to calculate the overall Little's Irregularity Index (LII) score) from digital models of the maxillary arch by four independent examiners. Maxillary orthodontic pre-treatment study models of ten patients were scanned using the Lava(tm) Chairside Oral Scanner (LCOS) and 3D digital models were created using Creo(®) computer aided design (CAD) software. Four independent examiners measured the contact point displacements of the anterior maxillary teeth using the software. Measurements were recorded randomly on three separate occasions by the examiners and the measurements (n=600) obtained were analysed using correlation analyses and analyses of variance (ANOVA). LII contact point displacement measurements for the maxillary arch were reproducible for inter-examiner assessment when using the digital method and were highly correlated between examiner pairs for contact point displacement measurements >2mm. The digital measurement technique showed poor correlation for smaller contact point displacement measurements (<2mm) for repeated measurements. The coefficient of variation (CoV) of the digital contact point displacement measurements highlighted 348 of the 600 measurements differed by more than 20% of the mean compared with 516 of 600 for the same measurements performed using the conventional LII measurement technique. Although the inter-examiner variability of LII contact point displacement measurements on the maxillary arch was reduced using the digital compared with the conventional LII measurement methodology, neither method was considered appropriate for orthodontic research purposes particularly when measuring small contact point displacements. Copyright © 2014 Elsevier Ltd. All rights reserved.
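The variability screen described above (coefficient of variation per contact point, flagging measurements more than 20% from the mean) can be sketched as follows, with synthetic measurements standing in for the examiners' repeated readings:

```python
import numpy as np

# Sketch: per-site coefficient of variation (CoV) of repeated measurements,
# and a flag for measurements differing from the site mean by more than 20%.
# The measurement values are synthetic.

rng = np.random.default_rng(1)
measurements = rng.normal(loc=1.5, scale=0.4, size=(10, 3))  # 10 sites x 3 repeats

means = measurements.mean(axis=1, keepdims=True)
cov = measurements.std(axis=1) / means.ravel()       # CoV per contact point
flagged = np.abs(measurements - means) > 0.2 * means # >20% from the mean
print(cov.round(2), int(flagged.sum()))
```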
Prediction of shear wave velocity using empirical correlations and artificial intelligence methods
NASA Astrophysics Data System (ADS)
Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad
2014-06-01
Good understanding of mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from the petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases shear sonic data are not acquired during well logging, often for cost-saving reasons. In this case, shear wave velocity is estimated using available empirical correlations or artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise; considering the importance of shear sonic data as an input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
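A representative example of the empirical-correlation route is Castagna's mudrock line (Castagna et al., 1985) for water-saturated clastics, Vp = 1.16 Vs + 1.36 (km/s), inverted here for Vs. This is a generic published relation offered for illustration, not the correlation fitted to the Iranian well data in the study:

```python
# Sketch: estimate shear velocity from compressional velocity with the
# classic mudrock line. Any given field would refit the coefficients locally.

def vs_mudrock(vp_km_s):
    """Estimate shear velocity (km/s) from compressional velocity (km/s)."""
    return (vp_km_s - 1.36) / 1.16

for vp in (2.5, 3.5, 4.5):
    print(vp, round(vs_mudrock(vp), 2))
```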
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Pickering, Ethan M; Hossain, Mohammad A; Mousseau, Jack P; Swanson, Rachel A; French, Roger H; Abramson, Alexis R
2017-01-01
Current approaches to building efficiency diagnoses include conventional energy audit techniques that can be expensive and time consuming. In contrast, virtual energy audits of readily available 15-minute-interval building electricity consumption are being explored to provide quick, inexpensive, and useful insights into building operation characteristics. A cross-sectional analysis of six buildings in two different climate zones provides methods for data cleaning, population-based building comparisons, and relationships (correlations) of weather and electricity consumption. Data cleaning methods have been developed to categorize and appropriately filter or correct anomalous data including outliers, missing data, and erroneous values (resulting in < 0.5% anomalies). The utility of a cross-sectional analysis of a sample set of buildings' electricity consumption is found through comparisons of baseload, daily consumption variance, and energy use intensity. Correlations of weather and electricity consumption 15-minute interval datasets show important relationships for the heating and cooling seasons using computed correlations of a Time-Specific-Averaged-Ordered Variable (exterior temperature) and corresponding averaged variables (electricity consumption) (the TSAOV method). The TSAOV method is unique as it introduces time of day as a third variable while also minimizing randomness in both correlated variables through averaging. This study found that many of the pair-wise linear correlation analyses lacked strong relationships, prompting the development of the new TSAOV method to uncover the causal relationship between electricity and weather.
We conclude that a combination of varied HVAC system operations, building thermal mass, plug load use, and building set point temperatures is likely responsible for the poor correlations in the prior studies, while the correlation of time-specific-averaged-ordered temperature and corresponding averaged variables developed herein adequately accounts for these issues and enables discovery of strong linear pair-wise correlation R values. TSAOV correlations lay the foundation for a new approach to building studies that mitigates plug-load interference and yields more accurate insights into the weather-energy relationship for all building types. Across all six buildings analyzed, the TSAOV method reported very significant average correlations per building of 0.94 to 0.82 in magnitude. Our rigorous statistics-based methods applied to 15-minute-interval electricity data further enable virtual energy audits of buildings to quickly and inexpensively inform energy savings measures. PMID:29088269
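The TSAOV construction (average each time-of-day slot across many days, then order the slot averages by temperature before correlating) can be sketched with synthetic data. The load model below is an assumption for illustration; the study used measured 15-minute building data:

```python
import numpy as np

# Sketch of the TSAOV idea: time-specific averaging across days suppresses
# the noise that weakens a raw pair-wise temperature/consumption correlation.

rng = np.random.default_rng(2)
n_days, slots = 60, 96                     # 96 fifteen-minute slots per day
base_temp = 20 + 8*np.sin(np.linspace(0, 2*np.pi, slots))    # daily cycle
temp = base_temp + rng.normal(0, 3, (n_days, slots))         # noisy temperature
power = 50 + 2.0*temp + rng.normal(0, 10, (n_days, slots))   # cooling-driven load

t_avg = temp.mean(axis=0)                  # time-specific average per slot
p_avg = power.mean(axis=0)
order = np.argsort(t_avg)                  # the "ordered variable" step
# (the Pearson r itself is order-invariant; ordering matters for binning/plots)
r = np.corrcoef(t_avg[order], p_avg[order])[0, 1]

r_raw = np.corrcoef(temp.ravel(), power.ravel())[0, 1]
print(round(r_raw, 2), round(r, 2))        # averaging strengthens the correlation
```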
Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric
2006-04-01
Maize is one of the main crops worldwide and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel to conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies the results are expressed in genome percentages (in contrast to seed or grain percentages). Our objective was to assess the accuracy of real-time PCR based assays to accurately quantify the contents of transgenic grains in non-GM fields in comparison with the real cross-fertilization rate as determined by phenotypical analysis. We performed this study in a region where both GM and conventional maize are normally cultivated and used the predominant transgenic maize Mon810 in combination with a conventional maize variety which displays the characteristic of white grains (therefore allowing cross-pollination quantification as percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions.
Yiannakas, Marios C; Tozer, Daniel J; Schmierer, Klaus; Chard, Declan T; Anderson, Valerie M; Altmann, Daniel R; Miller, David H; Wheeler-Kingshott, Claudia A M
2013-05-01
There are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans. To determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images. We obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML. Our study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively. ADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished.
Three-dimensional magnetic resonance angiography of vascular lesions in children.
Katayama, H; Shimizu, T; Tanaka, Y; Narabayashi, I; Tamai, H
2000-01-01
We applied three-dimensional (3D) magnetic resonance (MR) angiography to vascular lesions in children and evaluated the clinical usefulness of this technique. Ten patients, whose ages ranged from 1 month to 16 years, underwent 3D MR angiography for 12 vascular lesions, including lesions in seven pulmonary arteries, two thoracic aortae, a pair of renal arteries, and one iliac artery. Three-dimensional MR angiography was performed with body- or pelvic-phased array coils on a 1.5-T scanner using a fast spoiled gradient echo sequence. Data were acquired with the following parameters: TE, 1.9 ms; TR, 10.1 ms; flip angle, 20-60 degrees; 1 or 2 NEX; field of view, 24-48 x 18-40 cm; matrix, 256 or 512 x 128 or 256; slice thickness, 1.2-7.5 mm; and 12, 28, or 60 partitions. Vascular imaging was enhanced with 20% gadolinium-diethylenetriaminepentaacetic acid. The examination was performed under breath-holding in six patients and with shallow breathing in four patients. In a comparative study with other noninvasive methods, 3D MR angiography was superior in seven of nine cases to other noninvasive examinations, and in two cases all methods evaluated the lesions equally. Furthermore, six cases were compared with conventional angiography. In five of the six cases, both methods depicted the lesions similarly, and in one case, MR angiography was more effective. A quantitative comparison of vascular diameter in the MR image was made with that in the conventional angiographic image. The correlation between them was excellent: y = 1.145x - 2.090 (r = 0.987; P < 0.0001), where x is the diameter in the conventional angiographic images, y is the diameter in the MR images, and r is the correlation coefficient. In conclusion, 3D MR angiography is useful for depicting peripheral vascular lesions in children.
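The reported relation y = 1.145x - 2.090 with r = 0.987 is an ordinary least-squares fit of MR diameters against conventional angiographic diameters. A generic sketch of how such a fit is computed, on synthetic diameters built to mimic the reported relation:

```python
import numpy as np

# Sketch: least-squares fit and correlation coefficient for paired diameter
# measurements. The data below are synthetic, generated from the reported
# regression line plus noise; they are not the study's measurements.

rng = np.random.default_rng(3)
x = rng.uniform(3, 25, 20)                     # angiographic diameters, mm
y = 1.145*x - 2.090 + rng.normal(0, 0.5, 20)   # MR diameters with noise

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(round(slope, 2), round(intercept, 2), round(r, 3))
```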
Kim, Keo-Sik; Seo, Jeong-Hwan; Song, Chul-Gyu
2011-08-10
Radiological scoring methods such as colon transit time (CTT) have been widely used for the assessment of bowel motility. However, these radiograph-based methods need cumbersome radiological instruments and involve frequent exposure to radiation. Therefore, a non-invasive estimation algorithm of bowel motility, based on a back-propagation neural network (BPNN) model of bowel sounds (BS) obtained by auscultation, was devised. Twelve healthy males (age: 24.8 ± 2.7 years) and 6 patients with spinal cord injury (6 males, age: 55.3 ± 7.1 years) were examined. BS signals generated during the digestive process were recorded from 3 colonic segments (ascending, descending and sigmoid colon), and the acoustical features (jitter and shimmer) of each individual BS segment were obtained. Only 6 features (J1,3, J3,3, S1,2, S2,1, S2,2, S3,2), which are highly correlated to the CTTs measured by the conventional method, were used as the features of the input vector for the BPNN. As a result, both the jitters and shimmers of the normal subjects were relatively higher than those of the patients, whereas the CTTs of the normal subjects were relatively lower than those of the patients (p < 0.01). Also, through k-fold cross validation, the correlation coefficient and mean average error between the CTTs measured by conventional radiography and the values estimated by our algorithm were 0.89 and 10.6 hours, respectively. The jitter and shimmer of the BS signals generated during peristalsis could be clinically useful discriminative parameters of bowel motility. Also, the devised algorithm showed good potential for the continuous monitoring and estimation of bowel motility, instead of conventional radiography, and thus it could be used as a complementary tool for the non-invasive measurement of bowel motility.
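Jitter and shimmer are standard acoustic perturbation measures: the average cycle-to-cycle variation of period (jitter) and of amplitude (shimmer), each normalized by the mean. A sketch with synthetic period/amplitude tracks standing in for values extracted from recorded bowel-sound segments:

```python
import numpy as np

# Sketch of the two acoustical features named above. Input tracks are synthetic.

def jitter(periods):
    """Mean absolute cycle-to-cycle period difference over mean period."""
    p = np.asarray(periods, float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)

def shimmer(amplitudes):
    """Mean absolute cycle-to-cycle amplitude difference over mean amplitude."""
    a = np.asarray(amplitudes, float)
    return np.mean(np.abs(np.diff(a))) / np.mean(a)

periods = [10.0, 10.4, 9.8, 10.1, 10.3]   # ms, synthetic
amps = [1.0, 1.2, 0.9, 1.1, 1.0]          # a.u., synthetic
print(round(jitter(periods), 4), round(shimmer(amps), 4))
```

In the study, six such features per recording formed the input vector to the BPNN that estimated CTT.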
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with much fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
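The "shift-and-stack" decoding can be illustrated in one dimension: shift the recorded trace back by each known impact time and sum, so the energy from every impact aligns at the true travel time while cross terms spread out. The impact times and reflector below are synthetic:

```python
import numpy as np

# Sketch of SIST shift-and-stack decoding on a synthetic 1-D record
# with a single reflector and a coded impact sequence.

n = 400
impulse_response = np.zeros(n); impulse_response[60] = 1.0  # reflector at lag 60
impact_times = [0, 37, 81, 130, 188, 250]                   # coded impact sequence

record = np.zeros(n)
for t in impact_times:                  # superposed responses of all impacts
    record += np.roll(impulse_response, t)

decoded = np.zeros(n)
for t in impact_times:                  # shift back by each impact time and stack
    decoded += np.roll(record, -t)

print(int(np.argmax(decoded)), decoded.max())  # peak at lag 60, height = #impacts
```

With distinct pairwise differences between impact times, the aligned peak reaches the number of impacts while every cross term contributes only unit-height correlation noise, which is the suppression the coded sweep provides.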
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
As a mineral, the traditional Chinese medicine calamine has a similar shape to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated; given the large number of calamine samples, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using the physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of samples into BP-ANN, a BP-ANN model of qualitative identification was established, whose accuracy rate increased to 95%. The MRCC method can be used as a NIR-based method in the process of BP-ANN modeling.
Mapping Diffusion in a Living Cell via the Phasor Approach
Ranjit, Suman; Lanzano, Luca; Gratton, Enrico
2014-01-01
Diffusion of a fluorescent protein within a cell has been measured using either fluctuation-based techniques (fluorescence correlation spectroscopy (FCS) or raster-scan image correlation spectroscopy) or particle tracking. However, none of these methods enables us to measure the diffusion of the fluorescent particle at each pixel of the image. Measurement using conventional single-point FCS at every individual pixel results in continuous long exposure of the cell to the laser and eventual bleaching of the sample. To overcome this limitation, we have developed what we believe to be a new method of scanning with simultaneous construction of a fluorescent image of the cell. In this method of modified raster scanning, as it acquires the image, the laser scans each individual line multiple times before moving to the next line. This continues until the entire area is scanned. This is different from the original raster-scan image correlation spectroscopy approach, where data are acquired by scanning each frame once and then scanning the image multiple times. The total time of data acquisition needed for this method is much shorter than the time required for traditional FCS analysis at each pixel. However, at a single pixel, the acquired intensity time sequence is short, requiring nonconventional analysis of the correlation function to extract information about the diffusion. These correlation data have been analyzed using the phasor approach, a fit-free method that was originally developed for the analysis of FLIM images. Analysis using this method results in an estimation of the average diffusion coefficient of the fluorescent species at each pixel of an image, and thus a detailed diffusion map of the cell can be created. PMID:25517145
Yukimasa, Nobuyasu; Miura, Keisuke; Miyagawa, Yukiko; Fukuchi, Kunihiko
2015-01-01
Automated nontreponemal and treponemal test reagents based on the latex agglutination method (immunoticles auto3 RPR: ITA3RPR and immunoticles auto3 TP: ITA3TP) have been developed to address the issues of conventional manual methods, such as their subjectivity and the massive number of assays required. We evaluated these reagents in regards to their performance, reactivity to antibody isotype, and clinical significance. ITA3RPR and ITA3TP were measured using a clinical chemistry analyzer. Reactivity to antibody isotype was examined by gel filtration analysis. ITA3RPR and ITA3TP showed reactivity to both IgM- and IgG-class antibodies and detected early infections. ITA3RPR was verified to show a higher reactivity to IgM-class antibodies than the conventional methods. ITA3RPR correlated with VDRL in the high titer range, and measurement values decreased with treatment. ITA3RPR showed a negative result earlier after treatment than conventional methods. ITA3TP showed high specificity and did not give any false-negative reactions. Significant differences in the measurement values of ITA3RPR between the infective and previously infected groups were verified. The double test of ITA3RPR and ITA3TP enables efficient and objective judgment for syphilis diagnosis and treatment, achieving clinical availability. Copyright © 2014 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Costa, M.O.L.P.; Heráclio, S.A.; Coelho, A.V.C.; Acioly, V.L.; Souza, P.R.E.; Correia, M.T.S.
2015-01-01
In the present study, we compared the performance of a ThinPrep cytological method with the conventional Papanicolaou test for diagnosis of cytopathological changes, with regard to unsatisfactory results achieved at the Central Public Health Laboratory of the State of Pernambuco. A population-based, cross-sectional study was performed with women aged 18 to 65 years, who spontaneously sought gynecological services in Public Health Units in the State of Pernambuco, Northeast Brazil, between April and November 2011. All patients in the study were given a standardized questionnaire on sociodemographics, sexual characteristics, reproductive practices, and habits. A total of 525 patients were assessed by the two methods (11.05% were under the age of 25 years, 30.86% were single, 4.4% had had more than 5 sexual partners, 44% were not using contraception, 38.85% were users of alcohol, 24.38% were smokers, 3.24% had consumed drugs previously, 42.01% had gynecological complaints, and 12.19% had an early history of sexually transmitted diseases). The two methods showed poor correlation (k=0.19; 95%CI=0.11–0.26; P<0.001). The ThinPrep method reduced the rate of unsatisfactory results from 4.38% to 1.71% (χ2=5.28; P=0.02), and the number of cytopathological changes diagnosed increased from 2.47% to 3.04%. This study confirmed that adopting the ThinPrep method for diagnosis of cervical cytological samples was an improvement over the conventional method. Furthermore, this method may reduce possible losses from cytological resampling and reduce obstacles to patient follow-up, improving the quality of the public health system in the State of Pernambuco, Northeast Brazil. PMID:26247400
Dalmora, Sergio Luiz; Nogueira, Daniele Rubert; D'Avila, Felipe Bianchini; Souto, Ricardo Bizogne; Leal, Diogo Paim
2011-01-01
A stability-indicating capillary zone electrophoresis (CZE) method was validated for the analysis of entecavir in pharmaceutical formulations, using nimesulide as an internal standard. A fused-silica capillary (50 µm i.d.; effective length, 40 cm) was used while being maintained at 25°C; the applied voltage was 25 kV. The background electrolyte solution consisted of a 20 mM sodium tetraborate solution at pH 10. Injections were performed using a pressure mode at 50 mbar for 5 s, with detection at 216 nm. The specificity and stability-indicating capability were proven through forced degradation studies, which also evaluated the in vitro cytotoxicity of the degraded products. The method was linear over the concentration range of 1-200 µg mL(-1) (r(2) = 0.9999), and was applied for the analysis of entecavir in tablet dosage forms. The results were correlated with those of validated conventional and fast LC methods, showing non-significant differences (p > 0.05).
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. By the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by the primary fluctuation and the EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the change in statistics. The experimental results show that the method successfully monitors the metering deviation of a Class 0.2 EVT accurately. The method demonstrates the accurate evaluation of on-line monitoring of the metering performance on an EVT without a standard voltage transformer.
Short-range second order screened exchange correction to RPA correlation energies
NASA Astrophysics Data System (ADS)
Beuerle, Matthias; Ochsenfeld, Christian
2017-11-01
Direct random phase approximation (RPA) correlation energies have become increasingly popular as a post-Kohn-Sham correction, due to significant improvements over DFT calculations for properties such as long-range dispersion effects, which are problematic in conventional density functional theory. On the other hand, RPA still has various weaknesses, such as unsatisfactory results for non-isogyric processes. This can in parts be attributed to the self-correlation present in RPA correlation energies, leading to significant self-interaction errors. Therefore a variety of schemes have been devised to include exchange in the calculation of RPA correlation energies in order to correct this shortcoming. One of the most popular RPA plus exchange schemes is the second order screened exchange (SOSEX) correction. RPA + SOSEX delivers more accurate absolute correlation energies and also improves upon RPA for non-isogyric processes. On the other hand, RPA + SOSEX barrier heights are worse than those obtained from plain RPA calculations. To combine the benefits of RPA correlation energies and the SOSEX correction, we introduce a short-range RPA + SOSEX correction. Proof of concept calculations and benchmarks showing the advantages of our method are presented.
Twelve-Year Follow-Up of Laser In Situ Keratomileusis for Moderate to High Myopia
Ikeda, Tetsuya; Igarashi, Akihito; Kasahara, Sumie
2017-01-01
Purpose To assess the long-term clinical outcomes of conventional laser in situ keratomileusis (LASIK) for moderate to high myopia. Methods We retrospectively examined 68 eyes of 37 consecutive patients who underwent conventional LASIK for the correction of myopia (−3.00 to −12.75 diopters (D)). At 3 months and 1, 4, 8, and 12 years postoperatively, we assessed the safety, efficacy, predictability, stability, mean keratometry, central corneal thickness, and adverse events. Results The safety and efficacy indices were 0.82 ± 0.29 and 0.67 ± 0.37, respectively, 12 years postoperatively. At 12 years, 53% and 75% of the eyes were within 0.5 and 1.0 D, respectively, of the targeted correction. Manifest refraction changes of −0.74 ± 0.99 D occurred from 3 months to 12 years after LASIK (p < 0.001). We found a significant correlation of refractive regression with the changes in keratometric readings from 3 months to 12 years postoperatively (Pearson correlation coefficient, r = −0.28, p = 0.02), but not with the changes in central corneal thickness (r = −0.08, p = 0.63). No vision-threatening complications occurred in any case. Conclusions Conventional LASIK offered good safety outcomes during the 12-year observation period. However, the efficacy and the predictability gradually decreased with time owing to myopic regression in relation to corneal steepening. PMID:28596969
Parallel imaging of knee cartilage at 3 Tesla.
Zuo, Jin; Li, Xiaojuan; Banerjee, Suchandrima; Han, Eric; Majumdar, Sharmila
2007-10-01
To evaluate the feasibility and reproducibility of quantitative cartilage imaging with parallel imaging at 3T and to determine the impact of the acceleration factor (AF) on morphological and relaxation measurements. An eight-channel phased-array knee coil was employed for conventional and parallel imaging on a 3T scanner. The imaging protocol consisted of a T2-weighted fast spin echo (FSE), a 3D-spoiled gradient echo (SPGR), a custom 3D-SPGR T1rho, and a 3D-SPGR T2 sequence. Parallel imaging was performed with an array spatial sensitivity technique (ASSET). The left knees of six healthy volunteers were scanned with both conventional and parallel imaging (AF = 2). Morphological parameters and relaxation maps from parallel imaging methods (AF = 2) showed comparable results with conventional method. The intraclass correlation coefficient (ICC) of the two methods for cartilage volume, mean cartilage thickness, T1rho, and T2 were 0.999, 0.977, 0.964, and 0.969, respectively, while demonstrating excellent reproducibility. No significant measurement differences were found when AF reached 3 despite the low signal-to-noise ratio (SNR). The study demonstrated that parallel imaging can be applied to current knee cartilage quantification at AF = 2 without degrading measurement accuracy with good reproducibility while effectively reducing scan time. Shorter imaging times can be achieved with higher AF at the cost of SNR. (c) 2007 Wiley-Liss, Inc.
Kesharwani, Manoj K; Manna, Debashree; Sylvetsky, Nitai; Martin, Jan M L
2018-03-01
We have re-evaluated the X40×10 benchmark for halogen bonding using conventional and explicitly correlated coupled cluster methods. For the aromatic dimers at small separation, improved CCSD(T)-MP2 "high-level corrections" (HLCs) cause substantial reductions in the dissociation energy. For the bromine and iodine species, (n-1)d subvalence correlation increases dissociation energies and turns out to be more important for noncovalent interactions than is generally realized; (n-1)sp subvalence correlation is much less important. The (n-1)d subvalence term is dominated by core-valence correlation; with the smaller cc-pVDZ-F12-PP and cc-pVTZ-F12-PP basis sets, basis set convergence for the core-core contribution becomes sufficiently erratic that it may compromise results overall. The two factors conspire to generate discrepancies of up to 0.9 kcal/mol (0.16 kcal/mol RMS) between the original X40×10 data and the present revision.
A Way to Select Electrical Sheets of the Segment Stator Core Motors.
NASA Astrophysics Data System (ADS)
Enomoto, Yuji; Kitamura, Masashi; Sakai, Toshihiko; Ohara, Kouichiro
The segment stator core, high-density winding coil, and high-energy-product permanent magnet are indispensable technologies in the development of compact and highly efficient motors. The conventional design method for the segment stator core mostly depended on experience in selecting a suitable electromagnetic material, and was far from an optimized design. Therefore, we have developed a novel design method for selecting a suitable electromagnetic material, based on evaluating the correlation between material characteristics and motor performance. It enables the selection of a suitable electromagnetic material that will meet the motor specification.
Effect of Correlated Rotational Noise
NASA Astrophysics Data System (ADS)
Hancock, Benjamin; Wagner, Caleb; Baskaran, Aparna
The traditional model of a self-propelled particle (SPP) is one where the body axis along which the particle travels reorients itself through rotational diffusion. If the reorientation process is instead driven by colored noise, rather than the standard Gaussian white noise, the resulting statistical mechanics cannot be accessed through conventional methods. In this talk we present results comparing three methods of deriving the statistical mechanics of an SPP with a reorientation process driven by colored noise. We illustrate the differences and similarities in the resulting statistical mechanics through their ability to accurately capture the particle's response to external aligning fields.
Cullinan, David B; Hondrogiannis, George; Henderson, Terry J
2008-04-15
Two-dimensional 1H-13C HSQC (heteronuclear single quantum correlation) and fast-HMQC (heteronuclear multiple quantum correlation) pulse sequences were implemented using a sensitivity-enhanced, cryogenic probehead for detecting compounds relevant to the Chemical Weapons Convention present in complex mixtures. The resulting methods demonstrated exceptional sensitivity for detecting the analytes at trace level concentrations. 1H-13C correlations of target analytes at < or = 25 microg/mL were easily detected in a sample where the 1H solvent signal was approximately 58,000-fold more intense than the analyte 1H signals. The problem of overlapping signals typically observed in conventional 1H spectroscopy was essentially eliminated, while 1H and 13C chemical shift information could be derived quickly and simultaneously from the resulting spectra. The fast-HMQC pulse sequences generated magnitude mode spectra suitable for detailed analysis in approximately 4.5 h and can be used in experiments to efficiently screen a large number of samples. The HSQC pulse sequences, on the other hand, required roughly twice the data acquisition time to produce suitable spectra. These spectra, however, were phase-sensitive, contained considerably more resolution in both dimensions, and proved to be superior for detecting analyte 1H-13C correlations. Furthermore, a HSQC spectrum collected with a multiplicity-edited pulse sequence provided additional structural information valuable for identifying target analytes. The HSQC pulse sequences are ideal for collecting high-quality data sets with overnight acquisitions and logically follow the use of fast-HMQC pulse sequences to rapidly screen samples for potential target analytes. 
Use of the pulse sequences considerably improves the performance of NMR spectroscopy as a complementary technique for the screening, identification, and validation of chemical warfare agents and other small-molecule analytes present in complex mixtures and environmental samples.
NASA Astrophysics Data System (ADS)
Aoun, Bachir; Yu, Cun; Fan, Longlong; Chen, Zonghai; Amine, Khalil; Ren, Yang
2015-04-01
A generalized method is introduced to extract critical information from series of ranked correlated data. The method is generally applicable to all types of spectra evolving as a function of any arbitrary parameter. This approach is based on correlation functions and statistical scedasticity formalism. Numerous challenges in analyzing high throughput experimental data can be tackled using the herein proposed method. We applied this method to understand the reactivity pathway and formation mechanism of a Li-ion battery cathode material during high temperature synthesis using in-situ high-energy X-ray diffraction. We demonstrate that Pearson's correlation function can easily unravel all major phase transition and, more importantly, the minor structural changes which cannot be revealed by conventionally inspecting the series of diffraction patterns. Furthermore, a two-dimensional (2D) reactivity pattern calculated as the scedasticity along all measured reciprocal space of all successive diffraction pattern pairs unveils clearly the structural evolution path and the active areas of interest during the synthesis. The methods described here can be readily used for on-the-fly data analysis during various in-situ operando experiments in order to quickly evaluate and optimize experimental conditions, as well as for post data analysis and large data mining where considerable amount of data hinders the feasibility of the investigation through point-by-point inspection.
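The core idea of tracking Pearson's correlation across a ranked series of patterns can be illustrated with a short sketch; the function name and data layout are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def successive_correlations(patterns):
    """Pearson correlation between each pair of successive patterns.

    patterns : 2-D array, one diffraction pattern (spectrum) per row,
               rows ordered by the evolving parameter (e.g. temperature).
    Returns a 1-D array of length len(patterns) - 1; values near 1 mean
    no change, and dips flag a structural transition between successive
    measurements.
    """
    patterns = np.asarray(patterns, dtype=float)
    return np.array([
        np.corrcoef(patterns[i], patterns[i + 1])[0, 1]
        for i in range(len(patterns) - 1)
    ])
```

Plotting this sequence against the scan parameter gives an at-a-glance view of where the series changes, without inspecting the patterns point by point.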
Bio-barcode gel assay for microRNA
NASA Astrophysics Data System (ADS)
Lee, Hyojin; Park, Jeong-Eun; Nam, Jwa-Min
2014-02-01
MicroRNA has been identified as a potential biomarker because the expression level of microRNA is correlated with various cancers. Its detection at low concentrations would be highly beneficial for cancer diagnosis. Here, we develop a new type of DNA-modified gold nanoparticle-based bio-barcode assay that uses a conventional gel electrophoresis platform and potassium cyanide chemistry, and show that this assay can detect microRNA at aM levels without enzymatic amplification. It is also shown that single-base-mismatched microRNA can be differentiated from perfectly matched microRNA, and that the multiplexed detection of various combinations of microRNA sequences is possible with this approach. Finally, differently expressed microRNA levels are selectively detected from cancer cells using the bio-barcode gel assay, and the results are compared with conventional polymerase chain reaction-based results. The method and results shown herein pave the way for practical use of a conventional gel electrophoresis platform for detecting biomolecules of interest even at aM levels without polymerase chain reaction amplification.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher order correlations in the conventional moment closure models of the chemical source term cannot be neglected, making the applications of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative, wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
Yasukawa, Keiko; Shimosawa, Tatsuo; Okubo, Shigeo; Yatomi, Yutaka
2018-01-01
Background Human mercaptalbumin and human non-mercaptalbumin have been reported as markers for various pathological conditions, such as kidney and liver diseases. These markers play important roles in redox regulation throughout the body. Despite the recognition of these markers in various pathophysiologic conditions, measurements of human mercaptalbumin and non-mercaptalbumin have not become widespread because of the technical complexity and long measurement times of conventional methods. Methods Based on previous reports, we explored the optimal analytical conditions for a high-performance liquid chromatography method using an anion-exchange column packed with a hydrophilic polyvinyl alcohol gel. The method was then validated using performance tests as well as measurements of various patients' serum samples. Results We successfully established a reliable high-performance liquid chromatography method with an analytical time of only 12 min per test. The repeatability (within-day variability) and reproducibility (day-to-day variability) were 0.30% and 0.27% (CV), respectively. A very good correlation was obtained with the results of the conventional method. Conclusions A practical method for the clinical measurement of human mercaptalbumin and non-mercaptalbumin was established. This high-performance liquid chromatography method is expected to be a powerful tool for expanding the clinical usefulness of these markers and elucidating the roles of albumin in redox reactions throughout the human body.
MO-FG-CAMPUS-TeP3-04: Deliverable Robust Optimization in IMPT Using Quadratic Objective Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, J; Liu, W; Bues, M
Purpose: To find and evaluate a way of applying deliverable MU constraints in robust spot intensity optimization in Intensity-Modulated-Proton-Therapy (IMPT), to prevent plan quality and robustness from degrading due to machine deliverable MU-constraints. Methods: Currently, the influence of the deliverable MU-constraints is retrospectively evaluated by post-processing immediately following optimization. In this study, we propose a new method based on the quasi-Newton-like L-BFGS-B algorithm with which we turn deliverable MU-constraints on and off alternately during optimization. Seven patients with two different machine settings (small and large spot size) were planned with both conventional and new methods. For each patient, three kinds of plans were generated — conventional non-deliverable plan (plan A), conventional deliverable plan with post-processing (plan B), and new deliverable plan (plan C). We performed this study with both realistic (small) and artificial (large) deliverable MU-constraints. Results: With small minimum MU-constraints considered, the new method achieved a slightly better plan quality than the conventional method (D95% CTV normalized to the prescription dose: 0.994[0.992∼0.996] (Plan C) vs 0.992[0.986∼0.996] (Plan B)). With large minimum MU constraints considered, results show that the new method maintains plan quality while plan quality from the conventional method is degraded greatly (D95% CTV normalized to the prescription dose: 0.987[0.978∼0.994] (Plan C) vs 0.797[0.641∼1.000] (Plan B)). Meanwhile, plan robustness of the two methods' results is comparable. (For all 7 patients, CTV DVH band gap at D95% normalized to the prescription dose: 0.015[0.005∼0.043] (Plan C) vs 0.012[0.006∼0.038] (Plan B) with small MU-constraints and 0.019[0.009∼0.039] (Plan C) vs 0.030[0.015∼0.041] (Plan B) with large MU-constraints) Conclusion: A positive correlation has been found between plan quality degradation and the magnitude of the deliverable minimal MU.
Compared to the conventional post-processing method, our new method of incorporating deliverable minimal MU-constraints directly into plan optimization can produce machine-deliverable plans with better plan quality and non-compromised plan robustness. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by Mayo Arizona State University Seed Grant and by The Kemper Marley Foundation.
Richter, Craig G; Thompson, William H; Bosman, Conrado A; Fries, Pascal
2015-07-01
The quantification of covariance between neuronal activities (functional connectivity) requires the observation of correlated changes and therefore multiple observations. The strength of such neuronal correlations may itself undergo moment-by-moment fluctuations, which might e.g. lead to fluctuations in single-trial metrics such as reaction time (RT), or may co-fluctuate with the correlation between activity in other brain areas. Yet, quantifying the relation between moment-by-moment co-fluctuations in neuronal correlations is precluded by the fact that neuronal correlations are not defined per single observation. The proposed solution quantifies this relation by first calculating neuronal correlations for all leave-one-out subsamples (i.e. the jackknife replications of all observations) and then correlating these values. Because the correlation is calculated between jackknife replications, we address this approach as jackknife correlation (JC). First, we demonstrate the equivalence of JC to conventional correlation for simulated paired data that are defined per observation and therefore allow the calculation of conventional correlation. While the JC recovers the conventional correlation precisely, alternative approaches, like sorting-and-binning, result in detrimental effects of the analysis parameters. We then explore the case of relating two spectral correlation metrics, like coherence, that require multiple observation epochs, where the only viable alternative analysis approaches are based on some form of epoch subdivision, which results in reduced spectral resolution and poor spectral estimators. We show that JC outperforms these approaches, particularly for short epoch lengths, without sacrificing any spectral resolution. Finally, we note that the JC can be applied to relate fluctuations in any smooth metric that is not defined on single observations. Copyright © 2015. Published by Elsevier Inc.
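The jackknife-correlation (JC) idea described above can be sketched in a few lines. Here the subsample mean stands in for a metric that is only defined across observations (such as coherence); the stand-in metric and all names are illustrative assumptions. For this linear metric, the JC reproduces the conventional correlation exactly, consistent with the abstract's equivalence claim:

```python
import numpy as np

def jackknife_correlation(x, y):
    """Correlate two paired per-observation quantities via their
    jackknife replications.

    Each leave-one-out subsample yields one replication of the metric
    (here the subsample mean, standing in for a metric like coherence
    that is undefined on a single trial); the replications from the
    two quantities are then correlated.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # jackknife replications: metric computed on every leave-one-out subsample
    jx = np.array([np.delete(x, i).mean() for i in range(n)])
    jy = np.array([np.delete(y, i).mean() for i in range(n)])
    return np.corrcoef(jx, jy)[0, 1]
```

In the spectral use case, `np.delete(x, i).mean()` would be replaced by the coherence (or other spectral metric) computed over all epochs except epoch i, preserving full spectral resolution.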
Quantum tomography for collider physics: illustrations with lepton-pair production
NASA Astrophysics Data System (ADS)
Martens, John C.; Ralston, John P.; Takaki, J. D. Tapia
2018-01-01
Quantum tomography is a method to experimentally extract all that is observable about a quantum mechanical system. We introduce quantum tomography to collider physics with the illustration of the angular distribution of lepton pairs. The tomographic method bypasses much of the field-theoretic formalism to concentrate on what can be observed with experimental data. We provide a practical, experimentally driven guide to model-independent analysis using density matrices at every step. Comparison with traditional methods of analyzing angular correlations of inclusive reactions finds many advantages in the tomographic method, which include manifest Lorentz covariance, direct incorporation of positivity constraints, exhaustively complete polarization information, and new invariants free from frame conventions. For example, experimental data can determine the entanglement entropy of the production process. We give reproducible numerical examples and provide a supplemental standalone computer code that implements the procedure. We also highlight a property of complex positivity that guarantees in a least-squares type fit that a local minimum of a χ² statistic will be a global minimum: there are no isolated local minima. This property, with an automated implementation of positivity, promises to mitigate issues relating to multiple minima and convention dependence that have been problematic in previous work on angular distributions.
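The role of positivity can be illustrated with a generic construction: any matrix of the form AA†, normalized to unit trace, is automatically a valid (positive semi-definite, unit-trace) density matrix, and quantities such as the von Neumann entropy follow directly from its eigenvalues. This is a minimal NumPy sketch of that idea, not the paper's specific parameterization for lepton-pair angular distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameterizing rho = A A^dagger / tr(A A^dagger) builds positivity in
# for any complex A, which is what lets a fit enforce the constraint
# automatically instead of checking it after the fact.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# the eigenvalues of a valid density matrix form a probability
# distribution, so an entropy is well defined
p = np.linalg.eigvalsh(rho)
entropy = float(-np.sum(p * np.log(p)))  # von Neumann entropy
```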
An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT.
Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao
2017-12-09
Electrical capacitance tomography (ECT) is a promising imaging technology for permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and medium distributions by using machine learning rather than an electromagnetic-sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved over that of conventional methods in the presence of a charging object.
Warndahl, Brent A; Borisch, Eric A; Kawashima, Akira; Riederer, Stephen J; Froemming, Adam T
2018-04-01
To evaluate whether Field of view Optimized and Constrained Undistorted Single shot (FOCUS) (GE Healthcare, Waukesha, WI) diffusion weighted images (DWI) provide more reliable imaging than conventional DWI, with non-inferior quantitative apparent diffusion coefficient (ADC) results. IRB approval was obtained for this study of 43 patients (44 exams, one patient with two visits) who underwent multiparametric prostate MRI with two DWI sequences and subsequent radical prostatectomy with histology as the gold standard. Randomized DWI sequence images were graded independently by two blinded experienced prostate MRI radiologists with a period of memory extinction between the two separate reading sessions. Blinded images were also reviewed head to head in a later session for direct comparison. Multiple parameters were measured from a region of interest in a dominant lesion as well as two control areas. Patient characteristics were collected by chart review. There was good correlation between the mean ADC value for lesions obtained by conventional and FOCUS DWI (ρ=0.85), with no trend toward any systematic difference, and equivalent correlation between ADC measurements and Gleason score. Agreement between the two readers was significantly higher for lesion ROI analysis with the FOCUS DWI derived ADC values (CCC 0.839) compared with the conventional ADC values (CCC 0.618; difference 0.221, 95% CI 0.01-0.46). FOCUS showed significantly better image quality scores (separate review: mean 2.17±0.6, p<0.001) compared to the conventional sequence (mean 2.65±0.6, p<0.001). In 13 cases the image quality was improved from a grade of 3+ with conventional DWI to <3 with FOCUS DWI, a clinically meaningful improvement. Head-to-head blinded review found that 61 ratings showed strong to slight preference for FOCUS, 13 no preference, and 14 slight preference for the conventional sequence.
There was also a strong and equivalent correlation between both sequences and PIRADS version 2 grading (ρ=-0.56 and -0.58 for FOCUS and conventional, respectively, p<0.001 for both). FOCUS DWI of the prostate shows significant improvement in inter-reader agreement and image quality. As opposed to previous conflicting smaller studies, we found equivalent ADC metrics compared with the conventional DWI sequence, and preserved correlation with Gleason score. In 52% of patients the improved image quality with FOCUS had the potential to salvage exams with otherwise limited to non-diagnostic DWI. Copyright © 2017 Elsevier Inc. All rights reserved.
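The agreement measure quoted above, the concordance correlation coefficient (CCC), differs from Pearson's r in that it penalizes systematic shifts in mean or scale between the two readers, not just scatter. A minimal sketch of Lin's formula with illustrative values (not study data):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: Pearson-like, but the
    denominator adds the squared mean difference, so a constant offset
    between readers lowers the score even when correlation is perfect."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ccc_perfect = lins_ccc(a, a)        # identical readings -> 1.0
ccc_shifted = lins_ccc(a, a + 1.0)  # constant offset lowers the CCC
```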
Sex Differences in Correlates of Abortion Attitudes among College Students.
ERIC Educational Resources Information Center
Finlay, Barbara Agresti
1981-01-01
Data from a sample of students showed that males' abortion attitudes are related primarily to their degree of conventionality; females' abortion attitudes are related to sex-role conventionality, the value of children in their life plans, the "right to life" issue, and sexual and general conventionality. (Author)
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm that in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
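For concreteness, the conventional estimator that the paper improves on can be sketched as follows: throw a finite number of random cells on the catalog and histogram the galaxy counts. The finite number of sampling cells is exactly the source of the measurement error the infinite-sampling algorithm removes. This is a toy 2-D version with a uniform random "catalog" (the real estimator works on 3-D or projected galaxy positions):

```python
import numpy as np

rng = np.random.default_rng(42)
galaxies = rng.uniform(0.0, 1.0, size=(5000, 2))  # toy uniform catalog
cell = 0.05                                       # cell side length

def counts_in_cells(n_cells, max_n=40):
    """Estimate P(N), the probability of finding N galaxies in a
    randomly thrown square cell, from a finite number of cells."""
    corners = rng.uniform(0.0, 1.0 - cell, size=(n_cells, 2))
    counts = [int(np.sum(np.all((galaxies >= c) & (galaxies < c + cell),
                                axis=1)))
              for c in corners]
    return np.bincount(counts, minlength=max_n)[:max_n] / n_cells

p_n = counts_in_cells(10000)
mean_count = float(np.sum(np.arange(40) * p_n))
# for this uniform catalog the expected mean is density * area = 12.5
```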
NASA Astrophysics Data System (ADS)
Kramer, Hendrik; Klein, Marcus; Eifler, Dietmar
Conventional methods to characterize the fatigue behavior of metallic materials are very time-consuming and costly. For this reason, the new short-time procedure PHYBALCHT was developed at the Institute of Materials Science and Engineering at the University of Kaiserslautern. This innovative method requires only a planar material surface to perform cyclic force-controlled hardness indentation tests. To characterize the cyclic elastic-plastic behavior of the test material, the change of the force-indentation-depth hysteresis is plotted versus the number of indentation cycles. Analogous to the plastic strain amplitude, the indentation-depth width of the hysteresis loop is measured at half minimum force and is called the plastic indentation-depth amplitude. Its change as a function of the number of indentation cycles can be described by power laws. One of these power laws contains the hardening exponent e_II^CHT, which correlates very well with the amount of cyclic hardening in conventional constant-amplitude fatigue tests.
A Fast Approach to Automatic Detection of Brain Lesions
Koley, Subhranil; Chakraborty, Chandan; Mainero, Caterina; Fischl, Bruce; Aganj, Iman
2017-01-01
Template matching is a popular approach to computer-aided detection of brain lesions from magnetic resonance (MR) images. The outcomes are often sufficient for localizing lesions and assisting clinicians in diagnosis. However, processing large MR volumes with three-dimensional (3D) templates is demanding in terms of computational resources, hence the importance of reducing the computational complexity of template matching, particularly in situations in which time is crucial (e.g. emergent stroke). In view of this, we make use of 3D Gaussian templates with varying radii and propose a new method to compute the normalized cross-correlation coefficient as a similarity metric between the MR volume and the template to detect brain lesions. Contrary to the conventional fast Fourier transform (FFT) based approach, whose runtime grows as O(N logN) with the number of voxels, the proposed method computes the cross-correlation in O(N). We show through our experiments that the proposed method outperforms the FFT approach in terms of computational time, and retains comparable accuracy.
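The similarity metric itself is standard normalized cross-correlation: each window of the volume is z-scored and compared against the z-scored template. A brute-force 1-D sketch is below (illustrative only; the paper's contribution is an O(N) evaluation for 3-D Gaussian templates, which this sketch does not reproduce):

```python
import numpy as np

def ncc_map(signal, template):
    """Normalized cross-correlation of a 1-D signal with a template at
    every valid offset. Each score is the Pearson correlation between
    the window and the template, so a perfect match scores 1."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    out = np.empty(len(signal) - m + 1)
    for i in range(len(out)):
        w = signal[i:i + m]
        s = w.std()
        out[i] = 0.0 if s == 0 else float(np.mean((w - w.mean()) / s * t))
    return out

x = np.arange(-5, 6)
template = np.exp(-x.astype(float) ** 2 / 4.0)  # Gaussian "lesion" template
signal = np.zeros(100)
signal[35:46] += template                       # lesion centred at index 40
signal += 0.01 * np.sin(0.3 * np.arange(100))   # mild background
scores = ncc_map(signal, template)
best = int(np.argmax(scores))                   # start index of best match
```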
Tran, Hanh T M; Stephenson, Steven L; Tullis, Jason A
2015-01-01
The conventional method used to assess growth of the plasmodium of the slime mold Physarum polycephalum in solid culture is to measure the extent of plasmodial expansion from the point of inoculation by using a ruler. However, plasmodial growth is usually rather irregular, so the values obtained are not especially accurate. Similar challenges exist in quantifying the growth of a fungal mycelium. In this paper, we describe a method that uses geographic information system software to obtain highly accurate estimates of plasmodial growth over time. This approach calculates plasmodial area from images obtained at particular intervals following inoculation. In addition, the correlation between plasmodial area and its dry cell weight was determined. The correlation can be used for biomass estimation without the need to terminate the cultures in question. The method described herein is simple but effective and could also be used for growth measurements of other microorganisms such as fungi on solid media.
An image registration-based technique for noninvasive vascular elastography
NASA Astrophysics Data System (ADS)
Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza
2018-02-01
Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measure the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high error of displacement measurement. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square errors (RMSE) of the proposed method and the adaptive cross correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.
Criteria for Mitral Regurgitation Classification were inadequate for Dilated Cardiomyopathy
Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; de Paola, Angelo Amato Vincenzo; Carvalho, Antonio Carlos Camargo; Campos, Orlando
2013-01-01
Background Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are inadequate for patients with DCM. Objective We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Methods Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO) and regurgitant volume (RV). MR was classified as mild, moderate or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The Kappa test was used to evaluate whether the methods agreed, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. Results MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. The agreement was poor among methods (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and the agreement was higher when the values were divided into terciles (kappa = 0.44; p < 0.01). Conclusion The use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients. PMID:24100692
Kotani, Kiyoshi; Takamasu, Kiyoshi; Tachibana, Makoto
2007-01-01
The objectives of this paper were to present a method to extract the amplitude of RSA in the respiratory-phase domain, to compare it with subjective and objective indices of mental workload (MWL), and to compare it with conventional frequency analysis in terms of accuracy during a mental arithmetic task. HRV (heart rate variability), ILV (instantaneous lung volume), and motion of the throat were measured during a mental arithmetic experiment, and subjective and objective indices were also obtained. The amplitude of RSA was extracted in the respiratory-phase domain, and its correlation with the load level was compared with the results of the frequency domain analysis, which is the standard analysis of HRV. The subjective and objective indices decreased as the load level increased, showing that the experimental protocol was appropriate. The amplitude of RSA in the respiratory-phase domain also decreased with the increase in the load level. The results of the correlation analysis showed that the respiratory-phase domain analysis has higher negative correlations with the load level, -0.84 and -0.82 as determined by simple correlation and rank correlation, respectively, than does frequency analysis, for which the correlations were found to be -0.54 and -0.63, respectively. In addition, it was demonstrated that the proposed method can be applied to the short-term extraction of RSA amplitude. In summary, we propose a simple and effective method to extract the amplitude of respiratory sinus arrhythmia (RSA) in the respiratory-phase domain, and the results show that this method can estimate cardiac vagal activity more accurately than frequency analysis.
Experiences with Probabilistic Analysis Applied to Controlled Systems
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Giesy, Daniel P.
2004-01-01
This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.
Sun, Xue; Zhang, Ying; Fan, Miao; Wang, Yu; Wang, Meilian; Siddiqui, Faiza Amber; Sun, Wei; Sun, Feifei; Zhang, Dongyu; Lei, Wenjia; Hu, Guyue
2017-06-01
Prenatal diagnosis of fetal total anomalous pulmonary vein connection (TAPVC) remains challenging for most screening sonographers. The purpose of this study was to evaluate the use of four-dimensional echocardiography with high-definition flow imaging and spatiotemporal image correlation (4D-HDFI) in identifying pulmonary veins in normal and TAPVC fetuses. We retrospectively reviewed and performed 4D-HDFI in 204 normal fetuses and 12 fetuses with a confirmed diagnosis of TAPVC. Cardiac volumes were available for post-analysis to obtain 4D-rendered images of the pulmonary veins. For the normal fetuses, two other traditional modalities, color Doppler and HDFI, were used to detect the number of pulmonary veins, and comparisons were made between each of these traditional methods and 4D-HDFI. For conventional echocardiography, the HDFI modality was superior to color Doppler in detecting more pulmonary veins in normal fetuses throughout the gestational period. 4D-HDFI was the best method during the second trimester of pregnancy for identifying normal fetal pulmonary veins. 4D-HDFI images vividly depicted the figure, course, and drainage of pulmonary veins in both normal and TAPVC fetuses. HDFI and the advanced 4D-HDFI technique could facilitate identification of the anatomical features of pulmonary veins in both normal and TAPVC fetuses; 4D-HDFI therefore provides additional and more precise information than conventional echocardiography techniques. © 2017, Wiley Periodicals, Inc.
Cuong, Nguyen Ngoc; Luu, Vu Dang; Tuan, Tran Anh; Linh, Le Tuan; Hung, Kieu Dinh; Ngoc, Vo Truong Nhu; Sharma, Kulbhushan; Pham, Van Huy; Chu, Dinh-Toi
2018-06-01
Digital subtraction angiography (DSA) is the standard method for diagnosis, assessment and management of arteriovenous malformation in the brain. Conventional DSA (cDSA) is an invasive imaging modality that is often indicated before interventional treatments (embolization, open surgery, gamma knife). Here, we aimed to compare this technique with non-invasive MR angiography (MRI DSA) for brain arteriovenous malformation (bAVM). Fourteen patients with ruptured brain AVM underwent embolization treatment pre-operation. Imaging was performed for all patients using MRI (1.5 T). After injecting gadolinium contrast, dynamic MRI was performed with 40 phases, each phase lasting 1.2 s and comprising 70 images. The MRI results were independently assessed by an experienced radiologist blinded to the cDSA. The AVM nidus was depicted in all patients using cDSA and MRI DSA; there was an excellent correlation between these techniques in terms of the maximum diameter and Spetzler-Martin grading. Of the fourteen patients, the drainage vein was depicted in 13 by both cDSA and MRI DSA, showing excellent correlation between the techniques used. MRI DSA is a non-invasive imaging modality that can provide images in a dynamic view. It can be considered an adjunctive method to cDSA for planning the treatment strategy for bAVM. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.
2015-03-01
Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising efficiency results are fitted to these statistics, and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
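The hard-thresholding mechanism mentioned above can be sketched in a few lines: transform to the DCT domain, zero every coefficient whose magnitude falls below a multiple of the noise standard deviation, and transform back. This simplified sketch applies one global DCT to the whole image (practical DCT filters and BM3D work on overlapping 8x8 blocks), and the threshold factor beta=2.7 is a commonly quoted choice, assumed here rather than taken from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so D @ x is the 1-D DCT of x."""
    j = np.arange(n)
    D = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def dct_denoise(img, sigma, beta=2.7):
    """Global (non-blockwise) hard-thresholding DCT denoiser."""
    D = dct_matrix(img.shape[0])
    coef = D @ img @ D.T                 # 2-D DCT (separable)
    keep = np.abs(coef) >= beta * sigma  # hard threshold
    keep[0, 0] = True                    # always keep the DC coefficient
    return D.T @ (coef * keep) @ D       # inverse 2-D DCT

rng = np.random.default_rng(7)
n = 64
clean = np.outer(np.sin(np.linspace(0, np.pi, n)),
                 np.cos(np.linspace(0, np.pi, n)))
sigma = 0.1
noisy = clean + sigma * rng.normal(size=(n, n))
denoised = dct_denoise(noisy, sigma)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

Because the smooth test image concentrates its energy in a few low-frequency coefficients while the noise spreads evenly over all of them, thresholding removes most of the noise energy at little cost to the signal.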
Śmiga, Szymon; Fabiano, Eduardo; Laricchia, Savio; Constantin, Lucian A; Della Sala, Fabio
2015-04-21
We analyze the methodology and the performance of subsystem density functional theory (DFT) with meta-generalized gradient approximation (meta-GGA) exchange-correlation functionals for non-bonded molecular systems. Meta-GGA functionals depend on the Kohn-Sham kinetic energy density (KED), which is not known as an explicit functional of the density. Therefore, they cannot be directly applied in subsystem DFT calculations. We propose a Laplacian-level approximation to the KED which overcomes this limitation and provides a simple and accurate way to apply meta-GGA exchange-correlation functionals in subsystem DFT calculations. The so obtained density and energy errors, with respect to the corresponding supermolecular calculations, are comparable with conventional approaches, depending almost exclusively on the approximations in the non-additive kinetic embedding term. An embedding energy error decomposition explains the accuracy of our method.
Covariance descriptor fusion for target detection
NASA Astrophysics Data System (ADS)
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection involves various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor presents many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusion of covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF presents better performance than the conventional covariance descriptor.
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, there have not been many studies that have investigated how to compensate for the low timing resolution of low-sampling-rate signals for accurate heart rate variability analysis. In this study, we utilized the parabola approximation method and measured it against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement and root mean squared relative error are presented. Also, the elapsed time taken to compute each interpolation algorithm was investigated. The results indicated that parabola approximation is a simple, fast, and accurate algorithm-based method for compensating for the low timing resolution of pulse beat intervals. In addition, the method showed comparable performance with the conventional cubic spline interpolation method. Even though the absolute values of the heart rate variability variables calculated using a signal sampled at 20 Hz did not exactly match those calculated using a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
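The parabola approximation itself is a three-point vertex fit around the sampled peak of each pulse: fit a parabola through the peak sample and its two neighbours and take the vertex as the sub-sample peak time. A generic sketch of this idea (the exact variant used in the study may differ):

```python
import numpy as np

def parabola_peak(y_prev, y0, y_next):
    """Sub-sample offset (in samples, within [-0.5, 0.5]) of the vertex
    of the parabola through (-1, y_prev), (0, y0), (1, y_next)."""
    denom = y_prev - 2 * y0 + y_next
    return 0.0 if denom == 0 else 0.5 * (y_prev - y_next) / denom

# a pulse sampled at 20 Hz whose true peak falls between two samples
fs = 20.0
true_peak_t = 1.012                      # seconds
t = np.arange(0, 2, 1 / fs)
pulse = np.exp(-((t - true_peak_t) ** 2) / (2 * 0.05 ** 2))
i = int(np.argmax(pulse))                # nearest-sample peak (t = 1.00 s)
refined_t = (i + parabola_peak(pulse[i - 1], pulse[i], pulse[i + 1])) / fs
```

At 20 Hz the nearest-sample estimate is off by up to 25 ms; the parabolic vertex recovers most of that error at the cost of three multiplications per beat, which is why it suits low-power wearables.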
Method of encouraging attention by correlating video game difficulty with attention level
NASA Technical Reports Server (NTRS)
Pope, Alan T. (Inventor); Bogart, Edward H. (Inventor)
1994-01-01
A method of encouraging attention in persons such as those suffering from Attention Deficit Disorder is provided by correlating the level of difficulty of a video game with the level of attention in a subject. A conventional video game comprises a video display which depicts objects for interaction with a player and a difficulty adjuster which increases the difficulty level, e.g., action speed and/or evasiveness of the depicted object, in a predetermined manner. The electrical activity of the brain is measured at selected sites to determine levels of awareness, e.g., activity in the beta, theta, and alpha states. A value is generated based on this measured electrical signal which is indicative of the level of awareness. The difficulty level of the game is increased as the awareness level value decreases and is decreased as this awareness level value increases.
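The closed loop described in the patent can be sketched as a simple proportional controller. The engagement index (e.g., a normalized beta/(alpha+theta) power ratio) and the target, gain, and difficulty bounds below are illustrative assumptions, not the patent's exact quantities:

```python
def adjust_difficulty(difficulty, engagement_index,
                      target=0.5, gain=0.1, lo=1.0, hi=10.0):
    """One step of the biofeedback loop: raise game difficulty when the
    EEG-derived engagement index falls below the target (drowsy player),
    lower it when the index rises above (engaged player)."""
    difficulty += gain * (target - engagement_index)
    return min(hi, max(lo, difficulty))

d = 5.0
d_low = adjust_difficulty(d, engagement_index=0.2)   # drowsy -> harder
d_high = adjust_difficulty(d, engagement_index=0.9)  # engaged -> easier
```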
Charmonium resonances on the lattice
NASA Astrophysics Data System (ADS)
Bali, Gunnar; Collins, Sara; Mohler, Daniel; Padmanath, M.; Piemonte, Stefano; Prelovsek, Sasa; Weishäupl, Simon
2018-03-01
The nature of resonances and excited states near decay thresholds is encoded in scattering amplitudes, which can be extracted from single-particle and multiparticle correlators in finite volumes. Lattice calculations have only recently reached the precision required for a reliable study of such correlators. The distillation method represents a significant improvement insofar as it simplifies quark contractions and allows one to easily extend the operator basis used to construct interpolators. We present preliminary results on charmonium bound states and resonances on the Nf = 2+1 CLS ensembles. The long-term goal of our investigation is to understand the properties of the X resonances that do not fit into conventional models of quark-antiquark mesons. We tune various parameters of the distillation method and the charm quark mass. As a first result, we present the masses of the ground and excited states in the 0++ and 1-- channels.
Local density approximation in site-occupation embedding theory
NASA Astrophysics Data System (ADS)
Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel
2017-01-01
Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.
Comparative study of flow condensation in conventional and small diameter tubes
NASA Astrophysics Data System (ADS)
Mikielewicz, Dariusz; Andrzejczyk, Rafał
2012-10-01
Flow boiling and flow condensation are often regarded as two opposite or symmetrical phenomena. A description of both with a single correlation, however, has yet to be suggested. In the case of flow boiling in minichannels, the annular flow structure is encountered most often, and bubble generation is not present. A similar picture holds for in-tube condensation, where the annular flow structure predominates. In such cases the heat transfer coefficient is primarily dependent on the convective mechanism. In the paper, a method developed earlier by the first author is applied to calculations of the heat transfer coefficient for in-tube condensation. The method has been verified using experimental data from the literature on several fluids in different microchannels and compared to three well-established correlations for calculating the heat transfer coefficient in flow condensation. The results presented here clearly show that flow condensation can be modeled in terms of an appropriately devised pressure drop.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained, and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
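A bootstrap estimate of the shortest 95% confidence interval, of the kind described above, can be sketched as follows. Synthetic per-history scores stand in for the brachytherapy dose differences, and the efficiency gain is represented simply as a variance ratio at equal sample size; both are illustrative assumptions, not the study's actual quantities:

```python
import numpy as np

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of a bootstrap distribution,
    found by sliding a window over the sorted replicates. Useful when
    the distribution is skewed or otherwise non-normal."""
    s = np.sort(samples)
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return float(s[i]), float(s[i + k - 1])

rng = np.random.default_rng(2)
# per-history scores: small variance for the correlated-sampling run,
# large variance for the conventional run
corr = rng.normal(1.0, 0.2, size=2000)
conv = rng.normal(1.0, 1.0, size=2000)
gain = float(np.var(conv) / np.var(corr))   # efficiency gain estimate
boot = np.array([np.var(rng.choice(conv, 2000)) /
                 np.var(rng.choice(corr, 2000))
                 for _ in range(1000)])
lo, hi = shortest_ci(boot)                  # CI about the gain estimate
```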
Kim, Se-Young; Kim, Kyoung Won; Choi, Sang Hyun; Kwon, Jae Hyun; Song, Gi-Won; Kwon, Heon-Ju; Yun, Young Ju; Lee, Jeongjin; Lee, Sung-Gyu
2017-11-01
To determine the feasibility of using UltraFast Doppler in post-operative evaluation of the hepatic artery (HA) after liver transplantation (LT), we evaluated 283 simultaneous conventional and UltraFast Doppler sessions in 126 recipients over a 2-month period after LT, using an Aixplorer scanner. The Doppler indices of the HA (peak systolic velocity [PSV], end-diastolic velocity [EDV], resistive index [RI] and systolic acceleration time [SAT]) obtained by retrospective analysis of waveforms retrieved from UltraFast Doppler clips were compared with those obtained by conventional spectral Doppler. Correlation, performance in diagnosing pathologic waveforms, examination time and reproducibility were evaluated. The PSV, EDV, RI and SAT of spectral and UltraFast Doppler measurements exhibited excellent correlation with favorable diagnostic performance. During the bedside examination, the mean time spent storing UltraFast clips was significantly shorter than that for conventional Doppler US measurements. Both conventional and UltraFast Doppler exhibited good to excellent inter-analysis consistency. In conclusion, compared with conventional spectral Doppler, UltraFast Doppler values correlated excellently and yielded acceptable pathologic waveform diagnostic performance with reduced examination time at the bedside and excellent reproducibility. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
An ultrasonic pseudorandom signal-correlation system
NASA Astrophysics Data System (ADS)
Elias, C. M.
1980-01-01
A working ultrasonic pseudorandom signal-correlation system is described which, unlike ultrasonic random signal-correlation systems, does not require an acoustic delay line. Elimination of the delay line allows faster data acquisition and better range resolution. The system uses two identical shift-register type generators to produce pseudonoise bursts which are subsequences of a 65 535-bit complementary m-sequence. One generator produces the transmitted bursts while the other generates identical reference bursts which start at a variable correlation delay time after the transmitted bursts. The reference bursts are cross-correlated with the received echoes to obtain the approximate impulse response of the transducer/specimen system under test. Range sidelobes are reduced by transmitting and correlating many bursts at a given correlation delay before incrementing the delay. Signal-to-sidelobe ratios of greater than 47 dB have been obtained using this method. Limitations of the system due to sampling constraints and the pseudonoise power spectrum are discussed, and the system design and implementation are outlined. Results of experimental characterization of the system show that the pseudorandom signal-correlation system has approximately the same range resolution as a conventional pulse-echo system but can yield a significant increase in signal-to-noise ratio (SNR).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyuboshitz, V. L.; Lyuboshitz, V. V., E-mail: Valery.Lyuboshitz@jinr.r
2010-05-15
Spin correlations for the ΛΛ and ΛΛ̄ pairs, generated in relativistic heavy-ion collisions, and related angular correlations at the joint registration of hadronic decays of two hyperons, in which space parity is not conserved, are analyzed. The correlation tensor components can be derived from the double angular distribution of the products of the two decays by the method of 'moments'. The properties of the 'trace' of the correlation tensor (the sum of the three diagonal components), which determines the relative fractions of the triplet states and singlet state of the respective pairs, are discussed. Spin correlations for two identical particles (ΛΛ) and two nonidentical particles (ΛΛ̄) are considered from the viewpoint of the conventional model of one-particle sources. In the framework of this model, correlations vanish at sufficiently large relative momenta. However, under these conditions, in the case of two nonidentical particles (ΛΛ̄) a noticeable role is played by two-particle annihilation (two-quark, two-gluon) sources, which lead to a nonzero correlation tensor. In particular, such a situation may arise when the system passes through the 'mixed phase'.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. The clinical goals of each case point out three ROI dose-at-volume measures to be considered for plan quality assessment. In the convex approximation, this translates into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent the plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives.
Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.
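The mean-tail-dose (CVaR) surrogate described in this abstract can be illustrated in a few lines: the dose-at-volume metric is a quantile of the voxel-dose distribution (non-convex in the dose vector), while the upper mean-tail-dose averages the hottest fraction of voxels and bounds the quantile from above. The gamma-distributed voxel doses and the 5% tail fraction below are assumptions for illustration, not the study's plans.

```python
import numpy as np

def dose_at_volume(dose, v):
    """D_v: dose received by at least the hottest fraction v of voxels
    (a quantile; non-convex as a function of the dose vector)."""
    return float(np.quantile(dose, 1.0 - v))

def mean_tail_dose(dose, v):
    """Upper mean-tail-dose (CVaR): mean dose over the hottest fraction v
    (convex in the dose vector, hence optimizer-friendly)."""
    s = np.sort(dose)[::-1]
    k = max(1, int(round(v * dose.size)))
    return float(s[:k].mean())

rng = np.random.default_rng(1)
dose = rng.gamma(shape=9.0, scale=5.0, size=10_000)  # hypothetical voxel doses (Gy)

v = 0.05                       # 5% tail, assumed for illustration
dv = dose_at_volume(dose, v)   # the plan-quality metric itself
mtd = mean_tail_dose(dose, v)  # its convex upper surrogate: mtd >= dv always
```

Because the mean of the hottest 5% of voxels can never fall below the 95th-percentile dose, driving the convex surrogate down also drives the dose-at-volume metric down, which is the basis of the approximation.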
Characterizing iron deposition in multiple sclerosis lesions using susceptibility weighted imaging
Haacke, E. Mark; Makki, Malek; Ge, Yulin; Maheshwari, Megha; Sehgal, Vivek; Hu, Jiani; Selvan, Madeswaran; Wu, Zhen; Latif, Zahid; Xuan, Yang; Khan, Omar; Garbern, James; Grossman, Robert I.
2009-01-01
Purpose To investigate whether the variable forms of putative iron deposition seen with susceptibility weighted imaging (SWI) will lead to a set of multiple sclerosis (MS) lesion characteristics different than that seen in conventional MR imaging. Materials and Methods Twenty-seven clinically definite MS patients underwent brain scans using magnetic resonance imaging including: pre- and post-contrast T1-weighted, T2-weighted, FLAIR, and SWI at 1.5T, 3T and 4T. MS lesions were identified separately in each imaging sequence. Lesions identified in SWI were re-evaluated for their iron content using the SWI filtered phase images. Results There were a variety of new lesion characteristics identified by SWI and these were classified into six types. A total of 75 lesions were seen only with conventional imaging, 143 only with SWI and 204 by both. From the iron quantification measurements, a moderate linear correlation between signal intensity and iron content (phase) was established. Conclusion The amount of iron deposition in the brain may serve as a surrogate biomarker for different MS lesion characteristics. SWI showed many lesions missed by conventional methods and six different lesion characteristics. SWI was particularly effective at recognizing the presence of iron in MS lesions and in the basal ganglia and pulvinar thalamus. PMID:19243035
Alves, L P S; Almeida, A T; Cruz, L M; Pedrosa, F O; de Souza, E M; Chubatsu, L S; Müller-Santos, M; Valdameri, G
2017-01-16
The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red-stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R² = 0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots.
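A calibration of the cytometry signal against the GC reference of the kind implied by the reported R² = 0.99 can be sketched as follows. All paired values below are hypothetical stand-ins, not the published data.

```python
import numpy as np

# Hypothetical paired measurements: GC-reference PHB content (% cell dry
# weight) versus flow-cytometry Nile red median fluorescence (arbitrary units)
gc  = np.array([2.0, 8.5, 15.0, 22.0, 30.5, 41.0, 48.5])
fcm = np.array([110.0, 420.0, 760.0, 1150.0, 1580.0, 2090.0, 2480.0])

# Least-squares calibration line and coefficient of determination R^2
slope, intercept = np.polyfit(fcm, gc, 1)
pred = slope * fcm + intercept
r2 = 1.0 - np.sum((gc - pred) ** 2) / np.sum((gc - gc.mean()) ** 2)
```

Once such a line is fitted, a new sample's PHB content is read off as `slope * fluorescence + intercept`, replacing the methanolysis/GC run entirely.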
Criteria for mitral regurgitation classification were inadequate for dilated cardiomyopathy.
Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; Paola, Angelo Amato Vincenzo de; Carvalho, Antonio Carlos Camargo; Campos, Orlando
2013-11-01
Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are adequate for patients with DCM. We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO) and regurgitant volume (RV). MR was classified as mild, moderate or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The Kappa test was used to evaluate agreement among the methods, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. Agreement among the methods was poor (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and agreement was higher when values were divided into terciles (kappa = 0.44; p < 0.01). In conclusion, the use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients.
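The kappa statistic used in this study to quantify inter-method agreement can be computed directly from paired gradings. The sketch below invents gradings of ten patients by two of the methods purely for illustration; the data are not the study's.

```python
import numpy as np

def cohens_kappa(a, b, labels):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))
    for x, y in zip(a, b):
        cm[idx[x], idx[y]] += 1          # confusion matrix of joint gradings
    n = cm.sum()
    po = np.trace(cm) / n                # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2  # agreement expected by chance
    return (po - pe) / (1.0 - pe)

labels = ["mild", "moderate", "important"]
# Invented gradings of the same 10 patients by two echocardiographic methods
jet_area = ["mild", "moderate", "moderate", "important", "mild",
            "moderate", "important", "mild", "moderate", "moderate"]
ero      = ["mild", "mild", "moderate", "moderate", "mild",
            "mild", "important", "mild", "mild", "moderate"]

kappa = cohens_kappa(jet_area, ero, labels)
```

Here six of ten gradings agree (observed agreement 0.60) but chance alone predicts 0.35, giving kappa = 0.25/0.65 ≈ 0.38, illustrating how raw percent agreement overstates concordance between classification methods.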
Micro-scale temperature measurement method using fluorescence polarization
NASA Astrophysics Data System (ADS)
Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.
2016-09-01
A novel method that can measure fluid temperature at microscopic scales by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects that appear in conventional LIF methods and is therefore believed to offer higher reliability in temperature measurements. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of fluid temperature, fluid viscosity, measurement time, and solution pH on the measured degree of fluorescence polarization are discussed to establish the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with fluid temperature, on the other hand, was obtained, agreeing well with theoretical values and confirming the feasibility of the method.
Fischer, Samuel; Kittler, Sophie; Klein, Günter; Glünder, Gerhard
2013-01-01
A simple susceptibility test using 800 isolates of one Campylobacter strain with different degrees of susceptibility and four bacteriophages of the British phage typing scheme was developed and examined for its suitability. The test presented is cheaper and less time consuming than the conventional agar overlay plate assay and therefore enables the monitoring of changes in the susceptibility pattern during phage therapy under practical field conditions. The main objective of this study was to compare the simplified test with the conventional agar overlay plate assay. The conventional test describes, for a population of Campylobacter: i. the rate of resistant isolates (0 plaques) and ii. the degree of susceptibility, also called relative efficiency of plating (EOP), for the remaining isolates. The simplified test divides the isolates into four susceptibility ranks, which are easily distinguishable to the naked eye. Ten Campylobacter isolates out of each rank were subjected to the conventional method for validation of the simplified test. Each resistance rank contained isolates showing certain degrees of susceptibility, with decreasing susceptibility reflected by an increase in rank. Thus, the simplified test correlated well with the conventional method. Nevertheless, for a clear cut-off it can be suggested to summarise the first three ranks as “highly susceptible” and to mark out the fourth rank as “reduced susceptible”. Further test improvements will enable the monitoring of the degree of susceptibility and potentially also of resistance during phage therapy in the field. To ensure a long-lasting successful use of phage therapy, further studies on both the loss of susceptibility and the development of resistance of Campylobacter against phages, combined with their impact on phage therapy, will be necessary. PMID:23349761
Shim, Jocelyne; Heo, Giseon; Lagravère, Manuel O
2013-10-01
The study compares growth changes of the hyoid bone in cone-beam computed tomography (CBCT) with conventional skeletal maturation methods to examine their potential implications for the development of a three-dimensional method. Subjects (n = 62, 11-17 years of age) were exposed to CBCT at six-month intervals (T1/T2/T3). Ten hyoid distances were compared with age, hand-wrist skeletal maturation index (SMI), and cervical vertebral maturation stage (CS). The length of the greater cornua (GC) was most frequently, and moderately to highly, correlated with age (right: 0.57/0.53/0.58; left: 0.45/0.50/0.48), SMI (right: 0.52/0.40/0.45; left: 0.42 at T3), and CS (right: 0.52 at T1), followed by the length of the hyoid bone with age (right: 0.50/0.49/0.47; left: 0.44/0.47 at T1/T2), SMI (right: 0.45/0.41 at T1/T2), and CS (right: 0.48 at T1). The width of the body of the hyoid (HB) was correlated with age (0.43/0.44/0.44). The GC-HB gap was correlated with age (right: -0.41 at T3) and SMI (right: -0.42 at T1). Peripubertal hyoid maturation did not yield sufficient diagnostic information to be considered in the development of a 3D skeletal maturation method. Copyright © 2013 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seino, Junji; Nakai, Hiromi, E-mail: nakai@waseda.jp; Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555
In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas–Kroll–Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012); J. Seino and H. Nakai, J. Chem. Phys. 137, 144101 (2012)], which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree–Fock (HF) and electron correlation methods, such as the second-order Møller–Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications to hydrogen halide molecules, (HX)ₙ (X = F, Cl, Br, and I), coinage metal chain systems, Mₙ (M = Cu and Ag), and a platinum-terminated polyynediyl chain, trans,trans-((p-CH₃C₆H₄)₃P)₂(C₆H₅)Pt(C≡C)₄Pt(C₆H₅)((p-CH₃C₆H₄)₃P)₂, clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods at small computational cost. The combination of the LUT and DC techniques could be the first approach that achieves overall quasi-linear scaling with a small prefactor for relativistic electron correlation calculations.
Field assessment of alternative bed-load transport estimators
Gaeuman, G.; Jacobson, R.B.
2007-01-01
Measurement of near-bed sediment velocities with acoustic Doppler current profilers (ADCPs) is an emerging approach for quantifying bed-load sediment fluxes in rivers. Previous investigations of the technique have relied on conventional physical bed-load sampling to provide reference transport information with which to validate the ADCP measurements. However, physical samples are subject to substantial errors, especially under field conditions in which surrogate methods are most needed. Comparisons between ADCP bed velocity measurements with bed-load transport rates estimated from bed-form migration rates in the lower Missouri River show a strong correlation between the two surrogate measures over a wide range of mild to moderately intense sediment transporting conditions. The correlation between the ADCP measurements and physical bed-load samples is comparatively poor, suggesting that physical bed-load sampling is ineffective for ground-truthing alternative techniques in large sand-bed rivers. Bed velocities measured in this study became more variable with increasing bed-form wavelength at higher shear stresses. Under these conditions, bed-form dimensions greatly exceed the region of the bed ensonified by the ADCP, and the magnitude of the acoustic measurements depends on instrument location with respect to bed-form crests and troughs. Alternative algorithms for estimating bed-load transport from paired longitudinal profiles of bed topography were evaluated. An algorithm based on the routing of local erosion and deposition volumes that eliminates the need to identify individual bed forms was found to give results similar to those of more conventional dune-tracking methods. This method is particularly useful in cases where complex bed-form morphology makes delineation of individual bed forms difficult. © 2007 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, W.A.
1982-12-01
Conventional methods of assessing antibacterial activities of macrophages by viable counting are limited by the precision of the statistics and are difficult to interpret quantitatively because of unrestrained extracellular growth of bacteria. An alternative technique based on the release of radioactive DNA from labeled bacteria has been offered as overcoming these drawbacks. To assess it for use with macrophages I have made a correlation with the conventional viable counting method using a mathematical model. Opsonized Listeria monocytogenes labeled with ³H-thymidine were exposed to rat macrophages for periods up to 4 hr. Numbers of viable bacteria determined after sonication increased exponentially in the absence of live cells, and this growth rate was progressively inhibited by increasing numbers of macrophages. After a lag period of 30-60 min, soluble ³H appeared in the supernatant, the amount increasing with time and with the number of macrophages. To correlate these data I developed a mathematical model in which changes in the number of viable organisms are due to the difference between the rates of 1) growth of extracellular bacteria and 2) killing within the macrophage. On the basis of this model, curves of best fit to the viable-count data were used to predict the release of radioactivity, assuming that death of a bacterium led to the total release of its label. These predictions and the experimental data agreed well, the lag period of 30-60 min between death of the bacterium and release of radioactivity being consistent with intracellular digestion. Release of soluble radioactivity appears to be an accurate reflection of the number of bacteria killed within the macrophage.
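The model described, extracellular growth minus intracellular killing, with the label released after a digestion lag, can be sketched as a toy forward simulation. The Euler scheme, rate constants, and the assumption of a fixed lag are a plausible reading of the abstract for illustration only, not the author's exact model.

```python
import numpy as np

# Assumed rate constants (per minute) for illustration only
g, k_per_mac, lag_min = 0.02, 1e-4, 45.0
n_mac, dt, t_end = 100.0, 1.0, 240.0
steps = int(t_end / dt)

N = np.empty(steps + 1)
N[0] = 1e4                        # initial viable bacteria
kill_rate = np.zeros(steps + 1)   # bacteria killed per minute
for i in range(steps):
    kill = k_per_mac * n_mac * N[i]           # intracellular killing
    N[i + 1] = N[i] + dt * (g * N[i] - kill)  # net change (Euler step)
    kill_rate[i + 1] = kill

# Label released at time t reflects bacteria killed at time t - lag,
# mirroring the observed 30-60 min delay before soluble 3H appears
lag_steps = int(lag_min / dt)
released = np.concatenate([np.zeros(lag_steps),
                           dt * np.cumsum(kill_rate)[: steps + 1 - lag_steps]])
```

Fitting the growth and killing rates to viable-count data and then comparing the predicted `released` curve with the measured supernatant radioactivity is the correlation strategy the abstract describes.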
NASA Astrophysics Data System (ADS)
Saadatmand, S. N.; Bartlett, S. D.; McCulloch, I. P.
2018-04-01
Obtaining quantitative ground-state behavior for geometrically-frustrated quantum magnets with long-range interactions is challenging for numerical methods. Here, we demonstrate that the ground states of these systems on two-dimensional lattices can be efficiently obtained using state-of-the-art translation-invariant variants of matrix product states and density-matrix renormalization-group algorithms. We use these methods to calculate the fully-quantitative ground-state phase diagram of the long-range interacting triangular Ising model with a transverse field on six-leg infinite-length cylinders and scrutinize the properties of the detected phases. We compare these results with those of the corresponding nearest neighbor model. Our results suggest that, for such long-range Hamiltonians, the long-range quantum fluctuations always lead to long-range correlations, where correlators exhibit power-law decays instead of the conventional exponential drops observed for short-range correlated gapped phases. Our results are relevant for comparisons with recent ion-trap quantum simulator experiments that demonstrate highly-controllable long-range spin couplings for several hundred ions.
Topological electronic liquids: Electronic physics of one dimension beyond the one spatial dimension
NASA Astrophysics Data System (ADS)
Wiegmann, P. B.
1999-06-01
There is a class of electronic liquids in dimensions greater than 1 that shows all essential properties of one-dimensional electronic physics. These are topological liquids: correlated electronic systems with a spectral flow. Compressible topological electronic liquids are superfluids. In this paper we present a study of a conventional model of a topological superfluid in two spatial dimensions. This model is thought to be relevant to a doped Mott insulator. We show how the spectral flow leads to the superfluid hydrodynamics and how the orthogonality catastrophe affects off-diagonal matrix elements. We also compute the major electronic correlation functions. Among them are the spectral function, the pair wave function, and various tunneling amplitudes. To compute correlation functions we develop a method of current algebra, an extension of the bosonization technique of one spatial dimension. In order to emphasize a similarity between electronic liquids in one dimension and topological liquids in dimensions greater than 1, we first review the Fröhlich-Peierls mechanism of ideal conductivity in one dimension and then extend the physics and the methods into two spatial dimensions.
Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu
2011-04-01
A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established using UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Simca-P. Correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) when the COD of samples is in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2), and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). A validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, the average relative error of model 2 (4.25%) was lower than that of model 1 (5.00%), indicating that the predictability of model 2 was better. The chemometrics-assisted spectrophotometry method needs neither the chemical reagents nor the digestion required by conventional methods, and its testing time is significantly shorter than that of the conventional ones. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.
Bhattacharya, D; Bhattacharya, R; Dhar, T K
1999-11-19
In an earlier communication we described a novel signal amplification technology termed Super-CARD, which significantly improves antigen detection sensitivity in conventional Dot-ELISA, by approximately 10⁵-fold. The method utilizes previously unreported synthetic electron-rich proteins containing multiple phenolic groups which, when immobilized over a solid phase as a blocking agent, markedly increase the signal amplification capability of the existing CARD method (Bhattacharya, R., Bhattacharya, D., Dhar, T.K., 1999. A novel signal amplification technology based on catalyzed reporter deposition and its application in a Dot-ELISA with ultra high sensitivity. J. Immunol. Methods 227, 31.). In this paper we describe the utilization of this Super-CARD amplification technique in ELISA and its applicability to the rapid determination of aflatoxin B₁ (AFB₁) in infected seeds. Using this method under identical conditions, the increase in absorbance over the CARD method was approximately 400%. The limit of detection of AFB₁ by this method was 0.1 pg/well, a 5-fold sensitivity enhancement over the optimized CARD ELISA. Furthermore, the total incubation time was reduced to 16 min, compared with 50 min for the CARD method. Assay specificity was not adversely affected, and the amount of AFB₁ measured in seed extracts correlated well with the values obtained by conventional ELISA.
NASA Astrophysics Data System (ADS)
Jafari, M.; Cao, S. C.; Jung, J.
2017-12-01
Geological CO2 sequestration (GCS) has recently been introduced as an effective method to mitigate carbon dioxide emissions. CO2 collected from major producing sources is injected into underground formation layers to be stored for thousands to millions of years. A safe and economical storage project depends on insight into trapping mechanisms, fluid dynamics, and fluid-rock interactions. Among the forces governing fluid mobility and distribution under GCS conditions, capillary pressure is of particular importance, and wettability (measured by the contact angle, CA) is the most controversial parameter affecting it. To explore the sources of discrepancy in the literature regarding CA measurement, we conducted a series of conventional captive-bubble tests on glass plates under high-pressure conditions. By introducing a shape factor, we concluded that surface imperfections can distort the results of such tests. Since conventional methods of measuring the CA are affected by gravity and scale effects, we introduced a different technique to measure pore-scale CA inside a transparent glass microchip. Our method can account for pore size and capture static and dynamic CA during dewetting and imbibition. Glass plates show water-wet behavior (CA 30° - 45°) in the conventional experiment, consistent with the literature. However, miniature bubbles inside the micromodel can show weaker water-wet behavior (CA 55° - 69°). Under more realistic pore-scale conditions, the water-CO2 interface covers the whole width of a pore throat. Under this condition, the receding CA, which governs injectability and capillary breakthrough pressure, increases with decreasing pore size. On the other hand, the advancing CA, which is important for residual or capillary trapping, shows no correlation with throat size.
Static CA measured in the pores during dewetting is lower than static CA on a flat plate, but it is much higher when measured during imbibition, implying weaker water-wet behavior. Pore-scale CA, which more realistically represents rock wettability, shows weaker water-wet behavior than conventional measurement methods, and this must be considered for the safety of geological storage.
Grouiller, Frédéric; Thornton, Rachel C.; Groening, Kristina; Spinelli, Laurent; Duncan, John S.; Schaller, Karl; Siniatchkin, Michael; Lemieux, Louis; Seeck, Margitta; Michel, Christoph M.
2011-01-01
In patients with medically refractory focal epilepsy who are candidates for epilepsy surgery, concordant non-invasive neuroimaging data are useful to guide invasive electroencephalographic recordings or surgical resection. Simultaneous electroencephalography and functional magnetic resonance imaging recordings can reveal regions of haemodynamic fluctuations related to epileptic activity and help localize its generators. However, many of these studies (40–70%) remain inconclusive, principally due to the absence of interictal epileptiform discharges during simultaneous recordings, or lack of haemodynamic changes correlated to interictal epileptiform discharges. We investigated whether the presence of epilepsy-specific voltage maps on scalp electroencephalography correlated with haemodynamic changes and could help localize the epileptic focus. In 23 patients with focal epilepsy, we built epilepsy-specific electroencephalographic voltage maps using averaged interictal epileptiform discharges recorded during long-term clinical monitoring outside the scanner and computed the correlation of this map with the electroencephalographic recordings in the scanner for each time frame. The time course of this correlation coefficient was used as a regressor for functional magnetic resonance imaging analysis to map haemodynamic changes related to these epilepsy-specific maps (topography-related haemodynamic changes). The method was first validated in five patients with significant haemodynamic changes correlated to interictal epileptiform discharges on conventional analysis. We then applied the method to 18 patients who had inconclusive simultaneous electroencephalography and functional magnetic resonance imaging studies due to the absence of interictal epileptiform discharges or absence of significant correlated haemodynamic changes. 
The concordance of the results with subsequent intracranial electroencephalography and/or resection area in patients who were seizure free after surgery was assessed. In the validation group, haemodynamic changes correlated to voltage maps were similar to those obtained with conventional analysis in 5/5 patients. In 14/18 patients (78%) with previously inconclusive studies, scalp maps related to epileptic activity had haemodynamic correlates even when no interictal epileptiform discharges were detected during simultaneous recordings. Haemodynamic changes correlated to voltage maps were spatially concordant with intracranial electroencephalography or with the resection area. We found better concordance in patients with lateral temporal and extratemporal neocortical epilepsy compared to medial/polar temporal lobe epilepsy, probably due to the fact that electroencephalographic voltage maps specific to lateral temporal and extratemporal epileptic activity are more dissimilar to maps of physiological activity. Our approach significantly increases the yield of simultaneous electroencephalography and functional magnetic resonance imaging to localize the epileptic focus non-invasively, allowing better targeting for surgical resection or implantation of intracranial electrode arrays. PMID:21752790
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were essentially the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges, where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.
Laser Speckle Rheology for evaluating the viscoelastic properties of hydrogel scaffolds
Hajjarian, Zeinab; Nia, Hadi Tavakoli; Ahn, Shawn; Grodzinsky, Alan J.; Jain, Rakesh K.; Nadkarni, Seemantini K.
2016-01-01
Natural and synthetic hydrogel scaffolds exhibit distinct viscoelastic properties at various length scales and deformation rates. Laser Speckle Rheology (LSR) offers a novel, non-contact optical approach for evaluating the frequency-dependent viscoelastic properties of hydrogels. In LSR, a coherent laser beam illuminates the specimen and a high-speed camera acquires the time-varying speckle images. Cross-correlation analysis of frames returns the speckle intensity autocorrelation function, g2(t), from which the frequency-dependent viscoelastic modulus, G*(ω), is deduced. Here, we establish the capability of LSR for evaluating the viscoelastic properties of hydrogels over a large range of moduli, using conventional mechanical rheometry and atomic force microscopy (AFM)-based indentation as reference-standards. Results demonstrate a strong correlation between |G*(ω)| values measured by LSR and mechanical rheometry (r = 0.95, p < 10−9), and z-test analysis reports that moduli values measured by the two methods are identical (p > 0.08) over a large range (47 Pa – 36 kPa). In addition, |G*(ω)| values measured by LSR correlate well with indentation moduli, E, reported by AFM (r = 0.92, p < 10−7). Further, spatially-resolved moduli measurements in micro-patterned substrates demonstrate that LSR combines the strengths of conventional rheology and micro-indentation in assessing hydrogel viscoelastic properties at multiple frequencies and small length-scales. PMID:27905494
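The core LSR computation, estimating the speckle intensity autocorrelation g2(t) from a stack of frames, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the per-pixel normalization and pixel-ensemble averaging are assumptions:

```python
import numpy as np

def speckle_g2(frames):
    """Speckle intensity autocorrelation g2(t) from a frame stack.

    frames: array of shape (T, H, W) -- a time series of speckle images.
    Returns g2 versus frame lag t, normalized per pixel and then
    averaged over the pixel ensemble.
    """
    T = frames.shape[0]
    I = frames.reshape(T, -1).astype(float)
    mean_I = I.mean(axis=0)                # per-pixel mean intensity
    g2 = np.empty(T)
    for t in range(T):
        # <I(0) I(t)> / <I>^2 per pixel, then averaged over pixels
        prod = (I[: T - t] * I[t:]).mean(axis=0)
        g2[t] = (prod / mean_I**2).mean()
    return g2
```

For a fully decorrelating speckle sequence g2(t) decays from about 2 toward 1, and the decay rate is what feeds the deduction of the viscoelastic modulus G*(ω).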
Takahashi, Hiro; Honda, Hiroyuki
2006-07-01
Considering the recent advances in and the benefits of DNA microarray technologies, many gene filtering approaches have been employed for the diagnosis and prognosis of diseases. In our previous study, we developed a new filtering method, namely, the projective adaptive resonance theory (PART) filtering method, which was effective in subclass discrimination. In the PART algorithm, genes with a low variance of expression in either class, not necessarily both classes, are selected as important genes for modeling. Based on this concept, in the present study we developed novel simple filtering methods such as a modified signal-to-noise ratio (S2N'). The discrimination models constructed using these methods showed higher accuracy with higher reproducibility than many conventional filtering methods, including the t-test, S2N, NSC and SAM. The reproducibility of prediction was evaluated from the correlation between the sets of U-test p-values on randomly divided datasets. For leukemia, lymphoma and breast cancer, this correlation was high; an improvement of >0.13 was obtained by models using <50 genes selected by S2N'. The improvement was greater for smaller gene sets, and this higher correlation was observed in comparisons against the t-test, NSC and SAM. These results suggest that these modified methods, such as S2N', have high potential as new methods for marker gene selection in cancer diagnosis using DNA microarray data. Software is available upon request.
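The abstract does not give the exact definition of S2N'; the standard signal-to-noise ratio it modifies scores each gene as |μ0 − μ1| / (σ0 + σ1) between the two classes. A minimal ranking sketch under that standard definition (variable names are illustrative):

```python
import numpy as np

def s2n_scores(X, y):
    """Standard signal-to-noise ratio per gene for two-class data.

    X: (n_samples, n_genes) expression matrix; y: binary labels (0/1).
    Note: this is the conventional S2N, not the paper's S2N' variant,
    whose exact formula is not given in the abstract.
    """
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    sd0, sd1 = X0.std(axis=0, ddof=1), X1.std(axis=0, ddof=1)
    return np.abs(mu0 - mu1) / (sd0 + sd1)

def top_genes(X, y, k=50):
    """Indices of the k highest-scoring (most discriminative) genes."""
    return np.argsort(s2n_scores(X, y))[::-1][:k]
```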
Vision based tunnel inspection using non-rigid registration
NASA Astrophysics Data System (ADS)
Badshah, Amir; Ullah, Shan; Shahzad, Danish
2015-04-01
The growing number of long tunnels across the globe has increased the need for tunnel safety measurement and inspection. To avoid serious damage, tunnels should be inspected at regular intervals so that any deformations or cracks are found in time. While they comply with stringent safety and tunnel accessibility standards, conventional geodetic surveying, together with other manual and mechanical civil engineering methods, is time consuming and disrupts routine operation. We propose an automatic tunnel inspection method based on image processing with non-rigid registration. Many image registration methods operate on images in the spatial domain, for example by finding edges and corners with the Harris detector; because these methods use image features directly, they are known as feature-based correlation. They are quite time consuming and fail on blurred or noisy images. The alternative is featureless correlation, in which the images are converted into the frequency domain and then correlated with each other; a shift in the spatial domain corresponds to a shift in the frequency domain, but the processing is substantially faster. In the proposed method, a modified normalized phase correlation is used to find the shift between two images. As pre-processing, the reference and template tunnel images are divided into small patches, and all corresponding patches are registered by the proposed modified normalized phase correlation. The algorithm yields the pixel displacement between the images, which is then converted to physical units such as millimetres or centimetres. After the complete process, any shift in the tunnel at the inspected points is located.
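The featureless registration described above rests on the Fourier shift theorem: a spatial translation becomes a phase ramp in the frequency domain, so whitening the cross-power spectrum and inverting it yields a sharp correlation peak at the shift. A basic (unmodified) normalized phase correlation sketch, assuming a pure cyclic translation between patches:

```python
import numpy as np

def phase_correlation_shift(ref, tpl):
    """Estimate the (dy, dx) translation that maps ref onto tpl
    by normalized phase correlation (featureless, frequency domain)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(tpl)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # whiten: keep phase only
    corr = np.fft.ifft2(cross).real         # correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the circular peak location to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

In a patch-based pipeline this function would be applied to each reference/template patch pair, and the per-patch pixel shifts then scaled to millimetres using the camera calibration.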
NASA Astrophysics Data System (ADS)
Mensi, Walid; Hamdi, Atef; Shahzad, Syed Jawad Hussain; Shafiullah, Muhammad; Al-Yahyaee, Khamis Hamed
2018-07-01
This paper analyzes the dynamic efficiency and interdependence of Islamic and conventional banks of Saudi Arabia. The analysis applies the Multifractal Detrended Fluctuation Analysis (MF-DFA) and Multifractal Detrended Cross-Correlation Analysis (MF-DXA) approaches. The MF-DFA results show strong multifractality in the daily returns of Saudi banks. Moreover, all eight banks studied exhibit persistent correlations, which demonstrates inefficiency. The rolling window results show significant changes in the inefficiency levels over time. The cross-correlation analysis between bank pairs exhibits long-term interdependence between most of them. These findings indicate that the banking sector in Saudi Arabia suffers from inefficiency and exhibits long-term memory.
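Ordinary DFA, which MF-DFA generalizes by sweeping the moment order q, can be sketched as follows. This is a simplified illustration, not the authors' implementation; the segmentation (non-overlapping, forward only) and linear detrending order are assumptions:

```python
import numpy as np

def dfa_fluctuation(x, scales, q=2):
    """q-th order detrended fluctuation function F_q(s) of a series x.
    q=2 recovers ordinary DFA; sweeping q yields the MF-DFA spectrum."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)     # local linear detrend
            f2.append(np.mean((seg - (a * t + b)) ** 2))
        f2 = np.asarray(f2)
        F.append(np.mean(f2 ** (q / 2)) ** (1 / q))
    return np.asarray(F)

def hurst_exponent(x, scales):
    """Slope of log F_2(s) vs log s: the generalized Hurst exponent h(2).
    h > 0.5 indicates persistent correlations (a sign of inefficiency)."""
    F = dfa_fluctuation(x, scales)
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h
```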
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B.A.; Heiserman, J.E.; Drayer, B.P.
To determine the contribution of cranial MR angiography (MRA) for the evaluation of patients with acute and subacute brain infarction. MR and MRA studies performed on 78 adult patients with acute and subacute stroke were retrospectively reviewed and correlated with the clinical records. There were 50 acute and 28 subacute infarctions in our series. Five of 78 MRA exams (6%) were nondiagnostic. Sixty examinations (80%) were positive for stenosis or occlusion. The distribution of stenotic or occlusive vascular lesions correlated with the location of infarction in 56 of the 60 positive cases (93%). MRA provided information not obtained from the MR images in 40 cases (55%). One hundred four individual vessels in 8 patients who underwent conventional cerebral angiography were compared with the MRA appearance. The MRA interpretations correlated with the conventional angiographic evaluations for 90 vessels (87%). Vascular lesions demonstrated on intracranial MRA show a high correlation with infarct distribution. MRA provides information adjunctive to conventional MR in a majority of cases. We conclude that MRA is an important component of the complete evaluation of brain infarction. 39 refs., 3 figs., 2 tabs.
SU-E-T-550: Modulation Index for VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J; Park, S; Kim, J
2015-06-15
Purpose: To present modulation indices (MIs) for volumetric modulated arc therapy (VMAT). Methods: A total of 40 VMAT plans were retrospectively selected. To investigate the delivery accuracy of each VMAT plan, gamma passing rates, differences in modulating parameters between plans and log files, and differences between the original plans and the plans reconstructed from the log files were acquired. A modulation index (MIt) was designed by multiplying the weighted quantifications of MLC speeds, MLC accelerations, gantry accelerations and dose-rate variations. Textural features including angular second moment, inverse difference moment, contrast, variance, correlation and entropy were calculated from the fluences of each VMAT plan. To test the performance of the suggested MIs, Spearman's rank correlation coefficients (r) with the plan delivery accuracy were calculated. Conventional modulation indices for VMAT including the modulation complexity score for VMAT (MCSv), leaf travel modulation complexity score (LTMCS) and the MI by Li & Xing were calculated, and their correlations were analyzed in the same way. Results: The r values of contrast (particular displacement distance, d = 1), variance (d = 1), MIt, MCSv, LTMCS and the MI by Li & Xing to the local gamma passing rates with 2%/2 mm were 0.547 (p < 0.001), 0.519 (p < 0.001), −0.658 (p < 0.001), 0.186 (p = 0.251), 0.312 (p = 0.05) and −0.455 (p = 0.003), respectively. The r values of those to the MLC errors were −0.863, −0.828, 0.917, −0.635, −0.857 and 0.795, respectively (p < 0.001). For dose-volumetric parameters, MIt showed higher statistically significant correlations than did the conventional modulation indices. Conclusion: The MIt, contrast (d = 1) and variance (d = 1) showed good performance in predicting VMAT delivery accuracy, with higher correlations to the results of various types of verification methods for VMAT.
This work was in part supported by the National Research Foundation of Korea (NRF) grant (no. 490-20140029 and no. 490-20130047) funded by the Korea government.
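Spearman's rank correlation, used above to score the modulation indices against delivery accuracy, is simply the Pearson correlation of the ranks (with average ranks for ties). A self-contained sketch:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v), dtype=float)
        for val in np.unique(v):        # average ranks for tied values
            m = v == val
            r[m] = r[m].mean()
        return r
    return float(np.corrcoef(ranks(x), ranks(y))[0, 1])
```

Because it depends only on ranks, it captures any monotone relation between an index and, say, a gamma passing rate, without assuming linearity.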
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo color imaging, in which no nondegenerate-wavelength spatial correlations are available and extra monochromatic images are therefore needed, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme yields a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.
Clinical and imaging assessment of acute combat mild traumatic brain injury in Afghanistan
Mac Donald, Christine L.; Rivet, Dennis; Ritter, John; May, Todd; Barefield, Maria; Duckworth, Josh; LaBarge, Donald; Asher, Dean; Drinkwine, Benjamin; Woods, Yvette; Connor, Michael; Brody, David L.
2015-01-01
Objective: To evaluate whether diffusion tensor imaging (DTI) will noninvasively reveal white matter changes not present on conventional MRI in acute blast-related mild traumatic brain injury (mTBI) and to determine correlations with clinical measures and recovery. Methods: Prospective observational study of 95 US military service members with mTBI enrolled within 7 days from injury in Afghanistan and 101 healthy controls. Assessments included Rivermead Post-Concussion Symptoms Questionnaire (RPCSQ), Post-Traumatic Stress Disorder Checklist Military (PCLM), Beck Depression Inventory (BDI), Balance Error Scoring System (BESS), Automated Neuropsychological Assessment Metrics (ANAM), conventional MRI, and DTI. Results: Significantly greater impairment was observed in participants with mTBI vs controls: RPCSQ (19.7 ± 12.9 vs 3.6 ± 7.1, p < 0.001), PCLM (32 ± 13.2 vs 20.9 ± 7.1, p < 0.001), BDI (7.4 ± 6.8 vs 2.5 ± 4.9, p < 0.001), and BESS (18.2 ± 8.4 vs 15.1 ± 8.3, p = 0.01). The largest effect size in ANAM performance decline was in simple reaction time (mTBI 74.5 ± 148.4 vs control −11 ± 46.6 milliseconds, p < 0.001). Fractional anisotropy was significantly reduced in mTBI compared with controls in the right superior longitudinal fasciculus (0.393 ± 0.022 vs 0.405 ± 0.023, p < 0.001). No abnormalities were detected with conventional MRI. Time to return to duty correlated with RPCSQ (r = 0.53, p < 0.001), ANAM simple reaction time decline (r = 0.49, p < 0.0001), PCLM (r = 0.47, p < 0.0001), and BDI (r = 0.36 p = 0.0005). Conclusions: Somatic, behavioral, and cognitive symptoms and performance deficits are substantially elevated in acute blast-related mTBI. Postconcussive symptoms and performance on measures of posttraumatic stress disorder, depression, and neurocognitive performance at initial presentation correlate with return-to-duty time. 
Although changes in fractional anisotropy are uncommon and subtle, DTI is more sensitive than conventional MRI in imaging white matter integrity in blast-related mTBI acutely. PMID:26109715
Fotiou, Anastasios; Kanavou, Eleftheria; Stavrou, Myrto; Richardson, Clive; Kokkevi, Anna
2015-12-01
This study reports the prevalence of electronic cigarette (e-cigarette) use among adolescents in Greece and explores how dual smokers of e-cigarettes and combustible (conventional) cigarettes differ from smokers of only combustible cigarettes across socio-demographic, familial, psychosomatic health and substance use characteristics. Self-reports on smoking were collected from a nationally representative sample of 1320 15-year-old Greek students in the 2014 Health Behaviour in School-aged Children study. Bivariate and multivariable logistic regression analyses were carried out with dependent variables a) lifetime smoking of conventional cigarettes and b) lifetime e-cigarette use among lifetime smokers. About 36.9% of 15-year-olds reported lifetime smoking of conventional cigarettes, and 16.6% lifetime use of e-cigarettes, mostly experimental (0.5% reported current e-cigarette use). Six in seven adolescents who had ever used e-cigarettes had also smoked conventional cigarettes. Peers who smoke and lifetime cannabis use were significant correlates of both lifetime conventional cigarette and e-cigarette smoking, but more strongly of smoking conventional cigarettes. Alcohol use and low parental monitoring correlated with tobacco smoking but not e-cigarette use. Girls were more likely than boys to report lifetime use of tobacco, but, among lifetime smokers, boys had almost seven times the odds of girls of e-cigarette use. Among lifetime smokers, low life satisfaction in females and current smoking of conventional tobacco were independently associated with experimentation with e-cigarettes. Experimental use of e-cigarettes is relatively widespread among adolescents in Greece. Targeted interventions should focus on male smokers and on the role of peer processes and cannabis use in the risk of experimenting with e-cigarettes. Copyright © 2015 Elsevier Ltd. All rights reserved.
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.
Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
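The greedy-superposition idea can be illustrated with a toy pursuit loop that repeatedly subtracts the template placement giving the largest decrease in residual squared error. This is not the authors' binary pursuit (no Bernoulli prior, no correlated-noise whitening, no sub-sample spike timing), just the core greedy subtraction mechanism that lets overlapping spikes be explained jointly:

```python
import numpy as np

def greedy_spike_sort(v, templates, thresh=0.0):
    """Toy greedy pursuit over spike templates.

    v: 1-D voltage trace; templates: list of 1-D spike waveforms.
    Repeatedly subtracts the (template, offset) whose removal most
    reduces the residual squared error, until no placement helps.
    Returns the sorted list of (template_index, time) and the residual.
    """
    r = v.astype(float).copy()
    spikes = []
    while True:
        best = None
        for k, w in enumerate(templates):
            # decrease in ||r||^2 from subtracting w at offset t is
            # 2 r.w_t - ||w||^2, computed for every valid t at once
            corr = np.correlate(r, w, mode="valid")
            gain = 2 * corr - np.dot(w, w)
            t = int(np.argmax(gain))
            if best is None or gain[t] > best[0]:
                best = (gain[t], k, t)
        if best[0] <= thresh:
            break                          # nothing reduces the error
        _, k, t = best
        r[t : t + len(templates[k])] -= templates[k]
        spikes.append((k, t))
    return sorted(spikes), r
```

A clustering-based sorter assigns each detected event to a single template, so two overlapping spikes produce one distorted waveform that matches neither; subtracting templates from the residual instead lets both be recovered.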
Abrosimov, Alexander; Saenko, Vladimir; Meirmanov, Serik; Nakashima, Masahiro; Rogounovitch, Tatiana; Shkurko, Olesya; Lushnikov, Eugeny; Mitsutake, Norisato; Namba, Hiroyuki; Yamashita, Shunichi
2007-01-01
This study addressed the immunohistochemical expression of MUC1 in papillary thyroid carcinoma (PTC) of different histotypes, sizes, and morphological features of aggressiveness, and its correlation with the overexpression of cyclin D1, a target molecule of the Wnt pathway. MUC1 expression was examined in a total of 209 PTCs. Cytoplasmic MUC1 expression was elevated in the tall, columnar cell and oncocytic variants (100%), Warthin-like (78%), and conventional PTCs (61%), and in papillary microcarcinoma (PMC) with the conventional growth pattern (52%). On the contrary, it was low in the follicular variant (27%) of PTC and PMCs with follicular architecture (13%). Cytoplasmic MUC1 accumulation did not associate with any clinicopathological features except peritumoral lymphoid infiltration in PTCs and in PMCs with the conventional growth pattern. MUC1 staining correlated with cyclin D1 overexpression in conventional PTCs and PMCs and PMCs with follicular architecture. The results demonstrate that MUC1 expression varies broadly in different histological variants of PTC, being the lowest in tumors with follicular structure. In general, it does not prove to be a prognosticator of PTC aggressiveness. A high correlation between MUC1 and cyclin D1 implies MUC1 involvement in the Wnt cascade functioning in a large subset of human PTCs and PMCs.
Dahlström, C; Allem, R; Uesaka, T
2011-02-01
We have developed a new method for characterizing the microstructure of paper coating using an argon ion beam milling technique and field emission scanning electron microscopy. The combination of these two techniques produces extremely high-quality images with very few artefacts, which are particularly suited for quantitative analyses of coating structures. A new evaluation method has been developed using marker-controlled watershed segmentation of the secondary electron images. The high-quality secondary electron images with well-defined pores make it possible to use this semi-automatic segmentation method. One advantage of using secondary electron images instead of backscattered electron images is the ability to avoid possible overestimation of the porosity because of the signal depth. A comparison was made between the new method and the conventional method using greyscale histogram thresholding of backscattered electron images. The results showed that the conventional method overestimated the pore area by 20% and detected around 5% more pores than the new method. As examples of the application of the new method, we have investigated the distributions of coating binders and the relationship between local coating porosity and base sheet structure. The technique revealed, for the first time with direct evidence, the long-suspected coating non-uniformity, i.e. binder migration, and the correlation between coating porosity and base sheet mass density, in a straightforward way. © 2010 The Authors Journal compilation © 2010 The Royal Microscopical Society.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure images fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect the ghost in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented in super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly with details preserving and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have less ghost artifacts and provide a better visual quality than previous approaches.
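The ZNCC similarity used above for super-pixel grouping is invariant to local gain and offset changes, which is exactly what differently exposed images of a static scene exhibit; a low score therefore flags motion (ghosting) rather than an exposure difference. A minimal sketch, computed here over whole patches rather than over super-pixels:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    Returns +1 for patches identical up to gain/offset, roughly 0 for
    unrelated content, -1 for inverted content.
    """
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

In the fusion pipeline, patches whose ZNCC against the reference exposure falls below a threshold would have their fusion weights suppressed.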
NASA Astrophysics Data System (ADS)
Banishev, A. A.; Vrzheshch, E. P.; Shirshin, E. A.
2009-03-01
Individual photophysical parameters of the chromophore of the fluorescent protein mRFP1 and its two mutants (amino-acid substitution at position 66: the mRFP1/Q66C and mRFP1/Q66S proteins) are determined. For this purpose, apart from conventional methods of fluorimetry and spectrophotometry, nonlinear laser fluorimetry is used. It is shown that the individual extinction coefficients of the chromophore of the proteins correlate (correlation coefficient above 0.9) with the volume of the substituted amino-acid residue at position 66 (similar to the positions of the absorption, fluorescence excitation and emission maxima).
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
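The conventional ordinary least-squares superposition that THESEUS improves upon is the Kabsch procedure: center both coordinate sets, then obtain the optimal rotation from an SVD of their covariance matrix. A sketch of that LS baseline (not the THESEUS ML estimator itself, which additionally down-weights variable regions and models inter-atom correlations):

```python
import numpy as np

def ls_superpose(X, Y):
    """Ordinary least-squares (Kabsch) superposition of X onto Y.

    X, Y: (N, 3) arrays of corresponding atomic coordinates.
    Returns rotation R and the two centroids, such that
    (X - cX) @ R.T + cY best fits Y in the LS sense.
    """
    cX, cY = X.mean(axis=0), Y.mean(axis=0)
    A, B = X - cX, Y - cY
    U, S, Vt = np.linalg.svd(A.T @ B)          # covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    return R, cX, cY

def apply_superposition(X, R, cX, cY):
    """Transform X with the fitted rotation and translation."""
    return (X - cX) @ R.T + cY
```

Because every atom gets equal weight here, a few highly mobile loop atoms can dominate the fit; that is precisely the failure mode the ML criterion addresses.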
Monitoring of pollutant gases in aircraft exhausts by gas-filter correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gryvnak, D.A.; Burch, D.E.
1976-01-01
An infrared instrument using a gas-filter correlation technique was used to monitor NO and CO by looking across the exhaust plume of a T56 jet engine combustor. The instrument, built previously by Aeronutronic Ford for EPA to monitor pollutant gases in smokestack exhausts, was modified for use on the combustor. Temperatures and concentrations ranged from 300 to 930 K and up to 130 ppm for NO, and from 300 to 550 K and up to 220 ppm for CO. The infrared results compared reasonably well with results that were obtained simultaneously by withdrawing the sample using probe techniques and analyzing the gas with a conventional gas analyzer.
How device-independent approaches change the meaning of physical theory
NASA Astrophysics Data System (ADS)
Grinbaum, Alexei
2017-05-01
Dirac sought an interpretation of mathematical formalism in terms of physical entities and Einstein insisted that physics should describe "the real states of the real systems". While Bell inequalities put into question the reality of states, modern device-independent approaches do away with the idea of entities: physical theory may contain no physical systems. Focusing on the correlations between operationally defined inputs and outputs, device-independent methods promote a view more distant from the conventional one than Einstein's 'principle theories' were from 'constructive theories'. On the examples of indefinite causal orders and almost quantum correlations, we ask a puzzling question: if physical theory is not about systems, then what is it about? Device-independent models suggest that physical theory can be 'about' languages.
Sparse models for correlative and integrative analysis of imaging and genetic data
Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.
2014-01-01
The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets presents a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples on how these approaches are used for the detection of risk genes and classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets including their interactions such as epistasis. PMID:25218561
NASA Astrophysics Data System (ADS)
Subhash, Hrebesh M.; O'Gorman, Sean; Neuhaus, Kai; Leahy, Martin
2014-03-01
In this paper we demonstrate a novel application of correlation mapping optical coherence tomography (cm-OCT) for volumetric nailfold capillaroscopy (NFC). NFC is a widely used non-invasive diagnostic method for analyzing capillary morphology and microvascular abnormalities of the nailfold area in a range of disease conditions. However, conventional NFC is incapable of volumetric imaging, even though volumetric quantitative microangiopathic parameters such as plexus morphology, capillary density, and morphologic anomalies of the end row loops are most critical. cm-OCT is a recently developed coherence-domain, magnitude-based angiographic modality that takes advantage of the time-varying speckle effect, which is normally dominant in the vicinity of vascular regions compared with static tissue regions. It uses the correlation coefficient between two adjacent B-frames as a direct measurement of decorrelation to enhance the visibility of the depth-resolved microcirculation.
Application of selected methods of remote sensing for detecting carbonaceous water pollution
NASA Technical Reports Server (NTRS)
Davis, E. M.; Fosbury, W. J.
1973-01-01
A reach of the Houston Ship Channel was investigated during three separate overflights correlated with ground truth sampling on the Channel. Samples were analyzed for such conventional parameters as biochemical oxygen demand, chemical oxygen demand, total organic carbon, total inorganic carbon, turbidity, chlorophyll, pH, temperature, dissolved oxygen, and light penetration. Infrared analyses conducted on each sample included reflectance ATR analysis, carbon tetrachloride extraction of organics and subsequent scanning, and KBr evaporate analysis of CCl4 extract concentrate. Imagery which was correlated with field and laboratory data developed from ground truth sampling included that obtained from aerial KA62 hardware, RC-8 metric camera systems, and the RS-14 infrared scanner. The images were subjected to analysis by three film density gradient interpretation units. Data were then analyzed for correlations between imagery interpretation as derived from the three instruments and laboratory infrared signatures and other pertinent field and laboratory analyses.
Chauhan, Amit Kumar; Bhatia, Rohan; Agrawal, Sanjay
2018-01-01
The objective of the present study was to evaluate the skin-epidural space distance as assessed by ultrasonography and the conventional loss of resistance (LOR) technique, and to find the correlation of epidural depth with body mass index (BMI). Ninety-eight patients of either sex, American Society of Anesthesiology I/II, BMI <30 kg/m², requiring lumbar epidural for surgery were enrolled. The epidural space was assessed with a curvilinear ultrasound (US) probe, 2-5 MHz, in the transverse plane at the L3-L4 intervertebral space. Thereafter, the epidural depth from skin was assessed with the conventional LOR method while performing the epidural. The needle depth (ND) was measured using a sterile linear scale, and any change in the needle direction or intervertebral space was noted. The patients were demographically similar. The depth of the epidural space measured by US depth (UD) was 3.96 ± 0.44 cm (range 3.18-5.44 cm) and by ND was 4.04 ± 0.52 cm (range 2.7-5.7 cm). The Pearson's correlation coefficient (r) between UD and ND was 0.935 (95% confidence interval: 0.72-0.92, r² = 0.874, P < 0.001), and Bland-Altman analysis revealed 95% limits of agreement of -0.494 to 0.652 cm. The present study demonstrates a good correlation between UD and ND and shows that a preprocedural US scan in the transverse plane provides an accurate needle entry site with a high success rate in a single attempt for lumbar epidurals in patients with a BMI <30 kg/m².
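The Bland-Altman limits of agreement reported in this kind of comparison follow directly from the paired differences. A minimal sketch, assuming the conventional bias ± 1.96 SD definition (the function name and example values are ours, not the study's data):

```python
import numpy as np

def bland_altman_limits(ud, nd):
    """Bland-Altman agreement sketch for two paired measurements of the
    same quantity (e.g. ultrasound depth vs needle depth, in cm).
    Returns (bias, lower limit, upper limit), where the 95% limits of
    agreement are bias +/- 1.96 * SD of the pairwise differences."""
    ud = np.asarray(ud, dtype=float)
    nd = np.asarray(nd, dtype=float)
    diff = nd - ud
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If roughly 95% of the paired differences fall inside these limits and the limits are clinically acceptable, the two methods are considered to agree.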
Carvalho-Oliveira, Regiani; Amato-Lourenço, Luís F; Moreira, Tiana C L; Silva, Douglas R Rocha; Vieira, Bruna D; Mauad, Thais; Saiki, Mitiko; Saldiva, Paulo H Nascimento
2017-02-01
The majority of epidemiological studies correlate the cardiorespiratory effects of air pollution exposure by considering the concentrations of pollutants measured from conventional monitoring networks. The conventional air quality monitoring methods are expensive, and their data are insufficient for providing good spatial resolution. We hypothesized that bioassays using plants could effectively determine pollutant gradients, thus helping to assess the risks associated with air pollution exposure. The study regions were determined from different prevalent respiratory death distributions in the Sao Paulo municipality. Samples of tree flower buds were collected from twelve sites in four regional districts. The genotoxic effects caused by air pollution were tested through a pollen abortion bioassay. Elements derived from vehicular traffic that accumulated in tree barks were determined using energy-dispersive X-ray fluorescence spectrometry (EDXRF). Mortality data were collected from the mortality information program of Sao Paulo City. Principal component analysis (PCA) was applied to the concentrations of elements accumulated in tree barks. Pearson correlation and exponential regression were performed considering the elements, pollen abortion rates and mortality data. PCA identified five factors, of which four represented elements related to vehicular traffic. The elements Al, S, Fe, Mn, Cu, and Zn showed a strong correlation with mortality rates (R² > 0.87) and pollen abortion rates (R² > 0.82). These results demonstrate that tree barks and pollen abortion rates allow for correlations between vehicular traffic emissions and associated outcomes such as genotoxic effects and mortality data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
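The canonical correlations underlying this family of methods can be computed from two centered data blocks. The following is a minimal sketch of plain CCA only, assuming the standard whiten-then-SVD construction; it does not reproduce the paper's spatially constrained CCA or its directional contrast test statistic:

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-10):
    """Minimal CCA sketch: canonical correlations between two centered
    data blocks X (n x p) and Y (n x q). Each block is whitened via its
    thin SVD; the canonical correlations are the singular values of the
    cross-product of the two whitened bases."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, sx, _ = np.linalg.svd(X, full_matrices=False)
    Uy, sy, _ = np.linalg.svd(Y, full_matrices=False)
    Ux = Ux[:, sx > eps]  # drop rank-deficient directions
    Uy = Uy[:, sy > eps]
    rho = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return np.clip(rho, 0.0, 1.0)
```

When the column spaces of X and Y coincide (e.g. Y is an invertible linear mixture of X), every canonical correlation equals 1; noise lowers them, which is what a CCA-based activation test exploits.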
A combined evaluation of the characteristics and acute toxicity of antibiotic wastewater.
Yu, Xin; Zuo, Jiane; Li, Ruixia; Gan, Lili; Li, Zaixing; Zhang, Fei
2014-08-01
The conventional parameters and acute toxicities of antibiotic wastewater collected from each treatment unit of an antibiotic wastewater treatment plant have been investigated. The investigation of the conventional parameters indicated that the antibiotic wastewater treatment plant performed well under significant fluctuation in influent water quality. The results of acute toxicity indicated that the toxicity of antibiotic wastewater could be reduced by 94.3 percent on average after treatment. However, treated antibiotic effluents were still toxic to Vibrio fischeri. The toxicity of antibiotic production wastewater could be attributed to the joint effects of toxic compound mixtures in wastewater. Moreover, aerobic biological treatment processes, including the sequencing batch reactor (SBR) and aerobic biofilm reactor, played the most important role, reducing toxicity by 92.4 percent. Pearson's correlation coefficients revealed that toxicity had a strong and positive linear correlation with organic substances, nitrogenous compounds, S²⁻, volatile phenol, cyanide, As, Zn, Cd, Ni and Fe. Ammonia nitrogen (NH₄⁺) was the greatest contributor to toxicity according to the stepwise regression method. The multiple regression model was a good fit for [TU50-15 min] as a function of [NH₄⁺] with a determination coefficient of 0.981. Copyright © 2014 Elsevier Inc. All rights reserved.
The conventional tuning fork as a quantitative tool for vibration threshold.
Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim
2018-01-01
This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. CTF provides quantitative values that are as good as those provided by RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018. © 2017 Wiley Periodicals, Inc.
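Spearman's correlation, as used here to relate the CTF and RSTF thresholds, is simply Pearson's correlation applied to rank-transformed data. A sketch under that standard definition (the average-rank tie handling and function name are ours):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation sketch: rank both series (ties get the
    average of the tied ranks), then compute Pearson's correlation on
    the ranks."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v, kind="stable")
        r = np.empty(len(v))
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            r[order[i:j + 1]] = (i + j) / 2 + 1  # average rank, 1-based
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because only ranks enter the computation, the statistic captures any monotonic relationship, which suits paired threshold readings on two different instruments.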
Pomeroy, Valerie M; Ward, Nick S; Johansen-Berg, Heidi; van Vliet, Paulette; Burridge, Jane; Hunter, Susan M; Lemon, Roger N; Rothwell, John; Weir, Christopher J; Wing, Alan; Walker, Andrew A; Kennedy, Niamh; Barton, Garry; Greenwood, Richard J; McConnachie, Alex
2014-02-01
Functional strength training in addition to conventional physical therapy could enhance upper limb recovery early after stroke more than movement performance therapy plus conventional physical therapy. To determine (a) the relative clinical efficacy of conventional physical therapy combined with functional strength training and conventional physical therapy combined with movement performance therapy for upper limb recovery; (b) the neural correlates of response to conventional physical therapy combined with functional strength training and conventional physical therapy combined with movement performance therapy; (c) whether any one or combination of baseline measures predict motor improvement in response to conventional physical therapy combined with functional strength training or conventional physical therapy combined with movement performance therapy. Randomized, controlled, observer-blind trial. The sample will consist of 288 participants with upper limb paresis resulting from a stroke that occurred within the previous 60 days. All will be allocated to conventional physical therapy combined with functional strength training or conventional physical therapy combined with movement performance therapy. Functional strength training and movement performance therapy will be undertaken for up to 1.5 h/day, five days/week, for six weeks. Measurements will be undertaken before randomization, six weeks thereafter, and six months after stroke. The primary efficacy outcome will be the Action Research Arm Test. Explanatory measurements will include voxel-wise estimates of brain activity during hand movement, brain white matter integrity (fractional anisotropy), and brain-muscle connectivity (e.g. latency of motor evoked potentials). The primary clinical efficacy analysis will compare treatment groups using a multilevel normal linear model adjusting for stratification variables and for which therapist administered the treatment.
Effect of conventional physical therapy combined with functional strength training versus conventional physical therapy combined with movement performance therapy will be summarized using the adjusted mean difference and 95% confidence interval. To identify the neural correlates of improvement in both groups, we will investigate associations between change from baseline in clinical outcomes and each explanatory measure. To identify baseline measurements that independently predict motor improvement, we will develop a multiple regression model. © 2013 The Authors. International Journal of Stroke published by John Wiley & Sons Ltd on behalf of World Stroke Organization.
Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie
2017-11-01
While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.
Novel modes and adaptive block scanning order for intra prediction in AV1
NASA Astrophysics Data System (ADS)
Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz
2017-09-01
The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is Intra-Prediction, which determines the codec's compression efficiency. Intra prediction enables significant reduction of the Intra-Frame (I frame) size and, therefore, contributes to efficient exploitation of bandwidth. In this presentation, we propose new Intra-Prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning-order method that maximizes spatial correlation in order to reduce prediction error; and (2) new Intra-Prediction modes implemented in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, so the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks, predicting first the blocks with the maximum number of surrounding, already Inter-Predicted blocks.
Using the modified scanning order method and the new modes reduced the MSE by up to five times compared to the conventional TM mode / raster scan, and by up to two times compared to the conventional CALIC mode / raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with Inter-Prediction and, in turn, the efficiency of the new scanning method). For the same cases, the PSNR improved by up to 7.4 dB and up to 4 dB, respectively. The new modes yielded a 5% improvement in BD-Rate over traditionally used modes when run on K-Frames, which is expected to yield a 1% overall improvement.
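The MSE-to-PSNR relationship behind these figures is fixed by definition: a k-fold MSE reduction raises PSNR by 10·log10(k) dB, so an exactly five-fold reduction gives about 7.0 dB, consistent with the up-to-7.4 dB reported. A sketch, assuming 8-bit imagery (peak value 255):

```python
import math

def psnr_db(mse, peak=255.0):
    """PSNR sketch: converts a mean squared prediction error into dB
    for imagery with the given peak sample value. A k-fold reduction
    in MSE raises PSNR by 10*log10(k) dB."""
    if mse <= 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

For example, cutting the residual MSE from some value to one fifth of it improves PSNR by 10·log10(5) ≈ 6.99 dB regardless of the starting point.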
Wavelet-space correlation imaging for high-speed MRI without motion monitoring or data segmentation.
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2015-12-01
This study aims to (i) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and (ii) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called "wavelet-space correlation imaging", is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI, and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi
Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic state of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance between the micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane condition of E. coli on heat treatment temperature was analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and those biological properties was obtained. This result suggests that the DEPIM method can serve as an effective technique for monitoring complex changes in the biological states of microorganisms.
The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.
O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard
2010-08-01
The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.
Evaluation of maternal serum alpha-foetoprotein assay using dry blood spot samples.
González, C; Guerrero, J M; Elorza, F L; Molinero, P; Goberna, R
1988-02-01
The quantification of alpha-foetoprotein in dry blood spots from pregnant women was evaluated using a conventional radioimmunoassay (RIA) with a monospecific antibody. The stability of alpha-foetoprotein in dry blood spots on filter paper was evaluated with respect to mailing, distances travelled, and the existence of high summer temperatures in our region. The results obtained show that blood alpha-foetoprotein remains stable on dry filter spots sent by mail and for up to four weeks at 4, 25 and 37 degrees C. The analytical method used has a minimal detectable concentration of 10 +/- 1.9 international kilo-units/l. Both inter- and intra-assay variabilities are smaller than 10%, and this method can provide results comparable with those of conventional serum assays. Results from dry blood spots and serum samples (the latter analysed by both RIA and two-site enzyme immunoassay) exhibited a good correlation (r = 0.98 and r = 0.97, p less than 0.001). The design of the assay and the nature of the samples make this method suitable for screening programmes for the antenatal detection of open neural tube defects.
Hayashi, Kiyotada; Nagumo, Yoshifumi; Domoto, Akiko
2016-11-15
In comparative life cycle assessments of agricultural production systems, analyses of both the trade-offs between environmental impacts and crop productivity and of the uncertainties specific to agriculture such as fluctuations in greenhouse gas (GHG) emissions and crop yields are crucial. However, these two issues are usually analyzed separately. In this paper, we present a framework to link trade-off and uncertainty analyses; correlated uncertainties are integrated into environment-productivity trade-off analyses. We compared three rice production systems in Japan: a system using a pelletized, nitrogen-concentrated organic fertilizer made from poultry manure using closed-air composting techniques (high-N system), a system using a conventional organic fertilizer made from poultry manure using open-air composting techniques (low-N system), and a system using a chemical compound fertilizer (conventional system). We focused on two important sources of uncertainties in paddy rice cultivation-methane emissions from paddy fields and crop yields. We found trade-offs between the conventional and high-N systems and the low-N system and the existence of positively correlated uncertainties in the conventional and high-N systems. We concluded that our framework is effective in recommending the high-N system compared with the low-N system, although the performance of the former is almost the same as the conventional system. Copyright © 2016 Elsevier B.V. All rights reserved.
Boosting spin-caloritronic effects by attractive correlations in molecular junctions.
Weymann, Ireneusz
2016-01-25
In nanoscopic systems quantum confinement and interference can lead to an enhancement of thermoelectric properties as compared to conventional bulk materials. For nanostructures, such as molecules or quantum dots coupled to external leads, the thermoelectric figure of merit can reach or even exceed unity. Moreover, in the presence of external magnetic field or when the leads are ferromagnetic, an applied temperature gradient can generate a spin voltage and an associated spin current flow in the system, which makes such nanostructures particularly interesting for future thermoelectric applications. In this study, by using the numerical renormalization group method, we examine the spin-dependent thermoelectric transport properties of a molecular junction involving an orbital level with attractive Coulomb correlations coupled to ferromagnetic leads. We analyze how attractive correlations affect the spin-resolved transport properties of the system and find a nontrivial dependence of the conductance and tunnel magnetoresistance on the strength and sign of those correlations. We also demonstrate that attractive correlations can lead to an enhancement of the spin thermopower and the figure of merit, which can be controlled by a gate voltage.
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results; the significance of this correlation for nongray methods is discussed. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.
Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L
2008-06-01
Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
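Of the two tools named, multiscale entropy can be sketched from its published definition: coarse-grain the interbeat series at each scale, then compute sample entropy. This is an illustrative implementation, not the authors' code; the parameter choices m = 2 and r = 0.2·SD are conventional defaults, and multiscale time irreversibility is not shown.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy sketch: -ln of the conditional probability that
    sequences matching for m points (Chebyshev distance within
    r * SD) also match for m + 1 points. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        nt = len(x) - m  # same number of templates for m and m + 1
        templ = np.array([x[i:i + mm] for i in range(nt)])
        c = 0
        for i in range(nt - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, max_scale=4, m=2, r=0.2):
    """Coarse-grain the series at each scale (non-overlapping window
    means) and compute sample entropy at every scale."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        cg = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(cg, m, r))
    return out
```

The diagnostic signal lies in the shape of the entropy-versus-scale curve: uncorrelated noise loses entropy as the scale grows, whereas healthy interbeat series retain complexity across scales.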
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty-five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations were found between LSV and conventional estimates, namely HFA pattern standard deviation and short-term fluctuation. CONCLUSION--LSV does not depend on reference data from normal subjects or on repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
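The abstract does not give the LSV formula, so the following is only a hypothetical stand-in illustrating the general idea of a neighbourhood-based, image-processing estimate of pointwise variability: the standard deviation of sensitivity values within each small window of the visual-field grid. The function name, window size, and use of a plain local SD are all our assumptions, not the published definition.

```python
import numpy as np

def local_spatial_variability(field, w=3):
    """Hypothetical LSV-style map: standard deviation of sensitivity
    values (dB) within each w x w neighbourhood of a visual-field
    grid. Illustrates a single-test, neighbourhood-derived variability
    estimate needing no normative data or repeated thresholds."""
    h, wd = field.shape
    out = np.zeros((h - w + 1, wd - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = field[i:i + w, j:j + w].std()
    return out
```

A uniform field maps to zero everywhere; localized sensitivity loss, as in early glaucomatous defects, produces elevated values around the affected points.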
Liu, Chen; Guan, Zhao; Xu, Qinzhu; Zhao, Lei; Song, Ying; Wang, Hui
2016-01-01
Fractures are common among aged people, and rapid assessment of the coagulation status is important. The thromboelastography (TEG) test can give a series of coagulation parameters and has been widely used in clinics. In this research, we looked at fracture patients over 60 and compared their TEG results with those of healthy controls. Since there is a paucity of studies comparing TEG assessments with conventional coagulation tests, we aim to clarify the relationship between TEG values and the values given by conventional coagulation tests. Forty fracture patients (27 femur and 13 humerus) over 60 years old were included in the study. The change in their coagulation status was evaluated by TEG before surgery within 4 hours after the fracture. Changes in TEG parameters were analyzed compared with controls. Conventional coagulation test results for the patients, including activated partial thromboplastin time (APTT), international normalized ratio (INR), fibrinogen, and platelets, were also acquired, and correlation analysis was done with TEG parameters, measuring similar aspects of the coagulation cascade. In addition, the sensitivity and specificity of TEG parameters for detecting raised fibrinogen levels were also analyzed. The K (time to 20 mm clot amplitude) and R (reaction time) values of aged fracture patients were lower than controls. The values for angle, maximal amplitude (MA), and coagulation index (CI) were raised compared with controls, indicating a hypercoagulable state. Correlation analysis showed that there were significant positive correlations between fibrinogen and MA/angle, between platelets and MA, and between APTT and R as well. There was significant negative correlation between fibrinogen and K. In addition, K values have better sensitivity and specificity for detecting elevated fibrinogen concentration than angle and MA values. 
Aged fracture patients tend to be in a hypercoagulable state, and this could be effectively reflected by a TEG test. There were correlations between TEG parameters and corresponding conventional tests. K values can better predict elevated fibrinogen levels in aged fracture patients. PMID:27311005
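The correlation and sensitivity/specificity analyses reported above can be sketched in a few lines of stdlib Python. The threshold and the data in the usage test are hypothetical illustrations, not values from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def sensitivity_specificity(values, labels, threshold, positive_below=True):
    """Call a case positive when the parameter falls below (or above) the threshold,
    then compare with the true labels (1 = elevated fibrinogen, 0 = normal).
    Requires at least one positive and one negative label."""
    tp = fp = tn = fn = 0
    for v, truth in zip(values, labels):
        predicted = (v < threshold) if positive_below else (v > threshold)
        if predicted and truth:
            tp += 1
        elif predicted and not truth:
            fp += 1
        elif not predicted and not truth:
            tn += 1
        else:
            fn += 1
    return tp / (tp + fn), tn / (tn + fp)
```

A lower K at a fixed cutoff flagging elevated fibrinogen corresponds to `positive_below=True`; sweeping the threshold would trace out the ROC curve behind the reported sensitivity/specificity comparison.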
Study on biofilm-forming properties of clinical isolates of Staphylococcus aureus.
Taj, Yasmeen; Essa, Farhan; Aziz, Faisal; Kazmi, Shahana Urooj
2012-05-14
The purpose of this study was to observe the formation of biofilm, an important virulence factor, by isolates of Staphylococcus aureus (S. aureus) in Pakistan by different conventional methods and through electron microscopy. We screened 115 strains of S. aureus isolated from different clinical specimens by tube method (TM), air-liquid interface coverslip assay method, Congo red agar (CRA) method, and scanning electron microscopy (SEM). Out of 115 S. aureus isolates, 63 (54.78%) showed biofilm formation by tube method. Biofilm forming bacteria were further categorized as high producers (n = 23, 20%) and moderate producers (n = 40, 34.78%). TM coordinated well with the coverslip assay for strong biofilm-producing strains in 19 (16.5%) isolates. By coverslip method, weak producers were difficult to differentiate from biofilm negative isolates. Screening on CRA showed biofilm formation only in four (3.47%) strains. Scanning electron micrographs showed the biofilm-forming strains of S. aureus arranged in a matrix on the propylene surface and correlated well with the TM. Biofilm production is a marker of virulence for clinically relevant staphylococcal infections. It can be studied by various methods but screening on CRA is not recommended for investigation of biofilm formation in Staphylococcus aureus. Electron micrograph images correlate well with the biofilm production as observed by TM.
Alves, L.P.S.; Almeida, A.T.; Cruz, L.M.; Pedrosa, F.O.; de Souza, E.M.; Chubatsu, L.S.; Müller-Santos, M.; Valdameri, G.
2017-01-01
The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria, Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R² = 0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots. PMID:28099582
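The R² reported against the GC reference comes down to an ordinary least-squares fit over paired measurements; a stdlib-only sketch (the paired data in the test are illustrative, not the study's):

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line through (xs, ys)."""
    a, b = linear_fit(xs, ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    my = sum(ys) / len(ys)
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```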
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the amount of 3-D object data for which video holograms must be calculated is massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, of those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
Typing of Haemophilus influenzae by Coagglutination and Conventional Slide Agglutination
Shively, Roxanne G.; Shigel, Janet T.; Peterson, Ellena M.; De La Maza, Luis M.
1981-01-01
Coagglutination was compared with conventional slide agglutination for the typing of 297 clinical isolates of Haemophilus sp. A 100% correlation was found with the H. influenzae type b isolates. Coagglutination showed no false-positive reactions with the nontypable strains of H. influenzae and H. parainfluenzae isolates; however, conventional slide agglutination exhibited many false-positive and non-interpretable reactions. PMID:6977555
Saitoh, Sei; Ohno, Nobuhiko; Saitoh, Yurika; Terada, Nobuo; Shimo, Satoshi; Aida, Kaoru; Fujii, Hideki; Kobayashi, Tetsuro; Ohno, Shinichi
2018-01-01
Combined analysis of immunostaining for various biological molecules coupled with investigations of ultrastructural features of individual cells is a powerful approach for studies of cellular functions in normal and pathological conditions. However, weak antigenicity of tissues fixed by conventional methods poses a problem for immunoassays. This study introduces a method of correlative light and electron microscopy imaging of the same endocrine cells of compact and diffuse islets from human pancreatic tissue specimens. The method utilizes serial sections obtained from Epon-embedded specimens fixed with glutaraldehyde and osmium tetroxide. Double-immunofluorescence staining of thick Epon sections for endocrine hormones (insulin and glucagon) and regenerating islet-derived gene 1 α (REG1α) was performed following the removal of Epoxy resin with sodium ethoxide, antigen retrieval by autoclaving, and de-osmification treatment with hydrogen peroxide. The immunofluorescence images of endocrine cells were superimposed with the electron microscopy images of the same cells obtained from serial ultrathin sections. Immunofluorescence images showed well-preserved secretory granules in endocrine cells, whereas electron microscopy observations demonstrated corresponding secretory granules and intracellular organelles in the same cells. In conclusion, the correlative imaging approach developed by us may be useful for examining ultrastructural features in combination with immunolocalisation of endocrine hormones in the same human pancreatic islets. PMID:29622846
Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro
2013-02-01
The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™) referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. New automated lumen measurements showed better agreement with manual lumen area tracings compared with those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus, offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.
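The agreement metrics used above to compare the automated tracers against manual tracings, systematic error (mean difference) and random variability (SD of differences), amount to a Bland-Altman-style computation; a minimal sketch with hypothetical lumen areas:

```python
import statistics

def agreement_stats(auto_areas, manual_areas):
    """Mean difference (systematic error) and standard deviation of differences
    (random variability) between automated and manual lumen-area measurements."""
    diffs = [a - m for a, m in zip(auto_areas, manual_areas)]
    return statistics.mean(diffs), statistics.stdev(diffs)
```

A mean difference near zero with a small SD, as reported for the new algorithm, indicates both little bias and tight agreement with the manual reference.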
Zhong, Z W; Wu, R G; Wang, Z P; Tan, H L
2015-09-01
Conventional microfluidic devices are typically complex and expensive, requiring pneumatic control systems or highly precise pumps to control the flow. This work investigates an alternative: paper-based microfluidic devices. Size-based separation and extraction experiments separated free dye from a mixed protein and dye solution. Experimental results showed that pure fluorescein isothiocyanate could be separated from a solution of mixed fluorescein isothiocyanate and fluorescein isothiocyanate-labeled bovine serum albumin. Spectrophotometer readings clearly show that the extracted tartrazine sample did not contain any Blue-BSA, because its absorbance at 590 nm, the wavelength corresponding to Blue-BSA, was 0.000. These results demonstrate that paper-based microfluidic devices, which are inexpensive and easy to implement, can potentially replace their conventional counterparts through simple geometry designs and capillary action. These findings will potentially help future developments of paper-based microfluidic devices. Copyright © 2015 Elsevier B.V. All rights reserved.
Stiller, Wolfram; Skornitzke, Stephan; Fritz, Franziska; Klauss, Miriam; Hansen, Jens; Pahn, Gregor; Grenacher, Lars; Kauczor, Hans-Ulrich
2015-10-01
Study objectives were the quantitative evaluation of whether conventional abdominal computed tomography (CT) perfusion measurements mathematically correlate with quantitative single-acquisition dual-energy CT (DECT) iodine concentration maps, the determination of the optimum time of acquisition for achieving maximum correlation, and the estimation of the potential for radiation exposure reduction when replacing conventional CT perfusion by single-acquisition DECT iodine concentration maps. Dual-energy CT perfusion sequences were dynamically acquired over 51 seconds (34 acquisitions every 1.5 seconds) in 24 patients with histologically verified pancreatic carcinoma using dual-source DECT at tube potentials of 80 kVp and 140 kVp. Using software developed in-house, perfusion maps were calculated from 80-kVp image series using the maximum slope model after deformable motion correction. In addition, quantitative iodine maps were calculated for each of the 34 DECT acquisitions per patient. Within a manual segmentation of the pancreas, voxel-by-voxel correlation between the perfusion map and each of the iodine maps was calculated for each patient to determine the optimum time of acquisition t_opt, defined as the acquisition time of the iodine map with the highest correlation coefficient. Subsequently, regions of interest were placed inside the tumor and inside healthy pancreatic tissue, and correlation between mean perfusion values and mean iodine concentrations within these regions of interest at t_opt was calculated for the patient sample. The mean (SD) t_opt was 31.7 (5.4) seconds after the start of contrast agent injection. The mean (SD) perfusion values for healthy pancreatic and tumor tissues were 67.8 (26.7) mL per 100 mL/min and 43.7 (32.2) mL per 100 mL/min, respectively. At t_opt, the mean (SD) iodine concentrations were 2.07 (0.71) mg/mL in healthy pancreatic and 1.69 (0.98) mg/mL in tumor tissue, respectively.
Overall, the correlation between perfusion values and iodine concentrations was high (0.77), with correlation of 0.89 in tumor and of 0.56 in healthy pancreatic tissue at t_opt. Comparing the radiation exposure associated with a single DECT acquisition at t_opt (0.18 mSv) to that of an 80 kVp CT perfusion sequence (2.96 mSv) indicates that an average reduction of D_eff by 94% could be achieved by replacing conventional CT perfusion with a single-acquisition DECT iodine concentration map. Quantitative iodine concentration maps obtained with DECT correlate well with conventional abdominal CT perfusion measurements, suggesting that quantitative iodine maps calculated from a single DECT acquisition at an organ-specific and patient-specific optimum time of acquisition might be able to replace conventional abdominal CT perfusion measurements if the time of acquisition is carefully calibrated. This could lead to large reductions of radiation exposure to the patients while offering quantitative perfusion data for diagnosis.
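The per-patient search for the optimum acquisition time described above reduces to computing one correlation per acquisition and taking the argmax; a stdlib-Python sketch over hypothetical flattened ROI maps (not the study's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def best_acquisition_time(perfusion_map, iodine_maps, times):
    """Correlate each candidate iodine map, voxel by voxel, against the reference
    perfusion map (all maps flattened over the same ROI); return the acquisition
    time with the highest correlation coefficient and that coefficient."""
    rs = [pearson(perfusion_map, m) for m in iodine_maps]
    best = max(range(len(rs)), key=rs.__getitem__)
    return times[best], rs[best]
```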
Prevalence and Correlates of E-Cigarette Perceptions and Trial Among Early Adolescents in Mexico.
Thrasher, James F; Abad-Vivero, Erika N; Barrientos-Gutíerrez, Inti; Pérez-Hernández, Rosaura; Reynales-Shigematsu, Luz Miriam; Mejía, Raúl; Arillo-Santillán, Edna; Hernández-Ávila, Mauricio; Sargent, James D
2016-03-01
Assess the prevalence and correlates of e-cigarette perceptions and trial among adolescents in Mexico, where e-cigarettes are banned. Cross-sectional data were collected in 2015 from a representative sample of middle-school students (n = 10,146). Prevalence of e-cigarette awareness, relative harm, and trial were estimated, adjusting for sampling weights and school-level clustering. Multilevel logistic regression models adjusted for school-level clustering to assess correlates of e-cigarette awareness and trial. Finally, students who had tried only e-cigarettes were compared with students who had tried: (1) conventional cigarettes only; (2) both e-cigarettes and conventional cigarettes (dual triers); and (3) neither cigarette type (never triers). Fifty-one percent of students had heard about e-cigarettes, 19% believed e-cigarettes were less harmful than conventional cigarettes, and 10% had tried them. Independent correlates of e-cigarette awareness and trial included established risk factors for smoking, as well as technophilia (i.e., use of more media technologies) and greater Internet tobacco advertising exposure. Exclusive e-cigarette triers (4%) had significantly higher technophilia, bedroom Internet access, and Internet tobacco advertising exposure compared to conventional cigarette triers (19%) and never triers (71%) but not compared to dual triers (6%), although dual triers had significantly stronger conventional cigarette risk factors. This study suggests that adolescent e-cigarette awareness and use is high in Mexico, in spite of its e-cigarette ban. A significant number of medium-risk youth have tried e-cigarettes only, suggesting that e-cigarettes could lead to more intensive substance use. Strategies to reduce e-cigarette use should consider reducing exposures to Internet marketing. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Shahinfar, Saleh; Mehrabani-Yeganeh, Hassan; Lucas, Caro; Kalhor, Ahmad; Kazemian, Majid; Weigel, Kent A.
2012-01-01
Developing machine learning and soft computing techniques has provided many opportunities for researchers to establish new analytical methods in different areas of science. The objective of this study is to investigate the potential of two types of intelligent learning methods, artificial neural networks and neuro-fuzzy systems, in order to estimate breeding values (EBV) of Iranian dairy cattle. Initially, the breeding values of lactating Holstein cows for milk and fat yield were estimated using conventional best linear unbiased prediction (BLUP) with an animal model. Once that was established, a multilayer perceptron was used to build ANN to predict breeding values from the performance data of selection candidates. Subsequently, fuzzy logic was used to form an NFS, a hybrid intelligent system that was implemented via a local linear model tree algorithm. For milk yield the correlations between EBV and EBV predicted by the ANN and NFS were 0.92 and 0.93, respectively. Corresponding correlations for fat yield were 0.93 and 0.93, respectively. Correlations between multitrait predictions of EBVs for milk and fat yield when predicted simultaneously by ANN were 0.93 and 0.93, respectively, whereas corresponding correlations with reference EBV for multitrait NFS were 0.94 and 0.95, respectively, for milk and fat production. PMID:22991575
NASA Astrophysics Data System (ADS)
Lardner, Timothy; Li, Minghui; Gachagan, Anthony
2014-02-01
Materials with a coarse grain structure are becoming increasingly prevalent in industry due to their resilience to stress and corrosion. These materials are difficult to inspect with ultrasound because reflections from the grains lead to high noise levels which obscure the echoes of interest. Spatially Averaged Sub-Aperture Correlation Imaging (SASACI) is an advanced array beamforming technique that uses the cross-correlation between images from array sub-apertures to generate an image weighting matrix, in order to reduce noise levels. This paper presents a method inspired by SASACI that further improves imaging by using phase information to refine focusing and reduce noise. A-scans from adjacent array elements are cross-correlated using both signal amplitude and phase to refine delay laws and minimize phase aberration. The phase-based and amplitude-based corrected images are used as inputs to a two-dimensional cross-correlation algorithm that outputs a weighting matrix applicable to any conventional image. This approach was validated experimentally using a 5 MHz array and a coarse-grained Inconel 625 step wedge, and compared to the Total Focusing Method (TFM). Initial results show SNR improvements of over 20 dB compared to TFM, together with much higher resolution.
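The delay-law refinement step above rests on finding the lag at which adjacent A-scans correlate best. A 1-D sketch of the amplitude cross-correlation part follows (the actual method also exploits phase and 2-D image correlation; the signals in the test are synthetic):

```python
def xcorr_lag(a, b, max_lag):
    """Integer lag (in samples, |lag| <= max_lag < len) at which the normalized
    cross-correlation of two A-scans peaks; usable to refine per-element delays."""
    def ncc(lag):
        pairs = [(a[i], b[i + lag]) for i in range(len(a)) if 0 <= i + lag < len(b)]
        xs, ys = zip(*pairs)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=ncc)
```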
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J. Grant, E-mail: grant.hill@sheffield.ac.uk; Peterson, Kirk A., E-mail: kipeters@wsu.edu
New correlation consistent basis sets, cc-pVnZ-PP-F12 (n = D, T, Q), for all the post-d main group elements Ga–Rn have been optimized for use in explicitly correlated F12 calculations. The new sets, which include not only orbital basis sets but also the matching auxiliary sets required for density fitting both conventional and F12 integrals, are designed for correlation of valence sp, as well as the outer-core d electrons. The basis sets are constructed for use with the previously published small-core relativistic pseudopotentials of the Stuttgart-Cologne variety. Benchmark explicitly correlated coupled-cluster singles and doubles with perturbative triples [CCSD(T)-F12b] calculations of the spectroscopic properties of numerous diatomic molecules involving 4p, 5p, and 6p elements have been carried out and compared to the analogous conventional CCSD(T) results. In general the F12 results obtained with an n-zeta F12 basis set were comparable to conventional aug-cc-pVxZ-PP or aug-cc-pwCVxZ-PP basis set calculations obtained with x = n + 1 or even x = n + 2. The new sets used in CCSD(T)-F12b calculations are particularly efficient at accurately recovering the large correlation effects of the outer-core d electrons.
Sonomura, Takahiro; Furuta, Takahiro; Nakatani, Ikuko; Yamamoto, Yo; Honma, Satoru; Kaneko, Takeshi
2014-11-01
Ten years have passed since the serial block-face scanning electron microscopy (SBF-SEM) method was developed [1]. In this innovative method, samples were automatically sectioned with an ultramicrotome placed inside a scanning electron microscope column, and the block surfaces were imaged one after another by SEM to capture back-scattered electrons. The contrast-inverted images obtained by SBF-SEM were very similar to those acquired using conventional TEM. SBF-SEM has made it easy to acquire transmission electron microscopy (TEM)-like image stacks at the mesoscale, which can be correlated with images taken by confocal laser-scanning microscopy (CF-LSM). Furthermore, serial-section SEM has been combined with the focused ion beam (FIB) milling method [2]. FIB-incorporated SEM (FIB-SEM) has enabled the acquisition of three-dimensional images with a higher z-axis resolution compared to ultramicrotome-equipped SEM. We tried immunocytochemistry for FIB-SEM and correlated this immunoreactivity with that in CF-LSM. Dendrites of neurons in the rat neostriatum were visualized using a recombinant viral vector. Moreover, the thalamostriatal afferent terminals were immunolabeled with Cy5 fluorescence for vesicular glutamate transporter 2 (VGluT2). After detection of the sites of terminals apposed to the dendrites by using CF-LSM, GFP and VGluT2 immunoreactivities were further developed for EM by using immunogold/silver enhancement and immunoperoxidase/diaminobenzidine (DAB) methods, respectively. We showed that conventional immunocytochemical staining for TEM was applicable to FIB-SEM. Furthermore, several synaptic contacts, which were thought to exist on the basis of CF-LSM findings, were confirmed with FIB-SEM, revealing the usefulness of the combined method of CF-LSM and FIB-SEM. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Oliveira, Laís Rani Sales; Braga, Stella Sueli Lourenço; Bicalho, Aline Arêdes; Ribeiro, Maria Tereza Hordones; Price, Richard Bengt; Soares, Carlos José
2018-07-01
To describe a method of measuring the molar cusp deformation using micro-computed tomography (micro-CT), the propagation of enamel cracks using transillumination, and the effects of hygroscopic expansion after incremental and bulk-filling resin composite restorations. Twenty human molars received standardized Class II mesio-occlusal-distal cavity preparations. They were restored with either a bulk-fill resin composite, X-tra fil (XTRA), or a conventional resin composite, Filtek Z100 (Z100). The resin composites were tested for post-gel shrinkage using a strain gauge method. Cusp deformation (CD) was evaluated using the images obtained using a micro-CT protocol and using a strain-gauge method. Enamel cracks were detected using transillumination. The post-gel shrinkage of Z100 was higher than XTRA (P < 0.001). The amount of cusp deformation produced using Z100 was higher compared to XTRA, irrespective of the measurement method used (P < 0.001). The thinner lingual cusp always had a higher CD than the buccal cusp, irrespective of the measurement method (P < 0.001). A positive correlation (r = 0.78) was found between cusp deformation measured by micro-CT or by the strain-gauge method. After hygroscopic expansion of the resin composite, the cusp displacement recovered around 85% (P < 0.001). After restoration, Z100 produced more cracks than XTRA (P = 0.012). Micro-CT was an effective method for evaluating the cusp deformation. Transillumination was effective for detecting enamel cracks. There were fewer negative effects of polymerization shrinkage in bulk-fill resin restorations using XTRA than for the conventional incremental filling technique using conventional composite resin Z100. Shrinkage and cusp deformation are directly related to the formation of enamel cracks. Cusp deformation and crack propagation may increase the risk of tooth fracture. Copyright © 2018 Elsevier Ltd. All rights reserved.
Rosenthal, Rachel; Hamel, Christian; Oertli, Daniel; Demartines, Nicolas; Gantert, Walter A
2010-08-01
The aim of the present study was to investigate whether trainees' performance on a virtual reality angled laparoscope navigation task correlates with scores obtained on a validated conventional test of spatial ability. Fifty-six participants in a surgery workshop performed an angled laparoscope navigation task on the Xitact LS 500 virtual reality simulator. Performance parameters were correlated with the score of a validated paper-and-pencil test of spatial ability. Performance on the conventional spatial ability test significantly correlated with performance on the virtual reality task for overall task score (p < 0.001), task completion time (p < 0.001) and economy of movement (p = 0.035), but not for endoscope travel speed (p = 0.947). In conclusion, trainees' performance in a standardized virtual reality camera navigation task correlates with their innate spatial ability. This VR session holds potential to serve as an assessment tool for trainees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M.; Ma, L.Q.
1998-11-01
It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. However, based on method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was an overall better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn) but lower recoveries for Mg in Florida soils than did the total-recoverable digestion methods.
Diagnosing cysts with correlation coefficient images from 2-dimensional freehand elastography.
Booi, Rebecca C; Carson, Paul L; O'Donnell, Matthew; Richards, Michael S; Rubin, Jonathan M
2007-09-01
We compared the diagnostic potential of using correlation coefficient images versus elastograms from 2-dimensional (2D) freehand elastography to characterize breast cysts. In this preliminary study, which was approved by the Institutional Review Board and compliant with the Health Insurance Portability and Accountability Act, we imaged 4 consecutive human subjects (4 cysts, 1 biopsy-verified benign breast parenchyma) with freehand 2D elastography. Data were processed offline with conventional 2D phase-sensitive speckle-tracking algorithms. The correlation coefficient in the cyst and surrounding tissue was calculated, and appearances of the cysts in the correlation coefficient images and elastograms were compared. The correlation coefficient in the cysts was considerably lower (14%-37%) than in the surrounding tissue because of the lack of sufficient speckle in the cysts, as well as the prominence of random noise, reverberations, and clutter, which decorrelated quickly. Thus, the cysts were visible in all correlation coefficient images. In contrast, the elastograms associated with these cysts each had different elastographic patterns. The solid mass in this study did not have the same high decorrelation rate as the cysts, having a correlation coefficient only 2.1% lower than that of surrounding tissue. Correlation coefficient images may produce a more direct, reliable, and consistent method for characterizing cysts than elastograms.
Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, G.; Chopra, A.K.; Severson, C.D.
1997-12-01
Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.
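The indicator transform and spatial-correlation (variogram) evaluation described above can be sketched for a regularly sampled 1-D transect; the threshold and data here are hypothetical, and real workflows operate in 3-D with directional lags:

```python
def indicator(values, threshold):
    """Indicator transform: 1 where the log reads gas (or shale), 0 otherwise."""
    return [1 if v >= threshold else 0 for v in values]

def semivariogram(z, lags):
    """Empirical semivariance gamma(h) = mean of 0.5*(z[i+h]-z[i])**2 along a
    regular 1-D transect, for each requested lag h."""
    gamma = {}
    for h in lags:
        pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
        gamma[h] = sum(0.5 * (b - a) ** 2 for a, b in pairs) / len(pairs)
    return gamma
```

Fitting a model (spherical, exponential, etc.) to these empirical values is what supplies the spatial continuity used by conditional simulation.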
NASA Astrophysics Data System (ADS)
Cox, Courtney E.; Phifer, Jeremy R.; Ferreira da Silva, Larissa; Gonçalves Nogueira, Gabriel; Ley, Ryan T.; O'Loughlin, Elizabeth J.; Pereira Barbosa, Ana Karolyne; Rygelski, Brett T.; Paluch, Andrew S.
2017-02-01
Solubility parameter based methods have long been a valuable tool for solvent formulation and selection. Of these methods, the MOdified Separation of Cohesive Energy Density (MOSCED) has recently been shown to correlate well the equilibrium solubility of multifunctional non-electrolyte solids. However, before it can be applied to a novel solute, a limited amount of reference solubility data is required to regress the necessary MOSCED parameters. Here we demonstrate for the solutes methylparaben, ethylparaben, propylparaben, butylparaben, lidocaine and ephedrine how conventional molecular simulation free energy calculations or electronic structure calculations in a continuum solvent, here the SMD or SM8 solvation model, can instead be used to generate the necessary reference data, resulting in a predictive flavor of MOSCED. Adopting the melting point temperature and enthalpy of fusion of these compounds from experiment, we are able to predict equilibrium solubilities. We find the method is able to well correlate the (mole fraction) equilibrium solubility in non-aqueous solvents over four orders of magnitude with good quantitative agreement.
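The melting-point and enthalpy-of-fusion route mentioned above enters through the standard ideal-solubility expression ln x = -(ΔH_fus/R)(1/T - 1/T_m), which activity-coefficient models such as MOSCED then correct for solute-solvent interactions. A sketch of the ideal-solution part only (not the full MOSCED model):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ideal_solubility(dH_fus, T_m, T):
    """Ideal mole-fraction solubility of a solid solute from its enthalpy of
    fusion dH_fus (J/mol) and melting temperature T_m (K), at temperature T (K)."""
    return math.exp(-(dH_fus / R) * (1.0 / T - 1.0 / T_m))
```

At T = T_m the expression gives x = 1 (the pure melt), and it decreases as the solution is cooled below the melting point, as expected.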
NASA Astrophysics Data System (ADS)
Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi
2018-06-01
Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools in only a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. To date, several empirical correlations have been proposed that predict VS from well logging measurements and petrophysical data such as VP, porosity, and density. However, these empirical relations can only be used in limited cases. Intelligent systems and optimization algorithms offer inexpensive, fast, and efficient approaches for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning based optimization, imperialist competitive, and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS from conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the employed metaheuristic approaches with observed VS and also with VS predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS.
The results also demonstrate that in both sandstone and carbonate case studies, the performance of an artificial bee colony algorithm in VS prediction is slightly higher than two other alternative employed approaches.
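The Greenberg–Castagna approach builds lithology-specific polynomial VP–VS fits. As a minimal sketch of the empirical baseline this paper compares against, Castagna's widely quoted mudrock line relates VS to VP in km/s; the coefficients are standard literature values and the input VP here is made up:

```python
def vs_mudrock(vp_km_s):
    """Castagna et al. mudrock line: VS = 0.8621*VP - 1.1724 (km/s)."""
    return 0.8621 * vp_km_s - 1.1724

# A VP in the typical clastic-rock range (assumed value for illustration).
vs = vs_mudrock(3.0)
```

The metaheuristic models in the paper replace these fixed coefficients with coefficients optimized against the field's own log data.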
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, So-Yeon; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul 110-744; Biomedical Research Institute, Seoul National University College of Medicine, Seoul 110-744
Purpose: Texture analysis on fluence maps was performed to evaluate the degree of modulation for volumetric modulated arc therapy (VMAT) plans. Methods: A total of six textural features including angular second moment, inverse difference moment, contrast, variance, correlation, and entropy were calculated for fluence maps generated from 20 prostate and 20 head and neck VMAT plans. For each of the textural features, particular displacement distances (d) of 1, 5, and 10 were adopted. To investigate the deliverability of each VMAT plan, gamma passing rates of pretreatment quality assurance and differences in modulating parameters such as multileaf collimator (MLC) positions, gantry angles, and monitor units at each control point between the VMAT plans and the dynamic log files registered by the Linac control system during delivery were acquired. Furthermore, differences between the original VMAT plan and the plan reconstructed from the dynamic log files were also investigated. To test the performance of the textural features as indicators for the modulation degree of VMAT plans, Spearman's rank correlation coefficients (r_s) with the plan deliverability were calculated. For comparison purposes, conventional modulation indices for VMAT including the modulation complexity score for VMAT, leaf travel modulation complexity score, and modulation index supporting station parameter optimized radiation therapy (MI_SPORT) were calculated, and their correlations were analyzed in the same way. Results: There was no particular textural feature which always showed superior correlations with every type of plan deliverability. Considering the results comprehensively, contrast (d = 1) and variance (d = 1) generally showed considerable correlations with every type of plan deliverability. These textural features always showed higher correlations to the plan deliverability than did the conventional modulation indices, except in the case of modulating parameter differences.
The r_s values of contrast to the global gamma passing rates with criteria of 2%/2 mm, 2%/1 mm, and 1%/2 mm were 0.536, 0.473, and 0.718, respectively. The respective values for variance were 0.551, 0.481, and 0.688. In the case of local gamma passing rates, the r_s values of contrast were 0.547, 0.578, and 0.620, respectively, and those of variance were 0.519, 0.527, and 0.569. All of the r_s values in those cases were statistically significant (p < 0.003). In the cases of global and local gamma passing rates, MI_SPORT showed the highest correlations among the conventional modulation indices. For global passing rates, r_s values of MI_SPORT were −0.420, −0.330, and −0.632, respectively, and those for local passing rates were −0.455, −0.490, and −0.502. The values of r_s of contrast, variance, and MI_SPORT with the MLC errors were −0.863, −0.828, and 0.795, respectively, all with statistical significance (p < 0.001). The correlations with statistical significance between variance and dose-volumetric differences were observed more frequently than the others. Conclusions: The contrast (d = 1) and variance (d = 1) calculated from fluence maps of VMAT plans showed considerable correlations with the plan deliverability, indicating their potential use as indicators for assessing the degree of modulation of VMAT plans. Both contrast and variance consistently showed better performance than the conventional modulation indices for VMAT.
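The textural features named above are standard grey-level co-occurrence matrix (GLCM) statistics in the Haralick family. A minimal sketch for a horizontal displacement d follows; the quantization, feature definitions, and the toy image are illustrative assumptions, since the abstract does not spell out the authors' implementation:

```python
import numpy as np

def glcm_features(img, d=1, levels=2):
    """Horizontal grey-level co-occurrence matrix and a few Haralick features."""
    P = np.zeros((levels, levels))
    for i in range(img.shape[0]):
        for j in range(img.shape[1] - d):
            P[img[i, j], img[i, j + d]] += 1   # count co-occurring level pairs
    P /= P.sum()                               # normalize to probabilities
    ii, jj = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    contrast = float(np.sum(P * (ii - jj) ** 2))
    asm = float(np.sum(P ** 2))                # angular second moment
    mu = float(np.sum(P * ii))
    variance = float(np.sum(P * (ii - mu) ** 2))
    return contrast, asm, variance

# 2x2 checkerboard: every horizontal pair differs by exactly one grey level.
contrast, asm, variance = glcm_features(np.array([[0, 1], [1, 0]]), d=1, levels=2)
```

Highly modulated fluence maps have many abrupt level changes between neighboring beamlets, which raises GLCM contrast at small d — the intuition behind using these features as modulation indices.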
Mass density images from the diffraction enhanced imaging technique.
Hasnah, M O; Parham, C; Pisano, E D; Zhong, Z; Oltulu, O; Chapman, D
2005-02-01
Conventional x-ray radiography measures the projected x-ray attenuation of an object. It requires attenuation differences to obtain contrast of embedded features. In general, the best absorption contrast is obtained at x-ray energies where the absorption is high, meaning a high absorbed dose. Diffraction-enhanced imaging (DEI) derives contrast from absorption, refraction, and extinction. The refraction angle image of DEI visualizes the spatial gradient of the projected electron density of the object. The projected electron density often correlates well with the projected mass density and projected absorption in soft-tissue imaging, yet the mass density is not an "energy"-dependent property of the object, as is the case of absorption. This simple difference can lead to imaging with less x-ray exposure or dose. In addition, the mass density image can be directly compared (i.e., a signal-to-noise comparison) with conventional radiography. We present the method of obtaining the mass density image, the results of experiments in which comparisons are made with radiography, and an application of the method to breast cancer imaging.
van Gemert-Schriks, M C M
2007-05-01
Although Atraumatic Restorative Treatment (ART) claims to be a patient-friendly method of treatment, little scientific proof of this is available. The aim of this study, therefore, was to obtain a reliable measurement of the degree of discomfort that children experience during dental treatment performed according to the ART approach and during the conventional method. A total of 403 Indonesian schoolchildren were randomly divided into 2 groups. In each child, one class II restoration was carried out on a deciduous molar, either by means of ART or with rotary instruments (750 rpm). Discomfort scores were determined both by physiological measurements (heart rate) and behavioral observations (Venham scale). Venham scores showed a marked difference between the 2 groups, whereas heart rate scores differed significantly only during deep excavation. A correlation was found between Venham scores and heart rate measurements. Sex, initial anxiety and performing dentist were shown to be confounding variables. In conclusion, children treated according to the ART approach experience less discomfort than those treated with rotary instruments.
Lin, Chun-I; Lee, Yung-Chun
2014-08-01
Line-focused PVDF transducers and a defocusing measurement method are applied in this work to determine the dispersion curve of Rayleigh-like surface waves propagating along the circumferential direction of a solid cylinder. The conventional waveform processing method has been modified to cope with the non-linear relationship between the phase angle of wave interference and the defocusing distance induced by a cylindrically curved surface. A cross-correlation method is proposed to accurately extract the cylindrical Rayleigh wave velocity from measured data. Experiments have been carried out on one stainless steel and one glass cylinder. The experimentally obtained dispersion curves are in very good agreement with their theoretical counterparts. The variation of cylindrical Rayleigh wave velocity due to the cylindrical curvature is quantitatively verified using this new method. Other potential applications of this measurement method to cylindrical samples will be addressed. Copyright © 2014 Elsevier B.V. All rights reserved.
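Extracting a wave velocity from two waveforms via the cross-correlation peak can be sketched generically. This toy example with synthetic pulses, an assumed sampling rate, and an assumed propagation path length only illustrates the lag-picking step, not the authors' curvature-corrected processing:

```python
import numpy as np

def delay_by_xcorr(a, b, fs):
    """Delay (s) of signal b relative to a, from the cross-correlation peak."""
    c = np.correlate(b, a, mode="full")
    lag = int(np.argmax(c)) - (len(a) - 1)   # lag 0 sits at index len(a)-1
    return lag / fs

fs = 100.0                       # sampling rate in Hz (assumed)
a = np.zeros(64); a[10] = 1.0    # reference pulse
b = np.zeros(64); b[15] = 1.0    # same pulse arriving 5 samples later
delay = delay_by_xcorr(a, b, fs)
velocity = 0.01 / delay          # e.g. a 1 cm propagation path: v = d / t
```

In practice the pulses are band-limited wave packets and sub-sample interpolation around the correlation peak is used to refine the velocity estimate.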
Flow-gated radial phase-contrast imaging in the presence of weak flow.
Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen
2013-01-01
To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without using an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing methods including downsampling, complex difference, and correlation analysis to improve the estimation of flow-gating times in radial phase-contrast scans. Flow-related parameters, including mean velocity, mean flow rate, and flow volume, were quantitatively comparable with conventional ECG-gated imaging (R = 0.92-0.96, n = 9), demonstrating that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries and can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.
Zhao, W; Busto, R; Truettner, J; Ginsberg, M D
2001-07-30
The analysis of pixel-based relationships between local cerebral blood flow (LCBF) and mRNA expression can reveal important insights into brain function. Traditionally, LCBF and in situ hybridization studies for genes of interest have been analyzed in separate series. To overcome this limitation and to increase the power of statistical analysis, this study focused on developing a double-label method to measure LCBF and gene expression simultaneously by means of a dual-autoradiography procedure. A 14C-iodoantipyrine autoradiographic LCBF study was first performed. Serial brain sections (12 in this study) were obtained at multiple coronal levels and were processed in the conventional manner to yield quantitative LCBF images. Two replicate sections at each bregma level were then used for in situ hybridization. To eliminate the 14C-iodoantipyrine from these sections, a chloroform-washout procedure was first performed. The sections were then processed for in situ hybridization autoradiography for the probes of interest. This method was tested in Wistar rats subjected to 12 min of global forebrain ischemia by two-vessel occlusion plus hypotension, followed by 2 or 6 h of reperfusion (n=4-6 per group). LCBF and in situ hybridization images for heat shock protein 70 (HSP70) were generated for each rat, aligned by disparity analysis, and analyzed on a pixel-by-pixel basis. This method yielded a detailed inter-modality correlation between LCBF and HSP70 mRNA expression. The advantages of this method include reducing the number of experimental animals by one-half and providing accurate pixel-based correlations between different modalities in the same animals, thus enabling paired statistical analyses. This method can be extended to permit correlation of LCBF with the expression of multiple genes of interest.
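The pixel-by-pixel inter-modality correlation step can be sketched as a Pearson correlation over co-registered image pixels. This is a generic sketch with toy arrays, not the study's disparity-aligned autoradiography pipeline; the optional mask (e.g. a brain region of interest) is an assumed convenience:

```python
import numpy as np

def intermodality_r(img1, img2, mask=None):
    """Pearson correlation between two co-registered images, pixel-by-pixel."""
    x = np.asarray(img1, float).ravel()
    y = np.asarray(img2, float).ravel()
    if mask is not None:
        keep = np.asarray(mask).ravel().astype(bool)
        x, y = x[keep], y[keep]
    x = x - x.mean()
    y = y - y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

img1 = np.arange(9, dtype=float).reshape(3, 3)   # toy "LCBF" image
r_pos = intermodality_r(img1, 2 * img1 + 5)      # linearly related image
r_neg = intermodality_r(img1, -img1)             # anticorrelated image
```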
Study on diagnosis of micro-biomechanical structure using optical coherence tomography
NASA Astrophysics Data System (ADS)
Saeki, Souichi; Hashimoto, Youhei; Saito, Takashi; Hiro, Takafumi; Matsuzaki, Masunori
2007-02-01
Acute coronary syndromes, e.g. myocardial infarctions, are caused by the rupture of unstable plaques on coronary arteries. The stability of a plaque, which depends on the biomechanical properties of its fibrous cap, is therefore crucial to diagnose. Recently, Optical Coherence Tomography (OCT) has been developed as a cross-sectional imaging method for microstructural biological tissue with a high resolution of 1-10 μm. A multi-functional OCT system, e.g. one that estimates biomechanical characteristics, is a promising goal. It has been difficult, however, to estimate biomechanical characteristics, because OCT images consist of speckle patterns produced by light back-scattered from tissue. In this study, we present Optical Coherence Straingraphy (OCS), based on an OCT system, which can map the tissue strain distribution. It is built on a Recursive Cross-correlation (RC) technique, which provides a displacement vector distribution with high resolution. Furthermore, Adjacent Cross-correlation Multiplication (ACM) is introduced as a speckle noise reduction method: multiplying adjacent correlation maps eliminates anomalies caused by speckle noise and enhances the S/N in the determination of the maximum correlation coefficient. Error propagation is further suppressed by the recursive algorithm. In addition, spatial vector interpolation by a local least-squares method is introduced to remove erroneous vectors and smooth the vector distribution. The method was numerically applied to compressed elastic heterogeneous tissue samples to verify its accuracy. Consequently, it was quantitatively confirmed that the accuracy of the displacement vectors and strain matrix components was enhanced compared with the conventional method. The proposed method was thus validated by the identification of objects of different elasticity at a resolution approaching that defined by the optical system.
Le, S H; Tonami, K; Umemori, S; Nguyen, L T-B; Ngo, L T-Q; Mataki, S
2018-06-01
An objective method to recognize patient psychology using heart rate variability (HRV) has recently been developed and is increasingly being used in medical practice. This study compared the potential of this new method with the use of conventional surveys measuring anxiety levels in patients undergoing impacted third molar (ITM) surgery. Patient anxiety was examined before treatment in 64 adults who required ITM surgery, using two methods: measurement of HRV and conventional questionnaire surveys (state section of the State-Trait Anxiety Inventory (STAI-S) and Dental Fear Survey (DFS)). Both methods were assessed for their respective abilities to determine the impact of personal background, the amount of information provided, and the surgical procedure on patient psychology. Questionnaires and HRV yielded the same finding: dental experience was the single background factor that correlated with patient anxiety; the other factors remain unclear. The STAI-S showed a significant relationship between the information provided to the patient and their anxiety level, while the DFS and HRV did not. In addition, HRV demonstrated its ability to assess the effects of the surgical procedure on patient psychology. HRV demonstrated great potential as an objective method for evaluating patient stress, especially for providing real-time information on the patient's status. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Momose, Haruka; Mizukami, Takuo; Kuramitsu, Madoka; Takizawa, Kazuya; Masumi, Atsuko; Araki, Kumiko; Furuhata, Keiko; Yamaguchi, Kazunari; Hamaguchi, Isao
2015-01-01
We have previously identified 17 biomarker genes which were upregulated by whole virion influenza vaccines, and reported that the gene expression profiles of these biomarker genes correlated well with conventional animal safety tests based on body weight and leukocyte counts. In this study, we have shown that conventional animal tests gave variable results with no dose dependency for serially diluted bulk materials of influenza HA vaccines. In contrast, dose dependency was clearly shown in the expression profiles of the biomarker genes, demonstrating the higher sensitivity of gene expression analysis relative to the current animal safety tests for influenza vaccines. The introduction of branched DNA-based concurrent expression analysis could simplify the complexity of the multiple-gene expression approach, and could shorten the test period from 7 days to 3 days. Furthermore, upregulation of 10 genes, Zbp1, Mx2, Irf7, Lgals9, Ifi47, Tapbp, Timp1, Trafd1, Psmb9, and Tap2, was seen upon virosomal-adjuvanted vaccine treatment, indicating that these biomarkers could be useful for the safety control of virosomal-adjuvanted vaccines. In summary, profiling biomarker gene expression could be a useful, rapid, and highly sensitive method of animal safety testing compared with conventional methods, and could be used to evaluate the safety of various types of influenza vaccines, including adjuvanted vaccines. PMID:25909814
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
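The partitioning idea can be sketched with ordinary least squares: the component shared between two scales is the overlap R²(A) + R²(B) − R²(A, B). This toy example uses hypothetical orthogonal predictors standing in for two spatial scales (the paper's models are deviance-based, not this simple OLS form):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Orthogonal toy predictors representing two spatial scales.
x1 = np.array([1., -1., 1., -1.])   # "local" scale
x2 = np.array([1., 1., -1., -1.])   # "landscape" scale
y = x1 + x2

both = r_squared(np.column_stack([x1, x2]), y)
pure_1 = both - r_squared(x2[:, None], y)                     # unique to scale 1
shared = (r_squared(x1[:, None], y)
          + r_squared(x2[:, None], y) - both)                 # cross-scale overlap
```

With orthogonal predictors the shared component vanishes; strong cross-scale correlation inflates it, which is exactly the effect the paper quantifies.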
Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set
NASA Astrophysics Data System (ADS)
Bennie, Simon J.; Stella, Martina; Miller, Thomas F.; Manby, Frederick R.
2015-07-01
Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N⁰) scaling of the cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.
Corîci, Oana Maria; Tănasie, Cornelia Andreea; Alexandru, Dragoş Ovidiu; Florescu, Mihaela Corina; Comănescu, Maria Victoria; Kamal, Kamal Constantin; Ţenea-Cojan, Tiberiu Ştefăniţă; Iancău, Maria; Dinescu, Sorin Nicolae
2018-01-01
To assess left ventricular (LV) systolic function and morphology in patients with severe dilated cardiomyopathy (DCM), using both conventional echocardiography and a more complex technique, speckle-tracking echocardiography, and to evaluate the correlation between the pre-ejection period to left ventricular ejection period (PEP/LVET) ratio, global longitudinal strain (GLS), and severity of the condition. Seventeen patients were enrolled after rigorous selection criteria. Echocardiography was performed in conventional and speckle-tracking modes in all patients with DCM, in sinus rhythm. LV dimensions, volumes and ejection fraction (LVEF) were measured. The PEP/LVET ratio was obtained from the apical 5-chamber view and was defined as the time between QRS onset and the start of LV ejection divided by the LV ejection period. Speckle-tracking imaging was performed offline and GLS was obtained from the 4-, 3-, and 2-chamber apical views, by averaging the longitudinal peak systolic strain of all 17 LV segments. New York Heart Association (NYHA) functional class correlated significantly with LVEF (-0.82; p=0.0006), PEP/LVET (0.86; p=0.001) and GLS (0.85; p=0.0002). Considerable correlations were found between mitral regurgitation (MR) severity and LVEF (-0.65; p=0.01) or PEP/LVET (0.69; p=0.0059), and still higher between MR severity and GLS (0.76; p=0.0018). Tricuspid regurgitation (TR) grading correlated significantly with LVEF (-0.62; p=0.01), PEP/LVET and GLS (0.6; p=0.018; and 0.62; p=0.014, respectively). As opposed to the parameters of conventional echocardiography, GLS correlated with DCM etiology (p=0.0046) and with gender (p=0.048). This study demonstrates that, in patients with DCM, assessment of cardiac dyssynchrony can be accurately accomplished by combining parameters from conventional and speckle-tracking echocardiography.
Refinement procedure for the image alignment in high-resolution electron tomography.
Houben, L; Bar Sadan, M
2011-01-01
High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is particularly the case at ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based, marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.
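The core idea, searching for the alignment that maximises a contrast score rather than a cross-correlation peak, can be illustrated with a deliberately simple toy: a brute-force integer-shift search on 2-D images, scoring each candidate by the variance of the averaged pair. The real method optimises contrast in reconstructed tomogram sub-volumes over the whole tilt series; everything below (images, score, search range) is an illustrative assumption:

```python
import numpy as np

def align_by_contrast(img, ref, max_shift=3):
    """Integer shift of img that maximizes the variance (contrast)
    of the average of ref and the shifted image."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = np.var((ref + shifted) / 2.0)   # sharper overlap -> higher variance
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

ref = np.zeros((8, 8)); ref[4, 4] = 1.0
img = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)   # ref displaced by (1, 2)
shift = align_by_contrast(img, ref, max_shift=3)    # recovers the displacement
```

Misaligned candidates smear the feature into two half-intensity pixels, lowering the variance; only the true shift restores the single sharp pixel.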
Hydrothermal growth of ZnO nanowire arrays: fine tuning by precursor supersaturation
Yan, Danhua; Cen, Jiajie; Zhang, Wenrui; ...
2016-12-20
In this paper, we develop a technique that fine tunes the hydrothermal growth of ZnO nanowires to address the difficulties of controlling their growth in a conventional one-pot hydrothermal method. In our technique, precursors are separately and slowly supplied with the assistance of a syringe pump through the entire course of the growth. Compared to the one-pot method, the significantly lowered supersaturation of precursors helps eliminate competitive homogeneous nucleation and improves reproducibility. The degree of supersaturation can be readily tuned by the precursor quantity and injection rate, thus forming ZnO nanowire arrays of various geometries and packing densities in a highly controllable fashion. The precise control of ZnO nanowire growth enables systematic studies of the correlation between the material's properties and its morphology. Finally, in this work, ZnO nanowire arrays of various morphologies are studied as photoelectrochemical (PEC) water-splitting photoanodes, and we establish clear correlations between the water-splitting performance and the nanowires' size, shape, and packing density.
Hack, Erwin; Gundu, Phanindra Narayan; Rastogi, Pramod
2005-05-10
An innovative technique for reducing speckle noise and improving the intensity profile of speckle correlation fringes is presented. The method is based on reducing the range of the modulation intensity values of the speckle interference pattern. After the fringe pattern is corrected adaptively at each pixel, a simple morphological filtering of the fringes is sufficient to obtain smoothed fringes. The concept is presented both analytically and by simulation using computer-generated speckle patterns. Experimental verification is performed with an amplitude-only spatial light modulator (SLM) in a conventional electronic speckle pattern interferometry setup. The optical arrangement for tuning a commercially available LCD array for amplitude-only behavior is described. The method of feeding back to the LCD SLM to modulate the intensity of the reference beam, in order to reduce the modulation intensity values, is explained, and the resulting fringe pattern and the increase in the signal-to-noise ratio are discussed.
Heat transfer correlations for kerosene fuels and mixtures and physical properties for Jet A fuel
NASA Technical Reports Server (NTRS)
Ackerman, G. H.; Faith, L. E.
1972-01-01
Heat transfer correlations are reported for conventional Jet A fuel for both laminar and turbulent flow in circular tubes. Correlations were developed for cooling in turbine engines, but have broader applications in petroleum and chemical processing, and other industrial applications.
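The report's Jet A-specific correlations are not reproduced in the abstract. As a hedged illustration of the form such turbulent-tube correlations take, here is the classic Dittus–Boelter relation (Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 for heating); the Reynolds and Prandtl numbers below are made-up inputs:

```python
def nusselt_dittus_boelter(re, pr, heating=True):
    """Classic Dittus-Boelter correlation for fully developed turbulent tube flow."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n

# Assumed conditions for illustration: Re = 10^4, Pr = 1.
nu = nusselt_dittus_boelter(1.0e4, 1.0)
```

The heat transfer coefficient then follows from h = Nu·k/D, with k the fluid conductivity and D the tube diameter.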
Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces
NASA Astrophysics Data System (ADS)
Qian, Guo-Xin
This thesis is a study of inhomogeneous Bose systems such as liquid ⁴He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two-body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder- and ring-diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression of the correlation energy in which the state-averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function of both ⁴He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.
Predicting arsenic bioavailability to hyperaccumulator Pteris vittata in arsenic-contaminated soils.
Gonzaga, Maria Isidória Silva; Ma, Lena Q; Pacheco, Edson Patto; dos Santos, Wallace Melo
2012-12-01
Using chemical extraction to evaluate plant arsenic availability in contaminated soils is important to estimate the time frame for site cleanup during phytoremediation. It is also of great value to assess As mobility in soil and its risk in environmental contamination. In this study, four conventional chemical extraction methods (water, ammonium sulfate, ammonium phosphate, and Mehlich III) and a new root-exudate based method were used to evaluate As extractability and to correlate it with As accumulation in P. vittata growing in five As-contaminated soils under greenhouse conditions. The relationship between different soil properties, As extractability, and plant As accumulation was also investigated. Arsenic extractability was 4.6%, 7.0%, 18%, 21%, and 46% for water, ammonium sulfate, organic acids, ammonium phosphate, and Mehlich III, respectively. The root exudate (organic acids) solution was suitable for assessing As bioavailability (81%) in the soils, while Mehlich III (31%) overestimated the amount of As taken up by plants. Soil organic matter, P and Mg concentrations were positively correlated with plant As accumulation, whereas Ca concentration was negatively correlated. Further investigation is needed on the effect of Ca and Mg on As uptake by P. vittata. Moreover, additional As-contaminated soils with different properties should be tested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mangum, John S.; Chan, Lisa H.; Schmidt, Ute
Site-specific preparation of specimens using focused ion beam instruments for transmission electron microscopy is at the forefront of targeting regions of interest for nanoscale characterization. Typical methods of pinpointing desired features include electron backscatter diffraction for differentiating crystal structures and energy-dispersive X-Ray spectroscopy for probing compositional variations. Yet there are situations, notably in the titanium dioxide system, where these techniques can fail. Differentiating between the brookite and anatase polymorphs of titania is either excessively laborious or impossible with the aforementioned techniques. However, due to differences in bonding structure, Raman spectroscopy serves as an ideal candidate for polymorph differentiation. In this work, a correlative approach utilizing Raman spectroscopy for targeted focused ion beam specimen preparation was employed. Dark field imaging and diffraction in the transmission electron microscope confirmed the region of interest located via Raman spectroscopy and demonstrated the validity of this new method. Correlative Raman spectroscopy, scanning electron microscopy, and focused ion beam is shown to be a promising new technique for identifying site-specific preparation of nanoscale specimens in cases where conventional approaches do not suffice.
Mangum, John S; Chan, Lisa H; Schmidt, Ute; Garten, Lauren M; Ginley, David S; Gorman, Brian P
2018-05-01
Site-specific preparation of specimens using focused ion beam instruments for transmission electron microscopy is at the forefront of targeting regions of interest for nanoscale characterization. Typical methods of pinpointing desired features include electron backscatter diffraction for differentiating crystal structures and energy-dispersive X-Ray spectroscopy for probing compositional variations. Yet there are situations, notably in the titanium dioxide system, where these techniques can fail. Differentiating between the brookite and anatase polymorphs of titania is either excessively laborious or impossible with the aforementioned techniques. However, due to differences in bonding structure, Raman spectroscopy serves as an ideal candidate for polymorph differentiation. In this work, a correlative approach utilizing Raman spectroscopy for targeted focused ion beam specimen preparation was employed. Dark field imaging and diffraction in the transmission electron microscope confirmed the region of interest located via Raman spectroscopy and demonstrated the validity of this new method. Correlative Raman spectroscopy, scanning electron microscopy, and focused ion beam is shown to be a promising new technique for identifying site-specific preparation of nanoscale specimens in cases where conventional approaches do not suffice. Copyright © 2018 Elsevier B.V. All rights reserved.
Mangum, John S.; Chan, Lisa H.; Schmidt, Ute; ...
2018-02-23
Site-specific preparation of specimens using focused ion beam instruments for transmission electron microscopy is at the forefront of targeting regions of interest for nanoscale characterization. Typical methods of pinpointing desired features include electron backscatter diffraction for differentiating crystal structures and energy-dispersive X-ray spectroscopy for probing compositional variations. Yet there are situations, notably in the titanium dioxide system, where these techniques can fail. Differentiating between the brookite and anatase polymorphs of titania is either excessively laborious or impossible with the aforementioned techniques. However, due to differences in bonding structure, Raman spectroscopy serves as an ideal candidate for polymorph differentiation. In this work, a correlative approach utilizing Raman spectroscopy for targeted focused ion beam specimen preparation was employed. Dark field imaging and diffraction in the transmission electron microscope confirmed the region of interest located via Raman spectroscopy and demonstrated the validity of this new method. Correlative Raman spectroscopy, scanning electron microscopy, and focused ion beam is shown to be a promising new technique for identifying site-specific preparation of nanoscale specimens in cases where conventional approaches do not suffice.
Schmalz, Gerhard; Tsigaras, Sandra; Rinke, Sven; Kottmann, Tanja; Haak, Rainer; Ziebolz, Dirk
2016-07-01
The aim of this study was to compare the microbial analysis methods of polymerase chain reaction (PCR) and real-time PCR (RT-PCR) in terms of detection of five selected potentially periodontal pathogenic bacteria in peri-implant disease. Accordingly, 45 samples from healthy, mucositis, and peri-implantitis sites (n = 15 each) were assessed for the presence of the following bacteria using PCR (DNA-strip technology) and RT-PCR (fluorescent dye SYBR green system): Aggregatibacter actinomycetemcomitans (Aa), Porphyromonas gingivalis (Pg), Treponema denticola (Td), Tannerella forsythia (Tf), and Fusobacterium nucleatum (Fn). There were no significant correlations between the bacterial and disease patterns, so the benefit of using microbiological tests for the diagnosis of peri-implant diseases is questionable. Correlations between the methods were highest for Tf (Kendall's tau: 0.65, Spearman: 0.78), Fn (0.49, 0.61), and Td (0.49, 0.59). For Aa (0.38, 0.42) and Pg (0.04, 0.04), lower correlation values were detected. Accordingly, conventional semi-quantitative PCR seems to be sufficient for analyzing potentially periodontal pathogenic bacterial species. Copyright © 2016 Elsevier Inc. All rights reserved.
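The method agreement reported above is expressed as Kendall's tau alongside Spearman's rho. A minimal sketch of the tau statistic (the tau-a variant, which ignores tie corrections; real semi-quantitative data with many ties would call for tau-b, e.g. `scipy.stats.kendalltau`) is:

```python
def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.

    x and y are paired scores, e.g. detection levels of one bacterial
    species from two assays across the same samples (illustrative only).
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
            # tied pairs (s == 0) are simply not counted in tau-a
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Perfect agreement between two assays gives tau = 1, perfect disagreement gives tau = -1, and one swapped pair among four samples gives 4/6 ≈ 0.67.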
Microscopic lymph node tumor burden quantified by macroscopic dual-tracer molecular imaging
Tichauer, Kenneth M.; Samkoe, Kimberley S.; Gunn, Jason R.; Kanick, Stephen C.; Hoopes, P. Jack; Barth, Richard J.; Kaufman, Peter A.; Hasan, Tayyaba; Pogue, Brian W.
2014-01-01
Lymph node biopsy (LNB) is employed in many cancer surgeries to identify metastatic disease and stage the cancer, yet morbidity and diagnostic delays associated with LNB could be avoided if non-invasive imaging of nodal involvement was reliable. Molecular imaging has potential in this regard; however, variable delivery and nonspecific uptake of imaging tracers has made conventional approaches ineffective clinically. A method of correcting for non-specific uptake with injection of a second untargeted tracer is presented, allowing tumor burden in lymph nodes to be quantified. The approach was confirmed in an athymic mouse model of metastatic human breast cancer targeting epidermal growth factor receptor, a cell surface receptor overexpressed by many cancers. A significant correlation was observed between in vivo (dual-tracer) and ex vivo measures of tumor burden (r = 0.97, p < 0.01), with an ultimate sensitivity of approximately 200 cells (potentially more sensitive than conventional LNB). PMID:25344739
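The core of the dual-tracer idea is that the untargeted tracer reports delivery and nonspecific uptake, so the targeted signal can be normalized against it. The full method uses a reference-tissue kinetic model; the sketch below shows only a crude late-time ratio approximation, with hypothetical function names and units, and is not the authors' model.

```python
def binding_potential(targeted, untargeted):
    """Simplified late-time dual-tracer estimate: BP ~ mean(T/U) - 1.

    `targeted` and `untargeted` are sequences of lymph node uptake values
    sampled at matched late time points (arbitrary, hypothetical units).
    A BP of 0 means no specific binding; larger BP tracks receptor
    (and hence tumor cell) burden in the node.
    """
    ratios = [t / u for t, u in zip(targeted, untargeted)]
    return sum(ratios) / len(ratios) - 1.0
```

With equal targeted and untargeted uptake the estimate is 0 (no specific binding); a twofold targeted excess gives BP = 1.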
NASA Technical Reports Server (NTRS)
Kojima, Jun; Nguyen, Quang-Viet
2007-01-01
An alternative optical thermometry technique that utilizes the low-resolution (order 10 cm(-1)) pure-rotational spontaneous Raman scattering of air is developed to aid single-shot multiscalar measurements in turbulent combustion studies. Temperature measurements are realized by correlating the measured envelope bandwidth of the pure-rotational manifold of the N2/O2 spectrum with a theoretical prediction of a species-weighted bandwidth. By coupling this thermometry technique with conventional vibrational Raman scattering for species determination, we demonstrate quantitative, spatially resolved, single-shot measurements of the temperature and fuel/oxidizer concentrations in a high-pressure turbulent CH4-air flame. Our technique provides not only an effective means of validating other temperature measurement methods, but also serves as a secondary thermometry technique in cases where the anti-Stokes vibrational N2 Raman signals are too low for a conventional vibrational temperature analysis.
Tuszyński, Paweł K.; Polak, Sebastian; Jachowicz, Renata; Mendyk, Aleksander; Dohnal, Jiří
2015-01-01
Different batches of atorvastatin, represented by two immediate release formulation designs, were studied using a novel dynamic dissolution apparatus, simulating stomach and small intestine. A universal dissolution method was employed which simulated the physiology of human gastrointestinal tract, including the precise chyme transit behavior and biorelevant conditions. The multicompartmental dissolution data allowed direct observation and qualitative discrimination of the differences resulting from highly pH dependent dissolution behavior of the tested batches. Further evaluation of results was performed using IVIVC/IVIVR development. While satisfactory correlation could not be achieved using a conventional deconvolution based-model, promising results were obtained through the use of a nonconventional approach exploiting the complex compartmental dissolution data. PMID:26120580
Quantum-optical coherence tomography with classical light.
Lavoie, J; Kaltenbaek, R; Resch, K J
2009-03-02
Quantum-optical coherence tomography (Q-OCT) is an interferometric technique for axial imaging offering several advantages over conventional methods. Chirped-pulse interferometry (CPI) was recently demonstrated to exhibit all of the benefits of the quantum interferometer upon which Q-OCT is based. Here we use CPI to measure axial interferograms to profile a sample accruing the important benefits of Q-OCT, including automatic dispersion cancellation, but with 10 million times higher signal. Our technique solves the artifact problem in Q-OCT and highlights the power of classical correlation in optical imaging.
Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M
2012-07-01
Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood culture has become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood samples from cases of sepsis were analyzed concurrently by two blood culture methods for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth, and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former with respect to Gram-negative bacilli. In this study, the sensitivity of the lysis centrifugation method relative to the conventional blood culture method was 49.75%, the specificity was 98.21%, and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, a difference that was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter than that by the conventional blood culture method (p < 0.001). For the diagnosis of sepsis, combining the lysis centrifugation method with the conventional blood culture method using trypticase soy broth or biphasic media is advisable in order to achieve faster recovery and a better yield of microorganisms.
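The sensitivity, specificity, and diagnostic accuracy quoted above follow directly from the 2x2 confusion matrix of the index test (lysis centrifugation) against the reference (conventional broth culture). A minimal sketch, with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test-performance measures from a 2x2 table.

    tp/fp/tn/fn: true/false positives and negatives of the index test
    judged against the reference method.
    """
    sensitivity = tp / (tp + fn)            # recovery among reference-positives
    specificity = tn / (tn + fp)            # agreement among reference-negatives
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For example, 8 true positives, 1 false positive, 9 true negatives, and 2 false negatives give sensitivity 0.80, specificity 0.90, and accuracy 0.85.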
Sehgal, Arvind; Doctor, Tejas; Menahem, Samuel
2014-12-01
Existing data suggest subendocardial ischemia in preterm infants with patent ductus arteriosus (PDA) and alterations in cardiac function after indomethacin administration. This study aimed to explore the evolution of left ventricular function by conventional echocardiography and speckle-tracking echocardiography (STE) and to ascertain the interrelationship with coronary flow indices in response to indomethacin. A prospective observational study was performed with preterm infants receiving indomethacin for medical closure of PDA. Serial echocardiography was performed, and the results were analyzed using analysis of variance. Intra- and interobserver variability was assessed using the intraclass correlation coefficient. Indomethacin was administered to 18 infants born at a median gestational age of 25.8 weeks (interquartile range [IQR], 24.2-28.1 weeks) with a birth weight of 773 g (IQR, 704-1,002 g). The median age of the infants was 7.5 days (IQR, 4-17). Global longitudinal strain (GLS) values significantly decreased immediately after indomethacin infusion (preindomethacin GLS, -19.1 ± 2.4 % vs. -15.9 ± 1.7 %; p < 0.0001) but had improved at reassessment after 1 h (-17.4 ± 1.8 %). Conventional echocardiographic indices did not show significant alterations. A significant increase in arterial resistance in the coronary vasculature from 1.7 to 2.4 mmHg/cm/s was demonstrated. A significant correlation was noted between peak systolic GLS and flow resistance in the coronary vasculature. Significant changes in myocardial indices were observed immediately after indomethacin infusion. Compared with conventional methods, STE is a more sensitive tool to facilitate understanding of hemodynamics in preterm infants.
Hasan, Shadi W; Elektorowicz, Maria; Oleszkiewicz, Jan A
2012-09-01
The influence of sludge properties in SMEBR and conventional MBR pilot systems on membrane fouling was investigated. Generated data were analyzed using statistical analysis Pearson's product momentum correlation coefficient (r(p)). Analysis showed that TMP had strong direct (r(p)=0.9182) and inverse (r(p)=-0.9205) correlations to mean particle size diameter in MBR and SMEBR, respectively. TMP in SMEBR had a strong direct correlation to the sludge mixed liquor suspended solids concentration (MLSS) (r(p)=0.7757) while a weak direct correlation (r(p)=0.1940) was observed in MBR. SMEBR showed a moderate inverse correlation (r(p)=-0.6118) between TMP and soluble carbohydrates (EPS(c)) and a very weak direct correlation (r(p)=0.3448) to soluble proteins (EPS(p)). Conversely, EPS(p) in MBR had more significant impact (r(p)=0.4856) on membrane fouling than EPS(c) (r(p)=0.3051). The results provide insight into optimization of operational conditions in SMEBR system to overcome membrane fouling. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Burkard, R.; Jones, S.; Jones, T.
1994-01-01
Rate-dependent changes in the chick brain-stem auditory evoked response (BAER) using conventional averaging and a cross-correlation technique were investigated. Five 15- to 19-day-old white leghorn chicks were anesthetized with Chloropent. In each chick, the left ear was acoustically stimulated. Electrical pulses of 0.1-ms duration were shaped, attenuated, and passed through a current driver to an Etymotic ER-2 which was sealed in the ear canal. Electrical activity from stainless-steel electrodes was amplified, filtered (300-3000 Hz) and digitized at 20 kHz. Click levels included 70 and 90 dB peSPL. In each animal, conventional BAERs were obtained at rates ranging from 5 to 90 Hz. BAERs were also obtained using a cross-correlation technique involving pseudorandom pulse sequences called maximum length sequences (MLSs). The minimum time between pulses, called the minimum pulse interval (MPI), ranged from 0.5 to 6 ms. Two BAERs were obtained for each condition. Dependent variables included the latency and amplitude of the cochlear microphonic (CM), wave 2 and wave 3. BAERs were observed in all chicks, for all level by rate combinations for both conventional and MLS BAERs. There was no effect of click level or rate on the latency of the CM. The latency of waves 2 and 3 increased with decreasing click level and increasing rate. CM amplitude decreased with decreasing click level, but was not influenced by click rate for the 70 dB peSPL condition. For the 90 dB peSPL click, CM amplitude was uninfluenced by click rate for conventional averaging. For MLS BAERs, CM amplitude was similar to conventional averaging for longer MPIs.(ABSTRACT TRUNCATED AT 250 WORDS).
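The MLS technique works because a maximum length sequence has a nearly flat circular autocorrelation, so circularly cross-correlating the recorded waveform with the stimulus sequence deconvolves the overlapping evoked responses. The toy sketch below uses a tiny length-7 sequence and a pure delay as the "evoked response"; real BAER work uses far longer sequences and measured waveforms, and all names here are illustrative.

```python
def mls3():
    """Length-7 maximum length sequence (+/-1) from a 3-bit Fibonacci LFSR.

    The feedback taps below give the full period of 7 distinct states.
    """
    state = [1, 1, 1]
    seq = []
    for _ in range(7):
        seq.append(1 if state[2] else -1)
        fb = state[1] ^ state[2]
        state = [fb] + state[:2]
    return seq

def circ_xcorr(y, s):
    """Circular cross-correlation of recording y with stimulus s, /N."""
    n = len(s)
    return [sum(y[t] * s[(t - k) % n] for t in range(n)) / n
            for k in range(n)]

s = mls3()
# Simulated recording: the system response is a pure 2-sample delay.
y = [s[(t - 2) % len(s)] for t in range(len(s))]
h = circ_xcorr(y, s)                     # recovered impulse response
peak = max(range(len(h)), key=lambda k: h[k])  # recovered latency = 2
```

The recovered response peaks at lag 2 with value 1, while all other lags sit at the MLS sidelobe level of -1/7.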
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Opanchuk, B.; Drummond, P. D.
2013-04-01
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
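For orientation, the single-mode, lossless limit of the truncated Wigner prescription illustrates the structure of the resulting c-number equations (notation here is assumed, not taken from the abstract: ε is the mode energy, U the contact interaction strength, and α the Wigner phase-space amplitude sampled from the initial Wigner distribution):

```latex
i\hbar \frac{d\alpha}{dt} = \Big[\,\epsilon + U\big(|\alpha|^{2} - 1\big)\Big]\,\alpha
```

The "-1" is the symmetric-ordering correction characteristic of the Wigner representation; in the multi-mode, multi-component case this becomes a set of coupled stochastic partial differential equations, and nonlinear losses contribute additional drift and noise terms.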
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
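The weighting step can be sketched as an area-weighted mean: each measurement contributes in proportion to the basin area it best represents. This simplification drops the correlation-structure machinery of the actual method; names and data are hypothetical.

```python
def areal_average(measurements):
    """Area-weighted mean of hydrologic measurements over a basin.

    `measurements` is a list of (value, represented_area) pairs, where
    each area is the portion of the basin that the measurement (point
    gauge, flight line, or satellite footprint) represents best.
    """
    total_area = sum(area for _, area in measurements)
    return sum(value * area for value, area in measurements) / total_area

# Hypothetical snow water equivalent (mm): one gauge, one airborne line,
# one satellite areal estimate, weighted by represented area (km^2).
swe = areal_average([(120.0, 5.0), (100.0, 20.0), (90.0, 75.0)])
```

In the full correlation-area method the weights would also reflect each technology's measurement accuracy, not just its spatial coverage.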
Wong, M S; Cheng, J C Y; Wong, M W; So, S F
2005-04-01
A study was conducted to compare the CAD/CAM method with the conventional manual method for fabricating spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited. Efficiency analyses of the two methods were performed from the cast filling/digitization process through completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the two methods, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal, and pelvic regions, were examined. There were no statistically significant dimensional differences (at the 0.05 level) in 19 of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively close resemblance in cast rectification compared with the conventional manual method.
Evaluation of permeable fractures in rock aquifers
NASA Astrophysics Data System (ADS)
Bok Lee, Hang
2015-04-01
In this study, the practical usefulness and fundamental applicability of the self-potential (SP) method for identifying permeable fractures were evaluated by comparing the SP method with other geophysical logging methods and hydraulic tests. At a 10 m-deep borehole in the study site, candidate permeable fractures crossing the borehole were first identified by conventional geophysical methods such as acoustic borehole televiewer, temperature, electrical conductivity, and gamma-gamma loggings, and these results were compared with the analysis by the SP method. Constant-pressure injection and recovery tests were conducted to verify the hydraulic properties of the fractures identified by the various logging methods. The acoustic borehole televiewer and gamma-gamma loggings detected open spaces or weathered zones within the borehole, but they could not establish whether groundwater flows through the detected fractures. The temperature and electrical conductivity loggings were limited in detecting the fractured zones where groundwater in the borehole flows out into the surrounding rock aquifer. Comparison of the results from the different methods showed the best correlation between the distribution of hydraulic conductivity and the variation of the SP signals, and the SP logging accurately estimated the hydraulic activity as well as the location of permeable fractures. Based on these results, the SP method is recommended over other conventional geophysical loggings for determining hydraulically active fractures. The self-potential method can be applied effectively in the initial stage of a site investigation, when the optimal location is selected and the hydrogeological properties of fractures are evaluated at target sites for underground structures including geothermal reservoirs and radioactive waste disposal.
Løgstrup, B B; Nielsen, J M; Kim, W Y; Poulsen, S H
2016-09-01
The clinical diagnosis of acute myocarditis is based on symptoms, electrocardiography, elevated myocardial necrosis biomarkers, and echocardiography. Often, conventional echocardiography reveals no obvious changes in global cardiac function and therefore has limited diagnostic value. Myocardial deformation imaging by echocardiography is an evolving method used to characterize quantitatively longitudinal systolic function, which may be affected in acute myocarditis. The aim of our study was to assess the utility of echocardiographic deformation imaging of the left ventricle in patients with diagnosed acute myocarditis in whom cardiovascular magnetic resonance (CMR) evaluation was performed. We included 28 consecutive patients (mean age 32 ± 13 years) with CMR-verified diagnosis of acute myocarditis according to the Lake Louise criteria. Cardiac function was evaluated by a comprehensive assessment of left ventricular (LV) function, including 2D speckle-tracking echocardiography. We found no significant correlation between the peak values of cardiac enzymes and the amount of myocardial oedema assessed by CMR (troponin: r= 0.3; P = 0.05 and CK-MB: r = 0.1; P = 0.3). We found a larger amount of myocardial oedema in the basal part of the left ventricle [American Heart Association (AHA) segments 1-6] in inferolateral and inferior segments, compared with the anterior, anterolateral, anteroseptal, and inferoseptal segments. In the mid LV segments (AHA segments 7-12), this was more pronounced in the anterior, anterolateral, and inferolateral segments. Among conventional echocardiographic parameters, LV function was not found to correlate with the amount of myocardial oedema of the left ventricle. In contrast, we found the wall motion score index to be significantly correlated with the amount of myocardial oedema, but this correlation was only present in patients with an extensive amount of oedema (>11% of the total left ventricle). 
Global longitudinal systolic myocardial strain correlated significantly with the amount of oedema (r = 0.65; P < 0.001). We found that both the epicardial longitudinal and the endocardial longitudinal systolic strains were significantly correlated with oedema (r = 0.55; P = 0.003 and r = 0.54; P < 0.001). In patients with acute myocarditis, 2D speckle-tracking echocardiography was a useful tool in the diagnostic process of acute myocarditis. Global longitudinal strain adds important information that can support clinical and conventional echocardiographic evaluation, especially in patients with preserved LV ejection fraction in relation to the diagnosis and degree of myocardial dysfunction. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Murri, L; Gori, S; Massetani, R; Bonanni, E; Marcella, F; Milani, S
1998-06-01
The sensitivity of quantitative electroencephalography (EEG) was compared with that of conventional EEG in patients with acute ischaemic stroke. In addition, quantitative EEG data were correlated with computerized tomography (CT) scan findings for all areas of lesion in order to reassess the actual role of EEG in the evaluation of stroke. Sixty-five patients were tested with conventional and quantitative EEG within 24 h of the onset of neurological symptoms, whereas CT scanning was performed within 4 days of the onset of stroke. EEG was recorded from 19 electrodes placed on the scalp according to the International 10-20 System. Spectral analysis was carried out on 30 artefact-free 4-s epochs. For each channel, absolute and relative power were calculated for the delta, theta, alpha, and beta frequency bands, and these data were then represented in colour-coded maps. Ten patients with extensive lesions documented by CT scan were excluded. The results indicated that conventional EEG revealed abnormalities in 40 of 55 cases, while EEG mapping showed abnormalities in 46 of 55 cases: it showed focal abnormalities in five cases and nonfocal abnormalities in one of the six cases that had appeared normal on visual inspection of the EEG. In a further 11 cases, where conventional EEG revealed abnormalities in one hemisphere, quantitative EEG and mapping localized the abnormal activity more precisely. The sensitivity of both methods was higher for frontocentral, temporal, and parieto-occipital cortical-subcortical infarctions than for basal ganglia and internal capsule lesions; however, quantitative EEG was more efficient for all areas of lesion in detecting cases that had appeared normal on visual inspection and was clearly superior in revealing focal abnormalities.
When we considered the electrode at which the maximum delta-band power was recorded, a fairly close correlation was found between the localization of the maximum delta power and the position of lesions documented by CT scan for all areas of lesion except those located in the striatocapsular area.
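The "maximum delta power electrode" step amounts to computing band power per channel and taking the argmax. A minimal sketch with a naive DFT (real work would use an FFT and the study's 4-s epochs at its actual sampling rate; the 100 Hz rate and 2-s epoch here are assumptions for illustration):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz of one epoch, via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def max_delta_channel(epochs, fs):
    """Index of the channel (electrode) with the largest 1-4 Hz power."""
    powers = [band_power(ch, fs, 1.0, 4.0) for ch in epochs]
    return max(range(len(powers)), key=lambda i: powers[i])
```

A channel dominated by a 2 Hz oscillation will win over one dominated by 10 Hz activity, mirroring how the delta maximum tracks the lesion site.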
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition
NASA Astrophysics Data System (ADS)
Li, Jin; Liu, Zilong
2017-12-01
Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach to the NTD that overcomes its high computational cost. The proposed method achieves low complexity with only a slight decrease in coding performance compared to conventional NTD. Experiments confirm that the method requires less processing time than conventional NTD while still coding better than the case in which no NTD is used. The proposed approach has potential application in the lossy compression of hyper-spectral and multi-spectral images.
Naz, Saba; Sherazi, Sayed Tufail Hussain; Talpur, Farah N; Mahesar, Sarfaraz A; Kara, Huseyin
2012-01-01
A simple, rapid, economical, and environmentally friendly analytical method was developed for the quantitative assessment of free fatty acids (FFAs) present in deodorizer distillates and crude oils by single bounce-attenuated total reflectance-FTIR spectroscopy. Partial least squares was applied for the calibration model based on the peak region of the carbonyl group (C=O) from 1726 to 1664 cm(-1) associated with the FFAs. The proposed method totally avoided the use of organic solvents or costly standards and could be applied easily in the oil processing industry. The accuracy of the method was checked by comparison to a conventional standard American Oil Chemists' Society (AOCS) titrimetric procedure, which provided good correlation (R = 0.99980), with an SD of +/- 0.05%. Therefore, the proposed method could be used as an alternate to the AOCS titrimetric method for the quantitative determination of FFAs especially in deodorizer distillates.
Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit
2017-03-01
Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This was a prospective study of the radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. The angular and displacement measurements of lumbar spine flexion obtained with the two methods were compared and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001, and p=.001, respectively). There was no significant difference between the methods in the measurement of lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.
Roos, Matthias; Hofmann, Marius; Link, Susanne; Ott, Maria; Balbach, Jochen; Rössler, Ernst; Saalwächter, Kay; Krushelnitsky, Alexey
2015-12-01
Inter-protein interactions in solution affect the auto-correlation function of Brownian tumbling not only in terms of a simple increase of the correlation time, they also lead to the appearance of a weak slow component ("long tail") of the correlation function due to a slowly changing local anisotropy of the microenvironment. The conventional protocol of correlation time estimation from the relaxation rate ratio R1/R2 assumes a single-component tumbling correlation function, and thus can provide incorrect results as soon as the "long tail" is of relevance. This effect, however, has been underestimated in many instances. In this work we present a detailed systematic study of the tumbling correlation function of two proteins, lysozyme and bovine serum albumin, at different concentrations and temperatures using proton field-cycling relaxometry combined with R1ρ and R2 measurements. Unlike high-field NMR relaxation methods, these techniques enable a detailed study of dynamics on a time scale longer than the normal protein tumbling correlation time and, thus, a reliable estimate of the parameters of the "long tail". In this work we analyze the concentration dependence of the intensity and correlation time of the slow component and perform simulations of high-field (15)N NMR relaxation data demonstrating the importance of taking the "long tail" in the analysis into account.
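The "long tail" described here corresponds to a two-component tumbling correlation function; in a Lipari-Szabo-style notation (symbols assumed here, not taken from the abstract: A is the slow-component amplitude, τ_fast the ordinary tumbling time, τ_slow the long-tail time), the correlation function and its spectral density read:

```latex
C(t) = (1 - A)\, e^{-t/\tau_{\mathrm{fast}}} + A\, e^{-t/\tau_{\mathrm{slow}}},
\qquad
J(\omega) = \frac{2}{5}\left[
  (1 - A)\,\frac{\tau_{\mathrm{fast}}}{1 + (\omega \tau_{\mathrm{fast}})^{2}}
  + A\,\frac{\tau_{\mathrm{slow}}}{1 + (\omega \tau_{\mathrm{slow}})^{2}}
\right]
```

Because even a small amplitude A contributes strongly to J(ω) at low frequencies when τ_slow is long, an R1/R2 analysis that assumes a single-exponential C(t) can substantially misestimate the tumbling time, which is the effect the field-cycling measurements quantify.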
Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel
2015-01-01
The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry-accepted upper limit of 250; that is, 2.5 times the abrasion measured with the ISO 11609 calcium pyrophosphate reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support the conclusion that both methods (RDA and RDA-PE) provide similar results and justify applying the upper abrasivity limit of 250 to both RDA and RDA-PE.
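The distinction the abstract draws between linearity (high r²) and proportionality (slope near 1 with intercept near 0) can be sketched with ordinary least squares on synthetic paired scores; the values below are simulated, not the published measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired abrasivity scores for 36 dentifrices: RDA-PE tracks
# RDA proportionally (slope ~1, intercept ~0) with measurement noise.
rda = rng.uniform(20, 220, size=36)
rda_pe = 1.0 * rda + rng.normal(0, 25, size=36)

# Ordinary least squares with an intercept: y = a*x + b.
a, b = np.polyfit(rda, rda_pe, 1)

# Linearity: coefficient of determination r^2.
pred = a * rda + b
ss_res = np.sum((rda_pe - pred) ** 2)
ss_tot = np.sum((rda_pe - rda_pe.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Proportionality: a slope near 1 with an intercept near 0 means the two
# scales agree in absolute terms, not merely in rank order.
print(f"slope = {a:.2f}, intercept = {b:.1f}, r^2 = {r2:.2f}")
```

A high r² alone would permit a constant offset between the two scales; the near-zero intercept is what justifies carrying the same limit of 250 over to the new scale.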
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
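A minimal sketch of the Bland-Altman agreement statistics mentioned above (bias and 95% limits of agreement), computed on hypothetical paired model measurements; the numbers are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired linear measurements (mm) from 10 plaster and 10
# polyurethane (PUT) models; the PUT models read slightly larger on average.
plaster = rng.normal(35.0, 3.0, size=10)
put = plaster + 0.2 + rng.normal(0, 0.15, size=10)

diff = put - plaster
bias = diff.mean()                           # systematic offset between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"bias = {bias:.2f} mm, limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) mm")
```

The Bland-Altman plot referenced in the abstract is simply each pair's difference plotted against its mean, with these three horizontal lines overlaid.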
Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju
2015-01-01
Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Type I and Type II errors, effective multiplicity (Meff), derived from the eigenvalues of correlation matrices, is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
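One published variant of the effective-multiplicity idea (Nyholt's eigenvalue formula from the genetics literature, which may differ in detail from the authors' Meff estimator) can be sketched as follows; the 44-channel correlation matrix here is a toy stand-in, not the fNIRS data.

```python
import numpy as np

def meff_nyholt(corr):
    """Effective number of independent tests from a correlation matrix
    (Nyholt-style): Meff = 1 + (M - 1) * (1 - Var(lambda) / M)."""
    lam = np.linalg.eigvalsh(corr)
    m = corr.shape[0]
    return 1 + (m - 1) * (1 - lam.var(ddof=1) / m)

# Toy 44-channel correlation matrix with correlation decaying with channel
# distance, loosely mimicking spatially adjacent measurement channels.
m = 44
idx = np.arange(m)
corr = 0.6 ** np.abs(idx[:, None] - idx[None, :])

meff = meff_nyholt(corr)
alpha_adj = 0.05 / meff  # Bonferroni over Meff instead of all M channels

print(f"Meff = {meff:.1f} of {m} channels, adjusted alpha = {alpha_adj:.4f}")
```

Sanity checks: for an identity correlation matrix the formula returns M (all channels independent), and for a perfectly correlated matrix it returns 1 (effectively a single test).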
Can Functional Cardiac Age be Predicted from ECG in a Normal Healthy Population?
NASA Technical Reports Server (NTRS)
Schlegel, Todd; Starc, Vito; Leban, Manja; Sinigoj, Petra; Vrhovec, Milos
2011-01-01
In a normal healthy population, we desired to determine the most age-dependent conventional and advanced ECG parameters. We hypothesized that changes in several ECG parameters might correlate with age and together reliably characterize the functional age of the heart. Methods: An initial study population of 313 apparently healthy subjects was ultimately reduced to 148 subjects (74 men, 84 women, in the range from 10 to 75 years of age) after exclusion criteria. In all subjects, ECG recordings (resting 5-minute 12-lead high frequency ECG) were evaluated via custom software programs to calculate up to 85 different conventional and advanced ECG parameters including beat-to-beat QT and RR variability, waveform complexity, and signal-averaged, high-frequency and spatial/spatiotemporal ECG parameters. The prediction of functional age was evaluated by multiple linear regression analysis using the best 5 univariate predictors. Results: Ignoring what were ultimately small differences between males and females, the functional age was found to be predicted (R2= 0.69, P < 0.001) from a linear combination of 5 independent variables: QRS elevation in the frontal plane (p<0.001), a new repolarization parameter QTcorr (p<0.001), mean high frequency QRS amplitude (p=0.009), the variability parameter % VLF of RRV (p=0.021) and the P-wave width (p=0.10). Here, QTcorr represents the correlation between the calculated QT and the measured QT signal. Conclusions: In apparently healthy subjects with normal conventional ECGs, functional cardiac age can be estimated by multiple linear regression analysis of mostly advanced ECG results. Because some parameters in the regression formula, such as QTcorr, high frequency QRS amplitude and P-wave width also change with disease in the same direction as with increased age, increased functional age of the heart may reflect subtle age-related pathologies in cardiac electrical function that are usually hidden on conventional ECG.
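The multiple linear regression used above to predict functional age can be sketched with ordinary least squares on simulated data; the five predictors below are generic stand-ins for the named ECG parameters (QRS elevation, QTcorr, high-frequency QRS amplitude, %VLF of RRV, P-wave width), and the coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: age of 148 subjects predicted from 5 ECG-derived
# predictors (standardized stand-ins for the parameters in the abstract).
n, k = 148, 5
X = rng.normal(size=(n, k))
true_beta = np.array([6.0, -4.0, 3.0, 2.0, -1.5])
age = 45 + X @ true_beta + rng.normal(0, 5.0, size=n)

# Ordinary least squares with an explicit intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, age, rcond=None)

# Coefficient of determination R^2 of the fitted model.
pred = A @ beta
ss_res = np.sum((age - pred) ** 2)
ss_tot = np.sum((age - age.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"intercept = {beta[0]:.1f}, R^2 = {r2:.2f}")
```

The reported R² = 0.69 plays the same role as the R² computed here: the fraction of age variance explained by the linear combination of predictors.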
Alshali, Ruwaida Z; Salim, Nesreen A; Satterthwaite, Julian D; Silikas, Nick
2015-02-01
To measure the bottom/top hardness ratio of bulk-fill and conventional resin-composite materials, and to assess hardness changes after dry and ethanol storage. Filler content and kinetics of thermal decomposition were also tested using thermogravimetric analysis (TGA). Six bulk-fill (SureFil SDR, Venus bulk fill, X-tra base, Filtek bulk fill flowable, Sonic fill, and Tetric EvoCeram bulk-fill) and eight conventional resin-composite materials (Grandioso flow, Venus Diamond flow, X-flow, Filtek Supreme Ultra Flowable, Grandioso, Venus Diamond, TPH Spectrum, and Filtek Z250) were tested (n=5). Initial and 24h (post-cure dry storage) top and bottom microhardness values were measured. Microhardness was re-measured after the samples were stored in 75% ethanol/water solution. Thermal decomposition and filler content were assessed by TGA. Results were analysed using one-way ANOVA and paired sample t-test (α=0.05). All materials showed a significant increase in microhardness after 24h of dry storage, ranging from 9.1% to 100.1%. A bottom/top microhardness ratio >0.9 was exhibited by all materials. All materials showed a significant decrease in microhardness after 24h of storage in 75% ethanol/water, ranging from 14.5% to 74.2%. The extent of post-irradiation hardness development was positively correlated with the extent of ethanol softening (R(2)=0.89, p<0.001). Initial thermal decomposition temperature assessed by TGA was variable and was correlated with ethanol softening. Bulk-fill resin-composites exhibit a bottom/top hardness ratio comparable to conventional materials at the manufacturer-recommended thickness. Hardness was affected to a variable extent by storage, consistent with the variable inorganic filler content and initial thermal decomposition shown by TGA. The manufacturer-recommended depth of cure of bulk-fill resin-composites can be reached based on the microhardness method.
Characterization of the primary polymer network of a resin-composite material should be considered when evaluating its stability in the aqueous oral environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Measurement time and statistics for a noise thermometer with a synthetic-noise reference
NASA Astrophysics Data System (ADS)
White, D. R.; Benz, S. P.; Labenski, J. R.; Nam, S. W.; Qu, J. F.; Rogalla, H.; Tew, W. L.
2008-08-01
This paper describes methods for reducing the statistical uncertainty in measurements made by noise thermometers using digital cross-correlators and, in particular, for thermometers using pseudo-random noise for the reference signal. First, a discrete-frequency expression for the correlation bandwidth for conventional noise thermometers is derived. It is shown how an alternative frequency-domain computation can be used to eliminate the spectral response of the correlator and increase the correlation bandwidth. The corresponding expressions for the uncertainty in the measurement of pseudo-random noise in the presence of uncorrelated thermal noise are then derived. The measurement uncertainty in this case is less than that for true thermal-noise measurements. For pseudo-random sources generating a frequency comb, an additional small reduction in uncertainty is possible, but at the cost of increasing the thermometer's sensitivity to non-linearity errors. A procedure is described for allocating integration times to further reduce the total uncertainty in temperature measurements. Finally, an important systematic error arising from the calculation of ratios of statistical variables is described.
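The core idea of cross-correlation noise thermometry, averaging the product of two channels so that each amplifier's own uncorrelated noise cancels, can be sketched numerically. This is a white-noise toy model, not the paper's spectral estimator or its pseudo-random reference.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two amplifier channels see the same thermal-noise source plus
# independent amplifier noise; averaging the cross product rejects the
# uncorrelated part, leaving an estimate of the common noise power.
n = 200_000
common = rng.normal(0, 1.0, size=n)          # shared sensor noise, power 1.0
ch_a = common + rng.normal(0, 2.0, size=n)   # channel A adds its own noise
ch_b = common + rng.normal(0, 2.0, size=n)   # channel B adds its own noise

cross_power = np.mean(ch_a * ch_b)   # converges to the common power
auto_power = np.mean(ch_a * ch_a)    # contaminated by amplifier noise

print(f"cross-correlation estimate = {cross_power:.3f} (true 1.0)")
print(f"single-channel estimate    = {auto_power:.3f} (true 1.0 + 4.0)")
```

The residual scatter of the cross estimate around the true power is the statistical uncertainty the paper's bandwidth and integration-time analysis seeks to minimize.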
Measurement of Lp(a) with a two-step monoclonal competitive sandwich ELISA method.
Morikawa, W; Iki, R; Terano, T; Funatsu, A; Sugiuchi, H; Uji, Y; Okabe, H
1995-06-01
To evaluate the results of Lipoprotein (a) [Lp(a)] measurements by a competitive two-step monoclonal enzyme-linked immunosorbent assay (ELISA) method, comparing them with those by a conventional ELISA. Serum containing various isoforms of Lp(a) and purified Lp(a) were assayed using the method described here and commercially available kits. The reference range was determined from 324 normal subjects by calculation from logarithmically transformed Lp(a) results. Our method takes advantage of a competitive reaction between fixed antibody and free antibody to Lp(a), with a detection range up to 1000 mg/L and a lowest detection limit of 2 mg/L. The anti-Lp(a) monoclonal antibody employed in the assay system reacts uniformly with all phenotypes of Lp(a) but shows very low cross-reactivity with plasminogen and LDL. Within-run and between-run precisions were excellent, giving CVs of 2.9% and 4.0% at mean values of 145 and 635 mg/L, respectively. The results of our method correlated well with those of a polyclonal method (Biopool) and a monoclonal antibody method (Terumo): Y (our method) = 0.99 × (polyclonal method, Biopool) − 1.9, r = 0.994 (n = 60), and Y = 0.94 × (monoclonal method, Terumo) − 9.8, r = 0.97 (n = 60), respectively. The reference range was 105.9 +/- 25.4 mg/L; the difference between the sexes was not significant. Our method has proven highly accurate and specific. It is applicable with an autoanalyzer because it does not require the pre-dilution step that is necessary for Lp(a) determination by conventional ELISA. Accordingly, we conclude that our method is workable for both clinical laboratories and mass screening.
Comparing 3D foot scanning with conventional measurement methods.
Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J
2014-01-01
Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured from 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effects on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found to have significant effects (p < 0.05) on the six measured foot dimensions. The precision of the 3D scanning method, with mean absolute difference values between 0.73 and 1.50 mm, was the best among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
Sakai, Hiroyuki; Nagao, Hidenori; Sakurai, Mamoru; Okumura, Takako; Nagai, Yoshiyuki; Shikuma, Junpei; Ito, Rokuro; Imazu, Tetsuya; Miwa, Takashi; Odawara, Masato
2015-01-01
For measuring serum 3,3',5'-triiodothyronine (rT3) levels, radioimmunoassay (RIA) has traditionally been used owing to the lack of other reliable methods; however, it has recently become difficult to perform. Meanwhile, liquid chromatography-tandem mass spectrometry (LC-MS/MS) has recently been attracting attention as a novel alternative method in clinical chemistry. To the best of our knowledge, there are no studies to date comparing results of the quantification of human serum rT3 between LC-MS/MS and RIA. We therefore examined the feasibility of LC-MS/MS as a novel alternative method for measuring serum rT3, thyroxine (T4), and 3,5,3'-triiodothyronine (T3) levels. Assay validation was performed by LC-MS/MS using quality control samples of rT3, T4, and T3 at 4 different concentrations prepared from reference compounds. Serum samples of 50 outpatients in our department were quantified both by LC-MS/MS and conventional immunoassay for rT3, T4, and T3. Correlation coefficients between the 2 measurement methods were statistically analyzed for each analyte. Matrix effects were not observed with our method. Intra-day and inter-day precisions were less than 10.8% and 9.6% for each analyte at each quality control level, respectively. Intra-day and inter-day accuracies were between 96.2% and 110%, and between 98.3% and 108.6%, respectively. The lower limit of quantification was 0.05 ng/mL. Strong correlations were observed between the 2 measurement methods (correlation coefficient, T4: 0.976, p < 0.001; T3: 0.912, p < 0.001; rT3: 0.928, p < 0.001). Our LC-MS/MS system requires no manual cleanup operation, and the process after application of a sample is fully automated; furthermore, it was found to be highly sensitive and superior in both precision and accuracy. The correlation between the 2 methods over a wide range of concentrations was strong. LC-MS/MS is therefore expected to become a useful tool for clinical diagnosis and research.
Non-invasive volumetric analysis of asymptomatic hands using a 3-D scanner
Shinkai, Hiroki; Yamamoto, Michiro; Tatebe, Masahiro; Iwatsuki, Katsuyuki; Kurimoto, Shigeru; Hirata, Hitoshi
2017-01-01
Hand swelling is one of the symptoms often seen in practice, but none of the available morphometric methods can quickly and efficiently quantify hand volume in an objective manner, and the current gold-standard volume measurement requires immersion in water, which can be difficult to use. Therefore, we aimed to analyze the accuracy of using 3-dimensional (3-D) scanning to measure hand volume. First, we compared the hand volume calculated using the 3-D scanner to that calculated from the conventional method among 109 volunteers to determine the reliability of 3-D measurements. We defined the beginning of the hand as the distal wrist crease, and 3-D forms of the hands were captured by the 3-D scanning system. Second, 238 volunteers (87 men, 151 women) with no disease or history of hand surgery underwent 3-D scanning. Data collected included age, height, weight, and shoe size. The wrist circumference (WC) and the distance between distal wrist crease and tip of middle finger (DDT) were measured. Statistical analyses were performed using linear regression to investigate the relationship between the hand volume and these parameters. In the first study, a significantly strong positive correlation was observed [R = 0.98] between the hand volume calculated via 3-D scanning and that calculated via the conventional method. In the second study, no significant differences between the volumes, WC or DDT of right and left hands were found. The correlations of hand volume with weight, WC, and DDT were strong. We created a formula to predict the hand volume using these parameters; these variables explained approximately 80% of the predicted volume. We confirmed that the new 3-D scanning method, which is performed without touching the hand and can record the form of the hand, yields an accurate volumetric analysis of an asymptomatic hand. PMID:28796816
Holdstock, G; Phillips, G; Hames, T K; Condon, B R; Fleming, J S; Smith, C L; Ackery, D M
1985-01-01
The absorption of 75Se-23-selena-25-homotaurocholate (SeHCAT) was compared with vitamin B12 absorption and conventional radiography in 44 patients with inflammatory bowel disease. The retention of SeHCAT was normal in 11 patients with ulcerative colitis but was abnormally low in 9 patients with terminal-ileal resection, 9 out of 14 patients with small-bowel Crohn's disease and in 2 out of 10 patients with Crohn's colitis. The 5 patients with small-bowel Crohn's disease and normal retention had either inactive disease or no radiological evidence of terminal ileal involvement. Measurements of the absorption of vitamin B12 did not discriminate between these groups, and there was very poor correlation between B12 and SeHCAT absorption (r = 0.506, P less than 0.05). There was extremely good correlation of SeHCAT retention measured using a whole-body counter with that measured using an uncollimated gamma camera (r = 0.96, P less than 0.001). The results suggest that SeHCAT retention may prove complementary to conventional methods of assessing small-bowel disease in patients with inflammatory bowel disease. As measurement by gamma camera is feasible, this test can be used in most departments of nuclear medicine.
Freitas, R; Nero, L A; Carvalho, A F
2009-07-01
Enumeration of mesophilic aerobes (MA) is the main quality and hygiene parameter for raw and pasteurized milk. High levels of these microorganisms indicate poor conditions in production, storage, and processing of milk, and also the presence of pathogens. Fifteen raw and fifteen pasteurized milk samples were submitted for MA enumeration by a conventional plating method (using plate count agar) and Petrifilm Aerobic Count plates (3M, St. Paul, MN), followed by incubation according to 3 official protocols: IDF/ISO (incubation at 30°C for 72 h), American Public Health Association (32°C for 48 h), and Brazilian Ministry of Agriculture (36°C for 48 h). The results were compared by linear regression and ANOVA. Considering the results from the conventional methodology, good correlation indices and an absence of significant differences between mean counts were observed, independent of the type of milk sample (raw or pasteurized) and incubation conditions (IDF/ISO, American Public Health Association, or Ministry of Agriculture). Considering the results from Petrifilm Aerobic Count plates, good correlation indices and an absence of significant differences were only observed for raw milk samples. The microbiota of pasteurized milk interfered negatively with the performance of Petrifilm Aerobic Count plates, probably because of the presence of microorganisms that poorly reduce the dye indicator of this system.
Lee, Hyo-Seol; Lee, Eun-Song; Kang, Si-Mook; Lee, Jae-Ho; Choi, Hyung-Jun; Kim, Baek-Il
The aim of this study was to assess the validity of a new caries activity test that uses dental plaque acidogenicity in children with deciduous dentition. Ninety-two children under three years of age underwent clinical examination using the dft index and two caries activity tests. Plaque samples for the new Cariview(®) test and saliva samples for the conventional Dentocult SM(®) test were collected, incubated, and scored according to each manufacturer's instructions. The data were analysed using ANOVA and Spearman correlation analyses to evaluate the relationships between the test results and the caries experience. The mean dft index of all subjects was 4.73, and 17.4% of the subjects were caries-free. The levels of caries risk based on the new Cariview test score increased significantly with caries experience (p < 0.01). The test results revealed a stronger correlation with the caries indices (dft and dt index) than the conventional SM colony counting method (r = 0.43 and r = 0.39, p < 0.01). The new caries activity test, which analyses the acidogenic potential of whole microorganisms from dental plaque, can be used to evaluate caries risk in children with deciduous teeth.
Yong, Hua-Hie; Borland, Ron; Balmford, James; Hitchman, Sara C; Cummings, K Michael; Driezen, Pete; Thompson, Mary E
2017-02-01
The rapid rise in electronic cigarettes (ECs) globally has stimulated much debate about the relative risk and public health impact of this new emerging product category as compared to conventional cigarettes. The sale and marketing of ECs containing nicotine are banned in many countries (eg, Australia) but are allowed in others (eg, United Kingdom). This study examined prevalence and correlates of the belief that ECs are a lot less harmful than conventional cigarettes under the different regulatory environments in Australia (ie, more restrictive) and the United Kingdom (ie, less restrictive). Australian and UK data from the 2013 survey of the International Tobacco Control Four-Country project were analyzed. More UK than Australian respondents (58.5% vs. 35.2%) believed that ECs are a lot less harmful than conventional cigarettes but more respondents in Australia than in the United Kingdom selected "Don't Know" (36.5% vs. 17.1%). The proportion that responded "A little less, equally or more harmful" did not differ between countries. Correlates of the belief that ECs are "A lot less harmful" differed between countries, while correlates of "Don't Know" response did not differ. Consistent with the less restrictive regulatory environment affecting the sale and marketing of ECs, smokers and recent ex-smokers in the United Kingdom were more likely to believe ECs were less harmful relative to conventional cigarettes compared to those in Australia. What this study adds: Among smokers and ex-smokers, this study found that the belief that ECs are (a lot) less harmful than conventional cigarettes was considerably higher in the United Kingdom than in Australia in 2013. 
The finding is consistent with the less restrictive regulatory environment for ECs in the United Kingdom, suggesting that the regulatory framework for ECs adopted by a country can affect smokers' perceptions about the relative harmfulness of ECs, the group that stands to gain the most from having an accurate belief about the relative harms of ECs. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco.
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method, with higher image resolution and amplitude fidelity, is becoming increasingly popular. However, LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost, and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in the pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relaxes the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction; thus it can reduce the number of vertical grid points and the memory required during computation, which makes our method more computationally efficient than the standard implementation. Moreover, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching in LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability to complex models.
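A normalized cross-correlation objective of the kind described, sensitive only to waveform similarity and not to amplitude, can be sketched as follows. The misfit definition below is one common form and is an assumption, not necessarily the authors' exact functional.

```python
import numpy as np

def ncc_objective(d_pred, d_obs):
    """Normalized cross-correlation misfit: 1 - <p, d> / (||p|| ||d||).
    Zero when the traces match up to a positive scale factor."""
    p = d_pred / np.linalg.norm(d_pred)
    d = d_obs / np.linalg.norm(d_obs)
    return 1.0 - np.dot(p, d)

# Toy "observed" trace: a decaying sinusoid standing in for seismic data.
t = np.linspace(0, 1, 500)
observed = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)

# Amplitude errors (e.g. unknown source strength, attenuation) do not
# change the misfit; a waveform-shape error (here a time shift) does.
scaled = 5.0 * observed
shifted = np.sin(2 * np.pi * 8 * (t - 0.02)) * np.exp(-3 * t)

print(f"misfit(scaled)  = {ncc_objective(scaled, observed):.3f}")
print(f"misfit(shifted) = {ncc_objective(shifted, observed):.3f}")
```

Because each trace is normalized before the inner product, any overall gain error in the predicted data drops out, which is why such objectives relax the strong amplitude-matching requirement of standard LSM.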
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, Jose M., E-mail: joseman@sas.upenn.edu; Plaza, Cesar; Polo, Alfredo
2012-01-15
Highlights: Thermal analysis was used to assess the stability and composition of organic matter in three diverse municipal waste streams. Results were compared with C mineralization during a 90-day incubation, FTIR, and 13C NMR. Thermal analysis reflected the differences between the organic wastes before and after the incubation. The calculated energy density showed a strong correlation with cumulative respiration. Conventional and thermal methods provide complementary means of characterizing organic wastes.
Abstract: The use of organic municipal wastes as soil amendments is an increasing practice that can divert significant amounts of waste from landfill, and provides a potential source of nutrients and organic matter to ameliorate degraded soils. Due to the high heterogeneity of organic municipal waste streams, it is difficult to rapidly and cost-effectively establish their suitability as soil amendments using a single method. Thermal analysis has been proposed as an evolving technique to assess the stability and composition of the organic matter present in these wastes. In this study, three different organic municipal waste streams (i.e., a municipal waste compost (MC), a composted sewage sludge (CS) and a thermally dried sewage sludge (TS)) were characterized using conventional and thermal methods. The conventional methods used to test organic matter stability included laboratory incubation with measurement of respired C, and spectroscopic methods to characterize chemical composition. Carbon mineralization was measured during a 90-day incubation, and samples before and after incubation were analyzed by chemical (elemental analysis) and spectroscopic (infrared and nuclear magnetic resonance) methods.
Results were compared with those obtained by thermogravimetry (TG) and differential scanning calorimetry (DSC) techniques. Total amounts of CO2 respired indicated that the organic matter in the TS was the least stable, while that in the CS was the most stable. This was confirmed by changes detected with the spectroscopic methods in the composition of the organic wastes due to C mineralization. Differences were especially pronounced for TS, which showed a remarkable loss of aliphatic and proteinaceous compounds during the incubation process. TG, and especially DSC analysis, clearly reflected these differences between the three organic wastes before and after the incubation. Furthermore, the calculated energy density, which represents the energy available per unit of organic matter, showed a strong correlation with cumulative respiration. Results obtained support the hypothesis of a potential link between the thermal and biological stability of the studied organic materials, and consequently the ability of thermal analysis to characterize the maturity of municipal organic wastes and composts.
Yamamoto, Naoyuki; Kawashima, Natsumi; Kitazaki, Tomoya; Mori, Keita; Kang, Hanyue; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro
2018-05-01
Smart toilets could be used to monitor different components of urine in daily life for early detection of lifestyle-related diseases and prompt provision of treatment. For analysis of biological samples such as urine by midinfrared spectroscopy, thin-film samples like liquid cells are needed because of the strong absorption of midinfrared light by water. Conventional liquid cells or fixed cells are prepared based on the liquid membrane method and solution technique, but these are not quantitative and are difficult to set up and clean. We generated an ultrasonic standing wave reflection plane in a sample and produced an ultrasonic liquid cell. In this cell, the thickness of the optical path length was adjustable, as in the conventional method. The reflection plane could be generated at an arbitrary depth and internal reflected light could be detected by changing the frequency of the ultrasonic wave. We could generate refractive index boundaries using the density difference created by the ultrasonic standing wave. Creation of the reflection plane in the sample was confirmed by optical coherence tomography. Using the proposed method and midinfrared spectroscopy, we discriminated between normal urine samples spiked with glucose at different concentrations and obtained a high correlation coefficient. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Liu, Dean-Mo; Chen, I-Wei
2001-01-01
The present invention provides a process for the encapsulation of biologically important proteins into transparent, porous silica matrices by an alcohol-free, aqueous, colloidal sol-gel process, and to the biological materials encapsulated thereby. The process is exemplified by studies involving encapsulated cytochrome c, catalase, myoglobin, and hemoglobin, although non-proteinaceous biomaterials, such as active DNA or RNA fragments, cells or even tissues, may also be encapsulated in accordance with the present methods. Conformation, and hence activity of the biomaterial, is successfully retained after encapsulation as demonstrated by optical characterization of the molecules, even after long-term storage. The retained conformation of the biomaterial is strongly correlated to both the rate of gelation and the subsequent drying speed of the encapsulating matrix. Moreover, in accordance with this process, gelation is accelerated by the use of a higher colloidal solid concentration and a lower synthesis pH than conventional methods, thereby enhancing structural stability and retained conformation of the biomaterials. Thus, the invention also provides a remarkable improvement in retaining the biological activity of the encapsulated biomaterial, as compared with those involved in conventional alkoxide-based processes. It further provides new methods for the quantitative and qualitative detection of test substances that are reactive to, or catalyzed by, the active, encapsulated biological materials.
Embedded correlated wavefunction schemes: theory and applications.
Libisch, Florian; Huang, Chen; Carter, Emily A
2014-09-16
Conspectus: Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however, applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications.
We first discuss our density functional embedding theory, which is formally exact. We show how to determine the embedding potential, which replaces the interaction between subsystems, at the DFT level. CW calculations are performed using a fixed embedding potential, that is, a non-self-consistent embedding scheme. We demonstrate this embedding theory for two challenging electron transfer phenomena: (1) initial oxidation of an aluminum surface and (2) hot-electron-mediated dissociation of hydrogen molecules on a gold surface. In both cases, the interaction between gas molecules and metal surfaces was treated by sophisticated CW techniques, with the remainder of the extended metal surface being treated by DFT. Our embedding approach overcomes the limitations of conventional Kohn-Sham DFT in describing charge transfer, multiconfigurational character, and excited states. From these embedding simulations, we gained important insights into fundamental processes that are crucial aspects of fuel cell catalysis (i.e., O2 reduction at metal surfaces) and plasmon-mediated photocatalysis by metal nanoparticles. Moreover, our findings agree very well with experimental observations, while offering new views into the chemistry. We finally discuss our recently formulated potential-functional embedding theory that provides a seamless, first-principles way to include back-action onto the environment from the embedded region.
NASA Astrophysics Data System (ADS)
Tang, Xiaoxing; Qian, Yuan; Guo, Yanchuan; Wei, Nannan; Li, Yulan; Yao, Jian; Wang, Guanghua; Ma, Jifei; Liu, Wei
2017-12-01
A novel method has been developed for analyzing atmospheric pollutant metals (Be, Mn, Fe, Co, Ni, Cu, Zn, Se, Sr, Cd, and Pb) by laser ablation inductively coupled plasma mass spectrometry. In this method, solid standards are prepared by depositing droplets of aqueous standard solutions on the surface of a membrane filter of the same type as that used for collecting atmospheric pollutant metals. Laser parameters were optimized, and the ablation behavior of the filter discs was studied. Radial line scans across the filter disc proved a representative ablation strategy that avoids errors from inhomogeneous filter standards and from edge effects of the filter disc. Pt, used as the internal standard, greatly improved the correlation coefficient of the calibration curve. The developed method provides low detection limits, from 0.01 ng m⁻³ for Be and Co to 1.92 ng m⁻³ for Fe. It was successfully applied to the determination of atmospheric pollutant metals collected in Lhasa, China. The analytical results showed good agreement with those obtained by conventional liquid analysis. In contrast to the conventional acid digestion procedure, the novel method not only greatly reduces sample preparation and shortens the analysis time but also offers a possible means of studying the spatial distribution of metals across atmospheric filter samples.
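The internal-standard calibration step can be sketched in Python; the signal counts and spiked amounts below are invented for illustration (the abstract does not give the study's calibration data):

```python
import numpy as np

# Hypothetical calibration data: analyte and Pt internal-standard signals
# (counts) for filter standards spiked at known amounts (ng).
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
analyte_counts = np.array([120., 980., 2100., 3900., 8300.])
pt_counts = np.array([1.00e4, 0.98e4, 1.05e4, 0.97e4, 1.02e4])

# Normalize by the internal standard to correct for ablation-yield drift.
ratio = analyte_counts / pt_counts

# Linear calibration on the normalized signal.
slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]

def quantify(sample_counts, sample_pt):
    """Back-calculate the spiked amount from a sample's normalized signal."""
    return (sample_counts / sample_pt - intercept) / slope

print(f"R = {r:.4f}")
```

Normalizing by the Pt signal cancels shot-to-shot variation in the ablated mass, which is why an internal standard tightens the calibration line.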
Li, Chao; Xie, Hong-Bin; Chen, Jingwen; Yang, Xianhai; Zhang, Yifei; Qiao, Xianliang
2014-12-02
Short chain chlorinated paraffins (SCCPs) are under evaluation for inclusion in the Stockholm Convention on persistent organic pollutants. However, information on their reaction rate constants with gaseous ·OH (kOH) is unavailable, limiting the evaluation of their persistence in the atmosphere. Experimental determination of kOH is constrained by the unavailability of authentic chemical standards for some SCCP congeners. In this study, we evaluated and selected density functional theory (DFT) methods to predict kOH of SCCPs by comparing the experimental kOH values of six polychlorinated alkanes (PCAs) with those calculated by different theoretical methods. We found that the M06-2X/6-311+G(3df,2pd)//B3LYP/6-311+G(d,p) method is time-effective and can be used to predict kOH of PCAs. Moreover, based on the calculated kOH of nine SCCPs and available experimental kOH values of 22 short-chain PCAs, a quantitative structure-activity relationship (QSAR) model was developed. The molecular structural characteristics determining the ·OH reaction rate were discussed: log kOH was found to correlate negatively with the percentage of chlorine substitution (Cl%). The DFT calculation method and the QSAR model are important alternatives to conventional experimental determination of kOH for SCCPs and hold promise for predicting their atmospheric persistence.
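The reported negative correlation between log kOH and chlorine content can be illustrated with a minimal one-descriptor QSAR fit; the Cl% and log kOH values below are hypothetical placeholders, not the paper's training set:

```python
import numpy as np

# Hypothetical training set: chlorine substitution percentage (Cl%) and
# log kOH (kOH in cm^3 molecule^-1 s^-1) for a few polychlorinated alkanes.
cl_pct = np.array([30., 40., 50., 55., 60., 65.])
log_koh = np.array([-11.0, -11.2, -11.5, -11.6, -11.8, -12.0])

# One-descriptor QSAR: log kOH = a * Cl% + b
a, b = np.polyfit(cl_pct, log_koh, 1)
r2 = np.corrcoef(cl_pct, log_koh)[0, 1] ** 2

def predict_log_koh(cl):
    """Predict log kOH for a congener with the given Cl%."""
    return a * cl + b

print(a, b, r2)  # negative slope: more chlorine -> slower .OH reaction
```

A real QSAR would use more descriptors and a validation set; this shows only the sign and form of the single-descriptor relationship the abstract describes.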
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F.; Du, Yiping; Wald, Lawrence L.; Griswold, Mark A.; Setsompop, Kawin
2015-01-01
Purpose We incorporate Simultaneous Multi-Slice (SMS) acquisition into MR Fingerprinting (MRF) to accelerate the MRF acquisition. Methods The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus, the signals from the simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Further, by varying the Gz blip area and/or polarity as a function of TR, the slices’ differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. Results In Monte Carlo simulation, the parameter mapping from multiband factor (MB)=2 t-Blipped SMS-MRF shows good accuracy and precision when compared to results from reference conventional MRF data, with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB=2 t-Blipped SMS-MRF are in high agreement with those from conventional MRF. Conclusions The MB=2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework. PMID:26059430
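The concordance correlation coefficient (CCC) used to compare SMS-MRF estimates against conventional MRF can be computed as follows (Lin's formulation; the T1 values are illustrative, not the study's data):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets.
    Unlike Pearson's r, it penalizes constant offsets and scale differences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative T1 values (ms): perfect agreement vs. a constant bias.
t1_ref = np.array([800., 1000., 1200., 1500.])
print(concordance_ccc(t1_ref, t1_ref))          # perfect agreement
print(concordance_ccc(t1_ref, t1_ref + 100.0))  # penalized by the offset
```

This is why CCC, rather than plain correlation, is the right agreement measure for comparing two quantitative mapping methods.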
Functional MRI registration with tissue-specific patch-based functional correlation tensors.
Zhou, Yujia; Zhang, Han; Zhang, Lichi; Cao, Xiaohuan; Yang, Ru; Feng, Qianjin; Yap, Pew-Thian; Shen, Dinggang
2018-06-01
Population studies of brain function with resting-state functional magnetic resonance imaging (rs-fMRI) rely on accurate intersubject registration of functional areas. This is typically achieved through registration using high-resolution structural images with more spatial details and better tissue contrast. However, accumulating evidence has suggested that such strategy cannot align functional regions well because functional areas are not necessarily consistent with anatomical structures. To alleviate this problem, a number of registration algorithms based directly on rs-fMRI data have been developed, most of which utilize functional connectivity (FC) features for registration. However, most of these methods usually extract functional features only from the thin and highly curved cortical grey matter (GM), posing great challenges to accurate estimation of whole-brain deformation fields. In this article, we demonstrate that additional useful functional features can also be extracted from the whole brain, not restricted to the GM, particularly the white-matter (WM), for improving the overall functional registration. Specifically, we quantify local anisotropic correlation patterns of the blood oxygenation level-dependent (BOLD) signals using tissue-specific patch-based functional correlation tensors (ts-PFCTs) in both GM and WM. Functional registration is then performed by integrating the features from different tissues using the multi-channel large deformation diffeomorphic metric mapping (mLDDMM) algorithm. Experimental results show that our method achieves superior functional registration performance, compared with conventional registration methods. © 2018 Wiley Periodicals, Inc.
Interrelationship and limitations of conventional radiographic assessments of skeletal maturation.
Shim J, Jocelyne; Bogowicz, Paul; Heo, Giseon; Lagravère, Manuel O
2012-06-01
Assessments of skeletal maturation (ASM) are used by clinicians to optimize treatments for each patient. This study examines the interrelationship between, and limitations of, hand-wrist and cervical vertebrae maturation methods in adolescent patients. Radiographs (hand-wrist and lateral cephalometric) were obtained from patients (n=62, 11 to 17 years of age) at two time points (T1/T2) separated by intervals of 9.75 to 16.50 months. Radiographs were scored using cervical vertebral maturation staging (CS) and Fishman's skeletal maturation indices (SMI). Functional data analysis was used to visually assess maturational changes of the cervical vertebrae. Both SMI and CS increased over the period of observation. Age was moderately correlated with SMI (0.707/0.651 at T1/T2) and mildly correlated with CS (0.431/0.314 at T1/T2). There was some evidence of gender variability in SMI. The correlations between SMI and CS were 0.513 and 0.372 at T1 and T2, respectively. Functional data analysis illustrated the difficulty in differentiating contiguous cervical stages. Discrepancies exist between the two scoring methods. Further studies are needed to overcome the difficulties encountered with CS. Clinicians are advised to use ASM methods with caution in adolescent patients given the aforementioned discrepancies. Separate references for boys and girls are not required. Copyright © 2012. Published by Elsevier Masson SAS.
Kurashige, Yuki; Yanai, Takeshi
2011-09-07
We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of complete active space second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with active spaces that are intractable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated in an application to the potential energy curve of the chromium dimer, one of the most demanding multireference systems, which requires the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach, using a large DMRG reference, addressed the questions of why the dissociation energy is largely overestimated by CASPT2 with the small 12-orbital (3d4s) active space and why it is oversensitive to the choice of the zeroth-order Hamiltonian. © 2011 American Institute of Physics
Reliability of Wind Speed Data from Satellite Altimeter to Support Wind Turbine Energy
NASA Astrophysics Data System (ADS)
Uti, M. N.; Din, A. H. M.; Omar, A. H.
2017-10-01
Satellite altimetry has proven to be one of the important tools for providing good-quality information in oceanographic studies. Nowadays, most countries in the world have begun implementing wind energy as one of their renewable energy sources for electric power generation. Many wind speed studies have been conducted in Malaysia using conventional methods and scientific techniques, such as anemometers and volunteer observing ships (VOS), to obtain the wind speed data needed to support the development of renewable energy. However, these conventional methods have limitations, such as limited spatial and temporal coverage and poor continuity in data sharing by VOS members. Thus, the aim of this research is to determine the reliability of wind speed data from multi-mission satellite altimetry for assessing wind energy potential in Malaysian seas. The wind speed data are derived from nine satellite altimeters covering the years 1993 to 2016. To validate the reliability of the altimeter wind speed data, a comparison with wind speed data from ground-truth buoys located off Sabah and Sarawak is conducted. The validation is carried out in terms of correlation, root mean square error (RMSE), and satellite track analysis. As a result, both techniques show good correlation, with values of 0.7976 and 0.6148 for the points located in the Sabah and Sarawak seas, respectively. It can be concluded that reliable wind speed data from multi-mission satellite altimetry can be obtained to support renewable energy.
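The validation statistics used here (correlation and RMSE against buoy data) are straightforward to compute; the wind speeds below are made up for illustration and are not the study's measurements:

```python
import numpy as np

def validate(altimeter, buoy):
    """Correlation and RMSE between satellite-altimeter and buoy wind speeds."""
    altimeter = np.asarray(altimeter, float)
    buoy = np.asarray(buoy, float)
    r = np.corrcoef(altimeter, buoy)[0, 1]
    rmse = np.sqrt(np.mean((altimeter - buoy) ** 2))
    return r, rmse

# Illustrative collocated wind speeds (m/s).
buoy_ws = np.array([3.2, 4.1, 5.0, 6.3, 7.8, 5.5])
alt_ws  = np.array([3.5, 4.0, 5.4, 6.0, 7.5, 5.9])
r, rmse = validate(alt_ws, buoy_ws)
print(f"r = {r:.3f}, RMSE = {rmse:.2f} m/s")
```

In practice the collocation step (matching altimeter passes to buoy records in space and time) dominates the effort; the statistics themselves are two lines.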
Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.
Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus
2017-01-01
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and we therefore studied ISC changes under task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button-press time series and neuronal synchronization in the left IFG over the comparison with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
Shibata, Tomoyuki; Solo-Gabriele, Helena M; Sinigalliano, Christopher D; Gidley, Maribeth L; Plano, Lisa R W; Fleisher, Jay M; Wang, John D; Elmir, Samir M; He, Guoqing; Wright, Mary E; Abdelzaher, Amir M; Ortega, Cristina; Wanless, David; Garza, Anna C; Kish, Jonathan; Scott, Troy; Hollenbeck, Julie; Backer, Lorraine C; Fleming, Lora E
2010-11-01
The objectives of this work were to compare enterococci (ENT) measurements based on the membrane filter method, ENT(MF), with alternatives that can provide faster results, including alternative enterococci methods (e.g., chromogenic substrate (CS) and quantitative polymerase chain reaction (qPCR)), and results from regression models based upon environmental parameters that can be measured in real-time. ENT(MF) were also compared to source tracking markers (Staphylococcus aureus, Bacteroidales human and dog markers, and Catellicoccus gull marker) in an effort to interpret the variability of the signal. Results showed that concentrations of enterococci based upon MF (<2 to 3320 CFU/100 mL) were significantly different from the CS and qPCR methods (p < 0.01). The correlations between MF and CS (r = 0.58, p < 0.01) were stronger than between MF and qPCR (r ≤ 0.36, p < 0.01). Enterococci levels by the MF, CS, and qPCR methods were positively correlated with turbidity and tidal height. Enterococci by MF and CS were also inversely correlated with solar radiation, but enterococci by qPCR were not. The regression model based on environmental variables provided fair qualitative predictions of enterococci by MF in real-time for daily geometric mean levels, but not for individual samples. Overall, ENT(MF) was not significantly correlated with source tracking markers, with the exception of samples collected during one storm event. The inability of the regression model to predict ENT(MF) levels for individual samples is likely due to the different sources of ENT impacting the beach at any given time, making it particularly difficult to predict the short-term variability of ENT(MF) from environmental parameters.
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.
Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S
2016-04-14
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
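One common CBS extrapolation, the two-point inverse-cube formula E_n = E_CBS + A/n³, can be sketched as follows; the abstract mentions "a variety of extrapolation procedures", so this is just one representative choice, and the correlation energies are hypothetical values in hartree:

```python
# Two-point complete-basis-set (CBS) extrapolation of the correlation
# energy, assuming the common inverse-cube form E_n = E_CBS + A / n**3.

def cbs_n3(e_lo, n_lo, e_hi, n_hi):
    """Extrapolate correlation energies from basis-set cardinals n_lo < n_hi.
    Solving the two-point system for E_CBS eliminates the A/n**3 term."""
    w_lo, w_hi = n_lo ** 3, n_hi ** 3
    return (w_hi * e_hi - w_lo * e_lo) / (w_hi - w_lo)

# Hypothetical quadruple-zeta (n=4) and quintuple-zeta (n=5) energies.
e_qz, e_5z = -0.512300, -0.518100
e_cbs = cbs_n3(e_qz, 4, e_5z, 5)
# The extrapolated limit lies below both finite-basis values.
```

The F12b comparison in the abstract is exactly a check of such formulas: explicit correlation reaches near the same limit without the largest basis sets.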
Comparison of RNFL thickness and RPE-normalized RNFL attenuation coefficient for glaucoma diagnosis
NASA Astrophysics Data System (ADS)
Vermeer, K. A.; van der Schoot, J.; Lemij, H. G.; de Boer, J. F.
2013-03-01
Recently, a method to determine the retinal nerve fiber layer (RNFL) attenuation coefficient, based on normalization on the retinal pigment epithelium, was introduced. In contrast to conventional RNFL thickness measures, this novel measure represents a scattering property of the RNFL tissue. In this paper, we compare the RNFL thickness and the RNFL attenuation coefficient in 10 normal and 8 glaucomatous eyes by analyzing the correlation coefficient and the receiver operating characteristic (ROC) curves. The thickness and attenuation coefficient showed moderate correlation (r=0.82). Smaller correlation coefficients were found within normal (r=0.55) and glaucomatous (r=0.48) eyes. The full separation between normal and glaucomatous eyes based on the RNFL attenuation coefficient yielded an area under the ROC curve (AROC) of 1.0. The AROC for the RNFL thickness was 0.9875. No statistically significant difference between the two measures was found by comparing the AROCs. RNFL attenuation coefficients may thus replace current RNFL thickness measurements or be combined with them to improve glaucoma diagnosis.
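The area under the ROC curve (AROC) reported above can be computed directly from the Mann-Whitney U statistic, which counts correctly ordered normal/glaucoma pairs; the eye measurements below are invented for illustration:

```python
import numpy as np

def auroc(healthy, diseased):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    Assumes lower values (e.g., thinner RNFL) indicate disease."""
    healthy = np.asarray(healthy, float)
    diseased = np.asarray(diseased, float)
    # Fraction of (healthy, diseased) pairs correctly ordered; ties count half.
    diff = healthy[:, None] - diseased[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# Illustrative measurements (arbitrary units), not the study's data.
normal_eyes = np.array([5.1, 4.8, 5.5, 5.0, 4.9])
glaucoma_eyes = np.array([3.2, 3.9, 2.8, 3.5])
print(auroc(normal_eyes, glaucoma_eyes))  # 1.0 -> full separation
```

An AROC of 1.0, as reported for the attenuation coefficient, means every normal eye scored above every glaucomatous eye in the sample.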
Measurement potential of laser speckle velocimetry
NASA Technical Reports Server (NTRS)
Adrian, R. J.
1982-01-01
Laser speckle velocimetry, the measurement of fluid velocity by measuring the translation of a speckle pattern or of individual particles that are moving with the fluid, is described. The measurement is accomplished by illuminating the fluid with consecutive pulses of laser light and recording the images of the particles or the speckles on a double-exposed photographic plate. The plate contains flow information throughout the image plane, so that a single double exposure may provide data at hundreds or thousands of points in the illuminated region of the fluid. Conventional interrogation of the specklegram involves illuminating the plate to form Young's fringes, whose spacing is inversely proportional to the speckle separation. Subsequently, the fringes are digitized and analyzed in a computer to determine their frequency and orientation, yielding the velocity magnitude and direction. The Young's fringe technique is equivalent to performing a 2-D spatial correlation of the double-exposed specklegram intensity pattern, and this observation suggests that correlation should be considered as an alternative processing method. The principle of the correlation technique is examined.
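The equivalence between Young's fringe analysis and a 2-D spatial correlation suggests the following sketch, in which the displacement between exposures is recovered as the peak of an FFT-based cross-correlation (a shifted copy of a synthetic speckle field stands in for a double-exposed specklegram):

```python
import numpy as np

def displacement_by_correlation(img, shift):
    """Correlate a speckle image with a shifted copy of itself and locate
    the correlation peak, which sits at the applied displacement."""
    shifted = np.roll(np.roll(img, shift[0], axis=0), shift[1], axis=1)
    # 2-D cross-correlation via FFT (circular, adequate for this sketch).
    corr = np.fft.ifft2(np.fft.fft2(img).conj() * np.fft.fft2(shifted)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(v) for v in peak)

rng = np.random.default_rng(0)
speckle = rng.random((64, 64))
print(displacement_by_correlation(speckle, (5, 3)))  # (5, 3)
```

Real interrogation works on small windows of the plate so that the velocity can vary across the field; this sketch uses one global window for clarity.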
Strongly correlated superconductivity and quantum criticality
NASA Astrophysics Data System (ADS)
Tremblay, A.-M. S.
Doped Mott insulators and doped charge-transfer insulators describe classes of materials that can exhibit unconventional superconducting ground states. Examples include the cuprates and the layered organic superconductors of the BEDT family. I present results obtained from plaquette cellular dynamical mean-field theory. Continuous-time quantum Monte Carlo evaluation of the hybridization expansion allows one to study the models in the large interaction limit where quasiparticles can disappear. The normal state which is unstable to the superconducting state exhibits a first-order transition between a pseudogap and a correlated metal phase. That transition is the finite-doping extension of the metal-insulator transition obtained at half-filling. This transition serves as an organizing principle for the normal and superconducting states of both cuprates and doped organic superconductors. In the less strongly correlated limit, these methods also describe the more conventional case where the superconducting dome surrounds an antiferromagnetic quantum critical point. Sponsored by NSERC RGPIN-2014-04584, CIFAR, Research Chair in the Theory of Quantum Materials.
NASA Astrophysics Data System (ADS)
Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.
2016-05-01
We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.
Detecting Noisy Events Using Waveform Cross-Correlation at Superarrays of Seismic Stations
NASA Astrophysics Data System (ADS)
von Seggern, D. H.; Tibuleac, I. M.
2007-12-01
Cross-correlation using master events, followed by stacking of the correlation series, has been shown to dramatically improve the detection thresholds of small-to-medium seismic arrays. With the goal of lowering the detection threshold, determining relative magnitudes or moments, and characterizing sources by empirical Green's functions, we extend the cross-correlation methodology to include "superarrays" of seismic stations. The superarray concept naturally brings further benefits over conventional arrays and single stations because many distances and azimuths can be sampled. This extension is straightforward given the ease with which regional or global data from various stations or arrays can currently be accessed and combined into a single database. We demonstrate the capability of superarrays to detect and analyze events that lie below the detection threshold. This is aided by applying an F-statistic detector to the superarray cross-correlation stack and its components. Our first example illustrates the use of a superarray consisting of the Southern Great Basin Digital Seismic Network, a small-aperture array (NVAR) in Mina, Nevada, and the Earthscope Transportable Array to detect events in the California-Nevada area. In our second example, we use a combination of small-to-medium arrays and single stations to study the rupture of the great Sumatra earthquake of 26 December 2004 and to detect its early aftershocks. The locations and times of "detected" events are confirmed using a frequency-wavenumber method at the small-to-medium arrays. We propose that ad hoc superarrays can be used in many studies where conventional approaches previously used only single arrays or groups of single stations. The availability of near-real-time data from many networks and of archived data from, for instance, IRIS makes possible the easy assembly of superarrays.
Furthermore, the continued improvement of seismic data availability and the continued growth in the number of world-wide seismic sensors will increasingly make superarrays an attractive choice for many studies.
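A minimal sketch of the master-event correlate-and-stack idea on synthetic traces (not data from the networks named above; the normalization here is approximate rather than a true sliding-window normalization, and the F-statistic detector is omitted):

```python
import numpy as np

def detection_stack(master_traces, continuous_traces):
    """Cross-correlate a master-event waveform against continuous data at
    each station, then stack across the network ("superarray")."""
    stacks = []
    for m, c in zip(master_traces, continuous_traces):
        m = (m - m.mean()) / (m.std() * len(m))   # approximate normalization
        c = (c - c.mean()) / c.std()
        cc = np.correlate(c, m, mode="valid")     # sliding correlation
        stacks.append(cc)
    # Coherent stacking suppresses incoherent noise by ~sqrt(N stations).
    return np.mean(stacks, axis=0)

rng = np.random.default_rng(1)
master = rng.standard_normal(50)
# Bury a weak scaled copy of the master at sample 200 in 20 noisy records.
records = []
for _ in range(20):
    trace = 0.5 * rng.standard_normal(400)
    trace[200:250] += 0.3 * master
    records.append(trace)

stack = detection_stack([master] * 20, records)
print(int(np.argmax(stack)))  # peak at the onset of the buried event
```

The event is well below the single-trace detection level, but the stacked correlation peak stands out, which is the effect the superarray approach exploits.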
A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging
Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi
2014-01-01
Purpose The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected using the local Pearson correlation coefficient (LPCC). The performance of the LPCC in characterizing relative differences between artifact and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of the rejected artifacts, with information on gradient directions and b values, on the parameter estimation was investigated using the mean square error (MSE), with the variance of the noise as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and by measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results The relative difference between artifact and artifact-free images calculated by the LPCC was larger than that calculated by the conventional correlation coefficient (p<0.05), indicating that the LPCC was more sensitive in detecting motion artifacts. The MSEs of all derived parameters from the data reserved after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts on the precision of the derived parameters was less than that of noise. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p<0.05).
Conclusion The proposed post-processing workflow was reliable to improve the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862
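One plausible reading of a local Pearson correlation criterion of this kind is to compute the Pearson correlation between a test image and a reference within local windows and average the per-window values, so that a localized motion artifact depresses the score more than a global correlation would register. The tile size, non-overlapping tiling and plain averaging below are my assumptions, not details taken from the paper:

```python
import numpy as np

def lpcc(a, b, win=8):
    """Mean local Pearson correlation between two images, computed
    over non-overlapping win x win tiles (hypothetical variant of
    the LPCC criterion; window scheme is an assumption)."""
    rs = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            pa = a[i:i + win, j:j + win].ravel()
            pb = b[i:i + win, j:j + win].ravel()
            if pa.std() < 1e-9 or pb.std() < 1e-9:
                continue  # flat tile: correlation undefined, skip it
            rs.append(np.corrcoef(pa, pb)[0, 1])
    return float(np.mean(rs))
```

An artifact-free image pair scores near 1, while a misaligned or corrupted image scores much lower, which is the property the rejection step relies on.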
Shim, Sung Ryul; Sun, Hwa Yeon; Ko, Young Myoung; Chun, Dong-Il; Yang, Won Jae
2014-01-01
Background Smartphone-based assessment may be a useful diagnostic and monitoring tool for patients. There have been many attempts to create smartphone diagnostic tools for clinical use in various medical fields, but few have demonstrated scientific validity. Objective The purpose of this study was to develop a smartphone application of the International Prostate Symptom Score (IPSS) and to demonstrate its validity and reliability. Methods From June 2012 to May 2013, a total of 1581 male participants (≥40 years old), with or without lower urinary tract symptoms (LUTS), visited our urology clinic via the health improvement center at Soonchunhyang University Hospital (Republic of Korea) and were enrolled in this study. A randomized repeated measures crossover design was employed using the smartphone application of the IPSS and the conventional paper form of the IPSS. A paired t test under a non-inferiority hypothesis was conducted. For the reliability test, the intraclass correlation coefficient (ICC) was measured. Results The total score of the IPSS (P=.289) and each item of the IPSS (P=.157-1.000) showed no differences between the paper version and the smartphone version. The mild, moderate, and severe LUTS groups showed no differences between the two versions of the IPSS. A significant correlation was noted in the total group (ICC=.935, P<.001). The mild, moderate, and severe LUTS groups also showed significant correlations (ICC=.616, .549, and .548, respectively, all P<.001). There was selection bias in this study, as only participants who had smartphones could participate. Conclusions The validity and reliability of the smartphone application version were comparable to those of the conventional paper version of the IPSS. The smartphone application of the IPSS could be an effective method for measuring lower urinary tract symptoms. PMID:24513507
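For reference, the intraclass correlation used in test-retest and crossover validity studies like this one can be computed from a subjects × methods score matrix via a two-way ANOVA decomposition. The sketch below implements the two-way mixed, single-measure, consistency form ICC(3,1); which ICC variant the study actually used is not stated in the abstract, so this choice is an assumption:

```python
import numpy as np

def icc_consistency(x):
    """ICC(3,1): two-way mixed effects, single measure, consistency.
    x is an (n_subjects, k_raters_or_methods) matrix. One common ICC
    form; the paper's exact variant is not specified."""
    n, k = x.shape
    grand = x.mean()
    # between-subjects and between-methods mean squares
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    # residual mean square from the two-way decomposition
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                    - x.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)
```

Because the consistency form ignores a constant offset between methods, two instruments that rank and space subjects identically score ICC = 1 even if one reads systematically higher.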
Conjugated polymer energy level shifts in lithium-ion battery electrolytes.
Song, Charles Kiseok; Eckstein, Brian J; Tam, Teck Lip Dexter; Trahey, Lynn; Marks, Tobin J
2014-11-12
The ionization potentials (IPs) and electron affinities (EAs) of widely used conjugated polymers are evaluated by cyclic voltammetry (CV) in conventional electrochemical and lithium-ion battery media, and also by ultraviolet photoelectron spectroscopy (UPS) in vacuo. By comparing the data obtained in the different systems, it is found that the IPs of the conjugated polymer films determined by conventional CV (IPC) can be correlated with UPS-measured HOMO energy levels (EH,UPS) by the relationship EH,UPS = (1.14 ± 0.23) × qIPC + (4.62 ± 0.10) eV, where q is the electron charge. It is also found that the EAs of the conjugated polymer films measured via CV in conventional (EAC) and Li(+) battery (EAB) media can be linearly correlated by the relationship EAB = (1.07 ± 0.13) × EAC + (2.84 ± 0.22) V. The slopes and intercepts of these equations can be correlated with the dielectric constants of the polymer film environments and the redox potentials of the reference electrodes, as modified by the surrounding electrolyte, respectively.
Speksnijder, L; Rousian, M; Steegers, E A P; Van Der Spek, P J; Koning, A H J; Steensma, A B
2012-07-01
Virtual reality is a novel method of visualizing ultrasound data with the perception of depth and offers possibilities for measuring non-planar structures. The levator ani hiatus has both convex and concave aspects. The aim of this study was to compare levator ani hiatus volume measurements obtained with conventional three-dimensional (3D) ultrasound and with a virtual reality measurement technique, and to establish their reliability and agreement. One hundred symptomatic patients visiting a tertiary pelvic floor clinic, with a normal intact levator ani muscle diagnosed on translabial ultrasound, were selected. Datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm at the level of minimal hiatal dimensions during contraction. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatus volume (in cm³) in conventional 3D ultrasound. Levator ani hiatus volumes were then measured semi-automatically in virtual reality (in cm³) using a segmentation algorithm. An intra- and interobserver analysis of reliability and agreement was performed in 20 randomly chosen patients. The mean difference between levator ani hiatus volume measurements performed using conventional 3D ultrasound and virtual reality was 0.10 (95% CI, -0.15 to 0.35) cm³. The intraclass correlation coefficient (ICC) comparing conventional 3D ultrasound with virtual reality measurements was > 0.96. Intra- and interobserver ICCs were > 0.94 for conventional 3D ultrasound measurements and > 0.97 for virtual reality measurements, indicating good reliability for both. Levator ani hiatus volume measurements performed using virtual reality were reliable and the results were similar to those obtained with conventional 3D ultrasonography. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
Zaritsky, Joshua; Rastogi, Anjay; Fischmann, George; Yan, Jieshi; Kleinman, Kenneth; Chow, Georgina; Gales, Barbara; Salusky, Isidro B.; Wesseling-Perry, Katherine
2014-01-01
Background The utilization of short-term daily hemodialysis has increased over the last few years, but little is known about its effects on the control of serum phosphate and fibroblast growth factor 23 (FGF23) levels. Methods We therefore performed a cross-sectional study to compare FGF23 levels as well as other biochemical variables between 24 patients undergoing short daily hemodialysis using the NxStage System® and 54 patients treated with conventional in-center hemodialysis. FGF23 levels were measured using the second-generation Immutopics® C-terminal assay. Results Short daily hemodialysis patients were younger than patients on conventional hemodialysis, but there were no differences between groups in the duration of end-stage renal disease or in the number of patients with residual renal function. A greater number of short daily hemodialysis patients received vitamin D sterol therapy than did conventional in-center hemodialysis patients, while there were no differences in the use of different phosphate binders and calcimimetic therapy between groups. Overall, serum calcium, phosphorus and intact parathyroid hormone levels were similar between groups. While serum phosphorus levels correlated with FGF23 concentrations in each group separately [r = 0.522 (P < 0.01) and r = 0.42 (P < 0.01) in short daily and conventional in-center hemodialysis, respectively], FGF23 levels were lower [823 RU/mL (263, 2169)] in the patients receiving short daily hemodialysis than in patients treated with conventional hemodialysis [2521 RU/mL (909, 5556)] (P < 0.01 between groups). Conclusions These findings demonstrate that FGF23 levels are significantly lower in short daily hemodialysis patients and suggest that FGF23 levels may be a more sensitive biomarker of cumulative phosphate burden than single or multiple serum phosphorus determinations in patients treated with hemodialysis. PMID:24009282
Jensen, Mallory A.; LaSalvia, Vincenzo; Morishige, Ashley E.; ...
2016-08-01
The capital expense (capex) of conventional crystal growth methods is a barrier to sustainable growth of the photovoltaic industry. It is challenging for innovative techniques to displace conventional growth methods due to the low dislocation density and high lifetime required for high-efficiency devices. One promising innovation in crystal growth is the noncontact crucible method (NOC-Si), which combines aspects of Czochralski (Cz) growth and conventional casting. This material has the potential to satisfy both requirements, with capex likely between that of Cz (high capex) and multicrystalline silicon (mc-Si, low capex). In this contribution, we observe a strong dependence of solar cell efficiency on ingot height, correlated with the evolution of swirl-like defects, for single-crystalline n-type silicon grown by the NOC-Si method. We posit that these defects are similar to those observed in Cz, and we explore the response of NOC-Si to high-temperature treatments including phosphorus diffusion gettering (PDG) and Tabula Rasa (TR). The highest lifetimes (2033 µs for the top of the ingot and 342 µs for the bottom of the ingot) are achieved for TR followed by a PDG process comprising a standard plateau and a low-temperature anneal. Further improvements can be gained by tailoring the time-temperature profiles of each process. Lifetime analysis after the PDG process indicates the presence of a getterable impurity in the as-grown material, while analysis after TR points to the presence of oxide precipitates, especially at the bottom of the ingot. Uniform lifetime degradation is observed after TR, which we assign to a presently unknown defect. Lastly, future work includes additional TR processing to uncover the nature of this defect, microstructural characterization of suspected oxide precipitates, and optimization of the TR process to achieve the dual goals of high lifetime and spatial homogenization.
Conductive polymer foam surface improves the performance of a capacitive EEG electrode.
Baek, Hyun Jae; Lee, Hong Ji; Lim, Yong Gyu; Park, Kwang Suk
2012-12-01
In this paper, a new conductive polymer foam-surfaced electrode was proposed for use as a capacitive EEG electrode for nonintrusive EEG measurements in out-of-hospital environments. The current capacitive electrode has a rigid surface that produces an undefined contact area due to its stiffness, which renders it unable to conform to head curvature and locally isolates hairs between the electrode surface and scalp skin, making EEG measurement through hair difficult. In order to overcome this issue, a conductive polymer foam was applied to the capacitive electrode surface to provide a cushioning effect. This enabled EEG measurement through hair without any conductive contact with bare scalp skin. Experimental results showed that the new electrode provided lower electrode-skin impedance and higher voltage gains, signal-to-noise ratios, signal-to-error ratios, and correlation coefficients between EEGs measured by capacitive and conventional resistive methods compared to a conventional capacitive electrode. In addition, the new electrode could measure EEG signals, while the conventional capacitive electrode could not. We expect that the new electrode presented here can be easily installed in a hat or helmet to create a nonintrusive wearable EEG apparatus that does not make users look strange for real-world EEG applications.
Experimental investigation of correlation between fading and glint for aircraft targets
NASA Astrophysics Data System (ADS)
Wallin, C. M.; Aas, B.
The correlation between the fading and glint of aircraft targets is investigated experimentally using a conventional amplitude comparison three-channel monopulse radar operating in the Ku-band. A significant correlation is found between the RCS and the variance of the angle error signals; this correlation seems to be independent of the aspect angle. The correlation between the RCS and the angle error signals themselves, however, is found to be very small.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan
2014-01-01
Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated from a selected standard master model using conventional impressions, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using the mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6%, and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for producing dental arch models. PMID:24696823
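The Bland-Altman agreement analysis used in studies like the one above reduces to computing the bias (mean of the paired differences) and the 95% limits of agreement (bias ± 1.96 standard deviations of the differences); a minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement
    methods (Bland-Altman analysis of paired measurements)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

In the usual plot, the differences are drawn against the pairwise means, with horizontal lines at the bias and at the two limits; "good agreement" means the limits are narrow relative to clinically acceptable error.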
Use of two conventional staining methods to assess the acrosomal status of stallion spermatozoa.
Runcan, E E; Pozor, M A; Zambrano, G L; Benson, S; Macpherson, M L
2014-07-01
The acrosome is a highly specialised region of the spermatozoon that is essential for fertilisation. Defects or dysfunction of this structure have been associated with fertility problems in man and various domestic species, including stallions. Current methods of evaluating the acrosome of stallion spermatozoa are time consuming and require specialised equipment, which is cost prohibitive to the average practitioner. Objectives: to evaluate 2 conventional stains (Dip Quick and Spermac) and determine their usefulness in assessing acrosome integrity in stallions, as compared with specific acrosomal labelling with a fluorescein-conjugated lectin, a method that has been validated for acrosome status evaluation in stallions. Study design: in vivo experiment. Semen from 6 mature Miniature horse stallions of known fertility was collected on 5 separate occasions. To increase the number of reacted acrosomes, portions of each ejaculate were incubated with the calcium ionophore A23187. Ejaculates were divided and semen samples were processed according to recommendations for the fluorescein-conjugated peanut lectin, Pisum sativum agglutinin, Dip Quick, and Spermac staining methods. Slides were evaluated independently by 2 separate investigators. Spermatozoa were classified as having intact, reacting, reacted or defective acrosomes. All parameters obtained by both investigators, using all 3 staining methods, were highly correlated (P<0.001). There was no statistical difference (P>0.05) between investigators or staining methods for the percentages of intact or reacted acrosomes. However, there was a significant difference between investigators and staining methods for determining reacting acrosome percentages (P<0.05). Dip Quick and Spermac stains are useful for determining intact vs. reacted acrosomes for stallion spermatozoa. © 2013 EVJ Ltd.
Faster modified protocol for first order reversal curve measurements
NASA Astrophysics Data System (ADS)
De Biasi, Emilio
2017-10-01
In this work we present a faster modified protocol for first order reversal curve (FORC) measurements. The main idea of this procedure is to use the information from the ascending and descending branches constructed through successive sweeps of the magnetic field. The new method reduces the number of field sweeps to almost one half compared with the traditional method, and the length of each branch is reduced faster than in the usual FORC protocol. The new method implies not only a new measurement protocol but also a new recipe for pre-processing the data. After this pre-processing, the FORC diagram can be obtained by conventional means. In the present work we show that the new FORC procedure leads to results identical to those of the conventional method if the system under study follows the Stoner-Wohlfarth model with interactions that do not depend on the magnetic state (up or down) of the entities, as in the Preisach model; more specifically, if the coercive and interaction fields are not correlated and the hysteresis loops have a square shape. Some numerical examples show the comparison between the usual FORC procedure and the proposed one. We also discuss that some differences may be found in the case of real systems, due to the magnetic interactions. There is no reason to prefer one FORC method over the other from the point of view of the information to be obtained; on the contrary, the use of both methods could open doors for a more accurate and deeper analysis.
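Whichever protocol assembles the branches, once the magnetization M(Ha, Hb) is available on a grid of reversal fields Ha and applied fields Hb, the FORC diagram is the standard mixed second derivative ρ(Ha, Hb) = -½ ∂²M/∂Ha∂Hb. A minimal finite-difference sketch (real FORC processing usually adds local polynomial smoothing against noise, omitted here):

```python
import numpy as np

def forc_distribution(M, dHa, dHb):
    """FORC distribution rho = -1/2 * d2M/(dHa dHb) from a grid of
    magnetization M[ia, ib], with reversal field Ha along axis 0
    and applied field Hb along axis 1 (uniform spacings)."""
    dM_dHb = np.gradient(M, dHb, axis=1)   # derivative along Hb
    d2M = np.gradient(dM_dHb, dHa, axis=0)  # then along Ha
    return -0.5 * d2M
```

A quick sanity check: for the separable surface M = Ha·Hb the mixed derivative is exactly 1, so ρ is the constant -0.5, and central differences reproduce this exactly.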
Pugazhendhi, Sugandhi; Dorairaj, Arvind Prasanth
Diabetic patients are more prone to the development of foot ulcers, because their underlying tissues are exposed to colonization by various pathogenic organisms. Biofilm formation therefore plays a vital role in disease progression by conferring antibiotic resistance on the pathogens found in foot infections. The present study demonstrated the correlation of a biofilm assay with the clinical characteristics of diabetic foot infection. Clinical characteristics such as ulcer duration, size, nature, and grade were associated with biofilm production. Our results suggest that as the size of the ulcer increased under poor glycemic control, the organism was more likely to be positive for biofilm formation. A high degree of antibiotic resistance was exhibited by the biofilm-producing gram-positive isolates for erythromycin and by the gram-negative isolates for cefpodoxime. Biofilm production was compared using 3 different conventional methods. The strong producers identified by the tube adherence method were also able to produce biofilm in the cover slip assay, whereas the weak producers in the tube adherence method had difficulty producing biofilm with the other 2 methods, indicating that the tube adherence method is the best method for assessing biofilm formation. The strong biofilm production seen with the conventional methods was further confirmed by scanning electron microscopy analysis, because the bacteria attached as a distinct layer of biofilm. Thus, a high degree of antibiotic resistance was exhibited by biofilm producers compared with nonbiofilm producers, and the tube adherence and cover slip assays were found to be the better methods for biofilm evaluation. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang
2018-03-01
Simulation of randomly rough bioparticle surfaces is crucial to better understand and control interface behaviors and membrane fouling. A review of the literature indicated a lack of effective methods for simulating randomly rough bioparticle surfaces. In this study, a new method combining a Gaussian distribution, the Fourier transform, the spectrum method and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography was critically affected by two parameters: the correlation length (l) and the root mean square roughness (σ). The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. The simplicity and fitness of the new method point towards potential applications in interface behavior and membrane fouling research.
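A common way to realize the kind of simulation described — Gaussian height statistics shaped to a prescribed correlation length l and rms roughness σ via the Fourier/spectrum route — is to filter white noise with a Gaussian correlation filter in the frequency domain. This is a generic sketch in the spirit of the paper's method, not a reproduction of the authors' algorithm; the exact filter form and normalization are assumptions:

```python
import numpy as np

def rough_surface(n, dx, sigma, l, seed=0):
    """Random rough surface (n x n heights, grid spacing dx) with
    Gaussian height statistics, rms roughness sigma and correlation
    length l, via spectral filtering of white noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    x = dx * (np.arange(n) - n // 2)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # Gaussian filter whose autocorrelation has correlation length l
    corr = np.exp(-(X**2 + Y**2) / (l**2 / 2.0))
    # circular convolution noise * filter via FFT
    h = np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(corr)).real
    # rescale to zero mean and the requested rms height
    return sigma * (h - h.mean()) / h.std()
```

Increasing l smooths the surface laterally while σ sets the vertical scale, matching the two-parameter sensitivity the abstract reports.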
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
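The essence of phase-correlation shift tracking — discarding the cross-power magnitude and keeping only its phase, which carries all the shift information — can be illustrated for integer-sample shifts of a reflection spectrum. This is a simplified sketch of the idea, not the authors' FPC algorithm, which additionally achieves sub-sample wavelength resolution:

```python
import numpy as np

def phase_correlation_shift(ref, meas):
    """Integer-sample shift of meas relative to ref via phase
    correlation: normalize the cross-power spectrum to unit
    magnitude, then locate the resulting correlation peak."""
    R = np.fft.fft(ref) * np.conj(np.fft.fft(meas))
    R /= np.abs(R) + 1e-12          # keep phase only
    corr = np.fft.ifft(R).real      # near-delta at the shift
    peak = int(np.argmax(corr))
    if peak > len(ref) // 2:        # map to a signed shift
        peak -= len(ref)
    return -peak
```

Because the phase-only spectrum is computed with two FFTs and one inverse FFT, the cost stays O(N log N) regardless of the search range, which is one reason phase-based methods can beat direct cross-correlation search in speed.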
Gupta, Tulika; Rajeshkumar, Thayalan; Rajaraman, Gopalan
2014-07-28
Density functional studies have been performed on ten different {Gd(III)-radical} complexes exhibiting both ferro- and antiferromagnetic exchange interactions, with the aim of assessing a suitable exchange-correlation functional within the DFT formalism. This study has also been extended to probe the mechanism of magnetic coupling and to develop suitable magneto-structural correlations for this pair. Our method assessments reveal the following order of increasing accuracy for the evaluation of J values compared to experimental coupling constants: B(40HF)LYP < BHandHLYP < TPSSH < PW91 < PBE < BP86 < OLYP < BLYP < PBE0 < X3LYP < B3LYP < B2PLYP. Grimme's double-hybrid functional is found to be superior to the other functionals tested, followed very closely by the conventional hybrid B3LYP functional. On the basis set front, our calculations reveal that the incorporation of relativistic effects is important, and the relativistically corrected effective core potential (ECP) basis set is found to yield better J values than the other methods. The supposedly empty 5d/6s/6p orbitals of Gd(III) are found to play an important role in the mechanism of magnetic coupling, and the different contributions to the exchange terms are probed using Molecular Orbital (MO) and Natural Bond Orbital (NBO) analysis. Magneto-structural correlations for Gd-O distances, Gd-O-N angles and Gd-O-N-C dihedral angles are developed, where the bond angle and dihedral angle parameters are found to dictate the sign and strength of the magnetic coupling in this series.
Sysa-Shah, Polina; Sørensen, Lars L; Abraham, M Roselle; Gabrielson, Kathleen L
2015-01-01
Electrocardiography is an important method for evaluation and risk stratification of patients with cardiac hypertrophy. We hypothesized that the recently developed transgenic mouse model of cardiac hypertrophy (ErbB2tg) will display distinct ECG features, enabling WT (wild type) mice to be distinguished from transgenic mice without using conventional PCR genotyping. We evaluated more than 2000 mice and developed specific criteria for genotype determination by using cageside ECG, during which unanesthetized mice were manually restrained for less than 1 min. Compared with those from WT counterparts, the ECG recordings of ErbB2tg mice were characterized by higher P- and R-wave amplitudes, broader QRS complexes, inverted T waves, and ST interval depression. Pearson's correlation matrix analysis of combined WT and ErbB2tg data revealed significant correlation between heart weight and the ECG parameters of QT interval (corrected for heart rate), QRS interval, ST height, R amplitude, P amplitude, and PR interval. In addition, the left ventricular posterior wall thickness as determined by echocardiography correlated with ECG-determined ST height, R amplitude, QRS interval; echocardiographic left ventricular mass correlated with ECG-determined ST height and PR interval. In summary, we have determined phenotypic ECG criteria to differentiate ErbB2tg from WT genotypes in 98.8% of mice. This inexpensive and time-efficient ECG-based phenotypic method might be applied to differentiate between genotypes in other rodent models of cardiac hypertrophy. Furthermore, with appropriate modifications, this method might be translated for use in other species. PMID:26310459
Das, Shubhagata; Sarker, Subir; Ghorashi, Seyed Ali; Forwood, Jade K; Raidal, Shane R
2016-11-01
Beak and feather disease virus (BFDV) threatens a wide range of endangered psittacine birds worldwide. In this study, we assessed a novel PCR assay and genetic screening method using high-resolution melt (HRM) curve analysis for BFDV targeting the capsid (Cap) gene (HRM-Cap), alongside conventional PCR detection as well as a PCR method that targets a much smaller fragment of the virus genome in the replicase initiator protein (Rep) gene (HRM-Rep). Limits of detection, sensitivity, specificity and discriminatory power for differentiating BFDV sequences were compared. HRM-Cap had a high positive predictive value and could readily differentiate between a reference genotype and 17 other diverse BFDV genomes with more discriminatory power (genotype confidence percentage) than HRM-Rep. Melt curve profiles generated by HRM-Cap correlated with unique DNA sequence profiles for each individual test genome. The limit of detection of HRM-Cap was lower (2×10^-5 ng/reaction, or 48 viral copies) than that for both HRM-Rep and conventional BFDV PCR, which had similar sensitivity (2×10^-6 ng, or 13 viral copies/reaction). However, when used in a diagnostic setting with 348 clinical samples there was strong agreement between HRM-Cap and conventional PCR (kappa=0.87, P<0.01, 98% specificity), and HRM-Cap demonstrated higher specificity (99.9%) than HRM-Rep (80.3%). Copyright © 2016 Elsevier B.V. All rights reserved.