Sample records for computing highly correlated

  1. The 512-channel correlator controller

    NASA Technical Reports Server (NTRS)

    Brokl, S. S.

    1976-01-01

    A high-speed correlator for radio and radar observations was developed and a controller was designed so that the correlator could run automatically without computer intervention. The correlator controller assumes the role of bus master and keeps track of data and properly interrupts the computer at the end of the observation.

  2. Experimental quantum computing without entanglement.

    PubMed

    Lanyon, B P; Barbieri, M; Almeida, M P; White, A G

    2008-11-14

    Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord-except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed, states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.

  3. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

    The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.

  4. The association between computer literacy and training on clinical productivity and user satisfaction in using the electronic medical record in Saudi Arabia.

    PubMed

    Alasmary, May; El Metwally, Ashraf; Househ, Mowafa

    2014-08-01

    The association of computer literacy, training on clinical productivity and satisfaction of a recently implemented Electronic Medical Record (EMR) system in Prince Sultan Medical Military City ((PSMMC)) was investigated. The scope of this study was to explore the association between age, occupation and computer literacy and clinical productivity and users' satisfaction of the newly implemented EMR at PSMMC as well as the association of user satisfaction with age and position. A self-administrated questionnaire was distributed to all doctors and nurses working in Alwazarat Family and Community Center (a Health center in PSMMC). A convenience sample size of 112 healthcare providers (65 Nurses and 47 physicians) completed the questionnaire. A combination of correlation, One Way ANOVA and t-tests were used to answer the research questions. Participants had high levels of self-reported literacy on computers and satisfaction of the system. Both levels were higher among physicians than among nurses. A moderate but significant (at p < 0.01 level) correlation was found between computer literacy and users' satisfaction towards the system (R = 0.343). Age was weakly, but significantly (at p < 0.05), positively correlated with satisfaction with the system (R = 0.29). Self-reported system productivity and satisfaction was statistically correlated at p < 0.01 (R = 0.509). High level of satisfaction with training on using the system was not positively correlated with overall satisfaction of using the system. This study demonstrated that EMR users with high computer literacy skills were more satisfied with using the EMR than users with low computer literacy skills.

  5. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.

  6. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.

  7. Student Perceived Importance and Correlations of Selected Computer Literacy Course Topics

    ERIC Educational Resources Information Center

    Ciampa, Mark

    2013-01-01

    Traditional college-level courses designed to teach computer literacy are in a state of flux. Today's students have high rates of access to computing technology and computer ownership, leading many policy decision makers to conclude that students already are computer literate and thus computer literacy courses are dinosaurs in a modern digital…

  8. High Throughput Computing Impact on Meta Genomics (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Gore, Brooklin

    2018-02-01

    This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.

  9. Analog Correlator Based on One Bit Digital Correlator

    NASA Technical Reports Server (NTRS)

    Prokop, Norman (Inventor); Krasowski, Michael (Inventor)

    2017-01-01

    A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.

  10. High-speed technique based on a parallel projection correlation procedure for digital image correlation

    NASA Astrophysics Data System (ADS)

    Zaripov, D. I.; Renfu, Li

    2018-05-01

    The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera is associated with big data processing and is often time consuming. In order to speed-up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique involves the use of interrogation window projections instead of its two-dimensional field of luminous intensity. This simplification allows acceleration of ZNCC computation up to 28.8 times compared to ZNCC calculated directly, depending on the size of interrogation window and region of interest. The results of three synthetic test cases, such as a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended to be used for initial velocity field calculation, with further correction using more accurate techniques.

  11. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    ERIC Educational Resources Information Center

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  12. Television, reading, and computer time: correlates of school-day leisure-time sedentary behavior and relationship with overweight in children in the U.S.

    PubMed

    Sisson, Susan B; Broyles, Stephanie T; Baker, Birgitta L; Katzmarzyk, Peter T

    2011-09-01

    The purposes were 1) to determine if different leisure-time sedentary behaviors (LTSB), such as TV/video/video game viewing/playing (TV), reading for pleasure (reading), and nonschool computer usage, were associated with childhood overweight status, and 2) to assess the social-ecological correlates of LTSB. The analytic sample was 33,117 (16,952 boys and 16,165 girls) participants from the 2003 National Survey of Children's Health. The cut-point for excessive TV and nonschool computer usage was ≥ 2 hr/day. High quantities of daily reading for pleasure were classified as ≥ 31 min/day. Weighted descriptive characteristics were calculated on the sample (means ± SE or frequency). Logistic regression models were used to determine if the LTSB were associated with overweight status and to examine social-ecological correlates. Over 35% of the sample was overweight. Odds of being overweight were higher in the 2 to 3 hr/day (OR: 1.48, 95% CI: 1.24, 1.76) and ≥ 4 hr/day (OR: 1.52, 95% CI: 1.22, 1.91) daily TV groups compared with none. Reading and nonschool computer usage was not associated with being overweight. TV was associated with overweight classification; however, nonschool computer usage and reading were not. Several individual, family, and community correlates were associated with high volumes of daily TV viewing.

  13. Diffraction Correlation to Reconstruct Highly Strained Particles

    NASA Astrophysics Data System (ADS)

    Brown, Douglas; Harder, Ross; Clark, Jesse; Kim, J. W.; Kiefer, Boris; Fullerton, Eric; Shpyrko, Oleg; Fohtung, Edwin

    2015-03-01

    Through the use of coherent x-ray diffraction a three-dimensional diffraction pattern of a highly strained nano-crystal can be recorded in reciprocal space by a detector. Only the intensities are recorded, resulting in a loss of the complex phase. The recorded diffraction pattern therefore requires computational processing to reconstruct the density and complex distribution of the diffracted nano-crystal. For highly strained crystals, standard methods using HIO and ER algorithms are no longer sufficient to reconstruct the diffraction pattern. Our solution is to correlate the symmetry in reciprocal space to generate an a priori shape constraint to guide the computational reconstruction of the diffraction pattern. This approach has improved the ability to accurately reconstruct highly strained nano-crystals.

  14. Comparison of one-particle basis set extrapolation to explicitly correlated methods for the calculation of accurate quartic force fields, vibrational frequencies, and spectroscopic constants: Application to H2O, N2H+, NO2+, and C2H2

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Valeev, Edward F.; Lee, Timothy J.

    2010-12-01

    One-particle basis set extrapolation is compared with one of the new R12 methods for computing highly accurate quartic force fields (QFFs) and spectroscopic data, including molecular structures, rotational constants, and vibrational frequencies for the H2O, N2H+, NO2+, and C2H2 molecules. In general, agreement between the spectroscopic data computed from the best R12 and basis set extrapolation methods is very good with the exception of a few parameters for N2H+ where it is concluded that basis set extrapolation is still preferred. The differences for H2O and NO2+ are small and it is concluded that the QFFs from both approaches are more or less equivalent in accuracy. For C2H2, however, a known one-particle basis set deficiency for C-C multiple bonds significantly degrades the quality of results obtained from basis set extrapolation and in this case the R12 approach is clearly preferred over one-particle basis set extrapolation. The R12 approach used in the present study was modified in order to obtain high precision electronic energies, which are needed when computing a QFF. We also investigated including core-correlation explicitly in the R12 calculations, but conclude that current approaches are lacking. Hence core-correlation is computed as a correction using conventional methods. Considering the results for all four molecules, it is concluded that R12 methods will soon replace basis set extrapolation approaches for high accuracy electronic structure applications such as computing QFFs and spectroscopic data for comparison to high-resolution laboratory or astronomical observations, provided one uses a robust R12 method as we have done here. The specific R12 method used in the present study, CCSD(T)R12, incorporated a reformulation of one intermediate matrix in order to attain machine precision in the electronic energies. Final QFFs for N2H+ and NO2+ were computed, including basis set extrapolation, core-correlation, scalar relativity, and higher-order correlation and then used to compute highly accurate spectroscopic data for all isotopologues. Agreement with high-resolution experiment for 14N2H+ and 14N2D+ was excellent, but for 14N16O2+ agreement for the two stretching fundamentals is outside the expected residual uncertainty in the theoretical values, and it is concluded that there is an error in the experimental quantities. It is hoped that the highly accurate spectroscopic data presented for the minor isotopologues of N2H+ and NO2+ will be useful in the interpretation of future laboratory or astronomical observations.

  15. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

    This paper presents the implementation of programme FX-correlator for Very Long Baseline Interferometry, adapted for the project "RadioAstron". Software correlator implemented for heterogeneous computing systems using graphics accelerators. It is shown that for the task interferometry implementation of the graphics hardware has a high efficiency. The host processor of heterogeneous computing system, performs the function of forming the data flow for graphics accelerators, the number of which corresponds to the number of frequency channels. So, for the Radioastron project, such channels is seven. Each accelerator is perform correlation matrix for all bases for a single frequency channel. Initial data is converted to the floating-point format, is correction for the corresponding delay function and computes the entire correlation matrix simultaneously. Calculation of the correlation matrix is performed using the sliding Fourier transform. Thus, thanks to the compliance of a solved problem for architecture graphics accelerators, managed to get a performance for one processor platform Kepler, which corresponds to the performance of this task, the computing cluster platforms Intel on four nodes. This task successfully scaled not only on a large number of graphics accelerators, but also on a large number of nodes with multiple accelerators.

  16. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.

    PubMed

    Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E

    2017-02-01

    Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.

  17. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used among different scientific fields, like medical imaging or optical metrology. The straightest way to calculate shifting between two images is using the cross correlation, taking the highest value of this correlation image. Shifting resolution is given in whole pixels which cannot be enough for certain applications. Better results can be achieved interpolating both images, as much as the desired resolution we want to get, and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid memory consuming we are implementing a subpixel shifting method based on FFT. With the original images, subpixel shifting can be achieved multiplying its discrete Fourier transform by a linear phase with different slopes. This method is high time consuming method because checking a concrete shifting means new calculations. The algorithm, highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because they have hundreds of computational cores in a reasonable cheap card. In our case, we are going to register the shifting between two images, doing the first approach by FFT based correlation, and later doing the subpixel approach using the technique described before. We consider it as `brute force' method. So we will present a benchmark of the algorithm consisting on a first approach (pixel resolution) and then do subpixel resolution approaching, decreasing the shifting step in every loop achieving a high resolution in few steps. This program will be executed in three different computers. At the end, we will present the results of the computation, with different kind of CPUs and GPUs, checking the accuracy of the method, and the time consumed in each computer, discussing the advantages, disadvantages of the use of GPUs.

  18. High-Performance Computational Analysis of Glioblastoma Pathology Images with Database Support Identifies Molecular and Survival Correlates.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel

    2013-12-01

    In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high performance computing with coordinated execution in multi-core CPUs and Graphical Processor Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. With integrative studies, we found statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found interesting results that support pathologic domain knowledge. We found that Proneural subtype GBMs had the smallest mean of nuclear Eccentricity and the largest mean of nuclear Extent, and MinorAxisLength. We also found gene expressions of stem cell marker MYC and cell proliferation maker MKI67 were correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high throughput pathology image analysis as a complementary approach to human-based review and translational research.

  19. Combination of inquiry learning model and computer simulation to improve mastery concept and the correlation with critical thinking skills (CTS)

    NASA Astrophysics Data System (ADS)

    Nugraha, Muhamad Gina; Kaniawati, Ida; Rusdiana, Dadi; Kirana, Kartika Hajar

    2016-02-01

    Among the purposes of physics learning at high school is to master the physics concepts and cultivate scientific attitude (including critical attitude), develop inductive and deductive reasoning skills. According to Ennis et al., inductive and deductive reasoning skills are part of critical thinking. Based on preliminary studies, both of the competence are lack achieved, it is seen from student learning outcomes is low and learning processes that are not conducive to cultivate critical thinking (teacher-centered learning). One of learning model that predicted can increase mastery concepts and train CTS is inquiry learning model aided computer simulations. In this model, students were given the opportunity to be actively involved in the experiment and also get a good explanation with the computer simulations. From research with randomized control group pretest-posttest design, we found that the inquiry learning model aided computer simulations can significantly improve students' mastery concepts than the conventional (teacher-centered) method. With inquiry learning model aided computer simulations, 20% of students have high CTS, 63.3% were medium and 16.7% were low. CTS greatly contribute to the students' mastery concept with a correlation coefficient of 0.697 and quite contribute to the enhancement mastery concept with a correlation coefficient of 0.603.

  20. Probabilistic power flow using improved Monte Carlo simulation method with correlated wind sources

    NASA Astrophysics Data System (ADS)

    Bie, Pei; Zhang, Buhan; Li, Hang; Deng, Weisi; Wu, Jiasi

    2017-01-01

    Probabilistic Power Flow (PPF) is a very useful tool for power system steady-state analysis. However, the correlation among different random injection power (like wind power) brings great difficulties to calculate PPF. Monte Carlo simulation (MCS) and analytical methods are two commonly used methods to solve PPF. MCS has high accuracy but is very time consuming. Analytical method like cumulants method (CM) has high computing efficiency but the cumulants calculating is not convenient when wind power output does not obey any typical distribution, especially when correlated wind sources are considered. In this paper, an Improved Monte Carlo simulation method (IMCS) is proposed. The joint empirical distribution is applied to model different wind power output. This method combines the advantages of both MCS and analytical method. It not only has high computing efficiency, but also can provide solutions with enough accuracy, which is very suitable for on-line analysis.

  1. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtualmore » orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few mhartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.« less

  2. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

    Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines a MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtualmore » orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. Finally, the method consistently achieves agreement with CI calculations to within a few mhartree and often achieves agreement to within ~1 millihartree or less, while requiring significantly less computational resources.« less

  3. Linear-array based full-view high-resolution photoacoustic computed tomography of whole mouse brain functions in vivo

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Pengfei; Wang, Lihong V.

    2018-02-01

    Photoacoustic computed tomography (PACT) is a non-invasive imaging technique offering high contrast, high resolution, and deep penetration in biological tissues. We report a photoacoustic computed tomography (PACT) system equipped with a high frequency linear array for anatomical and functional imaging of the mouse whole brain. The linear array was rotationally scanned in the coronal plane to achieve the full-view coverage. We investigated spontaneous neural activities in the deep brain by monitoring the hemodynamics and observed strong interhemispherical correlations between contralateral regions, both in the cortical layer and in the deep regions.

  4. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  5. Is computer-aided interpretation of 99Tcm-HMPAO leukocyte scans better than the naked eye?

    PubMed

    Almer, S; Peters, A M; Ekberg, S; Franzén, L; Granerus, G; Ström, M

    1995-04-01

    In order to compare visual interpretation of inflammation detected by leukocyte scintigraphy with that of different computer-aided quantification methods, 34 patients (25 with ulcerative colitis and 9 with endoscopically verified non-inflamed colonic mucosa), were investigated using 99Tcm-hexamethylpropyleneamine oxime (99Tcm-HMPAO) leukocyte scintigraphy and colonoscopy with biopsies. Scintigrams were obtained 45 min and 4 h after the injection of labelled cells. Computer-generated grading of seven colon segments using four different methods was performed on each scintigram for each patient. The same segments were graded independently using a 4-point visual scale. Endoscopic and histological inflammation were scored on 4-point scales. At 45 min, a positive correlation was found between endoscopic and scan gradings in individual colon segments when using visual grading and three of the four computer-aided methods (Spearman's rs = 0.30-0.64, P < 0.001). Histological grading correlated with visual grading and with two of the four computer-aided methods at 45 min (rs = 0.42-0.54, P < 0.001). At 4 h, all grading methods correlated positively with both endoscopic and histological assessment. The correlation coefficients were, in all but one instance, highest for the visual grading. As an inter-observer comparison to assess agreement between the visual gradings of two nuclear physicians, 14 additional patients (9 ulcerative colitis, 5 infectious enterocolitis) underwent leukocyte scintigraphy. Agreement assessed using kappa statistics was 0.54 at 45 min (P < 0.001). Separate data concerning the presence/absence of active inflammation showed a high kappa value (0.74, P < 0.001). Our results showed that a simple scintigraphic scoring system based on assessment using the human eye reflects colonic inflammation at least as well as computer-aided grading, and that highly correlated results can be achieved between different investigators.

  6. Correlation of current drop, filling gas pressure, and ion beam emission in a low energy Mather-type plasma focus device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behbahani, R. A.; Aghamir, F. M.

    The behavior of current drop and its correlation with ion beam emission during the radial phase of a high inductance low energy Mather type plasma focus device have been studied. The study includes two ranges of filling gas pressure, namely the low range of 0.2-0.8 mbar and the high range of 0.8-1.5 mbar. Two different current simulation processes were performed to aid the interpretation of the experimental results. Within the low range of operating pressure, an acceptable match between the computed and experimental current signals was achieved when the effects of anomalous resistances were contemplated. While in the high rangemore » of pressure, the computed and experimental current traces were in line even without considering the effects of anomalous resistances. The analysis shows that by decreasing the filling gas pressure the effects of instabilities are intensified. The computed and experimental current traces, along with ion beam signals gathered from a faraday cup, show that there is a strong correlation between the intensity of ion beam and its duration with the current drop during the radial phase.« less

  7. Charging Ahead into the Next Millennium: Proceedings of the Systems and Technology Symposium (20th) Held in Denver, Colorado on 7-10 June 1999

    DTIC Science & Technology

    1999-06-01

    Tactical Radar Correlator EV Electric Vehicle EW Electronic Warfare F ^^m F Frequency FA False Alarm FAO Foreign Area Officer FBE Fleet Battle... Electric Vehicle High Frequency Horsepower High-Performance Computing High Performance Computing and Communications High Performance Knowledge...A/D Analog-to-Digital A/G Air-to-Ground AAN Army After Next AAV Advanced Air Vehicle ABCCC Airborne Battlefield Command, Control and

  8. Data-Driven Correlation Analysis Between Observed 3D Fatigue-Crack Path and Computed Fields from High-Fidelity, Crystal-Plasticity, Finite-Element Simulations

    NASA Astrophysics Data System (ADS)

    Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.

    2018-05-01

    Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).

  9. An Analysis of Cryptographically Significant Boolean Functions With High Correlation Immunity by Reconfigurable Computer

    DTIC Science & Technology

    2010-12-01

    with high correlation immunity and then evaluate these functions for other desirable cryptographic features. C. METHOD The only known primary methods...out if not used) # ---------------------------------- # PRIMARY = < primary file 1> < primary file 2> #SECONDARY = <secondary file 1...finding the fuction value for a //set u and for each value of v. end end

  10. Intricacies of modern supercomputing illustrated with recent advances in simulations of strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Schulthess, Thomas C.

    2013-03-01

    The continued thousand-fold improvement in sustained application performance per decade on modern supercomputers keeps opening new opportunities for scientific simulations. But supercomputers have become very complex machines, built with thousands or tens of thousands of complex nodes consisting of multiple CPU cores or, most recently, a combination of CPU and GPU processors. Efficient simulations on such high-end computing systems require tailored algorithms that optimally map numerical methods to particular architectures. These intricacies will be illustrated with simulations of strongly correlated electron systems, where the development of quantum cluster methods, Monte Carlo techniques, as well as their optimal implementation by means of algorithms with improved data locality and high arithmetic density have gone hand in hand with evolving computer architectures. The present work would not have been possible without continued access to computing resources at the National Center for Computational Science of Oak Ridge National Laboratory, which is funded by the Facilities Division of the Office of Advanced Scientific Computing Research, and the Swiss National Supercomputing Center (CSCS) that is funded by ETH Zurich.

  11. Controller and interface module for the High-Speed Data Acquisition System correlator/accumulator

    NASA Technical Reports Server (NTRS)

    Brokl, S. S.

    1985-01-01

    One complex channel of the High-Speed Data Acquisition System (a subsystem used in the Goldstone solar system radar), consisting of two correlator modules and one accumulator module, is operated by the controller and interface module interfaces are provided to the VAX UNIBUS for computer control, monitor, and test of the controller and correlator/accumulator. The correlator and accumulator modules controlled by this module are the key digital signal processing elements of the Goldstone High-Speed Data Acquisition System. This fully programmable unit provides for a wide variety of correlation and filtering functions operating on a three megaword/second data flow. Data flow is to the VAX by way of the I/O port of a FPS 5210 array processor.

  12. ESTCP Pilot Project Wide Area Assessment for Munitions Response

    DTIC Science & Technology

    2008-07-01

    Data A broadband normalized difference vegetation index ( NDVI ) was computed from the high- resolution spectral data to provide a detection of canopy...chlorophyll content. The NDVI strongly correlates with the green yucca, cactus, juniper, and other SAR-responsive vegetation species on the site...Vegetation Index. NDVI is broadband normalized difference vegetation index computed from high resolution spectral data using (RED-NIR) / (RED +NIR) to

  13. A study of reacting free and ducted hydrogen/air jets

    NASA Technical Reports Server (NTRS)

    Beach, H. L., Jr.

    1975-01-01

    The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.

  14. FPGA design of correlation-based pattern recognition

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    Optical/Digital pattern recognition and tracking based on optical/digital correlation are a well-known techniques to detect, identify and localize a target object in a scene. Despite the limited number of treatments required by the correlation scheme, computational time and resources are relatively high. The most computational intensive treatment required by the correlation is the transformation from spatial to spectral domain and then from spectral to spatial domain. Furthermore, these transformations are used on optical/digital encryption schemes like the double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation for video sequences. A trade-off between the hardware consumption and the robustness of the correlation can be made in order to understand the limitations of the correlation implementation in reconfigurable and portable platforms. Experimental results obtained from HDL simulations and FPGA prototype have demonstrated the advantages of the proposed scheme.

  15. Ridge: a computer program for calculating ridge regression estimates

    Treesearch

    Donald E. Hilt; Donald W. Seegrist

    1977-01-01

    Least-squares coefficients for multiple-regression models may be unstable when the independent variables are highly correlated. Ridge regression is a biased estimation procedure that produces stable estimates of the coefficients. Ridge regression is discussed, and a computer program for calculating the ridge coefficients is presented.

  16. A novel approach for discovering condition-specific correlations of gene expressions within biological pathways by using cloud computing technology.

    PubMed

    Chang, Tzu-Hao; Wu, Shih-Lin; Wang, Wei-Jen; Horng, Jorng-Tzong; Chang, Cheng-Wei

    2014-01-01

    Microarrays are widely used to assess gene expressions. Most microarray studies focus primarily on identifying differential gene expressions between conditions (e.g., cancer versus normal cells), for discovering the major factors that cause diseases. Because previous studies have not identified the correlations of differential gene expression between conditions, crucial but abnormal regulations that cause diseases might have been disregarded. This paper proposes an approach for discovering the condition-specific correlations of gene expressions within biological pathways. Because analyzing gene expression correlations is time consuming, an Apache Hadoop cloud computing platform was implemented. Three microarray data sets of breast cancer were collected from the Gene Expression Omnibus, and pathway information from the Kyoto Encyclopedia of Genes and Genomes was applied for discovering meaningful biological correlations. The results showed that adopting the Hadoop platform considerably decreased the computation time. Several correlations of differential gene expressions were discovered between the relapse and nonrelapse breast cancer samples, and most of them were involved in cancer regulation and cancer-related pathways. The results showed that breast cancer recurrence might be highly associated with the abnormal regulations of these gene pairs, rather than with their individual expression levels. The proposed method was computationally efficient and reliable, and stable results were obtained when different data sets were used. The proposed method is effective in identifying meaningful biological regulation patterns between conditions.

  17. On the correlation between motion data captured from low-cost gaming controllers and high precision encoders.

    PubMed

    Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K

    2012-01-01

    Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low-cost. However, a proper quantitative analysis regarding their suitability for use in motion capture, rehabilitation and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.

  18. Azimuthal asymmetries and the emergence of “collectivity” from multi-particle correlations in high-energy pA collisions

    DOE PAGES

    Dumitru, Adrian; McLerran, Larry; Skokov, Vladimir

    2015-02-23

    In this study, we show how angular asymmetries ~cos2φ can arise in dipole scattering at high energies. We illustrate the effects due to anisotropic fluctuations of the saturation momentum of the target with a finite correlation length in the transverse impact parameter plane, i.e. from a domain-like structure. We compute the two-particle azimuthal cumulant in this model including both one-particle factorizable as well as genuine two-particle non-factorizable contributions to the two-particle cross section. We also compute the full BBGKY hierarchy for the four-particle azimuthal cumulant and find that only the fully factorizable contribution to c 2{4} is negative while allmore » contributions from genuine two, three and four particle correlations are positive. Our results may provide some qualitative insight into the origin of azimuthal asymmetries in p + Pb collisions at the LHC which reveal a change of sign of c 2{4} in high multiplicity events. (author)« less

  19. Transport of passive scalars in a turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Kim, John; Moin, Parviz

    1987-01-01

    A direct numerical simulation of a turbulent channel flow with three passive scalars at different molecular Prandtl numbers is performed. Computed statistics including the turbulent Prandtl numbers are compared with existing experimental data. The computed fields are also examined to investigate the spatial structure of the scalar fields. The scalar fields are highly correlated with the streamwise velocity; the correlation coefficient between the temperature and the streamwise velocity is as high as 0.95 in the wall region. The joint probability distributions between the temperature and velocity fluctuations are also examined; they suggest that it might be possible to model the scalar fluxes in the wall region in a manner similar to the Reynolds stresses.

  20. Analytical and flight investigation of the influence of rotor and other high-order dynamics on helicopter flight-control system bandwidth

    NASA Technical Reports Server (NTRS)

    Chen, R. T. N.; Hindson, W. S.

    1985-01-01

    The increasing use of highly augmented digital flight-control systems in modern military helicopters prompted an examination of the influence of rotor dynamics and other high-order dynamics on control-system performance. A study was conducted at NASA Ames Research Center to correlate theoretical predictions of feedback gain limits in the roll axis with experimental test data obtained from a variable-stability research helicopter. Feedback gains, the break frequency of the presampling sensor filter, and the computational frame time of the flight computer were systematically varied. The results, which showed excellent theoretical and experimental correlation, indicate that the rotor-dynamics, sensor-filter, and digital-data processing delays can severely limit the usable values of the roll-rate and roll-attitude feedback gains.

  1. A comparative trial of paper-and-pencil versus computer administration of the Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire.

    PubMed

    Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P

    2001-02-01

    Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.

  2. Rapid and Reliable Binding Affinity Prediction of Bromodomain Inhibitors: A Computational Study

    PubMed Central

    2016-01-01

    Binding free energies of bromodomain inhibitors are calculated with recently formulated approaches, namely ESMACS (enhanced sampling of molecular dynamics with approximation of continuum solvent) and TIES (thermodynamic integration with enhanced sampling). A set of compounds is provided by GlaxoSmithKline, which represents a range of chemical functionality and binding affinities. The predicted binding free energies exhibit a good Spearman correlation of 0.78 with the experimental data from the 3-trajectory ESMACS, and an excellent correlation of 0.92 from the TIES approach where applicable. Given access to suitable high end computing resources and a high degree of automation, we can compute individual binding affinities in a few hours with precisions no greater than 0.2 kcal/mol for TIES, and no larger than 0.34 and 1.71 kcal/mol for the 1- and 3-trajectory ESMACS approaches. PMID:28005370

  3. A New Polar Magnetic Index of Geomagnetic Activity and its Application to Monitoring Ionospheric Parameters

    NASA Technical Reports Server (NTRS)

    Lyatsky, Wladislaw; Khazanov, George V.

    2008-01-01

    For improving the reliability of Space Weather prediction, we developed a new, Polar Magnetic (PM) index of geomagnetic activity, which shows high correlation with both upstream solar wind data and related events in the magnetosphere and ionosphere. Similarly to the existing polar cap PC index, the new, PM index was computed from data from two near-pole geomagnetic observatories; however, the method for computing the PM index is different. The high correlation of the PM index with both solar wind data and events in Geospace environment makes possible to improve significantly forecasting geomagnetic disturbances and such important parameters as the cross-polar-cap voltage and global Joule heating in high latitude ionosphere, which play an important role in the development of geomagnetic, ionospheric and thermospheric disturbances. We tested the PM index for 10-year period (1995-2004). The correlation between PM index and upstream solar wind data for these years is very high (the average correlation coefficient R approximately equal to 0.86). The PM index also shows the high correlation with the cross-polar-cap voltage and hemispheric Joule heating (the correlation coefficient between the actual and predicted values of these parameters is approximately 0.9), which results in significant increasing the prediction reliability of these parameters. Using the PM index of geomagnetic activity provides a significant increase in the forecasting reliability of geomagnetic disturbances and related events in Geospace environment. The PM index may be also used as an important input parameter in modeling ionospheric, magnetospheric, and thermospheric processes.

  4. Does Computer-Aided Formative Assessment Improve Learning Outcomes?

    ERIC Educational Resources Information Center

    Hannah, John; James, Alex; Williams, Phillipa

    2014-01-01

    Two first-year engineering mathematics courses used computer-aided assessment (CAA) to provide students with opportunities for formative assessment via a series of weekly quizzes. Most students used the assessment until they achieved very high (>90%) quiz scores. Although there is a positive correlation between these quiz marks and the final…

  5. Computer Aided Training of Cognitive Processing Strategies with Developmentally Handicapped Adults.

    ERIC Educational Resources Information Center

    Ryba, Kenneth A.; And Others

    1985-01-01

    Correlational results involving 60 developmentally handicaped adults indicated that a computerized cross-modal memory game had a highly significant relationship with most cognitive and motor coordination measures. Computer aided training was not effective in improving overall cognitive functioning. There was no evidence of cognitive skills being…

  6. Computerized History Games: Narrative Options

    ERIC Educational Resources Information Center

    Kee, Kevin

    2011-01-01

    How may historians best express history through computer games? This article suggests that the answer lies in correctly correlating historians' goals for teaching with the capabilities of different kinds of computer games. During the development of a game prototype for high school students, the author followed best practices as expressed in the…

  7. Parallel computing in experimental mechanics and optical measurement: A review (II)

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have successfully integrated into various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computation burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composing of hardware such as CPUs and GPUs, have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which include digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  8. Experiences with Probabilistic Analysis Applied to Controlled Systems

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Giesy, Daniel P.

    2004-01-01

    This paper presents a semi-analytic method for computing frequency dependent means, variances, and failure probabilities for arbitrarily large-order closed-loop dynamical systems possessing a single uncertain parameter or with multiple highly correlated uncertain parameters. The approach will be shown to not suffer from the same computational challenges associated with computing failure probabilities using conventional FORM/SORM techniques. The approach is demonstrated by computing the probabilistic frequency domain performance of an optimal feed-forward disturbance rejection scheme.

  9. Towards prediction of correlated material properties using quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Wagner, Lucas

    Correlated electron systems offer a richness of physics far beyond noninteracting systems. If we would like to pursue the dream of designer correlated materials, or, even to set a more modest goal, to explain in detail the properties and effective physics of known materials, then accurate simulation methods are required. Using modern computational resources, quantum Monte Carlo (QMC) techniques offer a way to directly simulate electron correlations. I will show some recent results on a few extremely challenging materials including the metal-insulator transition of VO2, the ground state of the doped cuprates, and the pressure dependence of magnetic properties in FeSe. By using a relatively simple implementation of QMC, at least some properties of these materials can be described truly from first principles, without any adjustable parameters. Using the QMC platform, we have developed a way of systematically deriving effective lattice models from the simulation. This procedure is particularly attractive for correlated electron systems because the QMC methods treat the one-body and many-body components of the wave function and Hamiltonian on completely equal footing. I will show some examples of using this downfolding technique and the high accuracy of QMC to connect our intuitive ideas about interacting electron systems with high fidelity simulations. The work in this presentation was supported in part by NSF DMR 1206242, the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award Number FG02-12ER46875, and the Center for Emergent Superconductivity, Department of Energy Frontier Research Center under Grant No. DEAC0298CH1088. Computing resources were provided by a Blue Waters Illinois grant and INCITE PhotSuper and SuperMatSim allocations.

  10. Computational Ghost Imaging for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In CGI, the measurement obtained from the reference arm (with the high-resolution detector) is replaced by a computational derivation of the measurement-plane intensity profile of the reference-arm beam. The algorithms applied to computational ghost imaging have diversified beyond simple correlation measurements, and now include modern reconstruction algorithms based on compressive sensing.

  11. The VLBA correlator: Real-time in the distributed era

    NASA Technical Reports Server (NTRS)

    Wells, D. C.

    1992-01-01

    The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware controls the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers, 8-, 16- and 32-bit CPUs. Code is downloaded into front-end CPU's dependent on operating mode. Low-level code is assembly language, high-level code is C running under a RT OS. We use VxWorks on Motorola MVME147 CPU's. Code development is on a complex of SPARC workstations connected to the RT CPU's by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and are transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.

  12. Matching and correlation computations in stereoscopic depth perception.

    PubMed

    Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro

    2011-03-02

    A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For small disparity (0.03°), subjects performed at chance level to a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.

  13. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 3: Program correlation with full scale hardware tests

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.

    1980-01-01

    The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data is compared over a range in speed up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.

  14. Partial correlation-based functional connectivity analysis for functional near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Akın, Ata

    2017-12-01

    A theoretical framework, a partial correlation-based functional connectivity (PC-FC) analysis to functional near-infrared spectroscopy (fNIRS) data, is proposed. This is based on generating a common background signal from a high passed version of fNIRS data averaged over all channels as the regressor in computing the PC between pairs of channels. This approach has been employed to real data collected during a Stroop task. The results show a strong significance in the global efficiency (GE) metric computed by the PC-FC analysis for neutral, congruent, and incongruent stimuli (NS, CS, IcS; GEN=0.10±0.009, GEC=0.11±0.01, GEIC=0.13±0.015, p=0.0073). A positive correlation (r=0.729 and p=0.0259) is observed between the interference of reaction times (incongruent-neutral) and interference of GE values (GEIC-GEN) computed from [HbO] signals.

  15. USAR Recruiting Success Factors.

    DTIC Science & Technology

    1987-12-01

    scores, were used to predict production scores for each recruiter. Benchmark Achievement Scores (BAS) were computed by dividing total production bv...performance compnred to thil avigle would b ,’asl,_cr t.o compute . SAS correlated high y witi IIAS r .96 . so tihe two scores were practically equivalent...asked to make a sales pitch to a prospective enlistee about the benefits of Army life. Presen- tations were scored by computing the ratio of the

  16. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE PAGES

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; ...

    2017-10-25

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al 2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviourmore » of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.« less

  17. Petascale supercomputing to accelerate the design of high-temperature alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al 2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviourmore » of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. As a result, the approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.« less

  18. Petascale supercomputing to accelerate the design of high-temperature alloys

    NASA Astrophysics Data System (ADS)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J. Allen

    2017-12-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ‧-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.

  19. A comparison of high-frequency cross-correlation measures

    NASA Astrophysics Data System (ADS)

    Precup, Ovidiu V.; Iori, Giulia

    2004-12-01

    On a high-frequency scale the time series are not homogeneous, therefore standard correlation measures cannot be directly applied to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001) (linear or previous tick) and then the Pearson correlation statistic computed. Recently, methods that can handle raw non-synchronous time series have been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.

  20. Identifying in vivo DCE MRI markers associated with microvessel architecture and gleason grades of prostate cancer.

    PubMed

    Singanamalli, Asha; Rusu, Mirabela; Sparks, Rachel E; Shih, Natalie N C; Ziober, Amy; Wang, Li-Ping; Tomaszewski, John; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2016-01-01

    To identify computer extracted in vivo dynamic contrast enhanced (DCE) MRI markers associated with quantitative histomorphometric (QH) characteristics of microvessels and Gleason scores (GS) in prostate cancer. This study considered retrospective data from 23 biopsy confirmed prostate cancer patients who underwent 3 Tesla multiparametric MRI before radical prostatectomy (RP). Representative slices from RP specimens were stained with vascular marker CD31. Tumor extent was mapped from RP sections onto DCE MRI using nonlinear registration methods. Seventy-seven microvessel QH features and 18 DCE MRI kinetic features were extracted and evaluated for their ability to distinguish low from intermediate and high GS. The effect of temporal sampling on kinetic features was assessed and correlations between those robust to temporal resolution and microvessel features discriminative of GS were examined. A total of 12 microvessel architectural features were discriminative of low and intermediate/high grade tumors with area under the receiver operating characteristic curve (AUC) > 0.7. These features were most highly correlated with mean washout gradient (WG) (max rho = -0.62). Independent analysis revealed WG to be moderately robust to temporal resolution (intraclass correlation coefficient [ICC] = 0.63) and WG variance, which was poorly correlated with microvessel features, to be predictive of low grade tumors (AUC = 0.77). Enhancement ratio was the most robust (ICC = 0.96) and discriminative (AUC = 0.78) kinetic feature but was moderately correlated with microvessel features (max rho = -0.52). Computer extracted features of prostate DCE MRI appear to be correlated with microvessel architecture and may be discriminative of low versus intermediate and high GS. © 2015 Wiley Periodicals, Inc.

  1. Social-Cognitive Correlates of Fruit and Vegetable Consumption in Minority and Non-Minority Youth

    ERIC Educational Resources Information Center

    Franko, Debra L.; Cousineau, Tara M.; Rodgers, Rachel F.; Roehrig, James P.; Hoffman, Jessica A.

    2013-01-01

    Objective: Inadequate fruit and vegetable (FV) consumption signals a need for identifying predictors and correlates of intake, particularly in diverse adolescents. Design: Participants completed an on-line assessment in early 2010. Setting: Computer classrooms in 4 high schools. Participants: One hundred twenty-two Caucasian and 125 minority…

  2. Agreement Mechanisms in Native and Nonnative Language Processing: Electrophysiological Correlates of Complexity and Interference

    ERIC Educational Resources Information Center

    Tanner, Darren

    2011-01-01

    This dissertation investigates the neural and behavioral correlates of grammatical agreement computation during language comprehension in native English speakers and highly advanced L1 Spanish-L2 English bilinguals. In a series of electrophysiological (event-related brain potential (ERP)) and behavioral (acceptability judgment and self-paced…

  3. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  4. Full 3-D OCT-based pseudophakic custom computer eye model

    PubMed Central

    Sun, M.; Pérez-Merino, P.; Martinez-Enriquez, E.; Velasco-Ocana, M.; Marcos, S.

    2016-01-01

    We compared measured wave aberrations in pseudophakic eyes implanted with aspheric intraocular lenses (IOLs) with simulated aberrations from numerical ray tracing on customized computer eye models, built using quantitative 3-D OCT-based patient-specific ocular geometry. Experimental and simulated aberrations show high correlation (R = 0.93; p<0.0001) and similarity (RMS for high order aberrations discrepancies within 23.58%). This study shows that full OCT-based pseudophakic custom computer eye models allow understanding the relative contribution of optical geometrical and surgically-related factors to image quality, and are an excellent tool for characterizing and improving cataract surgery. PMID:27231608

  5. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, thus ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and bucket detector needs nanosecond trigger precision, so the development of synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulted efficiency is around 14 times faster than state-of-the-arts, and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.

  6. Beyond BCS pairing in high-density neutron matter

    NASA Astrophysics Data System (ADS)

    Rios, A.; Ding, D.; Dussan, H.; Dickhoff, W. H.; Witte, S. J.; Polls, A.

    2018-01-01

    Pairing gaps in neutron matter need to be computed in a wide range of densities to address open questions in neutron star phenomenology. Traditionally, the Bardeen-Cooper-Schrieffer approach has been used to compute gaps from bare nucleon-nucleon interactions. Here, we incorporate the influence of short- and long-range correlations into pairing properties. Short-range correlations are treated including the appropriate fragmentation of single-particle states, and they suppress the gaps substantially. Long-range correlations dress the pairing interaction via density and spin modes, and provide a relatively small correction. We use three different interactions as a starting point to control for any systematic effects. Results are relevant for neutron-star cooling scenarios, in particular in view of the recent observational data on Cassiopeia A.

  7. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from Breakthrough (BTC) curve data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show 1) estimated transition probabilities agree with simulated values and 2) using the SMM with estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.

  8. Computational fluid dynamics evaluation of incomplete stent apposition in a tapered artery

    NASA Astrophysics Data System (ADS)

    Poon, Eric; Thondapu, Vikas; Ooi, Andrew; Hayat, Umair; Barlis, Peter; Moore, Stephen

    2015-11-01

    Coronary stents are deployed to prop open blocked arteries and restore normal blood flow, however in-stent restenosis (ISR) and stent thrombosis (ST) remain possibly catastrophic complications. Computational fluid dynamics (CFD) analyses can elucidate the pathological impact of alterations in coronary hemodynamics and correlate wall shear stress (WSS) with atherosclerotic processes. The natural tapering of a coronary artery often leads to proximal incomplete stent apposition (ISA) where stent struts are not in contact with the vessel wall. By employing state-of-the-art computer-aided design (CAD) software, generic open-cell and closed-cell coronary stent designs were virtually deployed in an idealised tapered coronary artery. Pulsatile blood flow (80 mL/min at 75 beats/min) was carried out numerically on these CAD models using a finite volume solver. CFD results reveal significant fluctuations in proximal WSS and large recirculation regions in the setting of proximal ISA, resulting in regions of high wall shear stress gradient (WSSG) that have been previously linked to poor endothelial cell coverage and vascular injury. The clinical significance of these proximal high WSSG regions will be correlated with findings from high-resolution in-vivo imaging. Supported by the Australian Research Council (LP120100233) and Victorian Life Sciences Computation Initiative (VR0210).

  9. Multicomponent Density Functional Theory: Impact of Nuclear Quantum Effects on Proton Affinities and Geometries.

    PubMed

    Brorsen, Kurt R; Yang, Yang; Hammes-Schiffer, Sharon

    2017-08-03

    Nuclear quantum effects such as zero point energy play a critical role in computational chemistry and often are included as energetic corrections following geometry optimizations. The nuclear-electronic orbital (NEO) multicomponent density functional theory (DFT) method treats select nuclei, typically protons, quantum mechanically on the same level as the electrons. Electron-proton correlation is highly significant, and inadequate treatments lead to highly overlocalized nuclear densities. A recently developed electron-proton correlation functional, epc17, has been shown to provide accurate nuclear densities for molecular systems. Herein, the NEO-DFT/epc17 method is used to compute the proton affinities for a set of molecules and to examine the role of nuclear quantum effects on the equilibrium geometry of FHF - . The agreement of the computed results with experimental and benchmark values demonstrates the promise of this approach for including nuclear quantum effects in calculations of proton affinities, pK a 's, optimized geometries, and reaction paths.

  10. Thermodynamics of Computational Copying in Biochemical Systems

    NASA Astrophysics Data System (ADS)

    Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein

    2017-04-01

    Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.

  11. The correlation of symptoms, pulmonary function tests and exercise testing with high-resolution computed tomography in patients with idiopathic interstitial pneumonia in a tertiary care hospital in South India.

    PubMed

    Isaac, Barney Thomas Jesudason; Thangakunam, Balamugesh; Cherian, Rekha A; Christopher, Devasahayam Jesudas

    2015-01-01

    For the follow-up of patients with idiopathic interstitial pneumonias (IIP), it is unclear which parameters of pulmonary function tests (PFT) and exercise testing would correlate best with high-resolution computed tomography (HRCT).. To find out the correlation of symptom scores, PFTs and exercise testing with HRCT scoring in patients diagnosed as idiopathic interstitial pneumonia. Cross-sectional study done in pulmonary medicine outpatients department of a tertiary care hospital in South India. Consecutive patients who were diagnosed as IIP by a standard algorithm were included into the study. Cough and dyspnea were graded for severity and duration. Pulmonary function tests and exercise testing parameters were noted. HRCT was scored based on an alveolar score, an interstitial score and a total score. The HRCT was correlated with each of the clinical and physiologic parameters. Pearson's/Spearman's correlation coefficient was used for the correlation of symptoms and parameters of ABG, PFT and 6MWT with the HRCT scores. A total of 94 patients were included in the study. Cough and dyspnea severity (r = 0.336 and 0.299), FVC (r = -0.48), TLC (r = -0.439) and DLCO and distance saturation product (DSP) (r = -0.368) and lowest saturation (r = -0.324) had significant correlation with total HRCT score. Among these, DLCO, particularly DLCO corrected % of predicted, correlated best with HRCT score (r = -0.721).. Symptoms, PFT and exercise testing had good correlation with HRCT. DLCO corrected % of predicted correlated best with HRCT.

  12. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper(TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the CrayT3D, the Cray T3E and a Beowulf cluster of Pentium workstations.

  13. Efficiently accounting for ion correlations in electrokinetic nanofluidic devices using density functional theory.

    PubMed

    Gillespie, Dirk; Khair, Aditya S; Bardhan, Jaydeep P; Pennathur, Sumita

    2011-07-15

    The electrokinetic behavior of nanofluidic devices is dominated by the electrical double layers at the device walls. Therefore, accurate, predictive models of double layers are essential for device design and optimization. In this paper, we demonstrate that density functional theory (DFT) of electrolytes is an accurate and computationally efficient method for computing finite ion size effects and the resulting ion-ion correlations that are neglected in classical double layer theories such as Poisson-Boltzmann. Because DFT is derived from liquid-theory thermodynamic principles, it is ideal for nanofluidic systems with small spatial dimensions, high surface charge densities, high ion concentrations, and/or large ions. Ion-ion correlations are expected to be important in these regimes, leading to nonlinear phenomena such as charge inversion, wherein more counterions adsorb at the wall than is necessary to neutralize its surface charge, leading to a second layer of co-ions. We show that DFT, unlike other theories that do not include ion-ion correlations, can predict charge inversion and other nonlinear phenomena that lead to qualitatively different current densities and ion velocities for both pressure-driven and electro-osmotic flows. We therefore propose that DFT can be a valuable modeling and design tool for nanofluidic devices as they become smaller and more highly charged. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Reliable computation from contextual correlations

    NASA Astrophysics Data System (ADS)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo-two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1 /2 . We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  15. Quantitative immunohistochemistry of factor VIII-related antigen in breast carcinoma: a comparison of computer-assisted image analysis with established counting methods.

    PubMed

    Kohlberger, P D; Obermair, A; Sliutz, G; Heinzl, H; Koelbl, H; Breitenecker, G; Gitsch, G; Kainz, C

    1996-06-01

    Microvessel density in the area of the most intense neovascularization in invasive breast carcinoma is reported to be an independent prognostic factor. The established method of enumeration of microvessel density is to count the vessels using an ocular raster (counted microvessel density [CMVD]). The vessels were detected by staining endothelial cells using Factor VIII-related antigen. The aim of the study was to compare the CMVD results with the percentage of factor VIII-related antigen-stained area using computer-assisted image analysis. A true color red-green-blue (RGB) image analyzer based on a morphologically reduced instruction set computer processor was used to evaluate the area of stained endothelial cells. Sixty invasive breast carcinomas were included in the analysis. There was no significant correlation between the CMVD and the percentage of factor VIII-related antigen-stained area (Spearman correlation coefficient = 0.24, confidence interval = 0.02-0.46). Although high CMVD was significantly correlated with poorer recurrence free survival (P = .024), percentage of factor VIII-related antigen-stained area showed no prognostic value. Counted microvessel density and percentage of factor VIII-related antigen-stained area showed a highly significant correlation with vessel invasion (P = .0001 and P = .02, respectively). There was no correlation between CMVD and percentage of factor VIII-related antigen-stained area with other prognostic factors. In contrast to the CMVD within malignant tissue, the percentage of factor VIII-related antigen-stained area is not suitable as an indicator of prognosis in breast cancer patients.

  16. Development of an unsteady aerodynamics model to improve correlation of computed blade stresses with test data

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1985-01-01

    A reliable rotor aeroelastic analysis operational that correctly predicts the vibration levels for a helicopter is utilized to test various unsteady aerodynamics models with the objective of improving the correlation between test and theory. This analysis called Rotor Aeroelastic Vibration (RAVIB) computer program is based on a frequency domain forced response analysis which utilizes the transfer matrix techniques to model helicopter/rotor dynamic systems of varying degrees of complexity. The results for the AH-1G helicopter rotor were compared with the flight test data during high speed operation and they indicated a reasonably good correlation for the beamwise and chordwise blade bending moments, but for torsional moments the correlation was poor. As a result, a new aerodynamics model based on unstalled synthesized data derived from the large amplitude oscillating airfoil experiments was developed and tested.

  17. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance perhaps significantly degrades on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is consistently used in demosaicking and zooming. It also moderately utilizes the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.

  18. Advances in Parallel Computing and Databases for Digital Pathology in Cancer Research

    DTIC Science & Technology

    2016-11-13

    these technologies and how we have used them in the past. We are interested in learning more about the needs of clinical pathologists as we continue to...such as image processing and correlation. Further, High Performance Computing (HPC) paradigms such as the Message Passing Interface (MPI) have been...Defense for Research and Engineering. such as pMatlab [4], or bcMPI [5] can significantly reduce the need for deep knowledge of parallel computing. In

  19. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.

    PubMed

    Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole

    2015-05-12

    Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.

  20. A Highly Accurate Face Recognition System Using Filtering Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko

    2007-09-01

    The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate, seeing that a low-resolution facial image size (64 × 64 pixels) has been successfully implemented. An operation speed of less than 10 ms was achieved using a personal computer with a central processing unit (CPU) of 3 GHz and 2 GB memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: 0% false acceptance rate and 2% false rejection rate. Therefore, the filtering correlation works effectively when applied to low resolution images such as web-based images or faces captured by a monitoring camera.

  1. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831

  2. Validation of CBCT for the computation of textural biomarkers

    NASA Astrophysics Data System (ADS)

    Paniagua, Beatriz; Ruellas, Antonio C.; Benavides, Erika; Marron, Steve; Wolford, Larry; Cevidanes, Lucia

    2015-03-01

    Osteoarthritis (OA) is associated with significant pain and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high resolution Conebeam CT (hr- CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in-vivo using hr-CBCT, compared to those computed in μCT scans as our Gold Standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr- CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT were computed and compared. Pearson correlation coefficients were computed to compare textural biomarkers values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run length non-uniformity) indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in-vivo hr-CBCT have captured the structure, patterns, contrast between neighboring regions and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations, in TMJ OA.

  3. Validation of CBCT for the computation of textural biomarkers

    PubMed Central

    Paniagua, Beatriz; Ruellas, Antonio Carlos; Benavides, Erika; Marron, Steve; Woldford, Larry; Cevidanes, Lucia

    2015-01-01

    Osteoarthritis (OA) is associated with significant pain and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high resolution Conebeam CT (hr-CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in-vivo using hr-CBCT, compared to those computed in μCT scans as our Gold Standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr-CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT were computed and compared. Pearson correlation coefficients were computed to compare textural biomarkers values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run length non-uniformity) indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in-vivo hr-CBCT have captured the structure, patterns, contrast between neighboring regions and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations, in TMJ OA. PMID:26085710

  4. Validation of CBCT for the computation of textural biomarkers.

    PubMed

    Paniagua, Beatriz; Ruellas, Antonio Carlos; Benavides, Erika; Marron, Steve; Woldford, Larry; Cevidanes, Lucia

    2015-03-17

    Osteoarthritis (OA) is associated with significant pain and 42.6% of patients with TMJ disorders present with evidence of TMJ OA. However, OA diagnosis and treatment remain controversial, since there are no clear symptoms of the disease. The subchondral bone in the TMJ is believed to play a major role in the progression of OA. We hypothesize that the textural imaging biomarkers computed in high resolution Conebeam CT (hr-CBCT) and μCT scans are comparable. The purpose of this study is to test the feasibility of computing textural imaging biomarkers in-vivo using hr-CBCT, compared to those computed in μCT scans as our Gold Standard. Specimens of condylar bones obtained from condylectomies were scanned using μCT and hr-CBCT. Nine different textural imaging biomarkers (four co-occurrence features and five run-length features) from each pair of μCT and hr-CBCT were computed and compared. Pearson correlation coefficients were computed to compare textural biomarkers values of μCT and hr-CBCT. Four of the nine computed textural biomarkers showed a strong positive correlation between biomarkers computed in μCT and hr-CBCT. Higher correlations in Energy and Contrast, and in GLN (grey-level non-uniformity) and RLN (run length non-uniformity) indicate quantitative texture features can be computed reliably in hr-CBCT, when compared with μCT. The textural imaging biomarkers computed in-vivo hr-CBCT have captured the structure, patterns, contrast between neighboring regions and uniformity of healthy and/or pathologic subchondral bone. The ability to quantify bone texture non-invasively now makes it possible to evaluate the progression of subchondral bone alterations, in TMJ OA.

  5. SU-E-T-664: Radiobiological Modeling of Prophylactic Cranial Irradiation in Mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D; Debeb, B; Woodward, W

    Purpose: Prophylactic cranial irradiation (PCI) is a clinical technique used to reduce the incidence of brain metastasis and improve overall survival in select patients with ALL and SCLC, and we have shown the potential of PCI in select breast cancer patients through a mouse model (manuscript in preparation). We developed a computational model using our experimental results to demonstrate the advantage of treating brain micro-metastases early. Methods: MATLAB was used to develop the computational model of brain metastasis and PCI in mice. The number of metastases per mouse and the volume of metastases from four- and eight-week endpoints were fitmore » to normal and log-normal distributions, respectively. Model input parameters were optimized so that model output would match the experimental number of metastases per mouse. A limiting dilution assay was performed to validate the model. The effect of radiation at different time points was computationally evaluated through the endpoints of incidence, number of metastases, and tumor burden. Results: The correlation between experimental number of metastases per mouse and the Gaussian fit was 87% and 66% at the two endpoints. The experimental volumes and the log-normal fit had correlations of 99% and 97%. In the optimized model, the correlation between number of metastases per mouse and the Gaussian fit was 96% and 98%. The log-normal volume fit and the model agree 100%. The model was validated by a limiting dilution assay, where the correlation was 100%. The model demonstrates that cells are very sensitive to radiation at early time points, and delaying treatment introduces a threshold dose at which point the incidence and number of metastases decline. Conclusion: We have developed a computational model of brain metastasis and PCI in mice that is highly correlated to our experimental data. The model shows that early treatment of subclinical disease is highly advantageous.« less

  6. Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization

    NASA Astrophysics Data System (ADS)

    Dodge, Doug; Walter, William; Myers, Steve; Ford, Sean; Harris, Dave; Ruppert, Stan; Buttler, Dave; Hauk, Terri

    2013-04-01

    The decrease in costs of both digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory(LLNL) we operate a growing research database of seismic events and waveforms for nuclear explosion monitoring and other applications. Currently the LLNL database contains several million events associated with tens of millions of waveforms at thousands of stations. We are making use of this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. Building on the very efficient correlation methodologies of Harris and Dodge (2011) we computed the waveform correlation for event pairs in the LLNL database in two ways. First we performed entire waveform cross-correlation over seven distinct frequency bands. The correlation coefficient exceeds 0.6 for more than 40 million waveform pairs for several hundred thousand events at more than a thousand stations. These correlations reveal clusters of mining events and aftershock sequences, which can be used to readily identify and locate events. Second we determine relative pick times by correlating signals in time windows for distinct seismic phases. These correlated picks are then used to perform very high accuracy event relocations. We are examining the percentage of events that correlate as a function of magnitude and observing station distance in selected high seismicity regions. Combining these empirical results and those using synthetic data, we are working to quantify relationships between correlation and event pair separation (in epicenter and depth) as well as mechanism differences. Our exploration of these techniques on a large seismic database is in process and we will report on our findings in more detail at the meeting.

  7. Exploring Large-Scale Cross-Correlation for Teleseismic and Regional Seismic Event Characterization

    NASA Astrophysics Data System (ADS)

    Dodge, D.; Walter, W. R.; Myers, S. C.; Ford, S. R.; Harris, D.; Ruppert, S.; Buttler, D.; Hauk, T. F.

    2012-12-01

    The decrease in costs of both digital storage space and computation power invites new methods of seismic data processing. At Lawrence Livermore National Laboratory (LLNL) we operate a growing research database of seismic events and waveforms for nuclear explosion monitoring and other applications. Currently the LLNL database contains several million events associated with tens of millions of waveforms at thousands of stations. We are making use of this database to explore the power of seismic waveform correlation to quantify signal similarities, to discover new events not in catalogs, and to more accurately locate events and identify source types. Building on the very efficient correlation methodologies of Harris and Dodge (2011) we computed the waveform correlation for event pairs in the LLNL database in two ways. First we performed entire waveform cross-correlation over seven distinct frequency bands. The correlation coefficient exceeds 0.6 for more than 40 million waveform pairs for several hundred thousand events at more than a thousand stations. These correlations reveal clusters of mining events and aftershock sequences, which can be used to readily identify and locate events. Second we determine relative pick times by correlating signals in time windows for distinct seismic phases. These correlated picks are then used to perform very high accuracy event relocations. We are examining the percentage of events that correlate as a function of magnitude and observing station distance in selected high seismicity regions. Combining these empirical results and those using synthetic data, we are working to quantify relationships between correlation and event pair separation (in epicenter and depth) as well as mechanism differences. Our exploration of these techniques on a large seismic database is in process and we will report on our findings in more detail at the meeting.

  8. Increasing the computational efficient of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.

  9. Complex-valued time-series correlation increases sensitivity in FMRI analysis.

    PubMed

    Kociuba, Mary C; Rowe, Daniel B

    2016-07-01

    To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Effect of inhibitory feedback on correlated firing of spiking neural network.

    PubMed

    Xie, Jinli; Wang, Zhijie

    2013-08-01

    Understanding the properties and mechanisms that generate different forms of correlation is critical for determining their role in cortical processing. Researches on retina, visual cortex, sensory cortex, and computational model have suggested that fast correlation with high temporal precision appears consistent with common input, and correlation on a slow time scale likely involves feedback. Based on feedback spiking neural network model, we investigate the role of inhibitory feedback in shaping correlations on a time scale of 100 ms. Notably, the relationship between the correlation coefficient and inhibitory feedback strength is non-monotonic. Further, computational simulations show how firing rate and oscillatory activity form the basis of the mechanisms underlying this relationship. When the mean firing rate holds unvaried, the correlation coefficient increases monotonically with inhibitory feedback, but the correlation coefficient keeps decreasing when the network has no oscillatory activity. Our findings reveal that two opposing effects of the inhibitory feedback on the firing activity of the network contribute to the non-monotonic relationship between the correlation coefficient and the strength of the inhibitory feedback. The inhibitory feedback affects the correlated firing activity by modulating the intensity and regularity of the spike trains. Finally, the non-monotonic relationship is replicated with varying transmission delay and different spatial network structure, demonstrating the universality of the results.

  11. Visual computed tomographic scoring of emphysema and its correlation with its diagnostic electrocardiographic sign: the frontal P vector.

    PubMed

    Chhabra, Lovely; Sareen, Pooja; Gandagule, Amit; Spodick, David H

    2012-03-01

    Verticalization of the frontal P vector in patients older than 45 years is virtually diagnostic of pulmonary emphysema (sensitivity, 96%; specificity, 87%). We investigated the correlation of P vector and the computed tomographic visual score of emphysema (VSE) in patients with established diagnosis of chronic obstructive pulmonary disease/emphysema. High-resolution computed tomographic scans of 26 patients with emphysema (age, >45 years) were reviewed to assess the type and extent of emphysema using the subjective visual scoring. Electrocardiograms were independently reviewed to determine the frontal P vector. The P vector and VSE were compared for statistical correlation. Both P vector and VSE were also directly compared with the forced expiratory volume at 1 second. The VSE and the orientation of the P vector (ÂP) had an overall significant positive correlation (r = +0.68; P = .0001) in all patients, but the correlation was very strong in patients with predominant lower-lobe emphysema (r = +0.88; P = .0004). Forced expiratory volume at 1 second and ÂP had almost a linear inverse correlation in predominant lower-lobe emphysema (r = -0.92; P < .0001). Orientation of the P vector positively correlates with visually scored emphysema. Both ÂP and VSE are strong reflectors of qualitative lung function in patients with predominant lower-lobe emphysema. A combination of more vertical ÂP and predominant lower-lobe emphysema reflects severe obstructive lung dysfunction. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. The Software Correlator of the Chinese VLBI Network

    NASA Technical Reports Server (NTRS)

    Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli

    2010-01-01

    The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in the CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a realtime and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators in the multiprocessor environments are under development. A high speed, 10-station prototype correlator using the mixed Pthreads and MPI (Massage Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts the thread-parallel technology, and it runs on the SMP (Symmetric Multiple Processor) servers. Both correlators have the characteristic of flexible structure and scalability.

  13. The relation between anxiety and BMI - is it all in our curves?

    PubMed

    Haghighi, Mohammad; Jahangard, Leila; Ahmadpanah, Mohammad; Bajoghli, Hafez; Holsboer-Trachsler, Edith; Brand, Serge

    2016-01-30

    The relation between anxiety and excessive weight is unclear. The aims of the present study were three-fold: First, we examined the association between anxiety and Body Mass Index (BMI). Second, we examined this association separately for female and male participants. Next, we examined both linear and non-linear associations between anxiety and BMI. The BMI was assessed of 92 patients (mean age: M=27.52; 57% females) suffering from anxiety disorders. Patients completed the Beck Anxiety Inventory. Both linear and non-linear correlations were computed for the sample as a whole and separately by gender. No gender differences were observed in anxiety scores or BMI. No linear correlation between anxiety scores and BMI was observed. In contrast, a non-linear correlation showed an inverted U-shaped association, with lower anxiety scores both for lower and very high BMI indices, and higher anxiety scores for medium to high BMI indices. Separate computations revealed no differences between males and females. The pattern of results suggests that the association between BMI and anxiety is complex and more accurately captured with non-linear correlations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. High Frequency Sampling of TTL Pulses on a Raspberry Pi for Diffuse Correlation Spectroscopy Applications.

    PubMed

    Tivnan, Matthew; Gurjar, Rajan; Wolf, David E; Vishwanath, Karthik

    2015-08-12

    Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS become more widely used in biomedical applications.

  15. High Frequency Sampling of TTL Pulses on a Raspberry Pi for Diffuse Correlation Spectroscopy Applications

    PubMed Central

    Tivnan, Matthew; Gurjar, Rajan; Wolf, David E.; Vishwanath, Karthik

    2015-01-01

    Diffuse Correlation Spectroscopy (DCS) is a well-established optical technique that has been used for non-invasive measurement of blood flow in tissues. Instrumentation for DCS includes a correlation device that computes the temporal intensity autocorrelation of a coherent laser source after it has undergone diffuse scattering through a turbid medium. Typically, the signal acquisition and its autocorrelation are performed by a correlation board. These boards have dedicated hardware to acquire and compute intensity autocorrelations of rapidly varying input signal and usually are quite expensive. Here we show that a Raspberry Pi minicomputer can acquire and store a rapidly varying time-signal with high fidelity. We show that this signal collected by a Raspberry Pi device can be processed numerically to yield intensity autocorrelations well suited for DCS applications. DCS measurements made using the Raspberry Pi device were compared to those acquired using a commercial hardware autocorrelation board to investigate the stability, performance, and accuracy of the data acquired in controlled experiments. This paper represents a first step toward lowering the instrumentation cost of a DCS system and may offer the potential to make DCS become more widely used in biomedical applications. PMID:26274961

  16. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation.

    PubMed

    Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge

    2014-09-01

    An outstanding issue in graph-based analysis of resting-state functional MRI is choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 1(9)-1(11) links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination replacing Pearson correlation with l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of l1-norm on an array of 4096 zero instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-sate network mapping and revealing fine-grained node degree heterogeneity. L1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. High-Degree Neurons Feed Cortical Computations

    PubMed Central

    Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.

    2016-01-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884

  18. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. It's primary capability is computing full-field displacements and strains from sequences of digital These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classesmore » or through a graphical user interface.« less

  19. Comparison of high pressure transient PVT measurements and model predictions. Part I.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felver, Todd G.; Paradiso, Nicholas Joseph; Evans, Gregory Herbert

    2010-07-01

    A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using Transient PVT methodology have been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However to properly evaluate the mass that has transferred as a function of time these computational tools rely on heat transfer correlations that must bemore » confirmed experimentally. In this work new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling is also discussed.« less

  20. Specific Features of Pressure-Fluctuation Fields in the Vicinity of a Forward-Facing Step-Backward-Facing Step Configuration

    NASA Astrophysics Data System (ADS)

    Golubev, A. Yu.

    2018-01-01

    A computational model of inhomogeneous pressure-fluctuation fields in the vicinity of a forward-facing step-backward-facing step configuration taking into account the high degree of their mutual correlation (global correlation) is generalized from experimental data. It is shown that when determining the characteristics of pressure fluctuations that act on an elastic structure, the global correlation is represented by an additional inhomogeneous field. It is demonstrated that a high degree of correlation may lead to a significant change in the main characteristics of the pressure-fluctuation field in the wake behind the configuration. This is taken into consideration in the model by correcting the local properties of this field.

  1. Viscous-flow analysis of a subsonic transport aircraft high-lift system and correlation with flight data

    NASA Technical Reports Server (NTRS)

    Potter, R. C.; Vandam, C. P.

    1995-01-01

    High-lift system aerodynamics has been gaining attention in recent years. In an effort to improve aircraft performance, comprehensive studies of multi-element airfoil systems are being undertaken in wind-tunnel and flight experiments. Recent developments in Computational Fluid Dynamics (CFD) offer a relatively inexpensive alternative for studying complex viscous flows by numerically solving the Navier-Stokes (N-S) equations. Current limitations in computer resources restrict practical high-lift N-S computations to two dimensions, but CFD predictions can yield tremendous insight into flow structure, interactions between airfoil elements, and effects of changes in airfoil geometry or free-stream conditions. These codes are very accurate when compared to strictly 2D data provided by wind-tunnel testing, as will be shown here. Yet, additional challenges must be faced in the analysis of a production aircraft wing section, such as that of the NASA Langley Transport Systems Research Vehicle (TSRV). A primary issue is the sweep theory used to correlate 2D predictions with 3D flight results, accounting for sweep, taper, and finite wing effects. Other computational issues addressed here include the effects of surface roughness of the geometry, cove shape modeling, grid topology, and transition specification. The sensitivity of the flow to changing free-stream conditions is investigated. In addition, the effects of Gurney flaps on the aerodynamic characteristics of the airfoil system are predicted.

  2. Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds

    NASA Technical Reports Server (NTRS)

    Boppe, Charles W.

    1987-01-01

    A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.; Marcy, Peter W.

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.

  4. Co-occurrence of addictive behaviours: personality factors related to substance use, gambling and computer gaming.

    PubMed

    Walther, Birte; Morgenstern, Matthis; Hanewinkel, Reiner

    2012-01-01

    To investigate co-occurrence and shared personality characteristics of problematic computer gaming, problematic gambling and substance use. Cross-sectional survey data were collected from 2,553 German students aged 12-25 years. Self-report measures of substance use (alcohol, tobacco and cannabis), problematic gambling (South Oaks Gambling Screen - Revised for Adolescents, SOGS-RA), problematic computer gaming (Video Game Dependency Scale, KFN-CSAS-II), and of twelve different personality characteristics were obtained. Analyses revealed positive correlations between tobacco, alcohol and cannabis use and a smaller positive correlation between problematic gambling and problematic computer gaming. Problematic computer gaming co-occurred only with cannabis use, whereas problematic gambling was associated with all three types of substance use. Multivariate multilevel analyses showed differential patterns of personality characteristics. High impulsivity was the only personality characteristic associated with all five addictive behaviours. Depression and extraversion were specific to substance users. Four personality characteristics were specifically associated with problematic computer gaming: irritability/aggression, social anxiety, ADHD, and low self-esteem. Problematic gamblers seem to be more similar to substance users than problematic computer gamers. From a personality perspective, results correspond to the inclusion of gambling in the same DSM-V category as substance use and question a one-to-one proceeding for computer gaming. Copyright © 2012 S. Karger AG, Basel.

  5. Quantitative CT analysis of honeycombing area in idiopathic pulmonary fibrosis: Correlations with pulmonary function tests.

    PubMed

    Nakagawa, Hiroaki; Nagatani, Yukihiro; Takahashi, Masashi; Ogawa, Emiko; Tho, Nguyen Van; Ryujin, Yasushi; Nagao, Taishi; Nakano, Yasutaka

    2016-01-01

    The 2011 official statement of idiopathic pulmonary fibrosis (IPF) mentions that the extent of honeycombing and the worsening of fibrosis on high-resolution computed tomography (HRCT) in IPF are associated with the increased risk of mortality. However, there are few reports about the quantitative computed tomography (CT) analysis of honeycombing area. In this study, we first proposed a computer-aided method for quantitative CT analysis of honeycombing area in patients with IPF. We then evaluated the correlations between honeycombing area measured by the proposed method with that estimated by radiologists or with parameters of PFTs. Chest HRCTs and pulmonary function tests (PFTs) of 36 IPF patients, who were diagnosed using HRCT alone, were retrospectively evaluated. Two thoracic radiologists independently estimated the honeycombing area as Identified Area (IA) and the percentage of honeycombing area to total lung area as Percent Area (PA) on 3 axial CT slices for each patient. We also developed a computer-aided method to measure the honeycombing area on CT images of those patients. The total honeycombing area as CT honeycombing area (HA) and the percentage of honeycombing area to total lung area as CT %honeycombing area (%HA) were derived from the computer-aided method for each patient. HA derived from three CT slices was significantly correlated with IA (ρ=0.65 for Radiologist 1 and ρ=0.68 for Radiologist 2). %HA derived from three CT slices was also significantly correlated with PA (ρ=0.68 for Radiologist 1 and ρ=0.70 for Radiologist 2). HA and %HA derived from all CT slices were significantly correlated with FVC (%pred.), DLCO (%pred.), and the composite physiologic index (CPI) (HA: ρ=-0.43, ρ=-0.56, ρ=0.63 and %HA: ρ=-0.60, ρ=-0.49, ρ=0.69, respectively). The honeycombing area measured by the proposed computer-aided method was correlated with that estimated by expert radiologists and with parameters of PFTs. This quantitative CT analysis of honeycombing area may be useful and reliable in patients with IPF. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Using Q-Chem on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    initio quantum chemistry package with special strengths in excited state methods, non-adiabatic coupling , solvation models, explicitly correlated wavefunction methods, and cutting-edge DFT. Running Q-Chem on

  7. Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments

    NASA Astrophysics Data System (ADS)

    Tsuji, Daisuke; Suyama, Kenji

    This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on the MUSIC (MUltiple SIgnal Classification) which is one of the most high resolution localization methods. When using the MUSIC, a computation of eigenvectors of correlation matrix is required for the estimation. It needs often a high computational costs. Especially, in the situation of moving source, it becomes a crucial drawback because the estimation must be conducted at every the observation time. Moreover, since the correlation matrix varies its characteristics due to the spatial-temporal non-stationarity, the matrix have to be estimated using only a few observed samples. It makes the estimation accuracy degraded. In this paper, the PAST (Projection Approximation Subspace Tracking) is applied for sequentially estimating the eigenvectors spanning the subspace. In the PAST, the eigen-decomposition is not required, and therefore it is possible to reduce the computational costs. Several experimental results in the actual room environments are shown to present the superior performance of the proposed method.

  8. Chromatographic and computational assessment of lipophilicity using sum of ranking differences and generalized pair-correlation.

    PubMed

    Andrić, Filip; Héberger, Károly

    2015-02-06

    Lipophilicity (logP) represents one of the most studied and most frequently used fundamental physicochemical properties. At present there are several possibilities for its quantitative expression and many of them stems from chromatographic experiments. Numerous attempts have been made to compare different computational methods, chromatographic methods vs. computational approaches, as well as chromatographic methods and direct shake-flask procedure without definite results or these findings are not accepted generally. In the present work numerous chromatographically derived lipophilicity measures in combination with diverse computational methods were ranked and clustered using the novel variable discrimination and ranking approaches based on the sum of ranking differences and the generalized pair correlation method. Available literature logP data measured on HILIC, and classical reversed-phase combining different classes of compounds have been compared with most frequently used multivariate data analysis techniques (principal component and hierarchical cluster analysis) as well as with the conclusions in the original sources. Chromatographic lipophilicity measures obtained under typical reversed-phase conditions outperform the majority of computationally estimated logPs. Oppositely, in the case of HILIC none of the many proposed chromatographic indices overcomes any of the computationally assessed logPs. Only two of them (logkmin and kmin) may be selected as recommended chromatographic lipophilicity measures. Both ranking approaches, sum of ranking differences and generalized pair correlation method, although based on different backgrounds, provides highly similar variable ordering and grouping leading to the same conclusions. Copyright © 2015. Published by Elsevier B.V.

  9. A Radio-genomics Approach for Identifying High Risk Estrogen Receptor-positive Breast Cancers on DCE-MRI: Preliminary Results in Predicting OncotypeDX Risk Scores

    PubMed Central

    Wan, Tao; Bloch, B. Nicolas; Plecha, Donna; Thompson, CheryI L.; Gilmore, Hannah; Jaffe, Carl; Harris, Lyndsay; Madabhushi, Anant

    2016-01-01

    To identify computer extracted imaging features for estrogen receptor (ER)-positive breast cancers on dynamic contrast en-hanced (DCE)-MRI that are correlated with the low and high OncotypeDX risk categories. We collected 96 ER-positivebreast lesions with low (<18, N = 55) and high (>30, N = 41) OncotypeDX recurrence scores. Each lesion was quantitatively charac-terize via 6 shape features, 3 pharmacokinetics, 4 enhancement kinetics, 4 intensity kinetics, 148 textural kinetics, 5 dynamic histogram of oriented gradient (DHoG), and 6 dynamic local binary pattern (DLBP) features. The extracted features were evaluated by a linear discriminant analysis (LDA) classifier in terms of their ability to distinguish low and high OncotypeDX risk categories. Classification performance was evaluated by area under the receiver operator characteristic curve (Az). The DHoG and DLBP achieved Az values of 0.84 and 0.80, respectively. The 6 top features identified via feature selection were subsequently combined with the LDA classifier to yield an Az of 0.87. The correlation analysis showed that DHoG (ρ = 0.85, P < 0.001) and DLBP (ρ = 0.83, P < 0.01) were significantly associated with the low and high risk classifications from the OncotypeDX assay. Our results indicated that computer extracted texture features of DCE-MRI were highly correlated with the high and low OncotypeDX risk categories for ER-positive cancers. PMID:26887643

  10. Validation of a novel computer-assisted sperm analysis (CASA) system using multitarget-tracking algorithms.

    PubMed

    Tomlinson, Mathew James; Pooley, Karen; Simpson, Tracey; Newton, Thomas; Hopkisson, James; Jayaprakasan, Kannamanadias; Jayaprakasan, Rajisha; Naeem, Asad; Pridmore, Tony

    2010-04-01

    To determine the accuracy and precision of a novel computer-assisted sperm analysis (CASA) system by comparison with existing recommended manual methods. Prospective study using comparative measurements of sperm concentration and motility on latex beads and immotile and motile sperm. Tertiary referral fertility center with strong academic links. Sperm donors and male partners of couples attending for fertility investigations. None. Achievement of Accubead target value for high and low concentration suspensions. Repeatability as demonstrated by coefficients of variation and intraclass correlation coefficients. Correlation and limits of agreement between CASA and manual methods. The CASA measurements of latex beads and sperm concentrations demonstrated a high level of accuracy and repeatability. Repeated Accubead measurements attained the required target value (mean difference from target of 2.61% and 3.71% for high- and low-concentration suspensions, respectively) and were highly reproducible. Limits of agreement analysis suggested that manual and CASA counts compared directly could be deemed to be interchangeable. Manual and CASA motility measurements were highly correlated for grades a, b, and d but could not be deemed to be interchangeable, and manual motility estimates were consistently higher for motile sperm. The novel CASA system was able to provide semen quality measurements for sperm concentration and motility measurements which were at least as reliable as current manual methods. Copyright 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  11. Regression relation for pure quantum states and its implications for efficient computing.

    PubMed

    Elsayed, Tarek A; Fine, Boris V

    2013-02-15

    We obtain a modified version of the Onsager regression relation for the expectation values of quantum-mechanical operators in pure quantum states of isolated many-body quantum systems. We use the insights gained from this relation to show that high-temperature time correlation functions in many-body quantum systems can be controllably computed without complete diagonalization of the Hamiltonians, using instead the direct integration of the Schrödinger equation for randomly sampled pure states. This method is also applicable to quantum quenches and other situations describable by time-dependent many-body Hamiltonians. The method implies exponential reduction of the computer memory requirement in comparison with the complete diagonalization. We illustrate the method by numerically computing infinite-temperature correlation functions for translationally invariant Heisenberg chains of up to 29 spins 1/2. Thereby, we also test the spin diffusion hypothesis and find it in a satisfactory agreement with the numerical results. Both the derivation of the modified regression relation and the justification of the computational method are based on the notion of quantum typicality.

  12. Noise-immune complex correlation for vasculature imaging based on standard and Jones-matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented by using the SNR-corrected complex correlation. An SNR-correction theory for the complex correlation calculation is presented. The method also integrates a motion-artifact-removal method for the sample motion induced decorrelation artifact. The theory is further extended to compute more reliable correlation by using multi- channel OCT systems, such as Jones-matrix OCT. The high contrast vasculature imaging of in vivo human posterior eye has been obtained. Composite imaging of cmOCA and degree of polarization uniformity indicates abnormalities of vasculature and pigmented tissues simultaneously.

  13. Metabolomics of Breast Cancer Using High-Resolution Magic Angle Spinning Magnetic Resonance Spectroscopy: Correlations with 18F-FDG Positron Emission Tomography-Computed Tomography, Dynamic Contrast-Enhanced and Diffusion-Weighted Imaging MRI.

    PubMed

    Yoon, Haesung; Yoon, Dahye; Yun, Mijin; Choi, Ji Soo; Park, Vivian Youngjean; Kim, Eun-Kyung; Jeong, Joon; Koo, Ja Seung; Yoon, Jung Hyun; Moon, Hee Jung; Kim, Suhkmann; Kim, Min Jung

    2016-01-01

    Our goal in this study was to find correlations between breast cancer metabolites and conventional quantitative imaging parameters using high-resolution magic angle spinning (HR-MAS) magnetic resonance spectroscopy (MRS) and to find breast cancer subgroups that show high correlations between metabolites and imaging parameters. Between August 2010 and December 2013, we included 53 female patients (mean age 49.6 years; age range 32-75 years) with a total of 53 breast lesions assessed by the Breast Imaging Reporting and Data System. They were enrolled under the following criteria: breast lesions larger than 1 cm in diameter which 1) were suspicious for malignancy on mammography or ultrasound (US), 2) were pathologically confirmed to be breast cancer with US-guided core-needle biopsy (CNB) 3) underwent 3 Tesla MRI with dynamic contrast-enhanced (DCE) and diffusion-weighted imaging (DWI) and positron emission tomography-computed tomography (PET-CT), and 4) had an attainable immunohistochemistry profile from CNB. We acquired spectral data by HR-MAS MRS with CNB specimens and expressed the data as relative metabolite concentrations. We compared the metabolites with the signal enhancement ratio (SER), maximum standardized FDG uptake value (SUV max), apparent diffusion coefficient (ADC), and histopathologic prognostic factors for correlation. We calculated Spearman correlations and performed a partial least squares-discriminant analysis (PLS-DA) to further classify patient groups into subgroups to find correlation differences between HR-MAS spectroscopic values and conventional imaging parameters. In a multivariate analysis, the PLS-DA models built with HR-MAS MRS metabolic profiles showed visible discrimination between high and low SER, SUV, and ADC. In luminal subtype breast cancer, compared to all cases, high SER, ADV, and SUV were more closely clustered by visual assessment. Multiple metabolites were correlated with SER and SUV in all cases. Multiple metabolites showed correlations with SER and SUV in the ER positive, HER2 negative, and Ki-67 negative groups. High levels of PC, choline, and glycine acquired from HR-MAS MRS using CNB specimens were noted in the high SER group via DCE MRI and the high SUV group via PET-CT, with significant correlations between choline and SER and between PC and SUV. Further studies should investigate whether HR-MAS MRS using CNB specimens can provide similar or more prognostic information than conventional quantitative imaging parameters.

  14. A multiparametric assay for quantitative nerve regeneration evaluation.

    PubMed

    Weyn, B; van Remoortere, M; Nuydens, R; Meert, T; van de Wouwer, G

    2005-08-01

    We introduce an assay for the semi-automated quantification of nerve regeneration by image analysis. Digital images of histological sections of regenerated nerves are recorded using an automated inverted microscope and merged into high-resolution mosaic images representing the entire nerve. These are analysed by a dedicated image-processing package that computes nerve-specific features (e.g. nerve area, fibre count, myelinated area) and fibre-specific features (area, perimeter, myelin sheet thickness). The assay's performance and correlation of the automatically computed data with visually obtained data are determined on a set of 140 semithin sections from the distal part of a rat tibial nerve from four different experimental treatment groups (control, sham, sutured, cut) taken at seven different time points after surgery. Results show a high correlation between the manually and automatically derived data, and a high discriminative power towards treatment. Extra value is added by the large feature set. In conclusion, the assay is fast and offers data that currently can be obtained only by a combination of laborious and time-consuming tests.

  15. Creating a Computer Adaptive Test Version of the Late-Life Function & Disability Instrument

    PubMed Central

    Jette, Alan M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Moed, Richard

    2009-01-01

    Background This study applied Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies to develop a prototype function and disability assessment instrument for use in aging research. Herein, we report on the development of the CAT version of the Late-Life Function & Disability instrument (Late-Life FDI) and evaluate its psychometric properties. Methods We employed confirmatory factor analysis, IRT methods, validation, and computer simulation analyses of data collected from 671 older adults residing in residential care facilities. We compared accuracy, precision, and sensitivity to change of scores from CAT versions of two Late-Life FDI scales with scores from the fixed-form instrument. Score estimates from the prototype CAT versus the original instrument were compared in a sample of 40 older adults. Results Distinct function and disability domains were identified within the Late-Life FDI item bank and used to construct two prototype CAT scales. Using retrospective data, scores from computer simulations of the prototype CAT scales were highly correlated with scores from the original instrument. The results of computer simulation, accuracy, precision, and sensitivity to change of the CATs closely approximated those of the fixed-form scales, especially for the 10- or 15-item CAT versions. In the prospective study each CAT was administered in less than 3 minutes and CAT scores were highly correlated with scores generated from the original instrument. Conclusions CAT scores of the Late-Life FDI were highly comparable to those obtained from the full-length instrument with a small loss in accuracy, precision, and sensitivity to change. PMID:19038841

  16. Surface temperature statistics over Los Angeles - The influence of land use

    NASA Technical Reports Server (NTRS)

    Dousset, Benedicte

    1991-01-01

    Surface temperature statistics from 84 NOAA AVHRR (Advanced Very High Resolution Radiometer) satellite images of the Los Angeles basin are interpreted as functions of the corresponding urban land-cover classified from a multispectral SPOT image. Urban heat islands observed in the temperature statistics correlate well with the distribution of industrial and fully built areas. Small cool islands coincide with highly watered parks and golf courses. There is a significant negative correlation between the afternoon surface temperature and a vegetation index computed from the SPOT image.

  17. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.

  18. Relationships (I) of International Classification of High-resolution Computed Tomography for Occupational and Environmental Respiratory Diseases with the ILO International Classification of Radiographs of Pneumoconioses for parenchymal abnormalities.

    PubMed

    Tamura, Taro; Suganuma, Narufumi; Hering, Kurt G; Vehmas, Tapio; Itoh, Harumi; Akira, Masanori; Takashima, Yoshihiro; Hirano, Harukazu; Kusaka, Yukinori

    2015-01-01

    The International Classification of High-resolution Computed Tomography (HRCT) for Occupational and Environmental Respiratory Diseases (ICOERD) has been developed for the screening, diagnosis, and epidemiological reporting of respiratory diseases caused by occupational hazards. This study aimed to establish a correlation between readings of HRCT (according to the ICOERD) and those of chest radiography (CXR) pneumoconiotic parenchymal opacities (according to the International Labor Organization Classification/International Classification of Radiographs of Pneumoconioses [ILO/ICRP]). Forty-six patients with and 28 controls without mineral dust exposure underwent posterior-anterior CXR and HRCT. We recorded all subjects' exposure and smoking history. Experts independently read CXRs (using ILO/ICRP). Experts independently assessed HRCT using the ICOERD parenchymal abnormalities grades for well-defined rounded opacities (RO), linear and/or irregular opacities (IR), and emphysema (EM). The correlation between the ICOERD summed grades and ILO/ICRP profusions was evaluated using Spearman's rank-order correlation. Twenty-three patients had small opacities on CXR. HRCT showed that 21 patients had RO; 20 patients, IR opacities; and 23 patients, EM. The correlation between ILO/ICRP profusions and the ICOERD grades was 0.844 for rounded opacities (p<0.01). ICOERD readings from HRCT scans correlated well with previously validated ILO/ICRP criteria. The ICOERD adequately detects pneumoconiotic micronodules and can be used for the interpretation of pneumoconiosis.

  19. A National Study of the Relationship between Home Access to a Computer and Academic Performance Scores of Grade 12 U.S. Science Students: An Analysis of the 2009 NAEP Data

    NASA Astrophysics Data System (ADS)

    Coffman, Mitchell Ward

    The purpose of this dissertation was to examine the relationship between student access to a computer at home and academic achievement. The 2009 National Assessment of Educational Progress (NAEP) dataset was probed using the National Data Explorer (NDE) to investigate correlations in the subsets of SES, Parental Education, Race, and Gender as it relates to access of a home computer and improved performance scores for U.S. public school grade 12 science students. A causal-comparative approach was employed seeking clarity on the relationship between home access and performance scores. The influence of home access cannot overcome the challenges students of lower SES face. The achievement gap, or a second digital divide, for underprivileged classes of students, including minorities does not appear to contract via student access to a home computer. Nonetheless, in tests for significance, statistically significant improvement in science performance scores was reported for those having access to a computer at home compared to those not having access. Additionally, regression models reported evidence of correlations between and among subsets of controls for the demographic factors gender, race, and socioeconomic status. Variability in these correlations was high; suggesting influence from unobserved factors may have more impact upon the dependent variable. Having access to a computer at home increases performance scores for grade 12 general science students of all races, genders and socioeconomic levels. However, the performance gap is roughly equivalent to the existing performance gap of the national average for science scores, suggesting little influence from access to a computer on academic achievement. The variability of scores reported in the regression analysis models reflects a moderate to low effect, suggesting an absence of causation. These statistical results are accurate and confirm the literature review, whereby having access to a computer at home and the predictor variables were found to have a significant impact on performance scores, although the data presented suggest computer access at home is less influential upon performance scores than poverty and its correlates.

  20. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

    Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferonni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, are well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288

  1. Online blind source separation using incremental nonnegative matrix factorization with volume constraint.

    PubMed

    Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei

    2011-04-01

    Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) shows great potential to separate the correlative sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with volume constraint is derived and utilized for solving online BSS. The volume constraint to the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural gradient based multiplication updating rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and high correlative face images show the validity of the proposed method.

  2. Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin

    2018-01-04

    In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment-trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. Copyright © 2018 Montesinos-Lopez et al.

  3. Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems

    PubMed Central

    Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C.; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin

    2018-01-01

    In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment–trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. PMID:29097376

  4. Lung Ultrasonography in Patients With Idiopathic Pulmonary Fibrosis: Evaluation of a Simplified Protocol With High-Resolution Computed Tomographic Correlation.

    PubMed

    Vassalou, Evangelia E; Raissaki, Maria; Magkanas, Eleftherios; Antoniou, Katerina M; Karantanas, Apostolos H

    2018-03-01

    To compare a simplified ultrasonographic (US) protocol in 2 patient positions with the same-positioned comprehensive US assessments and high-resolution computed tomographic (CT) findings in patients with idiopathic pulmonary fibrosis. Twenty-five consecutive patients with idiopathic pulmonary fibrosis were prospectively enrolled and examined in 2 sessions. During session 1, patients were examined with a US protocol including 56 lung intercostal spaces in supine/sitting (supine/sitting comprehensive protocol) and lateral decubitus (decubitus comprehensive protocol) positions. During session 2, patients were evaluated with a 16-intercostal space US protocol in sitting (sitting simplified protocol) and left/right decubitus (decubitus simplified protocol) positions. The 16 intercostal spaces were chosen according to the prevalence of idiopathic pulmonary fibrosis-related changes on high-resolution CT. The sum of B-lines counted in each intercostal space formed the US scores for all 4 US protocols: supine/sitting and decubitus comprehensive US scores and sitting and decubitus simplified US scores. High-resolution CT-related Warrick scores (J Rheumatol 1991; 18:1520-1528) were compared to US scores. The duration of each protocol was recorded. A significant correlation was found between all US scores and Warrick scores and between simplified and corresponding comprehensive scores (P < .0001). Decubitus simplified US scores showed a slightly higher correlation with Warrick scores compared to sitting simplified US scores. Mean durations of decubitus and sitting simplified protocols were 4.76 and 6.20 minutes, respectively (P < .005). Simplified 16-intercostal space protocols correlated with comprehensive protocols and high-resolution CT findings in patients with idiopathic pulmonary fibrosis. The 16-intercostal space simplified protocol in the lateral decubitus position correlated better with high-resolution CT findings and was less time-consuming compared to the sitting position. © 2017 by the American Institute of Ultrasound in Medicine.

  5. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    NASA Astrophysics Data System (ADS)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations ( 3.5 × 105 lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.

  6. Use of radiography, computed tomography and magnetic resonance imaging for evaluation of navicular syndrome in the horse.

    PubMed

    Widmer, W R; Buckwalter, K A; Fessler, J F; Hill, M A; VanSickle, D C; Ivancevich, S

    2000-01-01

    Radiographic evaluation of navicular syndrome is problematic because of its inconsistent correlation with clinical signs. Scintigraphy often yields false positive and false negative results and diagnostic ultrasound is of limited value. Therefore, we assessed the use of computed tomography and magnetic resonance imaging in a horse with clinical and radiographic signs of navicular syndrome. Cadaver specimens were examined with spiral computed tomographic and high-field magnetic resonance scanners and images were correlated with pathologic findings. Radiographic changes consisted of bony remodeling, which included altered synovial fossae, increased medullary opacity, cyst formation and shape change. These osseous changes were more striking and more numerous on computed tomographic and magnetic resonance images. They were most clearly defined with computed tomography. Many osseous changes seen with computed tomography and magnetic resonance imaging were not radiographically evident. Histologically confirmed soft tissue alterations of the deep digital flexor tendon, impar ligament and marrow were identified with magnetic resonance imaging, but not with conventional radiography. Because of their multiplanar capability and tomographic nature, computed tomography and magnetic resonance imaging surpass conventional radiography for navicular imaging, facilitating earlier, more accurate diagnosis. Current advances in imaging technology should make these imaging modalities available to equine practitioners in the future.

  7. Reproducibility of abdominal fat assessment by ultrasound and computed tomography

    PubMed Central

    Mauad, Fernando Marum; Chagas-Neto, Francisco Abaeté; Benedeti, Augusto César Garcia Saab; Nogueira-Barbosa, Marcello Henrique; Muglia, Valdair Francisco; Carneiro, Antonio Adilton Oliveira; Muller, Enrico Mattana; Elias Junior, Jorge

    2017-01-01

    Objective: To test the accuracy and reproducibility of ultrasound and computed tomography (CT) for the quantification of abdominal fat in correlation with the anthropometric, clinical, and biochemical assessments. Materials and Methods: Using ultrasound and CT, we determined the thickness of subcutaneous and intra-abdominal fat in 101 subjects-of whom 39 (38.6%) were men and 62 (61.4%) were women-with a mean age of 66.3 years (60-80 years). The ultrasound data were correlated with the anthropometric, clinical, and biochemical parameters, as well as with the areas measured by abdominal CT. Results: Intra-abdominal thickness was the variable for which the correlation with the areas of abdominal fat was strongest (i.e., the correlation coefficient was highest). We also tested the reproducibility of ultrasound and CT for the assessment of abdominal fat and found that CT measurements of abdominal fat showed greater reproducibility, having higher intraobserver and interobserver reliability than had the ultrasound measurements. There was a significant correlation between ultrasound and CT, with a correlation coefficient of 0.71. Conclusion: In the assessment of abdominal fat, the intraobserver and interobserver reliability were greater for CT than for ultrasound, although both methods showed high accuracy and good reproducibility. PMID:28670024

  8. Reproducibility of abdominal fat assessment by ultrasound and computed tomography.

    PubMed

    Mauad, Fernando Marum; Chagas-Neto, Francisco Abaeté; Benedeti, Augusto César Garcia Saab; Nogueira-Barbosa, Marcello Henrique; Muglia, Valdair Francisco; Carneiro, Antonio Adilton Oliveira; Muller, Enrico Mattana; Elias Junior, Jorge

    2017-01-01

    To test the accuracy and reproducibility of ultrasound and computed tomography (CT) for the quantification of abdominal fat in correlation with the anthropometric, clinical, and biochemical assessments. Using ultrasound and CT, we determined the thickness of subcutaneous and intra-abdominal fat in 101 subjects-of whom 39 (38.6%) were men and 62 (61.4%) were women-with a mean age of 66.3 years (60-80 years). The ultrasound data were correlated with the anthropometric, clinical, and biochemical parameters, as well as with the areas measured by abdominal CT. Intra-abdominal thickness was the variable for which the correlation with the areas of abdominal fat was strongest (i.e., the correlation coefficient was highest). We also tested the reproducibility of ultrasound and CT for the assessment of abdominal fat and found that CT measurements of abdominal fat showed greater reproducibility, having higher intraobserver and interobserver reliability than had the ultrasound measurements. There was a significant correlation between ultrasound and CT, with a correlation coefficient of 0.71. In the assessment of abdominal fat, the intraobserver and interobserver reliability were greater for CT than for ultrasound, although both methods showed high accuracy and good reproducibility.

  9. Adolescent Sedentary Behaviors: Correlates Differ for Television Viewing and Computer Use

    PubMed Central

    Babey, Susan H.; Hastert, Theresa A.; Wolstein, Joelle

    2013-01-01

    Purpose Sedentary behavior is associated with obesity in youth. Understanding correlates of specific sedentary behaviors can inform the development of interventions to reduce sedentary time. The current research examines correlates of leisure computer use and television viewing among California adolescents. Methods Using data from the 2005 California Health Interview Survey (CHIS), we examined individual, family and environmental correlates of two sedentary behaviors among 4,029 adolescents: leisure computer use and television watching. Results Linear regression analyses adjusting for a range of factors indicated several differences in the correlates of television watching and computer use. Correlates of additional time spent watching television included male sex, American Indian and African American race, lower household income, lower levels of physical activity, lower parent educational attainment, and additional hours worked by parents. Correlates of a greater amount of time spent using the computer for fun included older age, Asian race, higher household income, lower levels of physical activity, less parental knowledge of free time activities, and living in neighborhoods with higher proportions of non-white residents and higher proportions of low-income residents. Only physical activity was associated similarly with both watching television and computer use. Conclusions These results suggest that correlates of time spent on television watching and leisure computer use are different. Reducing screen time is a potentially successful strategy in combating childhood obesity, and understanding differences in the correlates of different screen time behaviors can inform the development of more effective interventions to reduce sedentary time. PMID:23260837

  10. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in the recent years combine conventional multi-core CPU with GPU accelerators and provide an opportunity for manifold increase and computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide the interested external researchers the regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise mapping application is composed of four principal modules: (1) pre-processing of raw data, (2) massive cross-correlation, (3) post-processing of correlation data based on computation of logarithmic energy ratio and (4) generation of source maps from post-processed data. Implementation of the solution posed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.

  11. CFD Assessment of Aerodynamic Degradation of a Subsonic Transport Due to Airframe Damage

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Pirzadeh, Shahyar Z.; Atkins, Harold L.; Viken, Sally A.; Morrison, Joseph H.

    2010-01-01

    A computational study is presented to assess the utility of two NASA unstructured Navier-Stokes flow solvers for capturing the degradation in static stability and aerodynamic performance of a NASA General Transport Model (GTM) due to airframe damage. The approach is to correlate computational results with a substantial subset of experimental data for the GTM undergoing progressive losses to the wing, vertical tail, and horizontal tail components. The ultimate goal is to advance the probability of inserting computational data into the creation of advanced flight simulation models of damaged subsonic aircraft in order to improve pilot training. Results presented in this paper demonstrate good correlations with slope-derived quantities, such as pitch static margin and static directional stability, and incremental rolling moment due to wing damage. This study further demonstrates that high fidelity Navier-Stokes flow solvers could augment flight simulation models with additional aerodynamic data for various airframe damage scenarios.

  12. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta andmore » coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.« less

  13. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  14. Computation of canonical correlation and best predictable aspect of future for time series

    NASA Technical Reports Server (NTRS)

    Pourahmadi, Mohsen; Miamee, A. G.

    1989-01-01

    The canonical correlation between the (infinite) past and future of a stationary time series is shown to be the limit of the canonical correlation between the (infinite) past and (finite) future, and computation of the latter is reduced to a (generalized) eigenvalue problem involving (finite) matrices. This provides a convenient and essentially, finite-dimensional algorithm for computing canonical correlations and components of a time series. An upper bound is conjectured for the largest canonical correlation.

  15. Study of structural colour of Hebomoia glaucippe butterfly wing scales

    NASA Astrophysics Data System (ADS)

    Shur, V. Ya; Kuznetsov, D. K.; Pryakhina, V. I.; Kosobokov, M. S.; Zubarev, I. V.; Boymuradova, S. K.; Volchetskaya, K. V.

    2017-10-01

    Structural colours of Hebomoia glaucippe butterfly wing scales have been studied experimentally using high resolution scanning electron microscopy. Visualization of scales structures and computer simulation allowed distinguishing correlation between nanostructures on the scales and their colour.

  16. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems

    PubMed Central

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Background Today, due to developing communicative technologies, computer games and other audio-visual media as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in the public uncertainties about plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems on male guidance school students. Methods This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach’s Youth Self-Report (YSR). Findings The Results of this study indicated that there was about 95% direct significant correlation between the amount of playing games among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students’ place of living and their parents’ job, and using computer games. Conclusion Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents. PMID:24494157

  17. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems.

    PubMed

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Today, due to developing communicative technologies, computer games and other audio-visual media as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in the public uncertainties about plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems on male guidance school students. This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). The Results of this study indicated that there was about 95% direct significant correlation between the amount of playing games among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students' place of living and their parents' job, and using computer games. Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents.

  18. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, presents of outlying sample units, and the fact that in most cases the number of units is much larger than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of the high-dimensionality, and data contamination. Computations are easy to implement and do not require hand tunings. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performances. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows to automatically detect and filter spurious correlations. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network is compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R software is available at https://github.com/angy89/RobustSparseCorrelation. aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  19. Vertical facial height and its correlation with facial width and depth: Three dimensional cone beam computed tomography evaluation based on dry skulls.

    PubMed

    Wang, Ming Feng; Otsuka, Takero; Akimoto, Susumu; Sato, Sadao

    2013-01-01

    The aim of the present study was to evaluate how vertical facial height correlates with mandibular plane angle, facial width and depth from a three dimensional (3D) viewing angle. In this study 3D cephalometric landmarks were identified and measurements from 43 randomly selected cone beam computed tomography (CBCT) images of dry skulls from the Weisbach collection of Vienna Natural History Museum were analyzed. Pearson correlation coefficients of facial height measurements and mandibular plane angle and the correlation coefficients of height-width and height-depth were calculated, respectively. The mandibular plane angle (MP-SN) significantly correlated with ramus height (Co-Go) and posterior facial height (PFH) but not with anterior lower face height (ALFH) or anterior total face height (ATFH). The ALFH and ATFH showed significant correlation with anterior cranial base length (S-N), whereas PFH showed significant correlation with the mandible (S-B) and maxilla (S-A) anteroposterior position. High or low mandibular plane angle might not necessarily be accompanied by long or short anterior face height, respectively. The PFH rather than AFH is assumed to play a key role in the vertical facial type whereas AFH seems to undergo relatively intrinsic growth.

  20. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓ r norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimation the functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.

  1. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimation the functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986

  2. Certification in Structural Health Monitoring Systems

    DTIC Science & Technology

    2011-09-01

    validation [3,8]. This may be accomplished by computing the sum of squares of pure error ( SSPE ) and its associated squared correlation [3,8]. To compute...these values, a cross- validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data...plethora of cross- validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE

  3. Evaluation of trabecular bone patterns on dental radiographic images: influence of cortical bone

    NASA Astrophysics Data System (ADS)

    Amouriq, Yves; Evenou, Pierre; Arlicot, Aurore; Normand, Nicolas; Layrolle, Pierre; Weiss, Pierre; Guédon, Jean-Pierre

    2010-03-01

    For some authors trabecular bone is highly visible in intraoral radiographs. For other authors, the observed intrabony trabecular pattern is a representation of only the endosteal surface of cortical bone, not of intermedullary striae. The purpose of this preliminary study was to investigate the true anatomical structures that are visible in routine dental radiographs and classically denoted trabecular bone. This is a major point for bone texture analysis on radiographs. Computed radiography (CR) images of dog mandible section in molar region were compared with simulations calculated from high-resolution micro-CT volumes. Calculated simulations were obtained using the Mojette Transform. By digitally editing the CT volume, the simulations were separated into trabecular and cortical components into a region of interest. Different images were compared and correlated, some bone micro-architecture parameters calculated. A high correlation was found between computed radiographs and calculated simulations from micro-CT. The Mojette transform was successful to obtain high quality images. Cortical bone did not contribute to change in a major way simulated images. These first results imply that intrabony trabecular pattern observed on radiographs can not only be a representation of the cortical bone endosteal surface and that trabecular bone is highly visible in intraoral radiographs.

  4. Precision measurements and computations of transition energies in rotationally cold triatomic hydrogen ions up to the midvisible spectral range.

    PubMed

    Pavanello, Michele; Adamowicz, Ludwik; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Polyansky, Oleg L; Tennyson, Jonathan; Szidarovszky, Tamás; Császár, Attila G; Berg, Max; Petrignani, Annemieke; Wolf, Andreas

    2012-01-13

    First-principles computations and experimental measurements of transition energies are carried out for vibrational overtone lines of the triatomic hydrogen ion H(3)(+) corresponding to floppy vibrations high above the barrier to linearity. Action spectroscopy is improved to detect extremely weak visible-light spectral lines on cold trapped H(3)(+) ions. A highly accurate potential surface is obtained from variational calculations using explicitly correlated Gaussian wave function expansions. After nonadiabatic corrections, the floppy H(3)(+) vibrational spectrum is reproduced at the 0.1 cm(-1) level up to 16600 cm(-1).

  5. An exploratory investigation of various assessment instruments as correlates of complex visual monitoring performance.

    DOT National Transportation Integrated Search

    1980-10-01

    The present study examined a variety of possible predictors of complex monitoring performance. The criterion task was designed to resemble that of a highly automated air traffic control radar system containing computer-generated alphanumeric displays...

  6. Amoeba-based computing for traveling salesman problem: long-term correlations between spatially separated individual cells of Physarum polycephalum.

    PubMed

    Zhu, Liping; Aono, Masashi; Kim, Song-Ju; Hara, Masahiko

    2013-04-01

    A single-celled, multi-nucleated amoeboid organism, a plasmodium of the true slime mold Physarum polycephalum, can perform sophisticated computing by exhibiting complex spatiotemporal oscillatory dynamics while deforming its amorphous body. We previously devised an "amoeba-based computer (ABC)" to quantitatively evaluate the optimization capability of the amoeboid organism in searching for a solution to the traveling salesman problem (TSP) under optical feedback control. In ABC, the organism changes its shape to find a high quality solution (a relatively shorter TSP route) by alternately expanding and contracting its pseudopod-like branches that exhibit local photoavoidance behavior. The quality of the solution serves as a measure of the optimality of which the organism maximizes its global body area (nutrient absorption) while minimizing the risk of being illuminated (exposure to aversive stimuli). ABC found a high quality solution for the 8-city TSP with a high probability. However, it remains unclear whether intracellular communication among the branches of the organism is essential for computing. In this study, we conducted a series of control experiments using two individual cells (two single-celled organisms) to perform parallel searches in the absence of intercellular communication. We found that ABC drastically lost its ability to find a solution when it used two independent individuals. However, interestingly, when two individuals were prepared by dividing one individual, they found a solution for a few tens of minutes. That is, the two divided individuals remained correlated even though they were spatially separated. These results suggest the presence of a long-term memory in the intrinsic dynamics of this organism and its significance in performing sophisticated computing. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Assessment of cardiovascular risk profile based on measurement of tophus volume in patients with gout.

    PubMed

    Lee, Kyung-Ann; Ryu, Se-Ri; Park, Seong-Jun; Kim, Hae-Rim; Lee, Sang-Heon

    2018-05-01

    Hyperuricemia and gout are associated with increased risk of cardiovascular disease and metabolic syndrome. The aim of this study was to evaluate the correlation of total tophus volumes, measured using dual-energy computed tomography, with cardiovascular risk and the presence of metabolic syndrome. Dual-energy computed tomography datasets from 91 patients with a diagnosis of gout were analyzed retrospectively. Patients who received urate lowering therapy were excluded to avoid the effect on tophus volume. The total volumes of tophaceous deposition were quantified using automated volume assessment software. The 10-year cardiovascular risk using the Framingham Risk Score and metabolic syndrome based on the Third Adult Treatment Panel criteria were estimated. Fifty-five and 36 patients with positive and negative dual-energy computed tomography results, respectively, were assessed. Patients with positive dual-energy computed tomography results showed significantly higher systolic blood pressure, diastolic blood pressure, fasting glucose, and higher prevalence of chronic kidney disease, compared with those with negative dual-energy computed tomography results. The total tophus volumes were significantly correlated with the Framingham Risk Score, and the number of metabolic syndrome components (r = 0.22 and p = 0.036 and r = 0.373 and p < 0.001, respectively). The total tophus volume was one of the independent prognostic factors for the Framingham Risk Score in a multivariate analysis. This study showed the correlation of total tophus volumes with cardiovascular risk and metabolic syndrome-related comorbidities. A high urate burden could affect unfavorable cardiovascular profiles.

  8. Computational measurement of joint space width and structural parameters in normal hips.

    PubMed

    Nishii, Takashi; Shiomi, Toshiyuki; Sakai, Takashi; Takao, Masaki; Yoshikawa, Hideki; Sugano, Nobuhiko

    2012-05-01

    Joint space width (JSW) of hip joints on radiographs in normal population may vary by related factors, but previous investigations were insufficient due to limitations of sources of radiographs, inclusion of subjects with osteoarthritis, and manual measurement techniques. We investigated influential factors on JSW using semiautomatic computational software on pelvic radiographs in asymptomatic subjects without radiological osteoarthritic findings. Global and local JSW at the medial, middle, and lateral compartments, and the hip structural parameters were measured in asymptomatic, normal 150 cases (300 hips), using a customized computational software. Reliability of measurement in global and local JSWs was high with intraobserver reproducibility (intraclass correlation coefficient) ranging from 0.957 to 0.993 and interobserver reproducibility ranging from 0.925 to 0.985. There were significant differences among three local JSWs, with the largest JSW at the lateral compartment. Global and medial local JSWs were significantly larger in the right hip, and global, medial and middle local JSWs were significantly smaller in women. Global and local JSWs were inversely correlated with CE angle and positively correlated with horizontal distance of the head center, but not correlated with body mass index in men and women. They were positively correlated with age and inversely correlated with vertical distance of the head center only in men. There were interindividual variations of JSW in normal population, depending on sites of the weight-bearing area, side, gender, age, and hip structural parameters. For accurate diagnosis and assessment of hip osteoarthritis, consideration of those influential factors other than degenerative change is important.

  9. Assessment of lumbosacral kyphosis in spondylolisthesis: a computer-assisted reliability study of six measurement techniques

    PubMed Central

    Glavas, Panagiotis; Mac-Thiong, Jean-Marc; Parent, Stefan; de Guise, Jacques A.

    2008-01-01

    Although recognized as an important aspect in the management of spondylolisthesis, there is no consensus on the most reliable and optimal measure of lumbosacral kyphosis (LSK). Using a custom computer software, four raters evaluated 60 standing lateral radiographs of the lumbosacral spine during two sessions at a 1-week interval. The sample size consisted of 20 normal, 20 low and 20 high grade spondylolisthetic subjects. Six parameters were included for analysis: Boxall’s slip angle, Dubousset’s lumbosacral angle (LSA), the Spinal Deformity Study Group’s (SDSG) LSA, dysplastic SDSG LSA, sagittal rotation (SR), kyphotic Cobb angle (k-Cobb). Intra- and inter-rater reliability for all parameters was assessed using intra-class correlation coefficients (ICC). Correlations between parameters and slip percentage were evaluated with Pearson coefficients. The intra-rater ICC’s for all the parameters ranged between 0.81 and 0.97 and the inter-rater ICC’s were between 0.74 and 0.98. All parameters except sagittal rotation showed a medium to large correlation with slip percentage. Dubousset’s LSA and the k-Cobb showed the largest correlations (r = −0.78 and r = −0.50, respectively). SR was associated with the weakest correlation (r = −0.10). All other parameters had medium correlations with percent slip (r = 0.31–0.43). All measurement techniques provided excellent inter- and intra-rater reliability. Dubousset’s LSA showed the strongest correlation with slip grade. This parameter can be used in the clinical setting with PACS software capabilities to assess LSK. A computer-assisted technique is recommended in order to increase the reliability of the measurement of LSK in spondylolisthesis. PMID:19015898

  10. Recurrence formulas for fully exponentially correlated four-body wave functions

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.

    2009-03-01

    Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances rij , multiplied by an exponential containing an arbitrary linear combination of all the rij . These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.

  11. Solar Terrestrial Influences on the D Region as Shown by the Level of Atmospheric Radio Noise

    NASA Technical Reports Server (NTRS)

    Satori, G.; Schaning, B.

    1984-01-01

    Measurements of the integrated atmospheric radio noise field strength at 27 kHz, used here, were made from 1965 to 1975 at Uppsala, Kuhlungsborn, and Prague-Panska Ves. The large scale meteorological situation was considered by comparing solar disturbed and undisturbed periods under similar weather situations. In order to show the effects of the precipitating high energy particle (HEP) flux and of the Forbush decrease on the noise level between pairs of stations were computed as deviations from the monthly median. Delta E (dB), day by day for all six periods was studied. The correlation coefficients for noon as well as for night values were computed. The correlation coefficients were compared with those for solar undisturbed periods.

  12. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Lone Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass data collected by the VLBI observatories and produces the visibility function of the target, which can be used to spacecraft position, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a task of data intensive and computation intensive. This paper presents the algorithms of two parallel software correlators under multiprocessor environments. A near real-time correlator for spacecraft tracking adopts the pipelining and thread-parallel technology, and runs on the SMP (Symmetric Multiple Processor) servers. Another high speed prototype correlator using the mixed Pthreads and MPI (Massage Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators have the characteristic of flexible structure, scalability, and with 10-station data correlating abilities.

  13. A Computer Program for the Calculation of Three-Dimensional Transonic Nacelle/Inlet Flowfields

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Atta, E. H.

    1983-01-01

    A highly efficient computer analysis was developed for predicting transonic nacelle/inlet flowfields. This algorithm can compute the three dimensional transonic flowfield about axisymmetric (or asymmetric) nacelle/inlet configurations at zero or nonzero incidence. The flowfield is determined by solving the full-potential equation in conservative form on a body-fitted curvilinear computational mesh. The difference equations are solved using the AF2 approximate factorization scheme. This report presents a discussion of the computational methods used to both generate the body-fitted curvilinear mesh and to obtain the inviscid flow solution. Computed results and correlations with existing methods and experiment are presented. Also presented are discussions on the organization of the grid generation (NGRIDA) computer program and the flow solution (NACELLE) computer program, descriptions of the respective subroutines, definitions of the required input parameters for both algorithms, a brief discussion on interpretation of the output, and sample cases to illustrate application of the analysis.

  14. Predicting through-focus visual acuity with the eye's natural aberrations.

    PubMed

    Kingston, Amanda C; Cox, Ian G

    2013-10-01

    To develop a predictive optical modeling process that utilizes individual computer eye models along with a novel through-focus image quality metric. Individual eye models were implemented in optical design software (Zemax, Bellevue, WA) based on evaluation of ocular aberrations, pupil diameter, visual acuity, and accommodative response of 90 subjects (180 eyes; 24-63 years of age). Monocular high-contrast minimum angle of resolution (logMAR) acuity was assessed at 6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, 33 cm, 28 cm, and 25 cm. While the subject fixated on the lowest readable line of acuity, total ocular aberrations and pupil diameter were measured three times each using the Complete Ophthalmic Analysis System (COAS HD VR) at each distance. A subset of 64 mature presbyopic eyes was used to predict the clinical logMAR acuity performance of five novel multifocal contact lens designs. To validate predictability of the design process, designs were manufactured and tested clinically on a population of 24 mature presbyopes (having at least +1.50 D spectacle add at 40 cm). Seven object distances were used in the validation study (6 m, 2 m, 1 m, 67 cm, 50 cm, 40 cm, and 25 cm) to measure monocular high-contrast logMAR acuity. Baseline clinical through-focus logMAR was shown to correlate highly (R² = 0.85) with predicted logMAR from individual eye models. At all object distances, each of the five multifocal lenses showed less than one line difference, on average, between predicted and clinical normalized logMAR acuity. Correlation showed R² between 0.90 and 0.97 for all multifocal designs. Computer-based models that account for patient's aberrations, pupil diameter changes, and accommodative amplitude can be used to predict the performance of contact lens designs. With this high correlation (R² ≥ 0.90) and high level of predictability, more design options can be explored in the computer to optimize performance before a lens is manufactured and tested clinically.

  15. 35 years of Ambient Noise: Can We Evidence Daily to Climatic Relative Velocity Changes ?

    NASA Astrophysics Data System (ADS)

    Lecocq, T.; Pedersen, H.; Brenguier, F.; Stammler, K.

    2014-12-01

    The broadband Grafenberg array (Germany) has been installed in 1976 and, thanks to visionary scientists and network maintainers, the continuous data acquired has been preserved until today. Using state of the art pre-processing and cross-correlation techniques, we are able to extract cross-correlation functions (CCF) between sensor pairs. It has been shown recently that, provided enough computation power is available, there is no need to define a reference CCF to compare all days to. Indeed, one can compare each day to all days, computing the "all-doublet". The number of calculations becomes huge (N vs ref = N calculations, N vs N= N*N), but the result, once inverted, is way more stable because of the N observations per day. This analysis has been done on a parallelized version of MSNoise (http://msnoise.org), running on the VEGA cluster hosted at the Université Libre de Bruxelles (ULB, Belgium). Here, we present preliminary results of the analysis of two stations, GRA1 and GRA2, the first two stations installed in March 1976. The interferogram (observation of the CCF through time, see Figure) already shows interesting features in the ballistic wave shape, highly correlated to the seasons. A reasonably high correlation can still be seen outside the ballistic arrival, after +-5 second lag time. The lag times between 5 and 25 seconds are then used to compute the dv/v using the all-doublet method. We expect to evidence daily to seasonal, or even to longer period dv/v variations and/or noise source position changes using this method. Once done with 1 sensor pair, the full data of the Grafenberg array will be used to enhance the resolution even more.

  16. Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering.

    PubMed

    Groen, Nathalie; Guvendiren, Murat; Rabitz, Herschel; Welsh, William J; Kohn, Joachim; de Boer, Jan

    2016-04-01

    The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. In this opinion paper, we postulate that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. Copyright © 2016. Published by Elsevier Ltd.

  17. Studying flow close to an interface by total internal reflection fluorescence cross-correlation spectroscopy: Quantitative data analysis

    NASA Astrophysics Data System (ADS)

    Schmitz, R.; Yordanov, S.; Butt, H. J.; Koynov, K.; Dünweg, B.

    2011-12-01

    Total internal reflection fluorescence cross-correlation spectroscopy (TIR-FCCS) has recently [S. Yordanov , Optics ExpressOPEXFF1094-408710.1364/OE.17.021149 17, 21149 (2009)] been established as an experimental method to probe hydrodynamic flows near surfaces, on length scales of tens of nanometers. Its main advantage is that fluorescence occurs only for tracer particles close to the surface, thus resulting in high sensitivity. However, the measured correlation functions provide only rather indirect information about the flow parameters of interest, such as the shear rate and the slip length. In the present paper, we show how to combine detailed and fairly realistic theoretical modeling of the phenomena by Brownian dynamics simulations with accurate measurements of the correlation functions, in order to establish a quantitative method to retrieve the flow properties from the experiments. First, Brownian dynamics is used to sample highly accurate correlation functions for a fixed set of model parameters. Second, these parameters are varied systematically by means of an importance-sampling Monte Carlo procedure in order to fit the experiments. This provides the optimum parameter values together with their statistical error bars. The approach is well suited for massively parallel computers, which allows us to do the data analysis within moderate computing times. The method is applied to flow near a hydrophilic surface, where the slip length is observed to be smaller than 10nm, and, within the limitations of the experiments and the model, indistinguishable from zero.

  18. Digital tomosynthesis and high resolution computed tomography as clinical tools for vertebral endplate topography measurements: Comparison with microcomputed tomography.

    PubMed

    Oravec, Daniel; Quazi, Abrar; Xiao, Angela; Yang, Ellen; Zauel, Roger; Flynn, Michael J; Yeni, Yener N

    2015-12-01

    Endplate morphology is understood to play an important role in the mechanical behavior of vertebral bone as well as degenerative processes in spinal tissues; however, the utility of clinical imaging modalities in assessment of the vertebral endplate has been limited. The objective of this study was to evaluate the ability of two clinical imaging modalities (digital tomosynthesis, DTS; high resolution computed tomography, HRCT) to assess endplate topography by correlating the measurements to a microcomputed tomography (μCT) standard. DTS, HRCT, and μCT images of 117 cadaveric thoracolumbar vertebrae (T10-L1; 23 male, 19 female; ages 36-100 years) were segmented, and inferior and superior endplate surface topographical distribution parameters were calculated. Both DTS and HRCT showed statistically significant correlations with μCT approaching a moderate level of correlation at the superior endplate for all measured parameters (R(2)Adj=0.19-0.57), including averages, variability, and higher order statistical moments. Correlation of average depths at the inferior endplate was comparable to the superior case for both DTS and HRCT (R(2)Adj=0.14-0.51), while correlations became weak or nonsignificant for higher moments of the topography distribution. DTS was able to capture variations in the endplate topography to a slightly better extent than HRCT, and taken together with the higher speed and lower radiation cost of DTS than HRCT, DTS appears preferable for endplate measurements. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Fast computation of molecular random phase approximation correlation energies using resolution of the identity and imaginary frequency integration

    NASA Astrophysics Data System (ADS)

    Eshuis, Henk; Yarkony, Julian; Furche, Filipp

    2010-06-01

    The random phase approximation (RPA) is an increasingly popular post-Kohn-Sham correlation method, but its high computational cost has limited molecular applications to systems with few atoms. Here we present an efficient implementation of RPA correlation energies based on a combination of resolution of the identity (RI) and imaginary frequency integration techniques. We show that the RI approximation to four-index electron repulsion integrals leads to a variational upper bound to the exact RPA correlation energy if the Coulomb metric is used. Auxiliary basis sets optimized for second-order Møller-Plesset (MP2) calculations are well suitable for RPA, as is demonstrated for the HEAT [A. Tajti et al., J. Chem. Phys. 121, 11599 (2004)] and MOLEKEL [F. Weigend et al., Chem. Phys. Lett. 294, 143 (1998)] benchmark sets. Using imaginary frequency integration rather than diagonalization to compute the matrix square root necessary for RPA, evaluation of the RPA correlation energy requires O(N4 log N) operations and O(N3) storage only; the price for this dramatic improvement over existing algorithms is a numerical quadrature. We propose a numerical integration scheme that is exact in the two-orbital case and converges exponentially with the number of grid points. For most systems, 30-40 grid points yield μH accuracy in triple zeta basis sets, but much larger grids are necessary for small gap systems. The lowest-order approximation to the present method is a post-Kohn-Sham frequency-domain version of opposite-spin Laplace-transform RI-MP2 [J. Jung et al., Phys. Rev. B 70, 205107 (2004)]. Timings for polyacenes with up to 30 atoms show speed-ups of two orders of magnitude over previous implementations. The present approach makes it possible to routinely compute RPA correlation energies of systems well beyond 100 atoms, as is demonstrated for the octapeptide angiotensin II.

  20. Fast computation of molecular random phase approximation correlation energies using resolution of the identity and imaginary frequency integration.

    PubMed

    Eshuis, Henk; Yarkony, Julian; Furche, Filipp

    2010-06-21

    The random phase approximation (RPA) is an increasingly popular post-Kohn-Sham correlation method, but its high computational cost has limited molecular applications to systems with few atoms. Here we present an efficient implementation of RPA correlation energies based on a combination of resolution of the identity (RI) and imaginary frequency integration techniques. We show that the RI approximation to four-index electron repulsion integrals leads to a variational upper bound to the exact RPA correlation energy if the Coulomb metric is used. Auxiliary basis sets optimized for second-order Møller-Plesset (MP2) calculations are well suitable for RPA, as is demonstrated for the HEAT [A. Tajti et al., J. Chem. Phys. 121, 11599 (2004)] and MOLEKEL [F. Weigend et al., Chem. Phys. Lett. 294, 143 (1998)] benchmark sets. Using imaginary frequency integration rather than diagonalization to compute the matrix square root necessary for RPA, evaluation of the RPA correlation energy requires O(N(4) log N) operations and O(N(3)) storage only; the price for this dramatic improvement over existing algorithms is a numerical quadrature. We propose a numerical integration scheme that is exact in the two-orbital case and converges exponentially with the number of grid points. For most systems, 30-40 grid points yield muH accuracy in triple zeta basis sets, but much larger grids are necessary for small gap systems. The lowest-order approximation to the present method is a post-Kohn-Sham frequency-domain version of opposite-spin Laplace-transform RI-MP2 [J. Jung et al., Phys. Rev. B 70, 205107 (2004)]. Timings for polyacenes with up to 30 atoms show speed-ups of two orders of magnitude over previous implementations. The present approach makes it possible to routinely compute RPA correlation energies of systems well beyond 100 atoms, as is demonstrated for the octapeptide angiotensin II.

  1. Challenges of Big Data Analysis.

    PubMed

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-06-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinguished and require new computational and statistical paradigm. This article gives overviews on the salient features of Big Data and how these features impact on paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on the Big Data analysis and computation. In particular, we emphasize on the viability of the sparsest solution in high-confidence set and point out that exogeneous assumptions in most statistical methods for Big Data can not be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions.

  2. Challenges of Big Data Analysis

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han

    2014-01-01

    Big Data bring new opportunities to modern society and challenges to data scientists. On one hand, Big Data hold great promises for discovering subtle population patterns and heterogeneities that are not possible with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity, and measurement errors. These challenges are distinguished and require new computational and statistical paradigm. This article gives overviews on the salient features of Big Data and how these features impact on paradigm change on statistical and computational methods as well as computing architectures. We also provide various new perspectives on the Big Data analysis and computation. In particular, we emphasize on the viability of the sparsest solution in high-confidence set and point out that exogeneous assumptions in most statistical methods for Big Data can not be validated due to incidental endogeneity. They can lead to wrong statistical inferences and consequently wrong scientific conclusions. PMID:25419469

  3. Direct biomechanical modeling of trabecular bone using a nonlinear manifold-based volumetric representation

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold-model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. This preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical strain-strain analysis on a nonlinear manifold model of a highly complex biological structure.

  4. Covariance analyses of satellite-derived mesoscale wind fields

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Vonder Haar, T. H.

    1979-01-01

    Statistical structure functions have been computed independently for nine satellite-derived mesoscale wind fields that were obtained on two different days. Small cumulus clouds were tracked at 5 min intervals, but since these clouds occurred primarily in the warm sectors of midlatitude cyclones the results cannot be considered representative of the circulations within cyclones in general. The field structure varied considerably with time and was especially affected if mesoscale features were observed. The wind fields on the 2 days studied were highly anisotropic with large gradients in structure occurring approximately normal to the mean flow. Structure function calculations for the combined set of satellite winds were used to estimate random error present in the fields. It is concluded for these data that the random error in vector winds derived from cumulus cloud tracking using high-frequency satellite data is less than 1.75 m/s. Spatial correlation functions were also computed for the nine data sets. Normalized correlation functions were considerably different for u and v components and decreased rapidly as data point separation increased for both components. The correlation functions for transverse and longitudinal components decreased less rapidly as data point separation increased.

  5. Brief communication: patient satisfaction with the use of tablet computers: a pilot study in two outpatient home dialysis clinics.

    PubMed

    Schick-Makaroff, Kara; Molzahn, Anita

    2014-01-01

    Electronic capture of patients' reports of their health is significant in clinical nephrology research because health-related quality of life (HRQOL) for patients with end-stage renal disease is compromised and assessment by patients of their HRQOL in practice is relatively uncommon. The purpose of this study was to evaluate patient satisfaction with and time involved in administering HRQOL and symptom assessment measures using tablet computers in two outpatient home dialysis clinics. A cross-sectional observational study design was employed. The study was conducted in two home dialysis clinics. Fifty-six patients participated in the study; 35 males (63%) and 21 females (37%) with a mean age of 66 ± 12 (36-90 years old) were included. Forty-nine participants were on peritoneal dialysis (87%), 6 on home hemodialysis (11%), and 1 on nocturnal home hemodialysis (2%). Measures included the Kidney Disease Quality of Life-36 (KDQOL-36), the Edmonton Symptom Assessment Scale (ESAS) and Participant's Level of Satisfaction in Using a Tablet Computer. Using a tablet computer, participants completed the three measures. Descriptive statistics and bivariate correlations were calculated. Participants' satisfaction with use of the tablet computer was high; 66% were "very satisfied", 7% "satisfied", 2% "slightly satisfied", and 18% "neutral". On the 7-point Likert-type scale, the mean satisfaction score was 5.11 (SD = 1.6). Mean time to complete the measures was: Level of Satisfaction 1.15 minutes (SD = 0.41), ESAS 2.55 minutes (SD = 1.04), and KDQOL 9.56 minutes (SD = 2.03); the mean time to complete all three instruments was 13.19 minutes (SD = 2.42). There were no significant correlations between level of satisfaction and age, gender, HRQOL, time taken to complete surveys, computer experience, or comfort with technology. Comfort with technology and computer experience were highly correlated, r = .7, p (one-tailed) < 0.01. Limitations include lack of generalizability because of a small self-selected sample of relatively healthy patients and a lack of psychometric testing on the measure of satisfaction. Participants were satisfied with the platform and the time involved for completion of instruments was modest. Routine use of HRQOL measures for clinical purposes may be facilitated through use of tablet computers.

  6. A satellite snow depth multi-year average derived from SSM/I for the high latitude regions

    USGS Publications Warehouse

    Biancamaria, S.; Mognard, N.M.; Boone, A.; Grippa, M.; Josberger, E.G.

    2008-01-01

    The hydrological cycle for high latitude regions is inherently linked with the seasonal snowpack. Thus, accurately monitoring the snow depth and the associated aerial coverage are critical issues for monitoring the global climate system. Passive microwave satellite measurements provide an optimal means to monitor the snowpack over the arctic region. While the temporal evolution of snow extent can be observed globally from microwave radiometers, the determination of the corresponding snow depth is more difficult. A dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from Special Sensor Microwave/Imager (SSM/I) brightness temperatures and was validated over the U.S. Great Plains and Western Siberia. The purpose of this study is to assess the dynamic algorithm performance over the entire high latitude (land) region by computing a snow depth multi-year field for the time period 1987-1995. This multi-year average is compared to the Global Soil Wetness Project-Phase2 (GSWP2) snow depth computed from several state-of-the-art land surface schemes and averaged over the same time period. The multi-year average obtained by the dynamic algorithm is in good agreement with the GSWP2 snow depth field (the correlation coefficient for January is 0.55). The static algorithm, which assumes a constant snow grain size in space and time does not correlate with the GSWP2 snow depth field (the correlation coefficient with GSWP2 data for January is - 0.03), but exhibits a very high anti-correlation with the NCEP average January air temperature field (correlation coefficient - 0.77), the deepest satellite snow pack being located in the coldest regions, where the snow grain size may be significantly larger than the average value used in the static algorithm. The dynamic algorithm performs better over Eurasia (with a correlation coefficient with GSWP2 snow depth equal to 0.65) than over North America (where the correlation coefficient decreases to 0.29). ?? 2007 Elsevier Inc. All rights reserved.

  7. Remote Measurement of Heat Flux from Power Plant Cooling Lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, Alfred J.; Kurzeja, Robert J.; Villa-Aleman, Eliel

    2013-06-01

    Laboratory experiments have demonstrated a correlation between the rate of heat loss q" from an experimental fluid to the air above and the standard deviation σ of the thermal variability in images of the fluid surface. These experimental results imply that q" can be derived directly from thermal imagery by computing σ. This paper analyses thermal imagery collected over two power plant cooling lakes to determine if the same relationship exists. Turbulent boundary layer theory predicts a linear relationship between q" and σ when both forced (wind driven) and free (buoyancy driven) convection are present. Datasets derived from ground- andmore » helicopter-based imagery collections had correlation coefficients between σ and q" of 0.45 and 0.76, respectively. Values of q" computed from a function of σ and friction velocity u* derived from turbulent boundary layer theory had higher correlations with measured values of q" (0.84 and 0.89). Finally, this research may be applicable to the problem of calculating losses of heat from the ocean to the atmosphere during high-latitude cold-air outbreaks because it does not require the information typically needed to compute sensible, evaporative, and thermal radiation energy losses to the atmosphere.« less

  8. Computer aided diagnosis system for Alzheimer disease using brain diffusion tensor imaging features selected by Pearson's correlation.

    PubMed

    Graña, M; Termenon, M; Savio, A; Gonzalez-Pinto, A; Echeveste, J; Pérez, J M; Besga, A

    2011-09-20

    The aim of this paper is to obtain discriminant features from two scalar measures of Diffusion Tensor Imaging (DTI) data, Fractional Anisotropy (FA) and Mean Diffusivity (MD), and to train and test classifiers able to discriminate Alzheimer's Disease (AD) patients from controls on the basis of features extracted from the FA or MD volumes. In this study, support vector machine (SVM) classifier was trained and tested on FA and MD data. Feature selection is done computing the Pearson's correlation between FA or MD values at voxel site across subjects and the indicative variable specifying the subject class. Voxel sites with high absolute correlation are selected for feature extraction. Results are obtained over an on-going study in Hospital de Santiago Apostol collecting anatomical T1-weighted MRI volumes and DTI data from healthy control subjects and AD patients. FA features and a linear SVM classifier achieve perfect accuracy, sensitivity and specificity in several cross-validation studies, supporting the usefulness of DTI-derived features as an image-marker for AD and to the feasibility of building Computer Aided Diagnosis systems for AD based on them. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Computational Modeling of Arc-Slag Interaction in DC Furnaces

    NASA Astrophysics Data System (ADS)

    Reynolds, Quinn G.

    2017-02-01

    The plasma arc is central to the operation of the direct-current arc furnace, a unit operation commonly used in high-temperature processing of both primary ores and recycled metals. The arc is a high-velocity, high-temperature jet of ionized gas created and sustained by interactions among the thermal, momentum, and electromagnetic fields resulting from the passage of electric current. In addition to being the primary source of thermal energy, the arc jet also couples mechanically with the bath of molten process material within the furnace, causing substantial splashing and stirring in the region in which it impinges. The arc's interaction with the molten bath inside the furnace is studied through use of a multiphase, multiphysics computational magnetohydrodynamic model developed in the OpenFOAM® framework. Results from the computational solver are compared with empirical correlations that account for arc-slag interaction effects.

  10. Comparative Validity and Reproducibility Study of Various Landmark-Oriented Reference Planes in 3-Dimensional Computed Tomographic Analysis for Patients Receiving Orthognathic Surgery

    PubMed Central

    Lin, Hsiu-Hsia; Chuang, Ya-Fang; Weng, Jing-Ling; Lo, Lun-Jou

    2015-01-01

    Background Three-dimensional computed tomographic imaging has become popular in clinical evaluation, treatment planning, surgical simulation, and outcome assessment for maxillofacial intervention. The purposes of this study were to investigate whether there is any correlation among landmark-based horizontal reference planes and to validate the reproducibility and reliability of landmark identification. Materials and Methods Preoperative and postoperative cone-beam computed tomographic images of patients who had undergone orthognathic surgery were collected. Landmark-oriented reference planes including the Frankfort horizontal plane (FHP) and the lateral semicircular canal plane (LSP) were established. Four FHPs were defined by selecting 3 points from the orbitale, porion, or midpoint of paired points. The LSP passed through both the lateral semicircular canal points and nasion. The distances between the maxillary or mandibular teeth and the reference planes were measured, and the differences between the 2 sides were calculated and compared. The precision in locating the landmarks was evaluated by performing repeated tests, and the intraobserver reproducibility and interobserver reliability were assessed. Results A total of 30 patients with facial deformity and malocclusion—10 patients with facial symmetry, 10 patients with facial asymmetry, and 10 patients with cleft lip and palate—were recruited. Comparing the differences among the 5 reference planes showed no statistically significant difference among all patient groups. Regarding intraobserver reproducibility, the mean differences in the 3 coordinates varied from 0 to 0.35 mm, with correlation coefficients between 0.96 and 1.0, showing high correlation between repeated tests. Regarding interobserver reliability, the mean differences among the 3 coordinates varied from 0 to 0.47 mm, with correlation coefficients between 0.88 and 1.0, exhibiting high correlation between the different examiners. Conclusions The 5 horizontal reference planes were reliable and comparable for 3D craniomaxillofacial analysis. These reference planes were useful in standardizing the orientation of 3D skull models. PMID:25668209

  11. Comparative validity and reproducibility study of various landmark-oriented reference planes in 3-dimensional computed tomographic analysis for patients receiving orthognathic surgery.

    PubMed

    Lin, Hsiu-Hsia; Chuang, Ya-Fang; Weng, Jing-Ling; Lo, Lun-Jou

    2015-01-01

    Three-dimensional computed tomographic imaging has become popular in clinical evaluation, treatment planning, surgical simulation, and outcome assessment for maxillofacial intervention. The purposes of this study were to investigate whether there is any correlation among landmark-based horizontal reference planes and to validate the reproducibility and reliability of landmark identification. Preoperative and postoperative cone-beam computed tomographic images of patients who had undergone orthognathic surgery were collected. Landmark-oriented reference planes including the Frankfort horizontal plane (FHP) and the lateral semicircular canal plane (LSP) were established. Four FHPs were defined by selecting 3 points from the orbitale, porion, or midpoint of paired points. The LSP passed through both the lateral semicircular canal points and nasion. The distances between the maxillary or mandibular teeth and the reference planes were measured, and the differences between the 2 sides were calculated and compared. The precision in locating the landmarks was evaluated by performing repeated tests, and the intraobserver reproducibility and interobserver reliability were assessed. A total of 30 patients with facial deformity and malocclusion--10 patients with facial symmetry, 10 patients with facial asymmetry, and 10 patients with cleft lip and palate--were recruited. Comparing the differences among the 5 reference planes showed no statistically significant difference among all patient groups. Regarding intraobserver reproducibility, the mean differences in the 3 coordinates varied from 0 to 0.35 mm, with correlation coefficients between 0.96 and 1.0, showing high correlation between repeated tests. Regarding interobserver reliability, the mean differences among the 3 coordinates varied from 0 to 0.47 mm, with correlation coefficients between 0.88 and 1.0, exhibiting high correlation between the different examiners. The 5 horizontal reference planes were reliable and comparable for 3D craniomaxillofacial analysis. These reference planes were useful in standardizing the orientation of 3D skull models.

  12. A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network

    PubMed Central

    Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing

    2018-01-01

    Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler by its correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed to recognize the GPS acquisition correlation envelope in this article. The proposed method is a double dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the search space for parameters, BCNN detects the possible envelope which contains the auto-correlation peak in the first dwell to compress the initial search space to 1/1023. Although there is a long integration in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Comprehensively, the total computation overhead of the proposed method is only 1/5 of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves 2 dB improvement when compared with the MAX/TC under the same specification. PMID:29747373

  13. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However the correlation coefficient matrix of data signal indicates a strong signal correlation among neighboring views. Based on above observations, Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in image domain. We also compared the KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing the noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS noise reduction may have the advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.

  14. The effect of basis set and exchange-correlation functional on time-dependent density functional theory calculations within the Tamm-Dancoff approximation of the x-ray emission spectroscopy of transition metal complexes.

    PubMed

    Roper, Ian P E; Besley, Nicholas A

    2016-03-21

    The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.

  15. A new procedure of modal parameter estimation for high-speed digital image correlation

    NASA Astrophysics Data System (ADS)

    Huňady, Róbert; Hagara, Martin

    2017-09-01

    The paper deals with the use of 3D digital image correlation in determining modal parameters of mechanical systems. It is a non-contact optical method, which for the measurement of full-field spatial displacements and strains of bodies uses precise digital cameras with high image resolution. Most often this method is utilized for testing of components or determination of material properties of various specimens. In the case of using high-speed cameras for measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration. This enables the potential use of the mentioned method in experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in the area of modal testing. The created application provides the post-processing of measured data and the estimation of modal parameters. It has its own graphical user interface, in which several algorithms for the determination of natural frequencies, mode shapes and damping of particular modes of vibration are implemented. The paper describes the basic principle of the new estimation procedure which is crucial in the light of post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, the estimation of modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving a high accuracy of modal parameters. Its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the created software application is presented on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined. For the verification of the obtained results a verification experiment was conducted during which the vibration responses were measured using conventional acceleration sensors. In both cases MIMO analysis was realized.

  16. Functional inverted Wishart for Bayesian multivariate spatial modeling with application to regional climatology model data.

    PubMed

    Duan, L L; Szczesniak, R D; Wang, X

    2017-11-01

    Modern environmental and climatological studies produce multiple outcomes at high spatial resolutions. Multivariate spatial modeling is an established means to quantify cross-correlation among outcomes. However, existing models typically suffer from poor computational efficiency and lack the flexibility to simultaneously estimate auto- and cross-covariance structures. In this article, we undertake a novel construction of covariance by utilizing spectral convolution and by imposing an inverted Wishart prior on the cross-correlation structure. The cross-correlation structure with this functional inverted Wishart prior flexibly accommodates not only positive but also weak or negative associations among outcomes while preserving spatial resolution. Furthermore, the proposed model is computationally efficient and produces easily interpretable results, including the individual autocovariances and full cross-correlation matrices, as well as a partial cross-correlation matrix reflecting the outcome correlation after excluding the effects caused by spatial convolution. The model is examined using simulated data sets under different scenarios. It is also applied to the data from the North American Regional Climate Change Assessment Program, examining long-term associations between surface outcomes for air temperature, pressure, humidity, and radiation, on the land area of the North American West Coast. Results and predictive performance are compared with findings from approaches using convolution only or coregionalization.

  17. Functional inverted Wishart for Bayesian multivariate spatial modeling with application to regional climatology model data

    PubMed Central

    Duan, L. L.; Szczesniak, R. D.; Wang, X.

    2018-01-01

    Modern environmental and climatological studies produce multiple outcomes at high spatial resolutions. Multivariate spatial modeling is an established means to quantify cross-correlation among outcomes. However, existing models typically suffer from poor computational efficiency and lack the flexibility to simultaneously estimate auto- and cross-covariance structures. In this article, we undertake a novel construction of covariance by utilizing spectral convolution and by imposing an inverted Wishart prior on the cross-correlation structure. The cross-correlation structure with this functional inverted Wishart prior flexibly accommodates not only positive but also weak or negative associations among outcomes while preserving spatial resolution. Furthermore, the proposed model is computationally efficient and produces easily interpretable results, including the individual autocovariances and full cross-correlation matrices, as well as a partial cross-correlation matrix reflecting the outcome correlation after excluding the effects caused by spatial convolution. The model is examined using simulated data sets under different scenarios. It is also applied to the data from the North American Regional Climate Change Assessment Program, examining long-term associations between surface outcomes for air temperature, pressure, humidity, and radiation, on the land area of the North American West Coast. Results and predictive performance are compared with findings from approaches using convolution only or coregionalization. PMID:29576735

  18. Accounting for irregular support in spatial interpolation - analysing the effect of using alternative distance measures

    NASA Astrophysics Data System (ADS)

    Skøien, J. O.; Gottschalk, L.; Leblois, E.

    2009-04-01

    Whereas geostatistical and objective methods mostly have been developed for observations with point support or a regular support, e.g. runoff related data can be assumed to have an irregular support in space, and sometimes also a temporal support. The correlations between observations and between observations and the prediction location are found through an integration of a point variogram or point correlation function, a method known as regularisation. Being a relatively simple method for observations with equal and regular support, it can be computationally demanding if the observations have irregular support. With improved speed of computers, solving such integrations has become easier, but there can still be numerical problems that are not easily solved even with high-resolution computations. This can particularly be a problem in hydrological sciences where catchments are overlapping, the correlations are high, and small numerical errors can give ill-posed covariance matrices. The problem increases with increasing number of spatial and/or temporal dimensions. Gottschalk [1993a; 1993b] suggested to replace the integration by a Taylor expansion, hence reducing the computation time considerably, and also expecting less numerical problems with the covariance matrices. In practice, the integrated correlation/semivariance between observations are replaced by correlations/semivariances using the so called Ghosh-distance. Although Gottschalk and collaborators have used the Ghosh-distance also in other papers [Sauquet, et al., 2000a; Sauquet, et al., 2000b], the properties of the simplification have not been examined in detail. Hence, we will here analyse the replacement of the integration by the use of Ghosh-distances, both in sense of the ability to reproduce regularised semivariogram and correlation values, and the influence on the final interpolated maps. Comparisons will be performed both for real observations with a support (hydrological data) and for more hypothetical observations with regular supports where analytical expressions for the regularised semivariances/correlations in some cases can be derived. The results indicate that the simplification is useful for spatial interpolation when the support of the observations has to be taken into account. The difference in semivariogram value or correlation value between the simplified method and the full integration is limited on short distances, increasing for larger distances. However, this is to some degree taken into account while fitting a model for the point process, so that the results after interpolation are less affected by the simplification. The method is of particular use if computation time is of importance, e.g. in the case of real-time mapping procedures. Gottschalk, L. (1993a) Correlation and covariance of runoff, Stochastic Hydrology and Hydraulics, 7, 85-101. Gottschalk, L. (1993b) Interpolation of runoff applying objective methods, Stochastic Hydrology and Hydraulics, 7, 269-281. Sauquet, E., L. Gottschalk, and E. Leblois (2000a) Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme, Hydrological Sciences Journal, 45, 799-815. Sauquet, E., I. Krasovskaia, and E. Leblois (2000b) Mapping mean monthly runoff pattern using EOF analysis, Hydrology and Earth System Sciences, 4, 79-93.

  19. Correlation of Fin Buffet Pressures on an F/A-18 with Scaled Wind-Tunnel Measurements

    NASA Technical Reports Server (NTRS)

    Moses, Robert W.; Shah, Gautam H.

    1999-01-01

    Buffeting is an aeroelastic phenomenon occurring at high angles of attack that plagues high performance aircraft, especially those with twin vertical tails. Previous wind-tunnel and flight tests were conducted to characterize the buffet loads on the vertical tails by measuring surface pressures, bending moments, and accelerations. Following these tests, buffeting responses were computed using the measured buffet pressures and compared to the measured buffeting responses. The calculated results did not match the measured data because the assumed spatial correlation of the buffet pressures was not correct. A better understanding of the partial (spatial) correlation of the differential buffet pressures on the tail was necessary to improve the buffeting predictions. Several wind-tunnel investigations were conducted for this purpose. When compared, the results of these tests show that the partial correlation scales with flight conditions. One of the remaining questions is whether the wind-tunnel data is consistent with flight data. Presented herein, cross-spectra and coherence functions calculated from pressures that were measured on the High Alpha Research Vehicle indicate that the partial correlation of the buffet pressures in flight agrees with the partial correlation observed in the wind tunnel.

  20. Tracing the origin of azimuthal gluon correlations in the color glass condensate

    NASA Astrophysics Data System (ADS)

    Lappi, T.; Schenke, B.; Schlichting, S.; Venugopalan, R.

    2016-01-01

    We examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. We will show how a recently introduced color field domain model that captures key features of the observed azimuthal correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.

  1. Nonequilibrium fluctuations in metaphase spindles: polarized light microscopy, image registration, and correlation functions

    NASA Astrophysics Data System (ADS)

    Brugués, Jan; Needleman, Daniel J.

    2010-02-01

    Metaphase spindles are highly dynamic, nonequilibrium, steady-state structures. We study the internal fluctuations of spindles by computing spatio-temporal correlation functions of movies obtained from quantitative polarized light microscopy. These correlation functions are only physically meaningful if corrections are made for the net motion of the spindle. We describe our image registration algorithm in detail and we explore its robustness. Finally, we discuss the expression used for the estimation of the correlation function in terms of the nematic order of the microtubules which make up the spindle. Ultimately, studying the form of these correlation functions will provide a quantitative test of the validity of coarse-grained models of spindle structure inspired from liquid crystal physics.

  2. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.

  3. Fast Legendre moment computation for template matching

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Normalized cross correlation (NCC) based template matching is insensitive to intensity changes and it has many applications in image processing, object detection, video tracking and pattern recognition. However, normalized cross correlation implementation is computationally expensive since it involves both correlation computation and normalization implementation. In this paper, we propose Legendre moment approach for fast normalized cross correlation implementation and show that the computational cost of this proposed approach is independent of template mask sizes which is significantly faster than traditional mask size dependent approaches, especially for large mask templates. Legendre polynomials have been widely used in solving Laplace equation in electrodynamics in spherical coordinate systems, and solving Schrodinger equation in quantum mechanics. In this paper, we extend Legendre polynomials from physics to computer vision and pattern recognition fields, and demonstrate that Legendre polynomials can help to reduce the computational cost of NCC based template matching significantly.

  4. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defectsmore » (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain, insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.« less

  5. Defect Genome of Cubic Perovskites for Fuel Cell Applications

    DOE PAGES

    Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.; ...

    2017-10-10

    Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defectsmore » (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain, insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.« less

  6. Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang

    2016-10-01

    Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.

  7. Modified signed-digit trinary addition using synthetic wavelet filter

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, K. M.; Razzaque, M. A.

    2000-09-01

    The modified signed-digit (MSD) number system has been a topic of interest as it allows for parallel carry-free addition of two numbers for digital optical computing. In this paper, harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for optical implementation of MSD trinary adder implementation. The realization of the carry-propagation-free addition of MSD trinary numerals is demonstrated using synthetic HWJT correlator model. It is also shown that the proposed synthetic wavelet filter-based correlator shows high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.

  8. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large scale gravity data sets have been collected and employed to enhance gravity problem-solving abilities of tectonics studies in China. Aiming at the large scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including the fast gradient module inversion and Euler deconvolution depth inversion ,3-D physical property inversion using stochastic subspaces and equivalent storage, fast inversion using wavelet transforms and a logarithmic barrier method. So it can be say that 3-D gravity inversion has been greatly improved in the last decade. Many authors added many different kinds of priori information and constraints to deal with nonuniqueness using models composed of a large number of contiguous cells of unknown property and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large scale gravity data. As an entirely new geophysical inversion method, 3D correlation has a rapid development thanks to the advantage of requiring no a priori information and demanding small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to tranform the equivalence excess masses into real density contrasts, we adopt the adaptive correlation imaging for gravity data. After each 3D correlation imaging, we change the equivalence into density contrasts according to the linear relationship, and then carry out forward gravity calculation for each rectangle cells. Next, we compare the forward gravity data with real data, and comtinue to perform 3D correlation imaging for the redisual gravity data. After several iterations, we can obtain a satisfactoy results. Newly developed general purpose computing technology from Nvidia GPU (Graphics Processing Unit) has been put into practice and received widespread attention in many areas. Based on the GPU programming mode and two parallel levels, five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are defined as the dimensions of blocks and the three outer loops are defined as the dimensions of threads, thus realizing the double loop block calculation. Theoretical and real gravity data tests show that results are reliable and the computing time is greatly reduced. Acknowledgments We acknowledge the financial support of Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).

  9. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application is using a wide range of common signal processing methods, which include various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific for seismology, like rotation of seismic traces, are used. Efficient implementation of all these methods on the GPU-accelerated systems represents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce intensity of data input and output. In our contribution we will explain the software architecture as well as principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.

  10. Determination of high temperature strains using a PC based vision system

    NASA Astrophysics Data System (ADS)

    McNeill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.

    1992-09-01

    With the widespread availability of video digitizers and cheap personal computers, the use of computer vision as an experimental tool is becoming common place. These systems are being used to make a wide variety of measurements that range from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure full field displacement and gradients of the surface of an object subjected to a driving force. The technique has shown its utility by measuring the deformation and movement of objects that range from simple translation to fluid velocity profiles to crack tip deformation of solid rocket fuel. This technique has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC based Sub-Pixel Digital Image Correlation system has yielded an accurate and easy to use system for measuring surface displacements and gradients. Experiments have been performed to show the system is viable for measuring thermal strain.

  11. Unstructured Grid Euler Method Assessment for Longitudinal and Lateral/Directional Aerodynamic Performance Analysis of the HSR Technology Concept Airplane at Supersonic Cruise Speed

    NASA Technical Reports Server (NTRS)

    Ghaffari, Farhad

    1999-01-01

    Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration which includes the wing, fuselage, empennage, diverters, and flow through nacelles at M (sub infinity) = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results, derived from the latter assumptions, are thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly, for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.

  12. Sources of computer self-efficacy: The relationship to outcome expectations, computer anxiety, and intention to use computers

    NASA Astrophysics Data System (ADS)

    Antoine, Marilyn V.

    2011-12-01

    The purpose of this research was to extend earlier research on sources of selfefficacy (Lent, Lopez, & Biechke, 1991; Usher & Pajares, 2009) to the information technology domain. The principal investigator examined how Bandura's (1977) sources of self-efficacy information---mastery experience, vicarious experience, verbal persuasion, and physiological states---shape computer self-efficacy beliefs and influence the decision to use or not use computers. The study took place at a mid-sized Historically Black College or University in the South. A convenience sample of 105 undergraduates was drawn from students enrolled in multiple sections of two introductory computer courses. There were 67 females and 38 males. This research was a correlational study of the following variables: sources of computer self-efficacy, general computer self-efficacy, outcome expectations, computer anxiety, and intention to use computers. The principal investigator administered a survey questionnaire containing 52 Likert items to measure the major study variables. Additionally, the survey instrument collected demographic variables such as gender, age, race, intended major, classification, technology use, technology adoption category, and whether the student owns a computer. The results reveal the following: (1) Mastery experience and verbal persuasion had statistically significant relationships to general computer self-efficacy, while vicarious experience and physiological states had non-significant relationships. Mastery experience had the strongest correlation to general computer self-efficacy. (2) All of the sources of computer self-efficacy had statistically significant relationships to personal outcome expectations. Vicarious experience had the strongest correlation to personal outcome expectations. (3) All of the sources of self-efficacy had statistically significant relationships to performance outcome expectations. Vicarious experience had the strongest correlation to performance outcome expectations. (4) Mastery experience and physiological states had statistically significant relationships to computer anxiety, while vicarious experience and verbal persuasion had non-significant relationships. Physiological states had the strongest correlation to computer anxiety. (5) Mastery experience, vicarious experience, and physiological states had statistically significant relationships to intention to use computers, while verbal persuasion had a non-significant relationship. Mastery experience had the strongest correlation to intention to use computers. Gender-related findings indicate that females reported higher average mastery experience, vicarious experience, physiological states, and intention to use computers than males. Females reported lower average general computer self-efficacy, computer anxiety, verbal persuasion, personal outcome expectations, and performance outcome expectations than males. The results of this study can be used to develop strategies for increasing general computer self-efficacy, outcome expectations, and intention to use computers. The results can also be used to develop strategies for reducing computer anxiety.

  13. On the computation of molecular surface correlations for protein docking using fourier techniques.

    PubMed

    Sakk, Eric

    2007-08-01

    The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.

  14. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to complete the vibration measurement in real time and to overcome the mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture the images as many as 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can rapidly accomplish one displacement extraction 10 times faster than the traditional template matching without installing any target panel onto the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system performance in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibrating signals.

  15. Predicting In Vivo Anti-Hepatofibrotic Drug Efficacy Based on In Vitro High-Content Analysis

    PubMed Central

    Zheng, Baixue; Tan, Looling; Mo, Xuejun; Yu, Weimiao; Wang, Yan; Tucker-Kellogg, Lisa; Welsch, Roy E.; So, Peter T. C.; Yu, Hanry

    2011-01-01

    Background/Aims Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. Methods High-content analysis (HCA) was performed with 49 drugs on hepatic stellate cells (HSCs) LX-2 stained with 10 fibrotic markers. ∼0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A systematic literature search on the in vivo effects of all 49 drugs on hepatofibrotic rats yields 28 papers with histological scores. The in vivo and in vitro datasets were used to compute a single efficacy predictor (Epredict). Results We used in vivo data from one context (CCl4 rats with drug treatments) to optimize the computation of Epredict. This optimized relationship was independently validated using in vivo data from two different contexts (treatment of DMN rats and prevention of CCl4 induction). A linear in vitro-in vivo correlation was consistently observed in all the three contexts. We used Epredict values to cluster drugs according to efficacy; and found that high-efficacy drugs tended to target proliferation, apoptosis and contractility of HSCs. Conclusions The Epredict statistic, based on a prioritized combination of in vitro features, provides a better correlation between in vitro and in vivo drug response than any of the traditional in vitro markers considered. PMID:22073152

  16. Calculating Mass Diffusion in High-Pressure Binary Fluids

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2004-01-01

    A comprehensive mathematical model of mass diffusion has been developed for binary fluids at high pressures, including critical and supercritical pressures. Heretofore, diverse expressions, valid for limited parameter ranges, have been used to correlate high-pressure binary mass-diffusion-coefficient data. This model will likely be especially useful in the computational simulation and analysis of combustion phenomena in diesel engines, gas turbines, and liquid rocket engines, wherein mass diffusion at high pressure plays a major role.

  17. Heat transfer and flow friction correlations for perforated plate matrix heat exchangers

    NASA Astrophysics Data System (ADS)

    Ratna Raju, L.; Kumar, S. Sunil; Chowdhury, K.; Nandi, T. K.

    2017-02-01

    Perforated plate matrix heat exchangers (MHE) are constructed of high conductivity perforated plates stacked alternately with low conductivity spacers. They are being increasingly used in many cryogenic applications including Claude cycle or Reversed Brayton cycle cryo-refrigerators and liquefiers. Design of high NTU (number of (heat) transfer unit) cryogenic MHEs requires accurate heat transfer coefficient and flow friction factor. Thermo-hydraulic behaviour of perforated plates strongly depends on the geometrical parameters. Existing correlations, however, are mostly expressed as functions of Reynolds number only. This causes, for a given configuration, significant variations in coefficients from one correlation to the other. In this paper we present heat transfer and flow friction correlations as functions of all geometrical and other controlling variables. A FluentTM based numerical model has been developed for heat transfer and pressure drop studies over a stack of alternately arranged perforated plates and spacers. The model is validated with the data from literature. Generalized correlations are obtained through regression analysis over a large number of computed data.

  18. Responsibility modulates the neural correlates of regret during the sequential risk-taking task.

    PubMed

    Li, Lin; Liu, Zhiyuan; Niu, Huanghuang; Zheng, Li; Cheng, Xuemei; Sun, Peng; Zhou, Fanzhi Anita; Guo, Xiuyan

    2018-03-01

    Responsibility is a necessary prerequisite in the experience of regret. The present fMRI study investigated the modulation of responsibility on the neural correlates of regret during a sequential risk-taking task. Participants were asked to open a series of boxes consecutively and decided when to stop. Each box contained a reward, except for one containing a devil to zero participant's gain in the trial. Once participants stopped, both collected gains and missed chances were revealed. We manipulated responsibility by setting two different contexts. In the Self (high responsibility) context, participants opened boxes and decided when to stop by themselves. In the Computer (low responsibility) context, a computer program opened boxes and decided when to stop for participants. Before each trial, participants were required to decide whether it would be a Self or a Computer context. Behaviorally, participants felt less regret (more relief) for gain outcome and more regret for the loss outcome in the high-responsibility context than low responsibility context. At the neural level, when experiencing a gain, high-responsibility trials were characterized by stronger activation in mPFC, pgACC, mOFC, and striatum with decreasing number of missed chances relative to low responsibility trials. When experiencing a loss, low responsibility trials were associated with stronger activation in dACC and bilateral insula than high-responsibility trials. Conversely, during a loss, high-responsibility trials showed more striatum activity than low responsibility trials. These results highlighted the sensitivity of the frontal region, striatum, and insula to changes in level of responsibility.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lappi, T.; Schenke, B.; Schlichting, S.

    Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we will show how a recently introduced color field domain model that captures key features of the observed azimuthalmore » correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.« less

  20. Short-time windowed covariance: A metric for identifying non-stationary, event-related covariant cortical sites

    PubMed Central

    Blakely, Timothy; Ojemann, Jeffrey G.; Rao, Rajesh P.N.

    2014-01-01

    Background Electrocorticography (ECoG) signals can provide high spatio-temporal resolution and high signal to noise ratio recordings of local neural activity from the surface of the brain. Previous studies have shown that broad-band, spatially focal, high-frequency increases in ECoG signals are highly correlated with movement and other cognitive tasks and can be volitionally modulated. However, significant additional information may be present in inter-electrode interactions, but adding additional higher order inter-electrode interactions can be impractical from a computational aspect, if not impossible. New method In this paper we present a new method of calculating high frequency interactions between electrodes called Short-Time Windowed Covariance (STWC) that builds on mathematical techniques currently used in neural signal analysis, along with an implementation that accelerates the algorithm by orders of magnitude by leveraging commodity, off-the-shelf graphics processing unit (GPU) hardware. Results Using the hardware-accelerated implementation of STWC, we identify many types of event-related inter-electrode interactions from human ECoG recordings on global and local scales that have not been identified by previous methods. Unique temporal patterns are observed for digit flexion in both low- (10 mm spacing) and high-resolution (3 mm spacing) electrode arrays. Comparison with existing methods Covariance is a commonly used metric for identifying correlated signals, but the standard covariance calculations do not allow for temporally varying covariance. In contrast STWC allows and identifies event-driven changes in covariance without identifying spurious noise correlations. Conclusions: STWC can be used to identify event-related neural interactions whose high computational load is well suited to GPU capabilities. PMID:24211499

  1. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load series by load series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25 % of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance is used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors as long as repeat load series are contained in the balance calibration data set that do not suffer from load alignment problems.

  2. Concepts in receptor optimization: targeting the RGD peptide.

    PubMed

    Chen, Wei; Chang, Chia-en; Gilson, Michael K

    2006-04-12

    Synthetic receptors have a wide range of potential applications, but it has been difficult to design low molecular weight receptors that bind ligands with high, "proteinlike" affinities. This study uses novel computational methods to understand why it is hard to design a high-affinity receptor and to explore the limits of affinity, with the bioactive peptide RGD as a model ligand. The M2 modeling method is found to yield excellent agreement with experiment for a known RGD receptor and then is used to analyze a series of receptors generated in silico with a de novo design algorithm. Forces driving binding are found to be systematically opposed by proportionate repulsions due to desolvation and entropy. In particular, strong correlations are found between Coulombic attractions and the electrostatic desolvation penalty and between the mean energy change on binding and the cost in configurational entropy. These correlations help explain why it is hard to achieve high affinity. The change in surface area upon binding is found to correlate poorly with affinity within this series. Measures of receptor efficiency are formulated that summarize how effectively a receptor uses surface area, total energy, and Coulombic energy to achieve affinity. Analysis of the computed efficiencies suggests that a low molecular weight receptor can achieve proteinlike affinity. It is also found that macrocyclization of a receptor can, unexpectedly, increase the entropy cost of binding because the macrocyclic structure further restricts ligand motion.

  3. High-level ab initio potential energy surface and dynamics of the F- + CH3I SN2 and proton-transfer reactions.

    PubMed

    Olasz, Balázs; Szabó, István; Czakó, Gábor

    2017-04-01

    Bimolecular nucleophilic substitution (S N 2) and proton transfer are fundamental processes in chemistry and F - + CH 3 I is an important prototype of these reactions. Here we develop the first full-dimensional ab initio analytical potential energy surface (PES) for the F - + CH 3 I system using a permutationally invariant fit of high-level composite energies obtained with the combination of the explicitly-correlated CCSD(T)-F12b method, the aug-cc-pVTZ basis, core electron correlation effects, and a relativistic effective core potential for iodine. The PES accurately describes the S N 2 channel producing I - + CH 3 F via Walden-inversion, front-side attack, and double-inversion pathways as well as the proton-transfer channel leading to HF + CH 2 I - . The relative energies of the stationary points on the PES agree well with the new explicitly-correlated all-electron CCSD(T)-F12b/QZ-quality benchmark values. Quasiclassical trajectory computations on the PES show that the proton transfer becomes significant at high collision energies and double-inversion as well as front-side attack trajectories can occur. The computed broad angular distributions and hot internal energy distributions indicate the dominance of indirect mechanisms at lower collision energies, which is confirmed by analyzing the integration time and leaving group velocity distributions. Comparison with available crossed-beam experiments shows usually good agreement.

  4. Optical image hiding based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Imaging hiding schemes play important roles in now big data times. They provide copyright protections of digital images. In the paper, we propose a novel image hiding scheme based on computational ghost imaging to have strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose a secret key. Least significant bit algorithm is adopted to embed the watermark and both the second-order correlation algorithm and the compressed sensing (CS) algorithm are used to extract the watermark. The experimental and simulation results show that the authorized users can get the watermark with the secret key. The watermark image could not be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, whereas it is less than 20% with the TVAL3 CS reconstructed algorithm. In addition, the proposed scheme is robust against the 'salt and pepper' noise and image cropping degradations.

  5. Barrierless association of CF2 and dissociation of C2F4 by variational transition-state theory and system-specific quantum Rice–Ramsperger–Kassel theory

    PubMed Central

    Bao, Junwei Lucas; Zhang, Xin

    2016-01-01

    Bond dissociation is a fundamental chemical reaction, and the first principles modeling of the kinetics of dissociation reactions with a monotonically increasing potential energy along the dissociation coordinate presents a challenge not only for modern electronic structure methods but also for kinetics theory. In this work, we use multifaceted variable-reaction-coordinate variational transition-state theory (VRC-VTST) to compute the high-pressure limit dissociation rate constant of tetrafluoroethylene (C2F4), in which the potential energies are computed by direct dynamics with the M08-HX exchange correlation functional. To treat the pressure dependence of the unimolecular rate constants, we use the recently developed system-specific quantum Rice–Ramsperger–Kassel theory. The calculations are carried out by direct dynamics using an exchange correlation functional validated against calculations that go beyond coupled-cluster theory with single, double, and triple excitations. Our computed dissociation rate constants agree well with the recent experimental measurements. PMID:27834727

  6. Barrierless association of CF2 and dissociation of C2F4 by variational transition-state theory and system-specific quantum Rice-Ramsperger-Kassel theory.

    PubMed

    Bao, Junwei Lucas; Zhang, Xin; Truhlar, Donald G

    2016-11-29

    Bond dissociation is a fundamental chemical reaction, and the first principles modeling of the kinetics of dissociation reactions with a monotonically increasing potential energy along the dissociation coordinate presents a challenge not only for modern electronic structure methods but also for kinetics theory. In this work, we use multifaceted variable-reaction-coordinate variational transition-state theory (VRC-VTST) to compute the high-pressure limit dissociation rate constant of tetrafluoroethylene (C 2 F 4 ), in which the potential energies are computed by direct dynamics with the M08-HX exchange correlation functional. To treat the pressure dependence of the unimolecular rate constants, we use the recently developed system-specific quantum Rice-Ramsperger-Kassel theory. The calculations are carried out by direct dynamics using an exchange correlation functional validated against calculations that go beyond coupled-cluster theory with single, double, and triple excitations. Our computed dissociation rate constants agree well with the recent experimental measurements.

  7. Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.

    PubMed

    Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori

    2014-06-14

    Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There have been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which the electron correlation effects play vital roles. With the aid of the advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including the descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.

  8. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE PAGES

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2017-11-09

    Here in this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility also is computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, atmore » high strength of the electron-electron interaction, the plasmon dispersion develops two branches.« less

  9. Collective charge excitations and the metal-insulator transition in the square lattice Hubbard-Coulomb model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    Here in this article, we discuss the nontrivial collective charge excitations (plasmons) of the extended square lattice Hubbard model. Using a fully nonperturbative approach, we employ the hybrid Monte Carlo algorithm to simulate the system at half-filling. A modified Backus-Gilbert method is introduced to obtain the spectral functions via numerical analytic continuation. We directly compute the single-particle density of states which demonstrates the formation of Hubbard bands in the strongly correlated phase. The momentum-resolved charge susceptibility also is computed on the basis of the Euclidean charge-density-density correlator. In agreement with previous extended dynamical mean-field theory studies, we find that, atmore » high strength of the electron-electron interaction, the plasmon dispersion develops two branches.« less

  10. Quantum key distribution with an efficient countermeasure against correlated intensity fluctuations in optical pulses

    NASA Astrophysics Data System (ADS)

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa

    2018-03-01

    Quantum key distribution (QKD) allows two distant parties to share secret keys with the proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, security loopholes of high-speed systems have not been fully explored yet. Here we point out a security loophole at the transmitter of the GHz-clock QKD, which is a common problem in high-speed QKD systems using practical band-width limited devices. We experimentally observe the inter-pulse intensity correlation and modulation pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide its countermeasure which does not require significant changes of hardware and can generate keys secure over 100 km fiber transmission. Our countermeasure is simple, effective and applicable to wide range of high-speed QKD systems, and thus paves the way to realize ultrafast and security-certified commercial QKD systems.

  11. Multipartite quantum correlations in the extended J1-J2 Heisenberg model

    NASA Astrophysics Data System (ADS)

    Batle, J.; Tarawneh, O.; Nagata, Koji; Nakamura, Tadao; Abdalla, S.; Farouk, Ahmed

    2017-11-01

    Multipartite entanglement and the maximum violation of Bell inequalities are studied in finite clusters of spins in an extended J1-J2 Heisenberg model at zero temperature. The ensuing highly frustrated states will unveil a rich structure for different values of the corresponding spin-spin interaction strengths. The interplay between nearest-neighbors, next-nearest neighbors and further couplings will be explored using multipartite correlations. The model is relevant to certain quantum annealing computation architectures where an all-to-all connectivity is considered.

  12. [The characteristics of computer simulation of traffic accidents].

    PubMed

    Zou, Dong-Hua; Liu, Ning-Guo; Chen, Jian-Guo; Jin, Xian-Long; Zhang, Xiao-Yun; Zhang, Jian-Hua; Chen, Yi-Jiu

    2008-12-01

    To reconstruct the collision process of traffic accident and the injury mode of the victim by computer simulation technology in forensic assessment of traffic accident. Forty actual accidents were reconstructed by stimulation software and high performance computer based on analysis of the trace evidences at the scene, damage of the vehicles and injury of the victims, with 2 cases discussed in details. The reconstruction correlated very well in 28 cases, well in 9 cases, and suboptimal in 3 cases with the above parameters. Accurate reconstruction of the accident would be helpful for assessment of the injury mechanism of the victims. Reconstruction of the collision process of traffic accident and the injury mechanism of the victim by computer simulation is useful in traffic accident assessment.

  13. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.

  14. Multifunctional optical correlator for picosecond ultraviolet laser pulse measurement

    DOE PAGES

    Rakhman, Abdurahim; Wang, Yang; Garcia, Frances; ...

    2014-01-01

    A compact optical correlator system that measures both the autocorrelation between two infrared (IR) lights and the cross-correlation between an IR and an ultraviolet (UV) light using a single nonlinear optical crystal has been designed and experimentally demonstrated. The rapid scanning of optical delay line, switching between auto and cross-correlations, crystal angle tuning, and data acquisition and processing are all computer controlled. Pulse widths of an IR light from a mode-locked laser are measured by the correlator and the results are compared with a direct measurement using a high-speed photodetector system. The correlator has been used to study the parametermore » dependence of the pulse width of a macropulse UV laser designed for laser-assisted hydrogen ion (H-) beam stripping for the Spallation Neutron Source at Oak Ridge National Laboratory.« less

  15. Diffusion spectral imaging modules correlate with EEG LORETA neuroimaging modules.

    PubMed

    Thatcher, Robert W; North, Duane M; Biver, Carl J

    2012-05-01

    The purpose of this study was to test the hypothesis that the highest temporal correlations between 3-dimensional EEG current source density corresponds to anatomical Modules of high synaptic connectivity. Eyes closed and eyes open EEG was recorded from 19 scalp locations with a linked ears reference from 71 subjects age 13-42 years. LORETA was computed from 1 to 30 Hz in 2,394 cortical gray matter voxels that were grouped into six anatomical Modules corresponding to the ROIs in the Hagmann et al.'s [2008] diffusion spectral imaging (DSI) study. All possible cross-correlations between voxels within a DSI Module were compared with the correlations between Modules. The Hagmann et al. [ 2008] Module correlation structure was replicated in the correlation structure of EEG three-dimensional current source density. EEG Temporal correlation between brain regions is related to synaptic density as measured by diffusion spectral imaging. Copyright © 2011 Wiley-Liss, Inc.

  16. Tracing the origin of azimuthal gluon correlations in the color glass condensate

    DOE PAGES

    Lappi, T.; Schenke, B.; Schlichting, S.; ...

    2016-01-11

    Here we examine the origins of azimuthal correlations observed in high energy proton-nucleus collisions by considering the simple example of the scattering of uncorrelated partons off color fields in a large nucleus. We demonstrate how the physics of fluctuating color fields in the color glass condensate (CGC) effective theory generates these azimuthal multiparticle correlations and compute the corresponding Fourier coefficients v n within different CGC approximation schemes. We discuss in detail the qualitative and quantitative differences between the different schemes. Lastly, we will show how a recently introduced color field domain model that captures key features of the observed azimuthalmore » correlations can be understood in the CGC effective theory as a model of non-Gaussian correlations in the target nucleus.« less

  17. Dynamics and Energetics of Deformable Evaporating Droplets at Intermediate Reynolds Numbers.

    NASA Astrophysics Data System (ADS)

    Haywood, Ross Jeffrey

    The behaviour of vaporizing droplets, representative of droplets present in hydrocarbon fuel sprays, has been investigated. A finite volume numerical model using a non-orthogonal, adaptive grid has been developed to examine both steady deformed and transient deforming droplet behaviour. Computations are made of the shapes of, and the velocity, pressure, temperature and concentration fields around and within n-heptane droplets evaporating in high temperature air environments at intermediate Reynolds and Weber numbers (10 <= Re <= 100, We <= 10). The numerical model has been rigorously tested by comparison with existing theoretical and numerical solutions and experimental data for problems of intermediate Reynolds number flows over spheroids, inviscid deforming droplets, viscous oscillating droplets, and transient deforming liquid droplets subjected to electrostatic fields. Computations show steady deformed droplets assuming oblate shapes with major axes perpendicular to the mean flow direction. When based on volume equivalent diameters, existing quasi-steady correlations of Nusselt and Sherwood numbers (Renksizbulut and Yuen (1983), Haywood et al. (1989), and Renksizbulut et al. (1991)) for spherical droplets are in good agreement with the numerical results. Providing they are based on actual frontal area, the computed drag coefficients are also reasonably well predicted by the existing quasi-steady drag correlation (Haywood et al. (1989), Renksizbulut and Yuen (1983)). A new correlation is developed for the total drag coefficient of quasi-steady deformed vaporizing droplets. The computed transient histories of droplets injected with an initial Reynolds number of 100 into 1000 K air at 1 and 10 atmospheres ambient pressure show strongly damped initial oscillations at frequencies within 25 percent of the theoretical natural frequency of Lamb (1932). Gas phase shear induced circulation within the droplets is responsible for the observed strong damping and promotes the formation of prolate shapes. The computed rates of heat and mass transfer of transient deforming drops are well predicted by the quasi-steady correlations indicated above.

  18. A Computational Method to Determine Glucose Infusion Rates for Isoglycemic Intravenous Glucose Infusion Study.

    PubMed

    Choi, Karam; Lee, Jung Chan; Oh, Tae Jung; Kim, Myeungseon; Kim, Hee Chan; Cho, Young Min; Kim, Sungwan

    2016-01-01

    The results of the isoglycemic intravenous glucose infusion (IIGI) study need to mimic the dynamic glucose profiles during the oral glucose tolerance test (OGTT) to accurately calculate the incretin effect. The glucose infusion rates during IIGI studies have historically been determined by experienced research personnel using the manual ad-hoc method. In this study, a computational method was developed to automatically determine the infusion rates for IIGI study based on a glucose-dynamics model. To evaluate the computational method, 18 subjects with normal glucose tolerance underwent a 75 g OGTT. One-week later, Group 1 (n = 9) and Group 2 (n = 9) underwent IIGI studies using the ad-hoc method and the computational method, respectively. Both methods were evaluated using correlation coefficient, mean absolute relative difference (MARD), and root mean square error (RMSE) between the glucose profiles from the OGTT and the IIGI study. The computational method exhibited significantly higher correlation (0.95 ± 0.03 versus 0.86 ± 0.10, P = 0.019), lower MARD (8.72 ± 1.83% versus 13.11 ± 3.66%, P = 0.002), and lower RMSE (10.33 ± 1.99 mg/dL versus 16.84 ± 4.43 mg/dL, P = 0.002) than the ad-hoc method. The computational method can facilitate IIGI study, and enhance its accuracy and stability. Using this computational method, a high-quality IIGI study can be accomplished without the need for experienced personnel.

  19. Three-dimensional evaluation of human jaw bone microarchitecture: correlation between the microarchitectural parameters of cone beam computed tomography and micro-computer tomography.

    PubMed

    Kim, Jo-Eun; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Huh, Kyung-Hoe

    2015-12-01

    To evaluate the potential feasibility of cone beam computed tomography (CBCT) in the assessment of trabecular bone microarchitecture. Sixty-eight specimens from four pairs of human jaw were scanned using both micro-computed tomography (micro-CT) of 19.37-μm voxel size and CBCT of 100-μm voxel size. The correlation of 3-dimensional parameters between CBCT and micro-CT was evaluated. All parameters, except bone-specific surface and trabecular thickness, showed linear correlations between the 2 imaging modalities (P < .05). Among the parameters, bone volume, percent bone volume, trabecular separation, and degree of anisotropy (DA) of CBCT images showed strong correlations with those of micro-CT images. DA showed the strongest correlation (r = 0.693). Most microarchitectural parameters from CBCT were correlated with those from micro-CT. Some microarchitectural parameters, especially DA, could be used as strong predictors of bone quality in the human jaw. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Stanford automatic photogrammetry research

    NASA Technical Reports Server (NTRS)

    Quam, L. H.; Hannah, M. J.

    1974-01-01

    A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.

  1. Computer versus Counselor Interpretation of Interest Inventories: The Case of the Self-Directed Search.

    ERIC Educational Resources Information Center

    Gati, Itamar; Blumberg, Dani

    1991-01-01

    Examined interpretations of 100 career counselee's responses to Self-Directed Search (SDS). Found that agreement between scales identified as relevant was as high as agreement among counselors, insignificant correlations between counselors' judgments of counselee's degree of interest crystallization and Holland's (1985) measure of consistency, and…

  2. Development of a Computational (in silico) Model of Ocular Teratogenesis

    EPA Science Inventory

    EPA’s ToxCast™ project is profiling the in vitro bioactivity of chemical compounds to assess pathway-level and cell-based signatures that are highly correlated with observed in vivo toxicity. In silico models provide a framework for interpreting the in vitro results and for simul...

  3. Validation study of an electronic method of condensed outcomes tools reporting in orthopaedics.

    PubMed

    Farr, Jack; Verma, Nikhil; Cole, Brian J

    2013-12-01

    Patient-reported outcomes (PRO) instruments are a vital source of data for evaluating the efficacy of medical treatments. Historically, outcomes instruments have been designed, validated, and implemented as paper-based questionnaires. The collection of paper-based outcomes information may result in patients becoming fatigued as they respond to redundant questions. This problem is exacerbated when multiple PRO measures are provided to a single patient. In addition, the management and analysis of data collected in paper format involves labor-intensive processes to score and render the data analyzable. Computer-based outcomes systems have the potential to mitigate these problems by reformatting multiple outcomes tools into a single, user-friendly tool.The study aimed to determine whether the electronic outcomes system presented produces results comparable with the test-retest correlations reported for the corresponding orthopedic paper-based outcomes instruments.The study is designed as a crossover study based on consecutive orthopaedic patients arriving at one of two designated orthopedic knee clinics.Patients were assigned to complete either a paper or a computer-administered questionnaire based on a similar set of questions (Knee injury and Osteoarthritis Outcome Score, International Knee Documentation Committee form, 36-Item Short Form survey, version 1, Lysholm Knee Scoring Scale). Each patient completed the same surveys using the other instrument, so that all patients had completed both paper and electronic versions. Correlations between the results from the two modes were studied and compared with test-retest data from the original validation studies.The original validation studies established test-retest reliability by computing correlation coefficients for two administrations of the paper instrument. Those correlation coefficients were all in the range of 0.7 to 0.9, which was deemed satisfactory. The present study computed correlation coefficients between the paper and electronic modes of administration. These correlation coefficients demonstrated similar results with an overall value of 0.86.On the basis of the correlation coefficients, the electronic application of commonly used knee outcome scores compare variably to the traditional paper variants with a high rate of test-retest correlation. This equivalence supports the use of the condensed electronic outcomes system and validates comparison of scores between electronic and paper modes. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  4. Numericware i: Identical by State Matrix Calculator

    PubMed Central

    Kim, Bongsong; Beavis, William D

    2017-01-01

    We introduce software, Numericware i, to compute identical by state (IBS) matrix based on genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and takes lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. The multithreading allows computational routines to concurrently run on multiple central processing unit (CPU) processors. The forward chopping addresses memory limitation by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10 000 000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375

  5. Long-Term Structural Health Monitoring System for a High-Speed Railway Bridge Structure.

    PubMed

    Ding, You-Liang; Wang, Gao-Xin; Sun, Peng; Wu, Lai-Yi; Yue, Qing

    2015-01-01

    Nanjing Dashengguan Bridge, which serves as the shared corridor crossing Yangtze River for both Beijing-Shanghai high-speed railway and Shanghai-Wuhan-Chengdu railway, is the first 6-track high-speed railway bridge with the longest span throughout the world. In order to ensure safety and detect the performance deterioration during the long-time service of the bridge, a Structural Health Monitoring (SHM) system has been implemented on this bridge by the application of modern techniques in sensing, testing, computing, and network communication. The SHM system includes various sensors as well as corresponding data acquisition and transmission equipment for automatic data collection. Furthermore, an evaluation system of structural safety has been developed for the real-time condition assessment of this bridge. The mathematical correlation models describing the overall structural behavior of the bridge can be obtained with the support of the health monitoring system, which includes cross-correlation models for accelerations, correlation models between temperature and static strains of steel truss arch, and correlation models between temperature and longitudinal displacements of piers. Some evaluation results using the mean value control chart based on mathematical correlation models are presented in this paper to show the effectiveness of this SHM system in detecting the bridge's abnormal behaviors under the varying environmental conditions such as high-speed trains and environmental temperature.

  6. Quantum Monte Carlo for atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, R.N.

    1989-11-01

    The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations,more » the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trail functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.« less

  7. Correlation between Preoperative High Resolution Computed Tomography (CT) Findings with Surgical Findings in Chronic Otitis Media (COM) Squamosal Type.

    PubMed

    Karki, S; Pokharel, M; Suwal, S; Poudel, R

    Background The exact role of High resolution computed tomography (HRCT) temporal bone in preoperative assessment of Chronic suppurative otitis media atticoantral disease still remains controversial. Objective To evaluate the role of high resolution computed tomography temporal bone in Chronic suppurative otitis media atticoantral disease and to compare preoperative computed tomographic findings with intra-operative findings. Method Prospective, analytical study conducted among 65 patients with chronic suppurative otitis media atticoantral disease in Department of Radiodiagnosis, Kathmandu University Dhulikhel Hospital between January 2015 to July 2016. The operative findings were compared with results of imaging. The parameters of comparison were erosion of ossicles, scutum, facial canal, lateral semicircular canal, sigmoid and tegmen plate along with extension of disease to sinus tympani and facial recess. Sensitivity, specificity, negative predictive value, positive predictive values were calculated. Result High resolution computed tomography temporal bone offered sensitivity (Se) and specificity (Sp) of 100% for visualization of sigmoid and tegmen plate erosion. The performance of HRCT in detecting malleus (Se=100%, Sp=95.23%), incus (Se=100%,Sp=80.48%) and stapes (Se=96.55%, Sp=71.42%) erosion was excellent. It offered precise information about facial canal erosion (Se=100%, Sp=75%), scutum erosion (Se=100%, Sp=96.87%) and extension of disease to facial recess and sinus tympani (Se=83.33%,Sp=100%). high resolution computed tomography showed specificity of 100% for lateral semicircular canal erosion (Sp=100%) but with low sensitivity (Se=53.84%). Conclusion The findings of high resolution computed tomography and intra-operative findings were well comparable except for lateral semicircular canal erosion. high resolution computed tomography temporal bone acts as a road map for surgeon to identify the extent of disease, plan for appropriate procedure that is required and prepare for potential complications that can be encountered during surgery.

  8. Spatial Characteristics of F/A-18 Vertical Tail Buffet Pressures Measured in Flight

    NASA Technical Reports Server (NTRS)

    Moses, Robert W.; Shah, Gautam H.

    1998-01-01

    Buffeting is an aeroelastic phenomenon which plagues high performance aircraft, especially those with twin vertical tails, at high angles of attack. Previous wind-tunnel and flight tests were conducted to characterize the buffet loads on the vertical tails by measuring surface pressures, bending moments, and accelerations. Following these tests, buffeting estimates were computed using the measured buffet pressures and compared to the measured responses. The estimates did not match the measured data because the assumed spatial correlation of the buffet pressures was not correct. A better understanding of the partial (spatial) correlation of the differential buffet pressures on the tail was necessary to improve the buffeting estimates. Several wind-tunnel investigations were conducted for this purpose. When combined and compared, the results of these tests show that the partial correlation depends on and scales with flight conditions. One of the remaining questions is whether the windtunnel data is consistent with flight data. Presented herein, cross-spectra and coherence functions calculated from pressures that were measured on the high alpha research vehicle (HARV) indicate that the partial correlation of the buffet pressures in flight agrees with the partial correlation observed in the wind tunnel.

  9. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A high efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min. due to strict recognition constraints. PMID:27579033

  10. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A high efficient time-shift correlation algorithm was proposed to deal with the peak time uncertainty of P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, including fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot to evaluate the performance from investigating the ground truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high accuracy of recognition to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate was improved to 92% for the AM, and the average ITR decreased to 31.27 bits/min. due to strict recognition constraints.

  11. Computed tomographic-based quantification of emphysema and correlation to pulmonary function and mechanics.

    PubMed

    Washko, George R; Criner, Gerald J; Mohsenifar, Zab; Sciurba, Frank C; Sharafkhaneh, Amir; Make, Barry J; Hoffman, Eric A; Reilly, John J

    2008-06-01

    Computed tomographic based indices of emphysematous lung destruction may highlight differences in disease pathogenesis and further enable the classification of subjects with Chronic Obstructive Pulmonary Disease. While there are multiple techniques that can be utilized for such radiographic analysis, there is very little published information comparing the performance of these methods in a clinical case series. Our objective was to examine several quantitative and semi-quantitative methods for the assessment of the burden of emphysema apparent on computed tomographic scans and compare their ability to predict lung mechanics and function. Automated densitometric analysis was performed on 1094 computed tomographic scans collected upon enrollment into the National Emphysema Treatment Trial. Trained radiologists performed an additional visual grading of emphysema on high resolution CT scans. Full pulmonary function test results were available for correlation, with a subset of subjects having additional measurements of lung static recoil. There was a wide range of emphysematous lung destruction apparent on the CT scans and univariate correlations to measures of lung function were of modest strength. No single method of CT scan analysis clearly outperformed the rest of the group. Quantification of the burden of emphysematous lung destruction apparent on CT scan is a weak predictor of lung function and mechanics in severe COPD with no uniformly superior method found to perform this analysis. The CT based quantification of emphysema may augment pulmonary function testing in the characterization of COPD by providing complementary phenotypic information.

  12. CFD: computational fluid dynamics or confounding factor dissemination? The role of hemodynamics in intracranial aneurysm rupture risk assessment.

    PubMed

    Xiang, J; Tutino, V M; Snyder, K V; Meng, H

    2014-10-01

    Image-based computational fluid dynamics holds a prominent position in the evaluation of intracranial aneurysms, especially as a promising tool to stratify rupture risk. Current computational fluid dynamics findings correlating both high and low wall shear stress with intracranial aneurysm growth and rupture puzzle researchers and clinicians alike. These conflicting findings may stem from inconsistent parameter definitions, small datasets, and intrinsic complexities in intracranial aneurysm growth and rupture. In Part 1 of this 2-part review, we proposed a unifying hypothesis: both high and low wall shear stress drive intracranial aneurysm growth and rupture through mural cell-mediated and inflammatory cell-mediated destructive remodeling pathways, respectively. In the present report, Part 2, we delineate different wall shear stress parameter definitions and survey recent computational fluid dynamics studies, in light of this mechanistic heterogeneity. In the future, we expect that larger datasets, better analyses, and increased understanding of hemodynamic-biologic mechanisms will lead to more accurate predictive models for intracranial aneurysm risk assessment from computational fluid dynamics. © 2014 by American Journal of Neuroradiology.

  13. Transonic Flow Field Analysis for Wing-Fuselage Configurations

    NASA Technical Reports Server (NTRS)

    Boppe, C. W.

    1980-01-01

    A computational method for simulating the aerodynamics of wing-fuselage configurations at transonic speeds is developed. The finite difference scheme is characterized by a multiple embedded mesh system coupled with a modified or extended small disturbance flow equation. This approach permits a high degree of computational resolution in addition to coordinate system flexibility for treating complex realistic aircraft shapes. To augment the analysis method and permit applications to a wide range of practical engineering design problems, an arbitrary fuselage geometry modeling system is incorporated as well as methodology for computing wing viscous effects. Configuration drag is broken down into its friction, wave, and lift induced components. Typical computed results for isolated bodies, isolated wings, and wing-body combinations are presented. The results are correlated with experimental data. A computer code which employs this methodology is described.

  14. Is ultrasound perfusion imaging capable of detecting mismatch? A proof-of-concept study in acute stroke patients.

    PubMed

    Reitmeir, Raluca; Eyding, Jens; Oertel, Markus F; Wiest, Roland; Gralla, Jan; Fischer, Urs; Giquel, Pierre-Yves; Weber, Stefan; Raabe, Andreas; Mattle, Heinrich P; Z'Graggen, Werner J; Beck, Jürgen

    2017-04-01

    In this study, we compared contrast-enhanced ultrasound perfusion imaging with magnetic resonance perfusion-weighted imaging or perfusion computed tomography for detecting normo-, hypo-, and nonperfused brain areas in acute middle cerebral artery stroke. We performed high mechanical index contrast-enhanced ultrasound perfusion imaging in 30 patients. Time-to-peak intensity of 10 ischemic regions of interests was compared to four standardized nonischemic regions of interests of the same patient. A time-to-peak >3 s (ultrasound perfusion imaging) or >4 s (perfusion computed tomography and magnetic resonance perfusion) defined hypoperfusion. In 16 patients, 98 of 160 ultrasound perfusion imaging regions of interests of the ischemic hemisphere were classified as normal, and 52 as hypoperfused or nonperfused. Ten regions of interests were excluded due to artifacts. There was a significant correlation of the ultrasound perfusion imaging and magnetic resonance perfusion or perfusion computed tomography (Pearson's chi-squared test 79.119, p < 0.001) (OR 0.1065, 95% CI 0.06-0.18). No perfusion in ultrasound perfusion imaging (18 regions of interests) correlated highly with diffusion restriction on magnetic resonance imaging (Pearson's chi-squared test 42.307, p < 0.001). Analysis of receiver operating characteristics proved a high sensitivity of ultrasound perfusion imaging in the diagnosis of hypoperfused area under the curve, (AUC = 0.917; p < 0.001) and nonperfused (AUC = 0.830; p < 0.001) tissue in comparison with perfusion computed tomography and magnetic resonance perfusion. We present a proof of concept in determining normo-, hypo-, and nonperfused tissue in acute stroke by advanced contrast-enhanced ultrasound perfusion imaging.

  15. Frequency Based Design Partitioning to Achieve Higher Throughput in Digital Cross Correlator for Aperture Synthesis Passive MMW Imager.

    PubMed

    Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang

    2018-04-17

    Digital cross-correlation is central to many applications including but not limited to Digital Image Processing, Satellite Navigation and Remote Sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we are presenting a high throughput digital cross correlator, capable of processing 1-bit digitized stream, at the rate of up to 2 GHz, simultaneously on 64 channels i.e., approximately 4 Trillion correlation and accumulation operations per second. In order to achieve higher throughput, we have focused on frequency based partitioning of our design and tried to minimize and localize high frequency operations. This correlator is designed for a Passive Millimeter Wave Imager intended for the detection of contraband items concealed on human body. The goals are to increase the system bandwidth, achieve video rate imaging, improve sensitivity and reduce the size. Design methodology is detailed in subsequent sections, elaborating the techniques enabling high throughput. The design is verified for Xilinx Kintex UltraScale device in simulation and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput as compared to our baseline design, while the correlator successfully meets the functional requirements.

  16. Computer Activities for Persons With Dementia.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee

    2015-06-01

    The study examined participant's experience and individual characteristics during a 7-week computer activity program for persons with dementia. The descriptive study with mixed methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlational coefficients, t-test, and chi-square. Content analysis was used to analyze qualitative data. Each participant averaged 23 sessions and 591min for 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, they had a high score of intensity in engagement per session. Women attended significantly more sessions than men. Higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attendance per week. More anticholinergic medications taken by participants were significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated tailoring computer activities appropriate to individual's needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers to integrate computer activities into homes and aging services settings. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group, with unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.

  18. A Semi-Empirical Model for Forecasting Relativistic Electrons at Geostationary Orbit

    NASA Technical Reports Server (NTRS)

    Lyatsky, Wladislaw; Khazanov, George V.

    2008-01-01

    We developed a new prediction model for forecasting relativistic (>2MeV) electrons, which provides a VERY HIGH correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies the multi-step particle acceleration and is based on numerical integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and used solar wind data as only input parameters. We used the coupling function which is a best-fit combination of solar wind/Interplanetary Magnetic Field parameters, responsible for the generation of geomagnetic activity, as a source. The loss function was derived from experimental data. We tested the model for four year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for whole four year period as well as for each of these years is about 0.9. The high and stable correlation between the computed and actual electron fluxes shows that the reliable forecasting these electrons at geostationary orbit is possible. The correlation coefficient between predicted and actual electron fluxes is stable and incredibly high.

  19. Heralding efficiency and correlated-mode coupling of near-IR fiber-coupled photon pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, P. Ben; Rosenberg, Danna; Stelmakh, Veronika

    We report on a systematic experimental study of heralding efficiency and generation rate of telecom-band infrared photon pairs generated by spontaneous parametric down-conversion and coupled to single mode optical fibers. We define the correlated-mode coupling efficiency--an inherent source efficiency--and explain its relation to heralding efficiency. For our experiment, we developed a reconfigurable computer controlled pump-beam and collection-mode optical apparatus which we used to measure the generation rate and correlated-mode coupling efficiency. The use of low-noise, high-efficiency superconducting-nanowire single-photon-detectors in this setup allowed us to explore focus configurations with low overall photon flux. The measured data agree well with theory andmore » we demonstrated a correlated-mode coupling efficiency of 97%±2%, which is the highest efficiency yet achieved for this type of system. These results confirm theoretical treatments and demonstrate that very high overall heralding efficiencies can, in principle, be achieved in quantum optical systems. We expect that these results and techniques will be widely incorporated into future systems that require, or benefit from, a high heralding efficiency.« less

  20. Heralding efficiency and correlated-mode coupling of near-IR fiber-coupled photon pairs

    DOE PAGES

    Dixon, P. Ben; Rosenberg, Danna; Stelmakh, Veronika; ...

    2014-10-06

    We report on a systematic experimental study of heralding efficiency and generation rate of telecom-band infrared photon pairs generated by spontaneous parametric down-conversion and coupled to single mode optical fibers. We define the correlated-mode coupling efficiency--an inherent source efficiency--and explain its relation to heralding efficiency. For our experiment, we developed a reconfigurable computer controlled pump-beam and collection-mode optical apparatus which we used to measure the generation rate and correlated-mode coupling efficiency. The use of low-noise, high-efficiency superconducting-nanowire single-photon-detectors in this setup allowed us to explore focus configurations with low overall photon flux. The measured data agree well with theory andmore » we demonstrated a correlated-mode coupling efficiency of 97%±2%, which is the highest efficiency yet achieved for this type of system. These results confirm theoretical treatments and demonstrate that very high overall heralding efficiencies can, in principle, be achieved in quantum optical systems. We expect that these results and techniques will be widely incorporated into future systems that require, or benefit from, a high heralding efficiency.« less

  1. A modified cross-correlation method for white-light optical fiber extrinsic Fabry-Perot interferometric hydrogen sensors

    NASA Astrophysics Data System (ADS)

    Yang, Zhen; Zhang, Min; Liao, Yanbiao; Lai, Shurong; Tian, Qian; Li, Qisheng; Zhang, Yi; Zhuang, Zhi

    2009-11-01

    An extrinsic Fabry-Perot interferometric (EFPI) optical fiber hydrogen sensor based on palladium silver (Pd-Ag) film is designed for hydrogen leakage detection. A modified cross correlation signal processing method for an optical fiber EFPI hydrogen sensor is presented. As the applying of a special correlating factor which advises the effect on the fringe visibility of the gap length and wavelength, the cross correlation method has a high accuracy which is insensitive to light source power drift or changes in attenuation in the fiber, and the segment search method is employed to reduce computation and demodulating speed is fast. The Fabry-Perot gap length resolution of better than 0.2nm is achieved in a certain concentration of hydrogen.

  2. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  3. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  4. The reliability of continuous brain responses during naturalistic listening to music.

    PubMed

    Burunat, Iballa; Toiviainen, Petri; Alluri, Vinoo; Bogert, Brigitte; Ristaniemi, Tapani; Sams, Mikko; Brattico, Elvira

    2016-01-01

    Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening of a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to more high-level features. The processing of such high-level features is probably more sensitive to the state and traits of the listeners, as well as of their background in music. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Multi-Station Broad Regional Event Detection Using Waveform Correlation

    NASA Astrophysics Data System (ADS)

    Slinkard, M.; Stephen, H.; Young, C. J.; Eckert, R.; Schaff, D. P.; Richards, P. G.

    2013-12-01

    Previous waveform correlation studies have established the occurrence of repeating seismic events in various regions, and the utility of waveform-correlation event-detection on broad regional or even global scales to find events currently not included in traditionally-prepared bulletins. The computational burden, however, is high, limiting previous experiments to relatively modest template libraries and/or processing time periods. We have developed a distributed computing waveform correlation event detection utility that allows us to process years of continuous waveform data with template libraries numbering in the thousands. We have used this system to process several years of waveform data from IRIS stations in East Asia, using libraries of template events taken from global and regional bulletins. Detections at a given station are confirmed by 1) comparison with independent bulletins of seismicity, and 2) consistent detections at other stations. We find that many of the detected events are not in traditional catalogs, hence the multi-station comparison is essential. In addition to detecting the similar events, we also estimate magnitudes very precisely based on comparison with the template events (when magnitudes are available). We have investigated magnitude variation within detected families of similar events, false alarm rates, and the temporal and spatial reach of templates.

  6. Application of Cross-Correlation Greens Function Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of envelope correlation coefficient (ECC) of any arbitrary MIMO antenna system over wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits significant reduction in ECC computation time as compared to the FDTD-CGF frequency domain (FDTD-CGF-FD) scheme, for high frequency-resolution requirements. The proposed FDTD-CGF based schemes can be applied for accurate and fast prediction of wideband ECC response, instead of the conventional scattering parameter based techniques which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from commercial electromagnetic solver Ansys HFSS, to verify the validity of proposed approach.

  7. VizieR Online Data Catalog: l Car radial velocity curves (Anderson, 2016)

    NASA Astrophysics Data System (ADS)

    Anderson, R. I.

    2018-02-01

    Line-of-sight (radial) velocities of the long-period classical Cepheid l Carinae were measured from 925 high-quality optical spectra recorded using the fiber-fed high-resolution (R~60,000) Coralie spectrograph located at the Euler telescope at La Silla Observatory, Chile. The data were taken between 2014 and 2016. This is the full version of Tab. 2 presented partially in the paper. Line shape parameters (depth, width, asymmetry) are listed for the computed cross-correlation profiles (CCFs). Radial velocities were determined using different techniques (Gaussian, bi-Gaussian) and measured on CCFs computed using three different numerical masks (G2, weak lines, strong lines). (1 data file).

  8. Transport properties of correlated metals: A dynamical mean field theory perspective

    NASA Astrophysics Data System (ADS)

    Deng, Xiaoyu

    Strongly correlated metals, including many transition metal oxides, are characterized by unconventional transport properties with anomalous temperature dependence. For example, in many systems Fermi liquid behavior holds only below an extremely low temperature while at high temperature these bad metals have large resistivity which exceeds the Mott-Ioffe-Regel (MIR) limit. Material specific calculation of these anomalous transport properties is an outstanding challenge. Recent advances enabled us to study the transport and optical properties of two archetypal correlated oxides, vanadium oxides and ruthenates, using the LDA +DMFT method. In V2O3, the prototypical Mott system, our computed resistivity and optical conductivity are in very good agreement with experimental measurements, which clearly demonstrates that the strong correlation dominates the transport of this material. Furthermore by expressing the resistivity in terms of an effective plasma frequency and an effective scattering rate, we uncover the so-called ''hidden Fermi liquid'' [1, 2, 3] behavior, in both the computed and measured optical response of V2O3. This paradigm explains the optics and transport in other materials such as NdNiO3 film and CaRuO3. In the ruthenates family, we carried out a systematical theoretical study on the transport properties of four metallic members, Sr2RuO4, Sr3Ru2O7, SrRuO3 and CaRuO3, which generally encapsulates the gradually structure evolution from two-dimension to three dimension. With a unified computational scheme, we are able to obtain the electronic structure and transport properties of all these materials. The computed effective mass enhancement, resistivity and optical conductivity are good agreement with experimental measurements, which indicates that electron-electron scattering dominates the transport of ruthenates. We explain why the single layered compound Sr2RuO4 has a relative weak correlation with respect to its siblings, which corroborates its good metallicity. Comparing our results with experimental data, benchmarks the capability as well as the limitations of existing methodologies for describing transport properties of realistic correlated materials. Supported by NSF DMR-1308141.

  9. Role of Combined 68Ga-DOTATOC and 18F-FDG Positron Emission Tomography/Computed Tomography in the Diagnostic Workup of Pancreas Neuroendocrine Tumors: Implications for Managing Surgical Decisions.

    PubMed

    Cingarlini, Sara; Ortolani, Silvia; Salgarello, Matteo; Butturini, Giovanni; Malpaga, Anna; Malfatti, Veronica; DʼOnofrio, Mirko; Davì, Maria Vittoria; Vallerio, Paola; Ruzzenente, Andrea; Capelli, Paola; Citton, Elia; Grego, Elisabetta; Trentin, Chiara; De Robertis, Riccardo; Scarpa, Aldo; Bassi, Claudio; Tortora, Giampaolo

    2017-01-01

    Ga-DOTATOC (Ga) positron emission tomography (PET)/computed tomography (CT) is recommended in the workup of pancreas neuroendocrine tumors (PanNETs); evidence suggests that F-FDG (F) PET/CT can also provide prognostic information. Aims of this study were to assess the role of combined Ga- and F-PET/CT in the evaluation of grade (G) 1-2 PanNETs and to test the correlation between F-PET/CT positivity and tumor grade. Preoperative Ga- and F-PET/CT of 35 patients with surgically resected G1-2 PanNETs were evaluated. For grading, the 2010 World Health Organization Classification was used; an ancillary analysis with Ki67 cutoffs at 5% to 20% was conducted. Correlation between F-PET/CT positivity (SUVmax > 3.5) and grade was assessed. Of 35 PanNETs, 28.6% and 71.4% were G1 and G2 as per World Health Organization. Ga-PET/CT showed high sensitivity (94.3%) in detecting G1-2 PanNETs. F-PET/CT was positive in 20% and 76% G1 and G2 tumors (P = 0.002). F-PET/CT identified G2 PanNETs with high positive predictive value (PPV, 90.5%). F-PET/CT correlated with tumor grade also in the ancillary analysis (P = 0.009). The high sensitivity of Ga-PET/CT in NET detection is known. The high PPV of F-PET/CT in the identification of G2 forms suggests its potential role in PanNETs prognostication and risk stratification.

  10. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.

  11. Use of cone beam computed tomography in identifying postmenopausal women with osteoporosis.

    PubMed

    Brasileiro, C B; Chalub, L L F H; Abreu, M H N G; Barreiros, I D; Amaral, T M P; Kakehasi, A M; Mesquita, R A

    2017-12-01

    The aim of this study is to correlate radiometric indices from cone beam computed tomography (CBCT) images and bone mineral density (BMD) in postmenopausal women. Quantitative CBCT indices can be used to screen for women with low BMD. Osteoporosis is a disease characterized by the deterioration of bone tissue and the consequent decrease in BMD and increase in bone fragility. Several studies have been performed to assess radiometric indices in panoramic images as low-BMD predictors. The aim of this study is to correlate radiometric indices from CBCT images and BMD in postmenopausal women. Sixty postmenopausal women with indications for dental implants and CBCT evaluation were selected. Dual-energy X-ray absorptiometry (DXA) was performed, and the patients were divided into normal, osteopenia, and osteoporosis groups, according to the World Health Organization (WHO) criteria. Cross-sectional images were used to evaluate the computed tomography mandibular index (CTMI), the computed tomography index (inferior) (CTI (I)) and computed tomography index (superior) (CTI (S)). Student's t test was used to compare the differences between the indices of the groups' intraclass correlation coefficient (ICC). Statistical analysis showed a high degree of interobserver and intraobserver agreement for all measurements (ICC > 0.80). The mean values of CTMI, CTI (S), and CTI (I) were lower in the osteoporosis group than in osteopenia and normal patients (p < 0.05). In comparing normal patients and women with osteopenia, there was no statistically significant difference in the mean value of CTI (I) (p = 0.075). Quantitative CBCT indices may help dentists to screen for women with low spinal and femoral bone mineral density so that they can refer postmenopausal women for bone densitometry.

  12. Ultralow-Power Digital Correlator for Microwave Polarimetry

    NASA Technical Reports Server (NTRS)

    Piepmeier, Jeffrey R.; Hass, K. Joseph

    2004-01-01

    A recently developed high-speed digital correlator is especially well suited for processing readings of a passive microwave polarimeter. This circuit computes the autocorrelations of, and the cross-correlations among, data in four digital input streams representing samples of in-phase (I) and quadrature (Q) components of two intermediate-frequency (IF) signals, denoted A and B, that are generated in heterodyne reception of two microwave signals. The IF signals arriving at the correlator input terminals have been digitized to three levels (-1,0,1) at a sampling rate up to 500 MHz. Two bits (representing sign and magnitude) are needed to represent the instantaneous datum in each input channel; hence, eight bits are needed to represent the four input signals during any given cycle of the sampling clock. The accumulation (integration) time for the correlation is programmable in increments of 2(exp 8) cycles of the sampling clock, up to a maximum of 2(exp 24) cycles. The basic functionality of the correlator is embodied in 16 correlation slices, each of which contains identical logic circuits and counters (see figure). The first stage of each correlation slice is a logic gate that computes one of the desired correlations (for example, the autocorrelation of the I component of A or the negative of the cross-correlation of the I component of A and the Q component of B). The sampling of the output of the logic gate output is controlled by the sampling-clock signal, and an 8-bit counter increments in every clock cycle when the logic gate generates output. The most significant bit of the 8-bit counter is sampled by a 16-bit counter with a clock signal at 2(exp 8) the frequency of the sampling clock. The 16-bit counter is incremented every time the 8-bit counter rolls over.

  13. Functional CAR models for large spatially correlated functional datasets.

    PubMed

    Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S

    2016-01-01

    We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.

  14. Adaptive allocation of decisionmaking responsibility between human and computer in multitask situations

    NASA Technical Reports Server (NTRS)

    Chu, Y.-Y.; Rouse, W. B.

    1979-01-01

    As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.

  15. Exploration computer applications to primary dispersion halos: Kougarok tin prospect, Seward Peninsula, Alaska, USA

    USGS Publications Warehouse

    Reid, Jeffrey C.

    1989-01-01

    Computer processing and high resolution graphics display of geochemical data were used to quickly, accurately, and efficiently obtain important decision-making information for tin (cassiterite) exploration, Seward Peninsula, Alaska (USA). Primary geochemical dispersion patterns were determined for tin-bearing intrusive granite phases of Late Cretaceous age with exploration bedrock lithogeochemistry at the Kougarok tin prospect. Expensive diamond drilling footage was required to reach exploration objectives. Recognition of element distribution and dispersion patterns was useful in subsurface interpretation and correlation, and to aid location of other holes.

  16. Resting-State Functional Connectivity and Network Analysis of Cerebellum with Respect to Crystallized IQ and Gender

    PubMed Central

    Pezoulas, Vasileios C.; Zervakis, Michalis; Michelogiannis, Sifis; Klados, Manousos A.

    2017-01-01

    During the last years, it has been established that the prefrontal and posterior parietal brain lobes, which are mostly related to intelligence, have many connections to cerebellum. However, there is a limited research investigating cerebellum's relationship with cognitive processes. In this study, the network of cerebellum was analyzed in order to investigate its overall organization in individuals with low and high crystallized Intelligence Quotient (IQ). Functional magnetic resonance imaging (fMRI) data were selected from 136 subjects in resting-state from the Human Connectome Project (HCP) database and were further separated into two IQ groups composed of 69 low-IQ and 67 high-IQ subjects. Cerebellum was parcellated into 28 lobules/ROIs (per subject) using a standard cerebellum anatomical atlas. Thereafter, correlation matrices were constructed by computing Pearson's correlation coefficients between the average BOLD time-series for each pair of ROIs inside the cerebellum. By computing conventional graph metrics, small-world network properties were verified using the weighted clustering coefficient and the characteristic path length for estimating the trade-off between segregation and integration. In addition, a connectivity metric was computed for extracting the average cost per network. The concept of the Minimum Spanning Tree (MST) was adopted and implemented in order to avoid methodological biases in graph comparisons and retain only the strongest connections per network. Subsequently, six global and three local metrics were calculated in order to retrieve useful features concerning the characteristics of each MST. Moreover, the local metrics of degree and betweenness centrality were used to detect hubs, i.e., nodes with high importance. The computed set of metrics gave rise to extensive statistical analysis in order to examine differences between low and high-IQ groups, as well as between all possible gender-based group combinations. Our results reveal that both male and female networks have small-world properties with differences in females (especially in higher IQ females) indicative of higher neural efficiency in cerebellum. There is a trend toward the same direction in men, but without significant differences. Finally, three lobules showed maximum correlation with the median response time in low-IQ individuals, implying that there is an increased effort dedicated locally by this population in cognitive tasks. PMID:28491028

  17. Resting-State Functional Connectivity and Network Analysis of Cerebellum with Respect to Crystallized IQ and Gender.

    PubMed

    Pezoulas, Vasileios C; Zervakis, Michalis; Michelogiannis, Sifis; Klados, Manousos A

    2017-01-01

    During the last years, it has been established that the prefrontal and posterior parietal brain lobes, which are mostly related to intelligence, have many connections to cerebellum. However, there is a limited research investigating cerebellum's relationship with cognitive processes. In this study, the network of cerebellum was analyzed in order to investigate its overall organization in individuals with low and high crystallized Intelligence Quotient (IQ). Functional magnetic resonance imaging (fMRI) data were selected from 136 subjects in resting-state from the Human Connectome Project (HCP) database and were further separated into two IQ groups composed of 69 low-IQ and 67 high-IQ subjects. Cerebellum was parcellated into 28 lobules/ROIs (per subject) using a standard cerebellum anatomical atlas. Thereafter, correlation matrices were constructed by computing Pearson's correlation coefficients between the average BOLD time-series for each pair of ROIs inside the cerebellum. By computing conventional graph metrics, small-world network properties were verified using the weighted clustering coefficient and the characteristic path length for estimating the trade-off between segregation and integration. In addition, a connectivity metric was computed for extracting the average cost per network. The concept of the Minimum Spanning Tree (MST) was adopted and implemented in order to avoid methodological biases in graph comparisons and retain only the strongest connections per network. Subsequently, six global and three local metrics were calculated in order to retrieve useful features concerning the characteristics of each MST. Moreover, the local metrics of degree and betweenness centrality were used to detect hubs, i.e., nodes with high importance. The computed set of metrics gave rise to extensive statistical analysis in order to examine differences between low and high-IQ groups, as well as between all possible gender-based group combinations. Our results reveal that both male and female networks have small-world properties with differences in females (especially in higher IQ females) indicative of higher neural efficiency in cerebellum. There is a trend toward the same direction in men, but without significant differences. Finally, three lobules showed maximum correlation with the median response time in low-IQ individuals, implying that there is an increased effort dedicated locally by this population in cognitive tasks.

  18. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on the jet noise data gathered by varying primarily the jet flow speed, and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and for the changes associated with the forward flight within these empirical correlations. But ultimately these emipirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods which are based on computational simulations, in an attempt to remove the empiricism of the present day noise predictions.

  19. Investigation of high-speed free shear flows using improved pressure-strain correlated Reynolds stress turbulence model

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Lakshmanan, B.

    1993-01-01

    A high-speed shear layer is studied using compressibility corrected Reynolds stress turbulence model which employs newly developed model for pressure-strain correlation. MacCormack explicit prediction-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising due to source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.

  20. Novel Assessment of Interstitial Lung Disease Using the "Computer-Aided Lung Informatics for Pathology Evaluation and Rating" (CALIPER) Software System in Idiopathic Inflammatory Myopathies.

    PubMed

    Ungprasert, Patompong; Wilton, Katelynn M; Ernste, Floranne C; Kalra, Sanjay; Crowson, Cynthia S; Rajagopalan, Srinivasan; Bartholmai, Brian J

    2017-10-01

    To evaluate the correlation between measurements from quantitative thoracic high-resolution CT (HRCT) analysis with "Computer-Aided Lung Informatics for Pathology Evaluation and Rating" (CALIPER) software and measurements from pulmonary function tests (PFTs) in patients with idiopathic inflammatory myopathies (IIM)-associated interstitial lung disease (ILD). A cohort of patients with IIM-associated ILD seen at Mayo Clinic was identified from medical record review. Retrospective analysis of HRCT data and PFTs at baseline and 1 year was performed. The abnormalities in HRCT were quantified using CALIPER software. A total of 110 patients were identified. At baseline, total interstitial abnormalities as measured by CALIPER, both by absolute volume and by percentage of total lung volume, had a significant negative correlation with diffusing capacity for carbon monoxide (DLCO), total lung capacity (TLC), and oxygen saturation. Analysis by subtype of interstitial abnormality revealed significant negative correlations between ground glass opacities (GGO) and reticular density (RD) with DLCO and TLC. At one year, changes of total interstitial abnormalities compared with baseline had a significant negative correlation with changes of TLC and oxygen saturation. A negative correlation between changes of total interstitial abnormalities and DLCO was also observed, but it was not statistically significant. Analysis by subtype of interstitial abnormality revealed negative correlations between changes of GGO and RD and changes of DLCO, TLC, and oxygen saturation, but most of the correlations did not achieve statistical significance. CALIPER measurements correlate well with functional measurements in patients with IIM-associated ILD.

  1. Analysis of rotor vibratory loads using higher harmonic pitch control

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.

    1992-01-01

    Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. The modification is summarized of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high frequency pitch inputs. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near wake unsteady effects; and preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.

  2. The Dimensionality and Correlates of Flow in Human-Computer Interactions.

    ERIC Educational Resources Information Center

    Webster, Jane; And Others

    1993-01-01

    Defines playfulness in human-computer interactions in terms of flow theory and explores the dimensionality of the flow concept. Two studies are reported that investigated the factor structure and correlates of flow in human-computer interactions: one examined MBA students using Lotus 1-2-3 spreadsheet software, and one examined employees using…

  3. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computation expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. Magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. Alternatively, mCov-SI are then proposed in order to bound this magnitude ∈ [ 0 , 1 ]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  4. Validation of a computerized algorithm to quantify fetal heart rate deceleration area.

    PubMed

    Gyllencreutz, Erika; Lu, Ke; Lindecrantz, Kaj; Lindqvist, Pelle G; Nordstrom, Lennart; Holzmann, Malin; Abtahi, Farhad

    2018-05-16

    Reliability in visual cardiotocography interpretation is unsatisfying, which has led to development of computerized cardiotocography. Computerized analysis is well established for antenatal fetal surveillance, but has yet not performed sufficiently during labor. We aimed to investigate the capacity of a new computerized algorithm compared to visual assessment in identifying intrapartum fetal heart rate baseline and decelerations. Three-hundred-and-twelve intrapartum cardiotocography tracings with variable decelerations were analysed by the computerized algorithm and visually examined by two observers, blinded to each other and the computer analysis. The width, depth and area of each deceleration was measured. Four cases (>100 variable decelerations) were subject to in-depth detailed analysis. The outcome measures were bias in seconds (width), beats per minute (depth), and beats (area) between computer and observers by using Bland-Altman analysis. Interobserver reliability was determined by calculating intraclass correlation and Spearman rank analysis. The analysis (312 cases) showed excellent intraclass correlation (0.89-0.95) and very strong Spearman correlation (0.82-0.91). The detailed analysis of > 100 decelerations in 4 cases revealed low bias between the computer and the two observers; width 1.4 and 1.4 seconds, depth 5.1 and 0.7 beats per minute, and area 0.1 and -1.7 beats. This was comparable to the bias between the two observers; 0.3 seconds (width), 4.4 beats per minute (depth), and 1.7 beats (area). The intraclass correlation was excellent (0.90-0.98). A novel computerized algorithm for intrapartum cardiotocography analysis is as accurate as gold standard visual assessment with high correlation and low bias. This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.

  5. Theory of population coupling and applications to describe high order correlations in large populations of interacting neurons

    NASA Astrophysics Data System (ADS)

    Huang, Haiping

    2017-03-01

    To understand the collective spiking activity in neuronal populations, it is essential to reveal basic circuit variables responsible for these emergent functional states. Here, I develop a mean field theory for the population coupling recently proposed in the studies of the visual cortex of mouse and monkey, relating the individual neuron activity to the population activity, and extend the original form to the second order, relating neuron-pair’s activity to the population activity, to explain the high order correlations observed in the neural data. I test the computational framework on the salamander retinal data and the cortical spiking data of behaving rats. For the retinal data, the original form of population coupling and its advanced form can explain a significant fraction of two-cell correlations and three-cell correlations, respectively. For the cortical data, the performance becomes much better, and the second order population coupling reveals non-local effects in local cortical circuits.

  6. Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG

    NASA Astrophysics Data System (ADS)

    Rohe, Daniel

    2016-10-01

    The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question in how far High Performance Computing (HPC) can leverage the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: Distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data). Results are provided for two distinct High Performance Computing (HPC) platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can actually be accomplished by introducing only moderate changes to the code, such that this strategy may serve as a guideline for other researcher to likewise improve the efficiency of their codes.

  7. The correlation of methylation levels measured using Illumina 450K and EPIC BeadChips in blood samples

    PubMed Central

    Logue, Mark W; Smith, Alicia K; Wolf, Erika J; Maniates, Hannah; Stone, Annjanette; Schichman, Steven A; McGlinchey, Regina E; Milberg, William; Miller, Mark W

    2017-01-01

    Aim: We examined concordance of methylation levels across the Illumina Infinium HumanMethylation450 BeadChip and the Infinium MethylationEPIC BeadChip. Methods: We computed the correlation for 145 whole blood DNA samples at each of the 422,524 CpG sites measured by both chips. Results: The correlation at some sites was high (up to r = 0.95), but many sites had low correlation (55% had r < 0.20). The low correspondence between 450K and EPIC measured methylation values at many loci was largely due to the low variability in methylation values for the majority of the CpG sites in blood. Conclusion: Filtering out probes based on the observed correlation or low variability may increase reproducibility of BeadChip-based epidemiological studies. PMID:28809127

  8. Design and Operating Characteristics of High-Speed, Small-Bore Cylindrical-Roller Bearings

    NASA Technical Reports Server (NTRS)

    Pinel, Stanley, I.; Signer, Hans R.; Zaretsky, Erwin V.

    2000-01-01

    The computer program SHABERTH was used to analyze 35-mm-bore cylindrical roller bearings designed and manufactured for high-speed turbomachinery applications. Parametric tests of the bearings were conducted on a high-speed, high-temperature bearing tester and the results were compared with the computer predictions. Bearings with a channeled inner ring were lubricated through the inner ring, while bearings with a channeled outer ring were lubricated with oil jets. Tests were run with and without outer-ring cooling. The predicted bearing life decreased with increasing speed because of increased contact stresses caused by centrifugal load. Lower temperatures, less roller skidding, and lower power losses were obtained with channeled inner rings. Power losses calculated by the SHABERTH computer program correlated reasonably well with the test results. The Parker formula for XCAV (used in SHABERTH as a measure of oil volume in the bearing cavity) needed to be adjusted to reflect the prevailing operating conditions. The XCAV formula will need to be further refined to reflect roller bearing lubrication, ring design, cage design, and location of the cage-controlling land.

  9. MIPS: The good, the bad and the useful

    NASA Technical Reports Server (NTRS)

    Richardson, Jerry K.

    1987-01-01

    Many authors are critical of the use of MIPS (Millions of Instructions per Second) as a measure of computer power. Some feel that MIPS are meaningless. While there is justification for some of the criticism of MIPS, sometimes the criticism is carried too far. MIPS can be a useful number for planning and estimating purposes when used in a homogeneous computer environmnet. Comparisons between published MIPS ratings and benchmark results reveal that there does exist a high positive correlation between MIPS and tested performance, given a homogeneous computer environment. MIPS should be understood so as not to be misused. It is not correct that the use of MIPS is always inappropriate or inaccurate

  10. Time Correlations and the Frequency Spectrum of Sound Radiated by Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1997-01-01

    Theories of turbulent time correlations are applied to compute frequency spectra of sound radiated by isotropic turbulence and by turbulent shear flows. The hypothesis that Eulerian time correlations are dominated by the sweeping action of the most energetic scales implies that the frequency spectrum of the sound radiated by isotropic turbulence scales as omega(exp 4) for low frequencies and as omega(exp -3/4) for high frequencies. The sweeping hypothesis is applied to an approximate theory of jet noise. The high frequency noise again scales as omega(exp -3/4), but the low frequency spectrum scales as omega(exp 2). In comparison, a classical theory of jet noise based on dimensional analysis gives omega(exp -2) and omega(exp 2) scaling for these frequency ranges. It is shown that the omega(exp -2) scaling is obtained by simplifying the description of turbulent time correlations. An approximate theory of the effect of shear on turbulent time correlations is developed and applied to the frequency spectrum of sound radiated by shear turbulence. The predicted steepening of the shear dominated spectrum appears to be consistent with jet noise measurements.

  11. Information Communication Technology Policy and Public Primary Schools' Efficiency in Rwanda

    ERIC Educational Resources Information Center

    Munyengabe, Sylvestre; Haiyan, He; Yiyi, Zhao

    2018-01-01

    Teaching and learning processes have been developed through different methods and materials; nowadays the introduction of computers and other ICT tools in different forms and levels of education have been found to be highly influential in education system of different countries. The main objective of this study was to correlate Information…

  12. Alterations of bone microstructure and strength in end-stage renal failure.

    PubMed

    Trombetti, A; Stoermann, C; Chevalley, T; Van Rietbergen, B; Herrmann, F R; Martin, P-Y; Rizzoli, R

    2013-05-01

    End-stage renal disease (ESRD) patients have a high risk of fractures. We evaluated bone microstructure and finite-element analysis-estimated strength and stiffness in patients with ESRD by high-resolution peripheral computed tomography. We observed an alteration of cortical and trabecular bone microstructure and of bone strength and stiffness in ESRD patients. Fragility fractures are common in ESRD patients on dialysis. Alterations of bone microstructure contribute to skeletal fragility, independently of areal bone mineral density. We compared microstructure and finite-element analysis estimates of strength and stiffness by high-resolution peripheral quantitative computed tomography (HR-pQCT) in 33 ESRD patients on dialysis (17 females and 16 males; mean age, 47.0 ± 12.6 years) and 33 age-matched healthy controls. Dialyzed women had lower radius and tibia cortical density with higher radius cortical porosity and lower tibia cortical thickness, compared to controls. Radius trabecular number was lower with higher heterogeneity of the trabecular network. Male patients displayed only a lower radius cortical density. Radius and tibia cortical thickness correlated negatively with bone-specific alkaline phosphatase (BALP). Microstructure did not correlate with parathyroid hormone (PTH) levels. Cortical porosity correlated positively with "Kidney Disease: Improving Global Outcomes" working group PTH level categories (r = 0.36, p < 0.04). BMI correlated positively with trabecular number (r = 0.4, p < 0.02) and negatively with trabecular spacing (r = -0.37, p < 0.03) and trabecular network heterogeneity (r = -0.4, p < 0.02). Biomechanics positively correlated with BMI and negatively with BALP. Cortical and trabecular bone microstructure and calculated bone strength are altered in ESRD patients, predominantly in women. Bone microstructure and biomechanical assessment by HR-pQCT may be of major clinical relevance in the evaluation of bone fragility in ESRD patients.

  13. Proceedings of the Annual Conference of the Military Testing Association (23rd) held at Arlington, Virginia on 25-30 October 1981. Volume 2

    DTIC Science & Technology

    1981-10-01

    differentiated the high from low performers on * the criterion. Once a total score based on the differentiating items was computed, this score was... high school or worked a certain number of hours while in school, perform better on the FST, receive higher ratings on training school criteria and...closets are not related to performance on the FST, even though they would have a high probability of correlating with job proficiency measures. From

  14. Prediction Model for Relativistic Electrons at Geostationary Orbit

    NASA Technical Reports Server (NTRS)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2MeV) electrons, which provides a VERY HIGH correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies the multi-step particle acceleration and is based on numerical integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and used solar wind data as only input parameters. We used the coupling function which is a best-fit combination of solar wind/interplanetary magnetic field parameters, responsible for the generation of geomagnetic activity, as a source. The loss function was derived from experimental data. We tested the model for four year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for whole four year period as well as for each of these years is stable and incredibly high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that the reliable forecasting these electrons at geostationary orbit is possible.

  15. Relativistic Electrons at Geostationary Orbit: Modeling Results

    NASA Technical Reports Server (NTRS)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2MeV) electrons, which provides a VERY HIGH correlation between predicted and actually measured electron fluxes at geostationary orbit. This model implies the multi-step particle acceleration and is based on numerical integrating two linked continuity equations for primarily accelerated particles and relativistic electrons. The model includes a source and losses, and used solar wind data as only input parameters. We used the coupling function which is a best-fit combination of solar wind/interplanetary magnetic field parameters, responsible for the generation of geomagnetic activity, as a source. The loss function was derived from experimental data. We tested the model for four year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes for whole four year period as well as for each of these years is stable and incredibly high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that the reliable forecasting these electrons at geostationary orbit is possible.

  16. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that have been designed using high quality software engineering practices, The package is open source, and portable to a variety of high-performance computing architectures.

  17. Micro-computed tomography assessment of human alveolar bone: bone density and three-dimensional micro-architecture.

    PubMed

    Kim, Yoon Jeong; Henkin, Jeffrey

    2015-04-01

    Micro-computed tomography (micro-CT) is a valuable means to evaluate and secure information related to bone density and quality in human necropsy samples and small live animals. The aim of this study was to assess the bone density of the alveolar jaw bones in human cadaver, using micro-CT. The correlation between bone density and three-dimensional micro architecture of trabecular bone was evaluated. Thirty-four human cadaver jaw bone specimens were harvested. Each specimen was scanned with micro-CT at resolution of 10.5 μm. The bone volume fraction (BV/TV) and the bone mineral density (BMD) value within a volume of interest were measured. The three-dimensional micro architecture of trabecular bone was assessed. All the parameters in the maxilla and the mandible were subject to comparison. The variables for the bone density and the three-dimensional micro architecture were analyzed for nonparametric correlation using Spearman's rho at the significance level of p < .05. A wide range of bone density was observed. There was a significant difference between the maxilla and mandible. All micro architecture parameters were consistently higher in the mandible, up to 3.3 times greater than those in the maxilla. The most linear correlation was observed between BV/TV and BMD, with Spearman's rho = 0.99 (p = .01). Both BV/TV and BMD were highly correlated with all micro architecture parameters with Spearman's rho above 0.74 (p = .01). Two aspects of bone density using micro-CT, the BV/TV and BMD, are highly correlated with three-dimensional micro architecture parameters, which represent the quality of trabecular bone. This noninvasive method may adequately enhance evaluation of the alveolar bone. © 2013 Wiley Periodicals, Inc.

  18. Low-cost, high-performance and efficiency computational photometer design

    NASA Astrophysics Data System (ADS)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and University of Colorado Boulder have built a low cost high performance and efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible spectrum cameras with near to long wavelength infrared detectors and high resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time correlate read-out, capture, and image process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard definition CCD (Charge Coupled Device) cameras at 30Hz. The continuous stereo vision can be time correlated to megapixel high definition snapshots. This proof of concept has been fabricated as a fourlayer PCB (Printed Circuit Board) suitable for use in education and research for low cost high efficiency field monitoring applications that need multispectral and three dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in un-manned aerial systems, and future planned missions to image harsh environments in the arctic including volcanic plumes, ice formation, and arctic marine life.

  19. Application of abstract harmonic analysis to the high-speed recognition of images

    NASA Technical Reports Server (NTRS)

    Usikov, D. A.

    1979-01-01

    Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes as a particular case the familiar Fourier transform method for a correlation function which makes it possible to find images which are independent of their translation in the plane. Two examples of the application of the general theory described are the search for images, independent of their rotation and scale, and the search for images which are independent of their translations and rotations in the plane.

  20. Investigation of Antarctic crust and upper mantle using MAGSAT and other geophysical data

    NASA Technical Reports Server (NTRS)

    Bentley, C. R. (Principal Investigator)

    1981-01-01

    Progress in processing and analysis of Investigator B MAGSAT data is reported. Data processing tasks required prior to data analysis, including translation and reformatting of tapes and development of computer routines, were performed. A scalar anomaly map of Antarctica is near completion. Data analysis included a qualitative correlation of NASA's 4/81 scalar map of Antarctica with other geopotential data and correlation of POGO and continental scale gravity data with MAGSAT data. A magnetic high was found to exist over the Ross Embayment.

  1. Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program

    NASA Astrophysics Data System (ADS)

    Manin, Yuri I.

    Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas such as perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms producing an asymptotic series for quantum correlation functions. On the other hand, graphs treated as "flowcharts", also form a combinatorial skeleton of the abstract computation theory. Partial recursive functions that according to Church's thesis exhaust the universe of (semi)computable maps are generally not everywhere defined due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].

  2. Application of machine vision to pup loaf bread evaluation

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Chung, O. K.

    1996-12-01

    Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. Experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to developing an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer extracted features for bread crumb grain were studied, using subimages (32 by 32 pixel) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts were correlated more highly with some image features than with breadmaking parameters.

  3. Physical validation of a patient-specific contact finite element model of the ankle.

    PubMed

    Anderson, Donald D; Goldsworthy, Jane K; Li, Wendy; James Rudert, M; Tochigi, Yuki; Brown, Thomas D

    2007-01-01

    A validation study was conducted to determine the extent to which computational ankle contact finite element (FE) results agreed with experimentally measured tibio-talar contact stress. Two cadaver ankles were loaded in separate test sessions, during which ankle contact stresses were measured with a high-resolution (Tekscan) pressure sensor. Corresponding contact FE analyses were subsequently performed for comparison. The agreement was good between FE-computed and experimentally measured mean (3.2% discrepancy for one ankle, 19.3% for the other) and maximum (1.5% and 6.2%) contact stress, as well as for contact area (1.7% and 14.9%). There was also excellent agreement between histograms of fractional areas of cartilage experiencing specific ranges of contact stress. Finally, point-by-point comparisons between the computed and measured contact stress distributions over the articular surface showed substantial agreement, with correlation coefficients of 90% for one ankle and 86% for the other. In the past, general qualitative, but little direct quantitative agreement has been demonstrated with articular joint contact FE models. The methods used for this validation enable formal comparison of computational and experimental results, and open the way for objective statistical measures of regional correlation between FE-computed contact stress distributions from comparison articular joint surfaces (e.g., those from an intact versus those with residual intra-articular fracture incongruity).

  4. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.

  5. Empirical comparison of local structural similarity indices for collaborative-filtering-based recommender systems

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan

    2010-08-01

    Collaborative filtering is one of the most successful recommendation techniques, which can effectively predict the possible future likes of users based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach is using the correlation between the ratings that two users give to a set of objects, such as Cosine index and Pearson correlation coefficient. However, the costs of computing this kind of indices are relatively high, and thus it is impossible to be applied in the huge-size systems. To solve this problem, in this paper, we introduce six local-structure-based similarity indices and compare their performances with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data is dense, the structure-based indices can perform competitively good as Cosine index, while with lower computational complexity. Furthermore, when the data is sparse, the structure-based indices give even better results than Cosine index.

  6. Climate change and the detection of trends in annual runoff

    USGS Publications Warehouse

    McCabe, G.J.; Wolock, D.M.

    1997-01-01

    This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.

  7. Physically motivated correlation formalism in hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Roy, Ankita; Rafert, J. Bruce

    2004-05-01

    Most remote sensing data-sets contain a limiting number of independent spatial and spectral measurements, beyond which no effective increase in information is achieved. This paper presents a Physically Motivated Correlation Formalism (PMCF) ,which places both Spatial and Spectral data on an equivalent mathematical footing in the context of a specific Kernel, such that, optimal combinations of independent data can be selected from the entire Hypercube via the method of "Correlation Moments". We present an experimental and computational analysis of Hyperspectral data sets using the Michigan Tech VFTHSI [Visible Fourier Transform Hyperspectral Imager] based on a Sagnac Interferometer, adjusted to obtain high SNR levels. The captured Signal Interferograms of different targets - aerial snaps of Houghton and lab-based data (white light , He-Ne laser , discharge tube sources) with the provision of customized scan of targets with the same exposures are processed using inverse imaging transformations and filtering techniques to obtain the Spectral profiles and generate Hypercubes to compute Spectral/Spatial/Cross Moments. PMCF answers the question of how optimally the entire hypercube should be sampled and finds how many spatial-spectral pixels are required for a particular target recognition.

  8. A cross-sectional evaluation of computer literacy among medical students at a tertiary care teaching hospital in Mumbai, Bombay.

    PubMed

    Panchabhai, T S; Dangayach, N S; Mehta, V S; Patankar, C V; Rege, N N

    2011-01-01

    Computer usage capabilities of medical students for introduction of computer-aided learning have not been adequately assessed. Cross-sectional study to evaluate computer literacy among medical students. Tertiary care teaching hospital in Mumbai, India. Participants were administered a 52-question questionnaire, designed to study their background, computer resources, computer usage, activities enhancing computer skills, and attitudes toward computer-aided learning (CAL). The data was classified on the basis of sex, native place, and year of medical school, and the computer resources were compared. The computer usage and attitudes toward computer-based learning were assessed on a five-point Likert scale, to calculate Computer usage score (CUS - maximum 55, minimum 11) and Attitude score (AS - maximum 60, minimum 12). The quartile distribution among the groups with respect to the CUS and AS was compared by chi-squared tests. The correlation between CUS and AS was then tested. Eight hundred and seventy-five students agreed to participate in the study and 832 completed the questionnaire. One hundred and twenty eight questionnaires were excluded and 704 were analyzed. Outstation students had significantly lesser computer resources as compared to local students (P<0.0001). The mean CUS for local students (27.0±9.2, Mean±SD) was significantly higher than outstation students (23.2±9.05). No such difference was observed for the AS. The means of CUS and AS did not differ between males and females. The CUS and AS had positive, but weak correlations for all subgroups. The weak correlation between AS and CUS for all students could be explained by the lack of computer resources or inadequate training to use computers for learning. Providing additional resources would benefit the subset of outstation students with lesser computer resources. This weak correlation between the attitudes and practices of all students needs to be investigated. We believe that this gap can be bridged with a structured computer learning program.

  9. Predictive coupled-cluster isomer orderings for some Si{sub n}C{sub m} (m, n ≤ 12) clusters: A pragmatic comparison between DFT and complete basis limit coupled-cluster benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrd, Jason N., E-mail: byrd.jason@ensco.com; ENSCO, Inc., 4849 North Wickham Road, Melbourne, Florida 32940; Lutz, Jesse J., E-mail: jesse.lutz.ctr@afit.edu

    The accurate determination of the preferred Si{sub 12}C{sub 12} isomer is important to guide experimental efforts directed towards synthesizing SiC nano-wires and related polymer structures which are anticipated to be highly efficient exciton materials for the opto-electronic devices. In order to definitively identify preferred isomeric structures for silicon carbon nano-clusters, highly accurate geometries, energies, and harmonic zero point energies have been computed using coupled-cluster theory with systematic extrapolation to the complete basis limit for set of silicon carbon clusters ranging in size from SiC{sub 3} to Si{sub 12}C{sub 12}. It is found that post-MBPT(2) correlation energy plays a significant rolemore » in obtaining converged relative isomer energies, suggesting that predictions using low rung density functional methods will not have adequate accuracy. Utilizing the best composite coupled-cluster energy that is still computationally feasible, entailing a 3-4 SCF and coupled-cluster theory with singles and doubles extrapolation with triple-ζ (T) correlation, the closo Si{sub 12}C{sub 12} isomer is identified to be the preferred isomer in the support of previous calculations [X. F. Duan and L. W. Burggraf, J. Chem. Phys. 142, 034303 (2015)]. Additionally we have investigated more pragmatic approaches to obtaining accurate silicon carbide isomer energies, including the use of frozen natural orbital coupled-cluster theory and several rungs of standard and double-hybrid density functional theory. Frozen natural orbitals as a way to compute post-MBPT(2) correlation energy are found to be an excellent balance between efficiency and accuracy.« less

  10. An accurate global potential energy surface, dipole moment surface, and rovibrational frequencies for NH3

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2008-12-01

    A global potential energy surface (PES) that includes short and long range terms has been determined for the NH3 molecule. The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations and the internally contracted averaged coupled-pair functional electronic structure methods have been used in conjunction with very large correlation-consistent basis sets, including diffuse functions. Extrapolation to the one-particle basis set limit was performed and core correlation and scalar relativistic contributions were included directly, while the diagonal Born-Oppenheimer correction was added. Our best purely ab initio PES, denoted "mixed," is constructed from two PESs which differ in whether the ic-ACPF higher-order correlation correction was added or not. Rovibrational transition energies computed from the mixed PES agree well with experiment and the best previous theoretical studies, but most importantly the quality does not deteriorate even up to 10300cm-1 above the zero-point energy (ZPE). The mixed PES was improved further by empirical refinement using the most reliable J =0-2 rovibrational transitions in the HITRAN 2004 database. Agreement between high-resolution experiment and rovibrational transition energies computed from our refined PES for J =0-6 is excellent. Indeed, the root mean square (rms) error for 13 HITRAN 2004 bands for J =0-2 is 0.023cm-1 and that for each band is always ⩽0.06cm-1. For J =3-5 the rms error is always ⩽0.15cm-1. This agreement means that transition energies computed with our refined PES should be useful in the assignment of new high-resolution NH3 spectra and in correcting mistakes in previous assignments. Ideas for further improvements to our refined PES and for extension to other isotopolog are discussed.

  11. Correlation Educational Model in Primary Education Curriculum of Mathematics and Computer Science

    ERIC Educational Resources Information Center

    Macinko Kovac, Maja; Eret, Lidija

    2012-01-01

    This article gives insight into methodical correlation model of teaching mathematics and computer science. The model shows the way in which the related areas of computer science and mathematics can be supplemented, if it transforms the way of teaching and creates a "joint" lessons. Various didactic materials are designed, in which all…

  12. Reflectance measurements for the detection and mapping of soil limitations

    NASA Technical Reports Server (NTRS)

    Benson, L. A.; Frazee, C. J.

    1973-01-01

    During 1971 and 1972 research was conducted on two fallow fields in the proposed Oahe Irrigation Project to investigate the relationship between the tonal variations observed on aerial photographs and the principal soil limitations of the area. A grid sampling procedure was used to collected detailed field data during the 1972 growing season. The field data was compared to imagery collected on May 14, 1971 at 3050 meters altitude. The imagery and field data were initially evaluated by a visual analysis. Correlation and regression analysis revealed a highly significant correlation and regression analysis revealed a highly significant correlation between the digitized color infrared film data and soil properties such as organic matter content, color, depth to carbonates, bulk density and reflectivity. Computer classification of the multiemulsion film data resulted in maps delineating the areas containing claypan and erosion limitations. Reflectance data from the red spectral band provided the best results.

  13. Geophysical Analysis of Major Geothermal Anomalies in Romania

    NASA Astrophysics Data System (ADS)

    Panea, Ionelia; Mocanu, Victor

    2017-11-01

    The Romanian segment of the Eastern Pannonian Basin and the Moesian Platform are known for their geothermal and hydrocarbon-bearing structures. We used seismic, gravity, and geothermal data to analyze the geothermal behavior in the Oradea and Timisoara areas, from the Romanian segment of Eastern Pannonian Basin, and the Craiova-Bals-Optasi area, from the Moesian Platform. We processed 22 seismic reflection data sets recorded in the Oradea and Timisoara areas to obtain P-wave velocity distributions and time seismic sections. The P-wave velocity distributions correlate well with the structural trends observed along the seismic lines. We observed a good correlation between the high areas of crystalline basement seen on the time seismic sections and the high heat flow and gravity-anomaly values. For the Craiova-Bals-Optasi area, we computed a three-dimensional (3D) temperature model using calculated and measured temperature and geothermal gradient values in wells with an irregular distribution on the territory. The high temperatures from the Craiova-Bals-Optasi area correlate very well with the uplifted basement blocks seen on the time seismic sections and high gravity-anomaly values.

  14. Development and Initial Validation of an Instrument to Measure Physicians' Use of, Knowledge about, and Attitudes Toward Computers

    PubMed Central

    Cork, Randy D.; Detmer, William M.; Friedman, Charles P.

    1998-01-01

    This paper describes details of four scales of a questionnaire—“Computers in Medical Care”—measuring attributes of computer use, self-reported computer knowledge, computer feature demand, and computer optimism of academic physicians. The reliability (i.e., precision, or degree to which the scale's result is reproducible) and validity (i.e., accuracy, or degree to which the scale actually measures what it is supposed to measure) of each scale were examined by analysis of the responses of 771 full-time academic physicians across four departments at five academic medical centers in the United States. The objectives of this paper were to define the psychometric properties of the scales as the basis for a future demonstration study and, pending the results of further validity studies, to provide the questionnaire and scales to the medical informatics community as a tool for measuring the attitudes of health care providers. Methodology: The dimensionality of each scale and degree of association of each item with the attribute of interest were determined by principal components factor analysis with othogonal varimax rotation. Weakly associated items (factor loading <.40) were deleted. The reliability of each resultant scale was computed using Cronbach's alpha coefficient. Content validity was addressed during scale construction; construct validity was examined through factor analysis and by correlational analyses. Results: Attributes of computer use, computer knowledge, and computer optimism were unidimensional, with the corresponding scales having reliabilities of.79,.91, and.86, respectively. The computer-feature demand attribute differentiated into two dimensions: the first reflecting demand for high-level functionality with reliability of.81 and the second demand for usability with reliability of.69. There were significant positive correlations between computer use, computer knowledge, and computer optimism scale scores and respondents' hands-on computer use, computer training, and self-reported computer sophistication. In addition, items posited on the computer knowledge scale to be more difficult generated significantly lower scores. Conclusion: The four scales of the questionnaire appear to measure with adequate reliability five attributes of academic physicians' attitudes toward computers in medical care: computer use, self-reported computer knowledge, demand for computer functionality, demand for computer usability, and computer optimism. Results of initial validity studies are positive, but further validation of the scales is needed. The URL of a downloadable HTML copy of the questionnaire is provided. PMID:9524349

  15. Long-Term Structural Health Monitoring System for a High-Speed Railway Bridge Structure

    PubMed Central

    Wu, Lai-Yi

    2015-01-01

    Nanjing Dashengguan Bridge, which serves as the shared corridor crossing Yangtze River for both Beijing-Shanghai high-speed railway and Shanghai-Wuhan-Chengdu railway, is the first 6-track high-speed railway bridge with the longest span throughout the world. In order to ensure safety and detect the performance deterioration during the long-time service of the bridge, a Structural Health Monitoring (SHM) system has been implemented on this bridge by the application of modern techniques in sensing, testing, computing, and network communication. The SHM system includes various sensors as well as corresponding data acquisition and transmission equipment for automatic data collection. Furthermore, an evaluation system of structural safety has been developed for the real-time condition assessment of this bridge. The mathematical correlation models describing the overall structural behavior of the bridge can be obtained with the support of the health monitoring system, which includes cross-correlation models for accelerations, correlation models between temperature and static strains of steel truss arch, and correlation models between temperature and longitudinal displacements of piers. Some evaluation results using the mean value control chart based on mathematical correlation models are presented in this paper to show the effectiveness of this SHM system in detecting the bridge's abnormal behaviors under the varying environmental conditions such as high-speed trains and environmental temperature. PMID:26451387

  16. Long sequence correlation coprocessor

    NASA Astrophysics Data System (ADS)

    Gage, Douglas W.

    1994-09-01

    A long sequence correlation coprocessor (LSCC) accelerates the bitwise correlation of arbitrarily long digital sequences by calculating in parallel the correlation score for 16, for example, adjacent bit alignments between two binary sequences. The LSCC integrated circuit is incorporated into a computer system with memory storage buffers and a separate general purpose computer processor which serves as its controller. Each of the LSCC's set of sequential counters simultaneously tallies a separate correlation coefficient. During each LSCC clock cycle, computer enable logic associated with each counter compares one bit of a first sequence with one bit of a second sequence to increment the counter if the bits are the same. A shift register assures that the same bit of the first sequence is simultaneously compared to different bits of the second sequence to simultaneously calculate the correlation coefficient by the different counters to represent different alignments of the two sequences.

  17. Correlation between Standardized Uptake Value of 68Ga-DOTA-NOC Positron Emission Tomography/Computed Tomography and Pathological Classification of Neuroendocrine Tumors.

    PubMed

    Kaewput, Chalermrat; Suppiah, Subapriya; Vinjamuri, Sobhan

    2018-01-01

    The aim of our study was to correlate tumor uptake of 68 Ga-DOTA-NOC positron emission tomography/computed tomography (PET/CT) with the pathological grade of neuroendocrine tumors (NETs). 68 Ga-DOTA-NOC PET/CT examinations in 41 patients with histopathologically proven NETs were included in the study. Maximum standardized uptake value (SUV max ) and averaged SUV SUV mean of "main tumor lesions" were calculated for quantitative analyses after background subtraction. Uptake on main tumor lesions was compared and correlated with the tumor histological grade based on Ki-67 index and pathological differentiation. Classification was performed into three grades according to Ki-67 levels; low grade: Ki-67 <2, intermediate grade: Ki-67 3-20, and high grade: Ki-67 >20. Pathological differentiation was graded into well- and poorly differentiated groups. The values were compared and evaluated for correlation and agreement between the two parameters was performed. Our study revealed negatively fair agreement between SUV max of tumor and Ki-67 index ( r = -0.241) and negatively poor agreement between SUV mean of tumor and Ki-67 index ( r = -0.094). SUV max of low-grade, intermediate-grade, and high-grade Ki-67 index is 26.18 ± 14.56, 30.71 ± 24.44, and 6.60 ± 4.59, respectively. Meanwhile, SUV mean of low-grade, intermediate-grade, and high-grade Ki-67 is 8.92 ± 7.15, 9.09 ± 5.18, and 3.00 ± 1.38, respectively. As expected, there was statistically significant decreased SUV max and SUV mean in high-grade tumors (poorly differentiated NETs) as compared with low- and intermediate-grade tumors (well-differentiated NETs). SUV of 68 Ga-DOTA-NOC PET/CT is not correlated with histological grade of NETs. However, there was statistically significant decreased tumor uptake of 68 Ga-DOTA-NOC in poorly differentiated NETs as compared with the well-differentiated group. As a result of this pilot study, we confirm that the lower tumor uptake of 68 Ga-DOTA-NOC may be associated with aggressive behavior and may, therefore, result in poor prognosis.

  18. Correlation of quantitative dual-energy computed tomography iodine maps and abdominal computed tomography perfusion measurements: are single-acquisition dual-energy computed tomography iodine maps more than a reduced-dose surrogate of conventional computed tomography perfusion?

    PubMed

    Stiller, Wolfram; Skornitzke, Stephan; Fritz, Franziska; Klauss, Miriam; Hansen, Jens; Pahn, Gregor; Grenacher, Lars; Kauczor, Hans-Ulrich

    2015-10-01

    Study objectives were the quantitative evaluation of whether conventional abdominal computed tomography (CT) perfusion measurements mathematically correlate with quantitative single-acquisition dual-energy CT (DECT) iodine concentration maps, the determination of the optimum time of acquisition for achieving maximum correlation, and the estimation of the potential for radiation exposure reduction when replacing conventional CT perfusion by single-acquisition DECT iodine concentration maps. Dual-energy CT perfusion sequences were dynamically acquired over 51 seconds (34 acquisitions every 1.5 seconds) in 24 patients with histologically verified pancreatic carcinoma using dual-source DECT at tube potentials of 80 kVp and 140 kVp. Using software developed in-house, perfusion maps were calculated from 80-kVp image series using the maximum slope model after deformable motion correction. In addition, quantitative iodine maps were calculated for each of the 34 DECT acquisitions per patient. Within a manual segmentation of the pancreas, voxel-by-voxel correlation between the perfusion map and each of the iodine maps was calculated for each patient to determine the optimum time of acquisition topt defined as the acquisition time of the iodine map with the highest correlation coefficient. Subsequently, regions of interest were placed inside the tumor and inside healthy pancreatic tissue, and correlation between mean perfusion values and mean iodine concentrations within these regions of interest at topt was calculated for the patient sample. The mean (SD) topt was 31.7 (5.4) seconds after the start of contrast agent injection. The mean (SD) perfusion values for healthy pancreatic and tumor tissues were 67.8 (26.7) mL per 100 mL/min and 43.7 (32.2) mL per 100 mL/min, respectively. At topt, the mean (SD) iodine concentrations were 2.07 (0.71) mg/mL in healthy pancreatic and 1.69 (0.98) mg/mL in tumor tissue, respectively. Overall, the correlation between perfusion values and iodine concentrations was high (0.77), with correlation of 0.89 in tumor and of 0.56 in healthy pancreatic tissue at topt. Comparing radiation exposure associated with a single DECT acquisition at topt (0.18 mSv) to that of an 80 kVp CT perfusion sequence (2.96 mSv) indicates that an average reduction of Deff by 94% could be achieved by replacing conventional CT perfusion with a single-acquisition DECT iodine concentration map. Quantitative iodine concentration maps obtained with DECT correlate well with conventional abdominal CT perfusion measurements, suggesting that quantitative iodine maps calculated from a single DECT acquisition at an organ-specific and patient-specific optimum time of acquisition might be able to replace conventional abdominal CT perfusion measurements if the time of acquisition is carefully calibrated. This could lead to large reductions of radiation exposure to the patients while offering quantitative perfusion data for diagnosis.

  19. The use of computer-assisted image analysis in the evaluation of the effect of management systems on changes in the color, chemical composition and texture of m. longissimus dorsi in pigs.

    PubMed

    Zapotoczny, Piotr; Kozera, Wojciech; Karpiesiuk, Krzysztof; Pawłowski, Rodian

    2014-08-01

    The effect of management systems on selected physical properties and chemical composition of m. longissimus dorsi was studied in pigs. Muscle texture parameters were determined by computer-assisted image analysis, and the color of muscle samples was evaluated using a spectrophotometer. Highly significant correlations were observed between chemical composition and selected texture variables in the analyzed images. Chemical composition was not correlated with color or spectral distribution. Subject to the applied classification methods and groups of variables included in the classification model, the experimental groups were identified correctly in 35-95%. No significant differences in the chemical composition of m. longissimus dorsi were observed between experimental groups. Significant differences were noted in color lightness (L*) and redness (a*). Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Effect of microvascular distribution and its density on interstitial fluid pressure in solid tumors: A computational model.

    PubMed

    Mohammadi, M; Chen, P

    2015-09-01

    Solid tumors with different microvascular densities (MVD) have been shown to have different outcomes in clinical studies. Other studies have demonstrated the significant correlation between high MVD, elevated interstitial fluid pressure (IFP) and metastasis in cancers. Elevated IFP in solid tumors prevents drug macromolecules reaching most cancerous cells. To overcome this barrier, antiangiogenesis drugs can reduce MVD within the tumor and lower IFP. A quantitative approach is essential to compute how much reduction in MVD is required for a specific tumor to reach a desired amount of IFP for drug delivery purposes. Here we provide a computational framework to investigate how IFP is affected by the tumor size, the MVD, and location of vessels within the tumor. A general physiologically relevant tumor type with a heterogenous vascular structure surrounded by normal tissue is utilized. Then the continuity equation, Darcy's law, and Starling's equation are applied in the continuum mechanics model, which can calculate IFP for different cases of solid tumors. High MVD causes IFP elevation in solid tumors, and IFP distribution correlates with microvascular distribution within tumor tissue. However, for tumors with constant MVD but different microvascular structures, the average values of IFP were found to be the same. Moreover, for a constant MVD and vascular distribution, an increase in tumor size leads to increased IFP. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Analysis of in-flight boundary-layer state measurements on a subsonic transport wing in high-lift configuration

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Los, S. M.; Miley, S. J.; Yip, L. P.; Banks, D. W.; Roback, V. E.; Bertelrud, A.

    1995-01-01

    Flight experiments on NASA Langley's B737-100 (TSRV) airplane have been conducted to document flow characteristics in order to further the understanding of high-lift flow physics, and to correlate and validate computational predictions and wind-tunnel measurements. The project is a cooperative effort involving NASA, industry, and universities. In addition to focusing on in-flight measurements, the project includes extensive application of various computational techniques, and correlation of flight data with computational results and wind-tunnel measurements. Results obtained in the most recent phase of flight experiments are analyzed and presented in this paper. In-flight measurements include surface pressure distributions, measured using flush pressure taps and pressure belts on the slats, main element, and flap elements; surface shear stresses, measured using Preston tubes; off-surface velocity distributions, measured using shear-layer rakes; aeroelastic deformations of the flap elements, measured using an optical positioning system; and boundary-layer transition phenomena, measured using hot-film anemometers and an infrared imaging system. The analysis in this paper primarily focuses on changes in the boundary-layer state that occurred on the slats, main element, and fore flap as a result of changes in flap setting and/or flight condition. Following a detailed description of the experiment, the boundary-layer state phenomenon will be discussed based on data measured during these recent flight experiments.

  2. Degradation of metallic materials studied by correlative tomography

    NASA Astrophysics Data System (ADS)

    Burnett, T. L.; Holroyd, N. J. H.; Lewandowski, J. J.; Ogurreck, M.; Rau, C.; Kelley, R.; Pickering, E. J.; Daly, M.; Sherry, A. H.; Pawar, S.; Slater, T. J. A.; Withers, P. J.

    2017-07-01

    There are a huge array of characterization techniques available today and increasingly powerful computing resources allowing for the effective analysis and modelling of large datasets. However, each experimental and modelling tool only spans limited time and length scales. Correlative tomography can be thought of as the extension of correlative microscopy into three dimensions connecting different techniques, each providing different types of information, or covering different time or length scales. Here the focus is on the linking of time lapse X-ray computed tomography (CT) and serial section electron tomography using the focussed ion beam (FIB)-scanning electron microscope to study the degradation of metals. Correlative tomography can provide new levels of detail by delivering a multiscale 3D picture of key regions of interest. Specifically, the Xe+ Plasma FIB is used as an enabling tool for large-volume high-resolution serial sectioning of materials, and also as a tool for preparation of microscale test samples and samples for nanoscale X-ray CT imaging. The exemplars presented illustrate general aspects relating to correlative workflows, as well as to the time-lapse characterisation of metal microstructures during various failure mechanisms, including ductile fracture of steel and the corrosion of aluminium and magnesium alloys. Correlative tomography is already providing significant insights into materials behaviour, linking together information from different instruments across different scales. Multiscale and multifaceted work flows will become increasingly routine, providing a feed into multiscale materials models as well as illuminating other areas, particularly where hierarchical structures are of interest.

  3. "Blind spots" in forensic autopsy: improved detection of retrobulbar hemorrhage and orbital lesions by postmortem computed tomography (PMCT).

    PubMed

    Flach, P M; Egli, T C; Bolliger, S A; Berger, N; Ampanozi, G; Thali, M J; Schweitzer, W

    2014-09-01

    The purpose of this study was to correlate the occurrence of retrobulbar hemorrhage (RBH) with mechanism of injury, external signs and autopsy findings to postmortem computed tomography (PMCT). Six-teen subjects presented with RBH and underwent PMCT, external inspection and conventional autopsy. External inspection was evaluated for findings of the bulbs, black eye, raccoon eyes and Battle's sign. Fractures of the viscerocranium, orbital lesions and RBH were evaluated by PMCT. Autopsy and PMCT was evaluated for orbital roof and basilar skull fracture. The leading manner of death was accident with central regulatory failure in cases of RBH (31.25%). Imaging showed a high sensitivity in detection of orbital roof and basilar skull fractures (100%), but was less specific compared to autopsy. Volume of RBH (0.1-2.4ml) correlated positively to the presence of Battle's sign (p<0.06) and the postmortem interval. Ecchymosis on external inspection correlated with RBH. There was a statistical significant correlation between bulbar lesion and RBH. Orbital roof fracture count weakly correlated with the total PMCT derived RBH volume. Maxillary hemosinus correlated to maxillary fractures, but not to RBH. RBH are a specific finding in forensically relevant head trauma. PMCT is an excellent tool in detecting and quantifying morphological trauma findings particularly in the viscerocranium, one of the most relevant "blind spots" of classic autopsy. PMCT was superior in detecting osseous lesions, scrutinizing autopsy as the gold standard. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.

  5. Deriving Lifetime Maps in the Time/Frequency Domain of Coherent Structures in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan

    2008-01-01

    The lifetimes of coherent structures are derived from data correlated over a 3 sensor array sampling streamwise sidewall pressure at high Reynolds number (> 10(exp 8)). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors cross spectra and are shown to compare favorably with Efimtsov s prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetime of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.

  6. CT-derived indices of canine osteosarcoma-affected antebrachial strength.

    PubMed

    Garcia, Tanya C; Steffey, Michele A; Zwingenberger, Allison L; Daniel, Leticia; Stover, Susan M

    2017-05-01

    To improve the prediction of fractures in dogs with bone tumors of the distal radius by identifying computed tomography (CT) indices that correlate with antebrachial bone strength and fracture location. Prospective experimental study. Dogs with antebrachial osteosarcoma (n = 10), and normal cadaver bones (n=9). Antebrachia were imaged with quantitative CT prior to biomechanical testing to failure. CT indices of structural properties were compared to yield force and maximum force using Pearson correlation tests. Straight beam failure (Fs), axial rigidity, curved beam failure (Fc), and craniocaudal bending moment of inertia (MOICrCd) CT indices most highly correlated (0.77 > R > 0.57) with yield and maximum forces when iOSA-affected and control bones were included in the analysis. Considering only OSA-affected bones, Fs, Fc, and axial rigidity correlated highly (0.85 > R > 0.80) with maximum force. In affected bones, the location of minimum axial rigidity and maximum MOICrCd correlated highly (R > 0.85) with the actual fracture location. CT-derived axial rigidity, Fs, and MOICrCd have strong linear relationships with yield and maximum force. These indices should be further evaluated prospectively in OSA-affected dogs that do, and do not, experience pathologic fracture. © 2017 The American College of Veterinary Surgeons.

  7. [Gemstone computed tomography in the evaluation of material distribution in pulmonary parenchyma for pulmonary embolism].

    PubMed

    Zhang, Lan; Lü, Lei; Wu, Hua-wei; Zhang, Hao; Zhang, Ji-wei

    2011-12-06

    To present our initial experiences with pulmonary high-definition multidetector computed tomography (HDCT) in patients with acute venous thromboembolism (AVTE) to evaluate their corresponding clinical manifestations. Since December 2009 to March 2010, 23 AVTE patients underwent HDCT at our hospital. Pulmonary embolism (PE) was diagnosed based on the 3D-reconstructed images of computed tomography pulmonary angiography (CTPA). The post processed data were collected by spectral imaging system software to detect the iodine distribution maps. Perfusion defects, calculated as the values of iodine content, were compared with those of normal lung parenchymal perfusion in the absence of PE. Among them, 14 AVTE patients were definitely diagnosed with PE. Prior to anticoagulant therapy, their values of iodine content in defective perfusion area were significantly lower than those in normal perfusion area. After a 3-month anticoagulant therapy, the values of iodine content for the defective perfusion area increased significantly (P < 0.05). There was no significant correlation between the values of iodine content for segmental/subsegmental filling defect area and clinical risk score of DVT (r = 2.68, P > 0.05). But there was a significant negative correlation between the values of iodine content for segmental/subsegmental filling defection area and clinical probability score of PE (r = 0.78, P < 0.05). HDCT is a promising modality of visualizing pulmonary microvasculature as a correlative manifestation of regional perfusion. PE results in hypoperfusion with decreased values of iodine content in affected lung parenchyma. Hemodynamic changes in affected areas correlate with the severity of clinical manifestations of PE.

  8. Forensic postmortem computed tomography: volumetric measurement of the heart and liver.

    PubMed

    Jakobsen, Lykke Schrøder; Lundemose, Sissel; Banner, Jytte; Lynnerup, Niels; Jacobsen, Christina

    2016-12-01

    The purpose of this study was to investigate the utility of postmortem computed tomography (PMCT) images in estimating organ sizes and to examine the use of the cardiothoracic ratio (CTR). We included 45 individuals (19 females), who underwent a medico-legal autopsy. Using the computer software program Mimics ® , we determined in situ heart and liver volumes derived from linear measurements (width, height and depth) on a whole body PMCT-scan, and compared the volumes with ex vivo volumes derived by CT-scan of the eviscerated heart and liver. The ex vivo volumes were also compared with the organ weights. Further, we compared the CTR with the ex vivo heart volume and a heart weight-ratio (HWR). Intra- and inter-observer analyses were performed. We found no correlation between the in situ and ex vivo volumes of the heart and liver. However, a highly significant correlation was found between the ex vivo volumes and weights of the heart and liver. No correlations between CTR and the ex vivo heart volume nor with HWR was found. Concerning cardiomegaly, we found no agreement between the CTR and HWR. The intra- and inter-observer analyses showed no significant differences. Noninvasive in situ PMCT methods for organ measuring, as performed in this study, are not useful tools in forensic pathology. The best method to estimate organ volume is a CT-scan of the eviscerated organ. PMCT-determined CTR seems to be useless for ascertaining cardiomegaly, as it neither correlated with the ex vivo heart volume nor with the HWR.

  9. Nurses' computer literacy and attitudes towards the use of computers in health care.

    PubMed

    Gürdaş Topkaya, Sati; Kaya, Nurten

    2015-05-01

    This descriptive and cross-sectional study was designed to address nurses' computer literacy and attitudes towards the use of computers in health care and to determine the correlation between these two variables. This study was conducted with the participation of 688 nurses who worked at two university-affiliated hospitals. These nurses were chosen using a stratified random sampling method. The data were collected using the Multicomponent Assessment of Computer Literacy and the Pretest for Attitudes Towards Computers in Healthcare Assessment Scale v. 2. The nurses, in general, had positive attitudes towards computers, and their computer literacy was good. Computer literacy in general had significant positive correlations with individual elements of computer competency and with attitudes towards computers. If the computer is to be an effective and beneficial part of the health-care system, it is necessary to help nurses improve their computer competency. © 2014 Wiley Publishing Asia Pty Ltd.

  10. Correlation of HIFiRE-5 Flight Data with Computed Pressure and Heat Transfer (Postprint)

    DTIC Science & Technology

    2015-06-01

    AFRL-RQ-WP-TP-2015-0149 CORRELATION OF HIFiRE-5 FLIGHT DATA WITH COMPUTED PRESSURE AND HEAT TRANSFER (POSTPRINT) Joseph S. Jewell...results with St was compared to flight heat transfer measurements, and transition locations were inferred. Finally, a computational heat conduction...HIFiRE-5 Flight Data With Computed Pressure and Heat Transfer Joseph S. Jewell,1 James H. Miller,2 and Roger L. Kimmel3 U.S. Air Force Research

  11. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.

  12. The efficacy of World Wide Web-mediated microcomputer-based laboratory activities in the high school physics classroom

    NASA Astrophysics Data System (ADS)

    Slykhuis, David A.

    This research project examined the efficacy of an online microcomputer-based laboratory based (MBL) physics unit. One hundred and fifty physics students from five high schools in North Carolina were divided into online and classroom groups. The classroom group completed the MBL unit in small groups with assistance from their teachers. The online groups completed the MBL unit in small groups using a website designed for this project for guidance. Pre- and post-unit content specific tests and surveys were given. Statistical analysis of the content tests showed significant development of conceptual understanding by the online group over the course of the unit. There was not a significant difference between the classroom and online group with relation to the amount of conceptual understanding developed. Correlations with post-test achievement showed that pre-test scores and math background were the most significant correlates with success. Computer related variables, such as computer comfort and online access, were only mildly correlated with the online group. Students' views about the nature of physics were not well developed prior to the unit and did not significantly change over the course of the unit. Examination of the students' physics conceptions after instruction revealed common alternative conceptions such as confusing position and velocity variables and incorrect interpretations of graphical features such as slope.

  13. Metabolism and development – integration of micro computed tomography data and metabolite profiling reveals metabolic reprogramming from floral initiation to silique development

    PubMed Central

    Bellaire, Anke; Ischebeck, Till; Staedler, Yannick; Weinhaeuser, Isabell; Mair, Andrea; Parameswaran, Sriram; Ito, Toshiro; Schönenberger, Jürg; Weckwerth, Wolfram

    2014-01-01

    The interrelationship of morphogenesis and metabolism is a poorly studied phenomenon. The main paradigm is that development is controlled by gene expression. The aim of the present study was to correlate metabolism to early and late stages of flower and fruit development in order to provide the basis for the identification of metabolic adjustment and limitations. A highly detailed picture of morphogenesis is achieved using nondestructive micro computed tomography. This technique was used to quantify morphometric parameters of early and late flower development in an Arabidopsis thaliana mutant with synchronized flower initiation. The synchronized flower phenotype made it possible to sample enough early floral tissue otherwise not accessible for metabolomic analysis. The integration of metabolomic and morphometric data enabled the correlation of metabolic signatures with the process of flower morphogenesis. These signatures changed significantly during development, indicating a pronounced metabolic reprogramming in the tissue. Distinct sets of metabolites involved in these processes were identified and were linked to the findings of previous gene expression studies of flower development. High correlations with basic leucine zipper (bZIP) transcription factors and nitrogen metabolism genes involved in the control of metabolic carbon : nitrogen partitioning were revealed. Based on these observations a model for metabolic adjustment during flower development is proposed. PMID:24350948

  14. Algorithmic implementation of particle-particle ladder diagram approximation to study strongly-correlated metals and semiconductors

    NASA Astrophysics Data System (ADS)

    Prayogi, A.; Majidi, M. A.

    2017-07-01

    In condensed-matter physics, strongly-correlated systems refer to materials that exhibit variety of fascinating properties and ordered phases, depending on temperature, doping, and other factors. Such unique properties most notably arise due to strong electron-electron interactions, and in some cases due to interactions involving other quasiparticles as well. Electronic correlation effects are non-trivial that one may need a sufficiently accurate approximation technique with quite heavy computation, such as Quantum Monte-Carlo, in order to capture particular material properties arising from such effects. Meanwhile, less accurate techniques may come with lower numerical cost, but the ability to capture particular properties may highly depend on the choice of approximation. Among the many-body techniques derivable from Feynman diagrams, we aim to formulate algorithmic implementation of the Ladder Diagram approximation to capture the effects of electron-electron interactions. We wish to investigate how these correlation effects influence the temperature-dependent properties of strongly-correlated metals and semiconductors. As we are interested to study the temperature-dependent properties of the system, the Ladder diagram method needs to be applied in Matsubara frequency domain to obtain the self-consistent self-energy. However, at the end we would also need to compute the dynamical properties like density of states (DOS) and optical conductivity that are defined in the real frequency domain. For this purpose, we need to perform the analytic continuation procedure. At the end of this study, we will test the technique by observing the occurrence of metal-insulator transition in strongly-correlated metals, and renormalization of the band gap in strongly-correlated semiconductors.

  15. Relationship between delta power and the electrocardiogram-derived cardiopulmonary spectrogram: possible implications for assessing the effectiveness of sleep.

    PubMed

    Thomas, Robert Joseph; Mietus, Joseph E; Peng, Chung-Kang; Guo, Dan; Gozal, David; Montgomery-Downs, Hawley; Gottlieb, Daniel J; Wang, Cheng-Yen; Goldberger, Ary L

    2014-01-01

    The physiologic relationship between slow-wave activity (SWA) (0-4 Hz) on the electroencephalogram (EEG) and high-frequency (0.1-0.4 Hz) cardiopulmonary coupling (CPC) derived from electrocardiogram (ECG) sleep spectrograms is not known. Because high-frequency CPC appears to be a biomarker of stable sleep, we tested the hypothesis that that slow-wave EEG power would show a relatively fixed-time relationship to periods of high-frequency CPC. Furthermore, we speculated that this correlation would be independent of conventional nonrapid eye movement (NREM) sleep stages. We analyzed selected datasets from an archived polysomnography (PSG) database, the Sleep Heart Health Study I (SHHS-I). We employed the cross-correlation technique to measure the degree of which 2 signals are correlated as a function of a time lag between them. Correlation analyses between high-frequency CPC and delta power (computed both as absolute and normalized values) from 3150 subjects with an apnea-hypopnea index (AHI) of ≤5 events per hour of sleep were performed. The overall correlation (r) between delta power and high-frequency coupling (HFC) power was 0.40±0.18 (P=.001). Normalized delta power provided improved correlation relative to absolute delta power. Correlations were somewhat reduced in the second half relative to the first half of the night (r=0.45±0.20 vs r=0.34±0.23). Correlations were only affected by age in the eighth decade. There were no sex differences and only small racial or ethnic differences were noted. These results support a tight temporal relationship between slow wave power, both within and outside conventional slow wave sleep periods, and high frequency cardiopulmonary coupling, an ECG-derived biomarker of "stable" sleep. These findings raise mechanistic questions regarding the cross-system integration of neural and cardiopulmonary control during sleep. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Optical image encryption via high-quality computational ghost imaging using iterative phase retrieval

    NASA Astrophysics Data System (ADS)

    Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand

    2018-07-01

    A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into the phase-only masks by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly small number of phase-only masks are used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.

  17. Outcome assessment via handheld computer in community mental health: consumer satisfaction and reliability.

    PubMed

    Goldstein, Lizabeth A; Connolly Gibbons, Mary Beth; Thompson, Sarah M; Scott, Kelli; Heintz, Laura; Green, Patricia; Thompson, Donald; Crits-Christoph, Paul

    2011-07-01

    Computerized administration of mental health-related questionnaires has become relatively common, but little research has explored this mode of assessment in "real-world" settings. In the current study, 200 consumers at a community mental health center completed the BASIS-24 via handheld computer as well as paper and pen. Scores on the computerized BASIS-24 were compared with scores on the paper BASIS-24. Consumers also completed a questionnaire which assessed their level of satisfaction with the computerized BASIS-24. Results indicated that the BASIS-24 administered via handheld computer was highly correlated with pen and paper administration of the measure and was generally acceptable to consumers. Administration of the BASIS-24 via handheld computer may allow for efficient and sustainable outcomes assessment, adaptable research infrastructure, and maximization of clinical impact in community mental health agencies.

  18. Variable Selection through Correlation Sifting

    NASA Astrophysics Data System (ADS)

    Huang, Jim C.; Jojic, Nebojsa

    Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.

  19. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: I. Principles and some general algorithms.

    PubMed

    Langenbucher, Frieder

    2002-01-01

    Most computations in the field of in vitro/in vivo correlations can be handled directly by Excel worksheets, without the need for specialized software. Following a summary of Excel features, applications are illustrated for numerical computation of AUC and Mean, Wagner-Nelson and Loo-Riegelman absorption plots, and polyexponential curve fitting.

  20. [The comparison of the expansion of polyps according to the Ki-67 and computed tomography scores].

    PubMed

    Aydin, Sedat; Sanli, Arif; Tezer, Ilter; Hardal, Umit; Barişik, Nagehan Ozdemir

    2009-01-01

    The disease extention in nasal polyps was compared by using the mitotic activity rates and the computed tomography scores. This study was conducted on 19 nasal polyposis patients (8 males, 11 females; mean age 40.0+/-13.7 years; range 20 to 63 years). The preoperative computed tomography records of the patients were evaluated according to the Lund-Mackay grading system. The polyp tissues of the same patients were stained with the Ki-67 antigen for immunohistochemical evaluation. The correlation between the radiologic results and the Ki-67 values was compared by means of the Spearman's correlation test. The mean computed tomography score was observed as 14.3+/-4.7 (range 7-24). The mean Ki-67 score resulting from the immunohistochemical staining was calculated as 24.3+/-18.5 (range 3.3-73.5%). A significant correlation was determined between the Ki-67 values and the computed tomography scores. ("Spearman's" correlation factor: 0.677; p<0.001). As the mitotic activity rate of nasal polyps increases, both the volume of the polyps and the computed tomography scores increase as a result of the blockage of the sinus ostiums by the increased polyp volume.

  1. Thermodynamic equilibrium-air correlations for flowfield applications

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.

    1981-01-01

    Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.

  2. Strong correlation in incremental full configuration interaction

    NASA Astrophysics Data System (ADS)

    Zimmerman, Paul M.

    2017-06-01

    Incremental Full Configuration Interaction (iFCI) reaches high accuracy electronic energies via a many-body expansion of the correlation energy. In this work, the Perfect Pairing (PP) ansatz replaces the Hartree-Fock reference of the original iFCI method. This substitution captures a large amount of correlation at zero-order, which allows iFCI to recover the remaining correlation energy with low-order increments. The resulting approach, PP-iFCI, is size consistent, size extensive, and systematically improvable with increasing order of incremental expansion. Tests on multiple single bond, multiple double bond, and triple bond dissociations of main group polyatomics using double and triple zeta basis sets demonstrate the power of the method for handling strong correlation. The smooth dissociation profiles that result from PP-iFCI show that FCI-quality ground state computations are now within reach for systems with up to about 10 heavy atoms.

  3. Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations

    DTIC Science & Technology

    2010-11-01

    from a training set of compounds. Other methods include Quantitative Struc- ture-Activity Relationship ( QSAR ) and Quantitative Structure-Property...26 28 the development of QSPR/ QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, to improve...Quadratic Configuration Interaction Singles Doubles QSAR Quantitative Structure-Activity Relationship QSPR Quantitative Structure-Property

  4. Fragmentation of Thinking Structure's Students to Solving the Problem of Application Definite Integral in Area

    ERIC Educational Resources Information Center

    Wibawa, Kadek Adi; Nusantara, Toto; Subanji; Parta, I. Nengah

    2017-01-01

    This study aims to reveal the fragmentation of thinking structure's students in solving the problems of application definite integral in area. Fragmentation is a term on the computer (storage) that is highly relevant correlated with theoretical constructions that occur in the human brain (memory). Almost every student has a different way to…

  5. The density-magnetic field relation in the atomic ISM

    NASA Astrophysics Data System (ADS)

    Gazol, A.; Villagran, M. A.

    2018-07-01

    We present numerical experiments aimed to study the correlation between the magnetic field strength, B, and the density, n, in the cold atomic interstellar medium (CNM). We analyse 24 magnetohydrodynamic models with different initial magnetic field intensities (B0 = 0.4, 2.1, 4.2, and 8.3 μG) and/or mean densities (2, 3, and 4 cm-3), in the presence of driven and decaying turbulence, with and without self-gravity, in a cubic computational domain with 100 pc by side. Our main findings are as follows: (i) For forced simulations that reproduce the main observed physical conditions of the CNM in the solar neighbourhood, a positive correlation between B and n develops for all the B0 values. (ii) The density at which this correlation becomes significant (≲30 cm-3) depends on B0 but is not sensitive to the presence of self-gravity. (iii) The effect of self-gravity, when noticeable, consists of producing a shallower correlation at high densities, suggesting that, in the studied regime, self-gravity induces motions along the field lines. (iv) Self-gravitating decaying models where the CNM is subsonic and sub-Alfvénic with β ≲ 1 develop a high-density positive correlation whose slopes are consistent with a constant β(n). (v) Decaying models where the low-density CNM is subsonic and sub-Alfvénic with β > 1 show a negative correlation at intermediate densities, followed by a high-density positive correlation.

  6. The Density-Magnetic Field Relation in the Atomic ISM

    NASA Astrophysics Data System (ADS)

    Gazol, A.; Villagran, M. A.

    2018-04-01

    We present numerical experiments aimed to study the correlation between the magnetic field strength, B, and the density, n, in the cold atomic interstellar medium (CNM). We analyze 24 magneto-hydrodynamic models with different initial magnetic field intensities (B0 =0.4, 2.1, 4.2, and 8.3 μG) and/or mean densities (2, 3, and 4 cm-3), in the presence of driven and decaying turbulence, with and without self-gravity, in a cubic computational domain with 100 pc by side. Our main findings are: i) For forced simulations, which reproduce the main observed physical conditions of the CNM in the Solar neighborhood, a positive correlation between B and n develops for all the B0 values. ii) The density at which this correlation becomes significant (≲ 30 cm-3) depends on B0 but is not sensitive to the presence of self-gravity. iii) The effect of self-gravity, when noticeable, consists of producing a shallower correlation at high densities, suggesting that, in the studied regime, self-gravity induces motions along the field lines. iv) Self-gravitating decaying models where the CNM is subsonic and sub-Alfvénic with β ≲ 1 develop a high density positive correlation whose slopes are consistent with a constant β(n). v) Decaying models where the low density CNM is subsonic and sub-Alfvénic with β > 1 show a negative correlation at intermediate densities, followed by a high density positive correlation.

  7. Evaluation of correlation between CT image features and ERCC1 protein expression in assessing lung cancer prognosis

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Emaminejad, Nastaran; Qian, Wei; Sun, Shenshen; Kang, Yan; Guan, Yubao; Lure, Fleming; Zheng, Bin

    2014-03-01

    Stage I non-small-cell lung cancers (NSCLC) usually have favorable prognosis. However, high percentage of NSCLC patients have cancer relapse after surgery. Accurately predicting cancer prognosis is important to optimally treat and manage the patients to minimize the risk of cancer relapse. Studies have shown that an excision repair crosscomplementing 1 (ERCC1) gene was a potentially useful genetic biomarker to predict prognosis of NSCLC patients. Meanwhile, studies also found that chronic obstructive pulmonary disease (COPD) was highly associated with lung cancer prognosis. In this study, we investigated and evaluated the correlations between COPD image features and ERCC1 gene expression. A database involving 106 NSCLC patients was used. Each patient had a thoracic CT examination and ERCC1 genetic test. We applied a computer-aided detection scheme to segment and quantify COPD image features. A logistic regression method and a multilayer perceptron network were applied to analyze the correlation between the computed COPD image features and ERCC1 protein expression. A multilayer perceptron network (MPN) was also developed to test performance of using COPD-related image features to predict ERCC1 protein expression. A nine feature based logistic regression analysis showed the average COPD feature values in the low and high ERCC1 protein expression groups are significantly different (p < 0.01). Using a five-fold cross validation method, the MPN yielded an area under ROC curve (AUC = 0.669±0.053) in classifying between the low and high ERCC1 expression cases. The study indicates that CT phenotype features are associated with the genetic tests, which may provide supplementary information to help improve accuracy in assessing prognosis of NSCLC patients.

  8. Functional brain networks associated with eating behaviors in obesity.

    PubMed

    Park, Bo-Yong; Seo, Jongbum; Park, Hyunjin

    2016-03-31

    Obesity causes critical health problems including diabetes and hypertension that affect billions of people worldwide. Obesity and eating behaviors are believed to be closely linked but their relationship through brain networks has not been fully explored. We identified functional brain networks associated with obesity and examined how the networks were related to eating behaviors. Resting state functional magnetic resonance imaging (MRI) scans were obtained for 82 participants. Data were from an equal number of people of healthy weight (HW) and non-healthy weight (non-HW). Connectivity matrices were computed with spatial maps derived using a group independent component analysis approach. Brain networks and associated connectivity parameters with significant group-wise differences were identified and correlated with scores on a three-factor eating questionnaire (TFEQ) describing restraint, disinhibition, and hunger eating behaviors. Frontoparietal and cerebellum networks showed group-wise differences between HW and non-HW groups. Frontoparietal network showed a high correlation with TFEQ disinhibition scores. Both frontoparietal and cerebellum networks showed a high correlation with body mass index (BMI) scores. Brain networks with significant group-wise differences between HW and non-HW groups were identified. Parts of the identified networks showed a high correlation with eating behavior scores.

  9. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). Thismore » approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.« less

  10. Multidetector-row computed tomography enterographic assessment of the ileal-anal pouch: descriptive radiologic analysis with endoscopic and pathologic correlation.

    PubMed

    Liszewski, Mark C; Sahni, V Anik; Shyn, Paul B; Friedman, Sonia; Hornick, Jason L; Erturk, Sukru M; Mortele, Koenraad J

    2012-01-01

    To describe the multidetector-row computed tomography enterographic (MD-CTE) features of the ileal-anal pouch after ileal pouch anal anastomosis (IPAA) surgery and correlate them with pouch endoscopy and histopathologic findings. All MD-CTE examinations performed on patients who underwent IPAA from July 1, 2005 to December 1, 2010 (n = 35; 16 [45.7%] men; mean age, 37.7 years; age range, 22-72 years) were retrospectively evaluated in consensus by 2 radiologists. All studies were evaluated for the presence of multiple imaging features. Two radiographic scores were then calculated: a total radiographic score and a radiographic active inflammation score. In patients who underwent MD-CTE, pouch endoscopy, and biopsy within 30 days (n = 13), both scores were correlated with findings on pouch endoscopy and histopathology. Of the 35 patients, 33 (94%) had at least one MD-CTE finding of active or chronic pouch inflammation and 27 patients (77%) had at least one MD-CTE finding of active pouch inflammation. Of the 13 patients who underwent endoscopy and biopsy, the total radiographic score demonstrated a strong positive correlation with endoscopic score (r = 0.81; P = 0.001) and a moderate positive correlation with histopathologic score (r = 0.56; P = 0.047). The radiographic active inflammation score demonstrated a strong positive correlation with endoscopic score (r = 0.83; P = 0.0004), but only a weak nonsignificant positive correlation with histopathologic score (r = 0.492, P = 0.087). In patients who had IPAA surgery, findings on MD-CTE correlate positively with findings on pouch endoscopy and histopathology and are sensitive measures for pouch inflammation with high positive predictive value. Thus, MD-CTE can be a useful noninvasive test in the early evaluation of symptomatic patients.

  11. Computational Nuclear Physics and Post Hartree-Fock Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.

    We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, allowing thereby an eventual reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions onmore » strategies for porting the code to present and planned high-performance computing facilities.« less

  12. Accurate virial coefficients of gaseous krypton from state-of-the-art ab initio potential and polarizability of the krypton dimer

    NASA Astrophysics Data System (ADS)

    Song, Bo; Waldrop, Jonathan M.; Wang, Xiaopo; Patkowski, Konrad

    2018-01-01

    We have developed a new krypton-krypton interaction-induced isotropic dipole polarizability curve based on high-level ab initio methods. The determination was carried out using the coupled-cluster singles and doubles plus perturbative triples method with very large basis sets up to augmented correlation-consistent sextuple zeta as well as the corrections for core-core and core-valence correlation and relativistic effects. The analytical function of polarizability and our recently constructed reference interatomic potential [J. M. Waldrop et al., J. Chem. Phys. 142, 204307 (2015)] were used to predict the thermophysical and electromagnetic properties of krypton gas. The second pressure, acoustic, and dielectric virial coefficients were computed for the temperature range of 116 K-5000 K using classical statistical mechanics supplemented with high-order quantum corrections. The virial coefficients calculated were compared with the generally less precise available experimental data as well as with values computed from other potentials in the literature {in particular, the recent highly accurate potential of Jäger et al. [J. Chem. Phys. 144, 114304 (2016)]}. The detailed examination in this work suggests that the present theoretical prediction can be applied as reference values in disciplines involving thermophysical and electromagnetic properties of krypton gas.

  13. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.

    1996-12-17

    A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.

  14. Factors influencing medical informatics examination grade--can biorhythm, astrological sign, seasonal aspect, or bad statistics predict outcome?

    PubMed

    Petrovecki, Mladen; Rahelić, Dario; Bilić-Zulle, Lidija; Jelec, Vjekoslav

    2003-02-01

    To investigate whether and to what extent various parameters, such as individual characteristics, computer habits, situational factors, and pseudoscientific variables, influence Medical Informatics examination grade, and how inadequate statistical analysis can lead to wrong conclusions. The study included a total of 382 second-year undergraduate students at the Rijeka University School of Medicine in the period from 1996/97 to 2000/01 academic year. After passing the Medical Informatics exam, students filled out an anonymous questionnaire about their attitude toward learning medical informatics. They were asked to grade the course organization and curriculum content, and provide their date of birth; sex; study year; high school grades; Medical Informatics examination grade, type, and term; and describe their computer habits. From these data, we determined their zodiac signs and biorhythm. Data were compared by the use of t-test, one-way ANOVA with Tukey's honest significance difference test, and randomized complete block design ANOVA. Out of 21 variables analyzed, only 10 correlated with the average grade. Students taking Medical Informatics examination in the 1998/99 academic year earned lower average grade than any other generation. Significantly higher Medical Informatics exam grade was earned by students who finished a grammar high school; owned and regularly used a computer, Internet, and e-mail (p< or =0.002 for all items); passed an oral exam without taking a written test (p=0.004), or did not repeat the exam (p<0.001). Better high-school students and students with better grades from high-school informatics course also scored significantly better (p=0.032 and p<0.001, respectively). Grade in high-school mathematics, student's sex, and time of year when the examination was taken were not related to the grade, and neither were pseudoscientific parameters, such as student zodiac sign, zodiac sign quality, or biorhythm cycles, except when intentionally inadequate statistics was used for data analysis. Medical Informatics examination grades correlated with general learning capacity and computer habits of students, but showed no relation to other investigated parameters, such as examination term or pseudoscientific parameters. Inadequate statistical analysis can always confirm false conclusions.

  15. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  16. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    PubMed

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns are used as encryption key to multiply with the 1D data stream. The cipher text is obtained by summing the weighted encryption key. The decryption process can be realized by correlation measurement between the encrypted information and the encryption key. Due to the instinct high-level randomness of the key, the security of this method is greatly guaranteed. The feasibility of this method and robustness against both occlusion and additional noise attacks are discussed with simulation, respectively.

  17. Role of Wind Tunnels and Computer Codes in the Certification and Qualification of Rotorcraft for Flight in Forecast Icing

    NASA Technical Reports Server (NTRS)

    Flemming, Robert J.; Britton, Randall K.; Bond, Thomas H.

    1994-01-01

    The cost and time to certify or qualify a rotorcraft for flight in forecast icing has been a major impediment to the development of ice protection systems for helicopter rotors. Development and flight test programs for those aircraft that have achieved certification or qualification for flight in icing conditions have taken many years, and the costs have been very high. NASA, Sikorsky, and others have been conducting research into alternative means for providing information for the development of ice protection systems, and subsequent flight testing to substantiate the air-worthiness of a rotor ice protection system. Model rotor icing tests conducted in 1989 and 1993 have provided a data base for correlation of codes, and for the validation of wind tunnel icing test techniques. This paper summarizes this research, showing test and correlation trends as functions of cloud liquid water content, rotor lift, flight speed, and ambient temperature. Molds were made of several of the ice formations on the rotor blades. These molds were used to form simulated ice on the rotor blades, and the blades were then tested in a wind tunnel to determine flight performance characteristics. These simulated-ice rotor performance tests are discussed in the paper. The levels of correlation achieved and the role of these tools (codes and wind tunnel tests) in flight test planning, testing, and extension of flight data to the limits of the icing envelope are discussed. The potential application of simulated ice, the NASA LEWICE computer, the Sikorsky Generalized Rotor Performance aerodynamic computer code, and NASA Icing Research Tunnel rotor tests in a rotorcraft certification or qualification program are also discussed. The correlation of these computer codes with tunnel test data is presented, and a procedure or process to use these methods as part of a certification or qualification program is introduced.

  18. Impact of IQ, computer-gaming skills, general dexterity, and laparoscopic experience on performance with the da Vinci surgical system.

    PubMed

    Hagen, Monika E; Wagner, Oliver J; Inan, Ihsan; Morel, Philippe

    2009-09-01

    Due to improved ergonomics and dexterity, robotic surgery is promoted as being easily performed by surgeons with no special skills necessary. We tested this hypothesis by measuring IQ elements, computer gaming skills, general dexterity with chopsticks, and evaluating laparoscopic experience in correlation to performance ability with the da Vinci robot. Thirty-four individuals were tested for robotic dexterity, IQ elements, computer-gaming skills and general dexterity. Eighteen surgically inexperienced and 16 laparoscopically trained surgeons were included. Each individual performed three different tasks with the da Vinci surgical system and their times were recorded. An IQ test (elements: logical thinking, 3D imagination and technical understanding) was completed by each participant. Computer skills were tested with a simple computer game (hand-eye coordination) and general dexterity was evaluated by the ability to use chopsticks. We found no correlation between logical thinking, 3D imagination and robotic skills. Both computer gaming and general dexterity showed a slight but non-significant improvement in performance with the da Vinci robot (p > 0.05). A significant correlation between robotic skills, technical understanding and laparoscopic experience was observed (p < 0.05). The data support the conclusion that there are no significant correlations between robotic performance and logical thinking, 3D understanding, computer gaming skills and general dexterity. A correlation between robotic skills and technical understanding may exist. Laparoscopic experience seems to be the strongest predictor of performance with the da Vinci surgical system. Generally, it appears difficult to determine non-surgical predictors for robotic surgery.

  19. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still twomore » important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.« less

  20. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single-real valued filters. This introduces increased computation burden but also introduces a higher level of parallelism, that common computing platforms fail to identify. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as, vector inner product operators that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible and they have the advantage of time averaging the entire filter bank at real time rates. Time-averaged optical filtering is computational comparable to billions of digital operations-per-second. For this reason, we believe future trends in high speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  1. The relationship between venture capital investment and macro economic variables via statistical computation method

    NASA Astrophysics Data System (ADS)

    Aygunes, Gunes

    2017-07-01

    The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investments in a country. The literary depends on venture capitalists' quality and countries' venture capital investments. The aim of this paper is to give relationship between venture capital investment and macro economic variables via statistical computation method. We investigate the countries and macro economic variables. By using statistical computation method, we derive correlation between venture capital investments and macro economic variables. According to method of logistic regression model (logit regression or logit model), macro economic variables are correlated with each other in three group. Venture capitalists regard correlations as a indicator. Finally, we give correlation matrix of our results.

  2. Multicenter study of quantitative computed tomography analysis using a computer-aided three-dimensional system in patients with idiopathic pulmonary fibrosis.

    PubMed

    Iwasawa, Tae; Kanauchi, Tetsu; Hoshi, Toshiko; Ogura, Takashi; Baba, Tomohisa; Gotoh, Toshiyuki; Oba, Mari S

    2016-01-01

    To evaluate the feasibility of automated quantitative analysis with a three-dimensional (3D) computer-aided system (i.e., Gaussian histogram normalized correlation, GHNC) of computed tomography (CT) images from different scanners. Each institution's review board approved the research protocol. Informed patient consent was not required. The participants in this multicenter prospective study were 80 patients (65 men, 15 women) with idiopathic pulmonary fibrosis. Their mean age was 70.6 years. Computed tomography (CT) images were obtained by four different scanners set at different exposures. We measured the extent of fibrosis using GHNC, and used Pearson's correlation analysis, Bland-Altman plots, and kappa analysis to directly compare the GHNC results with manual scoring by radiologists. Multiple linear regression analysis was performed to determine the association between the CT data and forced vital capacity (FVC). For each scanner, the extent of fibrosis as determined by GHNC was significantly correlated with the radiologists' score. In multivariate analysis, the extent of fibrosis as determined by GHNC was significantly correlated with FVC (p < 0.001). There was no significant difference between the results obtained using different CT scanners. Gaussian histogram normalized correlation was feasible, irrespective of the type of CT scanner used.

  3. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising of 1385 responses were collected using a marking engine prototype, which has been developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and high degree of agreement with manual scores with small average absolute and mixed errors.

  4. Statistical analysis of atmospheric turbulence about a simulated block building

    NASA Technical Reports Server (NTRS)

    Steely, S. L., Jr.

    1981-01-01

    An array of towers instrumented to measure the three components of wind speed was used to study atmospheric flow about a simulated block building. Two-point spacetime correlations of the longitudinal velocity component were computed along with two-point spatial correlations. These correlations are in good agreement with fundamental concepts of fluid mechanics. The two-point spatial correlations computed directly were compared with correlations predicted by Taylor's hypothesis and excellent agreement was obtained at the higher levels which were out of the building influence. The correlations fall off significantly in the building wake but recover beyond the wake to essentially the same values in the undisturbed, higher regions.

  5. Accelerating and focusing protein-protein docking correlations using multi-dimensional rotational FFT generating functions.

    PubMed

    Ritchie, David W; Kozakov, Dima; Vajda, Sandor

    2008-09-01

    Predicting how proteins interact at the molecular level is a computationally intensive task. Many protein docking algorithms begin by using fast Fourier transform (FFT) correlation techniques to find putative rigid body docking orientations. Most such approaches use 3D Cartesian grids and are therefore limited to computing three dimensional (3D) translational correlations. However, translational FFTs can speed up the calculation in only three of the six rigid body degrees of freedom, and they cannot easily incorporate prior knowledge about a complex to focus and hence further accelerate the calculation. Furthemore, several groups have developed multi-term interaction potentials and others use multi-copy approaches to simulate protein flexibility, which both add to the computational cost of FFT-based docking algorithms. Hence there is a need to develop more powerful and more versatile FFT docking techniques. This article presents a closed-form 6D spherical polar Fourier correlation expression from which arbitrary multi-dimensional multi-property multi-resolution FFT correlations may be generated. The approach is demonstrated by calculating 1D, 3D and 5D rotational correlations of 3D shape and electrostatic expansions up to polynomial order L=30 on a 2 GB personal computer. As expected, 3D correlations are found to be considerably faster than 1D correlations but, surprisingly, 5D correlations are often slower than 3D correlations. Nonetheless, we show that 5D correlations will be advantageous when calculating multi-term knowledge-based interaction potentials. When docking the 84 complexes of the Protein Docking Benchmark, blind 3D shape plus electrostatic correlations take around 30 minutes on a contemporary personal computer and find acceptable solutions within the top 20 in 16 cases. Applying a simple angular constraint to focus the calculation around the receptor binding site produces acceptable solutions within the top 20 in 28 cases. Further constraining the search to the ligand binding site gives up to 48 solutions within the top 20, with calculation times of just a few minutes per complex. Hence the approach described provides a practical and fast tool for rigid body protein-protein docking, especially when prior knowledge about one or both binding sites is available.

  6. Constructing Nucleon Operators on a Lattice for Form Factors with High Momentum Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syritsyn, Sergey; Gambhir, Arjun S.; Musch, Bernhard U.

    We present preliminary results of computing nucleon form factor at high momentum transfer using the 'boosted' or 'momentum' smearing. We use gauge configurations generated with N f = 2 + 1dynamical Wilson-clover fermions and study the connected as well as disconnected contributions to the nucleon form factors. Our initial results indicate that boosted smearing helps to improve the signal for nucleon correlators at high momentum. However, we also find evidence for large excited state contributions, which will likely require variational analysis to isolate the boosted nucleon ground state.

  7. ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.

    PubMed

    Kim, Seongho

    2015-11-01

    Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, to the higher-order coefficient. This is because the higher-order semi-partial correlation calculation using a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented on an R package ppcor along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of the statistical significance with its test statistic.

  8. Relaxation method of compensation in an optical correlator

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Daiuto, Brian J.

    1987-01-01

    An iterative method is proposed for the sharpening of programmable filters in a 4-f optical correlator. Continuously variable spatial light modulators (SLMs) permit the fine adjustment of optical processing filters so as to compensate for the departures from ideal behavior of a real optical system. Although motivated by the development of continuously variable phase-only SLMs, the proposed sharpening method is also applicable to amplitude modulators and, with appropriate adjustments, to binary modulators as well. A computer simulation is presented that illustrates the potential effectiveness of the method: an image is placed on the input to the correlator, and its corresponding phase-only filter is adjusted (allowed to relax) so as to produce a progressively brighter and more centralized peak in the correlation plane. The technique is highly robust against the form of the system's departure from ideal behavior.

  9. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  10. Relationship of the interplanetary electric field to the high-latitude ionospheric electric field and currents Observations and model simulation

    NASA Technical Reports Server (NTRS)

    Clauer, C. R.; Banks, P. M.

    1986-01-01

    The electrical coupling between the solar wind, magnetosphere, and ionosphere is studied. The coupling is analyzed using observations of high-latitude ion convection measured by the Sondre Stromfjord radar in Greenland and a computer simulation. The computer simulation calculates the ionospheric electric potential distribution for a given configuration of field-aligned currents and conductivity distribution. The technique for measuring F-region in velocities at high time resolution over a large range of latitudes is described. Variations in the currents on ionospheric plasma convection are examined using a model of field-aligned currents linking the solar wind with the dayside, high-latitude ionosphere. The data reveal that high-latitude ionospheric convection patterns, electric fields, and field-aligned currents are dependent on IMF orientation; it is observed that the electric field, which drives the F-region plasma curve, responds within about 14 minutes to IMF variations in the magnetopause. Comparisons of the simulated plasma convection with the ion velocity measurements reveal good correlation between the data.

  11. Multiscale tomographic analysis of heterogeneous cast Al-Si-X alloys.

    PubMed

    Asghar, Z; Requena, G; Sket, F

    2015-07-01

    The three-dimensional microstructure of cast AlSi12Ni and AlSi10Cu5Ni2 alloys is investigated by laboratory X-ray computed tomography, synchrotron X-ray computed microtomography, light optical tomography and synchrotron X-ray computed microtomography with submicrometre resolution. The results obtained with each technique are correlated with the size of the scanned volumes and resolved microstructural features. Laboratory X-ray computed tomography is sufficient to resolve highly absorbing aluminides but eutectic and primary Si remain unrevealed. Synchrotron X-ray computed microtomography at ID15/ESRF gives better spatial resolution and reveals primary Si in addition to aluminides. Synchrotron X-ray computed microtomography at ID19/ESRF reveals all the phases ≥ ∼1 μm in volumes about 80 times smaller than laboratory X-ray computed tomography. The volumes investigated by light optical tomography and submicrometre synchrotron X-ray computed microtomography are much smaller than laboratory X-ray computed tomography but both techniques provide local chemical information on the types of aluminides. The complementary techniques applied enable a full three-dimensional characterization of the microstructure of the alloys at length scales ranging over six orders of magnitude. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  12. Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering☆

    PubMed Central

    Rabitz, Herschel; Welsh, William J.; Kohn, Joachim; de Boer, Jan

    2016-01-01

    The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. PMID:26876875

  13. Multivariate analysis: A statistical approach for computations

    NASA Astrophysics Data System (ADS)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a type of multivariate statistical approach commonly used in, automotive diagnosis, education evaluating clusters in finance etc and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion about factor analysis (FA) in image retrieval method and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult due to the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomaly behaviors in the network include the various attacks on the network like DDOs attacks and network scanning.

  14. Visual and computer software-aided estimates of Dupuytren's contractures: correlation with clinical goniometric measurements.

    PubMed

    Smith, R P; Dias, J J; Ullah, A; Bhowal, B

    2009-05-01

    Corrective surgery for Dupuytren's disease represents a significant proportion of a hand surgeon's workload. The decision to go ahead with surgery and the success of surgery requires measuring the degree of contracture of the diseased finger(s). This is performed in clinic with a goniometer, pre- and postoperatively. Monitoring the recurrence of the contracture can inform on surgical outcome, research and audit. We compared visual and computer software-aided estimation of Dupuytren's contractures to clinical goniometric measurements in 60 patients with Dupuytren's disease. Patients' hands were digitally photographed. There were 76 contracted finger joints--70 proximal interphalangeal joints and six distal interphalangeal joints. The degrees of contracture of these images were visually assessed by six orthopaedic staff of differing seniority and re-assessed with computer software. Across assessors, the Pearson correlation between the goniometric measurements and the visual estimations was 0.83 and this significantly improved to 0.88 with computer software. Reliability with intra-class correlations achieved 0.78 and 0.92 for the visual and computer-aided estimations, respectively, and with test-retest analysis, 0.92 for visual estimation and 0.95 for computer-aided measurements. Visual estimations of Dupuytren's contractures correlate well with actual clinical goniometric measurements and improve further if measured with computer software. Digital images permit monitoring of contracture after surgery and may facilitate research into disease progression and auditing of surgical technique.

  15. A novel frequency analysis method for assessing K(ir)2.1 and Na (v)1.5 currents.

    PubMed

    Rigby, J R; Poelzing, S

    2012-04-01

    Voltage clamping is an important tool for measuring individual currents from an electrically active cell. However, it is difficult to isolate individual currents without pharmacological or voltage inhibition. Herein, we present a technique that involves inserting a noise function into a standard voltage step protocol, which allows one to characterize the unique frequency response of an ion channel at different step potentials. Specifically, we compute the fast Fourier transform for a family of current traces at different step potentials for the inward rectifying potassium channel, K(ir)2.1, and the channel encoding the cardiac fast sodium current, Na(v)1.5. Each individual frequency magnitude, as a function of voltage step, is correlated to the peak current produced by each channel. The correlation coefficient vs. frequency relationship reveals that these two channels are associated with some unique frequencies with high absolute correlation. The individual IV relationship can then be recreated using only the unique frequencies with magnitudes of high absolute correlation. Thus, this study demonstrates that ion channels may exhibit unique frequency responses.

  16. Computer Experiences, Self-Efficacy and Knowledge of Students Enrolled in Introductory University Agriculture Courses.

    ERIC Educational Resources Information Center

    Johnson, Donald M.; Ferguson, James A.; Lester, Melissa L.

    1999-01-01

    Of 175 freshmen agriculture students, 74% had prior computer courses, 62% owned computers. The number of computer topics studied predicted both computer self-efficacy and computer knowledge. A substantial positive correlation was found between self-efficacy and computer knowledge. (SK)

  17. Effect of correlated decay on fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Lemberger, B.; Yavuz, D. D.

    2017-12-01

    We analyze noise in the circuit model of quantum computers when the qubits are coupled to a common bosonic bath and discuss the possible failure of scalability of quantum computation. Specifically, we investigate correlated (super-radiant) decay between the qubit energy levels from a two- or three-dimensional array of qubits without imposing any restrictions on the size of the sample. We first show that regardless of how the spacing between the qubits compares with the emission wavelength, correlated decay produces errors outside the applicability of the threshold theorem. This is because the sum of the norms of the two-body interaction Hamiltonians (which can be viewed as the upper bound on the single-qubit error) that decoheres each qubit scales with the total number of qubits and is unbounded. We then discuss two related results: (1) We show that the actual error (instead of the upper bound) on each qubit scales with the number of qubits. As a result, in the limit of large number of qubits in the computer, N →∞ , correlated decay causes each qubit in the computer to decohere in ever shorter time scales. (2) We find the complete eigenvalue spectrum of the exchange Hamiltonian that causes correlated decay in the same limit. We show that the spread of the eigenvalue distribution grows faster with N compared to the spectrum of the unperturbed system Hamiltonian. As a result, as N →∞ , quantum evolution becomes completely dominated by the noise due to correlated decay. These results argue that scalable quantum computing may not be possible in the circuit model in a two- or three- dimensional geometry when the qubits are coupled to a common bosonic bath.

  18. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.

  19. The Relation between Acquisition of a Theory of Mind and the Capacity to Hold in Mind.

    ERIC Educational Resources Information Center

    Gordon, Anne C. L.; Olson, David R.

    1998-01-01

    Tested hypothesized relationship between development of a theory of mind and increasing computational resources in 3- to 5-year olds. Found that the correlations between performance on theory of mind tasks and dual processing tasks were as high as r=.64, suggesting that changes in working memory capacity allow the expression of, and arguably the…

  20. Age Changes in Attention Control: Assessing the Role of Stimulus Contingencies

    ERIC Educational Resources Information Center

    Brodeur, Darlene A.

    2004-01-01

    Children (ages 5, 7, and 9 years) and young adults completed two visual attention tasks that required them to make a forced choice identification response to a target shape presented in the center of a computer screen. In the first task (high correlation condition) each target was flanked with the same distracters on 80% of the trials (valid…

  1. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    NASA Technical Reports Server (NTRS)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure to form a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair is established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.

  2. Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface

    PubMed Central

    Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.

    2014-01-01

    Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75–200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report 3 characteristic periods in evolution of the high-gamma control signal during BCI training: initial, low task accuracy with corresponding low power modulation in the gamma spectrum, followed by a second period of improved task accuracy with increasing average power separation between activity and rest, and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications in the design and implementation of BCI control algorithms. PMID:25599079

  3. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms using computing hardware more advanced than those typically seen on product MRI scanners, can result in acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.

  4. The National Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.

    2001-06-01

    The National Virtual Observatory is a distributed computational facility that will provide access to the ``virtual sky''-the federation of astronomical data archives, object catalogs, and associated information services. The NVO's ``virtual telescope'' is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-Terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high performance computational systems, high bandwidth network services, agreed upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the onset will help to assure that the NVO simultaneously becomes a global facility. .

  5. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was 1/2 to 2/3 than that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method. Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418

  6. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  7. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    PubMed

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-12-24

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo preynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems--level properties of cortical circuits.

  8. [The application of the computer technologies for the mathematical simulation of the ethmoidal labyrinth].

    PubMed

    Markeeva, M V; Mareev, O V; Nikolenko, V N; Mareev, G O; Danilova, T V; Fadeeva, E A; Fedorov, R V

    The objective of the present work was to study the relationship between the dimensions of the ethmoidal labyrinth and the skull in the subjects differing in the nose shape by means of the factorial and correlation analysis with the application of the modern computer-assisted methods for the three-dimensional reconstruction of the skull. We developed an original method for computed craniometry with the use the original program that made it possible to determine the standard intravital craniometrics characteristics of the human skull with a high degree of accuracy based on the results of analysis of 200 computed tomograms of the head. It was shown that the length of the inferior turbinated bones and the posterior edge of the orbital plate is of special relevance for practically all parameters of the ethmoidal labyrinth. Also, the width of the choanae positively relates to the height of the ethmoidal labyrinth.

  9. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    NASA Technical Reports Server (NTRS)

    Wallace, D. B.

    1980-01-01

    Developed to give a quick and accurate predictions HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer for air systems the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  10. Computational Trials: Unraveling Motility Phenotypes, Progression Patterns, and Treatment Options for Glioblastoma Multiforme

    PubMed Central

    Raman, Fabio; Scribner, Elizabeth; Saut, Olivier; Wenger, Cornelia; Colin, Thierry; Fathallah-Shaykh, Hassan M.

    2016-01-01

    Glioblastoma multiforme is a malignant brain tumor with poor prognosis and high morbidity due to its invasiveness. Hypoxia-driven motility and concentration-driven motility are two mechanisms of glioblastoma multiforme invasion in the brain. The use of anti-angiogenic drugs has uncovered new progression patterns of glioblastoma multiforme associated with significant differences in overall survival. Here, we apply a mathematical model of glioblastoma multiforme growth and invasion in humans and design computational trials using agents that target angiogenesis, tumor replication rates, or motility. The findings link highly-dispersive, moderately-dispersive, and hypoxia-driven tumors to the patterns observed in glioblastoma multiforme treated by anti-angiogenesis, consisting of progression by Expanding FLAIR, Expanding FLAIR + Necrosis, and Expanding Necrosis, respectively. Furthermore, replication rate-reducing strategies (e.g. Tumor Treating Fields) appear to be effective in highly-dispersive and moderately-dispersive tumors but not in hypoxia-driven tumors. The latter may respond to motility-reducing agents. In a population computational trial, with all three phenotypes, a correlation was observed between the efficacy of the rate-reducing agent and the prolongation of overall survival times. This research highlights the potential applications of computational trials and supports new hypotheses on glioblastoma multiforme phenotypes and treatment options. PMID:26756205

  11. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basismore » approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.« less

  12. Discovering Coherent Structures Using Local Causal States

    NASA Astrophysics Data System (ADS)

    Rupe, Adam; Crutchfield, James P.; Kashinath, Karthik; Prabhat, Mr.

    2017-11-01

    Coherent structures were introduced in the study of fluid dynamics and were initially defined as regions characterized by high levels of coherent vorticity, i.e. regions where instantaneously space and phase correlated vorticity are high. In a more general spatiotemporal setting, coherent structures can be seen as localized broken symmetries which persist in time. Building off the computational mechanics framework, which integrates tools from computation and information theory to capture pattern and structure in nonlinear dynamical systems, we introduce a theory of coherent structures, in the more general sense. Central to computational mechanics is the causal equivalence relation, and a local spatiotemporal generalization of it is used to construct the local causal states, which are utilized to uncover a system's spatiotemporal symmetries. Coherent structures are then identified as persistent, localized deviations from these symmetries. We illustrate how novel patterns and structures can be discovered in cellular automata and outline the path from them to laminar, transitional and turbulent flows. Funded by Intel through the Big Data Center at LBNL and the IPCC at UC Davis.

  13. Computational Thermochemistry of Jet Fuels and Rocket Propellants

    NASA Technical Reports Server (NTRS)

    Crawford, T. Daniel

    2002-01-01

    The design of new high-energy density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range of 300 - 6000 K. The chemical equilibrium with applications (CEA) program written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed based on experimental results and data published in the scientific literature such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for expansion of such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory- often referred to as the 'gold standard' of quantum chemical methods- to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of the electron interactions- known as electron correlation effects- are reduced to only their most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.

  14. Gapped two-body Hamiltonian for continuous-variable quantum computation.

    PubMed

    Aolita, Leandro; Roncaglia, Augusto J; Ferraro, Alessandro; Acín, Antonio

    2011-03-04

    We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two body, (ii) are of short range, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the squared inverse of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new venues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law.

  15. Correlation-based pattern recognition for implantable defibrillators.

    PubMed Central

    Wilkins, J.

    1996-01-01

    An estimated 300,000 Americans die each year from cardiac arrhythmias. Historically, drug therapy or surgery were the only treatment options available for patients suffering from arrhythmias. Recently, implantable arrhythmia management devices have been developed. These devices allow abnormal cardiac rhythms to be sensed and corrected in vivo. Proper arrhythmia classification is critical to selecting the appropriate therapeutic intervention. The classification problem is made more challenging by the power/computation constraints imposed by the short battery life of implantable devices. Current devices utilize heart rate-based classification algorithms. Although easy to implement, rate-based approaches have unacceptably high error rates in distinguishing supraventricular tachycardia (SVT) from ventricular tachycardia (VT). Conventional morphology assessment techniques used in ECG analysis often require too much computation to be practical for implantable devices. In this paper, a computationally-efficient, arrhythmia classification architecture using correlation-based morphology assessment is presented. The architecture classifies individuals heart beats by assessing similarity between an incoming cardiac signal vector and a series of prestored class templates. A series of these beat classifications are used to make an overall rhythm assessment. The system makes use of several new results in the field of pattern recognition. The resulting system achieved excellent accuracy in discriminating SVT and VT. PMID:8947674

  16. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342

  17. Investigations in space-related molecular biology. [cryo-electron microscopic and diffraction studies on terrestrial and extraterrestrial specimens

    NASA Technical Reports Server (NTRS)

    Fernandez-Moran, H.; Pritzker, A. N.

    1974-01-01

    Improved instrumentation and preparation techniques for high resolution, high voltage cryo-electron microscopic and diffraction studies on terrestrial and extraterrestrial specimens are reported. Computer correlated ultrastructural and biochemical work on hydrated and dried cell membranes and related biological systems provided information on membrane organization, ice crystal formation and ordered water, RNA virus linked to cancer, lunar rock samples, and organometallic superconducting compounds. Apollo 11, 12, 14, and 15 specimens were analyzed

  18. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as object recognition in a scene problem. Query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, one million base pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%), specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed process of a real experimental optical correlator. By doing this, we expect to fully exploit optical correlation light properties. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods on sequence alignment.

  19. Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics

    DTIC Science & Technology

    2017-04-27

    true at both the item and the scale level. 6 Moreover, the correlation between conventional scores and distance scores for these types of scales...have a perfect negative correlation , r = -1.00. From this perspective, conventional and distance scores are completely redundant. Therefore, we argue... correlation between each respondent’s rating profile and the scale key: shape-scores = rx,k. 2. Rating elevation difference, which is computed as the

  20. Generating series for GUE correlators

    NASA Astrophysics Data System (ADS)

    Dubrovin, Boris; Yang, Di

    2017-11-01

    We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computation of logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in full genera is developed.

  1. Energy Finite Element Analysis for Computing the High Frequency Vibration of the Aluminum Testbed Cylinder and Correlating the Results to Test Data

    NASA Technical Reports Server (NTRS)

    Vlahopoulos, Nickolas

    2005-01-01

    The Energy Finite Element Analysis (EFEA) is a finite element based computational method for high frequency vibration and acoustic analysis. The EFEA solves with finite elements governing differential equations for energy variables. These equations are developed from wave equations. Recently, an EFEA method for computing high frequency vibration of structures either in vacuum or in contact with a dense fluid has been presented. The presence of fluid loading has been considered through added mass and radiation damping. The EFEA developments were validated by comparing EFEA results to solutions obtained by very dense conventional finite element models and solutions from classical techniques such as statistical energy analysis (SEA) and the modal decomposition method for bodies of revolution. EFEA results have also been compared favorably with test data for the vibration and the radiated noise generated by a large scale submersible vehicle. The primary variable in EFEA is defined as the time averaged over a period and space averaged over a wavelength energy density. A joint matrix computed from the power transmission coefficients is utilized for coupling the energy density variables across any discontinuities, such as change of plate thickness, plate/stiffener junctions etc. When considering the high frequency vibration of a periodically stiffened plate or cylinder, the flexural wavelength is smaller than the interval length between two periodic stiffeners, therefore the stiffener stiffness can not be smeared by computing an equivalent rigidity for the plate or cylinder. The periodic stiffeners must be regarded as coupling components between periodic units. In this paper, Periodic Structure (PS) theory is utilized for computing the coupling joint matrix and for accounting for the periodicity characteristics.

  2. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    PubMed

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.

  3. Computation of load performance and other parameters of extra high speed modified Lundell alternators from 3D-FE magnetic field solutions

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1992-01-01

    The combined magnetic vector potential - magnetic scalar potential method of computation of 3D magnetic fields by finite elements, introduced in a companion paper, in combination with state modeling in the abc-frame of reference, are used for global 3D magnetic field analysis and machine performance computation under rated load and overload condition in an example 14.3 kVA modified Lundell alternator. The results vividly demonstrate the 3D nature of the magnetic field in such machines, and show how this model can be used as an excellent tool for computation of flux density distributions, armature current and voltage waveform profiles and harmonic contents, as well as computation of torque profiles and ripples. Use of the model in gaining insight into locations of regions in the magnetic circuit with heavy degrees of saturation is demonstrated. Experimental results which correlate well with the simulations of the load case are given.

  4. Plasma cell quantification in bone marrow by computer-assisted image analysis.

    PubMed

    Went, P; Mayer, S; Oberholzer, M; Dirnhofer, S

    2006-09-01

    Minor and major criteria for the diagnosis of multiple meloma according to the definition of the WHO classification include different categories of the bone marrow plasma cell count: a shift from the 10-30% group to the > 30% group equals a shift from a minor to a major criterium, while the < 10% group does not contribute to the diagnosis. Plasma cell fraction in the bone marrow is therefore critical for the classification and optimal clinical management of patients with plasma cell dyscrasias. The aim of this study was (i) to establish a digital image analysis system able to quantify bone marrow plasma cells and (ii) to evaluate two quantification techniques in bone marrow trephines i.e. computer-assisted digital image analysis and conventional light-microscopic evaluation. The results were compared regarding inter-observer variation of the obtained results. Eighty-seven patients, 28 with multiple myeloma, 29 with monoclonal gammopathy of undetermined significance, and 30 with reactive plasmocytosis were included in the study. Plasma cells in H&E- and CD138-stained slides were quantified by two investigators using light-microscopic estimation and computer-assisted digital analysis. The sets of results were correlated with rank correlation coefficients. Patients were categorized according to WHO criteria addressing the plasma cell content of the bone marrow (group 1: 0-10%, group 2: 11-30%, group 3: > 30%), and the results compared by kappa statistics. The degree of agreement in CD138-stained slides was higher for results obtained using the computer-assisted image analysis system compared to light microscopic evaluation (corr.coeff. = 0.782), as was seen in the intra- (corr.coeff. = 0.960) and inter-individual results correlations (corr.coeff. = 0.899). Inter-observer agreement for categorized results (SM/PW: kappa 0.833) was in a high range. Computer-assisted image analysis demonstrated a higher reproducibility of bone marrow plasma cell quantification. This might be of critical importance for diagnosis, clinical management and prognostics when plasma cell numbers are low, which makes exact quantifications difficult.

  5. Aging adult skull remains through radiological density estimates: A comparison of different computed tomography systems and the use of computer simulations to judge the accuracy of results.

    PubMed

    Obert, Martin; Kubelt, Carolin; Schaaf, Thomas; Dassinger, Benjamin; Grams, Astrid; Gizewski, Elke R; Krombach, Gabriele A; Verhoff, Marcel A

    2013-05-10

    The objective of this article was to explore age-at-death estimates in forensic medicine, which were methodically based on age-dependent, radiologically defined bone-density (HC) decay and which were investigated with a standard clinical computed tomography (CT) system. Such density decay was formerly discovered with a high-resolution flat-panel CT in the skulls of adult females. The development of a standard CT methodology for age estimations--with thousands of installations--would have the advantage of being applicable everywhere, whereas only few flat-panel prototype CT systems are in use worldwide. A Multi-Slice CT scanner (MSCT) was used to obtain 22,773 images from 173 European human skulls (89 male, 84 female), taken from a population of patients from the Department of Neuroradiology at the University Hospital Giessen and Marburg during 2010 and 2011. An automated image analysis was carried out to evaluate HC of all images. The age dependence of HC was studied by correlation analysis. The prediction accuracy of age-at-death estimates was calculated. Computer simulations were carried out to explore the influence of noise on the accuracy of age predictions. Human skull HC values strongly scatter as a function of age for both sexes. Adult male skull bone-density remains constant during lifetime. Adult female HC decays during lifetime, as indicated by a correlation coefficient (CC) of -0.53. Prediction errors for age-at-death estimates for both of the used scanners are in the range of ±18 years at a 75% confidence interval (CI). Computer simulations indicate that this is the best that can be expected for such noisy data. Our results indicate that HC-decay is indeed present in adult females and that it can be demonstrated both by standard and by high-resolution CT methods, applied to different subject groups of an identical population. The weak correlation between HC and age found by both CT methods only enables a method to estimate age-at-death with limited practical relevance since the errors of the estimates are large. Computer simulations clearly indicate that data with less noise and CCs in the order of -0.97 or less would be necessary to enable age-at-death estimates with an accuracy of ±5 years at a 75% CI. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. A Small World of Neuronal Synchrony

    PubMed Central

    Yu, Shan; Huang, Debin; Singer, Wolf

    2008-01-01

    A small-world network has been suggested to be an efficient solution for achieving both modular and global processing—a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to assess the interactions by measuring pairwise correlations and to assess the strength of coupling from the degree of synchrony. Visual responses were recorded in visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of “hubs” in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding. PMID:18400792

  7. Gaussian Elimination-Based Novel Canonical Correlation Analysis Method for EEG Motion Artifact Removal.

    PubMed

    Roy, Vandana; Shukla, Shailja; Shukla, Piyush Kumar; Rawat, Paresh

    2017-01-01

    The motion generated at the capturing time of electro-encephalography (EEG) signal leads to the artifacts, which may reduce the quality of obtained information. Existing artifact removal methods use canonical correlation analysis (CCA) for removing artifacts along with ensemble empirical mode decomposition (EEMD) and wavelet transform (WT). A new approach is proposed to further analyse and improve the filtering performance and reduce the filter computation time under highly noisy environment. This new approach of CCA is based on Gaussian elimination method which is used for calculating the correlation coefficients using backslash operation and is designed for EEG signal motion artifact removal. Gaussian elimination is used for solving linear equation to calculate Eigen values which reduces the computation cost of the CCA method. This novel proposed method is tested against currently available artifact removal techniques using EEMD-CCA and wavelet transform. The performance is tested on synthetic and real EEG signal data. The proposed artifact removal technique is evaluated using efficiency matrices such as del signal to noise ratio (DSNR), lambda ( λ ), root mean square error (RMSE), elapsed time, and ROC parameters. The results indicate suitablity of the proposed algorithm for use as a supplement to algorithms currently in use.

  8. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.

  9. Heating Augmentation for Short Hypersonic Protuberances

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Wood, William A.

    2008-01-01

    Computational aeroheating analyses of the Space Shuttle Orbiter plug repair models are validated against data collected in the Calspan University of Buffalo Research Center (CUBRC) 48 inch shock tunnel. The comparison shows that the average difference between computed heat transfer results and the data is about 9:5%. Using CFD and Wind Tunnel (WT) data, an empirical correlation for estimating heating augmentation on short hyper- sonic protuberances (k/delta < 0.33) is proposed. This proposed correlation is compared with several computed flight simulation cases and good agreement is achieved. Accordingly, this correlation is proposed for further investigation on other short hypersonic protuberances for estimating heating augmentation.

  10. Heating Augmentation for Short Hypersonic Protuberances

    NASA Technical Reports Server (NTRS)

    Mazaheri, Ali R.; Wood, William A.

    2008-01-01

    Computational aeroheating analyses of the Space Shuttle Orbiter plug repair models are validated against data collected in the Calspan University of Buffalo Research Center (CUBRC) 48 inch shock tunnel. The comparison shows that the average difference between computed heat transfer results and the data is about 9.5%. Using CFD and Wind Tunnel (WT) data, an empirical correlation for estimating heating augmentation on short hypersonic protuberances (k/delta less than 0.3) is proposed. This proposed correlation is compared with several computed flight simulation cases and good agreement is achieved. Accordingly, this correlation is proposed for further investigation on other short hypersonic protuberances for estimating heating augmentation.

  11. Computational Design of a Thermostable Mutant of Cocaine Esterase via Molecular Dynamics Simulations

    PubMed Central

    Huang, Xiaoqin; Gao, Daquan; Zhan, Chang-Guo

    2015-01-01

    Cocaine esterase (CocE) has been known as the most efficient native enzyme for metabolizing the naturally occurring cocaine. A major obstacle to the clinical application of CocE is the thermoinstability of native CocE with a half-life of only ~11 min at physiological temperature (37°C). It is highly desirable to develop a thermostable mutant of CocE for therapeutic treatment of cocaine overdose and addiction. To establish a structure-thermostability relationship, we carried out molecular dynamics (MD) simulations at 400 K on wild-type CocE and previously known thermostable mutants, demonstrating that the thermostability of the active form of the enzyme correlates with the fluctuation (characterized as the RMSD and RMSF of atomic positions) of the catalytic residues (Y44, S117, Y118, H287, and D259) in the simulated enzyme. In light of the structure-thermostability correlation, further computational modeling including MD simulations at 400 K predicted that the active site structure of the L169K mutant should be more thermostable. The prediction has been confirmed by wet experimental tests showing that the active form of the L169K mutant had a half-life of 570 min at 37°C, which is significantly longer than those of the wild-type and previously known thermostable mutants. The encouraging outcome suggests that the high-temperature MD simulations and the structure-thermostability may be considered as a valuable tool for computational design of thermostable mutants of an enzyme. PMID:21373712

  12. Bronchiectasis: correlation of high-resolution CT findings with health-related quality of life.

    PubMed

    Eshed, I; Minski, I; Katz, R; Jones, P W; Priel, I E

    2007-02-01

    To evaluate the relationship between the severity of bronchiectatic diseases, as evident on high-resolution computed tomography (HRCT) and the patient's quality of life measured using the St George's Respiratory Questionnaire (SGRQ). Forty-six patients (25 women, 21 men, mean age: 63 years) with bronchiectatic disease as evident on recent HRCT examinations were recruited. Each patient completed the SGRQ and underwent respiratory function tests. HRCT findings were blindly and independently scored by two radiologists, using the modified Bhalla scoring system. The relationships between HRCT scores, SGRQ scores and pulmonary function tests were evaluated. The patients' total CT score did not correlate with the SGRQ scores. However, patients with more advanced disease on HRCT, significantly differed in their SGRQ scores from patients with milder bronchiectatic disease. A significant correlation was found between the CT scores for the middle and distal lung zones and the activity, impacts and total SGRQ scores. No correlation was found between CT scores and respiratory function test indices. However, a significant correlation was found between the SGRQ scores and most of the respiratory function test indices. A correlation between the severity of bronchiectatic disease as expressed in HRCT and the health-related quality of life exists in patients with a more severe bronchiectatic disease but not in patients with mild disease. Such correlation depends on the location of the bronchiectasis in the pulmonary tree.

  13. Fully integrated sub 100ps photon counting platform

    NASA Astrophysics Data System (ADS)

    Buckley, S. J.; Bellis, S. J.; Rosinger, P.; Jackson, J. C.

    2007-02-01

    Current state of the art high resolution counting modules, specifically designed for high timing resolution applications, are largely based on a computer card format. This has tended to result in a costly solution that is restricted to the computer it resides in. We describe a four channel timing module that interfaces to a computer via a USB port and operates with a resolution of less than 100 picoseconds. The core design of the system is an advanced field programmable gate array (FPGA) interfacing to a precision time interval measurement module, mass memory block and a high speed USB 2.0 serial data port. The FPGA design allows the module to operate in a number of modes allowing both continuous recording of photon events (time-tagging) and repetitive time binning. In time-tag mode the system reports, for each photon event, the high resolution time along with the chronological time (macro time) and the channel ID. The time-tags are uploaded in real time to a host computer via a high speed USB port allowing continuous storage to computer memory of up to 4 millions photons per second. In time-bin mode, binning is carried out with count rates up to 10 million photons per second. Each curve resides in a block of 128,000 time-bins each with a resolution programmable down to less than 100 picoseconds. Each bin has a limit of 65535 hits allowing autonomous curve recording until a bin reaches the maximum count or the system is commanded to halt. Due to the large memory storage, several curves/experiments can be stored in the system prior to uploading to the host computer for analysis. This makes this module ideal for integration into high timing resolution specific applications such as laser ranging and fluorescence lifetime imaging using techniques such as time correlated single photon counting (TCSPC).

  14. General Rule of Negative Effective Ueff System & Materials Design of High-Tc Superconductors by ab initio Calculations

    NASA Astrophysics Data System (ADS)

    Katayama-Yoshida, Hiroshi; Nakanishi, Akitaka; Uede, Hiroki; Takawashi, Yuki; Fukushima, Tetsuya; Sato, Kazunori

    2014-03-01

    Based upon ab initio electronic structure calculation, I will discuss the general rule of negative effective U system by (1) exchange-correlation-induced negative effective U caused by the stability of the exchange-correlation energy in Hund's rule with high-spin ground states of d5 configuration, and (2) charge-excitation-induced negative effective U caused by the stability of chemical bond in the closed-shell of s2, p6, and d10 configurations. I will show the calculated results of negative effective U systems such as hole-doped CuAlO2 and CuFeS2. Based on the total energy calculations of antiferromagnetic and ferromagnetic states, I will discuss the magnetic phase diagram and superconductivity upon hole doping. I also discuss the computational materials design method of high-Tc superconductors by ab initio calculation to go beyond LDA and multi-scale simulations.

  15. Linear free-energy relationships between a single gas-phase ab initio equilibrium bond length and experimental pKa values in aqueous solution.

    PubMed

    Alkorta, Ibon; Popelier, Paul L A

    2015-02-02

    Remarkably simple yet effective linear free energy relationships were discovered between a single ab initio computed bond length in the gas phase and experimental pKa values in aqueous solution. The formation of these relationships is driven by chemical features such as functional groups, meta/para substitution and tautomerism. The high structural content of the ab initio bond length makes a given data set essentially divide itself into high correlation subsets (HCSs). Surprisingly, all molecules in a given high correlation subset share the same conformation in the gas phase. Here we show that accurate pKa values can be predicted from such HCSs. This is achieved within an accuracy of 0.2 pKa units for 5 drug molecules. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Noncommutative Wilson lines in higher-spin theory and correlation functions of conserved currents for free conformal fields

    NASA Astrophysics Data System (ADS)

    Bonezzi, Roberto; Boulanger, Nicolas; De Filippi, David; Sundell, Per

    2017-11-01

    We first prove that, in Vasiliev’s theory, the zero-form charges studied in Sezgin E and Sundell P 2011 (arXiv:1103.2360 [hep-th]) and Colombo N and Sundell P 20 (arXiv:1208.3880 [hep-th]) are twisted open Wilson lines in the noncommutative Z space. This is shown by mapping Vasiliev’s higher-spin model on noncommutative Yang-Mills theory. We then prove that, prior to Bose-symmetrising, the cyclically-symmetric higher-spin invariants given by the leading order of these n-point zero-form charges are equal to corresponding cyclically-invariant building blocks of n-point correlation functions of bilinear operators in free conformal field theories (CFT) in three dimensions. On the higher spin gravity side, our computation reproduces the results of Didenko V and Skvortsov E 2013 J. High Energy Phys. JHEP04(2013)158 using an alternative method amenable to the computation of subleading corrections obtained by perturbation theory in normal order. On the free CFT side, our proof involves the explicit computation of the separate cyclic building blocks of the correlation functions of n conserved currents in arbitrary dimension d>2 using polarization vectors, which is an original result. It is shown to agree, for d=3 , with the results obtained in Gelfond O A and Vasiliev M A 2013 Nucl. Phys. B 876 871-917 in various dimensions and where polarization spinors were used.

  17. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.

  18. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  19. "Time-dependent flow-networks"

    NASA Astrophysics Data System (ADS)

    Tupikina, Liubov; Molkentin, Nora; Lopez, Cristobal; Hernandez-Garcia, Emilio; Marwan, Norbert; Kurths, Jürgen

    2015-04-01

    Complex networks have been successfully applied to various systems such as society, technology, and recently climate. Links in a climate network are defined between two geographical locations if the correlation between the time series of some climate variable is higher than a threshold. Therefore, network links are considered to imply information or heat exchange. However, the relationship between the oceanic and atmospheric flows and the climate network's structure is still unclear. Recently, a theoretical approach verifying the correlation between ocean currents and surface air temperature networks has been introduced, where the Pearson correlation networks were constructed from advection-diffusion dynamics on an underlying flow. Since the continuous approach has its limitations, i.e. high computational complexity and fixed variety of the flows in the underlying system, we introduce a new, method of flow-networks for changing in time velocity fields including external forcing in the system, noise and temperature-decay. Method of the flow-network construction can be divided into several steps: first we obtain the linear recursive equation for the temperature time-series. Then we compute the correlation matrix for time-series averaging the tensor product over all realizations of the noise, which we interpret as a weighted adjacency matrix of the flow-network and analyze using network measures. We apply the method to different types of moving flows with geographical relevance such as meandering flow. Analyzing the flow-networks using network measures we find that our approach can highlight zones of high velocity by degree and transition zones by betweenness, while the combination of these network measures can uncover how the flow propagates within time. Flow-networks can be powerful tool to understand the connection between system's dynamics and network's topology analyzed using network measures in order to shed light on different climatic phenomena.

  20. Rheumatoid arthritis-associated interstitial lung disease: lung inflammation evaluated with high resolution computed tomography scan is correlated to rheumatoid arthritis disease activity.

    PubMed

    Pérez-Dórame, Renzo; Mejía, Mayra; Mateos-Toledo, Heidegger; Rojas-Serrano, Jorge

    2015-01-01

    To describe the association between rheumatoid arthritis disease activity (RA) and interstitial lung damage (inflammation and fibrosis), in a group of patients with rheumatoid arthritis-associated interstitial lung disease (RA-ILD). A retrospective study of RA patients with interstitial lung disease (restrictive pattern in lung function tests and evidence of interstitial lung disease in high resolution computed tomography (HRCT)). Patients were evaluated to exclude other causes of pulmonary disease. RA disease activity was measured with the CDAI index. Interstitial lung inflammation and fibrosis were determined by Kazerooni scale. We compared Kazerooni ground-glass score with the nearest CDAI score to HRCT date scan of the first medical evaluation at our institution. In nine patients, we compared the first ground-glass score with a second one after treatment with DMARDs and corticosteroids. Spearman's rank correlation coefficient was used to evaluate association between RA disease activity and the Kazerooni ground-glass and fibrosis scores. Thirty-four patients were included. A positive correlation between CDAI and ground-glass scores was found (rs=0.3767, P<0.028). Fibrosis and CDAI scores were not associated (rs=-0.0747, P<0.6745). After treatment, a downward tendency in the ground-glass score was observed (median [IQR]): (2.33 [2,3] vs. 2 [1.33-2.16]), P<0.056, along with a lesser CDAI score (27 [8-43] vs. 9 [5-12]), P<0.063. There is a correlation between RA disease activity and ground-glass appearance in the HRCT of RA-ILD patients. These results suggest a positive association between RA disease activity and lung inflammation in RA-ILD. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.

  1. Portable multiplicity counter

    DOEpatents

    Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM

    2009-09-01

    A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pluses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be for example an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors so that the counter can be configured as a hand-held device for counting neutron events.

  2. Heat transfer, velocity-temperature correlation, and turbulent shear stress from Navier-Stokes computations of shock wave/turbulent boundary layer interaction flows

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Hingst, W. R.; Porro, A. R.

    1991-01-01

    The properties of 2-D shock wave/turbulent boundary layer interaction flows were calculated by using a compressible turbulent Navier-Stokes numerical computational code. Interaction flows caused by oblique shock wave impingement on the turbulent boundary layer flow were considered. The oblique shock waves were induced with shock generators at angles of attack less than 10 degs in supersonic flows. The surface temperatures were kept at near-adiabatic (ratio of wall static temperature to free stream total temperature) and cold wall (ratio of wall static temperature to free stream total temperature) conditions. The computational results were studied for the surface heat transfer, velocity temperature correlation, and turbulent shear stress in the interaction flow fields. Comparisons of the computational results with existing measurements indicated that (1) the surface heat transfer rates and surface pressures could be correlated with Holden's relationship, (2) the mean flow streamwise velocity components and static temperatures could be correlated with Crocco's relationship if flow separation did not occur, and (3) the Baldwin-Lomax turbulence model should be modified for turbulent shear stress computations in the interaction flows.

  3. Workstyle risk factors for work related musculoskeletal symptoms among computer professionals in India.

    PubMed

    Sharan, Deepak; Parijat, Prakriti; Sasidharan, Ajeesh Padinjattethil; Ranganathan, Rameshkumar; Mohandoss, Mathankumar; Jose, Jeena

    2011-12-01

    Work-related musculoskeletal disorders are common in computer professionals. Workstyle may be one of the risk factors in the development of musculoskeletal discomfort. The objective of this retrospective study was to examine the prevalence of adverse workstyle in computer professionals from India and to evaluate if workstyle factors were predictors of pain and loss of productivity. Office workers from various information technology (IT) companies in India responded to the short-form workstyle questionnaire and pain questionnaire. Correlation analyses were conducted to examine the associations between different variables followed by a multivariate logistic regression to understand the unique predictors of pain and loss of productivity. 4,500 participants responded to the workstyle and pain questionnaire. 22% of participants were reported to have a high risk of an adverse workstyle. 63% of participants reported pain symptoms. Social reactivity, lack of breaks, and deadlines/pressure subscales of workstyle questionnaire were significantly correlated with pain and loss of productivity. Regression analyses revealed that workstyle factors and duration of computer use per day were significant predictors of pain. Workstyle seems to be a mediating factor for musculoskeletal pain, discomfort, and loss of productivity. Based on the study findings, it is recommended that intervention efforts directed towards prevention of musculoskeletal disorders should focus on psychosocial work factors such adverse workstyle in addition to biomechanical risk factors.

  4. Simultaneous epicardial and noncontact endocardial mapping of the canine right atrium: simulation and experiment.

    PubMed

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed an epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and a correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.

  5. Simultaneous Epicardial and Noncontact Endocardial Mapping of the Canine Right Atrium: Simulation and Experiment

    PubMed Central

    Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J. Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent

    2014-01-01

    Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed an epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and a correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals. PMID:24598778

  6. Ab initio density-functional calculations in materials science: from quasicrystals over microporous catalysts to spintronics.

    PubMed

    Hafner, Jürgen

    2010-09-29

    During the last 20 years computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have developed an increasingly important impact on materials science, not only in promoting a deeper understanding of the fundamental physical phenomena, but also enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT) which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.

  7. Brain tumor segmentation with Vander Lugt correlator based active contour.

    PubMed

    Essadike, Abdelaziz; Ouabida, Elhoussaine; Bouzid, Abdenbi

    2018-07-01

    The manual segmentation of brain tumors from medical images is an error-prone, sensitive, and time-absorbing process. This paper presents an automatic and fast method of brain tumor segmentation. In the proposed method, a numerical simulation of the optical Vander Lugt correlator is used for automatically detecting the abnormal tissue region. The tumor filter, used in the simulated optical correlation, is tailored to all the brain tumor types and especially to the Glioblastoma, which considered to be the most aggressive cancer. The simulated optical correlation, computed between Magnetic Resonance Images (MRI) and this filter, estimates precisely and automatically the initial contour inside the tumorous tissue. Further, in the segmentation part, the detected initial contour is used to define an active contour model and presenting the problematic as an energy minimization problem. As a result, this initial contour assists the algorithm to evolve an active contour model towards the exact tumor boundaries. Equally important, for a comparison purposes, we considered different active contour models and investigated their impact on the performance of the segmentation task. Several images from BRATS database with tumors anywhere in images and having different sizes, contrast, and shape, are used to test the proposed system. Furthermore, several performance metrics are computed to present an aggregate overview of the proposed method advantages. The proposed method achieves a high accuracy in detecting the tumorous tissue by a parameter returned by the simulated optical correlation. In addition, the proposed method yields better performance compared to the active contour based methods with the averages of Sensitivity=0.9733, Dice coefficient = 0.9663, Hausdroff distance = 2.6540, Specificity = 0.9994, and faster with a computational time average of 0.4119 s per image. Results reported on BRATS database reveal that our proposed system improves over the recently published state-of-the-art methods in brain tumor detection and segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Exponential smoothing weighted correlations

    NASA Astrophysics Data System (ADS)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.

  9. Using an Extended Kalman Filter Learning Algorithm for Feed-Forward Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, David J.; Mussa, Yussuf

    2004-01-01

    In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.

  10. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576

  11. Speeding up 3D speckle tracking using PatchMatch

    NASA Astrophysics Data System (ADS)

    Zontak, Maria; O'Donnell, Matthew

    2016-03-01

    Echocardiography provides valuable information to diagnose heart dysfunction. A typical exam records several minutes of real-time cardiac images. To enable complete analysis of 3D cardiac strains, 4-D (3-D+t) echocardiography is used. This results in a huge dataset and requires effective automated analysis. Ultrasound speckle tracking is an effective method for tissue motion analysis. It involves correlation of a 3D kernel (block) around a voxel with kernels in later frames. The search region is usually confined to a local neighborhood, due to biomechanical and computational constraints. For high strains and moderate frame-rates, however, this search region will remain large, leading to a considerable computational burden. Moreover, speckle decorrelation (due to high strains) leads to errors in tracking. To solve this, spatial motion coherency between adjacent voxels should be imposed, e.g., by averaging their correlation functions.1 This requires storing correlation functions for neighboring voxels, thus increasing memory demands. In this work, we propose an efficient search using PatchMatch, 2 a powerful method to find correspondences between images. Here we adopt PatchMatch for 3D volumes and radio-frequency signals. As opposed to an exact search, PatchMatch performs random sampling of the search region and propagates successive matches among neighboring voxels. We show that: 1) Inherently smooth offset propagation in PatchMatch contributes to spatial motion coherence without any additional processing or memory demand. 2) For typical scenarios, PatchMatch is at least 20 times faster than the exact search, while maintaining comparable tracking accuracy.

  12. Correlation of radiation dose and heart rate in dual-source computed tomography coronary angiography.

    PubMed

    Laspas, Fotios; Tsantioti, Dimitra; Roussakis, Arkadios; Kritikos, Nikolaos; Efthimiadou, Roxani; Kehagias, Dimitrios; Andreou, John

    2011-04-01

    Computed tomography coronary angiography (CTCA) has been widely used since the introduction of 64-slice scanners and dual-source CT technology, but the relatively high radiation dose remains a major concern. To evaluate the relationship between radiation exposure and heart rate (HR), in dual-source CTCA. Data from 218 CTCA examinations, performed with a dual-source 64-slices scanner, were statistically evaluated. Effective radiation dose, expressed in mSv, was calculated as the product of the dose-length product (DLP) times a conversion coefficient for the chest (mSv = DLPx0.017). Heart rate range and mean heart rate, expressed in beats per minute (bpm) of each individual during CTCA, were also provided by the system. Statistical analysis of effective dose and heart rate data was performed by using Pearson correlation coefficient and two-sample t-test. Mean HR and effective dose were found to have a borderline positive relationship. Individuals with a mean HR >65 bpm observed to receive a statistically significant higher effective dose as compared to those with a mean HR ≤65 bpm. Moreover, a strong correlation between effective dose and variability of HR of more than 20 bpm was observed. Dual-source CT scanners are considered to have the capability to provide diagnostic examinations even with high HR and arrhythmias. However, it is desirable to keep the mean heart rate below 65 bpm and heart rate fluctuation less than 20 bpm in order to reduce the radiation exposure.

  13. Structural and vibrational properties of transition-metal oxides from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Cococcioni, M.; Floris, A.; Himmetoglu, B.

    2010-12-01

    The calculation of the vibrational spectrum of minerals is of fundamental importance to assess their behavior (e.g. their elastic properties, or possible structural phase transitions) under the high-temperature, high-pressure conditions of the Earth’s interior. The ubiquitous presence of transition metals and the consequent importance of electronic correlations make the study of these materials quite difficult to approach with approximate DFT functionals (as LDA or GGA). The DFT+U, consisting in a Hubbard-modeled correction to the DFT energy functionals, has been successfully used to study the electronic, structural, and magnetic properties of several Fe-bearing minerals. However, the vibrational spectrum of these systems has never been determined entirely (frozen- phonon techniques are overly expensive except for zone-center phonons). In this work we introduce the extension of Density-Functional-Perturbation-Theory to DFT+U, that allows to efficiently compute the phonon spectrum of transition-metal compounds from their correlated ground states. A comparative analysis between the vibrational properties of MnO, FeO, CoO, and NiO (in the undistorted cubic cell) highlights a marked dependence of several features of their phonon spectrum on the occupancy of localized d orbitals and thus, on elec- tronic correlation. The new computational tool is also employed to evaluate the rhombohedral distortion of FeO (particularly abundant in the Earth’s lower mantle) and to assess the stability of its B1 phase in different conditions of pressure and temperature.

  14. Accuracy of laser-scanned models compared to plaster models and cone-beam computed tomography.

    PubMed

    Kim, Jooseong; Heo, Giseon; Lagravère, Manuel O

    2014-05-01

    To compare the accuracy of measurements obtained from the three-dimensional (3D) laser scans to those taken from the cone-beam computed tomography (CBCT) scans and those obtained from plaster models. Eighteen different measurements, encompassing mesiodistal width of teeth and both maxillary and mandibular arch length and width, were selected using various landmarks. CBCT scans and plaster models were prepared from 60 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner, and the selected landmarks were measured using its software. CBCT scans were imported and analyzed using the Avizo software, and the 26 landmarks corresponding to the selected measurements were located and recorded. The plaster models were also measured using a digital caliper. Descriptive statistics and intraclass correlation coefficient (ICC) were used to analyze the data. The ICC result showed that the values obtained by the three different methods were highly correlated in all measurements, all having correlations>0.808. When checking the differences between values and methods, the largest mean difference found was 0.59 mm±0.38 mm. In conclusion, plaster models, CBCT models, and laser-scanned models are three different diagnostic records, each with its own advantages and disadvantages. The present results showed that the laser-scanned models are highly accurate to plaster models and CBCT scans. This gives general clinicians an alternative to take into consideration the advantages of laser-scanned models over plaster models and CBCT reconstructions.

  15. Novel computational approach for studying ph effects, excluded volume and ion-ion correlations in electrical double layers around polyelectrolytes and nanoparticles

    NASA Astrophysics Data System (ADS)

    Ovanesyan, Zaven

    Highly charged cylindrical and spherical objects (macroions) are probably the simplest structures for modeling nucleic acids, proteins and nanoparticles. Their ubiquitous presence within biophysical systems ensures that Coulomb forces are among the most important interactions that regulate the behavior of these systems. In these systems, ions position themselves in a strongly correlated manner near the surface of a macroion and form electrical double layers (EDLs). These EDLs play an important role in many biophysical and biochemical processes. For instance, the macroion's net charge can change due to the binding of many multivalent ions to its surface. Thus, proper description of EDLs near the surface of a macroion may reveal a counter-intuitive charge inversion behavior, which can generate attraction between like-charged objects. This is relevant for the variety of fields such as self-assembly of DNA and RNA folding, as well as for protein aggregation and neurodegenerative diseases. Certainly, the key factors that contribute to these phenomena cannot be properly understood without an accurate solvation model. With recent advancements in computer technologies, the possibility to use computational tools for fundamental understanding of the role of EDLs around biomolecules and nanoparticles on their physical and chemical properties is becoming more feasible. Establishing the impact of the excluded volume and ion-ion correlations, ionic strength and pH of the electrolyte on the EDL around biomolecules and nanoparticles, and how changes in these properties consequently affect the Zeta potential and surface charge density are still not well understood. Thus, modeling and understanding the role of these properties on EDLs will provide more insights on the stability, adsorption, binding and function of biomolecules and nanoparticles. Existing mean-field theories such as Poisson Boltzmann (PB) often neglect the ion-ion correlations, solvent and ion excluded volume effects, which are important details for proper description of EDL properties. In this thesis, we implement an efficient and accurate classical solvation density functional theory (CDSFT) for EDLs of spherical macroions and cylindrical polyelectrolytes embedded in aqueous electrolytes. This approach extends the capabilities of mean field approximations by taking into account electrostatic ion-ion correlations, size asymmetry and excluded volume effects without compromising the computational cost. We apply the computational tool to study the structural and thermodynamic properties of the ionic atmosphere around B-DNA and spherical nanoparticles. We demonstrate that the presence of solvent molecules at experimental concentration and size values has a significant impact on the layering of ions. This layering directly influences the integrated charge and mean electrostatic potential in the diffuse region of the spherical electrical double layer (SEDL) and have a noticeable impact on the behavior of zeta potential (ZP). Recently, we have extended the aforementioned CSDFT to account for the charge-regulated mechanisms of the macroion surface on the structural and thermodynamic properties of spherical EDLs. In the approach, the CSDFT is combined with a surface complexation model to account for ion correlation and excluded volume effects on the surface titration of spherical macroions. We apply the proposed computational approach to describe the role that the ion size and solvent excluded volume play on the surface titration properties of silica nanoparticles. We analyze the effects of the nanoparticle size, pH and salt concentration of the aqueous solution on the nanoparticle's surface charge and zeta potential. The results reveal that surface charge density and zeta potential significantly depend on excluded volume and ion-ion correlation effects as well as on pH for monovalent ion species at high salt concentrations. Overall, our results are in good agreement with Monte Carlo simulations and available experimental data. We discuss future directions of this work, which includes extension of the solvation model for studying the flexibility properties of rigid peptides and globular proteins, and describes benefits that this research can potentially bring to scientific and non scientific communities.

  16. Computational analysis of high resolution unsteady airloads for rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C.-M. Gordon; Wachspress, Daniel A.; Bliss, Donald B.

    1994-01-01

    The study of helicopter aerodynamic loading for acoustics applications requires the application of efficient yet accurate simulations of the velocity field induced by the rotor's vortex wake. This report summarizes work to date on the development of such an analysis, which builds on the Constant Vorticity Contour (CVC) free wake model, previously implemented for the study of vibratory loading in the RotorCRAFT computer code. The present effort has focused on implementation of an airload reconstruction approach that computes high resolution airload solutions of rotor/rotor-wake interactions required for acoustics computations. Supplementary efforts on the development of improved vortex core modeling, unsteady aerodynamic effects, higher spatial resolution of rotor loading, and fast vortex wake implementations have substantially enhanced the capabilities of the resulting software, denoted RotorCRAFT/AA (AeroAcoustics). Results of validation calculations using recently acquired model rotor data show that by employing airload reconstruction it is possible to apply the CVC wake analysis with temporal and spatial resolution suitable for acoustics applications while reducing the computation time required by one to two orders of magnitude relative to that required by direct calculations. Promising correlation with this body of airload and noise data has been obtained for a variety of rotor configurations and operating conditions.

  17. A Computer-Aided Analysis Method of SPECT Brain Images for Quantitative Treatment Monitoring: Performance Evaluations and Clinical Applications.

    PubMed

    Zheng, Xiujuan; Wei, Wentao; Huang, Qiu; Song, Shaoli; Wan, Jieqing; Huang, Gang

    2017-01-01

    The objective and quantitative analysis of longitudinal single photon emission computed tomography (SPECT) images are significant for the treatment monitoring of brain disorders. Therefore, a computer aided analysis (CAA) method is introduced to extract a change-rate map (CRM) as a parametric image for quantifying the changes of regional cerebral blood flow (rCBF) in longitudinal SPECT brain images. The performances of the CAA-CRM approach in treatment monitoring are evaluated by the computer simulations and clinical applications. The results of computer simulations show that the derived CRMs have high similarities with their ground truths when the lesion size is larger than system spatial resolution and the change rate is higher than 20%. In clinical applications, the CAA-CRM approach is used to assess the treatment of 50 patients with brain ischemia. The results demonstrate that CAA-CRM approach has a 93.4% accuracy of recovered region's localization. Moreover, the quantitative indexes of recovered regions derived from CRM are all significantly different among the groups and highly correlated with the experienced clinical diagnosis. In conclusion, the proposed CAA-CRM approach provides a convenient solution to generate a parametric image and derive the quantitative indexes from the longitudinal SPECT brain images for treatment monitoring.

  18. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  19. The Correlation of Active and Passive Microwave Outputs for the Skylab S-193 Sensor

    NASA Technical Reports Server (NTRS)

    Krishen, K.

    1976-01-01

    This paper presents the results of the correlation analysis of the Skylab S-193 13.9 GHz Radiometer/Scatterometer data. Computer analysis of the S-193 data shows more than 50 percent of the radiometer and scatterometer data are uncorrelated. The correlation coefficients computed for the data gathered over various ground scenes indicates the desirability of using both active and passive sensors for the determination of various Earth phenomena.

  20. Increasing complexity with quantum physics.

    PubMed

    Anders, Janet; Wiesner, Karoline

    2011-09-01

    We argue that complex systems science and the rules of quantum physics are intricately related. We discuss a range of quantum phenomena, such as cryptography, computation and quantum phases, and the rules responsible for their complexity. We identify correlations as a central concept connecting quantum information and complex systems science. We present two examples for the power of correlations: using quantum resources to simulate the correlations of a stochastic process and to implement a classically impossible computational task.

  1. Attitudes to Technology, Perceived Computer Self-Efficacy and Computer Anxiety as Predictors of Computer Supported Education

    ERIC Educational Resources Information Center

    Celik, Vehbi; Yesilyurt, Etem

    2013-01-01

    There is a large body of research regarding computer supported education, perceptions of computer self-efficacy, computer anxiety and the technological attitudes of teachers and teacher candidates. However, no study has been conducted on the correlation between and effect of computer supported education, perceived computer self-efficacy, computer…

  2. Software Correlator for Radioastron Mission

    NASA Astrophysics Data System (ADS)

    Likhachev, Sergey F.; Kostenko, Vladimir I.; Girin, Igor A.; Andrianov, Andrey S.; Rudnitskiy, Alexey G.; Zharov, Vladimir E.

    In this paper, we discuss the characteristics and operation of Astro Space Center (ASC) software FX correlator that is an important component of space-ground interferometer for Radioastron project. This project performs joint observations of compact radio sources using 10m space radio telescope (SRT) together with ground radio telescopes at 92, 18, 6 and 1.3 cm wavelengths. In this paper, we describe the main features of space-ground VLBI data processing of Radioastron project using ASC correlator. Quality of implemented fringe search procedure provides positive results without significant losses in correlated amplitude. ASC Correlator has a computational power close to real time operation. The correlator has a number of processing modes: “Continuum”, “Spectral Line”, “Pulsars”, “Giant Pulses”,“Coherent”. Special attention is paid to peculiarities of Radioastron space-ground VLBI data processing. The algorithms of time delay and delay rate calculation are also discussed, which is a matter of principle for data correlation of space-ground interferometers. During five years of Radioastron SRT successful operation, ASC correlator showed high potential of satisfying steady growing needs of current and future ground and space VLBI science. Results of ASC software correlator operation are demonstrated.

  3. The Relationship between Family Functioning and Academic Achievement in Female High School Students of Isfahan, Iran, in 2013-2014.

    PubMed

    Rezaei-Dehaghani, Abdollah; Keshvari, Mahrokh; Paki, Somayeh

    2018-01-01

    Nowadays, the most important problem of the educational system is the vast spread of school failure. Therefore, detection of the factors leading to or preventing students' academic achievement is of utmost importance. Family function is considered to be a critical component of academic success. This study aimed to investigate the relationship between family functioning and academic achievement in high school female students in Isfahan. This descriptive correlational study was conducted through random sampling among 237 female high school students in Isfahan during school year 2013-2014. Data were collected by participants' personal characteristics and Bloom family function questionnaires. To analyze the data, descriptive statistics (mean and standard deviation) and inferential statistics (Pearson correlation and linear regression analysis) were adopted and computed using SPSS software. The results showed a significant correlation between family function (except lack of independence) and students' academic achievement ( p < 0.05). Further, among family function dimensions, expressiveness ( β = 0.235, p < 0.001), family socialization ( β = 0.219, p = 0.001), and cohesion ( β = 0.211, p = 0.001) were more reliable predictors of academic achievement. The results of this study showed that students' academic achievement is highly correlated with the performance of their families. Therefore, to improve students' educational status in cultural and educational programs, which are specified for them, family function centered plans should be at the heart of attention.

  4. Feasibility study of parallel optical correlation-decoding analysis of lightning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Descour, M.R.; Sweatt, W.C.; Elliott, G.R.

    The optical correlator described in this report is intended to serve as an attention-focusing processor. The objective is to narrowly bracket the range of a parameter value that characterizes the correlator input. The input is a waveform collected by a satellite-borne receiver. In the correlator, this waveform is simultaneously correlated with an ensemble of ionosphere impulse-response functions, each corresponding to a different total-electron-count (TEC) value. We have found that correlation is an effective method of bracketing the range of TEC values likely to be represented by the input waveform. High accuracy in a computational sense is not required of themore » correlator. Binarization of the impulse-response functions and the input waveforms prior to correlation results in a lower correlation-peak-to-background-fluctuation (signal-to-noise) ratio than the peak that is obtained when all waveforms retain their grayscale values. The results presented in this report were obtained by means of an acousto-optic correlator previously developed at SNL as well as by simulation. An optical-processor architecture optimized for 1D correlation of long waveforms characteristic of this application is described. Discussions of correlator components, such as optics, acousto-optic cells, digital micromirror devices, laser diodes, and VCSELs are included.« less

  5. Multi-Target Regression via Robust Low-Rank Learning.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo

    2018-02-01

    Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.

  6. Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul

    2003-12-01

    Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to not only its composition but also its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of ECM. This report describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. These algorithms were applied to confocal images of collagen gels made with reconstituted bovine collagen type I, to demonstrate the computation of orientations of individual fibers.

  7. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    PubMed

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.

  8. Computational substrates of norms and their violations during social exchange.

    PubMed

    Xiang, Ting; Lohrenz, Terry; Montague, P Read

    2013-01-16

    Social norms in humans constrain individual behaviors to establish shared expectations within a social group. Previous work has probed social norm violations and the feelings that such violations engender; however, a computational rendering of the underlying neural and emotional responses has been lacking. We probed norm violations using a two-party, repeated fairness game (ultimatum game) where proposers offer a split of a monetary resource to a responder who either accepts or rejects the offer. Using a norm-training paradigm where subject groups are preadapted to either high or low offers, we demonstrate that unpredictable shifts in expected offers creates a difference in rejection rates exhibited by the two responder groups for otherwise identical offers. We constructed an ideal observer model that identified neural correlates of norm prediction errors in the ventral striatum and anterior insula, regions that also showed strong responses to variance-prediction errors generated by the same model. Subjective feelings about offers correlated with these norm prediction errors, and the two signals displayed overlapping, but not identical, neural correlates in striatum, insula, and medial orbitofrontal cortex. These results provide evidence for the hypothesis that responses in anterior insula can encode information about social norm violations that correlate with changes in overt behavior (changes in rejection rates). Together, these results demonstrate that the brain regions involved in reward prediction and risk prediction are also recruited in signaling social norm violations.

  9. Computational Substrates of Norms and Their Violations during Social Exchange

    PubMed Central

    Xiang, Ting; Lohrenz, Terry; Montague, P. Read

    2013-01-01

    Social norms in humans constrain individual behaviors to establish shared expectations within a social group. Previous work has probed social norm violations and the feelings that such violations engender; however, a computational rendering of the underlying neural and emotional responses has been lacking. We probed norm violations using a two-party, repeated fairness game (ultimatum game) where proposers offer a split of a monetary resource to a responder who either accepts or rejects the offer. Using a norm-training paradigm where subject groups are preadapted to either high or low offers, we demonstrate that unpredictable shifts in expected offers creates a difference in rejection rates exhibited by the two responder groups for otherwise identical offers. We constructed an ideal observer model that identified neural correlates of norm prediction errors in the ventral striatum and anterior insula, regions that also showed strong responses to variance-prediction errors generated by the same model. Subjective feelings about offers correlated with these norm prediction errors, and the two signals displayed overlapping, but not identical, neural correlates in striatum, insula, and medial orbitofrontal cortex. These results provide evidence for the hypothesis that responses in anterior insula can encode information about social norm violations that correlate with changes in overt behavior (changes in rejection rates). Together, these results demonstrate that the brain regions involved in reward prediction and risk prediction are also recruited in signaling social norm violations. PMID:23325247

  10. Implementation and Testing of VLBI Software Correlation at the USNO

    NASA Technical Reports Server (NTRS)

    Fey, Alan; Ojha, Roopesh; Boboltz, Dave; Geiger, Nicole; Kingham, Kerry; Hall, David; Gaume, Ralph; Johnston, Ken

    2010-01-01

    The Washington Correlator (WACO) at the U.S. Naval Observatory (USNO) is a dedicated VLBI processor based on dedicated hardware of ASIC design. The WACO is currently over 10 years old and is nearing the end of its expected lifetime. Plans for implementation and testing of software correlation at the USNO are currently being considered. The VLBI correlation process is, by its very nature, well suited to a parallelized computing environment. Commercial off-the-shelf computer hardware has advanced in processing power to the point where software correlation is now both economically and technologically feasible. The advantages of software correlation are manifold but include flexibility, scalability, and easy adaptability to changing environments and requirements. We discuss our experience with and plans for use of software correlation at USNO with emphasis on the use of the DiFX software correlator.

  11. SU-E-J-120: Comparing 4D CT Computed Ventilation to Lung Function Measured with Hyperpolarized Xenon-129 MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, B; Chen, Q

    2015-06-15

    Purpose: To correlate ventilation parameters computed from 4D CT to ventilation, profusion, and gas exchange measured with hyperpolarized Xenon-129 MRI for a set of lung cancer patients. Methods: Hyperpolarized Xe-129 MRI lung scans were acquired for lung cancer patients, before and after radiation therapy, measuring ventilation, perfusion, and gas exchange. In the standard clinical workflow, these patients also received 4D CT scans before treatment. Ventilation was computed from 4D CT using deformable image registration (DIR). All phases of the 4D CT scan were registered using a B-spline deformable registration. Ventilation at the voxel level was then computed for each phasemore » based on a Jacobian volume expansion metric, yielding phase sorted ventilation images. Ventilation based upon 4D CT and Xe-129 MRI were co-registered, allowing qualitative visual comparison and qualitative comparison via the Pearson correlation coefficient. Results: Analysis shows a weak correlation between hyperpolarized Xe-129 MRI and 4D CT DIR ventilation, with a Pearson correlation coefficient of 0.17 to 0.22. Further work will refine the DIR parameters to optimize the correlation. The weak correlation could be due to the limitations of 4D CT, registration algorithms, or the Xe-129 MRI imaging. Continued development will refine parameters to optimize correlation. Conclusion: Current analysis yields a minimal correlation between 4D CT DIR and Xe-129 MRI ventilation. Funding provided by the 2014 George Amorino Pilot Grant in Radiation Oncology at the University of Virginia.« less

  12. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ˜33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits min-1. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.

  13. Computed tomography of the anterior mediastinum in myasthemia gravis: a radiologic-pathologic correlative study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fon, G.T.; Bein, M.E.; Mancuso, A.A.

    1982-01-01

    Chest radiographs and computed tomographic (CT) scans of the mediastinum were correlated with pathologic findings of the thymus following thymectomy in 57 patients with myasthenia gravis. Based on the patient's age and the overall morphology of the anterior mediastinum, CT scans were assigned one of four grades in an attempt to predict thymus pathologic findings. Using this grading, 14 of 16 cases of thymoma were suspected or definitely diagnosed. One of the two cases not diagnosed on CT was a microscopic tumor. There were no false-positive diagnoses in 11 cases graded as definitely thymoma. We conclude that thymoma can bemore » sensitively diagnosed in patients older than 40 years of age. However, thymoma cannot be predicted with a high level of confidence in patients younger than 40 because of the difficulty in differentiating normal thymus or hyperplasia from thymoma. Recommendations for the use of CT in the preoperative evaluation of myasthenic patients are presented.« less

  14. Image analysis of pubic bone for age estimation in a computed tomography sample.

    PubMed

    López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella

    2015-03-01

    Radiology has demonstrated great utility for age estimation, but most of the studies are based on metrical and morphological methods in order to perform an identification profile. A simple image analysis-based method is presented, aimed to correlate the bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them with age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R (2) of 0.533 for females and 0.726 for males, with a high intra- and inter-observer agreement. The method suggested is considered not only useful for performing an identification profile during virtopsy, but also for application in further studies in order to attach a quantitative correlation for tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.

  15. The hemodynamics in intracranial aneurysm ruptured region with active contrast leakage during computed tomography angiography

    NASA Astrophysics Data System (ADS)

    Li, Ming-Lung; Wang, Yi-Chou; Liou, Tong-Miin; Lin, Chao-An

    2014-10-01

    Precise locations of rupture region under contrast agent leakage of five ruptured cerebral artery aneurysms during computed tomography angiography, which is to our knowledge for the first time, were successfully identified among 101 patients. These, together with numerical simulations based on the reconstructed aneurysmal models, were used to analyze hemodynamic parameters of aneurysms under different cardiac cyclic flow rates. For side wall type aneurysms, different inlet flow rates have mild influences on the shear stresses distributions. On the other hand, for branch type aneurysms, the predicted wall shear stress (WSS) correlates strongly with the increase of inlet vessel velocity. The mean and time averaged WSSes at rupture regions are found to be lower than those over the surface of the aneurysms. Also, the levels of the oscillatory shear index (OSI) are higher than the reported threshold value, supporting the assertion that high OSI correlates with rupture of the aneurysm. However, the present results also indicate that OSI level at the rupture region is relatively lower.

  16. Estimating affective word covariates using word association data.

    PubMed

    Van Rensbergen, Bram; De Deyne, Simon; Storms, Gert

    2016-12-01

    Word ratings on affective dimensions are an important tool in psycholinguistic research. Traditionally, they are obtained by asking participants to rate words on each dimension, a time-consuming procedure. As such, there has been some interest in computationally generating norms, by extrapolating words' affective ratings using their semantic similarity to words for which these values are already known. So far, most attempts have derived similarity from word co-occurrence in text corpora. In the current paper, we obtain similarity from word association data. We use these similarity ratings to predict the valence, arousal, and dominance of 14,000 Dutch words with the help of two extrapolation methods: Orientation towards Paradigm Words and k-Nearest Neighbors. The resulting estimates show very high correlations with human ratings when using Orientation towards Paradigm Words, and even higher correlations when using k-Nearest Neighbors. We discuss possible theoretical accounts of our results and compare our findings with previous attempts at computationally generating affective norms.

  17. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real-time using 3D digital image correlation, with two examples for medical imaging applications. Traditional DIC method was improved to meet the requirements of the real-time by simplifying the computations of integral pixel search. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed for orthognathic surgery navigation in order to track the maxilla segment after LeFort I osteotomy. Experiments showed noise for the static point was at the level of 10-3 mm and the measurement accuracy was 0.009 mm. The system was demonstrated on skin surface shape evaluation of a hand for finger stretching exercises, which indicated a great potential on tracking muscle and skin movements.

  18. New Advancements in the Study of the Uniform Electron Gas with Full Configuration Interaction Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Ruggeri, Michele; Luo, Hongjun; Alavi, Ali

    Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is able to give remarkably accurate results in the study of atoms and molecules. The study of the uniform electron gas (UEG) on the other hand has proven to be much harder, particularly in the low density regime. The source of this difficulty comes from the strong interparticle correlations that arise at low density, and essentially forbid the study of the electron gas in proximity of Wigner crystallization. We extend a previous study on the three dimensional electron gas computing the energy of a fully polarized gas for N=27 electrons at high and medium density (rS = 0 . 5 to 5 . 0). We show that even when dealing with a polarized UEG the computational cost of the study of systems with rS > 5 . 0 is prohibitive; in order to deal with correlations and to extend the density range that to be studied we introduce a basis of localized states and an effective transcorrelated Hamiltonian.

  19. Comparative analysis of two discretizations of Ricci curvature for complex networks.

    PubMed

    Samal, Areejit; Sreejith, R P; Gu, Jiao; Liu, Shiping; Saucan, Emil; Jost, Jürgen

    2018-06-05

    We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.

  20. The Correlation Between Pre and Postoperative Hearing Level with High Resolution Computed Tomography (HRCT) Findings in Congenital Canal Atresia (CAA) Patients.

    PubMed

    Asma, A; Abdul Fatah, A W; Hamzaini, A H; Mazita, A

    2013-12-01

    In managing patient with congenital congenital aural atresia (CAA), preoperative high resolution computed tomography (HRCT) scan and hearing assessment are important. A grading system based on HRCT findings was first introduced by Jahrsdoefer in order to select appropriate candidates for operation and to predict the postoperative hearing outcome in CAA patients. The score of eight and more was considered as a good prognostic factor for hearing reconstruction surgery. However previously in our center this score was not used as the criteria for surgical procedure. This study was conducted at Center A to evaluate the correlation between pre and postoperative hearing level with HRCT based on a Jahrsdoefer grading system in patients with CAA. All records and HRCT films with CAA from January 1997 until December 2007 at Center A were evaluated. The demographic data, operative records, pre and post operative hearing levels and HRCT findings were analyzed. Hearing level in this study was based on a pure tone average of air-bone gap at 500 Hz, 1 kHz and 2 kHz or hearing level obtained from auditory brainstem response eudiometry. This study was approved by Research Ethics Committee (code number, FF-197-2008). Thirty-two ears were retrospectively evaluated. The postoperative hearing level of 30 dB and less was considered as successful hearing result postoperatively. Of the six ears which underwent canalplasty, three had achieved successful hearing result. However, there was no significant correlation between preoperative hearing level (HL) with HRCT score and postoperative HL with HRCT score at 0.05 significant levels (correlation coefficient = -0.292, P = 0.105 and correlation coefficient = -0.127, P = 0.810) respectively. Hearing evaluation and HRCT temporal bone are two independent evaluations for the patients with CAA before going for hearing reconstructive surgery.

  1. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  2. Prevalence of vitamin D insufficiency among adolescents and its correlation with bone parameters using high-resolution peripheral quantitative computed tomography.

    PubMed

    Cheung, T F; Cheuk, K Y; Yu, F W P; Hung, V W Y; Ho, C S; Zhu, T Y; Ng, B K W; Lee, K M; Qin, L; Ho, S S Y; Wong, G W K; Cheng, J C Y; Lam, T P

    2016-08-01

    Vitamin D deficiency and insufficiency are highly prevalent among adolescents in Hong Kong, which is a sub-tropical city with ample sunshine. Vitamin D level is significantly correlated with key bone density and bone quality parameters. Further interventional studies are warranted to define the role of vitamin D supplementation for improvement of bone health among adolescents. The relationship between bone quality parameters and vitamin D (Vit-D) status remains undefined among adolescents. The aims of this study were to evaluate Vit-D status and its association with both bone density and bone quality parameters among adolescents. Three hundred thirty-three girls and 230 boys (12-16 years old) with normal health were recruited in summer and winter separately from local schools. Serum 25(OH) Vit-D level, bone density and quality parameters by Dual Energy X-ray Absorptiometry (DXA) and High-Resolution peripheral Quantitative Computed Tomography (HR-pQCT), dietary calcium intake, and physical activity level were assessed. Sixty-four point seven percent and 11.4 % of subjects were insufficient [25 ≤ 25(OH)Vit-D ≤ 50 nmol/L] and deficient [25(OH)Vit-D < 25 nmol/L] in Vit-D, respectively. The mean level of serum 25(OH)Vit-D in summer was significantly higher than that in winter (44.7 ± 13.6 and 35.9 ± 12.6 nmol/L, respectively) without obvious gender difference. In girls, areal bone mineral density (aBMD) and bone mineral content (BMC) of bilateral femoral necks, cortical area, cortical thickness, total volumetric bone mineral density (vBMD), and trabecular thickness were significantly correlated with 25(OH)Vit-D levels. In boys, aBMD of bilateral femoral necks, BMC of the dominant femoral neck, cortical area, cortical thickness, total vBMD, trabecular vBMD, BV/TV, and trabecular separation were significantly correlated with 25(OH)Vit-D levels. Vit-D insufficiency was highly prevalent among adolescents in Hong Kong with significant correlation between Vit-D levels and key bone density and bone quality parameters being detected in this study. Given that this is a cross-sectional study and causality relationship cannot be inferred, further interventional studies investigating the role of Vit-D supplementation on improving bone health among adolescents are warranted.

  3. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  4. Neural correlates and neural computations in posterior parietal cortex during perceptual decision-making

    PubMed Central

    Huk, Alexander C.; Meister, Miriam L. R.

    2012-01-01

    A recent line of work has found remarkable success in relating perceptual decision-making and the spiking activity in the macaque lateral intraparietal area (LIP). In this review, we focus on questions about the neural computations in LIP that are not answered by demonstrations of neural correlates of psychological processes. We highlight three areas of limitations in our current understanding of the precise neural computations that might underlie neural correlates of decisions: (1) empirical questions not yet answered by existing data; (2) implementation issues related to how neural circuits could actually implement the mechanisms suggested by both extracellular neurophysiology and psychophysics; and (3) ecological constraints related to the use of well-controlled laboratory tasks and whether they provide an accurate window on sensorimotor computation. These issues motivate the adoption of a more general “encoding-decoding framework” that will be fruitful for more detailed contemplation of how neural computations in LIP relate to the formation of perceptual decisions. PMID:23087623

  5. Numerical simulation of haemodynamics and low-density lipoprotein transport in the rabbit aorta and their correlation with atherosclerotic plaque thickness

    PubMed Central

    Liu, Xiao; Zhang, Peng; Feng, Chenglong; Sun, Anqiang; Kang, Hongyan; Deng, Xiaoyan; Fan, Yubo

    2017-01-01

    Two mechanisms of shear stress and mass transport have been recognized to play an important role in the development of localized atherosclerosis. However, their relationship and roles in atherogenesis are still obscure. It is necessary to investigate quantitatively the correlation among low-density lipoproteins (LDL) transport, haemodynamic parameters and plaque thickness. We simulated blood flow and LDL transport in rabbit aorta using computational fluid dynamics and evaluated plaque thickness in the aorta of a high-fat-diet rabbit. The numerical results show that regions with high luminal LDL concentration tend to have severely negative haemodynamic environments (HEs). However, for regions with moderately and slightly high luminal LDL concentration, the relationship between LDL concentration and the above haemodynamic indicators is not clear cut. Point-by-point correlation with experimental results indicates that severe atherosclerotic plaque corresponds to high LDL concentration and seriously negative HEs, less severe atherosclerotic plaque is related to either moderately high LDL concentration or moderately negative HEs, and there is almost no atherosclerotic plaque in regions with both low LDL concentration and positive HEs. In conclusion, LDL distribution is closely linked to blood flow transport, and the synergetic effects of luminal surface LDL concentration and wall shear stress-based haemodynamic indicators may determine plaque thickness. PMID:28424305

  6. Transitions in High-Arctic Vegetation Growth Patterns and Ecosystem Productivity from 2000-2013 Tracked with Cameras

    NASA Astrophysics Data System (ADS)

    Westergaard-Nielsen, A.; Hansen, B. U.; Klosterman, S.; Pedersen, S. H.; Schmidt, N. M.; Abermann, J.; Lund, M.

    2015-12-01

    The changes in vegetation seasonality in high northern latitudes resulting from changes atmospheric temperatures and precipitation are still not well understood. Continued monitoring and research is therefore needed. In this study we use 13 years of time lapse camera data and climate data from high-Arctic Northeast Greenland to assess the seasonal response of a dwarf shrub heath, grassland, and fens to snow cover, soil moisture, and atmospheric and soil temperatures. Based on the camera data, we computed a greenness index which was subsequently used to analyze transition dates in vegetation seasonality. We show that snow cover and subsequent water from the melting snow pack is highly important for the seasonality. We found a significant advancement in start of growing season of 12 days but not a significant increase in growing season length. Both the timing and greenness index value of peak of growing season was significantly correlated to the available water in the pre-melt snow pack, mostly pronounced in vegetation with limited soil water. The end of growing season was likewise significantly correlated to the water equivalents in the pre-melt snowpack. Moreover, the vegetation greenness was highly correlated to GPP, and shifts in seasonality as tracked by the greenness index are thus expected to have direct influence on ecosystem productivity.

  7. Calibration of Axisymmetric and Quasi-1D Solvers for High Enthalpy Nozzles

    NASA Technical Reports Server (NTRS)

    Papadopoulos, P. E.; Gochberg, L. A.; Tokarcik-Polsky, S.; Venkatapathy, E.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1994-01-01

    The proposed paper will present a numerical investigation of the flow characteristics and boundary layer development in the nozzles of high enthalpy shock tunnel facilities used for hypersonic propulsion testing. The computed flow will be validated against existing experimental data. Pitot pressure data obtained at the entrance of the test cabin will be used to validate the numerical simulations. It is necessary to accurately model the facility nozzles in order to characterize the test article flow conditions. Initially the axisymmetric nozzle flow will be computed using a Navier Stokes solver for a range of reservoir conditions. The calculated solutions will be compared and calibrated against available experimental data from the DLR HEG piston-driven shock tunnel and the 16-inch shock tunnel at NASA Ames Research Center. The Reynolds number is assumed to be high enough at the throat that the boundary layer flow is assumed turbulent at this point downstream. The real gas affects will be examined. In high Mach number facilities the boundary layer is thick. Attempts will be made to correlate the boundary layer displacement thickness. The displacement thickness correlation will be used to calibrate the quasi-1D codes NENZF and LSENS in order to provide fast and efficient tools of characterizing the facility nozzles. The calibrated quasi-1D codes will be implemented to study the effects of chemistry and the flow condition variations at the test section due to small variations in the driver gas conditions.

  8. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures possess significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by the terms of low-order. The computational effort to determine the expansion functions using the α-cut method scales polynomically with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.

  9. Effect of low-dose CT and iterative reconstruction on trabecular bone microstructure assessment

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Baum, Thomas; Nasirudin, Radin A.; Mei, Kai; Garcia, Eduardo G.; Burgkart, Rainer; Rummeny, Ernst J.; Bauer, Jan S.; Noël, Peter B.

    2016-03-01

    The trabecular bone microstructure is an important factor in the development of osteoporosis. It is well known that its deterioration is one effect when osteoporosis occurs. Previous research showed that the analysis of trabecular bone microstructure enables more precise diagnoses of osteoporosis compared to a sole measurement of the mineral density. Microstructure parameters are assessed on volumetric images of the bone acquired either with high-resolution magnetic resonance imaging, high-resolution peripheral quantitative computed tomography or high-resolution computed tomography (CT), with only CT being applicable to the spine, which is one of clinically most relevant fracture sites. However, due to the high radiation exposure for imaging the whole spine these measurements are not applicable in current clinical routine. In this work, twelve vertebrae from three different donors were scanned with standard and low radiation dose. Trabecular bone microstructure parameters were assessed for CT images reconstructed with statistical iterative reconstruction (SIR) and analytical filtered backprojection (FBP). The resulting structure parameters were correlated to the biomechanically determined fracture load of each vertebra. Microstructure parameters assessed for low-dose data reconstructed with SIR significantly correlated with fracture loads as well as parameters assessed for standard-dose data reconstructed with FBP. Ideal results were achieved with low to zero regularization strength yielding microstructure parameters not significantly different from those assessed for standard-dose FPB data. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  10. Spatiotemporal characterization of current and future droughts in the High Atlas basins (Morocco)

    NASA Astrophysics Data System (ADS)

    Zkhiri, Wiam; Tramblay, Yves; Hanich, Lahoucine; Jarlan, Lionel; Ruelland, Denis

    2018-02-01

    Over the past decades, drought has become a major concern in Morocco due to the importance of agriculture in the economy of the country. In the present work, the standardized precipitation index (SPI) is used to monitor the evolution, frequency, and severity of droughts in the High Atlas basins (N'Fis, Ourika, Rhéraya, Zat, and R'dat), located south of Marrakech city. The spatiotemporal characterization of drought in these basins is performed by computing the SPI with precipitation spatially interpolated over the catchments. The Haouz plain, located downstream of these basins, is strongly dependent on water provided by the mountain ranges, as shown by the positive correlations between the normalized difference vegetation index (NDVI) in the plain and the 3, 6, and 12-month SPI in the High Atlas catchments. On the opposite, no significant correlations are found with piezometric levels of the Haouz groundwater due to intensified pumping for irrigation in the recent decades. A relative SPI index was computed to evaluate the climate change impacts on drought occurrence, based on the projected precipitation (2006-2100) from five high-resolution CORDEX regional climate simulations, under two emission scenarios (RCP 4.5 and RCP 8.5). These models show a decrease in precipitation towards the future up to - 65% compared to the historical period. In terms of drought events, the future projections indicate a strong increase in the frequency of SPI events below - 2, considered as severe drought condition.

  11. Rational Design of Highly Potent and Slow-Binding Cytochrome bc1 Inhibitor as Fungicide by Computational Substitution Optimization

    PubMed Central

    Hao, Ge-Fei; Yang, Sheng-Gang; Huang, Wei; Wang, Le; Shen, Yan-Qing; Tu, Wen-Long; Li, Hui; Huang, Li-Shar; Wu, Jia-Wei; Berry, Edward A.; Yang, Guang-Fu

    2015-01-01

    Hit to lead (H2L) optimization is a key step for drug and agrochemical discovery. A critical challenge for H2L optimization is the low efficiency due to the lack of predictive method with high accuracy. We described a new computational method called Computational Substitution Optimization (CSO) that has allowed us to rapidly identify compounds with cytochrome bc1 complex inhibitory activity in the nanomolar and subnanomolar range. The comprehensively optimized candidate has proved to be a slow binding inhibitor of bc1 complex, ~73-fold more potent (Ki = 4.1 nM) than the best commercial fungicide azoxystrobin (AZ; Ki = 297.6 nM) and shows excellent in vivo fungicidal activity against downy mildew and powdery mildew disease. The excellent correlation between experimental and calculated binding free-energy shifts together with further crystallographic analysis confirmed the prediction accuracy of CSO method. To the best of our knowledge, CSO is a new computational approach to substitution-scanning mutagenesis of ligand and could be used as a general strategy of H2L optimisation in drug and agrochemical design.

  12. Stuck on Screens: Patterns of Computer and Gaming Station Use in Youth Seen in a Psychiatric Clinic

    PubMed Central

    Baer, Susan; Bogusz, Elliot; Green, David A.

    2011-01-01

    Objective: Computer and gaming-station use has become entrenched in the culture of our youth. Parents of children with psychiatric disorders report concerns about overuse, but research in this area is limited. The goal of this study is to evaluate computer/gaming-station use in adolescents in a psychiatric clinic population and to examine the relationship between use and functional impairment. Method: 102 adolescents, ages 11–17, from out-patient psychiatric clinics participated. Amount of computer/gaming-station use, type of use (gaming or non-gaming), and presence of addictive features were ascertained along with emotional/functional impairment. Multivariate linear regression was used to examine correlations between patterns of use and impairment. Results: Mean screen time was 6.7±4.2 hrs/day. Presence of addictive features was positively correlated with emotional/functional impairment. Time spent on computer/gaming-station use was not correlated overall with impairment after controlling for addictive features, but non-gaming time was positively correlated with risky behavior in boys. Conclusions: Youth with psychiatric disorders are spending much of their leisure time on the computer/gaming-station and a substantial subset show addictive features of use which is associated with impairment. Further research to develop measures and to evaluate risk is needed to identify the impact of this problem. PMID:21541096

  13. Differential brain growth in the infant born preterm: current knowledge and future developments from brain imaging.

    PubMed

    Counsell, Serena J; Boardman, James P

    2005-10-01

    Preterm birth is associated with a high prevalence of neuropsychiatric impairment in childhood and adolescence, but the neural correlates underlying these disorders are not fully understood. Quantitative magnetic resonance imaging techniques have been used to investigate subtle differences in cerebral growth and development among children and adolescents born preterm or with very low birth weight. Diffusion tensor imaging and computer-assisted morphometric techniques (including voxel-based morphometry and deformation-based morphometry) have identified abnormalities in tissue microstructure and cerebral morphology among survivors of preterm birth at different ages, and some of these alterations have specific functional correlates. This chapter reviews the literature reporting differential brain development following preterm birth, with emphasis on the morphological changes that correlate with neuropsychiatric impairment.

  14. Two-point correlators revisited: fast and slow scales in multifield models of inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghersi, José T. Gálvez; Frolov, Andrei V., E-mail: joseg@sfu.ca, E-mail: frolov@sfu.ca

    2017-05-01

    We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing methods to calculate primordial spectra. We present a description motivated by the separation of the fast and slow evolving components of the spectrum which is based on Cholesky decomposition of the field correlator matrix. Our purpose is to rewrite all the relevant equations of motion in terms of slowly varying quantities. This is important in order to consider the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required tomore » reproduce the main distinctive features in the power spectrum for each specific model of inflation.« less

  15. Ares I-X Post Flight Ignition Overpressure Review

    NASA Technical Reports Server (NTRS)

    Alvord, David A.

    2010-01-01

    Ignition Overpressure (IOP) is an unsteady fluid flow and acoustic phenomena caused by the rapid expansion of gas from the rocket nozzle within a ducted launching space resulting in an initially higher amplitude pressure wave. This wave is potentially dangerous to the structural integrity of the vehicle. An in-depth look at the IOP environments resulting from the Ares I-X Solid Rocket Booster configuration showed high correlation between the pre-flight predictions and post-flight analysis results. Correlation between the chamber pressure and IOP transients showed successful acoustic mitigation, containing the strongest IOP waves below the Mobile Launch Pad deck. The flight data allowed subsequent verification and validation of Ares I-X unsteady fluid ducted launcher predictions, computational fluid dynamic models, and strong correlation with historical Shuttle data.

  16. Computational architecture of the yeast regulatory network

    NASA Astrophysics Data System (ADS)

    Maslov, Sergei; Sneppen, Kim

    2005-12-01

    The topology of regulatory networks contains clues to their overall design principles and evolutionary history. We find that while in- and out-degrees of a given protein in the regulatory network are not correlated with each other, there exists a strong negative correlation between the out-degree of a regulatory protein and in-degrees of its targets. Such correlation positions large regulatory modules on the periphery of the network and makes them rather well separated from each other. We also address the question of relative importance of different classes of proteins quantified by the lethality of null-mutants lacking one of them as well as by the level of their evolutionary conservation. It was found that in the yeast regulatory network highly connected proteins are in fact less important than their low-connected counterparts.

  17. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis.

    PubMed

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-08-18

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities.

  18. The relationship between dental implant stability and trabecular bone structure using cone-beam computed tomography

    PubMed Central

    2016-01-01

    Purpose The objective of this study was to investigate the relationships between primary implant stability as measured by impact response frequency and the structural parameters of trabecular bone using cone-beam computed tomography(CBCT), excluding the effect of cortical bone thickness. Methods We measured the impact response of a dental implant placed into swine bone specimens composed of only trabecular bone without the cortical bone layer using an inductive sensor. The peak frequency of the impact response spectrum was determined as an implant stability criterion (SPF). The 3D microstructural parameters were calculated from CT images of the bone specimens obtained using both micro-CT and CBCT. Results SPF had significant positive correlations with trabecular bone structural parameters (BV/TV, BV, BS, BSD, Tb.Th, Tb.N, FD, and BS/BV) (P<0.01) while SPF demonstrated significant negative correlations with other microstructural parameters (Tb.Sp, Tb.Pf, and SMI) using micro-CT and CBCT (P<0.01). Conclusions There was an increase in implant stability prediction by combining BV/TV and SMI in the stepwise forward regression analysis. Bone with high volume density and low surface density shows high implant stability. Well-connected thick bone with small marrow spaces also shows high implant stability. The combination of bone density and architectural parameters measured using CBCT can predict the implant stability more accurately than the density alone in clinical diagnoses. PMID:27127692

  19. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation engineering systems possess significant computational challenges. This paper explores the possibility of using correlated function expansion based metamodelling approach when uncertain system parameters are modeled using Fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or may be infinite-dimensional as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial Finite Element software. Modal analysis of a simplified aircraft wing with Fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.

  20. Seismic Interferometry at a Large, Dense Array: Capturing the Wavefield at the Source Physics Experiment

    NASA Astrophysics Data System (ADS)

    Matzel, E.; Mellors, R. J.; Magana-Zook, S. A.

    2016-12-01

    Seismic interferometry is based on the observation that the Earth's background wavefield includes coherent energy, which can be recovered by observing over long time periods, allowing the incoherent energy to cancel out. The cross correlation of the energy recorded at a pair of stations results in an estimate of the Green's Function (GF) and is equivalent to the record of a simple source located at one of the stations as recorded by the other. This allows high resolution imagery beneath dense seismic networks even in areas of low seismicity. The power of these inter-station techniques increases rapidly as the number of seismometers in a network increases. For large networks the number of correlations computed can run into the millions and this becomes a "big-data" problem where data-management dominates the efficiency of the computations. In this study, we use several methods of seismic interferometry to obtain highly detailed images at the site of the Source Physics Experiment (SPE). The objective of SPE is to obtain a physics-based understanding of how seismic waves are created at and scattered near the source. In 2015, a temporary deployment of 1,000 closely spaced geophones was added to the main network of instruments at the site. We focus on three interferometric techniques: Shot interferometry (SI) uses the SPE shots as rich sources of high frequency, high signal energy. Coda interferometry (CI) isolates the energy from the scattered wavefield of distant earthquakes. Ambient noise correlation (ANC) uses the energy of the ambient background field. In each case, the data recorded at one seismometer are correlated with the data recorded at another to obtain an estimate of the GF between the two. The large network of mixed geophone and broadband instruments at the SPE allows us to calculate over 500,000 GFs, which we use to characterize the site and measure the localized wavefield. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344

  1. Intercomparison of MODIS, MISR, OMI, and CALIPSO aerosol optical depth retrievals for four locations on the Indo-Gangetic plains and validation against AERONET data

    NASA Astrophysics Data System (ADS)

    Bibi, Humera; Alam, Khan; Chishtie, Farrukh; Bibi, Samina; Shahid, Imran; Blaschke, Thomas

    2015-06-01

    This study provides an intercomparison of aerosol optical depth (AOD) retrievals from satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS), Multiangle Imaging Spectroradiometer (MISR), Ozone Monitoring Instrument (OMI), and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) instrumentation over Karachi, Lahore, Jaipur, and Kanpur between 2007 and 2013, with validation against AOD observations from the ground-based Aerosol Robotic Network (AERONET). Both MODIS Deep Blue (MODISDB) and MODIS Standard (MODISSTD) products were compared with the AERONET products. The MODISSTD-AERONET comparisons revealed a high degree of correlation for the four investigated sites at Karachi, Lahore, Jaipur, and Kanpur, the MODISDB-AERONET comparisons revealed even better correlations, and the MISR-AERONET comparisons also indicated strong correlations, as did the OMI-AERONET comparisons, while the CALIPSO-AERONET comparisons revealed only poor correlations due to the limited number of data points available. We also computed figures for root mean square error (RMSE), mean absolute error (MAE) and root mean bias (RMB). Using AERONET data to validate MODISSTD, MODISDB, MISR, OMI, and CALIPSO data revealed that MODISSTD data was more accurate over vegetated locations than over un-vegetated locations, while MISR data was more accurate over areas close to the ocean than over other areas. The MISR instrument performed better than the other instruments over Karachi and Kanpur, while the MODISSTD AOD retrievals were better than those from the other instruments over Lahore and Jaipur. We also computed the expected error bounds (EEBs) for both MODIS retrievals and found that MODISSTD consistently outperformed MODISDB in all of the investigated areas. High AOD values were observed by the MODISSTD, MODISDB, MISR, and OMI instruments during the summer months (April-August); these ranged from 0.32 to 0.78, possibly due to human activity and biomass burning. In contrast, high AOD values were observed by the CALIPSO instrument between September and December, due to high concentrations of smoke and soot aerosols. The variable monthly AOD figures obtained with different sensors indicate overestimation by MODISSTD, MODISDB, OMI, and CALIPSO instruments over Karachi, Lahore, Jaipur and Kanpur, relative to the AERONET data, but underestimation by the MISR instrument.

  2. Pathology informatics questions and answers from the University of Pittsburgh pathology residency informatics rotation.

    PubMed

    Harrison, James H

    2004-01-01

    Effective pathology practice increasingly requires familiarity with concepts in medical informatics that may cover a broad range of topics, for example, traditional clinical information systems, desktop and Internet computer applications, and effective protocols for computer security. To address this need, the University of Pittsburgh (Pittsburgh, Pa) includes a full-time, 3-week rotation in pathology informatics as a required component of pathology residency training. To teach pathology residents general informatics concepts important in pathology practice. We assess the efficacy of the rotation in communicating these concepts using a short-answer examination administered at the end of the rotation. Because the increasing use of computers and the Internet in education and general communications prior to residency training has the potential to communicate key concepts that might not need additional coverage in the rotation, we have also evaluated incoming residents' informatics knowledge using a similar pretest. This article lists 128 questions that cover a range of topics in pathology informatics at a level appropriate for residency training. These questions were used for pretests and posttests in the pathology informatics rotation in the Pathology Residency Program at the University of Pittsburgh for the years 2000 through 2002. With slight modification, the questions are organized here into 15 topic categories within pathology informatics. The answers provided are brief and are meant to orient the reader to the question and suggest the level of detail appropriate in an answer from a pathology resident. A previously published evaluation of the test results revealed that pretest scores did not increase during the 3-year evaluation period, and self-assessed computer skill level correlated with pretest scores, but all pretest scores were low. Posttest scores increased substantially, and posttest scores did not correlate with the self-assessed computer skill level recorded at pretest time. Even residents who rated themselves high in computer skills lacked many concepts important in pathology informatics, and posttest scores showed that residents with both high and low self-assessed skill levels learned pathology informatics concepts effectively.

  3. Novel schemes for measurement-based quantum computation.

    PubMed

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics-based on finitely correlated or projected entangled pair states-to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  4. Comparison of Methods for Determining Boundary Layer Edge Conditions for Transition Correlations

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.; Berry, Scott A.; Hollis, Brian R.; Horvath, Thomas J.

    2003-01-01

    Data previously obtained for the X-33 in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed to compare methods for determining boundary layer edge conditions for use in transition correlations. The experimental results were previously obtained utilizing the phosphor thermography technique to monitor the status of the boundary layer downstream of discrete roughness elements via global heat transfer images of the X-33 windward surface. A boundary layer transition correlation was previously developed for this data set using boundary layer edge conditions calculated using an inviscid/integral boundary layer approach. An algorithm was written in the present study to extract boundary layer edge quantities from higher fidelity viscous computational fluid dynamic solutions to develop transition correlations that account for viscous effects on vehicles of arbitrary complexity. The boundary layer transition correlation developed for the X-33 from the viscous solutions are compared to the previous boundary layer transition correlations. It is shown that the boundary layer edge conditions calculated using an inviscid/integral boundary layer approach are significantly different than those extracted from viscous computational fluid dynamic solutions. The present results demonstrate the differences obtained in correlating transition data using different computational methods.

  5. Ab initio approaches for the determination of heavy element energetics: Ionization energies of trivalent lanthanides (Ln = La-Eu)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Charles; Penchoff, Deborah A.; Wilson, Angela K., E-mail: wilson@chemistry.msu.edu

    2015-11-21

    An effective approach for the determination of lanthanide energetics, as demonstrated by application to the third ionization energy (in the gas phase) for the first half of the lanthanide series, has been developed. This approach uses a combination of highly correlated and fully relativistic ab initio methods to accurately describe the electronic structure of heavy elements. Both scalar and fully relativistic methods are used to achieve an approach that is both computationally feasible and accurate. The impact of basis set choice and the number of electrons included in the correlation space has also been examined.

  6. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  7. Correlation of heat transfer for the zero pressure gradient hypersonic laminar boundary layer for several gases

    NASA Technical Reports Server (NTRS)

    Cook, W. J.

    1973-01-01

    A theoretical study of heat transfer for zero pressure gradient hypersonic laminar boundary layers for various gases with particular application to the flows produced in an expansion tube facility was conducted. A correlation based on results obtained from solutions to the governing equations for five gases was formulated. Particular attention was directed toward the laminar boundary layer shock tube splitter plates in carbon dioxide flows generated by high speed shock waves. Computer analysis of the splitter plate boundary layer flow provided information that is useful in interpreting experimental data obtained in shock tube gas radiation studies.

  8. Correlation of reaction sites during the chlorine extraction by hydrogen atom from Cl /Si(100)-2×1

    NASA Astrophysics Data System (ADS)

    Hsieh, Ming-Feng; Chung, Jen-Yang; Lin, Deng-Sung; Tsay, Shiow-Fon

    2007-07-01

    The Cl abstraction by gas-phase H atoms from a Cl-terminated Si(100) surface was investigated by scanning tunneling microscopy (STM), high-resolution core level photoemission spectroscopy, and computer simulation. The core level measurements indicate that some additional reactions occur besides the removal of Cl. The STM images show that the Cl-extracted sites disperse randomly in the initial phase of the reaction, but form small clusters as more Cl is removed, indicating a correlation between Cl-extracted sites. These results suggest that the hot-atom process may occur during the atom-adatom collision.

  9. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software dominant, real-time or near-real time atmospheric turbulence compensated imagery. The development of parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion to name a few. A novel approach to the computationally intensive case of software dominant optical and near infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation based approach to the wavelength diversity methodology using a parallel processing architecture enabling high speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  10. Computed Tomography Studies of Lung Mechanics

    PubMed Central

    Simon, Brett A.; Christensen, Gary E.; Low, Daniel A.; Reinhardt, Joseph M.

    2005-01-01

    The study of lung mechanics has progressed from global descriptions of lung pressure and volume relationships to the high-resolution, three-dimensional, quantitative measurement of dynamic regional mechanical properties and displacements. X-ray computed tomography (CT) imaging is ideally suited to the study of regional lung mechanics in intact subjects because of its high spatial and temporal resolution, correlation of functional data with anatomic detail, increasing volumetric data acquisition, and the unique relationship between CT density and lung air content. This review presents an overview of CT measurement principles and limitations for the study of regional mechanics, reviews some of the early work that set the stage for modern imaging approaches and impacted the understanding and management of patients with acute lung injury, and presents evolving novel approaches for the analysis and application of dynamic volumetric lung image data. PMID:16352757

  11. Percent Grammatical Responses as a General Outcome Measure: Initial Validity

    ERIC Educational Resources Information Center

    Eisenberg, Sarita L.; Guo, Ling-Yu

    2018-01-01

    Purpose: This report investigated the validity of using percent grammatical responses (PGR) as a measure for assessing grammaticality. To establish construct validity, we computed the correlation of PGR with another measure of grammar skills and with an unrelated skill area. To establish concurrent validity for PGR, we computed the correlation of…

  12. An Exploratory Study of Internet Addiction, Usage and Communication Pleasure.

    ERIC Educational Resources Information Center

    Chou, Chien; Chou, Jung; Tyan, Nay-Ching Nancy

    This study examined the correlation between Internet addiction, usage, and communication pleasure. Research questions were: (1) What is computer network addiction? (2) How can one measure the degree of computer network addiction? (3) What is the correlation between the degree of users' network addiction and their network usage? (4) What is the…

  13. Computational structure analysis of biomacromolecule complexes by interface geometry.

    PubMed

    Mahdavi, Sedigheh; Salehzadeh-Yazdi, Ali; Mohades, Ali; Masoudi-Nejad, Ali

    2013-12-01

    The ability to analyze and compare protein-nucleic acid and protein-protein interaction interface has critical importance in understanding the biological function and essential processes occurring in the cells. Since high-resolution three-dimensional (3D) structures of biomacromolecule complexes are available, computational characterizing of the interface geometry become an important research topic in the field of molecular biology. In this study, the interfaces of a set of 180 protein-nucleic acid and protein-protein complexes are computed to understand the principles of their interactions. The weighted Voronoi diagram of the atoms and the Alpha complex has provided an accurate description of the interface atoms. Our method is implemented in the presence and absence of water molecules. A comparison among the three types of interaction interfaces show that RNA-protein complexes have the largest size of an interface. The results show a high correlation coefficient between our method and the PISA server in the presence and absence of water molecules in the Voronoi model and the traditional model based on solvent accessibility and the high validation parameters in comparison to the classical model. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. GOCO05c: A New Combined Gravity Field Model Based on Full Normal Equations and Regionally Varying Weighting

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2017-05-01

    GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.

  15. Factors influencing exemplary science teachers' levels of computer use

    NASA Astrophysics Data System (ADS)

    Hakverdi, Meral

    This study examines exemplary science teachers' use of technology in science instruction, factors influencing their level of computer use, their level of knowledge/skills in using specific computer applications for science instruction, their use of computer-related applications/tools during their instruction, and their students' use of computer applications/tools in or for their science class. After a relevant review of the literature certain variables were selected for analysis. These variables included personal self-efficacy in teaching with computers, outcome expectancy, pupil-control ideology, level of computer use, age, gender, teaching experience, personal computer use, professional computer use and science teachers' level of knowledge/skills in using specific computer applications for science instruction. The sample for this study includes middle and high school science teachers who received the Presidential Award for Excellence in Science Teaching Award (sponsored by the White House and the National Science Foundation) between the years 1997 and 2003 from all 50 states and U.S. territories. Award-winning science teachers were contacted about the survey via e-mail or letter with an enclosed return envelope. Of the 334 award-winning science teachers, usable responses were received from 92 science teachers, which made a response rate of 27.5%. Analysis of the survey responses indicated that exemplary science teachers have a variety of knowledge/skills in using computer related applications/tools. The most commonly used computer applications/tools are information retrieval via the Internet, presentation tools, online communication, digital cameras, and data collection probes. Results of the study revealed that students' use of technology in their science classroom is highly correlated with the frequency of their science teachers' use of computer applications/tools. The results of the multiple regression analysis revealed that personal self-efficacy related to the exemplary science teachers' level of computer use suggesting that computer use is dependent on perceived abilities at using computers. The teachers' use of computer-related applications/tools during class, and their personal self-efficacy, age, and gender are highly related with their level of knowledge/skills in using specific computer applications for science instruction. The teachers' level of knowledge/skills in using specific computer applications for science instruction and gender related to their use of computer-related applications/tools during class and the students' use of computer-related applications/tools in or for their science class. In conclusion, exemplary science teachers need assistance in learning and using computer-related applications/tool in their science class.

  16. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  17. Interocular symmetry in macular choroidal thickness in children.

    PubMed

    Al-Haddad, Christiane; El Chaar, Lama; Antonios, Rafic; El-Dairi, Mays; Noureddin, Baha'

    2014-01-01

    Objective. To report interocular differences in choroidal thickness in children using spectral domain optical coherence tomography (SD-OCT) and correlate findings with biometric data. Methods. This observational cross-sectional study included 91 (182 eyes) healthy children aged 6 to 17 years with no ocular abnormality except refractive error. After a comprehensive eye exam and axial length measurement, high definition macular scans were performed using SD-OCT. Two observers manually measured the choroidal thickness at the foveal center and at 1500 µm nasally, temporally, inferiorly, and superiorly. Interocular differences were computed; correlations with age, gender, refractive error, and axial length were performed. Results. Mean age was 10.40 ± 3.17 years; mean axial length and refractive error values were similar between fellow eyes. There was excellent correlation between the two observers' measurements. No significant interocular differences were observed at any location. There was only a trend for right eyes to have higher values in all thicknesses, except the superior thickness. Most of the choroidal thickness measurements correlated positively with spherical equivalent but not with axial length, age, or gender. Conclusion. Choroidal thickness measurements in children as performed using SD-OCT revealed a high level of interobserver agreement and consistent interocular symmetry. Values correlated positively with spherical equivalent refraction.

  18. Criterion for Identifying Vortices in High-Pressure Flows

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Okong'o, Nora

    2007-01-01

    A study of four previously published computational criteria for identifying vortices in high-pressure flows has led to the selection of one of them as the best. This development can be expected to contribute to understanding of high-pressure flows, which occur in diverse settings, including diesel, gas turbine, and rocket engines and the atmospheres of Jupiter and other large gaseous planets. Information on the atmospheres of gaseous planets consists mainly of visual and thermal images of the flows over the planets. Also, validation of recently proposed computational models of high-pressure flows entails comparison with measurements, which are mainly of visual nature. Heretofore, the interpretation of images of high-pressure flows to identify vortices has been based on experience with low-pressure flows. However, high-pressure flows have features distinct from those of low-pressure flows, particularly in regions of high pressure gradient magnitude caused by dynamic turbulent effects and by thermodynamic mixing of chemical species. Therefore, interpretations based on low-pressure behavior may lead to misidentification of vortices and other flow structures in high-pressure flows. The study reported here was performed in recognition of the need for one or more quantitative criteria for identifying coherent flow structures - especially vortices - from previously generated flow-field data, to complement or supersede the determination of flow structures by visual inspection of instantaneous fields or flow animations. The focus in the study was on correlating visible images of flow features with various quantities computed from flow-field data.

  19. Optimising the measurement of bruises in children across conventional and cross polarized images using segmentation analysis techniques in Image J, Photoshop and circle diameter measurements.

    PubMed

    Harris, C; Alcock, A; Trefan, L; Nuttall, D; Evans, S T; Maguire, S; Kemp, A M

    2018-02-01

    Bruising is a common abusive injury in children, and it is standard practice to image and measure them, yet there is no current standard for measuring bruise size consistently. We aim to identify the optimal method of measuring photographic images of bruises, including computerised measurement techniques. 24 children aged <11 years (mean age of 6.9, range 2.5-10 years) with a bruise were recruited from the community. Demographics and bruise details were recorded. Each bruise was measured in vivo using a paper measuring tape. Standardised conventional and cross polarized digital images were obtained. The diameter of bruise images were measured by three computer aided measurement techniques: Image J (segmentation with Simple Interactive Object Extraction (maximum Feret diameter), 'Circular Selection Tool' (Circle diameter), & the Photoshop 'ruler' software (Photoshop diameter)). Inter and intra-observer effects were determined by two individuals repeating 11 electronic measurements, and relevant Intraclass Correlation Coefficient's (ICC's) were used to establish reliability. Spearman's rank correlation was used to compare in vivo with computerised measurements; a comparison of measurement techniques across imaging modalities was conducted using Kolmogorov-Smirnov tests. Significance was set at p < 0.05 for all tests. Images were available for 38 bruises in vivo, with 48 bruises visible on cross polarized imaging and 46 on conventional imaging (some bruises interpreted as being single in vivo appeared to be multiple in digital images). Correlation coefficients were >0.5 for all techniques, with maximum Feret diameter and maximum Photoshop diameter on conventional images having the strongest correlation with in vivo measurements. There were significant differences between in vivo and computer-aided measurements, but none between different computer-aided measurement techniques. Overall, computer aided measurements appeared larger than in vivo. Inter- and intra-observer agreement was high for all maximum diameter measurements (ICC's > 0.7). Whilst there are minimal differences between measurements of images obtained, the most consistent results were obtained when conventional images, segmented by Image J Software, were measured with a Feret diameter. This is therefore proposed as a standard for future research, and forensic practice, with the proviso that all computer aided measurements appear larger than in vivo. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of iris from the eye images is a primary requirement for robust database independent iris recognition. In this paper we have presented a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. Pupil-iris boundary computation is based on correlation filtering approach, whereas iris-sclera boundary is determined through one dimensional intensity mapping. The proposed approach is computationally less extensive when compared with the existing algorithms like Hough transform.

  1. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.

  2. Towards a computational(ist) neurobiology of language: Correlational, integrated, and explanatory neurolinguistics*

    PubMed Central

    Poeppel, David

    2014-01-01

    We outline what an integrated approach to language research that connects experimental, theoretical, and neurobiological domains of inquiry would look like, and ask to what extent unification is possible across domains. At the center of the program is the idea that computational/representational (CR) theories of language must be used to investigate its neurobiological (NB) foundations. We consider different ways in which CR and NB might be connected. These are (1) A Correlational way, in which NB computation is correlated with the CR theory; (2) An Integrated way, in which NB data provide crucial evidence for choosing among CR theories; and (3) an Explanatory way, in which properties of NB explain why a CR theory is the way it is. We examine various questions concerning the prospects for Explanatory connections in particular, including to what extent it makes sense to say that NB could be specialized for particular computations. PMID:25914888

  3. Towards a computational(ist) neurobiology of language: Correlational, integrated, and explanatory neurolinguistics.

    PubMed

    Embick, David; Poeppel, David

    2015-05-01

    We outline what an integrated approach to language research that connects experimental, theoretical, and neurobiological domains of inquiry would look like, and ask to what extent unification is possible across domains. At the center of the program is the idea that computational/representational (CR) theories of language must be used to investigate its neurobiological (NB) foundations. We consider different ways in which CR and NB might be connected. These are (1) A Correlational way, in which NB computation is correlated with the CR theory; (2) An Integrated way, in which NB data provide crucial evidence for choosing among CR theories; and (3) an Explanatory way, in which properties of NB explain why a CR theory is the way it is. We examine various questions concerning the prospects for Explanatory connections in particular, including to what extent it makes sense to say that NB could be specialized for particular computations.

  4. Quantitative estimation of infarct size by simultaneous dual radionuclide single photon emission computed tomography: comparison with peak serum creatine kinase activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, K.; Sone, T.; Tsuboi, H.

    1991-05-01

    To test the hypothesis that simultaneous dual energy single photon emission computed tomography (SPECT) with technetium-99m (99mTc) pyrophosphate and thallium-201 (201TI) can provide an accurate estimate of the size of myocardial infarction and to assess the correlation between infarct size and peak serum creatine kinase activity, 165 patients with acute myocardial infarction underwent SPECT 3.2 +/- 1.3 (SD) days after the onset of acute myocardial infarction. In the present study, the difference in the intensity of 99mTc-pyrophosphate accumulation was assumed to be attributable to difference in the volume of infarcted myocardium, and the infarct volume was corrected by the ratiomore » of the myocardial activity to the osseous activity to quantify the intensity of 99mTc-pyrophosphate accumulation. The correlation of measured infarct volume with peak serum creatine kinase activity was significant (r = 0.60, p less than 0.01). There was also a significant linear correlation between the corrected infarct volume and peak serum creatine kinase activity (r = 0.71, p less than 0.01). Subgroup analysis showed a high correlation between corrected volume and peak creatine kinase activity in patients with anterior infarctions (r = 0.75, p less than 0.01) but a poor correlation in patients with inferior or posterior infarctions (r = 0.50, p less than 0.01). In both the early reperfusion and the no reperfusion groups, a good correlation was found between corrected infarct volume and peak serum creatine kinase activity (r = 0.76 and r = 0.76, respectively; p less than 0.01).« less

  5. Spine Trabecular Bone Score as an Indicator of Bone Microarchitecture at the Peripheral Skeleton in Kidney Transplant Recipients.

    PubMed

    Luckman, Matthew; Hans, Didier; Cortez, Natalia; Nishiyama, Kyle K; Agarawal, Sanchita; Zhang, Chengchen; Nikkel, Lucas; Iyer, Sapna; Fusaro, Maria; Guo, Edward X; McMahon, Donald J; Shane, Elizabeth; Nickolas, Thomas L

    2017-04-03

    Studies using high-resolution peripheral quantitative computed tomography showed progressive abnormalities in cortical and trabecular microarchitecture and biomechanical competence over the first year after kidney transplantation. However, high-resolution peripheral computed tomography is a research tool lacking wide availability. In contrast, the trabecular bone score is a novel and widely available tool that uses gray-scale variograms of the spine image from dual-energy x-ray absorptiometry to assess trabecular quality. There are no studies assessing whether trabecular bone score characterizes bone quality in kidney transplant recipients. Between 2009 and 2010, we conducted a study to assess changes in peripheral skeletal microarchitecture, measured by high-resolution peripheral computed tomography, during the first year after transplantation in 47 patients managed with early corticosteroid-withdrawal immunosuppression. All adult first-time transplant candidates were eligible. Patients underwent imaging with high-resolution peripheral computed tomography and dual-energy x-ray absorptiometry pretransplantation and 3, 6, and 12 months post-transplantation. We now test if, during the first year after transplantation, trabecular bone score assesses the evolution of bone microarchitecture and biomechanical competence as determined by high-resolution peripheral computed tomography. At baseline and follow-up, among the 72% and 78%, respectively, of patients having normal bone mineral density by dual-energy x-ray absorptiometry, 53% and 50%, respectively, were classified by trabecular bone score as having high fracture risk. At baseline, trabecular bone score correlated with spine, hip, and ultradistal radius bone mineral density by dual-energy x-ray absorptiometry and cortical area, density, thickness, and porosity; trabecular density, thickness, separation, and heterogeneity; and stiffness and failure load by high-resolution peripheral computed tomography. Longitudinally, each percentage increase in trabecular bone score was associated with increases in trabecular number (0.35%±1.4%); decreases in trabecular thickness (-0.45%±0.15%), separation (-0.40%±0.15%), and network heterogeneity (-0.48%±0.20%); and increases in failure load (0.22%±0.09%) by high-resolution peripheral computed tomography (all P <0.05). Trabecular bone score may be a useful method to assess and monitor bone quality and strength and classify fracture risk in kidney transplant recipients. Copyright © 2017 by the American Society of Nephrology.

  6. Spine Trabecular Bone Score as an Indicator of Bone Microarchitecture at the Peripheral Skeleton in Kidney Transplant Recipients

    PubMed Central

    Luckman, Matthew; Hans, Didier; Cortez, Natalia; Nishiyama, Kyle K.; Agarawal, Sanchita; Zhang, Chengchen; Nikkel, Lucas; Iyer, Sapna; Fusaro, Maria; Guo, Edward X.; McMahon, Donald J.; Shane, Elizabeth

    2017-01-01

    Background and objectives Studies using high-resolution peripheral quantitative computed tomography showed progressive abnormalities in cortical and trabecular microarchitecture and biomechanical competence over the first year after kidney transplantation. However, high-resolution peripheral computed tomography is a research tool lacking wide availability. In contrast, the trabecular bone score is a novel and widely available tool that uses gray-scale variograms of the spine image from dual-energy x-ray absorptiometry to assess trabecular quality. There are no studies assessing whether trabecular bone score characterizes bone quality in kidney transplant recipients. Design, settings, participants, & measurements Between 2009 and 2010, we conducted a study to assess changes in peripheral skeletal microarchitecture, measured by high-resolution peripheral computed tomography, during the first year after transplantation in 47 patients managed with early corticosteroid–withdrawal immunosuppression. All adult first-time transplant candidates were eligible. Patients underwent imaging with high-resolution peripheral computed tomography and dual-energy x-ray absorptiometry pretransplantation and 3, 6, and 12 months post-transplantation. We now test if, during the first year after transplantation, trabecular bone score assesses the evolution of bone microarchitecture and biomechanical competence as determined by high-resolution peripheral computed tomography. Results At baseline and follow-up, among the 72% and 78%, respectively, of patients having normal bone mineral density by dual-energy x-ray absorptiometry, 53% and 50%, respectively, were classified by trabecular bone score as having high fracture risk. At baseline, trabecular bone score correlated with spine, hip, and ultradistal radius bone mineral density by dual-energy x-ray absorptiometry and cortical area, density, thickness, and porosity; trabecular density, thickness, separation, and heterogeneity; and stiffness and failure load by high-resolution peripheral computed tomography. Longitudinally, each percentage increase in trabecular bone score was associated with increases in trabecular number (0.35%±1.4%); decreases in trabecular thickness (−0.45%±0.15%), separation (−0.40%±0.15%), and network heterogeneity (−0.48%±0.20%); and increases in failure load (0.22%±0.09%) by high-resolution peripheral computed tomography (all P<0.05). Conclusions Trabecular bone score may be a useful method to assess and monitor bone quality and strength and classify fracture risk in kidney transplant recipients. PMID:28348031

  7. Short-range density functional correlation within the restricted active space CI method

    NASA Astrophysics Data System (ADS)

    Casanova, David

    2018-03-01

    In the present work, I introduce a hybrid wave function-density functional theory electronic structure method based on the range separation of the electron-electron Coulomb operator in order to recover dynamic electron correlations missed in the restricted active space configuration interaction (RASCI) methodology. The working equations and the computational algorithm for the implementation of the new approach, i.e., RAS-srDFT, are presented, and the method is tested in the calculation of excitation energies of organic molecules. The good performance of the RASCI wave function in combination with different short-range exchange-correlation functionals in the computation of relative energies represents a quantitative improvement with respect to the RASCI results and paves the path for the development of RAS-srDFT as a promising scheme in the computation of the ground and excited states where nondynamic and dynamic electron correlations are important.

  8. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.

    PubMed

    Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu

    2017-02-26

    In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where we only perform difference for auto-correlation matrices and the cross-correlation matrices are kept completely. Then, the pair-matching of two parameters is achieved by extracting the diagonal elements of URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and also have little information loss. Simulation results show that, in LGA, compared to other methods, the proposed methods can achieve performance improvement in the white or colored noise conditions.

  9. Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle

    PubMed Central

    Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu

    2017-01-01

    In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where we only perform difference for auto-correlation matrices and the cross-correlation matrices are kept completely. Then, the pair-matching of two parameters is achieved by extracting the diagonal elements of URA. Thus, the proposed method can decrease the computational complexity, suppress the effect of additive noise and also have little information loss. Simulation results show that, in LGA, compared to other methods, the proposed methods can achieve performance improvement in the white or colored noise conditions. PMID:28245634

  10. Correlative factors for the location of tracheobronchial foreign bodies in infants and children.

    PubMed

    Xu, Ying; Feng, Rui-Ling; Jiang, Lan; Ren, Hong-Bo; Li, Qi

    2018-02-01

    This study aims to analyze factors related to the location of tracheobronchial foreign bodies in infants and children, and provide help in the assessment of the disease, surgical risk and prognosis. The clinical data of 1,060 pediatric patients with tracheobronchial foreign bodies diagnosed from January 2015 to December 2015 were retrospectively studied, the association of the location of the foreign bodies with age, gender, granulation formation, chest computed tomography and 3D reconstruction results, preoperative complications, operation time, and hospital stay was analyzed. The location of foreign bodies was not correlated with age, gender, operation time and length of hospital stay, but was correlated to granulation formation, chest computed tomography and 3D reconstruction results, and preoperative complications. The location of foreign bodies was correlated to granulation formation, the location of foreign bodies displayed by chest computed tomography, and preoperative complications.

  11. Empirical source strength correlations for rans-based acoustic analogy methods

    NASA Astrophysics Data System (ADS)

    Kube-McDowell, Matthew Tyndall

    JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.

  12. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    NASA Astrophysics Data System (ADS)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.

  13. Petascale Many Body Methods for Complex Correlated Systems

    NASA Astrophysics Data System (ADS)

    Pruschke, Thomas

    2012-02-01

    Correlated systems constitute an important class of materials in modern condensed matter physics. Correlation among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educate the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make then working tools for the growing community.

  14. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time

    PubMed Central

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-01-01

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90–94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7–5.6% per millisecond, with most satellites acquired successfully. PMID:29495301

  15. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time.

    PubMed

    Zhang, Yeqing; Wang, Meiling; Li, Yafeng

    2018-02-24

    For the objective of essentially decreasing computational complexity and time consumption of signal acquisition, this paper explores a resampling strategy and variable circular correlation time strategy specific to broadband multi-frequency GNSS receivers. In broadband GNSS receivers, the resampling strategy is established to work on conventional acquisition algorithms by resampling the main lobe of received broadband signals with a much lower frequency. Variable circular correlation time is designed to adapt to different signal strength conditions and thereby increase the operation flexibility of GNSS signal acquisition. The acquisition threshold is defined as the ratio of the highest and second highest correlation results in the search space of carrier frequency and code phase. Moreover, computational complexity of signal acquisition is formulated by amounts of multiplication and summation operations in the acquisition process. Comparative experiments and performance analysis are conducted on four sets of real GPS L2C signals with different sampling frequencies. The results indicate that the resampling strategy can effectively decrease computation and time cost by nearly 90-94% with just slight loss of acquisition sensitivity. With circular correlation time varying from 10 ms to 20 ms, the time cost of signal acquisition has increased by about 2.7-5.6% per millisecond, with most satellites acquired successfully.

  16. Correlated wave functions for three-particle systems with Coulomb interaction - The muonic helium atom

    NASA Technical Reports Server (NTRS)

    Huang, K.-N.

    1977-01-01

    A computational procedure for calculating correlated wave functions is proposed for three-particle systems interacting through Coulomb forces. Calculations are carried out for the muonic helium atom. Variational wave functions which explicitly contain interparticle coordinates are presented for the ground and excited states. General Hylleraas-type trial functions are used as the basis for the correlated wave functions. Excited-state energies of the muonic helium atom computed from 1- and 35-term wave functions are listed for four states.

  17. Combining multinuclear high-resolution solid-state MAS NMR and computational methods for resonance assignment of glutathione tripeptide.

    PubMed

    Sardo, Mariana; Siegel, Renée; Santos, Sérgio M; Rocha, João; Gomes, José R B; Mafra, Luis

    2012-06-28

    We present a complete set of experimental approaches for the NMR assignment of powdered tripeptide glutathione at natural isotopic abundance, based on J-coupling and dipolar NMR techniques combined with (1)H CRAMPS decoupling. To fully assign the spectra, two-dimensional (2D) high-resolution methods, such as (1)H-(13)C INEPT-HSQC/PRESTO heteronuclear correlations (HETCOR), (1)H-(1)H double-quantum (DQ), and (1)H-(14)N D-HMQC correlation experiments, have been used. To support the interpretation of the experimental data, periodic density functional theory calculations together with the GIPAW approach have been used to calculate the (1)H and (13)C chemical shifts. It is found that the shifts calculated with two popular plane wave codes (CASTEP and Quantum ESPRESSO) are in excellent agreement with the experimental results.

  18. A reduction in both visceral and subcutaneous fats contributes to increased adiponectin by lifestyle intervention in the Diabetes Prevention Program.

    PubMed

    Zhang, Chao; Luo, Hao; Gao, Feng; Zhang, Chun-Ting; Zhang, Ren

    2015-06-01

    Adiponectin, an insulin-sensitizing adipokine, confers protection against type 2 diabetes. Although adiponectin is secreted exclusively from fat, contributions of visceral adipose tissue (VAT) versus subcutaneous adipose tissue (SAT) to adiponectin levels have not been fully understood. We aimed to examine correlations between changes in VAT and SAT volumes and changes in adiponectin levels. Here, we have investigated the correlations between ΔVAT and ΔSAT with Δadiponectin in participants of the Diabetes Prevention Program, a clinical trial investigating the effects of lifestyle changes and metformin versus placebo on the rate of developing type 2 diabetes. Data on VAT and SAT volumes, measured by computed tomography, and on adiponectin levels at both baseline and 1-year follow-up were available in 321 men and 626 women. In men, Δadiponectin was highly significantly correlated with both ΔSAT (r s  = -0.329) and ΔVAT (r s  = -0.413). Likewise, in women, Δadiponectin was correlated with both ΔSAT (r s  = -0.294) and ΔVAT (r s  = -0.348). In the lifestyle arm, Δadiponectin remained highly significantly correlated with ΔSAT and ΔVAT in men (r s  = -0.399 and r s  = -0.460, respectively), and in women (r s  = -0.372 and r s  = -0.396, respectively), with P < 0.001 for all above correlations. We conclude that for both men and women, adiponectin changes are highly significantly correlated with changes in both SAT and VAT and that exercise- and weight-loss-induced reduction in both SAT and VAT contributes to the increased adiponectin.

  19. Quantum Monte Carlo Endstation for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubos Mitas

    2011-01-26

    NCSU research group has been focused on accomplising the key goals of this initiative: establishing new generation of quantum Monte Carlo (QMC) computational tools as a part of Endstation petaflop initiative for use at the DOE ORNL computational facilities and for use by computational electronic structure community at large; carrying out high accuracy quantum Monte Carlo demonstration projects in application of these tools to the forefront electronic structure problems in molecular and solid systems; expanding the impact of QMC methods and approaches; explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed quantum Monte Carlomore » code (QWalk, www.qwalk.org) which was significantly expanded and optimized using funds from this support and at present became an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the period of the last decade. The code was optimized and tested extensively on a number of parallel architectures including petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculations of pfaffians and introduction of backflow coordinates together with overall organization of the code and random walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing and verified efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high quality object-oriented C++ and includes also interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported PI for one month during summers, a full-time postdoc and partially three graduate students over the period of the grant duration, it has resulted in 13 published papers, 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported byt this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches, variational, diffusion and reptation methods, large-scale optimization methods for wavefunctions and enables to calculate energy differences such as cohesion, electronic gaps, but also densities and other properties, using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force biased and correlated sampling Monte Carlo), are robustly parallelized and enable to run on tens of thousands cores very efficiently. Our demonstration applications were focused on the challenging research problems in several fields of materials science such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.« less

  20. Absolute Helmholtz free energy of highly anharmonic crystals: theory vs Monte Carlo.

    PubMed

    Yakub, Lydia; Yakub, Eugene

    2012-04-14

    We discuss the problem of the quantitative theoretical prediction of the absolute free energy for classical highly anharmonic solids. Helmholtz free energy of the Lennard-Jones (LJ) crystal is calculated accurately while accounting for both the anharmonicity of atomic vibrations and the pair and triple correlations in displacements of the atoms from their lattice sites. The comparison with most precise computer simulation data on sublimation and melting lines revealed that theoretical predictions are in excellent agreement with Monte Carlo simulation data in the whole range of temperatures and densities studied.

Top