Science.gov

Sample records for computing non-parametric function

  1. Parametric and Non-parametric methods for the periodogram analysis: Interrelations and properties of the test functions

    NASA Astrophysics Data System (ADS)

    Andronov, I. L.; Chinarova, L. L.

    Numerical comparison of the methods for periodogram analysis is carried out for the parametric modifications of the Fourier transform by Deeming T.J. (1975, Ap. Space Sci., 36, 137), Lomb N.R. (1976, Ap. Space Sci., 39, 447) and Andronov I.L. (1994, Odessa Astron. Publ., 7, 49); parametric modifications based on spline approximations of different order n and defect k by Jurkevich I. (1971, Ap. Space Sci., 13, 154; n = 0, k = 1), Marraco H.G., Muzzio J.C. (1980, P.A.S.P., 92, 700; n = 1, k = 2) and Andronov I.L. (1987, Contrib. Astron. Inst. Czechoslovak, 20, 161; n = 3, k = 1); and non-parametric modifications by Lafler J. and Kinman T.D. (1965, Ap. J. Suppl., 11, 216), Burke E.W., Rolland W.W. and Boy W.R. (1970, J.R.A.S. Canada, 64, 353), Deeming T.J. (1970, M.N.R.A.S., 147, 365), Renson P. (1978, As. Ap., 63, 125) and Dworetsky M.M. (1983, M.N.R.A.S., 203, 917). For some numerical models the values of the mean, variance, asymmetry and excess of the test functions are determined, and the correlations between them are discussed. Analytic estimates are derived for the mathematical expectation of the test function for the different methods, for the dispersion of the test function of Lafler and Kinman (1965), and for the parametric test functions. The statistical distribution of the test functions computed for fixed data and various frequencies is significantly different from that computed for various data realizations. The histogram for the non-parametric test functions is nearly symmetric for normally distributed uncorrelated data and is characterized by a distinctly negative asymmetry for noisy data with periodic components. The non-parametric test functions may be subdivided into two groups - one similar to that of Lafler and Kinman (1965) and one similar to that of Deeming (1970). The correlation coefficients for the test functions within each group are close to unity for large numbers of data points. Conditions for a significant influence of the phase differences between the data on the test functions are

  2. Non-parametric frequency response function tissue modeling in bipolar electrosurgery.

    PubMed

    Barbé, Kurt; Ford, Carolyn; Bonn, Kenlyn; Gilbert, James

    2015-01-01

    High-frequency radio energy is applied to tissue therapeutically in a number of different medical applications. The ability to model the effects of RF energy on the collagen, elastin, and liquid content of the target tissue would allow for the refinement of the control of the energy in order to improve outcomes and reduce negative side-effects. In this paper, we study the time-varying impedance spectra of the circuit. It is expected that the collagen/elastin ratio does not change over time, such that the time-varying impedance is a function of the liquid content. We apply a non-parametric model in which we characterize the measured impedance spectra by their frequency response function. The measurements indicate that the impedance as a function of time exhibits a polynomial shift, which we characterize by a polynomial regression. Finally, we quantify the uncertainty to obtain prediction intervals for the estimated polynomial describing the time variation of the impedance spectra. PMID:26737664

  3. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    PubMed

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating the weight matrices via multinomial sampling. PMID:26125965
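
    As a minimal sketch of the multinomial-weighting idea (written here in Python/NumPy rather than the paper's R; all names are illustrative), bootstrap replications of Pearson's correlation reduce to a few matrix products on weighted sample moments:

      import numpy as np

      def bootstrap_pearson(x, y, n_boot=10000, seed=0):
          # Each row of W is one multinomial draw divided by n, i.e. the
          # resampling proportions of every observation in one replication.
          rng = np.random.default_rng(seed)
          n = len(x)
          W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
          # Weighted sample moments for all replications at once.
          mx, my = W @ x, W @ y
          mxy, mxx, myy = W @ (x * y), W @ (x * x), W @ (y * y)
          return (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

      rng = np.random.default_rng(1)
      x = rng.normal(size=30)
      y = x + rng.normal(size=30)
      print(np.percentile(bootstrap_pearson(x, y), [2.5, 97.5]))

    Evaluating the statistic through moments of the weighted data, rather than physically resampling, is what turns the entire vector of replications into one matrix multiplication.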

  4. Scaling of preferential flow in biopores by parametric or non-parametric transfer functions

    NASA Astrophysics Data System (ADS)

    Zehe, E.; Hartmann, N.; Klaus, J.; Palm, J.; Schroeder, B.

    2009-04-01

    finally assign the measured hydraulic capacities to these pores. By combining this population of macropores with observed data on soil hydraulic properties we obtain a virtual reality. Flow and transport are simulated for different rainfall forcings, comparing two models, Hydrus 3D and Catflow. The simulated cumulative travel depth distributions for different forcings will be linked to the cumulative depth distribution of connected flow paths. The latter describes the fraction of connected paths (where flow resistance is always below a selected threshold) that link the surface to a certain critical depth. Systematic variation of the average number of macropores and of their depth distributions will show whether a clear link between the simulated travel depth distributions and the depth distribution of connected paths can be identified. The third essential step is to derive a non-parametric transfer function that predicts travel depth distributions of tracers and, in the long term, pesticides, based on easy-to-assess subsurface characteristics (mainly the density and depth distribution of worm burrows, and soil matrix properties), initial conditions and rainfall forcing. Such a transfer function is independent of scale, as long as we stay in the same ensemble, i.e. the worm population and soil properties stay the same. Shipitalo, M.J. and Butt, K.R. (1999): Occupancy and geometrical properties of Lumbricus terrestris L. burrows affecting infiltration. Pedobiologia 43: 782-794. Zehe, E. and Fluehler, H. (2001b): Slope scale distribution of flow patterns in soil profiles. J. Hydrol. 247: 116-132.

  5. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.

    PubMed

    Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego

    2015-10-01

    Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can
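
    As a rough, rate-based stand-in for the two-part architecture described above (the paper uses a spiking Liquid State Machine with a feed-forward decoder; this sketch substitutes an echo-state reservoir and a ridge readout, with all parameters arbitrary):

      import numpy as np

      rng = np.random.default_rng(0)
      T, N = 400, 100                            # time points, reservoir units
      u = (rng.random(T) < 0.05).astype(float)   # toy stimulus train

      W_in = rng.normal(0, 1, N)
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

      X = np.zeros((T, N))                       # reservoir state trajectory
      for t in range(1, T):
          X[t] = np.tanh(W @ X[t - 1] + W_in * u[t])

      # Toy "BOLD" target: the stimulus convolved with an arbitrary kernel.
      k = np.arange(30) / 6.0
      bold = np.convolve(u, k**2 * np.exp(-k))[:T] + 0.01 * rng.normal(size=T)

      # Linear (ridge) readout trained on the data, in place of the paper's
      # feed-forward network.
      w_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N), X.T @ bold)
      print("readout MSE:", np.mean((X @ w_out - bold) ** 2))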

  6. Super-resolution non-parametric deconvolution in modelling the radial response function of a parallel plate ionization chamber.

    PubMed

    Kulmala, A; Tenhunen, M

    2012-11-01

    The signal of a dosimetric detector generally depends on the shape and size of the sensitive volume of the detector. In order to optimize the performance of the detector and the reliability of the output signal, the effect of the detector size should be corrected for or, at least, taken into account. The response of the detector can be modelled using the convolution theorem, which connects the system input (actual dose), output (measured result) and the effect of the detector (response function) by a linear convolution operator. We have developed a super-resolution, non-parametric deconvolution method for determining the radial response function of a cylindrically symmetric ionization chamber. We have demonstrated that the presented deconvolution method determines the radial response of the Roos parallel plate ionization chamber to better than 0.5 mm agreement with the physical dimensions of the chamber. In addition, the performance of the method was confirmed by the excellent agreement between the output factors of the stereotactic conical collimators (4-20 mm diameter) measured by the Roos chamber, whose sensitive volume is larger than the measured field, and by the reference detector (diode). The presented deconvolution method has potential for providing reference data for more accurate physical models of the ionization chamber, as well as for improving and enhancing the performance of detectors in specific dosimetric problems.
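
    The underlying model is a linear convolution, measured = dose ⊗ response; as a generic illustration of inverting it (not the authors' super-resolution scheme), a Tikhonov-damped division in Fourier space recovers one factor when the other is known:

      import numpy as np

      n = 256
      x = np.linspace(-10, 10, n)
      dose = (np.abs(x) < 2).astype(float)              # idealized dose profile
      resp = np.exp(-x**2 / 2); resp /= resp.sum()      # detector response
      H = np.fft.fft(np.fft.ifftshift(resp))            # centred kernel
      measured = np.real(np.fft.ifft(np.fft.fft(dose) * H))
      measured += 1e-3 * np.random.default_rng(0).normal(size=n)

      eps = 1e-3                                        # damping (arbitrary)
      recovered = np.real(np.fft.ifft(np.fft.fft(measured) *
                                      np.conj(H) / (np.abs(H)**2 + eps)))
      print(float(np.abs(recovered - dose).max()))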

  7. Density estimation with non-parametric methods

    NASA Astrophysics Data System (ADS)

    Fadda, D.; Slezak, E.; Bijaoui, A.

    1998-01-01

    One key issue in several astrophysical problems is the evaluation of the probability density function underlying an observational discrete data set. We here review two non-parametric density estimators which recently appeared in the astrophysical literature, namely the adaptive kernel density estimator and the Maximum Penalized Likelihood technique, and describe another method based on the wavelet transform. The efficiency of these estimators is tested by using extensive numerical simulations in the one-dimensional case. The results are in good agreement with theoretical functions and the three methods appear to yield consistent estimates. However, the Maximum Penalized Likelihood suffers from a lack of resolution and high computational cost due to its dependency on a minimization algorithm. The small differences between kernel and wavelet estimates are mainly explained by the ability of the wavelet method to take into account local gaps in the data distribution. This new approach is very promising, since smaller structures superimposed onto a larger one are detected only by this technique, especially when small samples are investigated. Thus, wavelet solutions appear to be better suited for subclustering studies. Nevertheless, kernel estimates seem more robust and are reliable solutions although some small-scale details can be missed. In order to check these estimators with respect to previous studies, two galaxy redshift samples, related to the galaxy cluster A3526 and to the Corona Borealis region, have been analyzed. In both these cases claims for bimodality are confirmed at a high confidence level. The complete version of this paper with the whole set of figures can be accessed from the electronic version of the A&A Suppl. Ser. managed by Editions de Physique as well as from the SISSA database (astro-ph/9704096).
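
    For the simplest of the estimators compared above, a fixed-bandwidth Gaussian kernel estimate of a one-dimensional density takes a few lines (a sketch using SciPy; the adaptive-kernel and wavelet variants are not shown):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      # Bimodal sample, loosely mimicking a two-component redshift sample.
      data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 100)])

      kde = gaussian_kde(data)          # bandwidth from Scott's rule by default
      grid = np.linspace(-5, 5, 200)
      density = kde(grid)               # estimated probability density on grid
      print(grid[np.argmax(density)])   # location of the dominant mode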

  8. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  9. Identification of physiological systems: a robust method for non-parametric impulse response estimation.

    PubMed

    Westwick, D T; Kearney, R E

    1997-03-01

    The identification of non-parametric impulse response functions (IRFs) from noisy finite-length data records is analysed using the techniques of matrix perturbation theory. Based on these findings, a method for IRF estimation is developed that is more robust than existing techniques, particularly when the input is non-white. Furthermore, methods are developed for computing confidence bounds on the resulting IRF estimates. Monte Carlo simulations are used to assess the capabilities of this new method and to demonstrate its superiority over classical techniques. An application to the identification of dynamic ankle stiffness in humans is presented. PMID:9136198
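
    The classical estimator that such methods improve upon fits the IRF by linear least squares on a lagged design matrix; a minimal sketch (white input for simplicity; the robustness modifications and confidence bounds are not shown):

      import numpy as np

      rng = np.random.default_rng(0)
      T, M = 2000, 32                        # record length, IRF memory length
      u = rng.normal(size=T)                 # input signal
      h_true = np.exp(-np.arange(M) / 5.0)   # toy impulse response
      y = np.convolve(u, h_true)[:T] + 0.1 * rng.normal(size=T)

      # Columns of U are progressively delayed copies of the input.
      U = np.column_stack([np.concatenate([np.zeros(k), u[:T - k]])
                           for k in range(M)])
      h_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
      print(np.max(np.abs(h_hat - h_true)))  # estimation error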

  10. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609

  11. Lottery spending: a non-parametric analysis.

    PubMed

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699

  12. Non-parametric transformation for data correlation and integration: From theory to practice

    SciTech Connect

    Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon

    1997-08-01

    The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, as traditional multiple regression does. The transformations are very general, computationally efficient and can easily handle mixed data types, for example continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation is illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data; (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data; (3) it can be easily applied to cases where several types of secondary or soft data are involved; and (4) it significantly reduces variance function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.

  13. Subspace-based non-parametric approach for hyperspectral anomaly detection in complex scenarios

    NASA Astrophysics Data System (ADS)

    Matteoli, Stefania; Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2014-10-01

    Recent studies on global anomaly detection (AD) in hyperspectral images have focused on non-parametric approaches that seem particularly suitable for detecting anomalies in complex backgrounds without the need to assume any specific model for the background distribution. Among these, AD algorithms based on the kernel density estimator (KDE) benefit from the flexibility provided by KDE, which attempts to estimate the background probability density function (PDF) regardless of its specific form. The high computational burden associated with KDE requires that KDE-based AD algorithms be preceded by a suitable dimensionality reduction (DR) procedure aimed at identifying the subspace where most of the useful signal lies. In most cases, this may lead to a degradation of the detection performance due to the leakage of some anomalous target components into the subspace orthogonal to the one identified by the DR procedure. This work presents a novel subspace-based AD strategy that combines the use of KDE with a simple parametric detector performed on the orthogonal complement of the signal subspace, in order to benefit from the non-parametric nature of KDE and, at the same time, avoid the performance loss that may occur due to the DR procedure. Experimental results indicate that the proposed AD strategy is promising and deserves further investigation.
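
    A conceptual sketch of such a two-part detector (PCA signal subspace, KDE score inside it, a simple energy test on the orthogonal complement; the subspace dimension, score combination and thresholds are placeholders, not the authors' algorithm):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 30))        # toy background: pixels x bands
      mu = X.mean(axis=0)
      Xc = X - mu
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      P = Vt[:3]                             # retained signal subspace basis
      kde = gaussian_kde(P @ Xc.T)           # non-parametric part (KDE)

      def anomaly_score(pixel):
          z = pixel - mu
          kde_part = -np.log(kde(P @ z) + 1e-300)    # low density -> high score
          resid = z - P.T @ (P @ z)                  # orthogonal complement
          return float(kde_part) + np.sum(resid**2)  # + parametric energy part

      print(anomaly_score(X[0]), anomaly_score(X[0] + 5.0))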

  14. 'nparACT' package for R: A free software tool for the non-parametric analysis of actigraphy data.

    PubMed

    Blume, Christine; Santhi, Nayantara; Schabus, Manuel

    2016-01-01

    For many studies, participants' sleep-wake patterns are monitored and recorded prior to, during and following an experimental or clinical intervention using actigraphy, i.e. the recording of data generated by movements. Often, these data are merely inspected visually, without computation of descriptive parameters, in part due to the lack of user-friendly software. To address this deficit, we developed a package for R (R Core Team [6]) that allows computing several non-parametric measures from actigraphy data. Specifically, it computes the interdaily stability (IS), intradaily variability (IV) and relative amplitude (RA) of activity and gives the start times and average activity values of M10 (i.e. the ten hours with maximal activity) and L5 (i.e. the five hours with least activity). Two functions compute these 'classical' parameters and handle either single or multiple files. Two other functions additionally allow computing an L-value (i.e. the least activity value) for a user-defined time span, termed the 'Lflex' value. A plotting option is included in all functions. The package can be downloaded from the Comprehensive R Archive Network (CRAN).
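
    Using the commonly cited definitions of these measures (hourly means for IS/IV, the average 24-h profile for M10/L5; the package's exact conventions may differ), a sketch in Python reads:

      import numpy as np

      def is_iv_ra(activity, samples_per_hour):
          x = np.asarray(activity, float)
          hours = len(x) // samples_per_hour
          hourly = x[:hours * samples_per_hour].reshape(hours, -1).mean(axis=1)
          # Average 24-h profile across whole days.
          profile = hourly[:(hours // 24) * 24].reshape(-1, 24).mean(axis=0)
          IS = profile.var() / hourly.var()                  # interdaily stability
          IV = np.mean(np.diff(hourly) ** 2) / hourly.var()  # intradaily variability
          srt = np.sort(profile)
          M10, L5 = srt[-10:].mean(), srt[:5].mean()
          RA = (M10 - L5) / (M10 + L5)                       # relative amplitude
          return IS, IV, RA

      rng = np.random.default_rng(0)
      t = np.arange(7 * 24 * 60)                             # one week, 1-min epochs
      act = np.clip(np.sin(2 * np.pi * t / 1440) + 0.3 * rng.normal(size=t.size), 0, None)
      print(is_iv_ra(act, 60))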

  15. Non-parametric partitioning of SAR images

    NASA Astrophysics Data System (ADS)

    Delyon, G.; Galland, F.; Réfrégier, Ph.

    2006-09-01

    We describe and analyse a generalization of a parametric segmentation technique, adapted to Gamma-distributed SAR images, to a simple non-parametric noise model. The partition is obtained by minimizing the stochastic complexity of a version of the SAR image quantized on Q levels, and leads to a criterion with no parameters to be tuned by the user. We analyse the reliability of the proposed approach on synthetic images. The quality of the obtained partition is studied for different possible strategies; in particular, we discuss the reliability of the proposed optimization procedure. Finally, we study in detail the performance of the proposed approach in comparison with the statistical parametric technique adapted to Gamma noise. These studies are carried out by analysing the number of misclassified pixels, the standard Hausdorff distance and the number of estimated regions.

  16. Non-parametric estimation of morphological lopsidedness

    NASA Astrophysics Data System (ADS)

    Giese, Nadine; van der Hulst, Thijs; Serra, Paolo; Oosterloo, Tom

    2016-09-01

    Asymmetries in the neutral hydrogen gas distribution and kinematics of galaxies are thought to be indicators of both gas accretion and gas removal processes. These are of fundamental importance for galaxy formation and evolution. Upcoming large blind H I surveys will provide tens of thousands of galaxies for a study of these asymmetries in a proper statistical way. Due to the large number of expected sources and the limited resolution of the majority of objects, detailed modelling is not feasible for most detections. We need fast, automatic and sensitive methods to classify these objects in an objective way. Existing non-parametric methods suffer from effects such as the dependence on signal-to-noise ratio, resolution and inclination. Here we show how to correctly take these effects into account and show ways to estimate the precision of the methods. We use existing and modelled data to give an outlook on the performance expected for galaxies observed in the various sky surveys planned for, e.g., WSRT/APERTIF and ASKAP.

  17. Combining parametric, semi-parametric, and non-parametric survival models with stacked survival models.

    PubMed

    Wey, Andrew; Connett, John; Rudser, Kyle

    2015-07-01

    For estimating conditional survival functions, non-parametric estimators can be preferred to parametric and semi-parametric estimators due to relaxed assumptions that enable robust estimation. Yet, even when misspecified, parametric and semi-parametric estimators can possess better operating characteristics in small sample sizes due to smaller variance than non-parametric estimators. Fundamentally, this is a bias-variance trade-off situation in that the sample size is not large enough to take advantage of the low bias of non-parametric estimation. Stacked survival models estimate an optimally weighted combination of models that can span parametric, semi-parametric, and non-parametric models by minimizing prediction error. An extensive simulation study demonstrates that stacked survival models consistently perform well across a wide range of scenarios by adaptively balancing the strengths and weaknesses of individual candidate survival models. In addition, stacked survival models perform as well as or better than the model selected through cross-validation. Finally, stacked survival models are applied to a well-known German breast cancer study.

  18. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians. PMID:19041946

  1. Non-parametric extraction of implied asset price distributions

    NASA Astrophysics Data System (ADS)

    Healy, Jerome V.; Dixon, Maurice; Read, Brian J.; Cai, Fang Fang

    2007-08-01

    We present a fully non-parametric method for extracting risk neutral densities (RNDs) from observed option prices. The aim is to obtain a continuous, smooth, monotonic, and convex pricing function that is twice differentiable. Thus, irregularities such as negative probabilities that afflict many existing RND estimation techniques are reduced. Our method employs neural networks to obtain a smoothed pricing function, and a central finite difference approximation to the second derivative to extract the required gradients. This novel technique was successfully applied to a large set of FTSE 100 daily European exercise (ESX) put options data and as an Ansatz to the corresponding set of American exercise (SEI) put options. The results of paired t-tests showed significant differences between RNDs extracted from ESX and SEI option data, reflecting the distorting impact of early exercise possibility for the latter. In particular, the results for skewness and kurtosis suggested different shapes for the RNDs implied by the two types of put options. However, both ESX and SEI data gave an unbiased estimate of the realised FTSE 100 closing prices on the options’ expiration date. We confirmed that estimates of volatility from the RNDs of both types of option were biased estimates of the realised volatility at expiration, but less so than the LIFFE tabulated at-the-money implied volatility.
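
    The extraction step rests on the Breeden-Litzenberger relation q(K) = e^(rT) ∂²C/∂K²; as a worked illustration, with an analytic Black-Scholes call standing in for the network-smoothed pricing function:

      import numpy as np
      from scipy.stats import norm

      S0, r, T, sigma = 100.0, 0.02, 0.5, 0.25     # toy market parameters

      def call(K):                                 # stand-in pricing function
          d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

      K = np.linspace(60, 160, 401)
      h = K[1] - K[0]
      # Central finite difference for the second strike derivative.
      rnd = np.exp(r * T) * (call(K + h) - 2 * call(K) + call(K - h)) / h**2
      print(np.trapz(rnd, K))                      # should be close to 1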

  2. DOPING: A New Non-Parametric Deprojection Scheme

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Dalia; Ferrarese, Laura

    2007-12-01

    We present a new non-parametric deprojection algorithm, DOPING (Deprojection of Observed Photometry using an INverse Gambit), which is designed to extract the three-dimensional luminosity density distribution ρ from the observed surface brightness profile of an astrophysical system, such as a galaxy or a galaxy cluster, in a generalised geometry, while taking into account changes in the intrinsic shape of the system. The observable is the 2-D surface brightness distribution of the system. While the deprojection schemes presented hitherto have always worked within the limits of an assumed intrinsic geometry, in DOPING, geometry and inclination can be provided as inputs. The ρ that is most likely to project to the observed brightness data is sought; the maximisation of the likelihood is performed with the Metropolis algorithm. Until the likelihood is maximised, ρ is tweaked in shape and amplitude, while maintaining monotonicity and positivity, but otherwise the luminosity distribution is allowed to be completely free-form. Tests and applications of the algorithm are discussed.

  3. Probabilistic streamflow forecasting for hydroelectricity production: A comparison of two non-parametric system identification algorithms

    NASA Astrophysics Data System (ADS)

    Pande, Saket; Sharma, Ashish

    2014-05-01

    This study is motivated by the need to robustly specify, identify and forecast runoff generation processes for hydroelectricity production. At a minimum, this requires identifying the significant predictors of runoff generation and the influence of each such predictor on the runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) based on the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other algorithm is based on a frequentist approach that uses the bounds-on-probability-of-error concept of Pande (2005), assesses all possible predictor subsets as it goes, and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A. and Mehrotra, R. (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT, USA, 148 pp.

  4. Non-parametric combination and related permutation tests for neuroimaging.

    PubMed

    Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E

    2016-04-01

    In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing the flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, as well as non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm with large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter are assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. PMID:26848101
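
    A toy sketch of NPC with Tippett's (min-p) combining function, using synchronized sign-flips for two partial tests on the same subjects (a one-sample setting with made-up data, far simpler than the full neuroimaging pipeline):

      import numpy as np

      rng = np.random.default_rng(0)
      n, B = 40, 2000
      y1 = rng.normal(0.4, 1, n)                  # partial test 1: true effect
      y2 = rng.normal(0.0, 1, n)                  # partial test 2: no effect

      # Row 0 is the identity; the SAME sign-flips are applied to both
      # partial tests, preserving their dependence (synchronization).
      signs = np.vstack([np.ones(n)] + [rng.choice([-1.0, 1.0], n) for _ in range(B)])
      t1, t2 = signs @ y1 / n, signs @ y2 / n
      # Permutation p-value of every permutation's statistic.
      p1 = np.mean(t1[None, :] >= t1[:, None], axis=1)
      p2 = np.mean(t2[None, :] >= t2[:, None], axis=1)
      tippett = np.minimum(p1, p2)                # combined statistic (min-p)
      print(np.mean(tippett <= tippett[0]))       # joint NPC p-value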

  5. Bayesian Semi- and Non-parametric Models for Longitudinal Data with Multiple Membership Effects in R

    PubMed Central

    Savitsky, Terrance D.; Paddock, Susan M.

    2014-01-01

    We introduce growcurves for R that performs analysis of repeated measures multiple membership (MM) data. This data structure arises in studies under which an intervention is delivered to each subject through the subject's participation in a set of multiple elements that characterize the intervention. In our motivating study design under which subjects receive a group cognitive behavioral therapy (CBT) treatment, an element is a group CBT session and each subject attends multiple sessions that, together, comprise the treatment. The sets of elements, or group CBT sessions, attended by subjects will partly overlap with some of those from other subjects to induce a dependence in their responses. The growcurves package offers two alternative sets of hierarchical models: 1. Separate terms are specified for multivariate subject and MM element random effects, where the subject effects are modeled under a Dirichlet process prior to produce a semi-parametric construction; 2. A single term is employed to model joint subject-by-MM effects. A fully non-parametric dependent Dirichlet process formulation allows exploration of differences in subject responses across different MM elements. This model allows for borrowing information among subjects who express similar longitudinal trajectories for flexible estimation. growcurves deploys “estimation” functions to perform posterior sampling under a suite of prior options. An accompanying set of “plot” functions allow the user to readily extract by-subject growth curves. The design approach intends to anticipate inferential goals with tools that fully extract information from repeated measures data. Computational efficiency is achieved by performing the sampling for estimation functions using compiled C++. PMID:25400517

  6. Testing for predator dependence in predator-prey dynamics: a non-parametric approach.

    PubMed

    Jost, C; Ellner, S P

    2000-08-22

    The functional response is a key element in all predator-prey interactions. Although functional responses are traditionally modelled as being a function of prey density only, evidence is accumulating that predator density also has an important effect. However, much of the evidence comes from artificial experimental arenas under conditions not necessarily representative of the natural system, and neglecting the temporal dynamics of the organism (in particular the effects of prey depletion on the estimated functional response). Here we present a method that removes these limitations by reconstructing the functional response non-parametrically from predator-prey time-series data. This method is applied to data on a protozoan predator-prey interaction, and we obtain significant evidence of predator dependence in the functional response. A crucial element in this analysis is to include time-lags in the prey and predator reproduction rates, and we show that these delays improve the fit of the model significantly. Finally, we compare the non-parametrically reconstructed functional response to parametric forms, and suggest that a modified version of the Hassell-Varley predator interference model provides a simple and flexible function for theoretical investigation and applied modelling. PMID:11467423

  7. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    PubMed Central

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration

  8. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  9. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.

  10. Robust non-parametric one-sample tests for the analysis of recurrent events.

    PubMed

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.

  11. A non-parametric segmentation methodology for oral videocapillaroscopic images.

    PubMed

    Bellavia, Fabio; Cacioppo, Antonino; Lupaşcu, Carmen Alina; Messina, Pietro; Scardina, Giuseppe; Tegolo, Domenico; Valenti, Cesare

    2014-05-01

    We aim to describe a new non-parametric methodology to support the clinician during the diagnostic process of oral videocapillaroscopy to evaluate peripheral microcirculation. Our methodology, mainly based on wavelet analysis and mathematical morphology to preprocess the images, segments them by minimizing the within-class luminosity variance of both capillaries and background. Experiments were carried out on a set of real microphotographs to validate this approach versus handmade segmentations provided by physicians. By using a leave-one-patient-out approach, we pointed out that our methodology is robust, according to precision-recall criteria (average precision and recall are equal to 0.924 and 0.923, respectively) and it acts as a physician in terms of the Jaccard index (mean and standard deviation equal to 0.858 and 0.064, respectively). PMID:24657094
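
    Minimizing the within-class luminosity variance over a threshold is the criterion of Otsu's classical method; a minimal greyscale version (the wavelet/morphology preprocessing and the validation above are not reproduced):

      import numpy as np

      def otsu_threshold(img, levels=256):
          hist, edges = np.histogram(img, bins=levels)
          p = hist / hist.sum()
          centers = (edges[:-1] + edges[1:]) / 2
          best_t, best_var = 1, np.inf
          for t in range(1, levels):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              m0 = (p[:t] * centers[:t]).sum() / w0
              m1 = (p[t:] * centers[t:]).sum() / w1
              v0 = (p[:t] * (centers[:t] - m0) ** 2).sum() / w0
              v1 = (p[t:] * (centers[t:] - m1) ** 2).sum() / w1
              if w0 * v0 + w1 * v1 < best_var:    # within-class variance
                  best_var, best_t = w0 * v0 + w1 * v1, t
          return edges[best_t]

      rng = np.random.default_rng(0)
      img = np.concatenate([rng.normal(60, 10, 5000),    # dark background
                            rng.normal(160, 20, 3000)])  # bright capillaries
      print(otsu_threshold(img))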

  12. A Bayesian non-parametric Potts model with application to pre-surgical FMRI data.

    PubMed

    Johnson, Timothy D; Liu, Zhuqing; Bartsch, Andreas J; Nichols, Thomas E

    2013-08-01

    The Potts model has enjoyed much success as a prior model for image segmentation. Given the individual classes in the model, the data are typically modeled as Gaussian random variates or as random variates from some other parametric distribution. In this article, we present a non-parametric Potts model and apply it to a functional magnetic resonance imaging study for the pre-surgical assessment of peritumoral brain activation. In our model, we assume that the Z-score image from a patient can be segmented into activated, deactivated, and null classes, or states. Conditional on the class, or state, the Z-scores are assumed to come from some generic distribution which we model non-parametrically using a mixture of Dirichlet process priors within the Bayesian framework. The posterior distribution of the model parameters is estimated with a Markov chain Monte Carlo algorithm, and Bayesian decision theory is used to make the final classifications. Our Potts prior model includes two parameters, the standard spatial regularization parameter and a parameter that can be interpreted as the a priori probability that each voxel belongs to the null, or background state, conditional on the lack of spatial regularization. We assume that both of these parameters are unknown, and jointly estimate them along with other model parameters. We show through simulation studies that our model performs on par, in terms of posterior expected loss, with parametric Potts models when the parametric model is correctly specified and outperforms parametric models when the parametric model is misspecified. PMID:22627277

  13. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
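
    A simplified residual-bootstrap prediction interval around a kernel-smoothing regression conveys the idea (the paper's exact bootstrap scheme and its asymptotic guarantees are not reproduced; bandwidth and counts are arbitrary):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 10, 200))
      y = np.sin(x) + 0.3 * rng.normal(size=x.size)

      def smooth(x_tr, y_tr, x_ev, bw=0.5):        # Nadaraya-Watson smoother
          w = np.exp(-0.5 * ((x_ev[:, None] - x_tr[None, :]) / bw) ** 2)
          return (w @ y_tr) / w.sum(axis=1)

      fit = smooth(x, y, x)
      resid = y - fit
      x0 = np.array([5.0])
      # Refit on residual-resampled responses, then add a noise draw.
      preds = [smooth(x, fit + rng.choice(resid, resid.size), x0)[0]
               + rng.choice(resid) for _ in range(500)]
      lo, hi = np.percentile(preds, [2.5, 97.5])
      print(lo, hi)   # an observation outside [lo, hi] is flagged anomalous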

  14. Non-parametric star formation histories for four dwarf spheroidal galaxies of the Local Group

    NASA Astrophysics Data System (ADS)

    Hernandez, X.; Gilmore, Gerard; Valls-Gabaud, David

    2000-10-01

    We use recent Hubble Space Telescope colour-magnitude diagrams of the resolved stellar populations of a sample of local dSph galaxies (Carina, Leo I, Leo II and Ursa Minor) to infer the star formation histories of these systems, SFR(t). Applying a new variational calculus maximum likelihood method, which includes a full Bayesian analysis and allows a non-parametric estimate of the function one is solving for, we infer the star formation histories of the systems studied. This method has the advantage of yielding an objective answer, as one need not assume a priori the form of the function one is trying to recover. The results are checked independently using Saha's W statistic. The total luminosities of the systems are used to normalize the results into physical units and derive SN type II rates. We derive the luminosity-weighted mean star formation history of this sample of galaxies.

  15. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 exp(-τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving-window filters.
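
    The basic least-squares variant is a one-line regression of ln(V) on air mass; a worked sketch (the non-parametric variants and the moving-window smoothing of the ln(V0) series are not shown):

      import numpy as np

      rng = np.random.default_rng(0)
      m = np.linspace(1.2, 5.0, 60)               # air masses over one morning
      tau_true, V0_true = 0.12, 1.8
      V = V0_true * np.exp(-tau_true * m) * np.exp(0.005 * rng.normal(size=m.size))

      slope, intercept = np.polyfit(m, np.log(V), 1)
      V0_hat, tau_hat = np.exp(intercept), -slope
      print(V0_hat, tau_hat)
      # With ln(V0) calibrated, any later measurement of V at air mass m
      # yields tau = (np.log(V0_hat) - np.log(V)) / m.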

  19. Non-parametric reconstruction of cosmological matter perturbations

    NASA Astrophysics Data System (ADS)

    González, J. E.; Alcaniz, J. S.; Carvalho, J. C.

    2016-04-01

    Perturbative quantities, such as the growth rate (f) and index (γ), are powerful tools to distinguish different dark energy models or modified gravity theories even if they produce the same cosmic expansion history. In this work, without any assumption about the dynamics of the Universe, we apply a non-parametric method to current measurements of the expansion rate H(z) from cosmic chronometers and high-z quasar data and reconstruct the growth factor and rate of linearised density perturbations in the non-relativistic matter component. Assuming realistic values for the matter density parameter Ωm0, as provided by current CMB experiments, we also reconstruct the evolution of the growth index γ with redshift. We show that the reconstruction of current H(z) data constrains the growth index to γ=0.56 ± 0.12 (2σ) at z = 0.09, which is in full agreement with the prediction of the ΛCDM model and some of its extensions.

  20. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0e-τ·m, where a plot of the logged voltage ln(V) vs. the air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  1. Non-parametric Algorithm to Isolate Chunks in Response Sequences

    PubMed Central

    Alamia, Andrea; Solopchuk, Oleg; Olivier, Etienne; Zenon, Alexandre

    2016-01-01

    Chunking consists in grouping items of a sequence into small clusters, named chunks, with the assumed goal of lessening working memory load. Despite extensive research, the current methods used to detect chunks, and to identify different chunking strategies, remain discordant and difficult to implement. Here, we propose a simple and reliable method to identify chunks in a sequence and to determine their stability across blocks. This algorithm is based on a ranking method and its major novelty is that it provides concomitantly both the features of individual chunks in a given sequence and an overall index that quantifies the chunking pattern consistency across sequences. The analysis of simulated data confirmed the validity of our method in different conditions of noise, chunk lengths and chunk numbers; moreover, we found that this algorithm was particularly efficient in the noise range observed in real data, provided that at least 4 sequence repetitions were included in each experimental block. Furthermore, we applied this algorithm to actual reaction time series gathered from 3 published experiments and were able to confirm the findings obtained in the original reports. In conclusion, this novel algorithm is easy to implement, is robust to outliers and provides concurrent and reliable estimation of chunk position and chunking dynamics, making it useful to study both sequence-specific and general chunking effects. The algorithm is available at: https://github.com/artipago/Non-parametric-algorithm-to-isolate-chunks-in-response-sequences. PMID:27708565
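
    A heavily simplified, hypothetical illustration of rank-based segmentation (the published algorithm at the link above is considerably more elaborate): the slowest responses, by within-sequence rank, are read as chunk boundaries, since long pauses tend to precede the retrieval of a new chunk.

      # Toy rank-based chunk segmentation; illustrative only, not the
      # authors' algorithm (see the GitHub repository linked above).
      import numpy as np

      def chunk_boundaries(rt, n_chunks=2):
          """Return indices of the n_chunks slowest responses, taken as
          chunk starts (long pauses precede chunk retrieval)."""
          rt = np.asarray(rt, dtype=float)
          ranks = rt.argsort().argsort()          # 0 = fastest response
          return np.sort(np.where(ranks >= len(rt) - n_chunks)[0])

      rts = [0.9, 0.3, 0.3, 0.8, 0.3, 0.3, 0.3]   # pauses before items 0 and 3
      print(chunk_boundaries(rts, n_chunks=2))    # -> [0 3]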

  2. Time-frequency analysis for parametric and non-parametric identification of nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Frank Pai, P.

    2013-04-01

    This paper points out the differences between linear and nonlinear system identification tasks, shows that time-frequency analysis is most appropriate for nonlinearity identification, and presents advanced signal processing techniques that combine time-frequency decomposition and perturbation methods for parametric and non-parametric identification of nonlinear dynamical systems. Hilbert-Huang transform (HHT) is a recent data-driven adaptive time-frequency analysis technique that combines the use of empirical mode decomposition (EMD) and Hilbert transform (HT). Because EMD does not use predetermined basis functions and function orthogonality for component extraction, HHT provides more concise component decomposition and more accurate time-frequency analysis than the short-time Fourier transform and wavelet transform for extraction of system characteristics and nonlinearities. However, HHT's accuracy seriously suffers from the end effect caused by the discontinuity-induced Gibbs' phenomenon. Moreover, because HHT requires a long set of data obtained by high-frequency sampling, it is not appropriate for online frequency tracking. This paper presents a conjugate-pair decomposition (CPD) method that requires only a few recent data points sampled at a low frequency for sliding-window point-by-point adaptive time-frequency analysis and can be used for online frequency tracking. To improve adaptive time-frequency analysis, a methodology is developed by combining EMD and CPD for noise filtering in the time domain, reducing the end effect, and resolving other mathematical and numerical problems in time-frequency analysis. For parametric identification of a nonlinear system, the methodology processes one steady-state response and/or one free damped transient response and uses amplitude-dependent dynamic characteristics derived from perturbation analysis to determine the type and order of nonlinearity and system parameters. For non-parametric identification, the methodology
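
    A minimal sketch of the Hilbert-transform step of HHT on a synthetic chirp (the paper's EMD/CPD preprocessing is omitted; signal and rates are illustrative): the instantaneous frequency is the derivative of the unwrapped analytic-signal phase.

      # Hedged sketch: HT-based instantaneous-frequency tracking, the
      # "HT" half of HHT; EMD component extraction is omitted here.
      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0                                   # sampling rate in Hz
      t = np.arange(0, 2, 1 / fs)
      x = np.cos(2 * np.pi * (10 * t + 4 * t**2))   # chirp: 10 Hz sweeping upward

      analytic = hilbert(x)                         # x(t) + i * HT{x}(t)
      phase = np.unwrap(np.angle(analytic))
      inst_freq = np.gradient(phase, t) / (2 * np.pi)
      # inst_freq tracks 10 + 8*t Hz away from the record ends, where the
      # end effect mentioned above distorts the estimate.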

  3. Non-parametric frequency analysis of extreme values for integrated disaster management considering probable maximum events

    NASA Astrophysics Data System (ADS)

    Takara, K. T.

    2015-12-01

    This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than the design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: Collecting and checking extreme-value data; Step 2: Enumerating probability distributions that would fit the data well; Step 3: Parameter estimation; Step 4: Testing goodness of fit; Step 5: Checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: Selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that the PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. The bootstrap resampling can correct the bias of the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM thus has the advantage of avoiding the various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as new parameter values combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
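
    A hedged sketch of the NPM's core on synthetic annual maxima: the empirical T-year quantile, with bootstrap bias correction and a bootstrap standard error as the accuracy measure (distribution and sample size below are illustrative).

      # Hedged sketch: non-parametric 100-year quantile with bootstrap
      # bias correction and standard error; synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      annual_max = rng.gumbel(loc=100.0, scale=30.0, size=120)  # 120 years of rainfall

      T = 100                                       # return period in years
      p = 1 - 1 / T                                 # non-exceedance probability
      q_hat = np.quantile(annual_max, p)            # empirical 100-year event

      boot = np.array([
          np.quantile(rng.choice(annual_max, annual_max.size, replace=True), p)
          for _ in range(2000)
      ])
      se = boot.std(ddof=1)                         # bootstrap standard error
      bias = boot.mean() - q_hat                    # bootstrap bias estimate
      print(q_hat - bias, se)                       # bias-corrected quantile, SE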

  4. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    NASA Astrophysics Data System (ADS)

    González, Adriana; Delouille, Véronique; Jacques, Laurent

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  5. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is taken to be true when this probability is a maximum. We introduce a non-parametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable. PMID:18238122
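
    A minimal sketch of a Parzen-window PDF estimate with a Gaussian window, the kind of density the matching probability could be built on (data and bandwidth below are illustrative, not the paper's attributes):

      # Hedged sketch: Parzen-window density estimate with Gaussian windows.
      import numpy as np

      def parzen_pdf(samples, query, h=0.1):
          """Average of Gaussian windows of width h centred on each sample."""
          samples = np.asarray(samples, dtype=float)
          d = (query[:, None] - samples[None, :]) / h
          return np.exp(-0.5 * d**2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

      diffs = np.random.default_rng(0).normal(0.0, 0.2, size=500)  # attribute differences
      grid = np.linspace(-1, 1, 201)
      pdf = parzen_pdf(diffs, grid)               # integrates to ~1 over the grid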

  6. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José

    2015-10-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of the retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate the different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations with a linear regression fitting function were evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination, (ρ560 - ρ1610 - ρ2190) / (ρ560 + ρ1610 + ρ2190), with a 10-fold cross-validation R2CV of 0.82 (RMSECV: 0.62). This family of methods excels in processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) were evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Kernel-based MLRAs in particular led to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with an R2CV of 0.90 (RMSECV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R2 of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
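
    A hedged sketch of the winning parametric formulation on synthetic reflectances (not the SPARC data): compute the three-band index and fit the linear regression function.

      # Hedged sketch: the paper's best three-band index with a linear fit;
      # reflectances and LAI values below are synthetic toy data.
      import numpy as np

      rng = np.random.default_rng(0)
      lai = rng.uniform(0.5, 6.0, 100)
      r560 = 0.08 + 0.01 * lai + 0.005 * rng.normal(size=100)   # toy reflectances
      r1610 = 0.25 - 0.02 * lai + 0.005 * rng.normal(size=100)
      r2190 = 0.15 - 0.01 * lai + 0.005 * rng.normal(size=100)

      index = (r560 - r1610 - r2190) / (r560 + r1610 + r2190)
      a, b = np.polyfit(index, lai, 1)             # linear regression fitting function
      lai_hat = a * index + b
      r2 = np.corrcoef(lai, lai_hat)[0, 1] ** 2    # cross-validate in practice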

  7. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap.
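
    As a hedged sketch of the comparison (with synthetic skewed net-benefit data, not the trial's), the two standard errors can be computed side by side:

      # Hedged sketch: CLT vs. bootstrap standard errors for a mean
      # incremental net benefit (INB) with skewed costs; toy data.
      import numpy as np

      rng = np.random.default_rng(0)
      nb_treat = 5000 - rng.lognormal(mean=7.0, sigma=1.0, size=80)  # net benefit
      nb_ctrl = 4000 - rng.lognormal(mean=7.0, sigma=1.0, size=80)

      inb = nb_treat.mean() - nb_ctrl.mean()
      se_clt = np.sqrt(nb_treat.var(ddof=1) / 80 + nb_ctrl.var(ddof=1) / 80)

      boot = np.array([
          rng.choice(nb_treat, 80, replace=True).mean()
          - rng.choice(nb_ctrl, 80, replace=True).mean()
          for _ in range(2000)
      ])
      se_boot = boot.std(ddof=1)
      print(inb, se_clt, se_boot)   # the two SEs agree closely at this sample size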

  8. Non-parametric inferences on climate change of high-resolution spatial patterns of precipitation extremes in Iberia

    NASA Astrophysics Data System (ADS)

    Melo-Gonçalves, Paulo; Rocha, Alfredo; Pinto, Joaquim; Santos, João; Corte-Real, João

    2013-04-01

    Precipitation daily-total data, obtained from a multi-model ensemble of Regional Climate Model (RCM) simulations provided by the EU FP6 Integrated Project ENSEMBLES, are analysed at a horizontal spatial resolution of 25 km over the Iberian Peninsula (IP). The ENSEMBLES RCMs were driven by boundary conditions imposed by General Circulation Models (GCMs) that ran under historic conditions from 1961 to 2000, and under the SRES A1B scenario from 2001 to 2100. Annual and seasonal indices of precipitation extremes, proposed by the CCI/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices (ETCCDI), were derived from the daily precipitation ensemble. The ensemble of ETCCDI indices is subjected to climate detection methods in order to identify Iberian regions projected to experience greater climate change. Non-parametric climate change detection methods are applied to each member of the ETCCDI multi-model ensemble (ETCCDI-MME) and to its median (ETCCDI-MMEM). The resulting statistics are used to infer climate change projections and associated uncertainties. Climate change projections are evaluated from the statistics obtained from the ETCCDI-MMEM, while the uncertainties of those projections are evaluated by a rank-based measure of the spread of these statistics across the ETCCDI-MME. All methods consist of an estimator whose realization, or estimate, is tested by a non-parametric hypothesis test: (i) the Theil-Sen linear trend, from 1961 to 2100, tested by the Mann-Kendall test; (ii) differences between the climatologies, estimated by the time median, of a near-future (2021-2050) and a distant-future (2071-2100) climate from the climatology of a recent-past reference climate (1961-1990), tested by the Mann-Whitney test; and (iii) the difference between the probability distributions of the near and distant climates from that of the reference climate, tested by the Kolmogorov-Smirnov test. IP regions with statistically significant, at the 0.05 level, projected climate change
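
    A hedged sketch of the three estimator/test pairs on a single synthetic grid-cell series; scipy's kendalltau against time is used here as a stand-in for the Mann-Kendall trend test, and all data are toy values:

      # Hedged sketch of the (i)-(iii) estimator/test pairs with scipy.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      years = np.arange(1961, 2101)
      idx = 50 + 0.05 * (years - 1961) + rng.normal(0, 3, years.size)  # toy ETCCDI index

      # (i) Theil-Sen trend; significance via Kendall's tau against time,
      #     a common Mann-Kendall analogue
      slope, intercept, _, _ = stats.theilslopes(idx, years)
      tau, p_trend = stats.kendalltau(years, idx)

      # (ii) shift in the median between distant-future and reference climatologies
      ref = idx[(years >= 1961) & (years <= 1990)]
      fut = idx[(years >= 2071) & (years <= 2100)]
      _, p_median = stats.mannwhitneyu(fut, ref)

      # (iii) change in the full distribution
      _, p_dist = stats.ks_2samp(fut, ref)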

  9. THE DARK MATTER PROFILE OF THE MILKY WAY: A NON-PARAMETRIC RECONSTRUCTION

    SciTech Connect

    Pato, Miguel; Iocco, Fabio

    2015-04-10

    We present the results of a new, non-parametric method to reconstruct the Galactic dark matter profile directly from observations. Using the latest kinematic data to track the total gravitational potential and the observed distribution of stars and gas to set the baryonic component, we infer the dark matter contribution to the circular velocity across the Galaxy. The radial derivative of this dynamical contribution is then estimated to extract the dark matter profile. The innovative feature of our approach is that it makes no assumption on the functional form or shape of the profile, thus allowing for a clean determination with no theoretical bias. We illustrate the power of the method by constraining the spherical dark matter profile between 2.5 and 25 kpc away from the Galactic center. The results show that the proposed method, free of widely used assumptions, can already be applied to pinpoint the dark matter distribution in the Milky Way with competitive accuracy, and paves the way for future developments.
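
    A hedged sketch of the final profile-extraction step for a spherical halo, with an assumed toy dark matter rotation curve in place of the paper's inferred one: the enclosed mass follows from M(<r) = v^2 r / G, and the density from its radial derivative.

      # Hedged sketch: density profile from the DM circular-velocity
      # contribution, assuming spherical symmetry; toy velocity curve.
      import numpy as np

      G = 4.30091e-6                          # kpc (km/s)^2 / Msun

      r = np.linspace(2.5, 25.0, 100)         # kpc
      v_dm = 180.0 * np.sqrt(r / (r + 10.0))  # assumed DM circular velocity, km/s

      M_enc = v_dm**2 * r / G                 # enclosed mass, Msun
      rho = np.gradient(M_enc, r) / (4 * np.pi * r**2)  # Msun / kpc^3
      alpha = np.gradient(np.log(rho), np.log(r))       # local log-log slope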

  10. The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches

    NASA Astrophysics Data System (ADS)

    Bucher, Martin; Racine, Benjamin; van Tent, Bartjan

    2016-05-01

    We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called fNL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.

  11. Bayesian non-parametric approaches to reconstructing oscillatory systems and the Nyquist limit

    NASA Astrophysics Data System (ADS)

    Žurauskienė, Justina; Kirk, Paul; Thorne, Thomas; Stumpf, Michael P. H.

    Reconstructing continuous signals from discrete time-points is a challenging inverse problem encountered in many scientific and engineering applications. For oscillatory signals, classical results due to Nyquist set the limit below which it becomes impossible to reliably reconstruct the oscillation dynamics. Here we revisit this problem for vector-valued outputs and apply Bayesian non-parametric approaches in order to solve the function estimation problem. The main aim of the current paper is to map how we can use correlations among different outputs to reconstruct signals at a sampling rate that lies below the Nyquist rate. We show that it is possible to use multiple-output Gaussian processes to capture dependences between outputs which facilitate reconstruction of signals in situations where conventional Gaussian processes (i.e. those aimed at describing scalar signals) fail, and we delineate the phase and frequency dependence of the reliability of this type of approach. In addition to simple toy models we also consider the dynamics of the tumour suppressor gene p53, which exhibits oscillations under physiological conditions, and which can be reconstructed more reliably in our new framework.

  12. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

    The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, with a high optimality-index value and high sensitivity and specificity.

  13. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    PubMed

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. PMID:25524862

  15. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  16. Establishment of Biological Reference Intervals and Reference Curve for Urea by Exploratory Parametric and Non-Parametric Quantile Regression Models

    PubMed Central

    2013-01-01

    Background: The validity of renal function tests as a diagnostic tool depends substantially on the Biological Reference Interval (BRI) of urea. Establishment of the BRI of urea is difficult, partly because the exclusion criteria for the selection of reference data are quite rigid and partly due to compartmentalization considerations regarding the age and sex of the reference individuals. Moreover, construction of a Biological Reference Curve (BRC) of urea is imperative to highlight the partitioning requirements. Materials and Methods: This a priori study examines data collected by measuring the serum urea of 3202 age- and sex-matched individuals, aged between 1 and 80 years, by a kinetic UV Urease/GLDH method on a Roche Cobas 6000 auto-analyzer. Results: A Mann-Whitney U test of the reference data confirmed the partitioning requirement by both age and sex. Further statistical analysis revealed the incompatibility of the data with a proposed parametric model. Hence the data were analysed non-parametrically. The BRI was found to be identical for both sexes up to the 2nd decade, and the BRI for males increased progressively from the 6th decade onwards. Four non-parametric models were postulated for construction of the BRC: Gaussian kernel, double kernel, local mean and local constant, of which the last generated the best-fitting curves. Conclusion: Clinical decision making should become easier and the diagnostic implications of renal function tests more meaningful if this BRI is followed and the BRC is used as a desktop tool in conjunction with similar data for serum creatinine.

  17. Non-parametric seismic hazard analysis in the presence of incomplete data

    NASA Astrophysics Data System (ADS)

    Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush

    2016-07-01

    The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distributions, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study introduces an imputation procedure for completing earthquake catalog data that allows the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.

  18. Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes

    PubMed Central

    Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D.

    2016-01-01

    This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. PMID:26993062

  19. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  1. A non-parametric peak calling algorithm for DamID-Seq.

    PubMed

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX) - an important transcription factor in sex determination - we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width. PMID:25785608
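
    A heavily simplified sketch of steps 1)-4) on synthetic binned counts (not the authors' NPPC code): bootstrap the control to build a null comparison, scale the libraries, form fold changes, and threshold against the null distribution.

      # Toy illustration of resample / scale / fold-change / threshold.
      import numpy as np

      rng = np.random.default_rng(0)
      dam_fusion = rng.poisson(5.0, size=10_000).astype(float)  # binned read counts
      dam_only = rng.poisson(5.0, size=10_000).astype(float)    # control (Dam only)
      dam_fusion[2000:2050] += 40                               # a spiked-in "peak"

      # 1) reads resampling: bootstrap the control for a null comparison
      control_boot = rng.choice(dam_only, size=dam_only.size, replace=True)

      # 2) scaling and signal-to-noise fold changes (pseudocounts avoid /0)
      scale = dam_fusion.sum() / dam_only.sum()
      fold = (dam_fusion + 1) / (scale * (dam_only + 1))
      null_fold = (control_boot + 1) / (dam_only + 1)

      # 3)-4) filter and call peaks above a null-derived threshold
      cutoff = np.quantile(null_fold, 0.999)
      peaks = np.where(fold > cutoff)[0]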

  2. Program Computes Thermodynamic Functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1994-01-01

    PAC91 is the latest program in the PAC (Properties and Coefficients) series. Its two principal features are the means it provides for (1) generating theoretical thermodynamic functions from molecular constants and (2) least-squares fitting of these functions to empirical equations. PAC91 is written in FORTRAN 77 to be machine-independent.
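
    A hedged sketch of the least-squares-fitting half of such a program, in Python rather than FORTRAN 77, assuming the standard NASA polynomial form Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4 and toy data:

      # Hedged sketch: least-squares fit of Cp/R data to the Cp part of the
      # NASA 7-coefficient polynomial; the data here are synthetic.
      import numpy as np

      T = np.linspace(300.0, 1000.0, 30)                # temperature, K
      cp_over_R = 3.5 + 2e-3 * T - 5e-7 * T**2          # toy "theoretical" values

      A = np.vander(T, 5, increasing=True)              # columns [1, T, T^2, T^3, T^4]
      coeffs, *_ = np.linalg.lstsq(A, cp_over_R, rcond=None)
      print(coeffs)                                     # a1 ... a5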

  3. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both joint and marginal density estimation. In doing so, we do not assume any form of the marginal or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problem. We compare our method to the kernel density method.

  4. Non-parametric deprojection of surface brightness profiles of galaxies in generalised geometries

    NASA Astrophysics Data System (ADS)

    Chakrabarty, D.

    2010-02-01

    Aims: We present a new Bayesian non-parametric deprojection algorithm, DOPING (Deprojection of Observed Photometry using an INverse Gambit), designed to extract 3-D luminosity density distributions ρ from observed surface brightness maps I in generalised geometries, while taking into account changes in intrinsic shape with radius, using a penalised likelihood approach and a Markov chain Monte Carlo optimiser. Methods: We provide the most likely solution to the integral equation that represents the deprojection of the measured I to ρ. In order to keep the solution modular, we choose to express ρ as a function of the line-of-sight (LOS) coordinate z. We calculate the extent of the system along the z-axis for a given point on the image that lies within an identified isophotal annulus. The extent along the LOS is binned and the density is held constant over each such z-bin. The code begins with a seed density, and at the beginning of an iterative step the trial ρ is updated. Comparison of the projection of the current choice of ρ with the observed I defines the likelihood function (which is supplemented by Laplacian regularisation), the maximal region of which is sought by the optimiser (Metropolis-Hastings). Results: The algorithm is successfully tested on a set of test galaxies whose morphology ranges from an elliptical galaxy with varying eccentricity to an infinitesimally thin disk galaxy marked by an abruptly varying eccentricity profile. Applications are made to the faint dwarf elliptical galaxy IC 3019 and another dwarf elliptical that is characterised by a central spheroidal nuclear component superimposed upon a more extended flattened component. The result of deprojecting the X-ray image of cluster A1413 - assumed triaxial, with axial ratios and inclination taken from the literature - is also presented.

  5. Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation

    NASA Astrophysics Data System (ADS)

    Pentaris, Fragkiskos P.; Fouskitakis, George N.

    2014-05-01

    The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement, and they tend to be time-effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which 'monitors' the broad region of the south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity, high-accuracy seismographs - providing acceleration measurements - installed at the basement (the structure's foundation), whose record is presently considered as the ground acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes both non-parametric (frequency-based) and parametric stochastic methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency response Function (FrF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogeneous input (ARX) and AutoRegressive Moving-Average with eXogeneous input (ARMAX) models [4, 5
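
    A hedged sketch of the non-parametric branch on a toy one-resonance "structure": Welch-based spectra of base and roof accelerations yield an FrF magnitude whose peak marks a natural frequency (signals, filter, and rates below are illustrative).

      # Hedged sketch: Welch/H1 FrF estimate from base and roof accelerations.
      import numpy as np
      from scipy import signal

      fs = 100.0                                    # Hz, accelerometer sampling rate
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(0)
      base = rng.normal(size=t.size)                # ground excitation (toy)
      # toy structure: resonance near 2 Hz imposed by a narrow band-pass filter
      b, a = signal.butter(2, [1.8, 2.2], btype="bandpass", fs=fs)
      roof = signal.lfilter(b, a, base) + 0.05 * rng.normal(size=t.size)

      f, Pxy = signal.csd(base, roof, fs=fs, nperseg=1024)   # cross-spectrum
      f, Pxx = signal.welch(base, fs=fs, nperseg=1024)       # input auto-spectrum
      frf = np.abs(Pxy / Pxx)                       # H1 FrF estimate
      f_nat = f[np.argmax(frf)]                     # ~2 Hz for this toy system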

  6. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  7. Automatic computation of transfer functions

    SciTech Connect

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.

  8. Non-Parametric Bayesian Human Motion Recognition Using a Single MEMS Tri-Axial Accelerometer

    PubMed Central

    Ahmed, M. Ejaz; Song, Ju Bin

    2012-01-01

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Means (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method. PMID:23201992
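
    As a hedged stand-in for the IGMM, scikit-learn's BayesianGaussianMixture with a Dirichlet process prior infers the number of clusters without being told it (it uses variational inference rather than the paper's collapsed Gibbs sampler); the features below are toy values:

      # Hedged sketch: Dirichlet-process mixture clustering of motion features.
      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      # toy accelerometer features for three motions; the model is not told "3"
      feats = np.vstack([
          rng.normal([0, 0], 0.3, (100, 2)),        # e.g. walking
          rng.normal([3, 1], 0.3, (100, 2)),        # e.g. running
          rng.normal([0, 4], 0.3, (100, 2)),        # e.g. sitting down
      ])

      dpgmm = BayesianGaussianMixture(
          n_components=10,                          # truncation level, not "k"
          weight_concentration_prior_type="dirichlet_process",
          random_state=0,
      ).fit(feats)
      labels = dpgmm.predict(feats)
      print(len(np.unique(labels)))                 # effective number of motions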

  9. Bayesian inference for longitudinal data with non-parametric treatment effects

    PubMed Central

    Müller, Peter; Quintana, Fernando A.; Rosner, Gary L.; Maitland, Michael L.

    2014-01-01

    We consider inference for longitudinal data based on mixed-effects models with a non-parametric Bayesian prior on the treatment effect. The proposed non-parametric Bayesian prior is a random partition model with a regression on patient-specific covariates. The main feature and motivation for the proposed model is the use of covariates with a mix of different data formats and possibly high-order interactions in the regression. The regression is not explicitly parameterized. It is implied by the random clustering of subjects. The motivating application is a study of the effect of an anticancer drug on a patient's blood pressure. The study involves blood pressure measurements taken periodically over several 24-h periods for 54 patients. The 24-h periods for each patient include a pretreatment period and several occasions after the start of therapy. PMID:24285773

  10. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
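
    A hedged sketch of the first part of the study on a toy test function: compare a plain Monte Carlo design with a Latin hypercube design (scipy's qmc module) by the accuracy of a surrogate. The RBF surrogate below is an illustrative stand-in for the MARS/SVM approximators named above.

      # Hedged sketch: MC vs. Latin hypercube designs for a surrogate fit.
      import numpy as np
      from scipy.stats import qmc
      from scipy.interpolate import RBFInterpolator

      def f(x):                                     # simple 2-D test function
          return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

      rng = np.random.default_rng(0)
      n = 64
      x_mc = rng.uniform(size=(n, 2))               # Monte Carlo design
      x_lhs = qmc.LatinHypercube(d=2, seed=0).random(n)  # maximin variants exist too

      x_test = rng.uniform(size=(2000, 2))
      for name, x in [("MC", x_mc), ("LHS", x_lhs)]:
          surrogate = RBFInterpolator(x, f(x))      # cheap-to-evaluate stand-in
          rmse = np.sqrt(np.mean((surrogate(x_test) - f(x_test)) ** 2))
          print(name, rmse)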

  11. Meta-Analysis of Candidate Gene Effects Using Bayesian Parametric and Non-Parametric Approaches

    PubMed Central

    Wu, Xiao-Lin; Gianola, Daniel; Rosa, Guilherme J. M.; Weigel, Kent A.

    2014-01-01

    Candidate gene (CG) approaches provide a strategy for identification and characterization of major genes underlying complex phenotypes such as production traits and susceptibility to diseases, but the conclusions tend to be inconsistent across individual studies. Meta-analysis approaches can deal with these situations, e.g., by pooling effect-size estimates or combining P values from multiple studies. In this paper, we evaluated the performance of two types of statistical models, parametric and non-parametric, for meta-analysis of CG effects using simulated data. Both models estimated a “central” effect size while taking into account heterogeneity over individual studies. The empirical distribution of study-specific CG effects was multi-modal. The parametric model assumed a normal distribution for the study-specific CG effects whereas the non-parametric model relaxed this assumption by posing a more general distribution with a Dirichlet process prior (DPP). Results indicated that the meta-analysis approaches could reduce false positive or false negative rates by pooling strengths from multiple studies, as compared to individual studies. In addition, the non-parametric, DPP model captured the variation of the “data” better than its parametric counterpart. PMID:25057320

  12. Non-parametric determination of H and He interstellar fluxes from cosmic-ray data

    NASA Astrophysics Data System (ADS)

    Ghelfi, A.; Barao, F.; Derome, L.; Maurin, D.

    2016-06-01

    Context. Top-of-atmosphere (TOA) cosmic-ray (CR) fluxes from satellites and balloon-borne experiments are snapshots of the solar activity imprinted on the interstellar (IS) fluxes. Given a series of snapshots, the unknown IS flux shape and the level of modulation (for each snapshot) can be recovered. Aims: We wish (i) to provide the most accurate determination of the IS H and He fluxes from TOA data alone; (ii) to obtain the associated modulation levels (and uncertainties) while fully accounting for the correlations with the IS flux uncertainties; and (iii) to inspect whether the minimal force-field approximation is sufficient to explain all the data at hand. Methods: Using H and He TOA measurements, including the recent high-precision AMS, BESS-Polar, and PAMELA data, we performed a non-parametric fit of the IS fluxes JIS for H and He and of the modulation level φi for each data-taking period. We relied on a Markov chain Monte Carlo (MCMC) engine to extract the probability density function and correlations (hence the credible intervals) of the sought parameters. Results: Although H and He are the most abundant and best measured CR species, several datasets had to be excluded from the analysis because of inconsistencies with other measurements. From the subset of data passing our consistency cut, we provide ready-to-use best-fit and credible intervals for the H and He IS fluxes from MeV/n to PeV/n energy (with a relative precision in the range [2-10%] at 1σ). Given the strong correlation between the JIS and φi parameters, the uncertainties on JIS translate into Δφ ≈ ±30 MV (at 1σ) for all experiments. We also find that the presence of 3He in He data biases φ towards higher values by ~30 MV. The force-field approximation, despite its limitations, gives an excellent (χ2/d.o.f. = 1.02) description of the recent high-precision TOA H and He fluxes. Conclusions: The analysis must be extended to different charge species and more realistic modulation models. It would benefit
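
    A hedged sketch of the force-field approximation itself, for protons (Z/A = 1), with a toy power law standing in for the fitted IS flux: the TOA flux follows from shifting the energy by φ and rescaling by the ratio of relativistic momenta squared.

      # Hedged sketch: force-field modulation of an assumed IS proton flux.
      import numpy as np

      m_p = 0.938272  # GeV, proton rest mass

      def force_field(J_IS, T_toa, phi):
          """J_TOA(T) = J_IS(T + phi) * T(T + 2m) / [(T + phi)(T + phi + 2m)]."""
          T_is = T_toa + phi
          return J_IS(T_is) * T_toa * (T_toa + 2 * m_p) / (T_is * (T_is + 2 * m_p))

      # toy power law standing in for the fitted non-parametric J_IS
      J_IS = lambda T: 1.8e4 * (T + m_p) ** -2.7   # arbitrary toy units

      T = np.logspace(-1, 2, 50)                   # kinetic energy, GeV
      J_toa = force_field(J_IS, T, phi=0.55)       # phi in GV ~ GeV for protons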

  13. Neural computation of arithmetic functions

    NASA Technical Reports Server (NTRS)

    Siu, Kai-Yeung; Bruck, Jehoshua

    1990-01-01

    An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.

  14. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  15. Non-parametric trend analysis of water quality data of rivers in Kansas

    USGS Publications Warehouse

    Yu, Y.-S.; Zou, S.; Whittemore, D.

    1993-01-01

    Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basinwide trends are non-homogeneous. © 1993.

  16. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  17. Computational complexity of Boolean functions

    NASA Astrophysics Data System (ADS)

    Korshunov, Aleksei D.

    2012-02-01

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  18. Measuring Dark Matter Profiles Non-Parametrically in Dwarf Spheroidals: An Application to Draco

    NASA Astrophysics Data System (ADS)

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Drory, Niv; Williams, Michael J.

    2013-02-01

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 <= r <= 700 pc. The profile for r >= 20 pc is well fit by a power law with slope α = -1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  19. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the coefficient matrix of the linear system is less ill-conditioned, which enhances the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
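
    A hedged sketch of the retrieval step on a toy discretised kernel (the real forward model uses ADA and the Lambert-Beer law): scipy's LSQR solves the ill-conditioned linear system, with damping as simple regularisation.

      # Hedged sketch: LSQR inversion of a toy smooth kernel mapping a size
      # distribution to a handful of spectral extinction measurements.
      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)
      n_wave, n_size = 12, 40
      A = np.exp(-((np.linspace(0, 3, n_wave)[:, None]
                    - np.linspace(0, 3, n_size)[None, :]) ** 2))  # toy kernel

      grid = np.linspace(0, 3, n_size)
      true = np.exp(-0.5 * ((grid - 1.2) / 0.3) ** 2)             # toy ASD
      b = A @ true + 1e-3 * rng.normal(size=n_wave)               # noisy signals

      sol = lsqr(A, b, damp=1e-2)[0]   # damp adds Tikhonov-style regularisation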

  20. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    SciTech Connect

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-02-15

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 <= r <= 700 pc. The profile for r >= 20 pc is well fit by a power law with slope α = -1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  1. The merger fraction of active and inactive galaxies in the local Universe through an improved non-parametric classification

    NASA Astrophysics Data System (ADS)

    Cotini, Stefano; Ripamonti, Emanuele; Caccianiga, Alessandro; Colpi, Monica; Della Ceca, Roberto; Mapelli, Michela; Severgnini, Paola; Segreto, Alberto

    2013-05-01

    We investigate the possible link between mergers and the enhanced activity of supermassive black holes (SMBHs) at the centre of galaxies, by comparing the merger fraction of a local sample (0.003 ≤ z < 0.03) of active galaxies - 59 active galactic nuclei host galaxies selected from the All-Sky Swift Burst Alert Telescope (BAT) Survey - with an appropriate control sample (247 sources extracted from the HyperLeda catalogue) that has the same redshift distribution as the BAT sample. We detect the interacting systems in the two samples on the basis of non-parametric structural indexes of concentration (C), asymmetry (A), clumpiness (S), the Gini coefficient (G) and the second-order moment of light (M20). In particular, we propose a new morphological criterion, based on a combination of all these indexes, that improves the identification of interacting systems. We also present new software - PyCASSo (PYTHON CAS software) - for the automatic computation of the structural indexes. After correcting for the completeness and reliability of the method, we find that the fraction of interacting galaxies among the active population (20^{+7}_{-5} per cent) exceeds the merger fraction of the control sample (4^{+1.7}_{-1.2} per cent). Choosing a mass-matched control sample leads to equivalent results, although with slightly lower statistical significance. Our findings support the scenario in which mergers trigger the nuclear activity of SMBHs.
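
    For intuition, simplified versions of two of the structural indexes are easy to state in code. The sketch below is not the PyCASSo implementation (background subtraction, recentring and aperture handling are omitted); it computes a rotational asymmetry index and the Gini coefficient of the pixel fluxes in their commonly used forms.

    ```python
    import numpy as np

    def asymmetry(img):
        """Rotational asymmetry A = sum|I - I_180| / (2 sum|I|), where I_180
        is the image rotated by 180 degrees about its centre."""
        return np.abs(img - np.rot90(img, 2)).sum() / (2.0 * np.abs(img).sum())

    def gini(img):
        """Gini coefficient of the sorted absolute pixel fluxes."""
        x = np.sort(np.abs(img).ravel())
        n = x.size
        i = np.arange(1, n + 1)
        return ((2 * i - n - 1) * x).sum() / (x.mean() * n * (n - 1))
    ```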

  2. Non-parametric reconstruction of an inflaton potential from Einstein-Cartan-Sciama-Kibble gravity with particle production

    NASA Astrophysics Data System (ADS)

    Desai, Shantanu; Popławski, Nikodem J.

    2016-04-01

    The coupling between spin and torsion in the Einstein-Cartan-Sciama-Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about 10^-42 s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.

  3. Non-parametric analysis of LANDSAT maps using neural nets and parallel computers

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1991-01-01

    Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by the LANDSAT satellite. Performance is evaluated by comparing classifications of a scene in the vicinity of Washington, DC. The problem of optimal selection of categories is addressed as a step in the classification process.

  4. Comparisons of parametric and non-parametric classification rules for e-nose and e-tongue

    NASA Astrophysics Data System (ADS)

    Mahat, Nor Idayu; Zakaria, Ammar; Shakaff, Ali Yeon Md

    2015-12-01

    This paper evaluates the performance of parametric and non-parametric classification rules in sensor technology. The growth of sensor technologies, e-nose and e-tongue, has urged engineers to equip themselves with the most recent and advanced statistical approaches. As data collected from e-noses and e-tongues exhibit some complexities, data pre-processing and transformation are often performed prior to classification. This paper discusses comparisons of some known parametric and non-parametric classification rules applied to classifying e-nose and e-tongue data. The comparisons, based on leave-one-out accuracy, sensitivity and specificity, show that non-parametric approaches, especially k-nearest neighbour, are not much distorted by changes of distribution, whereas Naïve Bayes is greatly influenced by the structure of the data.
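
    A leave-one-out comparison of the two rules singled out above takes only a few lines with scikit-learn; the arrays here are random stand-ins for sensor-array responses, and the pre-processing is reduced to plain standardization.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Toy stand-ins for e-nose/e-tongue data: 8 sensor channels, 2 classes.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(1.5, 1, (30, 8))])
    y = np.repeat([0, 1], 30)

    for clf in (KNeighborsClassifier(n_neighbors=3), GaussianNB()):
        model = make_pipeline(StandardScaler(), clf)   # simple pre-processing step
        acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
        print(type(clf).__name__, round(acc, 3))
    ```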

  5. Non-parametric Evaluation of Biomarker Accuracy under Nested Case-control Studies

    PubMed Central

    Cai, Tianxi; Zheng, Yingye

    2012-01-01

    Summary To evaluate the clinical utility of new risk markers, a crucial step is to measure their predictive accuracy with prospective studies. However, it is often infeasible to obtain marker values for all study participants. The nested case-control (NCC) design is a useful cost-effective strategy for such settings. Under the NCC design, markers are only ascertained for cases and a fraction of controls sampled randomly from the risk sets. The outcome dependent sampling generates a complex data structure and therefore a challenge for analysis. Existing methods for analyzing NCC studies focus primarily on association measures. Here, we propose a class of non-parametric estimators for commonly used accuracy measures. We derived asymptotic expansions for accuracy estimators based on both finite population and Bernoulli sampling and established asymptotic equivalence between the two. Simulation results suggest that the proposed procedures perform well in finite samples. The new procedures were illustrated with data from the Framingham Offspring study. PMID:22844169

  6. Assessing T Cell Clonal Size Distribution: A Non-Parametric Approach

    PubMed Central

    Bolkhovskaya, Olesya V.; Zorin, Daniil Yu.; Ivanchenko, Mikhail V.

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis, who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity. PMID:25275470
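
    One standard non-parametric route to such an exponent is the continuous maximum-likelihood (Hill-type) estimator, sketched below on a synthetic clone-size sample; the authors' exact estimation and testing procedure may differ.

    ```python
    import numpy as np

    def powerlaw_mle(x, x_min):
        """Continuous power-law exponent via maximum likelihood:
        alpha = 1 + n / sum(ln(x_i / x_min)) over the tail x_i >= x_min."""
        tail = x[x >= x_min]
        return 1.0 + tail.size / np.log(tail / x_min).sum()

    # Synthetic clone sizes drawn from a power law with alpha = 2.5 (assumption).
    rng = np.random.default_rng(2)
    sizes = (1 - rng.random(10_000)) ** (-1 / 1.5)   # inverse-CDF sampling, x_min = 1
    print(powerlaw_mle(sizes, x_min=1.0))            # should be close to 2.5
    ```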

  7. Developing two non-parametric performance models for higher learning institutions

    NASA Astrophysics Data System (ADS)

    Kasim, Maznah Mat; Kashim, Rosmaini; Rahim, Rahela Abdul; Khan, Sahubar Ali Muhamed Nadhar

    2016-08-01

    Measuring the performance of higher learning institutions (HLIs) is a must for these institutions to improve their excellence. This paper focuses on the formulation of two performance models, an efficiency model and an effectiveness model, utilizing a non-parametric method, Data Envelopment Analysis (DEA). The proposed models are validated by measuring the performance of 16 public universities in Malaysia for the year 2008. However, since data for one of the variables were unavailable, an estimate was used as a proxy to represent the real data. The results show that the average efficiency and effectiveness scores were 0.817 and 0.900 respectively; six universities were fully efficient and eight universities fully effective, and a total of six universities were both efficient and effective. It is suggested that the two proposed performance models could work as complementary methods to the existing performance appraisal method, or as alternative methods for monitoring the performance of HLIs, especially in Malaysia.

  8. Assessing T cell clonal size distribution: a non-parametric approach.

    PubMed

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis, who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.

  9. Assessing T cell clonal size distribution: a non-parametric approach.

    PubMed

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis, who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity. PMID:25275470

  10. Metacognition: computation, biology and function.

    PubMed

    Fleming, Stephen M; Dolan, Raymond J; Frith, Christopher D

    2012-05-19

    Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746

  11. Metacognition: computation, biology and function

    PubMed Central

    Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.

    2012-01-01

    Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746

  12. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  13. Non-parametric photic entrainment of Djungarian hamsters with different rhythmic phenotypes.

    PubMed

    Schöttner, Konrad; Hauer, Jane; Weinert, Dietmar

    2016-01-01

    To investigate the role of non-parametric light effects in entrainment, Djungarian hamsters of two different circadian phenotypes were exposed to skeleton photoperiods, or to light pulses at different circadian times, to compile phase response curves (PRCs). Wild-type (WT) hamsters show daily rhythms of locomotor activity in accord with the ambient light/dark conditions, with activity onset and offset strongly coupled to light-off and light-on, respectively. Hamsters of the delayed activity onset (DAO) phenotype, in contrast, progressively delay their activity onset, whereas activity offset remains coupled to light-on. The present study was performed to better understand the underlying mechanisms of this phenomenon. Hamsters of DAO and WT phenotypes were kept first under standard housing conditions with a 14:10 h light-dark cycle, and then exposed to skeleton photoperiods (one or two 15-min light pulses of 100 lx at the times of the former light-dark and/or dark-light transitions). In a second experiment, hamsters of both phenotypes were transferred to constant darkness and allowed to free-run until the lengths of the active (α) and resting (ρ) periods were equal (α:ρ = 1). At this point, animals were then exposed to light pulses (100 lx, 15 min) at different circadian times (CTs). Phase and period changes were estimated separately for activity onset and offset. When exposed to skeleton-photoperiods with one or two light pulses, the daily activity patterns of DAO and WT hamsters were similar to those obtained under conditions of a complete 14:10 h light-dark cycle. However, in the case of giving only one light pulse at the time of the former light-dark transition, animals temporarily free-ran until activity offset coincided with the light pulse. These results show that photic entrainment of the circadian activity rhythm is attained primarily via non-parametric mechanisms, with the "morning" light pulse being the essential cue. In the second experiment, typical

  14. Computation of generating functions for biological molecules

    SciTech Connect

    Howell, J.A.; Smith, T.F.; Waterman, M.S.

    1980-08-01

    The object of this paper is to give algorithms and techniques for computing generating functions of certain RNA configurations. Combinatorics and symbolic computation are utilized to calculate the generating functions for small RNA molecules. From these generating functions, it is possible to obtain information about the bonding and structure of the molecules. Specific examples of interest to biology are given and discussed.
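
    As a minimal sketch of the combinatorial core, the classic recursion for counting RNA secondary structures (the last base is either unpaired, or paired with some base j, which splits the strand into independent inside and outside regions) yields the coefficients of the corresponding generating function. The minimum-loop parameter m and the recursion form below are standard textbook choices, not necessarily the exact configurations treated in the paper.

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def num_structures(n, m=1):
        """Number of secondary structures on n bases with at least m unpaired
        bases in every hairpin loop."""
        if n <= m + 1:
            return 1
        total = num_structures(n - 1, m)            # base n unpaired
        for j in range(1, n - m):                   # base n paired with base j
            total += num_structures(j - 1, m) * num_structures(n - j - 1, m)
        return total

    # Coefficients of the generating function S(x) = sum_n s_n x^n for m = 1:
    print([num_structures(n) for n in range(10)])   # 1, 1, 1, 2, 4, 8, 17, 37, ...
    ```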

  15. Non-parametric three-way mixed ANOVA with aligned rank tests.

    PubMed

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods. PMID:24303958
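
    The alignment step at the heart of the procedure is simple to sketch: remove the estimated main effects, then rank the aligned observations and feed the ranks to a conventional ANOVA. The two-way version below is a simplification of the three-way mixed design treated in the paper; factor codes are assumed to be zero-based integers.

    ```python
    import numpy as np
    from scipy import stats

    def aligned_ranks_interaction(y, a, b):
        """Align y for the A x B interaction by removing both main effects,
        then rank the aligned values. y: observations; a, b: zero-based
        integer factor codes of the same length."""
        grand = y.mean()
        a_eff = np.array([y[a == i].mean() for i in np.unique(a)])[a] - grand
        b_eff = np.array([y[b == j].mean() for j in np.unique(b)])[b] - grand
        aligned = y - a_eff - b_eff       # keeps interaction + error only
        return stats.rankdata(aligned)    # these ranks go into a standard ANOVA
    ```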

  16. Non-parametric strong lens inversion of Cl 0024+1654: illustrating the monopole degeneracy

    NASA Astrophysics Data System (ADS)

    Liesenborgs, J.; de Rijcke, S.; Dejonghe, H.; Bekaert, P.

    2008-09-01

    The cluster lens Cl 0024+1654 is undoubtedly one of the most beautiful examples of strong gravitational lensing, providing five large images of a single source with well-resolved substructure. Using the information contained in the positions and the shapes of the images, combined with the null space information, a non-parametric technique is used to infer the strong lensing mass map of the central region of this cluster. This yields a strong lensing mass of 1.60 × 10^14 M_solar within a 0.5 arcmin radius around the cluster centre. This mass distribution is then used as a case study of the monopole degeneracy, which may be one of the most important degeneracies in gravitational lensing studies and which is extremely hard to break. We illustrate the monopole degeneracy by adding circularly symmetric density distributions with zero total mass to the original mass map of Cl 0024+1654. These redistribute mass in certain areas of the mass map without affecting the observed images in any way. We show that the monopole degeneracy and the mass-sheet degeneracy together lie at the heart of the discrepancies between different gravitational lens reconstructions that can be found in the literature for a given object, and that many images/sources, with an overall high image density in the lens plane, are required to construct an accurate, high-resolution mass map based on strong lensing data.

  17. Non-parametric three-way mixed ANOVA with aligned rank tests.

    PubMed

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods.

  18. Two non-parametric methods for derivation of constraints from radiotherapy dose-histogram data

    NASA Astrophysics Data System (ADS)

    Ebert, M. A.; Gulliford, S. L.; Buettner, F.; Foo, K.; Haworth, A.; Kennedy, A.; Joseph, D. J.; Denham, J. W.

    2014-07-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose-histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization.
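
    As one concrete (assumed) reading of the ROC-based approach: compute the ROC curve of complication status against the free histogram parameter, and take the threshold maximizing Youden's J as the cut-point. The data below are invented.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    # v: free dose-histogram parameter per patient (e.g. % volume above some
    # dose level); event: 1 if the complication occurred. Toy values.
    rng = np.random.default_rng(3)
    v = np.concatenate([rng.normal(40, 10, 80), rng.normal(55, 10, 20)])
    event = np.concatenate([np.zeros(80, dtype=int), np.ones(20, dtype=int)])

    fpr, tpr, thresholds = roc_curve(event, v)
    cut = thresholds[np.argmax(tpr - fpr)]   # Youden's J picks the cut-point
    print(f"dose-volume cut-point ~ {cut:.1f}%")
    ```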

  19. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  20. Water quality analysis in rivers with non-parametric probability distributions and fuzzy inference systems: application to the Cauca River, Colombia.

    PubMed

    Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L

    2013-02-01

    The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status; the membership of water quality in the various output fuzzy sets or classes is provided with percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies.
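
    The stochastic layer of such a hybrid model is straightforward with SciPy: fit a non-parametric kernel density to the annual histogram of a monitoring variable and draw Monte Carlo samples from it, which would then feed the fuzzy inference system (omitted here). The readings below are invented illustrative values.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Monthly dissolved-oxygen readings for one year (toy values, mg/L).
    do_obs = np.array([5.1, 4.8, 5.6, 6.0, 5.2, 4.5, 4.9, 5.3, 5.8, 5.0, 4.7, 5.4])

    kde = gaussian_kde(do_obs)           # kernel-smoothed fit to the histogram
    samples = kde.resample(10_000)[0]    # Monte Carlo draws for the fuzzy model
    print(samples.mean(), np.percentile(samples, [5, 50, 95]))
    ```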

  1. Revisiting the Distance Duality Relation using a non-parametric regression method

    NASA Astrophysics Data System (ADS)

    Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha

    2016-07-01

    The interdependence of the luminosity distance, D_L, and the angular diameter distance, D_A, given by the distance duality relation (DDR) is very significant in observational cosmology. It is closely tied to the temperature-redshift relation of the Cosmic Microwave Background (CMB) radiation. Any deviation from η(z) ≡ D_L/[D_A (1+z)^2] = 1 indicates a possible emergence of new physics. Our aim in this work is to check the consistency of these relations using a non-parametric regression method, namely LOESS with SIMEX. This technique avoids dependency on the cosmological model and works with a minimal set of assumptions. Further, to analyze the efficiency of the methodology, we simulate a dataset of 20 points of η(z) data based on the phenomenological model η(z) = (1+z)^ε. The error on the simulated data points is obtained by using the temperature of the CMB radiation at various redshifts. For testing the distance duality relation, we use the JLA SNe Ia data for luminosity distances, while the angular diameter distances are obtained from radio galaxy datasets. Since the DDR is linked with the CMB temperature-redshift relation, we also use the CMB temperature data to reconstruct η(z). It is important to note that with the CMB data we are able to study the evolution of the DDR up to a very high redshift, z = 2.418. In this analysis, we find no evidence of deviation from η = 1 within the 1σ region over the entire redshift range used (0 < z <= 2.418).
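
    A LOESS reconstruction of η(z) can be sketched with statsmodels; the points below follow the phenomenological form (1+z)^ε with invented noise, and the SIMEX treatment of measurement errors is omitted.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Toy eta(z) sample on the paper's redshift range (values are assumptions).
    rng = np.random.default_rng(4)
    z = np.sort(rng.uniform(0.0, 2.4, 100))
    eta = (1 + z) ** 0.05 + 0.02 * rng.standard_normal(z.size)

    fit = lowess(eta, z, frac=0.5)   # returns columns: sorted z, smoothed eta(z)
    print(fit[:3])
    ```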

  2. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    PubMed Central

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
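
    The reshuffling idea is compact to sketch: estimate the even/odd-beat amplitude difference, then compare it with the distribution of the same statistic over random permutations of the beat order, which destroy any true every-other-beat pattern. This toy version uses a plain averaging estimator and synthetic beats, not the authors' multi-lead implementation.

    ```python
    import numpy as np

    def twa_significance(beats, n_surr=2000, seed=0):
        """One-sided empirical p-value for the observed alternans magnitude
        against a reshuffled-surrogate distribution."""
        rng = np.random.default_rng(seed)
        alt_mag = lambda x: abs(x[::2].mean() - x[1::2].mean())
        observed = alt_mag(beats)
        surrogates = np.array([alt_mag(rng.permutation(beats))
                               for _ in range(n_surr)])
        return observed, (surrogates >= observed).mean()

    # Synthetic T-wave amplitudes: 25 uV alternans on 50 uV noise (assumption).
    rng = np.random.default_rng(1)
    beats = 25 * (-1.0) ** np.arange(128) + 50 * rng.standard_normal(128)
    print(twa_significance(beats))
    ```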

  3. Parametric vs. non-parametric daily weather generator: validation and comparison

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin

    2016-04-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. The stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, the precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series
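
    For concreteness, the parametric precipitation core of an M&Rfi-style generator (first-order Markov occurrence plus Gamma-distributed wet-day amounts) can be sketched as follows; the transition probabilities and Gamma parameters are invented calibration values, and the AR(1) model for the non-precipitation variables is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    p01, p11 = 0.25, 0.65     # P(wet | dry), P(wet | wet) -- assumed calibration
    shape, scale = 0.8, 6.0   # Gamma parameters of wet-day amounts (mm), assumed

    days, wet = 365, False
    precip = np.zeros(days)
    for t in range(days):
        wet = rng.random() < (p11 if wet else p01)   # first-order Markov chain
        if wet:
            precip[t] = rng.gamma(shape, scale)      # wet-day amount
    print(round(precip.sum(), 1), round((precip > 0).mean(), 2))
    ```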

  4. Non-parametric bootstrapping method for measuring the temporal discrimination threshold for movement disorders

    NASA Astrophysics Data System (ADS)

    Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.

    2015-08-01

    Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
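
    A sketch of the fit-plus-bootstrap step: fit a cumulative Gaussian to the proportion of "asynchronous" responses versus inter-stimulus interval, then bootstrap the fit for 95% confidence intervals on the PSE and JND. The response data are invented, and resampling stimulus levels with replacement is one simple bootstrap scheme, not necessarily the authors'.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(isi, pse, jnd):
        """P('asynchronous') as a cumulative Gaussian of the ISI."""
        return norm.cdf(isi, loc=pse, scale=jnd)

    isi = np.arange(0, 121, 15)                      # ISIs in ms (assumed design)
    p_async = np.array([0.02, 0.05, 0.10, 0.35, 0.60, 0.80, 0.92, 0.97, 0.99])

    popt, _ = curve_fit(psychometric, isi, p_async, p0=[60.0, 20.0])

    rng = np.random.default_rng(6)
    boot = []
    for _ in range(1000):                            # non-parametric bootstrap
        idx = rng.integers(0, isi.size, isi.size)
        try:
            b, _ = curve_fit(psychometric, isi[idx], p_async[idx], p0=popt)
            boot.append(b)
        except RuntimeError:                         # skip non-converging resamples
            continue
    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    print(popt, lo, hi)
    ```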

  5. Does sunspot numbers cause global temperatures? A reconsideration using non-parametric causality tests

    NASA Astrophysics Data System (ADS)

    Hassani, Hossein; Huang, Xu; Gupta, Rangan; Ghodsi, Mansi

    2016-10-01

    In a recent paper, Gupta et al. (2015) analyzed whether sunspot numbers cause global temperatures based on monthly data covering the period 1880:1-2013:9. The authors find that the standard time-domain Granger causality test fails to reject the null hypothesis that sunspot numbers do not cause global temperatures for both the full sample and the sub-samples, namely 1880:1-1936:2, 1936:3-1986:11 and 1986:12-2013:9 (identified based on tests of structural breaks). However, a frequency-domain causality test detects predictability for the full sample at short (2-2.6 months) cycle lengths, but not for the sub-samples. But since full-sample causality cannot be relied upon due to structural breaks, Gupta et al. (2015) conclude that the evidence of causality running from sunspot numbers to global temperatures is weak and inconclusive. Given the importance of the issue of global warming, our current paper aims to revisit the question of whether sunspot numbers cause global temperatures, using the same data set and sub-samples used by Gupta et al. (2015), based on a non-parametric Singular Spectrum Analysis (SSA)-based causality test. Based on this test, we show, however, that sunspot numbers have predictive ability for global temperatures in the three sub-samples, over and above the full sample. Thus, generally speaking, our non-parametric SSA-based causality test outperformed both the time-domain and frequency-domain causality tests and highlighted that sunspot numbers have always been important in predicting global temperatures.

  6. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods.

    PubMed

    Johnson, Heather O; Gupta, Satish C; Vecchia, Aldo V; Zvomuya, Francis

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO(3)-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO(3)-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region.
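
    A bare-bones version of the Seasonal Kendall statistic is easy to write down; ties and the serial-correlation correction, which the SEAKEN implementation includes, are omitted in this sketch.

    ```python
    import numpy as np
    from scipy.stats import norm

    def seasonal_kendall(values, seasons):
        """Sum Mann-Kendall S statistics within each season (e.g. month), so
        seasonality does not masquerade as a trend; return (S, two-sided p)
        from the large-sample normal approximation."""
        s_total, var_total = 0.0, 0.0
        for season in np.unique(seasons):
            x = values[seasons == season]
            n = x.size
            s_total += sum(np.sign(x[j] - x[i])
                           for i in range(n) for j in range(i + 1, n))
            var_total += n * (n - 1) * (2 * n + 5) / 18.0
        z = (s_total - np.sign(s_total)) / np.sqrt(var_total)  # continuity corr.
        return s_total, 2 * norm.sf(abs(z))
    ```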

  7. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods

    USGS Publications Warehouse

    Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region.

  8. Computational Modeling of Mitochondrial Function

    PubMed Central

    Cortassa, Sonia; Aon, Miguel A.

    2012-01-01

    The advent of techniques with the ability to scan massive changes in cellular makeup (genomics, proteomics, etc.) has revealed the compelling need for analytical methods to interpret and make sense of those changes. Computational models built on sound physico-chemical mechanistic basis are unavoidable at the time of integrating, interpreting, and simulating high-throughput experimental data. Another powerful role of computational models is predicting new behavior provided they are adequately validated. Mitochondrial energy transduction has been traditionally studied with thermodynamic models. More recently, kinetic or thermo-kinetic models have been proposed, leading the path toward an understanding of the control and regulation of mitochondrial energy metabolism and its interaction with cytoplasmic and other compartments. In this work, we outline the methods, step-by-step, that should be followed to build a computational model of mitochondrial energetics in isolation or integrated to a network of cellular processes. Depending on the question addressed by the modeler, the methodology explained herein can be applied with different levels of detail, from the mitochondrial energy producing machinery in a network of cellular processes to the dynamics of a single enzyme during its catalytic cycle. PMID:22057575

  9. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
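
    Extracting an HC5 from a kernel-density SSD can be sketched as follows; the toxicity values are invented, and the default Scott bandwidth stands in for the paper's optimal bandwidth selection.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Acute toxicity endpoints (e.g. LC50, ug/L) for a set of species (toy data).
    lc50 = np.array([12., 35., 48., 75., 110., 160., 240., 390., 620., 980.])
    log_tox = np.log10(lc50)

    kde = gaussian_kde(log_tox)                   # non-parametric SSD
    grid = np.linspace(log_tox.min() - 1, log_tox.max() + 1, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]                                # numerical CDF of the SSD
    hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]  # hazardous conc. for 5% of species
    print(f"HC5 ~ {hc5:.1f} ug/L")
    ```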

  10. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems.

  11. Parametric and non-parametric species delimitation methods result in the recognition of two new Neotropical woody bamboo species.

    PubMed

    Ruiz-Sanchez, Eduardo

    2015-12-01

    The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea, using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis both supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to date divergences in Otatea, placing the origin of the speciation events between the Late Miocene and the Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified the two populations of O. acuminata from Chiapas and Hidalgo as two separate evolutionary lineages, and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata.

  12. [Non-Parametric Analysis of Radiation Risks of Mortality among Chernobyl Clean-Up Workers].

    PubMed

    Gorsky, A I; Maksioutov, M A; Tumanov, K A; Shchukina, N V; Chekin, S Yu; Ivanov, V K

    2016-01-01

    Analysis of the relationship between dose and mortality from cancer and circulation diseases in the cohort of Chernobyl clean-up workers, based on the data from the National Radiation and Epidemiological Registry, was performed. Medical and dosimetry information on the clean-up workers, males, who received radiation doses from April 26, 1986 to April 26, 1987, accumulated from 1992 to 2012, was used for the analysis. The total size of the cohort was 42929 people; 12731 deaths were registered in the cohort, among them 1893 deaths from solid cancers and 5230 deaths from circulation diseases. The average age of the workers was 39 years in 1992, and the mean dose was 164 mGy. The dose-effect relationship was estimated with the use of non-parametric survival analysis with regard to competing risks of mortality. The risks were estimated in 6 dose groups of similar size (1-70, 70-130, 130-190, 190-210, 210-230 and 230-1000 mGy). The group "1-70 mGy" was used as the control. The estimated dose-effect relationship for cancers and circulation diseases is described approximately by a linear model; the coefficient of determination (the proportion of variability explained by the linear model) was 23-25% for cancers and 2-13% for circulation diseases. The slope coefficient of the dose-effect relationship normalized to 1 Gy for the risk ratio in the linear model was 0.47 (95% CI: -0.77, 1.71) for cancers and 0.22 (95% CI: -0.58, 1.02) for circulation diseases. The risk coefficient (slope coefficient of excess mortality at a dose of 1 Gy) was 1.94 (95% CI: -3.10, 7.00) × 10^-2 for solid cancers and 0.67 (95% CI: -9.61, 11.00) × 10^-2 for circulation diseases. 137 deaths from radiation-induced cancers and 47 deaths from circulation diseases were registered during the follow-up period. PMID:27534064

  13. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    PubMed

    Singh, Sanjeet

    2016-08-01

    Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, in India, for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure-technically (managerially) efficient. It has been found that for some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets along with the resource savings and output gains are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, amounting to $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance
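
    A minimal input-oriented, constant-returns (CCR) DEA solver fits in a few lines with scipy.optimize.linprog; adding the convexity constraint sum(lambda) = 1 would give the variable-returns scores used for pure technical efficiency. The units and figures below are toy values, not the MGNREGA data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        """Input-oriented CCR efficiency per DMU: minimize theta subject to
        X @ lam <= theta * x_o, Y @ lam >= y_o, lam >= 0.
        X: (inputs x DMUs), Y: (outputs x DMUs)."""
        m, n = X.shape
        s = Y.shape[0]
        scores = []
        for o in range(n):
            c = np.r_[1.0, np.zeros(n)]                # objective: theta
            A_in = np.hstack([-X[:, [o]], X])          # X lam - theta x_o <= 0
            A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y lam <= -y_o
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, o]],
                          bounds=[(None, None)] + [(0, None)] * n)
            scores.append(res.fun)
        return np.array(scores)

    # Toy example: 2 expenditure inputs, 1 employment output, 4 "states".
    X = np.array([[4., 6., 9., 5.], [3., 2., 5., 6.]])
    Y = np.array([[8., 9., 10., 7.]])
    print(dea_ccr_input(X, Y).round(3))   # efficient units score 1.0
    ```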

  14. [Non-Parametric Analysis of Radiation Risks of Mortality among Chernobyl Clean-Up Workers].

    PubMed

    Gorsky, A I; Maksioutov, M A; Tumanov, K A; Shchukina, N V; Chekin, S Yu; Ivanov, V K

    2016-01-01

    Analysis of the relationship between dose and mortality from cancer and circulation diseases in the cohort of Chernobyl clean-up workers, based on the data from the National Radiation and Epidemiological Registry, was performed. Medical and dosimetry information on the clean-up workers, males, who received radiation doses from April 26, 1986 to April 26, 1987, accumulated from 1992 to 2012, was used for the analysis. The total size of the cohort was 42929 people; 12731 deaths were registered in the cohort, among them 1893 deaths from solid cancers and 5230 deaths from circulation diseases. The average age of the workers was 39 years in 1992, and the mean dose was 164 mGy. The dose-effect relationship was estimated with the use of non-parametric survival analysis with regard to competing risks of mortality. The risks were estimated in 6 dose groups of similar size (1-70, 70-130, 130-190, 190-210, 210-230 and 230-1000 mGy). The group "1-70 mGy" was used as the control. The estimated dose-effect relationship for cancers and circulation diseases is described approximately by a linear model; the coefficient of determination (the proportion of variability explained by the linear model) was 23-25% for cancers and 2-13% for circulation diseases. The slope coefficient of the dose-effect relationship normalized to 1 Gy for the risk ratio in the linear model was 0.47 (95% CI: -0.77, 1.71) for cancers and 0.22 (95% CI: -0.58, 1.02) for circulation diseases. The risk coefficient (slope coefficient of excess mortality at a dose of 1 Gy) was 1.94 (95% CI: -3.10, 7.00) × 10^-2 for solid cancers and 0.67 (95% CI: -9.61, 11.00) × 10^-2 for circulation diseases. 137 deaths from radiation-induced cancers and 47 deaths from circulation diseases were registered during the follow-up period.

  15. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    PubMed

    Singh, Sanjeet

    2016-08-01

    Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme, in India, for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure-technically (managerially) efficient. It has been found that for some states it is necessary to alter the scheme size to perform at par with the best-performing states. For inefficient states, optimal input and output targets along with the resource savings and output gains are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, amounting to $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best-performing states, inefficient states on average need to enhance

  16. Validation of two (parametric vs non-parametric) daily weather generators

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Skalak, P.

    2015-12-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. The stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, the precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series

  17. On computation of Hough functions

    NASA Astrophysics Data System (ADS)

    Wang, Houjun; Boyd, John P.; Akmaev, Rashid A.

    2016-04-01

    Hough functions are the eigenfunctions of the Laplace tidal equation governing fluid motion on a rotating sphere with a resting basic state. Several numerical methods have been used in the past. In this paper, we compare two of those methods: normalized associated Legendre polynomial expansion and Chebyshev collocation. Neither method is widely used, but both have advantages over the commonly used unnormalized associated Legendre polynomial expansion method. Comparable results are obtained using both methods. For the first method we note some details on the numerical implementation. The Chebyshev collocation method was first used for the Laplace tidal problem by Boyd (1976) and is relatively easy to use. A compact MATLAB code is provided for this method. We also illustrate the importance and effect of including a parity factor in Chebyshev polynomial expansions for modes with odd zonal wave numbers.
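
    To illustrate the Chebyshev collocation idea on a problem far simpler than the Laplace tidal equation, the sketch below solves the eigenvalue problem u'' = lambda*u with u(+-1) = 0 using the standard Chebyshev differentiation matrix; the construction follows the well-known Trefethen recipe and is not the paper's MATLAB code.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on N+1 Chebyshev points (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # diagonal from negative row sums
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]             # impose u(+-1) = 0 by dropping boundaries
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
exact = -((np.arange(1, 6) * np.pi / 2.0) ** 2)   # exact: -(k*pi/2)^2
print("computed:", lam[:5])
print("exact:   ", exact)
```

    For an even/odd eigenfunction one can restrict the expansion to Chebyshev polynomials of one parity, which is the "parity factor" device the abstract refers to for odd zonal wave numbers.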

  18. Non-parametric PCM to ADM conversion [Pulse Code to Adaptive Delta Modulation]

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
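
    The paper's converter is a specific all-digital, statistics-independent design; purely as an illustration of the ADM side, the sketch below encodes PCM samples with a one-bit adaptive delta modulator using Jayant-style step adaptation (all rates and constants are made-up assumptions).

```python
import numpy as np

def adm_encode(pcm, step0=1.0, k=1.5, step_min=0.5, step_max=256.0):
    """Encode PCM samples as 1-bit ADM with Jayant-style step adaptation:
    the step grows by k after two equal bits and shrinks by 1/k otherwise."""
    bits = np.zeros(len(pcm), dtype=np.int8)
    track = np.zeros(len(pcm))          # the decoder's staircase estimate
    est, step, prev = 0.0, step0, 1
    for i, s in enumerate(pcm):
        bit = 1 if s >= est else -1
        step = min(step * k, step_max) if bit == prev else max(step / k, step_min)
        est += bit * step
        bits[i], prev, track[i] = bit, bit, est
    return bits, track

# Heavily oversampled sinusoid as the PCM input (hypothetical rates).
t = np.arange(512)
pcm = 100.0 * np.sin(2 * np.pi * t / 64)
bits, track = adm_encode(pcm)
snr = 10 * np.log10(np.mean(pcm**2) / np.mean((pcm - track)**2))
print(f"reconstruction SNR: {snr:.1f} dB")
```

    Sweeping the input amplitude and the ADM-rate-to-Nyquist-rate ratio in such a simulation is how one reproduces the family of SNR performance curves the abstract describes.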

  19. Non-parametric estimation of seasonal variations in GNSS-derived time series

    NASA Astrophysics Data System (ADS)

    Gruszczynska, Marta; Bogusz, Janusz; Klos, Anna

    2015-04-01

    The seasonal variations in GNSS station's position may arise from geophysical excitations, thermal changes combined together with hydrodynamics or various errors which, when superimposed, cause the seasonal oscillations not exactly of real geodynamical origin, but still have to be included in time series modelling. These variations with different periods included in frequency band from Chandler up to quarter-annual ones will all affect the reliability of permanent station's velocity, which in turn, strictly influences the quality of kinematic reference frames. As shown before by a number of authors, the annual (dominant) sine curve, has the amplitude and phase that both change in time due to the different reasons. In this research we focused on the determination of annual changes in GNSS-derived time series of North, East and Up components. We used here the daily position changes from PPP (Precise Point Positioning) solution obtained by JPL (Jet Propulsion Laboratory) processed in the GIPSY-OASIS software. We analyzed here more than 140 globally distributed IGS stations with the minimum data length of 3 years. The longest time series were even 17 years long (1996-2014). Each of the topocentric time series (North, East and Up) was divided into years (from January to December), then the observations gathered in the same days of year were stacked and the weighted medians obtained for all of them such that each of time series was represented by matrix of size 365xn where n is the data length. In this way we obtained the median annual signal for each of analyzed stations that was then decomposed into different frequency bands using wavelet decomposition with Meyer wavelet. We assumed here 7 levels of decomposition, with annual curve as the last approximation of it. The signal approximations made us to obtain the seasonal peaks that prevail in North, East and Up data for globally distributed stations. The analysis of annual curves, by means of non-parametric estimation

  20. Fruits and fruit products. Non-parametric methods for detection of adulteration of concentrated orange juice for manufacturing.

    PubMed

    Schatzki, T F; Vandercook, C E

    1978-07-01

    The composition of organic constituents (total sugars, reactive phenols, total amino acids, arginine, and gamma-aminobutyric acid) has been measured in a large (360 samples) selection of concentrated orange juice for manufacturing and orange pulp wash in the U.S. trade. The detection of adulteration by the addition of sugar, reducing sugars, and citric acid has been investigated using non-parametric nearest neighbor classification techniques in the 4-space of log ratios of the compositions. The results show that such detection is possible, with equal type 1 and type 2 error rates of 10% for 20% adulteration, if at least 7 samples are taken. The assumptions of such samplings are discussed.
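
    A sketch of the core idea, nearest-neighbour classification in the 4-space of log composition ratios, on synthetic constituent profiles; the feature construction and k = 7 follow the abstract loosely, while every number below is invented.

```python
import numpy as np

def log_ratio_features(comp):
    """4 log ratios of the 5 measured constituents, relative to the first."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[:, 1:] / comp[:, [0]])

def knn_classify(train_X, train_y, query, k=7):
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(1)
# Hypothetical constituent profiles: authentic juice vs sugar-adulterated.
authentic = rng.normal([10, 2.0, 5.0, 1.0, 0.5], 0.3, size=(50, 5))
adulterated = authentic.copy()
adulterated[:, 0] *= 1.2          # sugar addition shifts all the log ratios
X = log_ratio_features(np.vstack([authentic, adulterated]))
y = np.repeat([0, 1], 50)
query = log_ratio_features(adulterated[:1] * rng.normal(1.0, 0.02, 5))
print("classified as:", ["authentic", "adulterated"][knn_classify(X, y, query[0])])
```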

  1. Approximate Bayesian computation with functional statistics.

    PubMed

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
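
    A toy sketch of ABC rejection with a weighted distance between functional statistics; the inverse-variance weights stand in for the paper's optimized weights, and the exponential-decay "functional statistic" is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(0.1, 5.0, 40)       # points at which the functional statistic
                                    # (e.g. a correlation curve) is evaluated

def simulate(theta):
    """Toy functional statistic: noisy exponential decay with rate theta."""
    return np.exp(-theta * r) + rng.normal(0, 0.02, r.size)

obs = simulate(1.3)                 # pretend this is the observed curve

# Pilot simulations give per-component variances used to weight the distance.
pilot = np.array([simulate(t) for t in rng.uniform(0.5, 2.5, 200)])
w = 1.0 / pilot.var(axis=0)

def wdist(a, b):
    return np.sqrt(np.sum(w * (a - b) ** 2))

# ABC rejection: keep the parameters whose simulated curves fall closest.
prior = rng.uniform(0.5, 2.5, 20000)
d = np.array([wdist(simulate(t), obs) for t in prior])
posterior = prior[d <= np.quantile(d, 0.01)]
print(f"posterior mean: {posterior.mean():.2f} (true value 1.3)")
```

    Down-weighting the high-variance components of the curve is what keeps a high-dimensional, internally correlated functional statistic from drowning out the informative parts of the comparison.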

  2. Determination of drug absorption rate in time-variant disposition by direct deconvolution using beta clearance correction and end-constrained non-parametric regression.

    PubMed

    Neelakantan, S; Veng-Pedersen, P

    2005-11-01

    A novel numerical deconvolution method is presented that enables the estimation of drug absorption rates under time-variant disposition conditions. The method involves two components: (1) a disposition decomposition-recomposition (DDR) enabling exact changes in the unit impulse response (UIR) to be constructed from iteratively determined, centrally based clearance changes; and (2) a non-parametric, end-constrained cubic spline (ECS) input response function estimated by cross-validation. The proposed DDR-ECS method compensates for disposition changes between the test and the reference administrations by using a "beta" clearance correction based on DDR analysis. The representation of the input response by the ECS method takes the complex absorption process into consideration and also ensures physiologically realistic approximations of the response. The stability of the new method against noisy data was evaluated by comprehensive simulations that considered different UIRs, various input functions, clearance changes and a novel scaling of the input function that includes the "flip-flop" absorption phenomenon. The simulated input response was also analysed by two other methods and all three methods were compared for their relative performance. The DDR-ECS method provides better estimation of the input profile under significant clearance changes but tends to overestimate the input when there are only small changes in clearance.

  3. Dynamics and computation in functional shifts

    NASA Astrophysics Data System (ADS)

    Namikawa, Jun; Hashimoto, Takashi

    2004-07-01

    We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.

  4. Computer Games Functioning as Motivation Stimulants

    ERIC Educational Resources Information Center

    Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh

    2011-01-01

    Numerous scholars have suggested that computer games can function as influential motivation stimulants for English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…

  5. Computationally efficient method to construct scar functions

    NASA Astrophysics Data System (ADS)

    Revuelta, F.; Vergini, E. G.; Benito, R. M.; Borondo, F.

    2012-02-01

    The performance of a simple method [E. L. Sibert III, E. Vergini, R. M. Benito, and F. Borondo, New J. Phys. 10, 053016 (2008)] to efficiently compute scar functions along unstable periodic orbits with complicated trajectories in configuration space is discussed, using a classically chaotic two-dimensional quartic oscillator as an illustration.

  6. A Simple 2D Non-Parametric Resampling Statistical Approach to Assess Confidence in Species Identification in DNA Barcoding—An Alternative to Likelihood and Bayesian Approaches

    PubMed Central

    Jin, Qian; He, Li-Jun; Zhang, Ai-Bing

    2012-01-01

    In the recent worldwide campaign for the global biodiversity inventory via DNA barcoding, a simple and easily used measure of confidence for assigning sequences to species has not been established so far, although the likelihood ratio test and the Bayesian approach have been proposed to address this issue from a statistical point of view. The TDR (Two Dimensional non-parametric Resampling) measure newly proposed in this study offers users a simple and easy approach to evaluate the confidence of species membership in DNA barcoding projects. We assessed the validity and robustness of the TDR approach using datasets simulated under coalescent models, and an empirical dataset, and found that the TDR measure is very robust in assessing species membership in DNA barcoding. In contrast to the likelihood ratio test and the Bayesian approach, the TDR method stands out due to its simplicity in both concepts and calculations, with little in the way of restrictive population genetic assumptions. To implement this approach we have developed a computer program package (TDR1.0beta) freely available from ftp://202.204.209.200/education/video/TDR1.0beta.rar. PMID:23239988

  7. Functional quantum computing: An optical approach

    NASA Astrophysics Data System (ADS)

    Rambo, Timothy M.; Altepeter, Joseph B.; Kumar, Prem; D'Ariano, G. Mauro

    2016-05-01

    Recent theoretical investigations treat quantum computations as functions, quantum processes which operate on other quantum processes, rather than circuits. Much attention has been given to the N -switch function which takes N black-box quantum operators as input, coherently permutes their ordering, and applies the result to a target quantum state. This is something which cannot be equivalently done using a quantum circuit. Here, we propose an all-optical system design which implements coherent operator permutation for an arbitrary number of input operators.

  8. Analysis of Ventricular Function by Computed Tomography

    PubMed Central

    Rizvi, Asim; Deaño, Roderick C.; Bachman, Daniel P.; Xiong, Guanglei; Min, James K.; Truong, Quynh A.

    2014-01-01

    The assessment of ventricular function, cardiac chamber dimensions and ventricular mass is fundamental for clinical diagnosis, risk assessment, therapeutic decisions, and prognosis in patients with cardiac disease. Although cardiac computed tomography (CT) is a noninvasive imaging technique often used for the assessment of coronary artery disease, it can also be utilized to obtain important data about left and right ventricular function and morphology. In this review, we will discuss the clinical indications for the use of cardiac CT for ventricular analysis, review the evidence on the assessment of ventricular function compared to existing imaging modalities such as cardiac MRI and echocardiography, provide a typical cardiac CT protocol for image acquisition and post-processing for ventricular analysis, and provide step-by-step instructions to acquire multiplanar cardiac views for ventricular assessment from the standard axial, coronal, and sagittal planes. Furthermore, both qualitative and quantitative assessments of ventricular function as well as sample reporting are detailed. PMID:25576407

  9. New Computer Simulations of Macular Neural Functioning

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.

    1994-01-01

    We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.

  10. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to the neural network or fuzzy system-based adaptive ILC schemes and the classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second even uses a single iterative variable provided that some bound information on system dynamics is known. As a result, the memory space in real-time ILC implementations is greatly reduced.

  11. Incorporation of Unreliable Information Into Photogrammetric Reconstruction for Recovery of Scale Using Non-Parametric Belief Propagation

    NASA Astrophysics Data System (ADS)

    Hollick, J.; Helmholz, P.; Belton, D.

    2016-06-01

    The creation of large photogrammetric models often encounters several difficulties with regard to geometric accuracy, scale and geolocation, especially when control points are not used. Geometric accuracy can be a problem when encountering repetitive features; scale and geolocation can be challenging in GNSS-denied or difficult-to-reach environments. Despite these challenges, scale and location are often highly desirable, even if only approximate, especially when the error bounds are known. Using non-parametric belief propagation we propose a method of fusing different sensor types to allow robust creation of scaled models without control points. Using this technique we scale models using only the sensor data, sometimes to within 4% of their actual size, even in the presence of poor GNSS coverage.

  12. Incorporating outlier detection and replacement into a non-parametric framework for movement and distortion correction of diffusion MR images.

    PubMed

    Andersson, Jesper L R; Graham, Mark S; Zsoldos, Enikő; Sotiropoulos, Stamatios N

    2016-11-01

    Despite its great potential in studying brain anatomy and structure, diffusion magnetic resonance imaging (dMRI) is marred by artefacts more than any other commonly used MRI technique. In this paper we present a non-parametric framework for detecting and correcting dMRI outliers (signal loss) caused by subject motion. Signal loss (dropout) affecting a whole slice, or a large connected region of a slice, is frequently observed in diffusion weighted images, leading to a set of unusable measurements. This is caused by bulk (subject or physiological) motion during the diffusion encoding part of the imaging sequence. We suggest a method to detect slices affected by signal loss and replace them by a non-parametric prediction, in order to minimise their impact on subsequent analysis. The outlier detection and replacement, as well as correction of other dMRI distortions (susceptibility-induced distortions, eddy currents (EC) and subject motion) are performed within a single framework, allowing the use of an integrated approach for distortion correction. Highly realistic simulations have been used to evaluate the method with respect to its ability to detect outliers (types 1 and 2 errors), the impact of outliers on retrospective correction of movement and distortion and the impact on estimation of commonly used diffusion tensor metrics, such as fractional anisotropy (FA) and mean diffusivity (MD). Data from a large imaging project studying older adults (the Whitehall Imaging sub-study) was used to demonstrate the utility of the method when applied to datasets with severe subject movement. The results indicate high sensitivity and specificity for detecting outliers and that their deleterious effects on FA and MD can be almost completely corrected. PMID:27393418

  13. ON THE ROBUSTNESS OF z = 0-1 GALAXY SIZE MEASUREMENTS THROUGH MODEL AND NON-PARAMETRIC FITS

    SciTech Connect

    Mosleh, Moein; Franx, Marijn; Williams, Rik J.

    2013-11-10

    We present the size-stellar mass relations of nearby (z = 0.01-0.02) Sloan Digital Sky Survey galaxies, for samples selected by color, morphology, Sérsic index n, and specific star formation rate. Several commonly employed size measurement techniques are used, including single Sérsic fits, two-component Sérsic models, and a non-parametric method. Through simple simulations, we show that the non-parametric and two-component Sérsic methods provide the most robust effective radius measurements, while those based on single Sérsic profiles are often overestimates, especially for massive red/early-type galaxies. Using our robust sizes, we show for all sub-samples that the mass-size relations are shallow at low stellar masses and steepen above ∼3-4 × 10^10 M_☉. The mass-size relations for galaxies classified as late-type, low-n, and star-forming are consistent with each other, while blue galaxies follow a somewhat steeper relation. The mass-size relations of early-type, high-n, red, and quiescent galaxies all agree with each other but are somewhat steeper at the high-mass end than previous results. To test potential systematics at high redshift, we artificially redshifted our sample (including surface brightness dimming and degraded resolution) to z = 1 and re-fit the galaxies using single Sérsic profiles. The sizes of these galaxies before and after redshifting are consistent and we conclude that systematic effects in sizes and the size-mass relation at z ∼ 1 are negligible. Interestingly, since the poorer physical resolution at high redshift washes out bright galaxy substructures, single Sérsic fitting appears to provide more reliable and unbiased effective radius measurements at high z than for nearby, well-resolved galaxies.

  14. Computer network defense through radial wave functions

    NASA Astrophysics Data System (ADS)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on the use of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it with non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here "trapdoor" denotes both the form of cipher and the implied relationship to logic bombs.

  15. Computational functions in biochemical reaction networks.

    PubMed Central

    Arkin, A; Ross, J

    1994-01-01

    In prior work we demonstrated the implementation of logic gates, sequential computers (universal Turing machines), and parallel computers by means of the kinetics of chemical reaction mechanisms. In the present article we develop this subject further by first investigating the computational properties of several enzymatic (single and multiple) reaction mechanisms: we show their steady states are analogous to either Boolean or fuzzy logic gates. Nearly perfect digital function is obtained only in the regime in which the enzymes are saturated with their substrates. With these enzymatic gates, we construct combinational chemical networks that execute a given truth-table. The dynamic range of a network's output is strongly affected by "input/output matching" conditions among the internal gate elements. We find a simple mechanism, similar to the interconversion of fructose-6-phosphate between its two bisphosphate forms (fructose-1,6-bisphosphate and fructose-2,6-bisphosphate), that functions analogously to an AND gate. When the simple model is supplanted with one in which the enzyme rate laws are derived from experimental data, the steady state of the mechanism functions as an asymmetric fuzzy aggregation operator with properties akin to a fuzzy AND gate. The qualitative behavior of the mechanism does not change when situated within a large model of glycolysis/gluconeogenesis and the TCA cycle. The mechanism, in this case, switches the pathway's mode from glycolysis to gluconeogenesis in response to chemical signals of low blood glucose (cAMP) and abundant fuel for the TCA cycle (acetyl coenzyme A). PMID:7948674

  16. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous results in downstream applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions.

  17. Non-parametric, self-organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    PubMed

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self-Organizing Maps (SOMs) model the spatial aspect of the problem and Markov models capture its temporal counterpart. The incorporation of adjacency, both in training and classification, endows the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. PMID:23137923

  18. A probabilistic, non-parametric framework for inter-modality label fusion.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-01-01

    Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to segment. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed using variational expectation maximization (VEM) in a Bayesian framework. The method was evaluated with a dataset of eight proton density weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm. PMID:24505808

  19. A non-parametric statistical analysis in the measurement of outdoor gamma exposure to the residents around Trombay.

    PubMed

    Kumar, Ajay; Singhal, R K; Preetha, J; Rupali, K; Joshi, V M; Hegde, A G; Kushwaha, H S

    2007-01-01

    During this study, non-parametric statistical methods were used to validate the measured gamma dose rate around Trombay against the calculated one. A portable digital gamma spectrometry dose rate system (target fieldSPEC) was used for in situ measurement of the external gamma (γ) dose rate (measured), with a range of 1 nSv/h to 10 Sv/h. The activity concentrations of U-238, Th-232, K-40 and Cs-137 in the soil and their respective external dose-conversion factors (nSv/h per Bq/kg) were used to evaluate the gamma dose rate (calculated). Non-parametric statistical tools, namely the box-and-whisker plot, Spearman's rank correlation coefficient (ρ), the Wilcoxon/Mann-Whitney test and the χ² distribution test, were applied for validation. The randomness or discrete behaviour of the measured and calculated dose rates was obvious from the box-and-whisker plot, as their means and medians are not equal. The interquartile ranges (Q3-Q1), which describe the dispersion of the measured and calculated dose rates, were also evaluated and found to be 10 and 16 microSv/y, respectively. The linear association between the ranks of the two dose rates was established using Spearman's rank correlation, which showed a coefficient of R = +0.90 with intercept +1.9, whereas Pearson's correlation showed a coefficient of R = +0.93 with intercept -25.6. The Wilcoxon/Mann-Whitney test shows that the medians of the calculated and measured dose rates are significantly different under the null hypothesis; the measured dose rate was brought to a normal distribution by applying Z-statistics. The χ² value was calculated to be 284.95, much greater than the critical value χ²(0.05) = 43.77 at 30 degrees of freedom, leading to the conclusion that there is a highly significant difference between the measured and calculated dose rates at the 5% significance level. PMID:17545658
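
    The same battery of non-parametric tests is available in scipy.stats; a sketch on synthetic measured/calculated dose rates follows (the rescaling before the χ² test is an assumption needed because scipy requires the observed and expected totals to match).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
measured = rng.normal(100, 10, 30)              # synthetic dose rates (nSv/h)
calculated = measured + rng.normal(8, 12, 30)   # calculated from soil activity

rho, p_rho = stats.spearmanr(measured, calculated)
u, p_u = stats.mannwhitneyu(measured, calculated, alternative="two-sided")
# Chi-square comparison of measured against calculated as "expected" values,
# after rescaling so both sum to the same total (a chisquare requirement).
expected = calculated * measured.sum() / calculated.sum()
chi2, p_chi2 = stats.chisquare(measured, expected)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
print(f"Mann-Whitney U = {u:.0f} (p = {p_u:.3g})")
print(f"chi-square = {chi2:.1f} (p = {p_chi2:.3g})")
```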

  20. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right-angle or sighting distances. The probability of observing a point given its right-angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right-angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right-angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
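
    Within this framework the resulting estimator takes a standard worked form (written here from the g(0) = 1 condition above as a textbook statement of the line-transect estimator, not an equation quoted verbatim from the paper):

```latex
\hat{D} = \frac{n \, \hat{f}(0)}{2L},
\qquad
f(y) = \frac{g(y)}{\int_0^{w} g(u) \, du}, \qquad g(0) = 1,
```

    where n is the number of objects detected, L the total transect length, w the truncation distance, f the probability density of right-angle distances, and f̂(0) its nonparametric estimate at zero distance.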

  1. Non-parametric Single View Reconstruction of Curved Objects Using Convex Optimization

    NASA Astrophysics Data System (ADS)

    Oswald, Martin R.; Töppe, Eno; Kolev, Kalin; Cremers, Daniel

    We propose a convex optimization framework delivering intuitive and reasonable 3D meshes from a single photograph. For a given input image, the user can quickly obtain a segmentation of the object in question. Our algorithm then automatically generates an admissible closed surface of arbitrary topology without the requirement of tedious user input. Moreover we provide a tool by which the user is able to interactively modify the result afterwards through parameters and simple operations in a 2D image space. The algorithm targets a limited but relevant class of real world objects. The object silhouette and the additional user input enter a functional which can be optimized globally in a few seconds using recently developed convex relaxation techniques parallelized on state-of-the-art graphics hardware.

  2. A non-parametric method for measuring the local dark matter density

    NASA Astrophysics Data System (ADS)

    Silverwood, H.; Sivertsson, S.; Steger, P.; Read, J. I.; Bertone, G.

    2016-07-01

    We present a new method for determining the local dark matter density using kinematic data for a population of tracer stars. The Jeans equation in the z-direction is integrated to yield an equation that gives the velocity dispersion as a function of the total mass density, tracer density, and the `tilt' term that describes the coupling of vertical and radial motions. We then fit a dark matter mass profile to tracer density and velocity dispersion data to derive credible regions on the vertical dark matter density profile. Our method avoids numerical differentiation, leading to lower numerical noise, and is able to deal with the tilt term while remaining one dimensional. In this study we present the method and perform initial tests on idealized mock data. We also demonstrate the importance of dealing with the tilt term for tracers that sample ≳1 kpc above the disc plane. If ignored, this results in a systematic underestimation of the dark matter density.

  3. Investigation of the dynamic stress–strain response of compressible polymeric foam using a non-parametric analysis

    DOE PAGES

    Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; Sutton, Michael A.

    2016-01-25

    Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.

  5. Transit Timing Observations from Kepler. II. Confirmation of Two Multiplanet Systems via a Non-parametric Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Borucki, William J.; Bryson, Stephen T.; Buchhave, Lars A.; Burke, Christopher J.; Caldwell, Douglas A.; Charbonneau, David; Clarke, Bruce D.; Cochran, William D.; Désert, Jean-Michel; Endl, Michael; Everett, Mark E.; Fischer, Debra A.; Gautier, Thomas N., III; Gilliland, Ron L.; Jenkins, Jon M.; Haas, Michael R.; Horch, Elliott; Howell, Steve B.; Ibrahim, Khadeejah A.; Isaacson, Howard; Koch, David G.; Latham, David W.; Li, Jie; Lucas, Philip; MacQueen, Phillip J.; Marcy, Geoffrey W.; McCauliff, Sean; Mullally, Fergal R.; Quinn, Samuel N.; Quintana, Elisa; Shporer, Avi; Still, Martin; Tenenbaum, Peter; Thompson, Susan E.; Torres, Guillermo; Twicken, Joseph D.; Wohler, Bill; Kepler Science Team

    2012-05-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  7. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation.

    PubMed

    Häme, Yrjö; Pollari, Mika

    2012-01-01

    A novel liver tumor segmentation method for CT images is presented. The aim of this work was to reduce the manual labor and time required in the treatment planning of radiofrequency ablation (RFA) by reliably providing accurate, automated tumor segmentations. The developed method is semi-automatic, requiring only minimal user interaction. The segmentation is based on non-parametric intensity distribution estimation and a hidden Markov measure field model, with application of a spherical shape prior. A post-processing operation is also presented to remove overflow into adjacent tissue. In addition to the conventional approach of using a single image as input data, an approach using images from multiple contrast phases was developed. The accuracy of the method was validated with two sets of patient data and artificially generated samples. The patient data included preoperative RFA images and a public data set from the "3D Liver Tumor Segmentation Challenge 2008". The method achieved very high accuracy with the RFA data, and outperformed other methods evaluated on the public data set, achieving an average overlap error of 30.3%, an improvement of 2.3 percentage points over the previously best-performing semi-automatic method. The average volume difference was 23.5%, and the average, RMS, and maximum surface distance errors were 1.87, 2.43, and 8.09 mm, respectively. The method produced good results even for tumors with very low contrast and ambiguous borders, and the performance remained high with noisy image data.

  8. Non-parametric linear regression of discrete Fourier transform convoluted chromatographic peak responses under non-ideal conditions of internal standard method.

    PubMed

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Fahmy, Ossama T; Ragab, Marwa A A

    2010-11-15

    This manuscript discusses the application of chemometrics to the handling of HPLC response data using the internal standard method (ISM). This was performed on a model mixture containing terbutaline sulphate, guaiphenesin, bromhexine HCl, sodium benzoate and propylparaben as an internal standard. Derivative treatment of the chromatographic response data of analyte and internal standard was followed by convolution of the resulting derivative curves using 8-point sin(x_i) polynomials (discrete Fourier functions). The response of each analyte signal, its corresponding derivative and convoluted derivative data were divided by those of the internal standard to obtain the corresponding ratio data. This was found to be beneficial in eliminating different types of interference. It was successfully applied to handle some of the most common chromatographic problems and non-ideal conditions, namely overlapping chromatographic peaks and very low analyte concentrations. For example, in the case of overlapping peaks, the correlation coefficient for sodium benzoate improved from 0.9975 with the conventional peak area method to 0.9998 with the first derivative under Fourier functions method. A significant improvement in the precision and accuracy of the determination of synthetic mixtures and dosage forms in non-ideal cases was also achieved. For example, in the case of overlapping peaks, the mean recovery% and RSD% for guaiphenesin improved from 91.57 and 9.83 with the conventional peak area method to 100.04 and 0.78 with the first derivative under Fourier functions method. This work also compares the application of Theil's method, a non-parametric regression method, in handling the response ratio data, with the least squares parametric regression method, which is considered the de facto standard method used for regression. Theil's method was found to be superior to the method of least squares as it assumes that errors could occur in both x- and y-directions and
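
    Theil's estimator is available directly in scipy; a sketch comparing it with least squares on synthetic response-ratio data containing one gross outlier (the concentrations and ratios below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
conc = np.linspace(5, 50, 10)                     # analyte concentration
ratio = 0.042 * conc + rng.normal(0, 0.01, 10)    # analyte/IS response ratio
ratio[7] += 0.15                                  # one gross outlier

# Theil's (Theil-Sen) slope: median of all pairwise slopes, outlier-resistant.
slope, intercept, lo, hi = stats.theilslopes(ratio, conc)
ls = stats.linregress(conc, ratio)
print(f"Theil: slope = {slope:.4f}  (95% CI {lo:.4f} to {hi:.4f})")
print(f"LSQ  : slope = {ls.slope:.4f}")
```

    Because the Theil slope is a median of pairwise slopes rather than a variance-minimizing fit, the single bad point barely moves it, while the least squares slope is visibly dragged.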

  9. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measure for reducing the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in newly forecasted water levels can be quantified. In addition to quantifying the uncertainty, communicating it is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. It also needs to be adapted to the audience: the majority of the general public is not interested in in-depth information on the uncertainty of the predicted water levels, but only in the likelihood that certain alarm levels will be exceeded. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on it to undertake appropriate flood mitigation actions. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with advantages and disadvantages for a specific audience. A useful method of communicating the uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the
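
    A sketch of the error-matrix idea: bin archived residuals by forecasted level and lead time, store percentiles per class, then interpolate for a new forecast. All class edges, percentiles, variable names and data below are illustrative assumptions, not the operational system's configuration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(4)

# Historical archive: forecasted level (m), lead time (h), residual (m).
level = rng.uniform(0.0, 6.0, 5000)
lead = rng.integers(1, 49, 5000).astype(float)
resid = rng.normal(0.05 * lead / 48, 0.1 + 0.05 * level)  # error grows with both

level_edges = np.linspace(0, 6, 7)        # 6 water-level classes
lead_edges = np.linspace(0, 48, 5)        # 4 lead-time classes
pcts = [5, 50, 95]

# The "3D error matrix": level class x lead-time class x percentile.
E = np.zeros((6, 4, len(pcts)))
for i in range(6):
    for j in range(4):
        m = ((level_edges[i] <= level) & (level < level_edges[i + 1]) &
             (lead_edges[j] <= lead) & (lead < lead_edges[j + 1]))
        E[i, j] = np.percentile(resid[m], pcts)

# Interpolate on class centres to get uncertainty bounds for a new forecast.
centres = ((level_edges[:-1] + level_edges[1:]) / 2,
           (lead_edges[:-1] + lead_edges[1:]) / 2)
itp = RegularGridInterpolator(centres, E, bounds_error=False, fill_value=None)
lo, med, hi = itp([[3.2, 24.0]])[0]       # forecast: 3.2 m at 24 h lead time
print(f"median bias {med:+.2f} m, 90% band [{lo:+.2f}, {hi:+.2f}] m")
```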

  10. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed for the identification of boiler transfer functions using frequency response data. The method uses the frequency response data to obtain satisfactory transfer functions for both high and low vapor exit quality data.

  11. Pair correlation function integrals: Computation and use

    NASA Astrophysics Data System (ADS)

    Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens

    2011-08-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
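
    The central quantity is the total correlation function integral (Kirkwood-Buff integral); a sketch computing it by simple quadrature on a model g(r), together with the pure-fluid compressibility relation it feeds into. The model g(r) is a stand-in for simulation output, not data from the paper.

```python
import numpy as np

# Model radial distribution function: damped oscillations decaying to 1
# (a smooth stand-in for simulation data extended to large r).
r = np.linspace(0.01, 12.0, 2400)          # reduced units
g = 1 + np.exp(-(r - 1.1)) * np.sin(7 * (r - 1.1)) / (r + 0.5)
g[r < 1.0] = 0.0                           # hard-core exclusion region

# Total correlation function integral: G = 4*pi * int (g(r) - 1) r^2 dr
dr = r[1] - r[0]
G = 4 * np.pi * np.sum((g - 1) * r**2) * dr

rho = 0.8                                   # reduced number density
# Pure-fluid Kirkwood-Buff relation: rho * kB * T * kappa_T = 1 + rho * G
print(f"G = {G:.3f},  rho*kB*T*kappa_T = {1 + rho * G:.3f}")
```

    The paper's point is precisely that truncating this integral at the simulation box size is unreliable when g(r) still has structure there, which is why the long-distance extension of g(r) matters.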

  12. The Computer and Its Functions; How to Communicate with the Computer.

    ERIC Educational Resources Information Center

    Ward, Peggy M.

    A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…

  13. Basic mathematical function libraries for scientific computation

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.

  14. Computing Partial Transposes and Related Entanglement Functions

    NASA Astrophysics Data System (ADS)

    Maziero, Jonas

    2016-10-01

    The partial transpose (PT) is an important function for entanglement testing and quantification and also for the study of geometrical aspects of the quantum state space. In this article, considering general bipartite and multipartite discrete systems, explicit formulas ready for the numerical implementation of the PT and of related entanglement functions are presented and the Fortran code produced for that purpose is described. What is more, we obtain an analytical expression for the Hilbert-Schmidt entanglement of two-qudit systems and for the associated closest separable state. In contrast to previous works on this matter, we only use the properties of the PT, not applying Lagrange multipliers.
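
    The article describes Fortran code; purely as an illustration of the same index-swap construction, here is a NumPy sketch of the bipartite partial transpose and the negativity it enables:

```python
import numpy as np

def partial_transpose(rho, dims, sys=1):
    """Partial transpose of a bipartite density matrix rho with local
    dimensions dims = (dA, dB), transposing subsystem `sys` (0 or 1)."""
    dA, dB = dims
    T = rho.reshape(dA, dB, dA, dB)     # indices: (i, j | k, l) = <ij|rho|kl>
    if sys == 0:
        T = T.transpose(2, 1, 0, 3)     # swap the two A indices
    else:
        T = T.transpose(0, 3, 2, 1)     # swap the two B indices
    return T.reshape(dA * dB, dA * dB)

def negativity(rho, dims):
    """Entanglement negativity: sum of |negative eigenvalues| of rho^T_B."""
    ev = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return -ev[ev < 0].sum()

# Two-qubit Bell state (|00> + |11>)/sqrt(2): maximally entangled.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)
print(f"negativity = {negativity(rho, (2, 2)):.3f}")   # 0.5 for a Bell state
```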

  15. Evaluation of climate change on flood event by using parametric T-test and non-parametric Mann-Kendall test in Barcelonnette basin, France

    NASA Astrophysics Data System (ADS)

    Ramesh, Azadeh; Glade, Thomas; Malet, Jean-Philippe

    2010-09-01

    The existence of a trend in hydrological and meteorological time series is detected by statistical tests. Trend analysis of hydrological and meteorological series is important to consider because of the effects of global climate change. Parametric or non-parametric statistical tests can be used to decide whether there is a statistically significant trend. In this paper, a homogeneity analysis was first performed using the non-parametric Bartlett test. Trend detection was then carried out using the non-parametric Mann-Kendall test. The null hypothesis in the Mann-Kendall test is that the data are independent and randomly ordered. The result of the Mann-Kendall test was compared with the parametric T-test for detecting the existence of a trend. To this end, the significance of trends was analyzed on monthly data of the Ubaye River in the Barcelonnette watershed in southeastern France, at an elevation of 1132 m (3717 ft), for the period 1928-2009, using the non-parametric Mann-Kendall test and the parametric T-test for river discharge and for meteorological data. The results show that a rainfall event does not necessarily have an immediate impact on discharge. Visual inspection suggests that the correlation between observations made at the same time point is not very strong. In the trend tests, the p-value of the discharge is slightly smaller than that of the precipitation, but in both cases there appears to be no statistically significant trend. In statistical hypothesis testing, a test statistic is a numerical summary of a set of data that reduces the data to one or a small number of values that can be used to perform a hypothesis test, which determines whether there is a significant trend or not. Negative test statistics in the Mann-Kendall test for both the precipitation and discharge data indicate downward trends. In conclusion, extreme flood events in recent years depend strongly on: 1) the location of the city: It is
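
    A minimal sketch of the Mann-Kendall statistic with the usual normal approximation, omitting the tie correction a production implementation would include; the discharge series below is synthetic.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction).
    Returns the S statistic, the standardized Z, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum over all pairs i < j of sign(x[j] - x[i])
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)      # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * stats.norm.sf(abs(z))
    return s, z, p

rng = np.random.default_rng(6)
discharge = 20 + rng.normal(0, 3, 80) - 0.03 * np.arange(80)  # weak downtrend
s, z, p = mann_kendall(discharge)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3f}")
```

    A negative S (and Z) signals a downward trend, matching the sign convention used in the abstract.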

  16. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics.

    PubMed

    Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that the family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM is much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which do not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of the false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (modulated, unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set, and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results did not depend on the level of smoothing or modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used
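    The permutation logic that frees a single-case comparison from normality and equal-variance assumptions can be sketched at a single voxel as follows (a toy illustration, not the VBM pipeline used in the study): the patient label is reassigned to every subject in turn, and the observed statistic is ranked within that null distribution.

    ```python
    import numpy as np

    def single_case_perm_test(patient, controls):
        """Exact permutation p-value for one case against a control group."""
        pooled = np.append(np.asarray(controls, dtype=float), patient)
        n = len(pooled)

        def stat(i):  # standardized deviation of subject i from the rest
            rest = np.delete(pooled, i)
            return abs(pooled[i] - rest.mean()) / rest.std(ddof=1)

        observed = stat(n - 1)                   # true patient is the last entry
        null = np.array([stat(i) for i in range(n)])
        return (null >= observed).mean()         # p >= 1/(n_controls + 1)
    ```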

  17. A non-parametric method for automatic determination of P-wave and S-wave arrival times: application to local micro earthquakes

    NASA Astrophysics Data System (ADS)

    Rawles, Christopher; Thurber, Clifford

    2015-08-01

    We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data set
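    A minimal sketch of the scoring step follows. Two assumptions are made explicit here because the abstract leaves them loose: similarity is taken as the inverse of the squared Euclidean distance (so that windows closer to positive examples score higher), and "scaled absolute values" is read as normalizing each window to unit peak amplitude.

    ```python
    import numpy as np

    def pick_phase(trace, positives, negatives, eps=1e-12):
        """Return the centre sample of the window maximizing the pos/neg score."""
        w = len(positives[0])

        def prep(x):                       # scaled absolute amplitudes
            a = np.abs(np.asarray(x, dtype=float))
            return a / (a.max() + eps)

        def sim(v, refs):                  # inverse squared Euclidean distance
            return sum(1.0 / (np.sum((v - prep(r)) ** 2) + eps) for r in refs)

        scores = []
        for start in range(len(trace) - w):
            v = prep(trace[start:start + w])
            scores.append(sim(v, positives) / sim(v, negatives))
        return int(np.argmax(scores)) + w // 2
    ```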

  18. Really computing nonperturbative real time correlation functions

    NASA Astrophysics Data System (ADS)

    Bödeker, Dietrich; McLerran, Larry; Smilga, Andrei

    1995-10-01

    It has been argued by Grigoriev and Rubakov that one can simulate real time processes involving baryon number nonconservation at high temperature using real time evolution of classical equations and summing over initial conditions with a classical thermal weight. It is known that such a naive algorithm is plagued by ultraviolet divergences. In quantum theory the divergences are regularized, but the corresponding graphs involve contributions from the hard momentum region, and the new scale ~gT also comes into play. We propose a modified algorithm which involves solving the classical equations of motion for the effective hard thermal loop Hamiltonian with an ultraviolet cutoff μ >> gT and integrating over initial conditions with a proper thermal weight. Such an algorithm should provide a determination of the infrared behavior of the real time correlation function determining the baryon violation rate. Hopefully, the results obtained with this modified algorithm will be cutoff independent.

  19. Basis Function Sampling for Material Property Computations

    NASA Astrophysics Data System (ADS)

    Whitmer, Jonathan K.; Chiu, Chi-Cheng; Joshi, Abhijeet A.; de Pablo, Juan J.

    2014-03-01

    Wang-Landau sampling, and the associated class of flat histogram simulation methods, have been particularly successful for free energy calculations in a wide array of physical systems. Practically, the convergence of these calculations to a target free energy surface is hampered by reliance on parameters which are unknown a priori. We derive and implement a method based on orthogonal (basis) functions which is fast, parameter-free, and geometrically robust. An important feature of this method is its ability to achieve arbitrary levels of description for the free energy. It is thus ideally suited to in silico measurement of elastic moduli and other quantities related to free energy perturbations. We demonstrate the utility of such applications by applying our method to calculation of the Frank elastic constants of the Lebwohl-Lasher model.
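    For context, the conventional Wang-Landau update that the basis-function method generalizes is sketched below on a toy system whose states are their own energy levels; the modification-factor schedule and the flatness threshold are exactly the a priori unknown parameters the paper seeks to eliminate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 20
    log_g = np.zeros(N + 1)                  # running log density-of-states estimate
    hist = np.zeros(N + 1)
    f, state = 1.0, 0                        # log modification factor, initial state

    while f > 1e-6:
        proposal = state + rng.choice([-1, 1])
        if 0 <= proposal <= N:
            # Accept with min(1, g(old)/g(new)) to flatten the energy histogram.
            if np.log(rng.random()) < log_g[state] - log_g[proposal]:
                state = proposal
        log_g[state] += f                    # penalize revisited levels
        hist[state] += 1
        if hist.min() > 0.8 * hist.mean():   # histogram "flat enough"
            hist[:] = 0.0
            f /= 2.0                         # tighten the refinement stage
    ```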

  20. Computer-Intensive Algebra and Students' Conceptual Knowledge of Functions.

    ERIC Educational Resources Information Center

    O'Callaghan, Brian R.

    1998-01-01

    Describes a research project that examined the effects of the Computer-Intensive Algebra (CIA) and traditional algebra curricula on students' (N=802) understanding of the function concept. Results indicate that CIA students achieved a better understanding of functions and were better at the components of modeling, interpreting, and translating.…

  1. Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient

    NASA Astrophysics Data System (ADS)

    Mari, A.; Eisert, J.

    2012-12-01

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  2. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to make the locus of points generated by a candidate transfer function resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, examples are given, along with results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts.
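    The general recipe, fitting a candidate transfer function to measured frequency response points by nonlinear least squares, can be sketched with synthetic data as follows (a plain residual is used here, not the paper's penalized performance measure):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, w, h):
        """Mismatch between a first-order model G(s) = k/(tau*s + 1) and data."""
        k, tau = params
        g = k / (tau * 1j * w + 1.0)
        return np.concatenate([(g - h).real, (g - h).imag])

    w = np.logspace(-1, 2, 50)               # rad/s
    h = 2.0 / (0.5j * w + 1.0)               # synthetic "measurements"
    fit = least_squares(residuals, x0=[1.0, 1.0], args=(w, h))
    print(fit.x)                             # recovers approximately [2.0, 0.5]
    ```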

  3. Computational approaches for rational design of proteins with novel functionalities.

    PubMed

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded in engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes.

  4. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  5. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    PubMed

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.

  6. Quantum Computing Without Wavefunctions: Time-Dependent Density Functional Theory for Universal Quantum Computation

    PubMed Central

    Tempel, David G.; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483

  7. Computational design of proteins with novel structure and functions

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Lu-Hua, Lai

    2016-01-01

    Computational design of proteins is a relatively new field in which scientists search the enormous sequence space for sequences that can fold into a desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, novel enzymes, or biotherapeutics. These approaches not only provide valuable information for understanding sequence-structure-function relations in proteins, but also hold promise for applications in protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize recent progress in the field, including de novo protein design, enzyme design, and the design of protein-protein interactions. Challenges and future prospects of this field are also discussed. Project supported by the National Basic Research Program of China (Grant No. 2015CB910300), the National High Technology Research and Development Program of China (Grant No. 2012AA020308), and the National Natural Science Foundation of China (Grant No. 11021463).

  8. Robust Computation of Morse-Smale Complexes of Bilinear Functions

    SciTech Connect

    Norgard, G; Bremer, P T

    2010-11-30

    The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
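    The fact underlying the cell-level analysis can be stated directly: a bilinear function f = a + bx + cy + dxy has gradient (b + dy, c + dx), so it has at most one isolated interior critical point, and the Hessian [[0, d], [d, 0]] makes that point a saddle whenever d != 0. A small sketch of the arithmetic (an illustration of the mathematics, not the paper's combinatorial algorithm):

    ```python
    def bilinear_saddle(f00, f10, f01, f11):
        """Interior saddle of the bilinear interpolant on the unit cell, if any."""
        b = f10 - f00                       # coefficient of x
        c = f01 - f00                       # coefficient of y
        d = f11 - f10 - f01 + f00           # coefficient of x*y
        if d == 0.0:
            return None                     # gradient never vanishes in the interior
        x, y = -c / d, -b / d               # solve (b + d*y, c + d*x) = (0, 0)
        return (x, y) if (0.0 < x < 1.0 and 0.0 < y < 1.0) else None
    ```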

  9. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.

  10. Computer program for calculating and fitting thermodynamic functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1992-01-01

    A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.
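    As an illustration of one such term, the vibrational contribution of a single harmonic mode to the ideal-gas heat capacity and enthalpy follows directly from the partition function (a sketch of the rigid-rotator harmonic-oscillator baseline that the program's methods correct; the example mode frequency is hypothetical):

    ```python
    import numpy as np

    R = 8.314462618                          # J/(mol K)

    def vib_contribution(theta, T):
        """Heat capacity and enthalpy of one vibrational mode, theta = h*nu/k."""
        u = theta / T
        cp = R * u**2 * np.exp(u) / (np.exp(u) - 1.0) ** 2
        h = R * theta / (np.exp(u) - 1.0)    # enthalpy above the zero point
        return cp, h

    print(vib_contribution(3000.0, 1000.0))  # hypothetical 3000 K mode at 1000 K
    ```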

  11. Computing the hadronic vacuum polarization function by analytic continuation

    DOE PAGES

    Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; Jansen, Karl; Petschlies, Marcus; Renner, Dru B.

    2013-08-29

    We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta, bridging the spacelike and timelike regions. We provide two independent demonstrations that this method leads to the desired hadronic vacuum polarization function in Minkowski spacetime. Using the example of the leading-order QCD correction to the muon anomalous magnetic moment, we show that this approach can provide a valuable alternative method for calculations of physical quantities where the hadronic vacuum polarization function enters.

  12. Environment parameters and basic functions for floating-point computation

    NASA Technical Reports Server (NTRS)

    Brown, W. S.; Feldman, S. I.

    1978-01-01

    A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
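    Python's math module exposes analyze/synthesize/scale primitives in the same spirit, which makes the idea easy to demonstrate (these are modern analogues, not the basic functions proposed in the paper):

    ```python
    import math

    x = 0.15625
    m, e = math.frexp(x)           # analyze: x == m * 2**e with 0.5 <= |m| < 1
    assert (m, e) == (0.625, -2)
    y = math.ldexp(m, e + 10)      # synthesize/scale: multiply by radix**10
    assert y == x * 1024           # exact, barring underflow/overflow
    ```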

  13. Assessment of cardiac function: magnetic resonance and computed tomography.

    PubMed

    Greenberg, S B

    2000-10-01

    A complete cardiac study requires both anatomic and physiologic evaluation. Cardiac function can be evaluated noninvasively by magnetic resonance imaging (MRI) or ultrafast computed tomography (CT). MRI allows for evaluation of cardiac function by cine gradient echo imaging of the ventricles and by flow analysis across cardiac valves and the great vessels. Cine gradient echo imaging is useful for evaluation of cardiac wall motion, ventricular volumes, and ventricular mass. Flow analysis allows for measurement of velocity and flow during the cardiac cycle, which reflects cardiac function. Ultrafast CT allows for measurement of cardiac indices similar to those provided by gradient echo imaging of the ventricles.

  14. A Survey of Computational Intelligence Techniques in Protein Function Prediction

    PubMed Central

    Tiwari, Arvind Kumar; Srivastava, Rajeev

    2014-01-01

    With the advancement of high-throughput microarray technologies, there has been massive growth in the number of proteins whose functions are unknown. Protein function prediction is among the most challenging problems in bioinformatics. Traditionally, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, across a wide range of applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression data sets. The paper also summarizes the results obtained by many researchers who have addressed these problems using computational intelligence techniques with appropriate data sets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction. PMID:25574395

  15. Integrated command, control, communications and computation system functional architecture

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.; Gilbert, L. E.

    1981-01-01

    The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which identify the elements that originate or generate data and the elements that use the data. The interfaces also provide a description of the data and of the data utilization and access techniques.

  16. Structure, function, and behaviour of computational models in systems biology

    PubMed Central

    2013-01-01

    Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research

  17. Non-parametric representation and prediction of single- and multi-shell diffusion-weighted MRI data using Gaussian processes.

    PubMed

    Andersson, Jesper L R; Sotiropoulos, Stamatios N

    2015-11-15

    Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of "Kriging". We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell.
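    Once a covariance function has been chosen, the prediction step is the standard Gaussian-process conditional mean; a minimal sketch is given below, with the covariance matrices taken as inputs since the spherical, across-shell covariance itself is the paper's modelling contribution:

    ```python
    import numpy as np

    def gp_predict(K, k_star, y, sigma2):
        """GP mean prediction: k_star @ (K + sigma2*I)^-1 @ y.

        K: covariance between training diffusion directions (n x n);
        k_star: covariance between prediction and training directions (n,);
        y: measured signal; sigma2: noise variance.
        """
        alpha = np.linalg.solve(K + sigma2 * np.eye(len(y)), y)
        return k_star @ alpha
    ```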

  18. Non-parametric representation and prediction of single- and multi-shell diffusion-weighted MRI data using Gaussian processes

    PubMed Central

    Andersson, Jesper L.R.; Sotiropoulos, Stamatios N.

    2015-01-01

    Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of “Kriging”. We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell. PMID:26236030

  19. Computational design of receptor and sensor proteins with novel functions

    NASA Astrophysics Data System (ADS)

    Looger, Loren L.; Dwyer, Mary A.; Smith, James J.; Hellinga, Homme W.

    2003-05-01

    The formation of complexes between proteins and ligands is fundamental to biological processes at the molecular level. Manipulation of molecular recognition between ligands and proteins is therefore important for basic biological studies and has many biotechnological applications, including the construction of enzymes, biosensors, genetic circuits, signal transduction pathways and chiral separations. The systematic manipulation of binding sites remains a major challenge. Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, L-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or L-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally. The biological and biosensing activities of the designed receptors illustrate potential applications of computational design.

  20. Computer Code For Calculation Of The Mutual Coherence Function

    NASA Astrophysics Data System (ADS)

    Bugnolo, Dimitri S.

    1986-05-01

    We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.

  1. Computations involving differential operators and their actions on functions

    NASA Technical Reports Server (NTRS)

    Crouch, Peter E.; Grossman, Robert; Larson, Richard

    1991-01-01

    The algorithms derived by Grossman and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two directions: they are generalized so that they apply to differential operators on groups, and data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both generalizations are needed for applications.

  2. Efficient quantum algorithm for computing n-time correlation functions.

    PubMed

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the framework of linear response theory.

  3. Computational studies of the purine-functionalized graphene sheets

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmoud; Yousefi, Mohammad

    2012-10-01

    We performed a computational study of graphene sheets (S) functionalized by adenine (A) and guanine (G) purine nucleobases. To this end, we examined the functionalization of the armchair and zigzag tips of the S model by each of the A and G purines. The results indicate that the optimized properties of the hybrid structures differ depending on the tip that is functionalized and the purine nucleobase used. Moreover, the atomic-level properties of the structures were probed by evaluating quadrupole coupling constants (CQ) for the atoms of the optimized structures. The notable trend in the CQ parameters is that the changes in atomic properties are much more significant for functionalization of the zigzag tip by the G nucleobase, in agreement with the results for the optimized properties.

  4. Computational aspects of the continuum quaternionic wave functions for hydrogen

    SciTech Connect

    Morais, J.

    2014-10-15

    Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take values either in the reduced or the full quaternions (identified, respectively, with R^3 and R^4). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs is explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.

  5. A hybrid method for the parallel computation of Green's functions

    SciTech Connect

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-08-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.

  6. A radial basis function network approach for the computation of inverse continuous time variant functions.

    PubMed

    Mayorga, René V; Carrera, Jonathan

    2007-06-01

    This paper presents an efficient approach for the fast computation of inverse continuous time-variant functions through the proper use of Radial Basis Function Networks (RBFNs). The approach implements RBFNs to compute inverse continuous time-variant functions via an overall damped least squares solution that includes a novel null-space vector for singularity prevention. The singularity-avoidance null-space vector is derived from a sufficiency condition for singularity prevention, which leads to the establishment of certain characterizing matrices and an associated performance index.
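    The ingredients can be sketched in a bare-bones form, Gaussian RBF features solved by damped least squares; note that the plain damping term below merely stands in for the paper's null-space singularity-prevention vector, whose construction differs:

    ```python
    import numpy as np

    def rbf_features(x, centers, width):
        """Gaussian radial basis features for 1-D inputs."""
        return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width**2))

    def fit_rbfn_damped(x, y, centers, width, damping=1e-3):
        """Damped least squares: w = (Phi^T Phi + lambda*I)^-1 Phi^T y."""
        phi = rbf_features(x, centers, width)
        A = phi.T @ phi + damping * np.eye(len(centers))
        return np.linalg.solve(A, phi.T @ y)

    x = np.linspace(0.0, 1.0, 50)
    w = fit_rbfn_damped(x, np.sin(2 * np.pi * x), np.linspace(0, 1, 10), width=0.1)
    ```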

  7. Segmentation of densely populated cell nuclei from confocal image stacks using 3D non-parametric shape priors.

    PubMed

    Ong, Lee-Ling S; Wang, Mengmeng; Dauwels, Justin; Asada, H Harry

    2014-01-01

    An approach to jointly estimate the 3D shapes and poses of stained nuclei from confocal microscopy images, using statistical prior information, is presented. Extracting nuclei boundaries from our experimental images of cell migration is challenging due to clustered nuclei and variations in their shapes. The problem is formulated as maximum a posteriori estimation. By incorporating statistical prior models of 3D nuclei shapes into level set functions, the active contour evolution applied to the images is constrained. A 3D alignment algorithm is developed to build the training databases and to match contours obtained from the images to them. To address the issue of aligning the model over multiple clustered nuclei, a watershed-like technique is used to detect and separate clustered regions prior to active contour evolution. Our method is tested on confocal images of endothelial cells in microfluidic devices and compared with existing approaches.

  8. Computational approaches for inferring the functions of intrinsically disordered proteins

    PubMed Central

    Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter

    2015-01-01

    Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226

  9. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133

  10. SOPIE: an R package for the non-parametric estimation of the off-pulse interval of a pulsar light curve

    NASA Astrophysics Data System (ADS)

    Schutte, Willem D.; Swanepoel, Jan W. H.

    2016-09-01

    An automated tool to derive the off-pulse interval of a light curve originating from a pulsar is needed. First, we derive a powerful and accurate non-parametric sequential estimation technique to estimate the off-pulse interval of a pulsar light curve in an objective manner. This is in contrast to the subjective `eye-ball' (visual) technique, and complementary to the Bayesian Block method which is currently used in the literature. The second aim involves the development of a statistical package, necessary for the implementation of our new estimation technique. We develop a statistical procedure to estimate the off-pulse interval in the presence of noise. It is based on a sequential application of p-values obtained from goodness-of-fit tests for uniformity. The Kolmogorov-Smirnov, Cramér-von Mises, Anderson-Darling and Rayleigh test statistics are applied. The details of the newly developed statistical package SOPIE (Sequential Off-Pulse Interval Estimation) are discussed. The developed estimation procedure is applied to simulated and real pulsar data. Finally, the SOPIE estimated off-pulse intervals of two pulsars are compared to the estimates obtained with the Bayesian Block method and yield very satisfactory results. We provide the code to implement the SOPIE package, which is publicly available at http://CRAN.R-project.org/package=SOPIE (Schutte).
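    The core operation, testing whether the folded pulse phases inside a candidate interval are uniformly distributed, can be sketched with one of the four statistics named above (the Kolmogorov-Smirnov case; the interval bounds and the phase folding are assumed inputs):

    ```python
    import numpy as np
    from scipy import stats

    def offpulse_pvalue(phases, lo, hi):
        """p-value that phases in [lo, hi) are uniform; large p = plausible off-pulse."""
        inside = phases[(phases >= lo) & (phases < hi)]
        rescaled = (inside - lo) / (hi - lo)      # map candidate interval to [0, 1)
        return stats.kstest(rescaled, "uniform").pvalue
    ```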

  11. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample.

  12. HOMOGENEOUS UGRIZ PHOTOMETRY FOR ACS VIRGO CLUSTER SURVEY GALAXIES: A NON-PARAMETRIC ANALYSIS FROM SDSS IMAGING

    SciTech Connect

    Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.

    2010-11-15

    We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ≈ 0.13 mag for the brightest galaxies, rising to ≈0.3 mag for galaxies at the faint end of our sample (B_T ≈ 16). The distribution of axial ratios of low-mass ('dwarf') galaxies bears a strong resemblance to the one observed for the higher-mass ('giant') galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.

  13. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results
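    A bare-bones analogue of the indicator idea is sketched below: for each threshold, the observed indicator is estimated linearly from the ensemble-member indicators, and evaluating the fit for a new forecast yields a discrete approximation of the ccdf. This omits the orthogonal transform of predictors and any monotonicity repair, so it illustrates the concept rather than the operational technique.

    ```python
    import numpy as np

    def indicator_ccdf(ens_train, obs_train, ens_new, thresholds, ridge=1e-6):
        """Approximate P(obs <= t | ensemble) at each threshold t.

        ens_train: (cases, members) past forecasts; obs_train: (cases,) observations;
        ens_new: (members,) real-time forecast to be post-processed.
        """
        ccdf = []
        for t in thresholds:
            X = np.column_stack([np.ones(len(obs_train)), ens_train <= t])
            y = (obs_train <= t).astype(float)
            w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
            x_new = np.concatenate([[1.0], ens_new <= t])
            ccdf.append(float(np.clip(x_new @ w, 0.0, 1.0)))
        return np.array(ccdf)
    ```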

  14. Characterizing Ipomopsis rubra (Polemoniaceae) germination under various thermal scenarios with non-parametric and semi-parametric statistical methods.

    PubMed

    Pérez, Hector E; Kettner, Keith

    2013-10-01

    Time-to-event analysis represents a collection of relatively new, flexible, and robust statistical techniques for investigating the incidence and timing of transitions from one discrete condition to another. Plant biology is replete with examples of such transitions occurring from the cellular to population levels. However, application of these statistical methods has been rare in botanical research. Here, we demonstrate the use of non- and semi-parametric time-to-event and categorical data analyses to address questions regarding seed to seedling transitions of Ipomopsis rubra propagules exposed to various doses of constant or simulated seasonal diel temperatures. Seeds were capable of germinating rapidly to >90 % at 15-25 or 22/11-29/19 °C. Optimum temperatures for germination occurred at 25 or 29/19 °C. Germination was inhibited and seed viability decreased at temperatures ≥30 or 33/24 °C. Kaplan-Meier estimates of survivor functions indicated highly significant differences in temporal germination patterns for seeds exposed to fluctuating or constant temperatures. Extended Cox regression models specified an inverse relationship between temperature and the hazard of germination. Moreover, temperature and the temperature × day interaction had significant effects on germination response. Comparisons to reference temperatures and linear contrasts suggest that summer temperatures (33/24 °C) play a significant role in differential germination responses. Similarly, simple and complex comparisons revealed that the effects of elevated temperatures predominate in terms of components of seed viability. In summary, the application of non- and semi-parametric analyses provides appropriate, powerful data analysis procedures to address various topics in seed biology and more widespread use is encouraged.
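    The Kaplan-Meier estimator behind the survivor functions above is simple to state: at each germination time, the survival probability is multiplied by one minus the fraction of at-risk seeds germinating at that time. A minimal sketch (not the authors' analysis code; censored seeds are those still ungerminated when monitoring ends):

    ```python
    import numpy as np

    def kaplan_meier(times, germinated):
        """Survivor curve S(t) for germination; germinated is 1=event, 0=censored."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(germinated, dtype=int)
        s, curve = 1.0, []
        for t in np.unique(times[events == 1]):
            at_risk = np.sum(times >= t)               # not yet germinated or censored
            d = np.sum((times == t) & (events == 1))   # germinations at time t
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        return curve
    ```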

  15. Green's Function Analysis of Periodic Structures in Computational Electromagnetics

    NASA Astrophysics Data System (ADS)

    Van Orden, Derek

    2011-12-01

    Periodic structures are used widely in electromagnetic devices, including filters, waveguiding structures, and antennas. Their electromagnetic properties may be analyzed computationally by solving an integral equation, in which an unknown equivalent current distribution in a single unit cell is convolved with a periodic Green's function that accounts for the system's boundary conditions. Fast computation of the periodic Green's function is therefore essential to achieve high accuracy solutions of complicated periodic structures, including analysis of modal wave propagation and scattering from external sources. This dissertation first presents alternative spectral representations of the periodic Green's function of the Helmholtz equation for cases of linear periodic systems in 2D and 3D free space and near planarly layered media. Although there exist multiple representations of the periodic Green's function, most are not efficient in the important case where the fields are observed near the array axis. We present spectral-spatial representations for rapid calculation of the periodic Green's functions for linear periodic arrays of current sources residing in free space as well as near a planarly layered medium. They are based on the integral expansion of the periodic Green's functions in terms of the spectral parameters transverse to the array axis. These schemes are important for the rapid computation of the interaction among unit cells of a periodic array, and, by extension, the complex dispersion relations of guided waves. Extensions of this approach to planar periodic structures are discussed. With these computation tools established, we study the traveling wave properties of linear resonant arrays placed near surfaces, and examine the coupling mechanisms that lead to radiation into guided waves supported by the surface. This behavior is especially important to understand the properties of periodic structures printed on dielectric substrates, such as periodic

  16. On the Hydrodynamic Function of Sharkskin: A Computational Investigation

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Sotiropoulos, Fotis

    2014-11-01

    Denticles (placoid scales) are small structures that cover the epidermis of some sharks. The hydrodynamic function of denticles is unclear. Because they resemble riblets, they have been thought to passively reduce skin friction, for which there is some experimental evidence. Others have experimentally shown that denticles increase skin friction and have hypothesized that denticles act as vortex generators to delay separation. To help clarify their function, we use high-resolution large eddy and direct numerical simulations, with an immersed boundary method, to simulate flow patterns past, and calculate the drag force on, Shortfin Mako denticles. Simulations are carried out for denticles placed in a canonical turbulent boundary layer as well as in the vicinity of a separation bubble. The computed results elucidate the three-dimensional structure of the flow around denticles and provide insights into the hydrodynamic function of sharkskin.

  17. Multiple von Neumann computers: an evolutionary approach to functional emergence.

    PubMed

    Suzuki, H

    1997-01-01

    A novel system composed of multiple von Neumann computers and an appropriate problem environment is proposed and simulated. Each computer has a memory to store its machine-instruction program; when a program is executed, a series of machine codes in the memory is sequentially decoded, leading to register operations in the central processing unit (CPU). By means of these operations, the computer can not only manipulate its general-purpose registers but also read and write the environmental database. Simulation is driven by genetic algorithms (GAs) performed on the population of program memories. Mutation and crossover create program diversity in the memory, and selection facilitates the reproduction of appropriate programs. Through these evolutionary operations, advantageous combinations of machine codes are created and fixed in the population one by one, and a higher function, which enables the computer to calculate an appropriate number from the environment, finally emerges in the program memory. In the latter half of the article, the performance of GAs on this system is studied. Under different sets of parameters, the evolutionary speed, determined by the time until the final program dominates the population, is examined and the conditions for faster evolution are clarified. At an intermediate mutation rate and an intermediate population size, crossover helps create novel advantageous sets of machine codes and evidently accelerates optimization by GAs.

  18. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  19. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  20. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  1. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  2. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  3. Complete RNA inverse folding: computational design of functional hammerhead ribozymes

    PubMed Central

    Dotu, Ivan; Garcia-Martin, Juan Antonio; Slinger, Betty L.; Mechery, Vinodh; Meyer, Michelle M.; Clote, Peter

    2014-01-01

    Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. PMID:25209235
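
    For contrast with the complete search performed by RNAiFold, the sketch below shows a naive stochastic local search for the inverse folding problem. It assumes the ViennaRNA Python bindings (the RNA module) are installed, and, unlike RNAiFold's constraint-programming approach, it offers no completeness guarantee; it is purely illustrative.

        import random
        import RNA   # ViennaRNA Python bindings (assumed available)

        def inverse_fold(target, max_steps=10000):
            # Hill-climbing on the number of positions where the candidate's
            # MFE structure disagrees with the target structure.
            n = len(target)
            seq = ''.join(random.choice('ACGU') for _ in range(n))
            def defect(s):
                structure, _ = RNA.fold(s)   # MFE structure of the candidate
                return sum(a != b for a, b in zip(structure, target))
            best = defect(seq)
            for _ in range(max_steps):
                i = random.randrange(n)
                cand = seq[:i] + random.choice('ACGU') + seq[i + 1:]
                d = defect(cand)
                if d <= best:                # accept non-worsening moves
                    seq, best = cand, d
                if best == 0:
                    break
            return seq, best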

  4. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and developing designs for molecules that perform protocellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential protocellular functions, such as transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
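
    The record's closing sentences describe molecular dynamics as the iterative solution of Newton's equations of motion. A generic velocity-Verlet step, the standard integrator for such simulations, can be sketched as follows; the harmonic-well example and all parameters are hypothetical and not the production setup used in the study.

        import numpy as np

        def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
            # Advance Newton's equations of motion iteratively: update
            # positions, recompute forces, then update velocities.
            f = force_fn(pos)
            for _ in range(n_steps):
                pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
                f_new = force_fn(pos)
                vel = vel + 0.5 * (f + f_new) / mass * dt
                f = f_new
            return pos, vel

        # Toy usage: one particle in a harmonic well (hypothetical units).
        pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                                   lambda x: -x, mass=1.0, dt=0.01,
                                   n_steps=1000)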

  5. Non-functioning adrenal adenomas discovered incidentally on computed tomography

    SciTech Connect

    Mitnick, J.S.; Bosniak, M.A.; Megibow, A.J.; Naidich, D.P.

    1983-08-01

    Eighteen patients with unilateral non-metastatic non-functioning adrenal masses were studied with computed tomography (CT). Pathological examination, where performed, revealed benign adrenal adenomas. The other patients were followed up with serial CT scans and showed no change in tumor size over a period of six months to three years. On the basis of these findings, the authors suggest certain criteria for a benign adrenal mass, including (a) diameter less than 5 cm, (b) smooth contour, (c) well-defined margin, and (d) no change in size on follow-up. Serial CT scanning can be used as an alternative to surgery in the management of many of these patients.

  6. Enhancing the Reliability of Spectral Correlation Function with Distributed Computing

    NASA Astrophysics Data System (ADS)

    Alfaqawi, M. I.; Chebil, J.; Habaebi, M. H.; Ramli, N.; Mohamad, H.

    2013-12-01

    Various random time series used in signal processing systems are cyclostationary, owing to sinusoidal carriers, pulse trains, periodic motion, or other physical phenomena. The cyclostationarity of a signal can be analysed using the spectral correlation function (SCF). Computing the SCF is highly complex, however, because it is a two-dimensional function and requires long observation times. The SCF can be computed by various methods, but two are used in practice: the FFT accumulation method (FAM) and the strip spectral correlation algorithm (SSCA). This paper shows the complexity and reliability benefits of distributing the workload of a single processor over several cooperating processors. It is found that as the required reliability of the SCF increases, the number of cooperating processors needed to reach half of the maximum complexity decreases.
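
    As a rough illustration of what the SCF measures, the sketch below averages cyclic periodograms over signal segments, in the spirit of the FAM; the windowing, the bin-valued cyclic frequency and the test signal are assumptions of this sketch, not the paper's implementation.

        import numpy as np

        def scf_estimate(x, alpha_bins, nfft=256):
            # Simplified averaged cyclic periodogram: correlate spectral
            # components offset by the cyclic frequency. alpha_bins is the
            # cyclic frequency expressed in (even) FFT bins, an assumption
            # of this sketch rather than the paper's parametrization.
            win = np.hanning(nfft)
            segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
            acc = np.zeros(nfft, dtype=complex)
            for s in segs:
                X = np.fft.fft(s * win)
                acc += (np.roll(X, -alpha_bins // 2) *
                        np.conj(np.roll(X, alpha_bins // 2)))
            return acc / len(segs)

        # A BPSK-like cyclostationary test signal (hypothetical parameters).
        rng = np.random.default_rng(0)
        bits = rng.choice([-1.0, 1.0], size=4096 // 16).repeat(16)
        carrier = np.cos(2 * np.pi * 0.2 * np.arange(4096))
        scf = scf_estimate(bits * carrier, alpha_bins=8)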

  7. Computation of the lattice Green function for a dislocation

    NASA Astrophysics Data System (ADS)

    Tan, Anne Marie Z.; Trinkle, Dallas R.

    2016-08-01

    Modeling isolated dislocations is challenging due to their long-ranged strain fields. Flexible boundary condition methods capture the correct long-range strain field of a defect by coupling the defect core to an infinite harmonic bulk through the lattice Green function (LGF). To improve the accuracy and efficiency of flexible boundary condition methods, we develop a numerical method to compute the LGF specifically for a dislocation geometry; in contrast to previous methods, where the LGF was computed for the perfect bulk as an approximation for the dislocation. Our approach directly accounts for the topology of a dislocation, and the errors in the LGF computation converge rapidly for edge dislocations in a simple cubic model system as well as in BCC Fe with an empirical potential. When used within the flexible boundary condition approach, the dislocation LGF relaxes dislocation core geometries in fewer iterations than when the perfect bulk LGF is used as an approximation for the dislocation, making a flexible boundary condition approach more efficient.

  8. Representing and analysing molecular and cellular function using the computer.

    PubMed

    van Helden, J; Naim, A; Mancuso, R; Eldridge, M; Wernisch, L; Gilbert, D; Wodak, S J

    2000-01-01

    Determining the biological function of a myriad of genes, and understanding how they interact to yield a living cell, is the major challenge of the post genome-sequencing era. The complexity of biological systems is such that this cannot be envisaged without the help of powerful computer systems capable of representing and analysing the intricate networks of physical and functional interactions between the different cellular components. In this review we try to provide the reader with an appreciation of where we stand in this regard. We discuss some of the inherent problems in describing the different facets of biological function, give an overview of how information on function is currently represented in the major biological databases, and describe different systems for organising and categorising the functions of gene products. In a second part, we present a new general data model, currently under development, which describes information on molecular function and cellular processes in a rigorous manner. The model is capable of representing a large variety of biochemical processes, including metabolic pathways, regulation of gene expression and signal transduction. It also incorporates taxonomies for categorising molecular entities, interactions and processes, and it offers means of viewing the information at different levels of resolution, and dealing with incomplete knowledge. The data model has been implemented in the database on protein function and cellular processes 'aMAZE' (http://www.ebi.ac.uk/research/pfbp/), which presently covers metabolic pathways and their regulation. Several tools for querying, displaying, and performing analyses on such pathways are briefly described in order to illustrate the practical applications enabled by the model.

  9. An Atomistic Statistically Effective Energy Function for Computational Protein Design.

    PubMed

    Topham, Christopher M; Barbe, Sophie; André, Isabelle

    2016-08-01

    Shortcomings in the definition of effective free-energy surfaces of proteins are recognized to be a major contributory factor responsible for the low success rates of existing automated methods for computational protein design (CPD). The formulation of an atomistic statistically effective energy function (SEEF) suitable for a wide range of CPD applications and its derivation from structural data extracted from protein domains and protein-ligand complexes are described here. The proposed energy function comprises nonlocal atom-based and local residue-based SEEFs, which are coupled using a novel atom connectivity number factor to scale short-range, pairwise, nonbonded atomic interaction energies and a surface-area-dependent cavity energy term. This energy function was used to derive additional SEEFs describing the unfolded-state ensemble of any given residue sequence based on computed average energies for partially or fully solvent-exposed fragments in regions of irregular structure in native proteins. Relative thermal stabilities of 97 T4 bacteriophage lysozyme mutants were predicted from calculated energy differences for folded and unfolded states with an average unsigned error (AUE) of 0.84 kcal mol(-1) when compared to experiment. To demonstrate the utility of the energy function for CPD, further validation was carried out in tests of its capacity to recover cognate protein sequences and to discriminate native and near-native protein folds, loop conformers, and small-molecule ligand binding poses from non-native benchmark decoys. Experimental ligand binding free energies for a diverse set of 80 protein complexes could be predicted with an AUE of 2.4 kcal mol(-1) using an additional energy term to account for the loss in ligand configurational entropy upon binding. The atomistic SEEF is expected to improve the accuracy of residue-based coarse-grained SEEFs currently used in CPD and to extend the range of applications of extant atom-based protein statistical

  11. Enzymatic Halogenases and Haloperoxidases: Computational Studies on Mechanism and Function.

    PubMed

    Timmins, Amy; de Visser, Sam P

    2015-01-01

    Despite the fact that halogenated compounds are rare in biology, a number of organisms have developed processes to utilize halogens, and in recent years a string of enzymes have been identified that selectively insert halogen atoms into, for instance, an aliphatic C-H bond. Thus, a number of natural products, including antibiotics, contain halogenated functional groups. This unusual process has great relevance to the chemical industry for stereoselective and regiospecific synthesis of haloalkanes. Currently, however, industry utilizes few applications of biological haloperoxidases and halogenases, but efforts are underway to understand their catalytic mechanisms so that their catalytic function can be scaled up. In this review, we summarize experimental and computational studies on the catalytic mechanism of a range of haloperoxidases and halogenases with structurally very different catalytic features and cofactors. This chapter gives an overview of heme-dependent haloperoxidases, nonheme vanadium-dependent haloperoxidases, and flavin adenine dinucleotide-dependent haloperoxidases. In addition, we discuss the S-adenosyl-l-methionine fluorinase and nonheme iron/α-ketoglutarate-dependent halogenases. In particular, computational efforts have been applied extensively to several of these haloperoxidases and halogenases and have given insight into the essential structural features that enable these enzymes to perform the unusual halogen atom transfer to substrates. PMID:26415843

  13. Functional Connectivity’s Degenerate View of Brain Computation

    PubMed Central

    Giron, Alain; Rudrauf, David

    2016-01-01

    Brain computation relies on effective interactions between ensembles of neurons. In neuroimaging, measures of functional connectivity (FC) aim at statistically quantifying such interactions, often to study normal or pathological cognition. Their capacity to reflect a meaningful variety of patterns as expected from neural computation in relation to cognitive processes remains debated. The relative weights of time-varying local neurophysiological dynamics versus static structural connectivity (SC) in the generation of FC as measured remains unsettled. Empirical evidence features mixed results: from little to significant FC variability and correlation with cognitive functions, within and between participants. We used a unified approach combining multivariate analysis, bootstrap and computational modeling to characterize the potential variety of patterns of FC and SC both qualitatively and quantitatively. Empirical data and simulations from generative models with different dynamical behaviors demonstrated, largely irrespective of FC metrics, that a linear subspace with dimension one or two could explain much of the variability across patterns of FC. On the contrary, the variability across BOLD time-courses could not be reduced to such a small subspace. FC appeared to strongly reflect SC and to be partly governed by a Gaussian process. The main differences between simulated and empirical data related to limitations of DWI-based SC estimation (and SC itself could then be estimated from FC). Above and beyond the limited dynamical range of the BOLD signal itself, measures of FC may offer a degenerate representation of brain interactions, with limited access to the underlying complexity. They feature an invariant common core, reflecting the channel capacity of the network as conditioned by SC, with a limited, though perhaps meaningful residual variability. PMID:27736900
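
    The low-dimensional-subspace finding can be illustrated with a small SVD-based sketch. Array shapes, the variance threshold and the toy data are hypothetical and do not reproduce the authors' multivariate pipeline.

        import numpy as np

        def fc_subspace_dimension(fc_patterns, var_threshold=0.90):
            # fc_patterns: (n_observations, n_edges) array, each row a
            # vectorized functional-connectivity matrix. Return the number
            # of principal components needed to explain var_threshold of
            # the variance across FC patterns.
            X = fc_patterns - fc_patterns.mean(axis=0)
            s = np.linalg.svd(X, compute_uv=False)
            explained = np.cumsum(s**2) / np.sum(s**2)
            return int(np.searchsorted(explained, var_threshold)) + 1

        # Toy example: FC patterns generated from one latent pattern plus
        # noise, so the recovered subspace dimension should be 1.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=500)
        patterns = (np.outer(rng.normal(size=40), latent)
                    + 0.1 * rng.normal(size=(40, 500)))
        print(fc_subspace_dimension(patterns))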

  14. Towards computational prediction of microRNA function and activity

    PubMed Central

    Ulitsky, Igor; Laurent, Louise C.; Shamir, Ron

    2010-01-01

    While it has been established that microRNAs (miRNAs) play key roles throughout development and are dysregulated in many human pathologies, the specific processes and pathways regulated by individual miRNAs are mostly unknown. Here, we use computational target predictions in order to automatically infer the processes affected by human miRNAs. Our approach improves upon standard statistical tools by addressing specific characteristics of miRNA regulation. Our analysis is based on a novel compendium of experimentally verified miRNA-pathway and miRNA-process associations that we constructed, which can be a useful resource by itself. Our method also predicts novel miRNA-regulated pathways, refines the annotation of miRNAs for which only crude functions are known, and assigns differential functions to miRNAs with closely related sequences. Applying our approach to groups of co-expressed genes allows us to identify miRNAs and genomic miRNA clusters with functional importance in specific stages of early human development. A full list of the predicted mRNA functions is available at http://acgt.cs.tau.ac.il/fame/. PMID:20576699

  15. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    NASA Astrophysics Data System (ADS)

    Constantinescu, C. C.; Yoder, K. K.; Kareken, D. A.; Bouman, C. A.; O'Connor, S. J.; Normandin, M. D.; Morris, E. D.

    2008-03-01

    We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest & activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (FDA(t)) and the change in binding potential (ΔBP). The veracity of the FDA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between FDA(t) and the free raclopride (FRAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover FDA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the FDA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of FDA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.
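
    Hypothesis (1) above is a rank-correlation statement, which scipy's Kendall tau implements directly. The per-region numbers below are hypothetical placeholders, not data from the study.

        from scipy.stats import kendalltau

        # Hypothetical per-region estimates from an ntPET-style analysis:
        peak_times = [8.2, 10.5, 12.1, 14.8, 16.0]   # DA peak time (min)
        delta_bp   = [0.31, 0.26, 0.22, 0.15, 0.12]  # change in binding potential
        tau, p = kendalltau(peak_times, delta_bp)
        # Hypothesis (1) predicts tau < 0: delta-BP declines as the DA peak
        # occurs later.
        print(f"Kendall tau = {tau:.2f}, p = {p:.3f}")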

  16. Computer-Based Screening of Functional Conformers of Proteins

    PubMed Central

    Montiel Molina, Héctor Marlosti; Millán-Pacheco, César; Pastor, Nina; del Rio, Gabriel

    2008-01-01

    A long-standing goal in biology is to establish the link between function, structure, and dynamics of proteins. Considering that protein function at the molecular level is understood by the ability of proteins to bind to other molecules, the limited structural data on proteins in association with other bio-molecules represent a major hurdle to understanding protein function at the structural level. Recent reports show that protein function can be linked to protein structure and dynamics through network centrality analysis, suggesting that the structures of proteins bound to natural ligands may be inferred computationally. In the present work, a new method is described to discriminate protein conformations relevant to the specific recognition of a ligand. The method relies on a scoring system that matches critical residues with central residues in different structures of a given protein. Central residues are those most frequently traversed in networks derived from protein structures. We tested our method on a set of 24 different proteins and more than 260,000 structures of these proteins, free or bound to a ligand. To illustrate the usefulness of our method in the study of the structure/dynamics/function relationship of proteins, we analyzed mutants of the yeast TATA-binding protein with impaired DNA binding. Our results indicate that critical residues for an interaction are preferentially found as central residues of protein structures in complex with a ligand. Thus, our scoring system effectively distinguishes protein conformations relevant to the function of interest. PMID:18463705
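
    A minimal version of the centrality-screening idea: build a residue contact network from coordinates and rank residues by a centrality measure. The distance cutoff and the choice of betweenness centrality are assumptions of this sketch; the paper's own centrality definition may differ.

        import numpy as np
        import networkx as nx

        def central_residues(coords, cutoff=8.0, top_k=10):
            # Nodes are residues; edges join residues whose representative
            # atoms lie within `cutoff` angstroms. Residues are then ranked
            # by betweenness centrality (how often shortest paths traverse
            # them), a stand-in for the paper's centrality analysis.
            n = len(coords)
            g = nx.Graph()
            g.add_nodes_from(range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                        g.add_edge(i, j)
            centrality = nx.betweenness_centrality(g)
            return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

        # Hypothetical residue coordinates (e.g., C-alpha positions).
        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 30, size=(50, 3))
        print(central_residues(coords))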

  17. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even in recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signal representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  19. Computer Modeling of Protocellular Functions: Peptide Insertion in Membranes

    NASA Technical Reports Server (NTRS)

    Rodriquez-Gomez, D.; Darve, E.; Pohorille, A.

    2006-01-01

    Lipid vesicles became the precursors to protocells by acquiring the capabilities needed to survive and reproduce. These include transport of ions, nutrients and waste products across cell walls, and capture of energy and its conversion into a chemically usable form. In modern organisms these functions are carried out by membrane-bound proteins (about 30% of the genome codes for this kind of protein). A number of properties of alpha-helical peptides suggest that their associations are excellent candidates for protobiological precursors of proteins. In particular, some simple alpha-helical peptides can aggregate spontaneously and form functional channels. This process can be described conceptually by a three-step thermodynamic cycle: 1 - folding of helices at the water-membrane interface, 2 - helix insertion into the lipid bilayer, and 3 - specific interactions of these helices that result in functional tertiary structures. Although a crucial step, helix insertion has not been adequately studied because of the insolubility and aggregation of hydrophobic peptides. In this work, we use computer simulation methods (molecular dynamics) to characterize the energetics of helix insertion, and we discuss its importance in an evolutionary context. Specifically, helices could self-assemble only if their interactions were sufficiently strong to compensate for the unfavorable free energy of insertion of individual helices into membranes, providing a selection mechanism for protobiological evolution.

  20. Assessing Executive Function Using a Computer Game: Computational Modeling of Cognitive Processes

    PubMed Central

    Hagler, Stuart; Jimison, Holly B.; Pavel, Misha

    2014-01-01

    Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper Trail-Making Test (TMT) – known to measure executive function, as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and using the values of the parameters to estimate the TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home. PMID:25014944

  1. Computing the Partition Function for Kinetically Trapped RNA Secondary Structures

    PubMed Central

    Lorenz, William A.; Clote, Peter

    2011-01-01

    An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in O(n^3) time and O(n^2) space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far fewer than the total number of structures – indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structure leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server

  3. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  5. Computational Effective Fault Detection by Means of Signature Functions

    PubMed Central

    Baranski, Przemyslaw; Pietrzak, Piotr

    2016-01-01

    The paper presents a computationally effective method for fault detection. A system’s responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system’s response is projected into this space. The location of the signal in this space readily allows the fault to be determined. No classifier such as a neural network, hidden Markov model, etc. is required. The advantage of the proposed method is its efficiency, as computing projections amounts to calculating dot products. Therefore, this method is suitable for real-time embedded systems due to its simplicity and undemanding processing requirements, which permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application in which an industrial process of moulding is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during the work. Typically, the process is stopped periodically to manually control the alignment. The applied algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die. This enables failures to be detected at an early stage, thus prolonging the machine’s life. PMID:26949942
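
    Because classification here reduces to dot products with stored signature functions, the whole detector fits in a few lines. The signatures and noise level below are hypothetical stand-ins for measured healthy and faulty responses.

        import numpy as np

        def classify(response, signatures):
            # Project the measured response onto each signature function;
            # since each projection is a dot product, classification is an
            # argmax over correlations with the stored signatures.
            scores = {label: np.dot(response, sig) / np.linalg.norm(sig)
                      for label, sig in signatures.items()}
            return max(scores, key=scores.get)

        # Hypothetical signature space from healthy and faulty recordings.
        t = np.linspace(0, 1, 500)
        signatures = {
            "healthy": np.sin(2 * np.pi * 5 * t),
            "misaligned": np.sin(2 * np.pi * 5 * t)
                          + 0.5 * np.sin(2 * np.pi * 17 * t),
        }
        noisy = signatures["misaligned"] + 0.1 * np.random.randn(500)
        print(classify(noisy, signatures))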

  6. Imaging local brain function with emission computed tomography

    SciTech Connect

    Kuhl, D.E.

    1984-03-01

    Positron emission tomography (PET) using 18F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in depressed patients and studded with multiple metabolic defects in patients with multiple infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex, but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.

  7. Computing black hole partition functions from quasinormal modes

    DOE PAGES

    Arnold, Peter; Szepietowski, Phillip; Vaman, Diana

    2016-07-07

    We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.

  8. Computing black hole partition functions from quasinormal modes

    NASA Astrophysics Data System (ADS)

    Arnold, Peter; Szepietowski, Phillip; Vaman, Diana

    2016-07-01

    We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.

  9. A computer vision based candidate for functional balance test.

    PubMed

    Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath

    2015-08-01

    Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the cost of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by the subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.

  10. Chemical Visualization of Boolean Functions: A Simple Chemical Computer

    NASA Astrophysics Data System (ADS)

    Blittersdorf, R.; Müller, J.; Schneider, F. W.

    1995-08-01

    We present a chemical realization of the Boolean functions AND, OR, NAND, and NOR with a neutralization reaction carried out in three coupled continuous flow stirred tank reactors (CSTRs). Two of these CSTRs are used as input reactors; the third reactor marks the output. The chemical reaction is the neutralization of hydrochloric acid (HCl) with sodium hydroxide (NaOH) in the presence of phenolphthalein as an indicator, which is red in alkaline solutions and colorless in acidic solutions, representing the two binary states 1 and 0, respectively. The time required for a "chemical computation" is determined by the flow rate of reactant solutions into the reactors, since the neutralization reaction itself is very fast. While the acid flow to all reactors is equal and constant, the flow rate of NaOH solution controls the states of the input reactors. The connectivities between the input and output reactors determine the flow rate of NaOH solution into the output reactor, according to the chosen Boolean function. Thus the state of the output reactor depends on the states of the input reactors.

  11. Non-parametric analysis of the rest-frame UV sizes and morphological disturbance amongst L* galaxies at 4 < z < 8

    NASA Astrophysics Data System (ADS)

    Curtis-Lake, E.; McLure, R. J.; Dunlop, J. S.; Rogers, A. B.; Targett, T.; Dekel, A.; Ellis, R. S.; Faber, S. M.; Ferguson, H. C.; Grogin, N. A.; Kocevski, D. D.; Koekemoer, A. M.; Lai, K.; Mármol-Queraltó, E.; Robertson, B. E.

    2016-03-01

    We present the results of a study investigating the sizes and morphologies of redshift 4 < z < 8 galaxies in the CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) GOODS-S (Great Observatories Origins Deep Survey southern field), HUDF (Hubble Ultra-Deep Field) and HUDF parallel fields. Based on non-parametric measurements and incorporating a careful treatment of measurement biases, we quantify the typical size of galaxies at each redshift as the peak of the lognormal size distribution, rather than the arithmetic mean size. Parametrizing the evolution of galaxy half-light radius as r50 ∝ (1 + z)^n, we find n = -0.20 ± 0.26 at bright UV-luminosities (0.3L*(z = 3) < L < L*) and n = -0.47 ± 0.62 at faint luminosities (0.12L* < L < 0.3L*). Furthermore, simulations based on artificially redshifting our z ˜ 4 galaxy sample show that we cannot reject the null hypothesis of no size evolution. We show that this result is caused by a combination of the size-dependent completeness of high-redshift galaxy samples and the underestimation of the sizes of the largest galaxies at a given epoch. To explore the evolution of galaxy morphology we first compare asymmetry measurements to those from a large sample of simulated single Sérsic profiles, in order to robustly categorize galaxies as either `smooth' or `disturbed'. Comparing the disturbed fraction amongst bright (M1500 ≤ -20) galaxies at each redshift to that obtained by artificially redshifting our z ˜ 4 galaxy sample, while carefully matching the size and UV-luminosity distributions, we find no clear evidence for evolution in galaxy morphology over the redshift interval 4 < z < 8. Therefore, based on our results, a bright (M1500 ≤ -20) galaxy at z ˜ 6 is no more likely to be measured as `disturbed' than a comparable galaxy at z ˜ 4, given the current observational constraints.
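
    The parametrization r50 ∝ (1 + z)^n can be fit by least squares in log-log space, as in the sketch below with purely hypothetical sizes; it does not reproduce the careful bias treatment described above.

        import numpy as np

        # Hypothetical median half-light radii r50 (kpc) at each redshift,
        # for illustration only.
        z = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
        r50 = np.array([1.10, 1.02, 0.97, 0.92, 0.88])

        # Fit r50 = A * (1 + z)^n by linear regression in log-log space.
        n, logA = np.polyfit(np.log10(1.0 + z), np.log10(r50), 1)
        print(f"n = {n:.2f}")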

  12. Analysis of long term meteorological trends in the middle and lower Indus Basin of Pakistan-A non-parametric statistical approach

    NASA Astrophysics Data System (ADS)

    Ahmad, Waqas; Fatima, Aamira; Awan, Usman Khalid; Anwar, Arif

    2014-11-01

    The Indus basin of Pakistan is vulnerable to climate change, which would directly affect the livelihoods of poor people engaged in irrigated agriculture. The situation could be worse in the middle and lower parts of this basin, which occupy 90% of the irrigated area. The objective of this research is to analyze the long term meteorological trends in the middle and lower parts of the Indus basin of Pakistan. We used monthly data from 1971 to 2010 and applied the non-parametric seasonal Kendall test for trend detection, in combination with the seasonal Kendall slope estimator to quantify the magnitude of trends. The meteorological parameters considered were mean maximum and mean minimum air temperature, and rainfall from 12 meteorological stations located in the study region. We examined the reliability and spatial integrity of data by mass-curve analysis and spatial correlation matrices, respectively. Analysis was performed for four seasons (spring-March to May, summer-June to August, fall-September to November and winter-December to February). The results show that max. temperature has an average increasing trend of magnitude + 0.16, + 0.03, 0.0 and + 0.04 °C/decade during the four seasons, respectively. The average trend of min. temperature during the four seasons also increases, with magnitudes of + 0.29, + 0.12, + 0.36 and + 0.36 °C/decade, respectively. Persistence of the increasing trend is more pronounced in the min. temperature than in the max. temperature on an annual basis. Analysis of rainfall data has not shown any noteworthy trend during winter or fall, or on an annual basis. However, during the spring and summer seasons, the rainfall trends vary from - 1.15 to + 0.93 and - 3.86 to + 2.46 mm/decade, respectively. It is further revealed that rainfall trends during all seasons are statistically non-significant. Overall the study area is under a significant warming trend with no changes in rainfall.
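
    A simplified stand-in for the seasonal Kendall test applies the Mann-Kendall statistic (Kendall's tau of value against time) season by season; a full implementation would also combine the per-season S statistics and their variances into one overall test, and the series below are synthetic.

        import numpy as np
        from scipy.stats import kendalltau

        def seasonal_kendall(series_by_season):
            # Kendall's tau of value vs. time, computed per season; a
            # positive tau with small p indicates a monotone upward trend.
            results = {}
            for season, values in series_by_season.items():
                years = np.arange(len(values))
                tau, p = kendalltau(years, values)
                results[season] = (tau, p)
            return results

        # Hypothetical 40-year minimum-temperature series for each season,
        # built with a small warming trend plus noise.
        rng = np.random.default_rng(1)
        data = {s: 10 + 0.03 * np.arange(40) + rng.normal(0, 0.5, 40)
                for s in ("spring", "summer", "fall", "winter")}
        print(seasonal_kendall(data))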

  13. HANOIPC3: a computer program to evaluate executive functions.

    PubMed

    Guevara, M A; Rizo, L; Ruiz-Díaz, M; Hernández-González, M

    2009-08-01

    This article describes a computer program (HANOIPC3) based on the Tower of Hanoi game that, by analyzing a series of parameters during execution, allows a fast and accurate evaluation of data related to certain executive functions, especially planning, organizing and problem-solving. This computerized version has only one level of difficulty, based on the use of 3 disks, but it stipulates an additional rule: only one disk may be moved at a time, and only to an adjacent peg (i.e., no peg can be skipped over). In the original version--without this stipulation--the minimum number of movements required to complete the task is 7, but under the conditions of this computerized version it increases to 26. HANOIPC3 has three important advantages: (1) it allows a researcher or clinician to modify the rules by adding or removing certain conditions, thus augmenting the utility and flexibility of test execution and the interpretation of results; (2) it provides on-line feedback to subjects about their execution; and (3) it creates a specific file to store the scores that correspond to the parameters obtained during trials. The parameters that can be measured include latencies (time taken for each movement, measured in seconds), total test time, total number of movements, and the number of correct and incorrect movements. The efficacy and adaptability of this program have been confirmed. PMID:19303660
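
    The 26-move figure for the adjacency-rule variant can be verified by breadth-first search over game states, as in this sketch; the state encoding is ours, not HANOIPC3's.

        from collections import deque

        def min_moves_adjacent_hanoi(n_disks=3):
            # BFS over states of the adjacency-rule Tower of Hanoi (a disk
            # may move only to a neighboring peg). A state records the peg
            # (0, 1 or 2) of each disk, smallest disk first.
            start, goal = (0,) * n_disks, (2,) * n_disks
            queue, seen = deque([(start, 0)]), {start}
            while queue:
                state, dist = queue.popleft()
                if state == goal:
                    return dist
                for d in range(n_disks):
                    # Disk d is movable only if no smaller disk shares its peg.
                    if any(state[s] == state[d] for s in range(d)):
                        continue
                    for target in (state[d] - 1, state[d] + 1):  # adjacent pegs
                        if (0 <= target <= 2
                                and not any(state[s] == target for s in range(d))):
                            nxt = state[:d] + (target,) + state[d + 1:]
                            if nxt not in seen:
                                seen.add(nxt)
                                queue.append((nxt, dist + 1))

        print(min_moves_adjacent_hanoi())   # -> 26, the figure cited above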

  14. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    PubMed

    Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn

    2015-01-01

    -stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed. PMID:26301872
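
    The two-sample Kolmogorov-Smirnov comparison of cumulative pairwise-difference curves can be sketched with scipy; the gamma-distributed samples below are hypothetical stand-ins for the seal mitogenome data, chosen only to mimic a "younger" versus "older" difference structure.

        import numpy as np
        from scipy.stats import ks_2samp

        # The two-sample K-S statistic is the maximum difference between the
        # empirical distribution functions of the two samples.
        rng = np.random.default_rng(3)
        greenland = rng.gamma(2.0, 3.0, 200)   # "younger": smaller differences
        white_sea = rng.gamma(4.0, 3.0, 200)
        stat, p = ks_2samp(greenland, white_sea)
        print(f"K-S statistic = {stat:.3f}, p = {p:.2e}")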

  17. On computing closed forms for summations. [polynomials and rational functions]

    NASA Technical Reports Server (NTRS)

    Moenck, R.

    1977-01-01

    The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational function part and a transcendental part involving derivatives of the gamma function.
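
    The rational-part/transcendental-part split can be seen with a computer algebra system; a minimal sympy sketch (illustrating the split, not Moenck's Hermite-style algorithm itself):

      # Closed forms for sums of rational functions with sympy.
      import sympy as sp

      k, n = sp.symbols('k n', integer=True, positive=True)

      # Telescoping sum: the closed form is purely rational in n.
      s1 = sp.summation(1/(k*(k+1)), (k, 1, n))
      print(sp.simplify(s1))        # 1 - 1/(n + 1), rational part only

      # Here the closed form needs a transcendental part: polygamma terms,
      # i.e. derivatives of the logarithm of the gamma function.
      s2 = sp.summation(1/k**2, (k, 1, n))
      print(s2)                     # harmonic(n, 2)
      print(s2.rewrite(sp.polygamma))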

  18. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
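
    A minimal sketch of the idea, an illustrative reimplementation rather than the paper's code: texture measures for a pixel-pair offset are computed directly from shifted-image sums, squares, and cross products, with no co-occurrence matrix ever stored (non-negative offsets assumed).

      import numpy as np

      def texture_stats(img, dy=0, dx=1):
          """Contrast and correlation for pixel pairs at offset (dy, dx) >= 0."""
          a = img[:img.shape[0]-dy, :img.shape[1]-dx].astype(float)
          b = img[dy:, dx:]
          contrast = np.mean((a - b) ** 2)                 # from sums of squares
          corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]   # from cross products
          return contrast, corr

      rng = np.random.default_rng(1)
      img = rng.integers(0, 256, size=(64, 64))
      print(texture_stats(img, dy=0, dx=1))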

  19. Challenges in computational studies of enzyme structure, function and dynamics.

    PubMed

    Carvalho, Alexandra T P; Barrozo, Alexandre; Doron, Dvir; Kilshtain, Alexandra Vardi; Major, Dan Thomas; Kamerlin, Shina Caroline Lynn

    2014-11-01

    In this review we give an overview of the field of computational enzymology. We start by describing the birth of the field, with emphasis on the work of the 2013 Nobel Laureates in Chemistry. We then present key features of the state-of-the-art in the field, showing what theory, accompanied by experiments, has taught us so far about enzymes. We also briefly describe computational methods, such as quantum mechanics-molecular mechanics approaches, reaction coordinate treatment, and free energy simulation approaches. We conclude by discussing open questions and challenges.

  20. Dose spread functions in computed tomography: A Monte Carlo study

    PubMed Central

    Boone, John M.

    2009-01-01

    Purpose: Current CT dosimetry employing CTDI methodology has come under fire in recent years, partially in response to the increasing width of collimated x-ray fields in modern CT scanners. This study was conducted to provide a better understanding of the radiation dose distributions in CT. Methods: Monte Carlo simulations were used to evaluate radiation dose distributions along the z axis arising from CT imaging in cylindrical phantoms. Mathematical cylinders were simulated with compositions of water, polymethyl methacrylate (PMMA), and polyethylene. Cylinder diameters from 10 to 50 cm were studied. X-ray spectra typical of several CT manufacturers (80, 100, 120, and 140 kVp) were used. In addition to no bow tie filter, the head and body bow tie filters from modern General Electric and Siemens CT scanners were evaluated. Each cylinder was divided into three concentric regions of equal volume such that the energy deposited is proportional to dose for each region. Two additional dose assessment regions, central and edge locations 10 mm in diameter, were included for comparisons to CTDI100 measurements. Dose spread functions (DSFs) were computed for a wide number of imaging parameters. Results: DSFs generally exhibit a biexponential falloff from the z=0 position. For a very narrow primary beam input (≪1 mm), DSFs demonstrated significant low-amplitude, long-range scatter dose tails. For body imaging conditions (30 cm diameter in water), the DSF at the center had a full width at tenth maximum (FWTM) of ∼160 mm, while at the edge the FWTM was ∼80 mm. Polyethylene phantoms exhibited wider DSFs than PMMA or water, as did higher tube voltages in any material. The FWTM values were 80, 180, and 250 mm for 10, 30, and 50 cm phantom diameters, respectively, at the center in water at 120 kVp with a typical body bow tie filter. Scatter to primary dose ratios (SPRs) increased with phantom diameter from 4 at the center (1 cm diameter) for a 16 cm diameter cylinder to ∼12.5 for a
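
    To make the FWTM notion concrete, a small sketch with an illustrative biexponential dose spread function follows; the amplitudes and decay lengths are invented for the example, not fitted values from the study.

      # Full width at tenth maximum (FWTM) of a biexponential falloff.
      import numpy as np

      def dsf(z, a1=0.8, t1=15.0, a2=0.2, t2=120.0):
          """Illustrative biexponential dose spread function (z in mm)."""
          return a1 * np.exp(-np.abs(z) / t1) + a2 * np.exp(-np.abs(z) / t2)

      z = np.linspace(-300, 300, 60001)
      d = dsf(z)
      above = z[d >= 0.1 * d.max()]      # region above the tenth-maximum level
      print(f"FWTM ~ {above.max() - above.min():.1f} mm")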

  1. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    NASA Technical Reports Server (NTRS)

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  2. A Functional Analytic Approach to Computer-Interactive Mathematics

    ERIC Educational Resources Information Center

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…

  3. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
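
    A modern analogue of calling such routines, using scipy.stats in place of the report's Fortran subroutines; the last lines show the uniform-generator-plus-quantile-function route to sampling other distributions (inverse-transform sampling).

      import numpy as np
      from scipy import stats

      print(stats.chi2.sf(3.84, df=1))          # chi-square tail probability
      print(stats.weibull_min.ppf(0.5, c=1.5))  # Weibull median

      # Inverse-transform sampling: uniform variates pushed through a quantile
      # function yield draws from another distribution (here, a gamma, which
      # underlies the Pearson Type III family).
      rng = np.random.default_rng(42)
      u = rng.uniform(size=5)
      print(stats.gamma.ppf(u, a=2.0))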

  4. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor's F tests. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  5. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    NASA Astrophysics Data System (ADS)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R^3) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (⋃_{i=1}^{m} Ω_i = R^3), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary. Program title: edf; Catalogue identifier: AEAJ_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 5387; No. of bytes in distributed program, including test data, etc.: 52 381; Distribution format: tar.gz; Programming language: Fortran 77; Computer

  6. Using computational models to relate structural and functional brain connectivity

    PubMed Central

    Hlinka, Jaroslav; Coombes, Stephen

    2012-01-01

    Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has led to the identification of disease-related alterations affecting functional connectivity. The mechanism of how such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson–Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoComac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph-theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics. PMID:22805059
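
    A toy version of this modeling pipeline can be sketched in a few lines: Wilson–Cowan nodes coupled through a random structural matrix, with functional connectivity read off as the correlation matrix of the simulated activity. All parameters below are illustrative, not those of the paper.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      rng = np.random.default_rng(0)
      N, T, dt = 8, 20000, 0.001
      SC = rng.uniform(0, 1, (N, N)); SC = (SC + SC.T) / 2; np.fill_diagonal(SC, 0)

      E = np.full(N, 0.1); I = np.full(N, 0.1)
      g, tau = 0.5, 0.010                      # global coupling, time constant (s)
      traj = np.empty((T, N))
      for t in range(T):
          inp = g * SC @ E                      # long-range input via structure
          dE = (-E + sigmoid(12*E - 10*I + inp - 2.0)) / tau
          dI = (-I + sigmoid(10*E - 2*I - 3.0)) / tau
          E = E + dt*dE + 0.01*np.sqrt(dt)*rng.standard_normal(N)  # noise drive
          I = I + dt*dI
          traj[t] = E

      FC = np.corrcoef(traj[5000:].T)          # functional connectivity
      iu = np.triu_indices(N, k=1)
      print("SC-FC correlation:", np.corrcoef(SC[iu], FC[iu])[0, 1])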

  7. Introduction to Classical Density Functional Theory by a Computational Experiment

    ERIC Educational Resources Information Center

    Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel

    2014-01-01

    We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…

  8. The computational foundations of time dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Whitfield, James

    2014-03-01

    The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. Since a quantum computer can efficiently produce such time-dependent densities, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds. Further, we find that systems do not immediately become non-representable but rather become ill-representable as one approaches this boundary. A representability parameter is defined in our work which quantifies the distance to the boundary of representability and the computational difficulty of finding the Kohn-Sham system.

  9. Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.

    ERIC Educational Resources Information Center

    Snow, Donald R.

    1989-01-01

    Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
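
    The table algorithm translates directly from spreadsheet cells to code: build the coefficient table of a generating function and read off the sequence. A minimal sketch, assuming the standard recurrences for Catalan and Fibonacci numbers:

      # Catalan numbers from C(x) = 1 + x*C(x)^2: the coefficient table obeys
      # a convolution recurrence.
      cat = [1]
      for n in range(10):
          cat.append(sum(cat[i] * cat[n - i] for i in range(n + 1)))
      print(cat)   # 1, 1, 2, 5, 14, 42, ...

      # Fibonacci from F(x) = x / (1 - x - x^2): f[n] = f[n-1] + f[n-2].
      fib = [0, 1]
      for n in range(2, 12):
          fib.append(fib[n - 1] + fib[n - 2])
      print(fib)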

  10. Improvement in protein functional site prediction by distinguishing structural and functional constraints on protein family evolution using computational design.

    PubMed

    Cheng, Gong; Qian, Bin; Samudrala, Ram; Baker, David

    2005-01-01

    The prediction of functional sites in newly solved protein structures is a challenge for computational structural biology. Most methods for approaching this problem use evolutionary conservation as the primary indicator of the location of functional sites. However, sequence conservation reflects not only evolutionary selection at functional sites to maintain protein function, but also selection throughout the protein to maintain the stability of the folded state. To disentangle sequence conservation due to protein functional constraints from sequence conservation due to protein structural constraints, we use all atom computational protein design methodology to predict sequence profiles expected under solely structural constraints, and to compute the free energy difference between the naturally occurring amino acid and the lowest free energy amino acid at each position. We show that functional sites are more likely than non-functional sites to have computed sequence profiles which differ significantly from the naturally occurring sequence profiles and to have residues with sub-optimal free energies, and that incorporation of these two measures improves sequence based prediction of protein functional sites. The combined sequence and structure based functional site prediction method has been implemented in a publicly available web server.

  11. COMPUTATIONAL STRATEGIES FOR THE DESIGN OF NEW ENZYMATIC FUNCTIONS

    PubMed Central

    Świderek, K; Tuñón, I.; Moliner, V.; Bertran, J.

    2015-01-01

    In this contribution, recent developments in the design of biocatalysts are reviewed with particular emphasis in the de novo strategy. Studies based on three different reactions, Kemp elimination, Diels-Alder and retro-aldolase, are used to illustrate different success achieved during the last years. Finally, a section is devoted to the particular case of designed metalloenzymes. As a general conclusion, the interplay between new and more sophisticated engineering protocols and computational methods, based on molecular dynamics simulations with Quantum Mechanics/Molecular Mechanics potentials and fully flexible models, seems to constitute the bed rock for present and future successful design strategies. PMID:25797438

  12. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
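
    For the single-determinant case the core computation reduces to one determinant; a minimal sketch with random stand-ins for quantum-chemistry output (real MO coefficients would be S-orthonormal, so magnitudes here are unnormalized):

      # Overlap of two Slater determinants built from occupied MO coefficient
      # matrices C1, C2 over the same AO basis with overlap matrix S:
      # <Psi1|Psi2> = det(C1^T S C2), up to normalization.
      import numpy as np

      rng = np.random.default_rng(3)
      nao, nocc = 10, 4
      A = rng.standard_normal((nao, nao))
      S = A @ A.T + nao * np.eye(nao)        # symmetric positive definite "AO overlap"
      C1 = rng.standard_normal((nao, nocc))  # occupied MOs, geometry/state 1
      C2 = rng.standard_normal((nao, nocc))  # occupied MOs, geometry/state 2

      overlap = np.linalg.det(C1.T @ S @ C2)
      print(f"<Psi1|Psi2> = {overlap:.6f}")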

  13. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    SciTech Connect

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott

    2013-01-01

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  14. Computational properties of three-term recurrence relations for Kummer functions

    NASA Astrophysics Data System (ADS)

    Deaño, Alfredo; Segura, Javier; Temme, Nico M.

    2010-01-01

    Several three-term recurrence relations for confluent hypergeometric functions are analyzed from a numerical point of view. Minimal and dominant solutions for complex values of the variable z are given, derived from asymptotic estimates of the Whittaker functions with large parameters. The Laguerre polynomials and the regular Coulomb wave functions are studied as particular cases, with numerical examples of their computation.
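
    The minimal/dominant distinction matters in practice because forward recurrence loses a minimal solution to rounding error; a standard remedy is Miller's backward recurrence, sketched here for Bessel J_n (a minimal solution as n grows) rather than for the Kummer functions themselves.

      import numpy as np

      def bessel_j_downward(nmax, x, start=40):
          """Miller's algorithm: recur J_{n-1} = (2n/x) J_n - J_{n+1} downward."""
          jp, j = 0.0, 1e-30                 # arbitrary seed far above nmax
          out = np.zeros(start + 1)
          out[start] = j
          for n in range(start, 0, -1):
              jp, j = j, (2.0 * n / x) * j - jp
              out[n - 1] = j
          norm = out[0] + 2.0 * out[2::2].sum()   # J_0 + 2*sum J_{2k} = 1
          return out[:nmax + 1] / norm

      print(bessel_j_downward(5, 1.0))   # compare scipy.special.jv(range(6), 1.0)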

  15. Spaceborne computer executive routine functional design specification. Volume 1: Functional design of a flight computer executive program for the reusable shuttle

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1971-01-01

    A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.

  16. Toward high-resolution computational design of helical membrane protein structure and function

    PubMed Central

    Barth, Patrick; Senes, Alessandro

    2016-01-01

    The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630

  17. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
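
    The direct-division limitation quoted above is easy to see in code: long division of B(z)/A(z) in powers of z^{-1} yields numeric samples x(k) but no analytical expression. A minimal sketch (the helper below is hypothetical, not from the article):

      import numpy as np

      def inverse_z_by_division(b, a, nterms):
          """Series coefficients of B(z)/A(z) in z^{-1}: a0*x[k] = b[k] - sum a[j]*x[k-j]."""
          b = np.pad(np.asarray(b, float), (0, max(0, nterms - len(b))))
          x = np.zeros(nterms)
          for k in range(nterms):
              acc = b[k] - sum(a[j] * x[k - j]
                               for j in range(1, min(k, len(a) - 1) + 1))
              x[k] = acc / a[0]
          return x

      # X(z) = 1 / (1 - 0.5 z^{-1})  =>  x(k) = 0.5**k
      print(inverse_z_by_division([1.0], [1.0, -0.5], 6))   # [1, 0.5, 0.25, ...]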

  18. A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry

    ERIC Educational Resources Information Center

    Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan

    2013-01-01

    A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
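
    The fitting procedure can be imitated with a plain nonlinear least-squares fit of a 1s Slater function by three Gaussians; real STO-3G parameters come from maximizing overlap with properly normalized primitives, so this is only the bare curve-fitting version of the idea.

      import numpy as np
      from scipy.optimize import curve_fit

      def sto3g(r, c1, a1, c2, a2, c3, a3):
          """Sum of three Gaussians approximating a Slater-type orbital."""
          return (c1 * np.exp(-a1 * r**2) + c2 * np.exp(-a2 * r**2)
                  + c3 * np.exp(-a3 * r**2))

      r = np.linspace(0.01, 6.0, 400)
      target = np.exp(-r)                     # Slater-type 1s, zeta = 1
      p0 = [0.4, 0.2, 0.5, 1.0, 0.3, 5.0]     # rough initial guesses
      popt, _ = curve_fit(sto3g, r, target, p0=p0, maxfev=20000)
      print("coefficients/exponents:", np.round(popt, 4))
      print("max abs error:", np.abs(sto3g(r, *popt) - target).max())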

  19. Effects of Computer versus Paper Administration of an Adult Functional Writing Assessment

    ERIC Educational Resources Information Center

    Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young

    2011-01-01

    This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…

  20. Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...

  1. A Functional Specification for a Programming Language for Computer Aided Learning Applications.

    ERIC Educational Resources Information Center

    National Research Council of Canada, Ottawa (Ontario).

    In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…

  2. Method reduces computer time for smoothing functions and derivatives through ninth order polynomials

    NASA Technical Reports Server (NTRS)

    Glauz, R. D.; Wilgus, C. A.

    1969-01-01

    The analysis presented is an efficient technique to adjust previously calculated orthogonal polynomial coefficients for an odd number of equally spaced data points. The adjusting technique is derived for a ninth-order polynomial. It reduces computer time for smoothing functions.
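
    A present-day relative of this technique is Savitzky-Golay filtering, which performs the same least-squares polynomial smoothing (and differentiation) over an odd number of equally spaced points; a short scipy sketch with illustrative data:

      import numpy as np
      from scipy.signal import savgol_filter

      rng = np.random.default_rng(7)
      x = np.linspace(0, 4 * np.pi, 201)
      noisy = np.sin(x) + 0.2 * rng.standard_normal(x.size)

      # Ninth-order polynomial fit in each 21-point window, as in the title.
      smooth = savgol_filter(noisy, window_length=21, polyorder=9)
      deriv = savgol_filter(noisy, 21, 9, deriv=1, delta=x[1] - x[0])
      print(float(np.abs(smooth - np.sin(x)).mean()))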

  3. Functions and Requirements and Specifications for Replacement of the Computer Automated Surveillance System (CASS)

    SciTech Connect

    SCAIEF, C.C.

    1999-12-16

    This functions, requirements and specifications document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of the system to replace the Computer Automated Surveillance System (CASS) alarm monitoring.

  4. Basis Function Sampling: A New Paradigm for Material Property Computation

    NASA Astrophysics Data System (ADS)

    Whitmer, Jonathan K.; Chiu, Chi-cheng; Joshi, Abhijeet A.; de Pablo, Juan J.

    2014-11-01

    Wang-Landau sampling and the associated class of flat-histogram simulation methods have been remarkably helpful for calculations of the free energy in a wide variety of physical systems. Practically, convergence of these calculations to a target free energy surface is hampered by reliance on parameters which are unknown a priori. Here, we derive and implement a method built upon orthogonal functions which is fast, parameter-free, and (importantly) geometrically robust. The method is shown to be highly effective in achieving convergence. An important feature of this method is its ability to attain arbitrary levels of description for the free energy. It is thus ideally suited to in silico measurement of elastic moduli and other material properties related to free energy perturbations. We demonstrate the utility of such applications by applying our method to calculate the Frank elastic constants of the Lebwohl-Lasher model of liquid crystals.
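
    For contrast with the parameter-free method described, here is the standard parameter-dependent Wang-Landau scheme in miniature, estimating the density of states of a small periodic 1D Ising chain; the sweep count and flatness criterion are exactly the a-priori-unknown parameters the abstract refers to, and the values below are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 16
      spins = rng.choice([-1, 1], size=N)
      energy = lambda s: -int(np.sum(s * np.roll(s, 1)))

      # Possible energies of the periodic chain: -N, -N+4, ..., N
      levels = {e: i for i, e in enumerate(range(-N, N + 1, 4))}
      ln_g = np.zeros(len(levels)); hist = np.zeros(len(levels))
      f, E = 1.0, energy(spins)
      while f > 1e-4:
          for _ in range(20000):
              i = rng.integers(N)
              spins[i] *= -1
              E_new = energy(spins)
              # accept with min(1, g(E)/g(E_new)); otherwise undo the flip
              if np.log(rng.random()) < ln_g[levels[E]] - ln_g[levels[E_new]]:
                  E = E_new
              else:
                  spins[i] *= -1
              ln_g[levels[E]] += f; hist[levels[E]] += 1
          if hist.min() > 0.8 * hist.mean():   # flatness check
              hist[:] = 0; f /= 2              # refine the modification factor
      print(np.round(ln_g - ln_g[0], 2))       # ln g(E) relative to ground state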

  5. Computational complexity of time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Whitfield, J. D.; Yung, M.-H.; Tempel, D. G.; Boixo, S.; Aspuru-Guzik, A.

    2014-08-01

    Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn-Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds.

  6. Computer-Aided Evaluation of Liver Functional Assessment

    PubMed Central

    Lesmo, Leonardo; Saitta, Lorenza; Torasso, Piero

    1980-01-01

    This paper describes the organization of a computerized system whose purpose is to ascertain the presence of functional impairments in the liver and to evaluate their seriousness. The system is composed of categorical rules and decision procedures. The symptoms and the anamnestic data of a given patient trigger the categorical rules which constrain the set of hypothesizable impairments. This set of hypotheses acts as a focus of attention of the system by allowing the selection of the bioclinical tests most relevant to determining the seriousness of those impairments. The outcomes of the selected tests are input to the decision procedures operating on the basis of fuzzy relations which allow a quantitative evaluation of the seriousness of the hypothesized impairments. Whereas the categorical rules have been built on the basis of the a-priori knowledge of the physicians, the parameters of the fuzzy relations have been learned automatically by means of a fuzzy inference procedure.

  7. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved by using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
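
    The expansion-plus-time-integration idea can be shown in one dimension, where the Helmholtz eigenfunctions are known analytically (sin nx on a rod with fixed ends); the paper's contribution is doing this for arbitrarily shaped 2-D gear regions with numerically computed eigenfunctions. A minimal sketch:

      # u_t = u_xx on [0, pi], u = 0 at both ends. Eigenfunctions sin(n x)
      # turn the PDE into ODEs for coefficients a_n, integrated with RK4.
      import numpy as np

      nmodes, alpha = 20, 1.0
      n = np.arange(1, nmodes + 1)
      x = np.linspace(0, np.pi, 200)

      # Fourier sine coefficients of the initial profile u(x,0) = x*(pi - x):
      # 8/(pi n^3) for odd n, zero for even n.
      a = np.where(n % 2 == 1, 8.0 / (np.pi * n**3), 0.0)

      def rhs(a):                   # da_n/dt = -alpha n^2 a_n (+ source terms)
          return -alpha * n**2 * a

      dt = 1e-4
      for _ in range(1000):         # integrate to t = 0.1 with classical RK4
          k1 = rhs(a); k2 = rhs(a + dt/2*k1)
          k3 = rhs(a + dt/2*k2); k4 = rhs(a + dt*k3)
          a = a + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

      u = np.sin(np.outer(x, n)) @ a    # reconstruct temperature at t = 0.1
      print(float(u.max()))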

  8. Astrocytes, Synapses and Brain Function: A Computational Approach

    NASA Astrophysics Data System (ADS)

    Nadkarni, Suhita

    2006-03-01

    Modulation of synaptic reliability is one of the leading mechanisms involved in long-term potentiation (LTP) and long-term depression (LTD) and therefore has implications in information processing in the brain. A recently discovered mechanism for modulating synaptic reliability critically involves recruitment of astrocytes, star-shaped cells that outnumber the neurons in most parts of the central nervous system. Astrocytes until recently were thought to be subordinate cells merely participating in supporting neuronal functions. New evidence, however, made available by advances in imaging technology has changed the way we envision the role of these cells in synaptic transmission and as modulators of neuronal excitability. We put forward a novel mathematical framework based on the biophysics of the bidirectional neuron-astrocyte interactions that quantitatively accounts for two distinct experimental manifestations of recruitment of astrocytes in synaptic transmission: a) the transformation of a low-fidelity synapse into a high-fidelity synapse and b) enhanced postsynaptic spontaneous currents when astrocytes are activated. Such a framework is not only useful for modeling neuronal dynamics in a realistic environment but also provides a conceptual basis for interpreting experiments. Based on this modeling framework, we explore the role of astrocytes for neuronal network behavior such as synchrony and correlations and compare with experimental data from cultured networks.

  9. Computation of pair distribution functions and three-dimensional densities with a reduced variance principle

    NASA Astrophysics Data System (ADS)

    Borgis, Daniel; Assaraf, Roland; Rotenberg, Benjamin; Vuilleumier, Rodolphe

    2013-12-01

    No fancy statistical objects here; we go back to the computation of one of the most basic and fundamental quantities in the statistical mechanics of fluids, namely the pair distribution functions. Those functions are usually computed in molecular simulations by using histogram techniques. We show here that they can be estimated using global information on the instantaneous forces acting on the particles, and that this leads to a reduced variance compared to the standard histogram estimators. The technique is extended successfully to the computation of three-dimensional solvent densities around tagged molecular solutes, quantities that are noisy and slow to converge, using histograms.
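
    For reference, the histogram estimator that the force-based approach improves upon looks like this; uniform random points in a periodic box (an ideal gas, so g(r) ≈ 1) stand in for real simulation output.

      import numpy as np

      rng = np.random.default_rng(5)
      L, npart = 10.0, 200
      pos = rng.uniform(0, L, (npart, 3))

      d = pos[:, None, :] - pos[None, :, :]
      d -= L * np.round(d / L)                     # minimum-image convention
      r = np.linalg.norm(d, axis=-1)[np.triu_indices(npart, k=1)]

      bins = np.linspace(0.01, L / 2, 50)
      hist, edges = np.histogram(r, bins=bins)
      rho = npart / L**3
      shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
      g = hist / (0.5 * npart * rho * shell)       # normalize by ideal-gas pairs
      print(np.round(g[:10], 2))                   # ~1 for an ideal gas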

  10. Passive Dendrites Enable Single Neurons to Compute Linearly Non-separable Functions

    PubMed Central

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non
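
    The flavor of these results can be captured in a few lines: the feature-binding function (x1 AND x2) OR (x3 AND x4) is linearly non-separable, yet a binary neuron with two thresholding dendritic sub-units computes it with purely excitatory inputs. A minimal sketch:

      from itertools import product

      def subunit(total, theta=2):        # spiking dendritic non-linearity
          return 1 if total >= theta else 0

      def neuron(x1, x2, x3, x4):
          d1 = subunit(x1 + x2)           # dendrite 1 binds features (x1, x2)
          d2 = subunit(x3 + x4)           # dendrite 2 binds features (x3, x4)
          return 1 if d1 + d2 >= 1 else 0 # somatic threshold

      for x in product([0, 1], repeat=4):
          assert neuron(*x) == ((x[0] and x[1]) or (x[2] and x[3]))
      print("feature-binding function computed by dendritic sub-units")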

  11. A mesh-decoupled height function method for computing interface curvature

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Desjardins, Olivier

    2015-01-01

    In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.
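
    For context, the standard height-function curvature estimate that the mesh-decoupled method builds on: with column heights h at spacing dx, kappa = h'' / (1 + h'^2)^(3/2) by central differences. A sketch on columns sampled from a circle, where the expected magnitude is 1/R:

      import numpy as np

      def curvature_from_heights(h_left, h_mid, h_right, dx):
          hx = (h_right - h_left) / (2 * dx)             # first derivative
          hxx = (h_right - 2 * h_mid + h_left) / dx**2   # second derivative
          return hxx / (1 + hx**2) ** 1.5

      dx, R = 0.1, 4.0
      xs = np.array([-dx, 0.0, dx])
      heights = np.sqrt(R**2 - xs**2)                    # circle of radius 4
      print(curvature_from_heights(*heights, dx))        # ~ -0.25 (sign = convention)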

  12. PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS

    PubMed Central

    Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.

    2013-01-01

    Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390

  13. Functional Competency Development Model for Academic Personnel Based on International Professional Qualification Standards in Computing Field

    ERIC Educational Resources Information Center

    Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon

    2016-01-01

    This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…

  14. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
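
    The core of any wall-function treatment is recovering the friction velocity from the log law u/u_tau = (1/kappa) ln(y u_tau / nu) + B instead of resolving the sublayer on the grid; a minimal Newton-iteration sketch with illustrative constants, not Barnwell's exact formulation:

      import numpy as np

      kappa, B = 0.41, 5.0

      def friction_velocity(u, y, nu, u_tau=0.05):
          """Solve the log law for u_tau given velocity u at wall distance y."""
          for _ in range(50):
              yplus = y * u_tau / nu
              f = u_tau * (np.log(yplus) / kappa + B) - u
              dfdu = np.log(yplus) / kappa + B + 1.0 / kappa
              u_tau -= f / dfdu
          return u_tau

      u_tau = friction_velocity(u=10.0, y=0.005, nu=1.5e-5)
      tau_wall = 1.2 * u_tau**2          # rho * u_tau^2 with rho = 1.2 kg/m^3
      print(u_tau, tau_wall)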

  15. Functional Specifications for Computer Aided Training Systems Development and Management (CATSDM) Support Functions. Final Report.

    ERIC Educational Resources Information Center

    Hughes, John; And Others

    This report provides a description of a Computer Aided Training System Development and Management (CATSDM) environment based on state-of-the-art hardware and software technology, and including recommendations for off the shelf systems to be utilized as a starting point in addressing the particular systematic training and instruction design and…

  16. Toward high-resolution computational design of the structure and function of helical membrane proteins.

    PubMed

    Barth, Patrick; Senes, Alessandro

    2016-06-01

    The computational design of α-helical membrane proteins is still in its infancy but has already made great progress. De novo design allows stable, specific and active minimal oligomeric systems to be obtained. Computational reengineering can improve the stability and function of naturally occurring membrane proteins. Currently, the major hurdle for the field is the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630

  17. Locating and computing in parallel all the simple roots of special functions using PVM

    NASA Astrophysics Data System (ADS)

    Plagianakos, V. P.; Nousis, N. K.; Vrahatis, M. N.

    2001-08-01

    An algorithm is proposed for locating and computing in parallel and with certainty all the simple roots of any twice continuously differentiable function in any specific interval. To compute with certainty all the roots, the proposed method is heavily based on the knowledge of the total number of roots within the given interval. To obtain this information we use results from topological degree theory and, in particular, the Kronecker-Picard approach. This theory gives a formula for the computation of the total number of roots of a system of equations within a given region, which can be computed in parallel. With this tool in hand, we construct a parallel procedure for the localization and isolation of all the roots by dividing the given region successively and applying the above formula to these subregions until the final domains contain at most one root. The subregions with no roots are discarded, while for the rest a modification of the well-known bisection method is employed for the computation of the contained root. The new aspect of the present contribution is that the computation of the total number of zeros using the Kronecker-Picard integral as well as the localization and computation of all the roots is performed in parallel using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of varied architectures. The proposed algorithm has large granularity and low synchronization, and is robust. It has been implemented and tested, and our experience is that it can massively compute with certainty all the roots in a given interval. Performance information from massive computations related to a recently proposed conjecture due to Elbert (this issue, J. Comput. Appl. Math. 133 (2001) 65-83) is reported.
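
    The divide-and-isolate strategy can be sketched compactly; here a dense sign-change scan stands in for the Kronecker-Picard root count (which the paper evaluates as an integral and parallelizes with PVM), and plain bisection finishes each isolated root.

      import numpy as np

      def count_roots(f, a, b, n=2000):        # crude stand-in for Kronecker-Picard
          x = np.linspace(a, b, n)
          s = np.sign(f(x))
          return int(np.sum(s[:-1] * s[1:] < 0))

      def isolate_and_bisect(f, a, b, tol=1e-12):
          k = count_roots(f, a, b)
          if k == 0:
              return []                        # discard rootless subregions
          if k == 1:
              lo, hi = a, b
              while hi - lo > tol:             # classical bisection
                  mid = 0.5 * (lo + hi)
                  lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
              return [0.5 * (lo + hi)]
          mid = 0.5 * (a + b)                  # still several roots: split again
          return (isolate_and_bisect(f, a, mid, tol)
                  + isolate_and_bisect(f, mid, b, tol))

      print(isolate_and_bisect(np.sin, 1.0, 20.0))   # ~pi, 2pi, ..., 6pi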

  18. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    SciTech Connect

    Druskin, V.; Lee, Ping; Knizhnerman, L.

    1996-12-31

    There is now a growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When the matrix inverse is relatively inexpensive to compute, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, generated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
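
    The building block being extended is the standard Krylov approximation f(A)b ≈ ||b|| V_m f(H_m) e_1; a dense-matrix sketch with f = exp follows (the extended variant would also include actions of A^{-1}, omitted here).

      import numpy as np
      from scipy.linalg import expm

      def arnoldi_expm(A, b, m=20):
          """Approximate exp(A) @ b from an m-dimensional Krylov subspace."""
          n = len(b)
          V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
          beta = np.linalg.norm(b); V[:, 0] = b / beta
          for j in range(m):
              w = A @ V[:, j]
              for i in range(j + 1):              # modified Gram-Schmidt
                  H[i, j] = V[:, i] @ w
                  w -= H[i, j] * V[:, i]
              H[j + 1, j] = np.linalg.norm(w)
              V[:, j + 1] = w / H[j + 1, j]
          return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

      rng = np.random.default_rng(1)
      A = -np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.standard_normal((100, 100))
      b = rng.standard_normal(100)
      print(np.linalg.norm(arnoldi_expm(A, b) - expm(A) @ b))  # approximation error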

  19. Renormalization group improved computation of correlation functions in theories with nontrivial phase diagram

    NASA Astrophysics Data System (ADS)

    Codello, Alessandro; Tonero, Alberto

    2016-07-01

    We present a simple and consistent way to compute correlation functions in interacting theories with nontrivial phase diagram. As an example we show how to consistently compute the four-point function in three dimensional Z2 -scalar theories. The idea is to perform the path integral by weighting the momentum modes that contribute to it according to their renormalization group (RG) relevance, i.e. we weight each mode according to the value of the running couplings at that scale. In this way, we are able to encode in a loop computation the information regarding the RG trajectory along which we are integrating. We show that depending on the initial condition, or initial point in the phase diagram, we obtain different behaviors of the four-point function at the endpoint of the flow.

  20. On computational algorithms for real-valued continuous functions of several variables.

    PubMed

    Sprecher, David

    2014-11-01

    The subject of this paper is algorithms for computing superpositions of real-valued continuous functions of several variables based on space-filling curves. The prototypes of these algorithms were based on Kolmogorov's dimension-reducing superpositions (Kolmogorov, 1957). Interest in these grew significantly with Hecht-Nielsen's discovery that a version of Kolmogorov's formula has an interpretation as a feedforward neural network (Hecht-Nielsen, 1987). These superpositions were constructed with devil's staircase-type functions to answer a question in functional complexity, rather than become computational algorithms, and their utility as an efficient computational tool turned out to be limited by the characteristics of the space-filling curves that they determined. After discussing the link between the algorithms and these curves, this paper presents two algorithms for the case of two variables: one based on space-filling curves with worked-out coding, and one based on the Hilbert curve (Hilbert, 1891).
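
    The Hilbert curve underlying the second algorithm has a well-known iterative index-to-coordinate map (the rotate-and-flip construction); a compact version:

      def hilbert_d2xy(order, d):
          """Map 1-D index d to (x, y) on a 2**order x 2**order Hilbert curve."""
          x = y = 0
          s, t = 1, d
          while s < 2 ** order:
              rx = 1 & (t // 2)
              ry = 1 & (t ^ rx)
              if ry == 0:                 # rotate/flip the quadrant
                  if rx == 1:
                      x, y = s - 1 - x, s - 1 - y
                  x, y = y, x
              x += s * rx
              y += s * ry
              t //= 4
              s *= 2
          return x, y

      print([hilbert_d2xy(3, d) for d in range(8)])   # first steps of the curve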

  1. Performance of computational tools in evaluating the functional impact of laboratory-induced amino acid mutations.

    PubMed

    Gray, Vanessa E; Kukurba, Kimberly R; Kumar, Sudhir

    2012-08-15

    Site-directed mutagenesis is frequently used by scientists to investigate the functional impact of amino acid mutations in the laboratory. Over 10,000 such laboratory-induced mutations have been reported in the UniProt database along with the outcomes of functional assays. Here, we explore the performance of state-of-the-art computational tools (Condel, PolyPhen-2 and SIFT) in correctly annotating the function-altering potential of 10,913 laboratory-induced mutations from 2372 proteins. We find that computational tools are very successful in diagnosing laboratory-induced mutations that elicit significant functional change in the laboratory (up to 92% accuracy). But, these tools consistently fail in correctly annotating laboratory-induced mutations that show no functional impact in the laboratory assays. Therefore, the overall accuracy of computational tools for laboratory-induced mutations is much lower than that observed for the naturally occurring human variants. We tested and rejected the possibilities that the preponderance of changes to alanine and the presence of multiple base-pair mutations in the laboratory were the reasons for the observed discordance between the performance of computational tools for natural and laboratory mutations. Instead, we discover that the laboratory-induced mutations occur predominantly at the highly conserved positions in proteins, where the computational tools have the lowest accuracy of correct prediction for variants that do not impact function (neutral). Therefore, the comparisons of experimental-profiling results with those from computational predictions need to be sensitive to the evolutionary conservation of the positions harboring the amino acid change. PMID:22685075

  2. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.

  3. Use of global functions for improvement in efficiency of nonlinear analysis. [in computer structural displacement estimation]

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Stehlin, P.; Brogan, F. A.

    1981-01-01

    A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.

  4. The Krigifier: A Procedure for Generating Pseudorandom Nonlinear Objective Functions for Computational Experimentation

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.

    1999-01-01

    Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
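
    The "convenient fiction" is directly implementable: draw a realization of a stationary Gaussian process on a grid and add a deterministic trend. A numpy sketch with a squared-exponential covariance (illustrative parameters, and numpy in place of the report's S-PLUS implementation):

      import numpy as np

      rng = np.random.default_rng(11)
      x = np.linspace(0.0, 10.0, 200)

      # Squared-exponential covariance plus a quadratic mean trend.
      K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 1.5**2)
      mean = 0.05 * (x - 5.0)**2
      L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))   # jitter for stability
      f = mean + L @ rng.standard_normal(len(x))

      print(float(f.min()), float(x[np.argmin(f)]))   # a test objective to minimize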

  5. Monte Carlo computation of the spectral density function in the interacting scalar field theory

    NASA Astrophysics Data System (ADS)

    Abbasi, Navid; Davody, Ali

    2015-12-01

    We study the ϕ^4 field theory in d = 4. Using the bold diagrammatic Monte Carlo method, we solve the Schwinger-Dyson equations and find the spectral density function of the theory beyond the weak coupling regime. We then compare our result with the one obtained from perturbation theory. Finally, we utilize our Monte Carlo result to find the vertex function as the basis for the computation of the physical scattering amplitudes.

  6. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    SciTech Connect

    Ranken, D.; George, J.

    1993-12-31

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  7. A fast computation method for MUSIC spectrum function based on circular arrays

    NASA Astrophysics Data System (ADS)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computational cost of evaluating the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, in the process of calculating the MUSIC spectrum, the cyclic structure of the steering vector allows the inner product in the spatial-spectrum calculation to be realised by cyclic convolution. The computational cost of the MUSIC spectrum is markedly lower than that of the conventional method, making this a very practical approach for MUSIC spectrum computation with circular arrays.

  8. Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Li, Johnson

    2013-01-01

    The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…

  9. Integrating Computer Software into the Functional Mathematics Curriculum: A Diagnostic Approach.

    ERIC Educational Resources Information Center

    Prince George's County Public Schools, Upper Marlboro, MD.

    This curriculum guide was written to provide information on the skills covered in the Maryland Functional Math Test (MFMT) and to outline a process which will allow teachers to fully integrate computer software into their instruction. The materials produced in this directory are designed to assist mild to moderately handicapped students who will…

  10. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
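
    sympy's apart performs the same real decomposition, keeping the irreducible quadratic intact rather than splitting it into complex roots; for instance:

      # Partial fractions over the reals: the quadratic factor stays whole,
      # giving A/(x - 1) + (B*x + C)/(x**2 + x + 1) with rational A, B, C.
      import sympy as sp

      x = sp.symbols('x')
      expr = (3*x**2 + 2*x + 5) / ((x - 1) * (x**2 + x + 1))
      print(sp.apart(expr))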

  11. A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function

    ERIC Educational Resources Information Center

    Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.; Blemker, Silvia S.

    2015-01-01

    Purpose: This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method: We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy…

  12. PuFT: Computer-Assisted Program for Pulmonary Function Tests.

    ERIC Educational Resources Information Center

    Boyle, Joseph

    1983-01-01

    PuFT computer program (Microsoft Basic) is designed to help in understanding/interpreting pulmonary function tests (PFT). The program provides predicted values for common PFT after entry of patient data, calculates/plots graph simulating force vital capacity (FVC), and allows observations of effects on predicted PFT values and FVC curve when…

  13. Maple (Computer Algebra System) in Teaching Pre-Calculus: Example of Absolute Value Function

    ERIC Educational Resources Information Center

    Tuluk, Güler

    2014-01-01

    Modules in Computer Algebra Systems (CAS) make Mathematics interesting and easy to understand. The present study focused on the implementation of the algebraic, tabular (numerical), and graphical approaches used for the construction of the concept of absolute value function in teaching mathematical content knowledge along with Maple 9. The study…

  14. Computer generation of symbolic network functions - A new theory and implementation.

    NASA Technical Reports Server (NTRS)

    Alderson, G. E.; Lin, P.-M.

    1972-01-01

    A new method is presented for obtaining network functions in which some, none, or all of the network elements are represented by symbolic parameters (i.e., symbolic network functions). Unlike the topological tree enumeration or signal flow graph methods generally used to derive symbolic network functions, the proposed procedure employs fast, efficient, numerical-type algorithms to determine the contribution of those network branches that are not represented by symbolic parameters. A computer program called NAPPE (for Network Analysis Program using Parameter Extractions) and incorporating all of the concepts discussed has been written. Several examples illustrating the usefulness and efficiency of NAPPE are presented.

  15. On computation and use of Fourier coefficients for associated Legendre functions

    NASA Astrophysics Data System (ADS)

    Gruber, Christian; Abrykosov, Oleh

    2016-06-01

    The computation of spherical harmonic series at very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is keeping intermediate results inside the numerical range of the data type used, as under- and overflow arise during the calculations. Extended data types are currently not desirable, since the arithmetic complexity grows exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, regular grid transformations can be applied that are highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, ensuring a defined numerical range for each constituent wave number separately. The results, to high degree and order, show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions is examined with respect to floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality, the obtained accuracy estimates of the Fourier coefficients transfer directly to the associated Legendre functions themselves and to derived functionals as well. They can therefore provide essential insight for modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond 5 × 5 arcmin resolution.

  16. How to Compute Green's Functions for Entire Mass Trajectories Within Krylov Solvers

    NASA Astrophysics Data System (ADS)

    Glässner, Uwe; Güsken, Stephan; Lippert, Thomas; Ritzenhöfer, Gero; Schilling, Klaus; Frommer, Andreas

    The availability of efficient Krylov subspace solvers plays a vital role in the solution of a variety of numerical problems in computational science. Here we consider lattice field theory. We present a new general numerical method to compute many Green's functions for complex non-singular matrices within one iteration process. Our procedure applies to matrices of structure A = D - m, with m proportional to the unit matrix, and can be integrated within any Krylov subspace solver. We can compute the derivatives x(n) of the solution vector x with respect to the parameter m and construct the Taylor expansion of x around m. We demonstrate the advantages of our method using a minimal residual solver. Here the procedure requires one intermediate vector for each Green's function to be computed. As a real-life example, we determine a mass trajectory of the Wilson fermion matrix for lattice QCD. Here we find that we can obtain Green's functions at all masses ≥ m at the price of one inversion at mass m.
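
    The structure being exploited is visible even with dense linear algebra. Differentiating (D - m)x(m) = b with respect to m gives (D - m)x^(n) = n x^(n-1), so every derivative system reuses the same matrix. The sketch below solves these systems directly (a toy illustration; the paper accumulates the derivatives inside a single Krylov iteration process rather than performing repeated solves):

      import numpy as np
      from math import factorial

      def taylor_solution(D, m0, b, order=4):
          # Solve (D - m0*I) x = b, then the derivative systems
          # (D - m0*I) x^(n) = n * x^(n-1), all with the same matrix.
          A = D - m0 * np.eye(D.shape[0])
          derivs = [np.linalg.solve(A, b)]
          for n in range(1, order + 1):
              derivs.append(n * np.linalg.solve(A, derivs[-1]))
          # Taylor expansion of the solution (Green's function) around m0
          return lambda m: sum(xn * (m - m0)**n / factorial(n)
                               for n, xn in enumerate(derivs))

    Once the expansion is built, each additional mass on the trajectory costs only a polynomial evaluation, which is the sense in which a whole trajectory comes at the price of one inversion.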

  17. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
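
    Why normal-mode coordinates give linear cost per frequency point can be sketched as follows (the names C_Phi = C*Phi and Phi_B = Phi^T*B for the modal output and input matrices are assumptions for illustration; the paper's formulation also covers closed-loop systems): with modal damping the plant is diagonal, so each frequency point needs only one scalar receptance per mode instead of a dense solve.

      import numpy as np

      def modal_frf(freqs, wn, zeta, Phi_B, C_Phi):
          # wn, zeta: (n,) modal frequencies and damping ratios
          # Phi_B: (n, n_in) modal inputs; C_Phi: (n_out, n) modal outputs
          H = []
          for w in freqs:
              g = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)  # modal receptances
              H.append(C_Phi @ (g[:, None] * Phi_B))         # O(n) per frequency
          return np.array(H)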

  18. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  19. Understanding entangled cerebral networks: a prerequisite for restoring brain function with brain-computer interfaces.

    PubMed

    Mandonnet, Emmanuel; Duffau, Hugues

    2014-01-01

    Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported the absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding the anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restoring brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiating brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved.

  20. Redox Biology: Computational Approaches to the Investigation of Functional Cysteine Residues

    PubMed Central

    Marino, Stefano M.

    2011-01-01

    Abstract Cysteine (Cys) residues serve many functions, such as catalysis, stabilization of protein structure through disulfides, metal binding, and regulation of protein function. Cys residues are also subject to numerous post-translational modifications. In recent years, various computational tools aiming at classifying and predicting different functional categories of Cys have been developed, particularly for structural and catalytic Cys. On the other hand, given the complexity of the subject, bioinformatics approaches have been less successful for the investigation of regulatory Cys sites. In this review, we introduce different functional categories of Cys residues. For each category, an overview of state-of-the-art bioinformatics methods and tools is provided, along with examples of successful applications and potential limitations associated with each approach. Finally, we discuss Cys-based redox switches, which modify the view of distinct functional categories of Cys in proteins. Antioxid. Redox Signal. 15, 135–146. PMID:20812876

  1. Non-parametric cell-based photometric proxies for galaxy morphology: methodology and application to the morphologically defined star formation-stellar mass relation of spiral galaxies in the local universe

    NASA Astrophysics Data System (ADS)

    Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Robotham, A. S. G.; Seibert, M.; Kelvin, L. S.

    2014-02-01

    We present a non-parametric cell-based method of selecting highly pure and largely complete samples of spiral galaxies using photometric and structural parameters as provided by standard photometric pipelines and simple shape fitting algorithms. The performance of the method is quantified for different parameter combinations, using purely human-based classifications as a benchmark. The discretization of the parameter space allows markedly better selection than commonly used proxies relying on a fixed curve or surface of separation. Moreover, we find structural parameters derived using passbands longwards of the g band and linked to older stellar populations, especially the stellar mass surface density μ* and the r-band effective radius re, to perform at least as well as parameters more traditionally linked to the identification of spirals by means of their young stellar populations, e.g. UV/optical colours. In particular, the distinct bimodality in the parameter μ*, consistent with expectations of different evolutionary paths for spirals and ellipticals, represents an often overlooked yet powerful parameter in differentiating between spiral and non-spiral/elliptical galaxies. We use the cell-based method for the optical parameter set including re in combination with the Sérsic index n and the i-band magnitude to investigate the intrinsic specific star formation rate-stellar mass relation (ψ*-M*) for a morphologically defined volume-limited sample of local Universe spiral galaxies. The relation is found to be well described by ψ* ∝ M*^(-0.5) over the range 10^9.5 ≤ M* ≤ 10^11 M⊙ with a mean interquartile range of 0.4 dex. This is somewhat steeper than previous determinations based on colour-selected samples of star-forming galaxies, primarily due to the inclusion in the sample of red quiescent discs.

  2. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.

  3. Computational Perspectives into Plasmepsins Structure—Function Relationship: Implications to Inhibitors Design

    PubMed Central

    Gil L., Alejandro; Valiente, Pedro A.; Pascutti, Pedro G.; Pons, Tirso

    2011-01-01

    The development of efficient and selective antimalarials remains a challenge for the pharmaceutical industry. The aspartic proteases plasmepsins, whose inhibition leads to parasite death, are classified as targets for the design of potent drugs. Combinatorial synthesis is currently being used to generate inhibitor libraries for these enzymes, and together with computational methodologies it has been demonstrated to be capable of selecting lead compounds. The high structural flexibility of plasmepsins, revealed by their X-ray structures and molecular dynamics simulations, makes the prediction of putative binding modes, and therefore the use of common computational tools like docking and free-energy calculations, even more complicated. In this review, we survey the computational strategies utilized so far in structure-function relationship studies of the plasmepsin family, with special focus on recent advances in the improvement of the linear interaction estimation (LIE) method, which is one of the most successful methodologies for evaluating plasmepsin-inhibitor binding affinity. PMID:21760810

  4. Computational perspectives into plasmepsins structure-function relationship: implications to inhibitors design.

    PubMed

    Gil L, Alejandro; Valiente, Pedro A; Pascutti, Pedro G; Pons, Tirso

    2011-01-01

    The development of efficient and selective antimalarials remains a challenge for the pharmaceutical industry. The aspartic proteases plasmepsins, whose inhibition leads to parasite death, are classified as targets for the design of potent drugs. Combinatorial synthesis is currently being used to generate inhibitor libraries for these enzymes, and together with computational methodologies it has been demonstrated to be capable of selecting lead compounds. The high structural flexibility of plasmepsins, revealed by their X-ray structures and molecular dynamics simulations, makes the prediction of putative binding modes, and therefore the use of common computational tools like docking and free-energy calculations, even more complicated. In this review, we survey the computational strategies utilized so far in structure-function relationship studies of the plasmepsin family, with special focus on recent advances in the improvement of the linear interaction estimation (LIE) method, which is one of the most successful methodologies for evaluating plasmepsin-inhibitor binding affinity. PMID:21760810

  5. Time Utility Functions for Modeling and Evaluating Resource Allocations in a Heterogeneous Computing System

    SciTech Connect

    Briceno, Luis Diego; Khemka, Bhavesh; Siegel, Howard Jay; Maciejewski, Anthony A; Groer, Christopher S; Koenig, Gregory A; Okonski, Gene D; Poole, Stephen W

    2011-01-01

    This study considers a heterogeneous computing system and corresponding workload being investigated by the Extreme Scale Systems Center (ESSC) at Oak Ridge National Laboratory (ORNL). The ESSC is part of a collaborative effort between the Department of Energy (DOE) and the Department of Defense (DoD) to deliver research, tools, software, and technologies that can be integrated, deployed, and used in both DOE and DoD environments. The heterogeneous system and workload described here are representative of a prototypical computing environment being studied as part of this collaboration. Each task can exhibit a time-varying importance or utility to the overall enterprise. In this system, an arriving task has an associated priority and precedence. The priority is used to describe the importance of a task, and precedence is used to describe how soon the task must be executed. These two metrics are combined to create a utility function curve that indicates how valuable it is for the system to complete a task at any given moment. This research focuses on using time-utility functions to generate a metric that can be used to compare the performance of different resource schedulers in a heterogeneous computing system. The contributions of this paper are: (a) a mathematical model of a heterogeneous computing system where tasks arrive dynamically and need to be assigned based on their priority, precedence, utility characteristic class, and task execution type, (b) the use of priority and precedence to generate time-utility functions that describe the value a task has at any given time, (c) the derivation of a metric based on the total utility gained from completing tasks to measure the performance of the computing environment, and (d) a comparison of the performance of resource allocation heuristics in this environment.
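
    An illustrative time-utility function and the derived metric (the curve shape and all numbers below are assumptions, not ORNL's actual curves): utility stays at the task's priority level until a precedence-driven deadline and then decays, and a resource scheduler is scored by the total utility accrued from completed tasks.

      def utility(t, priority, deadline, window):
          # Full value before the deadline, linear decay to zero afterwards
          if t <= deadline:
              return float(priority)
          return max(0.0, priority * (1.0 - (t - deadline) / window))

      # (completion_time, priority, deadline, decay_window) -- hypothetical tasks
      completed = [(8.0, 10, 5.0, 4.0), (4.2, 3, 6.0, 2.0)]
      print(sum(utility(t, p, d, w) for t, p, d, w in completed))  # 2.5 + 3 = 5.5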

  6. Coal-seismic, desktop computer programs in BASIC; Part 6, Develop rms velocity functions and apply mute and normal moveout

    USGS Publications Warehouse

    Hasbrouck, W.P.

    1983-01-01

    Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.

  7. Structure, dynamics, and function of the monooxygenase P450 BM-3: insights from computer simulations studies

    NASA Astrophysics Data System (ADS)

    Roccatano, Danilo

    2015-07-01

    The monooxygenase P450 BM-3 is a NADPH-dependent fatty acid hydroxylase enzyme isolated from the soil bacterium Bacillus megaterium. As a pivotal member of the cytochrome P450 superfamily, it has been intensely studied for the comprehension of structure-dynamics-function relationships in this class of enzymes. In addition, due to its peculiar properties, it is also a promising enzyme for biochemical and biomedical applications. However, despite these efforts, a full understanding of the enzyme's structure and dynamics has not yet been achieved. Computational studies, particularly molecular dynamics (MD) simulations, have contributed importantly to this endeavor by providing new insights at an atomic level regarding the correlations between structure, dynamics, and function of the protein. This topical review summarizes computational studies based on MD simulations of the cytochrome P450 BM-3 and gives an outlook on future directions.

  8. A comparison of computational methods and algorithms for the complex gamma function

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1974-01-01

    A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. The methods and algorithms reported include Chebyshev approximations, Padé expansion, and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program both for individual applications and for inclusion in subroutine libraries.
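
    The asymptotic-series approach surveyed can be sketched with a few Stirling terms (production codes such as Kuki's Algorithm 421 add argument reduction and reflection to handle small or left-half-plane arguments, which this sketch omits):

      import numpy as np
      from scipy.special import loggamma

      def stirling_loggamma(z):
          # log Gamma(z) ~ (z - 1/2) log z - z + log(2*pi)/2
          #                + 1/(12 z) - 1/(360 z^3) + 1/(1260 z^5)
          s = (z - 0.5) * np.log(z) - z + 0.5 * np.log(2 * np.pi)
          return s + 1/(12*z) - 1/(360*z**3) + 1/(1260*z**5)

      z = 6.0 + 3.0j
      print(stirling_loggamma(z))  # agrees with the library value to many digits
      print(loggamma(z))           # reference value from scipy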

  9. Method, systems, and computer program products for implementing function-parallel network firewall

    DOEpatents

    Fulp, Errin W.; Farley, Ryan J.

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
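
    A toy sketch of the function-parallel idea (the rule set, the split, and the combination policy below are illustrative, not the patent's design): each node filters the same packet against its own portion of one ordered rule set, and taking the match with the smallest global index reproduces first-match-wins semantics.

      import ipaddress

      RULES = [("10.0.0.0/8", "deny"),
               ("192.168.1.0/24", "allow"),
               ("0.0.0.0/0", "deny")]          # ordered: first match wins
      PORTIONS = [RULES[0::2], RULES[1::2]]     # split across two firewall nodes

      def node_match(src_ip, portion):
          # One firewall node: return its first matching rule, if any
          for rule in portion:
              if ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule[0]):
                  return rule
          return None

      def parallel_verdict(src_ip):
          hits = [h for h in (node_match(src_ip, p) for p in PORTIONS) if h]
          # The globally earliest rule among the per-node matches decides
          return min(hits, key=RULES.index)[1] if hits else "deny"

      print(parallel_verdict("192.168.1.7"))   # -> allow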

  10. A deconvolution function for single photon emission computed tomography with constant attenuation

    SciTech Connect

    Tomitani, T.

    1986-02-01

    A shift-invariant spatial deconvolution function for single-photon-emission computed tomography with constant attenuation is presented. The image reconstruction algorithm is similar to the conventional convolution-backprojection algorithm, except that an exponential weight is applied in the backprojection process. The deconvolution function was obtained as a solution of a generalized Schlömilch integral equation. A method to solve the integral equation is described briefly. The present deconvolution function incorporates a frequency roll-off, and the image resolution can be preset. At the extreme of ideal image reconstruction, the deconvolution function is identical to that deduced by Kim et al., and its Fourier transform was proved to be identical to the filter deduced by Tretiak and Delaney and by Gullberg and Budinger. The variance of the reconstructed image was analyzed and some numerical results are given. The algorithm was tested with computer simulation.

  11. Liver Function After Irradiation Based on Computed Tomographic Portal Vein Perfusion Imaging

    SciTech Connect

    Cao, Yue; Pan, Charlie; Balter, James M.; Platt, Joel F.; Francis, Isaac R.; Knol, James A.; Normolle, Daniel; Ben-Josef, Edgar; Ten Haken, Randall K.; Lawrence, Theodore S.

    2008-01-01

    Purpose: To determine whether individual and regional liver sensitivity to radiation could be assessed by measuring liver perfusion during a course of treatment using dynamic contrast-enhanced computed tomography scanning. Methods and Materials: Patients with intrahepatic cancer undergoing conformal radiotherapy underwent dynamic contrast-enhanced computed tomography (to measure perfusion distribution) and an indocyanine extraction study (to measure liver function) before, during, and 1 month after treatment. We hoped to determine whether the residual functioning liver (i.e., those regions showing portal vein perfusion) could be used to predict overall liver function after irradiation. Results: Radiation doses from 45 to 84 Gy resulted in undetectable regional portal vein perfusion 1 month after treatment. The volume of each liver with undetectable portal vein perfusion ranged from 0 to 39% and depended both on the patient's sensitivity and on dose distribution. There was a significant correlation between indocyanine green clearance and the mean of the estimated portal vein perfusion in the functional liver parenchyma (p < 0.001). Conclusion: This study reveals substantial individual variability in the sensitivity of the liver to irradiation. In addition, these findings suggest that hepatic perfusion imaging may be a marker for liver function and has the potential to be a tool for individualizing therapy.

  12. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  13. Clinical Validation of 4-Dimensional Computed Tomography Ventilation With Pulmonary Function Test Data

    SciTech Connect

    Brennan, Douglas; Schubert, Leah; Diot, Quentin; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Martel, Mary K.; Linderman, Derek; Gaspar, Laurie E.; Miften, Moyed; Kavanagh, Brian D.; Vinogradskiy, Yevgeniy

    2015-06-01

    Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change–based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean), ventilation V20 (volume of lung ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT ventilation data could predict for normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT
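
    The two summary metrics reduce a whole ventilation map to single numbers; a minimal sketch, assuming hypothetical array names and reading "≤20% ventilation" as ventilation normalized to its maximum (the record does not pin down the normalization):

      import numpy as np

      def ventilation_metrics(vent_map, lung_mask):
          v = vent_map[lung_mask]
          cov = v.std() / v.mean()            # coefficient of variation (SD/mean)
          v20 = np.mean(v / v.max() <= 0.20)  # lung fraction at <=20% ventilation
          return cov, v20

      # Across patients, correlate with spirometry (fev1 is hypothetical data):
      # r = np.corrcoef(covs, fev1)[0, 1]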

  14. Computer-Based Cognitive Training for Executive Functions after Stroke: A Systematic Review

    PubMed Central

    van de Ven, Renate M.; Murre, Jaap M. J.; Veltman, Dick J.; Schmand, Ben A.

    2016-01-01

    Background: Stroke commonly results in cognitive impairments in working memory, attention, and executive function, which may be restored with appropriate training programs. Our aim was to systematically review the evidence for computer-based cognitive training of executive dysfunctions. Methods: Studies were included if they concerned adults who had suffered stroke or other types of acquired brain injury, if the intervention was computer training of executive functions, and if the outcome was related to executive functioning. We searched in MEDLINE, PsycINFO, Web of Science, and The Cochrane Library. Study quality was evaluated based on the CONSORT Statement. Treatment effect was evaluated based on differences compared to pre-treatment and/or to a control group. Results: Twenty studies were included. Two were randomized controlled trials that used an active control group. The other studies included multiple baselines, a passive control group, or were uncontrolled. Improvements were observed in tasks similar to the training (near transfer) and in tasks dissimilar to the training (far transfer). However, these effects were not larger in trained than in active control groups. Two studies evaluated neural effects and found changes in both functional and structural connectivity. Most studies suffered from methodological limitations (e.g., lack of an active control group and no adjustment for multiple testing) hampering differentiation of training effects from spontaneous recovery, retest effects, and placebo effects. Conclusions: The positive findings of most studies, including neural changes, warrant continuation of research in this field, but only if its methodological limitations are addressed. PMID:27148007

  15. Using computational fluid dynamics to test functional and ecological hypotheses in fossil taxa

    NASA Astrophysics Data System (ADS)

    Rahman, Imran

    2016-04-01

    Reconstructing how ancient organisms moved and fed is a major focus of study in palaeontology. Traditionally, this has been hampered by a lack of objective data on the functional morphology of extinct species, especially those without a clear modern analogue. However, cutting-edge techniques for characterizing specimens digitally and in three dimensions, coupled with state-of-the-art computer models, now provide a robust framework for testing functional and ecological hypotheses even in problematic fossil taxa. One such approach is computational fluid dynamics (CFD), a method for simulating fluid flows around objects that has primarily been applied to complex engineering-design problems. Here, I will present three case studies of CFD applied to fossil taxa, spanning a range of specimen sizes, taxonomic groups and geological ages. First, I will show how CFD enabled a rigorous test of hypothesized feeding modes in an enigmatic Ediacaran organism with three-fold symmetry, revealing previously unappreciated complexity of pre-Cambrian ecosystems. Second, I will show how CFD was used to evaluate hydrodynamic performance and feeding in Cambrian stem-group echinoderms, shedding light on the probable feeding strategy of the latest common ancestor of all deuterostomes. Third, I will show how CFD allowed us to explore the link between form and function in Mesozoic ichthyosaurs. These case studies serve to demonstrate the enormous potential of CFD for addressing long-standing hypotheses for a variety of fossil taxa, opening up an exciting new avenue in palaeontological studies of functional morphology.

  16. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2001-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.

  17. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients. Revised

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2002-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
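
    For concreteness, the older 7-coefficient NASA polynomial form evaluates the tabulated functions as below; this is a sketch for a single temperature interval, and the current NASA Glenn files read by CAP use a 9-coefficient variant with extra inverse-temperature terms.

      import numpy as np

      def nasa7(T, a):
          # a = (a1..a7): least-squares coefficients for one T interval
          a1, a2, a3, a4, a5, a6, a7 = a
          cp_R = a1 + a2*T + a3*T**2 + a4*T**3 + a5*T**4            # Cp/R
          h_RT = (a1 + a2*T/2 + a3*T**2/3 + a4*T**3/4
                  + a5*T**4/5 + a6/T)                               # H/(R*T)
          s_R = (a1*np.log(T) + a2*T + a3*T**2/2
                 + a4*T**3/3 + a5*T**4/4 + a7)                      # S/R
          return cp_R, h_RT, s_R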

  18. Systematic construction of density functionals based on matrix product state computations

    NASA Astrophysics Data System (ADS)

    Lubasch, Michael; Fuks, Johanna I.; Appel, Heiko; Rubio, Angel; Cirac, J. Ignacio; Bañuls, Mari-Carmen

    2016-08-01

    We propose a systematic procedure for the approximation of density functionals in density functional theory that consists of two parts. First, for the efficient approximation of a general density functional, we introduce an efficient ansatz whose non-locality can be increased systematically. Second, we present a fitting strategy that is based on systematically increasing a reasonably chosen set of training densities. We investigate our procedure in the context of strongly correlated fermions on a one-dimensional lattice in which we compute accurate training densities with the help of matrix product states. Focusing on the exchange-correlation energy, we demonstrate how an efficient approximation can be found that includes and systematically improves beyond the local density approximation. Importantly, this systematic improvement is shown for target densities that are quite different from the training densities.

  19. Effective electron displacements: A tool for time-dependent density functional theory computational spectroscopy

    SciTech Connect

    Guido, Ciro A.; Cortona, Pietro; Adamo, Carlo

    2014-03-14

    We extend our previous definition of the metric Δr for electronic excitations in the framework of the time-dependent density functional theory [C. A. Guido, P. Cortona, B. Mennucci, and C. Adamo, J. Chem. Theory Comput. 9, 3118 (2013)], by including a measure of the difference of electronic position variances in passing from occupied to virtual orbitals. This new definition, called Γ, permits applications in those situations where the Δr-index is not helpful: transitions in centrosymmetric systems and Rydberg excitations. The Γ-metric is then extended by using the Natural Transition Orbitals, thus providing an intuitive picture of how locally the electron density changes during the electronic transitions. Furthermore, the Γ values give insight about the functional performances in reproducing different type of transitions, and allow one to define a “confidence radius” for GGA and hybrid functionals.

  20. Computing the three-point correlation function of galaxies in O(N^2) time

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2015-12-01

    We present an algorithm that computes the multipole coefficients of the galaxy three-point correlation function (3PCF) without explicitly considering triplets of galaxies. Rather, centring on each galaxy in the survey, it expands the radially binned density field in spherical harmonics and combines these to form the multipoles, without ever requiring the relative angle between a pair of galaxies about the central one. This approach scales with number and number density in the same way as the two-point correlation function, allowing run-times that are comparable, and 500 times faster than a naive triplet count. It is exact in angle and easily handles edge correction. We demonstrate the algorithm on the LasDamas SDSS-DR7 mock catalogues, computing an edge-corrected 3PCF out to 90 Mpc h-1 in under an hour on modest computing resources. We expect this algorithm will make it possible to obtain the large-scale 3PCF for upcoming surveys such as Euclid, the Large Synoptic Survey Telescope (LSST), and the Dark Energy Spectroscopic Instrument.
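
    The core step for a single primary galaxy can be sketched as follows (bin edges, pair weights, normalization constants, and edge correction are all omitted for brevity): expand the radially binned neighbour field in spherical harmonics, then pair radial bins; no loop over triplets ever appears.

      import numpy as np
      from scipy.special import sph_harm

      def multipoles_one_primary(dx, r_edges, lmax):
          # dx: (N, 3) separations of neighbours from one primary galaxy
          r = np.linalg.norm(dx, axis=1)
          polar = np.arccos(dx[:, 2] / r)
          azim = np.arctan2(dx[:, 1], dx[:, 0])
          nbins = len(r_edges) - 1
          rbin = np.digitize(r, r_edges) - 1
          ok = (rbin >= 0) & (rbin < nbins)
          zeta = np.zeros((lmax + 1, nbins, nbins))
          for l in range(lmax + 1):
              for m in range(-l, l + 1):
                  y = sph_harm(m, l, azim[ok], polar[ok])  # scipy: azimuth first
                  alm = np.zeros(nbins, dtype=complex)
                  np.add.at(alm, rbin[ok], np.conj(y))     # a_lm per radial bin
                  zeta[l] += np.real(np.outer(alm, np.conj(alm)))
          return zeta  # ~ zeta_l(r1, r2) up to normalization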

  1. Temporal Expression of Peripheral Blood Leukocyte Biomarkers in a Macaca fascicularis Infection Model of Tuberculosis; Comparison with Human Datasets and Analysis with Parametric/Non-parametric Tools for Improved Diagnostic Biomarker Identification

    PubMed Central

    Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham

    2016-01-01

    A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an 'early' FOS-associated response to a 'late' predominantly type I interferon-driven response, with a coincident reduction in expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers that may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in the expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons

  2. Computing light statistics in heterogeneous media based on a mass weighted probability density function method.

    PubMed

    Jenny, Patrick; Mourad, Safer; Stamm, Tobias; Vöge, Markus; Simon, Klaus

    2007-08-01

    Based on the transport theory, we present a modeling approach to light scattering in turbid material. It uses an efficient and general statistical description of the material's scattering and absorption behavior. The model estimates the spatial distribution of intensity and the flow direction of radiation, both of which are required, e.g., for adaptable predictions of the appearance of colors in halftone prints. This is achieved by employing a computational particle method, which solves a model equation for the probability density function of photon positions and propagation directions. In this framework, each computational particle represents a finite probability of finding a photon in a corresponding state, including properties like wavelength. Model evaluations and verifications conclude the discussion.
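
    A minimal computational-particle sketch in the same spirit (isotropic scattering in a 1-D slab; all parameter values are assumptions, and the paper's model additionally tracks propagation-direction distributions and properties such as wavelength):

      import numpy as np

      rng = np.random.default_rng(0)

      def transmitted_fraction(mu_s=5.0, mu_a=0.5, zmax=1.0, n=50_000):
          # Each computational particle carries a photon probability weight
          mu_t = mu_s + mu_a
          out = 0.0
          for _ in range(n):
              z, cos_t, w = 0.0, 1.0, 1.0
              while 0.0 <= z < zmax and w > 1e-4:
                  z += cos_t * rng.exponential(1.0 / mu_t)  # sample a free path
                  w *= mu_s / mu_t                          # absorption weighting
                  cos_t = rng.uniform(-1.0, 1.0)            # isotropic rescatter
              if z >= zmax:
                  out += w
          return out / n                                    # transmitted weight

      print(transmitted_fraction())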

  3. A computational theory of hippocampal function, and tests of the theory: new developments.

    PubMed

    Kesner, Raymond P; Rolls, Edmund T

    2015-01-01

    The aims of the paper are to update Rolls' quantitative computational theory of hippocampal function and the predictions it makes about the different subregions (dentate gyrus, CA3 and CA1), and to examine behavioral and electrophysiological data that address the functions of the hippocampus and particularly its subregions. Based on the computational proposal that the dentate gyrus produces sparse representations by competitive learning and via the mossy fiber pathway forces new representations on the CA3 during learning (encoding), it has been shown behaviorally that the dentate gyrus supports spatial pattern separation during learning. Based on the computational proposal that CA3-CA3 autoassociative networks are important for episodic memory, it has been shown behaviorally that the CA3 supports spatial rapid one-trial learning, learning of arbitrary associations where space is a component, pattern completion, spatial short-term memory, and spatial sequence learning by associations formed between successive items. The concept that the CA1 recodes information from CA3 and sets up associatively learned backprojections to neocortex to allow subsequent retrieval of information to neocortex is consistent with findings on consolidation. Behaviorally, the CA1 is implicated in processing temporal information, as shown by investigations requiring temporal order pattern separation and associations across time; computationally, this could involve associations in CA1 between object and timing information that have their origins in the lateral and medial entorhinal cortex, respectively. The perforant path input from the entorhinal cortex to DG is implicated in learning, to CA3 in retrieval from CA3, and to CA1 in retrieval after longer time intervals ("intermediate-term memory") and in the temporal sequence memory for objects. PMID:25446947

  4. Computer-aided analyses of transport protein sequences: gleaning evidence concerning function, structure, biogenesis, and evolution.

    PubMed Central

    Saier, M H

    1994-01-01

    Three-dimensional structures have been elucidated for very few integral membrane proteins. Computer methods can be used as guides for estimation of solute transport protein structure, function, biogenesis, and evolution. In this paper the application of currently available computer programs to over a dozen distinct families of transport proteins is reviewed. The reliability of sequence-based topological and localization analyses and the importance of sequence and residue conservation to structure and function are evaluated. Evidence concerning the nature and frequency of occurrence of domain shuffling, splicing, fusion, deletion, and duplication during evolution of specific transport protein families is also evaluated. Channel proteins are proposed to be functionally related to carriers. It is argued that energy coupling to transport was a late occurrence, superimposed on preexisting mechanisms of solute facilitation. It is shown that several transport protein families have evolved independently of each other, employing different routes, at different times in evolutionary history, to give topologically similar transmembrane protein complexes. The possible significance of this apparent topological convergence is discussed. PMID:8177172

  5. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
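
    A toy sketch of the construction (the two-atom program, the kernel width, and the closed-form least-squares fit are illustrative; the study trains its networks with particle swarm optimization): learn Tp for the program { q. , p :- q. } from all four valuations, then iterate the learned operator to its fixed point.

      import numpy as np

      def Tp(I):                     # single step operator of { q. , p :- q. }
          p, q = I
          return np.array([q, 1.0])  # p follows from q; q is a fact

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # all valuations
      Y = np.array([Tp(x) for x in X])                       # training targets

      def rbf(P, C, s=0.5):          # Gaussian radial basis design matrix
          d2 = ((P[:, None, :] - C[None, :, :])**2).sum(-1)
          return np.exp(-d2 / (2*s*s))

      W = np.linalg.lstsq(rbf(X, X), Y, rcond=None)[0]

      I = np.zeros(2)                # iterate the network to the steady state
      for _ in range(5):
          I = np.round(rbf(I[None, :], X) @ W).clip(0, 1)[0]
      print(I)                       # -> [1. 1.], the fixed point of Tp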

  6. Boolean Combinations of Implicit Functions for Model Clipping in Computer-Assisted Surgical Planning

    PubMed Central

    2016-01-01

    This paper proposes an interactive method of model clipping for computer-assisted surgical planning. The model is separated by a data filter that is defined by the implicit function of the clipping path. The clipping path, composed of interactive plane widgets, can be manually repositioned by the surgeon along the desired presurgical path, which means that surgeons can produce any required shape of the clipped model accurately. The implicit function is acquired through a recursive algorithm based on the Boolean combinations (including Boolean union and Boolean intersection) of the plane widgets' implicit functions. The algorithm is highly efficient: its best-case time performance is linear, which applies to most cases in computer-assisted surgical planning. Based on the above algorithm, a user-friendly module named SmartModelClip has been developed on the basis of the Slicer platform and VTK. A number of arbitrary clipping paths have been tested. Experimental results of presurgical planning for three types of Le Fort fractures and for tumor removal demonstrate the high reliability and efficiency of our recursive algorithm and the robustness of the module. PMID:26751685
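
    The Boolean combinations reduce to pointwise min/max over the plane widgets' implicit functions. A minimal sketch with the sign convention f < 0 meaning "kept" (the convention and the example planes are assumptions; SmartModelClip composes such functions recursively over VTK models):

      import numpy as np

      def plane(origin, normal):
          o, n = np.asarray(origin, float), np.asarray(normal, float)
          return lambda pts: (pts - o) @ n      # signed distance, < 0 is kept

      def union(f, g):                          # kept if on either kept side
          return lambda pts: np.minimum(f(pts), g(pts))

      def intersection(f, g):                   # kept only on both kept sides
          return lambda pts: np.maximum(f(pts), g(pts))

      # A wedge-shaped clipping path built from two plane widgets
      clip = intersection(plane((0, 0, 0), (1, 0, 0)),
                          plane((0, 0, 0), (0, 1, 0)))
      pts = np.array([[-1.0, -2.0, 0.0], [1.0, 1.0, 0.0]])
      print(clip(pts) < 0)                      # data filter -> [ True False]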

  7. Computing single step operators of logic programming in radial basis function neural networks

    SciTech Connect

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  8. Boolean Combinations of Implicit Functions for Model Clipping in Computer-Assisted Surgical Planning.

    PubMed

    Zhan, Qiqin; Chen, Xiaojun

    2016-01-01

    This paper proposes an interactive method of model clipping for computer-assisted surgical planning. The model is separated by a data filter that is defined by the implicit function of the clipping path. The clipping path, composed of interactive plane widgets, can be manually repositioned by the surgeon along the desired presurgical path, which means that surgeons can produce any required shape of the clipped model accurately. The implicit function is acquired through a recursive algorithm based on the Boolean combinations (including Boolean union and Boolean intersection) of the plane widgets' implicit functions. The algorithm is highly efficient: its best-case time performance is linear, which applies to most cases in computer-assisted surgical planning. Based on the above algorithm, a user-friendly module named SmartModelClip has been developed on the basis of the Slicer platform and VTK. A number of arbitrary clipping paths have been tested. Experimental results of presurgical planning for three types of Le Fort fractures and for tumor removal demonstrate the high reliability and efficiency of our recursive algorithm and the robustness of the module.

  9. Talking while Computing in Groups: The Not-so-Private Functions of Computational Private Speech in Mathematical Discussions

    ERIC Educational Resources Information Center

    Zahner, William; Moschkovich, Judit

    2010-01-01

    Students often voice computations during group discussions of mathematics problems. Yet, this type of private speech has received little attention from mathematics educators or researchers. In this article, we use excerpts from middle school students' group mathematical discussions to illustrate and describe "computational private speech." We…

  10. An effective method to verify line and point spread functions measured in computed tomography

    SciTech Connect

    Ohkubo, Masaki; Wada, Sinichi; Matsumoto, Toru; Nishizawa, Kanae

    2006-08-15

    This study describes an effective method for verifying the line spread function (LSF) and point spread function (PSF) measured in computed tomography (CT). The CT image of an assumed object function is known to be calculable using the LSF or PSF, based on a model of the spatial resolution of a linear imaging system. Therefore, the validity of the LSF and PSF can be confirmed by comparing the computed images with images obtained by scanning phantoms corresponding to the object function. Differences between computed and measured images will depend on the accuracy of the LSF and PSF used in the calculations. First, we measured the LSF of our scanner and derived the two-dimensional PSF in the scan plane from the LSF. Second, we scanned a phantom containing uniform cylindrical objects parallel to the long axis of the patient's body (z direction). Measured images of such a phantom are characterized by the spatial resolution in the scan plane and do not depend on the spatial resolution in the z direction. Third, images were calculated by two-dimensionally convolving the true object function with the PSF. Comparing the computed images with the measured ones showed good agreement, as demonstrated by image subtraction. As a criterion for quantitatively evaluating the overall differences between images, we defined the normalized standard deviation (SD) of the differences between computed and measured images. These normalized SDs were less than 5.0% (ranging from 1.3% to 4.8%) for three types of image reconstruction kernels and for various diameters of the cylindrical objects, indicating the high accuracy of the PSF and LSF resulting from successful measurements. Further, we also obtained another LSF using an inappropriate procedure and calculated the images as above. This time, the computed images did not agree with the measured ones: the normalized SDs were 6.0% or more (ranging from 6.0% to 13.8%), indicating the inaccuracy of this PSF and LSF. We
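
    The verification criterion fits in a few lines. A sketch under stated assumptions: the record does not specify the normalizer of the SD, so the maximum of the measured image is used here, and the array names are hypothetical.

      import numpy as np
      from scipy.signal import fftconvolve

      def normalized_sd(object_fn, psf2d, measured):
          # Model the scanner as linear and shift-invariant: the computed
          # image is the object function convolved with the in-plane PSF
          computed = fftconvolve(object_fn, psf2d, mode="same")
          return 100.0 * (computed - measured).std() / measured.max()

    In the study, values below about 5% supported the measured PSF/LSF, while an inappropriately obtained LSF pushed the criterion to 6-14%.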

  11. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    PubMed

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  12. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    PubMed Central

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  13. Computer simulation on the cooperation of functional molecules during the early stages of evolution.

    PubMed

    Ma, Wentao; Hu, Jiming

    2012-01-01

    It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with "micro-resolution" at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a "habitat-shift" into bulk water. Second, the protocell stage started with a substage of "pseudo-protocells", with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent "true-protocell" substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the emergence of single

  14. Computational principles of syntax in the regions specialized for language: integrating theoretical linguistics and functional neuroimaging

    PubMed Central

    Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.

    2013-01-01

    The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
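
    As a toy illustration of Hypothesis I (hypothetical code, not from the article): for a binary-branching structure built by Merge, the DoM can be computed as the maximum nesting depth of merged subtrees.

        # Each Merge combines two constituents; DoM = maximum nesting depth.
        def degree_of_merger(tree):
            if isinstance(tree, str):          # a bare word has depth 0
                return 0
            left, right = tree                 # a pair produced by Merge
            return 1 + max(degree_of_merger(left), degree_of_merger(right))

        # "[[the dog] [chased [the cat]]]"
        sentence = (("the", "dog"), ("chased", ("the", "cat")))
        print(degree_of_merger(sentence))      # -> 3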

  15. Computational principles of syntax in the regions specialized for language: integrating theoretical linguistics and functional neuroimaging.

    PubMed

    Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L

    2013-01-01

    The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties.

  16. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and the corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results for the electron energy range [10,10] eV and radial distances of [10,10] a.u. We also find the present code to be very efficient, so it should find numerous applications in many fields such as strong field physics.
    Program summary -- Program title: HContinuumGautchi. Catalogue identifier: AEHD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1233. No. of bytes in distributed program, including test data, etc.: 7405. Distribution format: tar.gz. Programming language: Fortran90 in fixed format. Computer: AMD Processors. Operating system: Linux. RAM: 20 MBytes. Classification: 2.7, 4.5. Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for
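
    For a quick independent cross-check of such computations, mpmath's built-in regular Coulomb wave function can be used (a sketch, not the article's Fortran code; the parameter values below are illustrative):

        # Hydrogenic continuum wave function via F_l(eta, rho), in atomic units:
        # k = sqrt(2E), eta = -Z/k (attractive Coulomb field), rho = k*r.
        from mpmath import mp, coulombf

        mp.dps = 30                       # extra precision helps at extreme parameters
        Z, E, r, l = 1.0, 1e-4, 50.0, 2   # charge, energy (a.u.), radius (a.u.), ang. momentum
        k = (2 * E) ** 0.5
        print(coulombf(l, -Z / k, k * r))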

  17. Distinct Quantitative Computed Tomography Emphysema Patterns Are Associated with Physiology and Function in Smokers

    PubMed Central

    San José Estépar, Raúl; Mendoza, Carlos S.; Hersh, Craig P.; Laird, Nan; Crapo, James D.; Lynch, David A.; Silverman, Edwin K.; Washko, George R.

    2013-01-01

    Rationale: Emphysema occurs in distinct pathologic patterns, but little is known about the epidemiologic associations of these patterns. Standard quantitative measures of emphysema from computed tomography (CT) do not distinguish between distinct patterns of parenchymal destruction. Objectives: To study the epidemiologic associations of distinct emphysema patterns with measures of lung-related physiology, function, and health care use in smokers. Methods: Using a local histogram-based assessment of lung density, we quantified distinct patterns of low attenuation in 9,313 smokers in the COPDGene Study. To determine if such patterns provide novel insights into chronic obstructive pulmonary disease epidemiology, we tested for their association with measures of physiology, function, and health care use. Measurements and Main Results: Compared with percentage of low-attenuation area less than −950 Hounsfield units (%LAA-950), local histogram-based measures of distinct CT low-attenuation patterns are more predictive of measures of lung function, dyspnea, quality of life, and health care use. These patterns are strongly associated with a wide array of measures of respiratory physiology and function, and most of these associations remain highly significant (P < 0.005) after adjusting for %LAA-950. In smokers without evidence of chronic obstructive pulmonary disease, the mild centrilobular disease pattern is associated with lower FEV1 and worse functional status (P < 0.005). Conclusions: Measures of distinct CT emphysema patterns provide novel information about the relationship between emphysema and key measures of physiology, physical function, and health care use. Measures of mild emphysema in smokers with preserved lung function can be extracted from CT scans and are significantly associated with functional measures. PMID:23980521

  18. Krylov-space algorithms for time-dependent Hartree-Fock and density functional computations

    SciTech Connect

    Chernyak, Vladimir; Schulz, Michael F.; Mukamel, Shaul; Tretiak, Sergei; Tsiper, Eugene V.

    2000-07-01

    A fast, low-memory-cost, Krylov-space-based algorithm is proposed for the diagonalization of large Hamiltonian matrices required in time-dependent Hartree-Fock (TDHF) and adiabatic time-dependent density-functional theory (TDDFT) computations of electronic excitations. A deflation procedure based on the symplectic structure of the TDHF equations is introduced and its capability to find higher eigenmodes of the linearized TDHF operator for a given numerical accuracy is demonstrated. The algorithm may be immediately applied to the formally identical adiabatic TDDFT equations. (c) 2000 American Institute of Physics.
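
    A generic illustration of deflation (a plain power-iteration version for a real symmetric matrix; the paper's procedure exploits the symplectic structure of the TDHF equations and is more involved):

        import numpy as np

        def eigenmodes_by_deflation(A, n_modes=3, iters=500):
            """Find the n_modes largest eigenpairs of a symmetric A, one at a time."""
            A = A.astype(float).copy()
            pairs = []
            for _ in range(n_modes):
                v = np.random.default_rng(0).normal(size=A.shape[0])
                for _ in range(iters):          # power iteration on the current A
                    v = A @ v
                    v /= np.linalg.norm(v)
                lam = v @ A @ v
                pairs.append((lam, v))
                A -= lam * np.outer(v, v)       # deflate: remove the converged mode
            return pairs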

  19. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. In particular, we discuss the relationship between the inclined angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been validated by comparison with real measurements.

  20. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-08-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. In particular, we discuss the relationship between the inclined angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been validated by comparison with real measurements.

  1. Computed Rankine-Hugoniot relations for hexanitrostilbene and hexanitrohexaazaisowurtzitane via density functional theory based molecular dynamics

    NASA Astrophysics Data System (ADS)

    Wixom, Ryan; Mattsson, Ann; Mattsson, Thomas

    2011-06-01

    Density Functional Theory (DFT) has become an indispensable tool for understanding the behavior of matter under extreme conditions, for example confirming experimental findings into the TPa regime and amending experimental data for constructing wide-range equations of state (EOS). The ability to perform high-fidelity calculations is even more important for cases where experiments are impossible to perform, dangerous, and/or prohibitively expensive. We will present computed shock properties for hexanitrostilbene and hexanitrohexaazaisowurtzitane, making comparisons with experimental shock data or diamond anvil cell data, where available. Credibility of the results and proposed methods for validation will be discussed.

  2. Using an iterative eigensolver to compute vibrational energies with phase-space localized basis functions

    SciTech Connect

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
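
    The generalized-to-regular reformulation can be illustrated generically (a Loewdin-orthogonalization sketch assuming a well-conditioned overlap matrix; the article's specific construction may differ):

        import numpy as np
        from scipy.linalg import fractional_matrix_power
        from scipy.sparse.linalg import eigsh

        def to_regular(H, S):
            """Map H c = E S c to a standard eigenproblem with the same spectrum."""
            S_mhalf = fractional_matrix_power(S, -0.5)
            return S_mhalf @ H @ S_mhalf

        # Lowest vibrational levels via an iterative (Lanczos-type) eigensolver:
        # vals = eigsh(to_regular(H, S), k=10, which='SA', return_eigenvectors=False)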

  3. Understanding entangled cerebral networks: a prerequisite for restoring brain function with brain-computer interfaces

    PubMed Central

    Mandonnet, Emmanuel; Duffau, Hugues

    2014-01-01

    Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported the absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restore brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiate brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in the BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on the cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved. PMID:24834030

  4. Functional Priorities, Assistive Technology, and Brain-Computer Interfaces after Spinal Cord Injury

    PubMed Central

    Collinger, Jennifer L.; Boninger, Michael L.; Bruns, Tim M.; Curley, Kenneth; Wang, Wei; Weber, Douglas J.

    2012-01-01

    Spinal cord injury often impacts a person's ability to perform critical activities of daily living and can negatively affect their quality of life. Assistive technology aims to bridge this gap to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies, like brain-computer interfaces (BCIs), are developed. In a survey study of fifty-seven veterans with spinal cord injury who were participating in the National Veterans Wheelchair Games, we found that restoration of bladder/bowel control, walking, and arm/hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of individuals in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criterion. Interestingly, many participants reported that they would be willing to consider surgery to implant a BCI even though non-invasiveness was a high-priority design requirement. This survey demonstrates the interest of individuals with spinal cord injury in receiving and contributing to the design of BCIs. PMID:23760996

  5. Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury.

    PubMed

    Collinger, Jennifer L; Boninger, Michael L; Bruns, Tim M; Curley, Kenneth; Wang, Wei; Weber, Douglas J

    2013-01-01

    Spinal cord injury (SCI) often affects a person's ability to perform critical activities of daily living and can negatively affect his or her quality of life. Assistive technology aims to bridge this gap in order to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies such as brain-computer interfaces (BCIs) are developed. In a survey study of 57 veterans with SCI participating in the 2010 National Veterans Wheelchair Games, we found that restoration of bladder and bowel control, walking, and arm and hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of participants in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criterion. Interestingly, many participants reported that they would consider surgery to implant a BCI even though noninvasiveness was a high-priority design requirement. This survey demonstrates the interest of individuals with SCI in receiving and contributing to the design of BCIs.

  6. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    SciTech Connect

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-01-01

    A set of physical and functional system components and their interconnection protocols have been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons, and additional segments will be implemented during the continuing construction efforts which are adding heavy ion capability to our facility. Our efforts include the following computer and control system elements: a broadband local area network, which embodies modems, transmission systems, and branch interface units; a hierarchical layer, which performs certain database and watchdog/alarm functions; a group of workstation processors (Apollos), which perform the function of traditional minicomputer host(s); and a layer which provides both real-time control and standardization functions for accelerator devices and instrumentation. Database and other accelerator functionality is assigned to the most appropriate level within our network for real-time performance, long-term utility, and orderly growth.

  7. Functional assessment of coronary artery disease by intravascular ultrasound and computational fluid dynamics simulation.

    PubMed

    Carrizo, Sebastián; Xie, Xinzhou; Peinado-Peinado, Rafael; Sánchez-Recalde, Angel; Jiménez-Valero, Santiago; Galeote-Garcia, Guillermo; Moreno, Raúl

    2014-10-01

    Clinical trials have shown that functional assessment of coronary stenosis by fractional flow reserve (FFR) improves clinical outcomes. Intravascular ultrasound (IVUS) complements conventional angiography, and is a powerful tool to assess atherosclerotic plaques and to guide percutaneous coronary intervention (PCI). Computational fluid dynamics (CFD) simulation represents a novel method for the functional assessment of coronary flow. A CFD simulation can be calculated from the data normally acquired by IVUS images. A case of coronary heart disease studied with FFR and IVUS, before and after PCI, is presented. A three-dimensional model was constructed based on IVUS images, to which CFD was applied. A discussion of the literature concerning the clinical utility of CFD simulation is provided. PMID:25441999

  8. Function and dynamics of macromolecular complexes explored by integrative structural and computational biology.

    PubMed

    Purdy, Michael D; Bennett, Brad C; McIntire, William E; Khan, Ali K; Kasson, Peter M; Yeager, Mark

    2014-08-01

    Three vignettes exemplify the potential of combining EM and X-ray crystallographic data with molecular dynamics (MD) simulation to explore the architecture, dynamics and functional properties of multicomponent, macromolecular complexes. The first two describe how EM and X-ray crystallography were used to solve structures of the ribosome and the Arp2/3-actin complex, which enabled MD simulations that elucidated functional dynamics. The third describes how EM, X-ray crystallography, and microsecond MD simulations of a GPCR:G protein complex were used to explore transmembrane signaling by the β-adrenergic receptor. Recent technical advancements in EM, X-ray crystallography and computational simulation create unprecedented synergies for integrative structural biology to reveal new insights into heretofore intractable biological systems.

  9. Study of space shuttle orbiter system management computer function. Volume 1: Analysis, baseline design

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A system analysis of the shuttle orbiter baseline system management (SM) computer function is performed. This analysis results in an alternative SM design which is also described. The alternative design exhibits several improvements over the baseline, some of which are increased crew usability, improved flexibility, and improved growth potential. The analysis consists of two parts: an application assessment and an implementation assessment. The former is concerned with the SM user needs and design functional aspects. The latter is concerned with design flexibility, reliability, growth potential, and technical risk. The system analysis is supported by several topical investigations. These include: treatment of false alarms, treatment of off-line items, significant interface parameters, and a design evaluation checklist. An in-depth formulation of techniques, concepts, and guidelines for design of automated performance verification is discussed.

  10. Functional assessment of coronary artery disease by intravascular ultrasound and computational fluid dynamics simulation.

    PubMed

    Carrizo, Sebastián; Xie, Xinzhou; Peinado-Peinado, Rafael; Sánchez-Recalde, Angel; Jiménez-Valero, Santiago; Galeote-Garcia, Guillermo; Moreno, Raúl

    2014-10-01

    Clinical trials have shown that functional assessment of coronary stenosis by fractional flow reserve (FFR) improves clinical outcomes. Intravascular ultrasound (IVUS) complements conventional angiography, and is a powerful tool to assess atherosclerotic plaques and to guide percutaneous coronary intervention (PCI). Computational fluid dynamics (CFD) simulation represents a novel method for the functional assessment of coronary flow. A CFD simulation can be calculated from the data normally acquired by IVUS images. A case of coronary heart disease studied with FFR and IVUS, before and after PCI, is presented. A three-dimensional model was constructed based on IVUS images, to which CFD was applied. A discussion of the literature concerning the clinical utility of CFD simulation is provided.

  11. Management of Liver Cancer Argon-helium Knife Therapy with Functional Computer Tomography Perfusion Imaging.

    PubMed

    Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie

    2016-02-01

    The objective of this study was to observe changes in blood perfusion of liver cancer following argon-helium knife treatment with functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with the argon-helium knife were included in this study. Plain computed tomography (CT) and CT perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used for evaluating the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment (p < 0.05 for all). A significant decrease in hepatic artery perfusion was also observed in pericancerous liver tissue, but the other parameters remained constant. CT perfusion imaging is able to detect decreases in blood perfusion of liver cancer after argon-helium knife therapy. Therefore, CTP imaging can play an important role in liver cancer management following argon-helium knife therapy.

  12. A computer adaptive testing approach for assessing physical functioning in children and adolescents.

    PubMed

    Haley, Stephen M; Ni, Pengsheng; Fragala-Pinkham, Maria A; Skrinar, Alison M; Corzo, Deyanira

    2005-02-01

    The purpose of this article is to demonstrate: (1) the accuracy and (2) the reduction in the amount of time and effort in assessing physical functioning (self-care and mobility domains) of children and adolescents using computer-adaptive testing (CAT). A CAT algorithm selects questions directly tailored to the child's ability level, based on previous responses. Using a CAT algorithm, a simulation study was conducted to determine the number of items necessary to approximate the score of a full-length assessment. We built simulated CATs (5-, 10-, 15-, and 20-item versions) for the self-care and mobility domains and tested their accuracy in a normative sample (n=373; 190 males, 183 females; mean age 6y 11mo [SD 4y 2mo], range 4mo to 14y 11mo) and a sample of children and adolescents with Pompe disease (n=26; 21 males, 5 females; mean age 6y 1mo [SD 3y 10mo], range 5mo to 14y 10mo). Results indicated that score estimates comparable to those of the full-length tests (based on computer simulations) can be achieved with a 20-item CAT version for all age ranges and for both the normative and clinical samples. No more than 13 to 16% of the items in the full-length tests were needed for any one administration. These results support further consideration of CAT programs for accurate and efficient clinical assessments of physical functioning.
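
    A minimal sketch of the item-selection loop behind such a CAT (a Rasch-model toy with a made-up ability-update rule, not the instrument used in the study):

        import math

        def p_correct(theta, b):                  # 1-parameter logistic (Rasch) model
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        def item_information(theta, b):           # Fisher information of an item
            p = p_correct(theta, b)
            return p * (1.0 - p)

        def next_item(theta, difficulties, asked):
            # choose the unasked item that is most informative at the current ability
            candidates = [i for i in range(len(difficulties)) if i not in asked]
            return max(candidates, key=lambda i: item_information(theta, difficulties[i]))

        def update_ability(theta, b, correct, step=0.5):
            # one gradient step on the log-likelihood (crude stand-in for MLE/EAP scoring)
            return theta + step * ((1.0 if correct else 0.0) - p_correct(theta, b))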

  13. Morphological and Functional Evaluation of Quadricuspid Aortic Valves Using Cardiac Computed Tomography

    PubMed Central

    Song, Inyoung; Park, Jung Ah; Choi, Bo Hwa; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok

    2016-01-01

    Objective The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). Materials and Methods We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT were compared with those of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with the severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. Results All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane and 1 of them showed subaortic stenosis. One QAV was misdiagnosed as a tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), whereas a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Conclusion Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV. PMID:27390538

  14. Planar quantum quenches: computation of exact time-dependent correlation functions at large N

    NASA Astrophysics Data System (ADS)

    Cortés Cubero, Axel

    2016-08-01

    We study a quantum quench of an integrable quantum field theory in the planar infinite-N limit. Unlike isovector-valued O(N) models, matrix-valued field theories in the infinite-N limit are not solvable by the Hartree-Fock approximation, and are nontrivial interacting theories. We study quenches with initial states that are color-charge neutral, correspond to integrability-preserving boundary conditions, and that lead to nontrivial correlation functions of operators. We compute exactly, at infinite N, the time-dependent one- and two-point correlation functions of the energy-momentum tensor and renormalized field operator after this quench using known exact form factors. This computation can be done fully analytically, due to the simplicity of the initial state and the form factors in the planar limit. We also show that this type of quench preserves factorizability at all times and allows for particle transmission from the pre-quench state, while still having nontrivial interacting post-quench dynamics.

  15. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    PubMed

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
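
    The SRF effect being corrected enters through band-averaging, which can be sketched as follows (a generic illustration; the operational LUT interpolation and correction algorithm are more involved):

        import numpy as np

        def band_averaged_radiance(wavelengths, L_toa, srf):
            """L_band = int L(lam) SRF(lam) dlam / int SRF(lam) dlam."""
            return np.trapz(L_toa * srf, wavelengths) / np.trapz(srf, wavelengths)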

  16. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    PubMed

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude

  17. The Time Transfer Functions: an efficient tool to compute range, Doppler and astrometric observables

    NASA Astrophysics Data System (ADS)

    Hees, A.; Bertone, S.; Le Poncin-Lafitte, C.; Teyssandier, P.

    2015-12-01

    Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function--called a reception (or emission) time transfer function--has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant G for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in G when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to ongoing or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.
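
    At first order in G for a mass monopole, the reception time transfer function reduces to the geometric delay plus the Shapiro term, e.g. (an illustrative sketch, not the higher-order formulas surveyed here):

        import numpy as np

        G, c = 6.674e-11, 2.998e8                      # SI units
        def travel_time(x_emit, x_recv, M):
            r1 = np.linalg.norm(x_emit)
            r2 = np.linalg.norm(x_recv)
            R = np.linalg.norm(np.asarray(x_recv) - np.asarray(x_emit))
            shapiro = (2 * G * M / c**3) * np.log((r1 + r2 + R) / (r1 + r2 - R))
            return R / c + shapiro                     # seconds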

  18. Experimental evidence validating the computational inference of functional associations from gene fusion events: a critical survey.

    PubMed

    Promponas, Vasilis J; Ouzounis, Christos A; Iliopoulos, Ioannis

    2014-05-01

    More than a decade ago, a number of methods were proposed for the inference of protein interactions, using whole-genome information from gene clusters, gene fusions and phylogenetic profiles. This structural and evolutionary view of entire genomes has provided a valuable approach for the functional characterization of proteins, especially those without sequence similarity to proteins of known function. Furthermore, this view has raised the real possibility of detecting functional associations of genes and their corresponding proteins for any entire genome sequence. Yet, despite these exciting developments, there have been relatively few cases of real use of these methods outside the computational biology field, as reflected by citation analysis. These methods have the potential to be used in high-throughput experimental settings in functional genomics and proteomics to validate results with very high accuracy and good coverage. In this critical survey, we provide a comprehensive overview of the 30 most prominent examples of single pairwise protein interaction cases in small-scale studies, where protein interactions have either been detected by gene fusion or yielded additional, corroborating evidence from biochemical observations. Our conclusion is that with the derivation of a validated gold-standard corpus and better data integration with big experiments, gene fusion detection can truly become a valuable tool for large-scale experimental biology.

  19. Experimental evidence validating the computational inference of functional associations from gene fusion events: a critical survey

    PubMed Central

    Promponas, Vasilis J.; Ouzounis, Christos A.; Iliopoulos, Ioannis

    2014-01-01

    More than a decade ago, a number of methods were proposed for the inference of protein interactions, using whole-genome information from gene clusters, gene fusions and phylogenetic profiles. This structural and evolutionary view of entire genomes has provided a valuable approach for the functional characterization of proteins, especially those without sequence similarity to proteins of known function. Furthermore, this view has raised the real possibility of detecting functional associations of genes and their corresponding proteins for any entire genome sequence. Yet, despite these exciting developments, there have been relatively few cases of real use of these methods outside the computational biology field, as reflected by citation analysis. These methods have the potential to be used in high-throughput experimental settings in functional genomics and proteomics to validate results with very high accuracy and good coverage. In this critical survey, we provide a comprehensive overview of the 30 most prominent examples of single pairwise protein interaction cases in small-scale studies, where protein interactions have either been detected by gene fusion or yielded additional, corroborating evidence from biochemical observations. Our conclusion is that with the derivation of a validated gold-standard corpus and better data integration with big experiments, gene fusion detection can truly become a valuable tool for large-scale experimental biology. PMID:23220349

  20. Intersections between the Autism Spectrum and the Internet: Perceived Benefits and Preferred Functions of Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted

    2014-01-01

    An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…

  1. Feasibility Study for a Remote Terminal Central Computing Facility Serving School and College Institutions. Volume I, Functional Requirements.

    ERIC Educational Resources Information Center

    International Business Machines Corp., White Plains, NY.

    The economic and technical feasibility of providing a remote terminal central computing facility to serve a group of 25-75 secondary schools and colleges was investigated. The general functions of a central facility for an educational cluster were defined to include training in computer techniques, the solution of student development problems in…

  2. A Computational Method Designed to Aid in the Teaching of Copolymer Composition and Microstructure as a Function of Conversion.

    ERIC Educational Resources Information Center

    Coleman, M. M.; Varnell, W. D.

    1982-01-01

    Describes a computer program (FORTRAN and APPLESOFT) demonstrating copolymer composition as a function of conversion, providing theoretical background and examples of the types of information gained from computer calculations. Suggests that the program enhances undergraduate students' understanding of basic copolymerization theory.…

  3. Application of the new neutron monitor yield function computed for different altitudes to an analysis of GLEs

    NASA Astrophysics Data System (ADS)

    Mishev, Alexander; Usoskin, Ilya

    2016-07-01

    A precise analysis of SEP (solar energetic particle) spectral and angular characteristics using neutron monitor (NM) data requires realistic modeling of the propagation of those particles in the Earth's magnetosphere and atmosphere. On the basis of a method comprising a sequence of consecutive steps, namely a detailed computation of the SEP asymptotic cones of acceptance, application of a neutron monitor yield function, and a convenient optimization procedure, we derived the rigidity spectra and anisotropy characteristics of several major GLEs. Here we present several major GLEs of solar cycle 23: the Bastille day event on 14 July 2000 (GLE 59), GLE 69 on 20 January 2005, and GLE 70 on 13 December 2006. The SEP spectra and pitch angle distributions were computed in their dynamical development. For the computation we use the newly computed yield function of the standard 6NM64 neutron monitor for primary proton and alpha CR nuclei. In addition, we present new computations of the NM yield function for altitudes of 3000 m and 5000 m above sea level. The computations were carried out with the Planetocosmics and CORSIKA codes as standardized Monte-Carlo tools for atmospheric cascade simulations. The flux of secondary neutrons and protons was computed using the Planetocosmics code applying a realistic curved atmosphere. Updated information concerning the NM registration efficiency for secondary neutrons and protons was used. The derived results for spectral and angular characteristics using the newly computed NM yield function at several altitudes are compared with those previously obtained using the double attenuation method.
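
    The role of the yield function in such analyses is the standard forward model for the NM count rate, sketched below (illustrative; the actual optimization procedure inverts this relation to recover the SEP spectra):

        import numpy as np

        def nm_count_rate(P, J, Y, P_cutoff):
            """N = integral over rigidity P >= cutoff of Y(P) * J(P) dP."""
            m = P >= P_cutoff
            return np.trapz(Y[m] * J[m], P[m])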

  4. Multiscale Theoretical and Computational Modeling of the Synthesis, Structure and Performance of Functional Carbon Materials

    NASA Astrophysics Data System (ADS)

    Mushrif, Samir Hemant

    2010-09-01

    Functional carbon-based/supported materials, including those doped with transition metal, are widely applied in hydrogen-mediated catalysis and are currently being designed for hydrogen storage applications. This thesis focuses on acquiring a fundamental understanding and quantitative characterization of: (i) the chemistry of their synthesis procedure, (ii) their microstructure and chemical composition and (iii) their functionality, using multiscale modeling and simulation methodologies. Palladium and palladium(II) acetylacetonate are the transition metal and its precursor of interest, respectively. A first-principles modeling approach consisting of the planewave-pseudopotential implementation of the Kohn-Sham density functional theory, combined with the Car-Parrinello molecular dynamics, is implemented to model the palladium doping step in the synthesis of carbon-based/supported material and its interaction with hydrogen. The electronic structure is analyzed using the electron localization function and, when required, the hydrogen interaction dynamics are accelerated and the energetics are computed using the metadynamics technique. Palladium pseudopotentials are tested and validated for their use in a hydrocarbon environment by successfully computing the experimentally observed crystal structure of palladium(II) acetylacetonate. Long-standing hypotheses related to the palladium doping process are confirmed and new fundamental insights about its molecular chemistry are revealed. The dynamics, mechanism and energy landscape and barriers of hydrogen adsorption and migration on and desorption from the carbon-based/supported palladium clusters are reported for the first time. The effects of palladium doping and of the synthesis procedure on the pore structure of palladium-doped activated carbon fibers are quantified by applying novel statistical-mechanics-based methods to the experimental physisorption isotherms. The drawbacks of the conventional adsorption-based pore

  5. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE PAGES

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
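
    The link between a computed deprotonation free energy and the pKa is the textbook relation below (a sketch; the protocols' thermodynamic cycles and standard-state corrections are omitted):

        import math

        R, T = 1.98720e-3, 298.15     # gas constant in kcal/(mol K), temperature in K

        def pKa_from_dG(dG_aq_kcal):
            """pKa = DeltaG_deprot(aq) / (R T ln 10)."""
            return dG_aq_kcal / (R * T * math.log(10))

        print(pKa_from_dG(15.0))      # ~11.0 for a 15 kcal/mol deprotonation free energy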

  6. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    SciTech Connect

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  7. Computer Simulations Reveal Multiple Functions for Aromatic Residues in Cellulase Enzymes (Fact Sheet)

    SciTech Connect

    Not Available

    2012-07-01

    NREL researchers use high-performance computing to demonstrate fundamental roles of aromatic residues in cellulase enzyme tunnels. National Renewable Energy Laboratory (NREL) computer simulations of a key industrial enzyme, the Trichoderma reesei Family 6 cellulase (Cel6A), predict that aromatic residues near the enzyme's active site and at the entrance and exit tunnel perform different functions in substrate binding and catalysis, depending on their location in the enzyme. These results suggest that nature employs aromatic-carbohydrate interactions with a wide variety of binding affinities for diverse functions. Outcomes also suggest that protein engineering strategies in which mutations are made around the binding sites may require tailoring specific to the enzyme family. Cellulase enzymes ubiquitously exhibit tunnels or clefts lined with aromatic residues for processing carbohydrate polymers to monomers, but the molecular-level role of these aromatic residues remains unknown. In silico mutation of the aromatic residues near the catalytic site of Cel6A has little impact on the binding affinity, but simulation suggests that these residues play a major role in the glucopyranose ring distortion necessary for cleaving glycosidic bonds to produce fermentable sugars. Removal of aromatic residues at the entrance and exit of the cellulase tunnel, however, dramatically impacts the binding affinity. This suggests that these residues play a role in acquiring cellulose chains from the cellulose crystal and stabilizing the reaction product, respectively. These results illustrate that the role of aromatic-carbohydrate interactions varies dramatically depending on the position in the enzyme tunnel. As aromatic-carbohydrate interactions are present in all carbohydrate-active enzymes, the results have implications for understanding protein structure-function relationships in carbohydrate metabolism and recognition, carbon turnover in nature, and protein engineering strategies for

  8. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions which allows systems of any kind to be treated. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelization scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It allows the number of distributed processes to be increased and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem ("Locally Optimal Blocked Conjugate Gradient"), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (openMP/openACC) or porting some time-consuming code sections to Graphics Processing Units (GPU). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a big effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of code have to be improved; they are all related to matrix algebra (diagonalization, orthogonalization). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization

  9. Using Data Mining and Computational Approaches to Study Intermediate Filament Structure and Function.

    PubMed

    Parry, David A D

    2016-01-01

    Experimental and theoretical research aimed at determining the structure and function of the family of intermediate filament proteins has made significant advances over the past 20 years. Much of this has either contributed to or relied on the amino acid sequence databases that are now available online, and the data mining approaches that have been developed to analyze these sequences. As the quality of sequence data is generally high, it follows that it is the design of the computational and graphical methodologies that is of especial importance to researchers who aspire to gain a greater understanding of those sequence features that specify both function and structural hierarchy. However, these techniques are necessarily subject to limitations and it is important that these be recognized. In addition, no single method is likely to be successful in solving a particular problem, and a coordinated approach using a suite of methods is generally required. A final step in the process involves the interpretation of the results obtained and the construction of a working model or hypothesis that suggests further experimentation. While such methods allow meaningful progress to be made, it is still important that the data are interpreted correctly and conservatively. New data mining methods are continually being developed, and it can be expected that even greater understanding of the relationship between structure and function will be gleaned from sequence data in the coming years.

  10. Novel hold-release functionality in a P300 brain-computer interface

    NASA Astrophysics Data System (ADS)

    Alcaide-Aguirre, R. E.; Huggins, J. E.

    2014-12-01

    Assistive technology control interface theory describes interface activation and interface deactivation as distinct properties of any control interface. Separating control of activation and deactivation allows precise timing of the duration of the activation. Objective. We propose a novel P300 brain-computer interface (BCI) functionality with separate control of the initial activation and the deactivation (hold-release) of a selection. Approach. Using two different layouts and off-line analysis, we tested the accuracy with which subjects could (1) hold their selection and (2) quickly change between selections. Main results. Mean accuracy across all subjects for the hold-release algorithm was 85% with one hold-release classification and 100% with two hold-release classifications. Using a layout designed to lower perceptual errors, accuracy increased to a mean of 90% and the time subjects could hold a selection was 40% longer than with the standard layout. Hold-release functionality provides improved response time (6-16 times faster) over the initial P300 BCI selection by allowing the BCI to make hold-release decisions from very few flashes instead of after multiple sequences of flashes. Significance. For the BCI user, hold-release functionality allows for faster, more continuous control with a P300 BCI, creating new options for BCI applications.
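
    A schematic of the hold-release decision rule (the running-mean evidence rule and window size below are assumptions for illustration, not the authors' classifier):

        def hold_release(flash_scores, threshold=0.0, window=4):
            """flash_scores: per-flash classifier outputs for the held selection."""
            recent = flash_scores[-window:]
            evidence = sum(recent) / len(recent)   # decide from very few flashes
            return "hold" if evidence > threshold else "release"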

  11. Synaptic Efficacy as a Function of Ionotropic Receptor Distribution: A Computational Study

    PubMed Central

    Allam, Sushmita L.; Bouteiller, Jean-Marie C.; Hu, Eric Y.; Ambert, Nicolas; Greget, Renaud; Bischoff, Serge; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Glutamatergic synapses are the most prevalent functional elements of information processing in the brain. Changes in pre-synaptic activity and in the function of various post-synaptic elements contribute to generate a large variety of synaptic responses. Previous studies have explored postsynaptic factors responsible for regulating synaptic strength variations, but have given far less importance to synaptic geometry, and more specifically to the subcellular distribution of ionotropic receptors. We analyzed the functional effects resulting from changing the subsynaptic localization of ionotropic receptors by using a hippocampal synaptic computational framework. The present study was performed using the EONS (Elementary Objects of the Nervous System) synaptic modeling platform, which was specifically developed to explore the roles of subsynaptic elements as well as their interactions, and that of synaptic geometry. More specifically, we determined the effects of changing the localization of ionotropic receptors relative to the presynaptic glutamate release site, on synaptic efficacy and its variations following single pulse and paired-pulse stimulation protocols. The results indicate that changes in synaptic geometry do have consequences on synaptic efficacy and its dynamics. PMID:26480028

  12. Computational Refinement of Functional Single Nucleotide Polymorphisms Associated with ATM Gene

    PubMed Central

    George Priya Doss, C.; Rajith, B.

    2012-01-01

    Background Understanding and predicting the molecular basis of disease is one of the major challenges in modern biology and medicine. SNPs associated with complex disorders can create, destroy, or modify protein-coding sites. Single amino acid substitutions in the ATM gene are among the most common forms of genetic variation that account for various forms of cancer. However, the extent to which SNPs interfere with gene regulation and affect cancer susceptibility remains largely unknown. Principal findings We analyzed the deleterious nsSNPs associated with the ATM gene using different computational methods. An integrative scoring system and the sequence conservation of amino acid residues were adapted for a priori nsSNP analysis of variants associated with cancer. We further extended our approach to SNPs that could potentially influence protein post-translational modifications in the ATM gene. Significance In the absence of adequate prior reports on the possible deleterious effects of nsSNPs, we have systematically analyzed and characterized the functional variants in both coding and non-coding regions that can alter the expression and function of the ATM gene. In silico characterization of nsSNPs affecting ATM gene function can aid in a better understanding of genetic differences in disease susceptibility. PMID:22529920

  13. Reproducibility of physiologic parameters obtained using functional computed tomography in mice

    NASA Astrophysics Data System (ADS)

    Krishnamurthi, Ganapathy; Stantz, Keith M.; Steinmetz, Rosemary; Hutchins, Gary D.; Liang, Yun

    2004-04-01

    High-speed X-ray computed tomography (CT) has the potential to observe the transport of iodinated radio-opaque contrast agent (CA) through tissue, enabling the quantification of tissue physiology in organs and tumors. The concentration of iodine in the tissue and in the left ventricle is extracted as a function of time and is fit to a compartmental model for physiologic parameter estimation. The reproducibility of the physiologic parameters depends on (1) the image-sampling rate (according to our simulations, 5-second sampling is required for a CA injection rate of 1.0 ml/min) and (2) how well the compartmental model reflects the real tissue function, which it must do to give meaningful results. To verify these limits, a functional CT study was carried out in a group of 3 mice. Dynamic CT scans were performed on all the mice with 0.5 ml/min, 1 ml/min and 2 ml/min CA injection rates. The physiologic parameters were extracted using 4-parameter and 6-parameter two-compartment models (2CM). Single-factor ANOVA did not indicate a significant difference in perfusion in the kidneys for the different injection rates. The physiologic parameters obtained using the 6-parameter 2CM were in line with literature values, and the 6-parameter model significantly improved the chi-square goodness of fit in two cases.
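
    To make the fitting step concrete, here is a minimal sketch of estimating compartmental rate constants from a sampled concentration curve. The arterial input function, model form and parameter values below are simplified stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def aif(t, t0=5.0, tau=8.0):
    """Toy left-ventricle (arterial) input function: a delayed gamma-like bolus."""
    s = np.clip(t - t0, 0.0, None)
    return s * np.exp(-s / tau)

def tissue_conc(t, k1, k2):
    """Simplified 2-compartment tissue curve: the AIF convolved with an
    exponential residue function, C_t(t) = k1 * (AIF * exp(-k2 t))(t)."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return k1 * np.convolve(aif(t), kernel)[: t.size] * dt

rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0, 5.0)          # 5-second sampling, as in the study
true = tissue_conc(t, 0.15, 0.05)
noisy = true + rng.normal(0.0, 0.02 * true.max(), t.size)

popt, _ = curve_fit(tissue_conc, t, noisy, p0=[0.1, 0.1])
print("estimated k1, k2:", popt)
```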

  14. Synaptic Efficacy as a Function of Ionotropic Receptor Distribution: A Computational Study.

    PubMed

    Allam, Sushmita L; Bouteiller, Jean-Marie C; Hu, Eric Y; Ambert, Nicolas; Greget, Renaud; Bischoff, Serge; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Glutamatergic synapses are the most prevalent functional elements of information processing in the brain. Changes in pre-synaptic activity and in the function of various post-synaptic elements contribute to generating a large variety of synaptic responses. Previous studies have explored postsynaptic factors responsible for regulating synaptic strength variations, but have given far less importance to synaptic geometry, and more specifically to the subcellular distribution of ionotropic receptors. We analyzed the functional effects resulting from changing the subsynaptic localization of ionotropic receptors by using a hippocampal synaptic computational framework. The present study was performed using the EONS (Elementary Objects of the Nervous System) synaptic modeling platform, which was specifically developed to explore the roles of subsynaptic elements as well as their interactions, and that of synaptic geometry. More specifically, we determined the effects of changing the localization of ionotropic receptors relative to the presynaptic glutamate release site on synaptic efficacy and its variations following single-pulse and paired-pulse stimulation protocols. The results indicate that changes in synaptic geometry do have consequences on synaptic efficacy and its dynamics.

  15. 95Mo nuclear magnetic resonance parameters of molybdenum hexacarbonyl from density functional theory: appraisal of computational and geometrical parameters.

    PubMed

    Cuny, Jérôme; Sykina, Kateryna; Fontaine, Bruno; Le Pollès, Laurent; Pickard, Chris J; Gautier, Régis

    2011-11-21

    Solid-state (95)Mo nuclear magnetic resonance (NMR) properties of molybdenum hexacarbonyl have been computed using density functional theory (DFT) based methods. Both quadrupolar coupling and chemical shift parameters were evaluated and compared with high-precision parameters determined using single-crystal (95)Mo NMR experiments. Within a molecular approach, the effects of the major computational parameters, i.e. basis set, exchange-correlation functional and treatment of relativity, have been evaluated. Except for the isotropic parameter of both chemical shift and chemical shielding, computed NMR parameters are more sensitive to geometrical variations than to computational details. Relativistic effects do not play a crucial part in the calculation of such parameters for this 4d transition metal, in particular the isotropic chemical shift. Periodic DFT calculations were performed to measure the influence of neighbouring molecules in the crystal structure. These effects have to be taken into account to compute accurate solid-state (95)Mo NMR parameters, even for such an inorganic molecular compound.

  16. Vibration of isotropic and composite plates using computed shape function and its application to elastic support optimization

    NASA Astrophysics Data System (ADS)

    Kong, Jackson

    2009-10-01

    Vibration of plates with various boundary and internal support conditions is analyzed, based on classical thin-plate theory and the Rayleigh-Ritz approach. To satisfy the support conditions, a new set of admissible functions, namely the computed shape functions, is applied to each of the two orthogonal in-plane directions. Similar to conventional finite element shape functions, parameters associated with each term of the proposed functions represent the actual displacements of the plates, thus making the method easily applicable to a wide range of support conditions, including continuous or partial edge supports and discrete internal supports. The method can also be applied to plates consisting of rectangular segments, such as an L-shaped plate, whose sub-domains can be formulated using the computed shape functions and subsequently assembled in the usual finite element manner. Unlike many other admissible functions proposed in the literature, however, the computed shape functions presented herein are C1-continuous and involve no complicated mathematical functions; they can be easily computed a priori by means of a continuous-beam computer program, and only the conventional third-order beam shape functions are involved in the subsequent formulation. In all the examples given herein, only a few terms of these functions are sufficient to obtain accurate frequencies, demonstrating the method's computational effectiveness and accuracy. The method is further extended to the study of the optimal location and stiffness of discrete elastic supports for maximizing the fundamental frequency of plates. Unlike rigid point supports with infinite stiffness, whose optimal locations have been studied by many researchers, only discrete supports with a finite stiffness are considered in this paper. The optimal location and stiffness of discrete supports are determined for isotropic plates and for laminated plates with various stacking sequences, and the results are presented for the first time in
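
    The underlying Rayleigh-Ritz step can be sketched compactly, as below. Sine trial functions on a unit beam stand in for the paper's computed shape functions (which a continuous-beam program would supply); the assembly of mass and stiffness matrices and the generalized eigenvalue solve are the generic part.

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

# Trial functions for a simply supported beam of unit length, stiffness and
# mass density: sin(i*pi*x). These stand in for the paper's computed shape
# functions, which a continuous-beam program would generate instead.
n = 5
phi = [lambda x, i=i: np.sin(i * np.pi * x) for i in range(1, n + 1)]
ddphi = [lambda x, i=i: -(i * np.pi) ** 2 * np.sin(i * np.pi * x)
         for i in range(1, n + 1)]

# Rayleigh-Ritz stiffness and mass matrices from the energy integrals.
K = np.array([[quad(lambda x: ddphi[i](x) * ddphi[j](x), 0, 1)[0]
               for j in range(n)] for i in range(n)])
M = np.array([[quad(lambda x: phi[i](x) * phi[j](x), 0, 1)[0]
               for j in range(n)] for i in range(n)])

w2, _ = eigh(K, M)              # generalized eigenproblem  K v = w^2 M v
print(np.sqrt(w2[:3]))          # approx (i*pi)^2: 9.87, 39.48, 88.83
```

    A discrete elastic support of stiffness k at position x_s would simply add the rank-one term k * phi(x_s) phi(x_s)^T to K, which is the mechanism by which support location and stiffness could then be swept or optimized against the fundamental frequency.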

  17. Structure function analysis of serpin super-family: "a computational approach".

    PubMed

    Singh, Poonam; Jairajpuri, Mohamad Aman

    2014-01-01

    Serine protease inhibitors (serpins) are a super-family of proteins that control the proteinases involved in the inflammation, complement, coagulation and fibrinolytic pathways. Serpins are prone to conformational diseases due to a complex inhibition mechanism that involves a large-scale conformational change, and their susceptibility to point mutations can lead to functional defects. Serpins are associated with diseases such as emphysema/cirrhosis, angioedema, familial dementia, chronic obstructive bronchitis and thrombosis. Serpin polymerization-based pathologies are fairly widespread, and devising a cure has been difficult due to a lack of clarity regarding the mechanism. Serpins can exist in various conformational states and have a variable cofactor-binding ability. A large genome and proteome database is available for the family, which can be utilized to gain critical insight into serpin structure, mechanism and defects. Comprehensive computational studies of the serpin family are lacking; most of the work done to date is limited in scope and deals mostly with a few individual serpins. We have analyzed several aspects of this family using diverse computational biology tools and have shown the following: a) a residue-burial-linked shift in conformational stability is a major factor in increasing polymer propensity in serpins; b) amino acids involved in polymerization are in general completely buried in the native conformation; c) an isozyme-specific antithrombin study showed the structural basis of the improved heparin binding of beta-antithrombin compared with alpha-antithrombin; d) a comprehensive cavity analysis showed the importance of cavities in inhibition and polymerization; and finally e) an interface analysis of various serpin-protease complexes identified critical evolutionarily conserved exosite residues that determine protease specificity. This work introduces the problem and emphasizes the need for in-depth computational studies of the serpin superfamily

  18. Parallel-META 2.0: Enhanced Metagenomic Data Analysis with Functional Annotation, High Performance Computing and Advanced Visualization

    PubMed Central

    Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of the taxonomical and functional structures of microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis of metagenomic samples, including short-read assembly, gene prediction and functional annotation. Therefore, it can provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale. PMID:24595159

  19. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    PubMed

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of the taxonomical and functional structures of microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis of metagenomic samples, including short-read assembly, gene prediction and functional annotation. Therefore, it can provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale.

  20. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate, together with previous work [1], that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
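
    A minimal sketch of the classification stage, assuming trial-averaged hemodynamic features and synthetic data; the feature set and classifier here are illustrative, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical trial features: mean HbO/HbR changes on 16 channels.
X_low = rng.normal(0.0, 1.0, size=(60, 16))    # low-workload trials
X_high = rng.normal(0.6, 1.0, size=(60, 16))   # high-workload trials
X = np.vstack([X_low, X_high])
y = np.array([0] * 60 + [1] * 60)

# A linear classifier is a common choice for small fNIRS data sets.
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```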

  1. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
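
    The nesting strategy can be illustrated with SciPy, whose nquad routine wraps adaptive 1-D quadrature rules one inside another, much as one would nest four 1-D GSL solvers; the Gaussian integrand below is only a stand-in for the SNS intensity integrand.

```python
from math import erf, pi, sqrt
import numpy as np
from scipy import integrate

# Stand-in 4-D integrand; the real SNS intensity-model integrand would go here.
def f(x, y, z, w):
    return np.exp(-(x**2 + y**2 + z**2 + w**2))

# nquad nests adaptive 1-D quadrature rules, one per dimension -- the same
# strategy as wrapping four 1-D solvers inside one another.
val, err = integrate.nquad(f, [[-2, 2]] * 4)
print(val, "+/-", err, "| exact:", (sqrt(pi) * erf(2.0)) ** 4)
```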

  2. Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?

    PubMed Central

    Chan, Micaela Y.; Haber, Sara; Drew, Linda M.; Park, Denise C.

    2016-01-01

    Purpose of the Study: Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications. We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills. Design and Methods: A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning, and a Social group that had regular social interaction but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement. Results: Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing. Implications: iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs. PMID:24928557

  3. Computational modeling of heterogeneity and function of CD4+ T cells

    PubMed Central

    Carbo, Adria; Hontecillas, Raquel; Andrew, Tricity; Eden, Kristin; Mei, Yongguo; Hoops, Stefan; Bassaganya-Riera, Josep

    2014-01-01

    The immune system is composed of many different cell types and hundreds of intersecting molecular pathways and signals. This large biological complexity requires coordination between distinct pro-inflammatory and regulatory cell subsets to respond to infection while maintaining tissue homeostasis. CD4+ T cells play a central role in orchestrating immune responses and in maintaining a balance between pro- and anti-inflammatory responses. This tight balance between regulatory and effector reactions depends on the ability of CD4+ T cells to modulate distinct pathways within large molecular networks, since dysregulated CD4+ T cell responses may result in chronic inflammatory and autoimmune diseases. The CD4+ T cell differentiation process comprises an intricate interplay between cytokines, their receptors, adaptor molecules, signaling cascades and transcription factors that help delineate cell fate and function. Computational modeling can help to describe, simulate, analyze, and predict some of the behaviors in this complicated differentiation network. This review provides a comprehensive overview of existing computational immunology methods as well as novel strategies used to model immune responses, with a particular focus on CD4+ T cell differentiation. PMID:25364738

  4. Development of computer-aided functions in clinical neurosurgery with PACS

    NASA Astrophysics Data System (ADS)

    Mukasa, Minoru; Aoki, Makoto; Satoh, Minoru; Kowada, Masayoshi; Kikuchi, K.

    1991-07-01

    The introduction of the Picture Archiving and Communications System (PACS) provides many benefits, including the application of computer-aided diagnosis (CAD). Clinically, this allows the measurements and planning for an operation to be completed easily on the CRT monitors of PACS rather than on film, as has been customary in the past. Under the leadership of the Department of Neurosurgery, Akita University School of Medicine, and the Southern Tohoku Research Institute for Neuroscience, Koriyama, new computer-aided functions for EFPACS (Fuji Electric's PACS) have been developed for use in clinical neurosurgery. The image processing is composed of three parts: (1) automatic mapping of small lesions depicted on Magnetic Resonance (MR) images onto a brain atlas; (2) superimposition of two angiographic films into a single synthesized image; and (3) automatic mapping of a lesion's position (as shown on CT images) onto the image produced in part (2). The processing in part (1) provides a reference for anatomical estimation. The processing in part (2) is used for general analysis of the condition of a disease. The processing in part (3) is used to plan the operation. This image processing is currently being used with good results.

  5. Comparison of functional MRI image realignment tools using a computer-generated phantom.

    PubMed

    Morgan, V L; Pickens, D R; Hartmann, S L; Price, R R

    2001-09-01

    This study discusses the development of a computer-generated phantom to compare the effects of image realignment programs on functional MRI (fMRI) pixel activation. The phantom is a whole-head MRI volume with added random noise, activation, and motion. It allows simulation of realistic head motions with controlled areas of activation. Without motion, the phantom shows the effects of realignment on motion-free data sets. Prior to realignment, the phantom illustrates some activation corruption due to motion. Finally, three widely used realignment packages are examined. The results showed that the most accurate algorithms are able to increase specificity through accurate realignment while maintaining sensitivity through effective resampling techniques. In fact, accurate realignment alone is not a powerful indicator of the most effective algorithm in terms of true activation.

  6. Computing frequency by using generalized zero-crossing applied to intrinsic mode functions

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2006-01-01

    This invention presents a method for computing instantaneous frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct and local, and also the most accurate in the mean. Furthermore, this approach also gives a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency, localized down to a quarter of a wave period, is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of cubic spline fitting, this invention constructs the upper envelope and the lower envelope by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
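
    A stripped-down version of the zero-crossing idea is sketched below: local frequency is recovered from interpolated zero-crossing times of an IMF. The full GZC of the patent also uses extrema and reports scatter statistics, which this toy version omits.

```python
import numpy as np

def zero_crossing_frequency(signal, fs):
    """Mean local frequency of a roughly zero-mean IMF from its zero
    crossings: each crossing-to-crossing interval spans half a period."""
    t = np.arange(signal.size) / fs
    s = np.sign(signal)
    idx = np.where(np.diff(s) != 0)[0]        # last sample before a crossing
    # Linearly interpolate the crossing times between samples.
    tc = t[idx] - signal[idx] * (t[idx + 1] - t[idx]) / (signal[idx + 1] - signal[idx])
    half_periods = np.diff(tc)
    return 1.0 / (2.0 * np.mean(half_periods))

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
print(zero_crossing_frequency(np.sin(2 * np.pi * 7 * t), fs))   # ~7.0 Hz
```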

  7. Computed myography: three-dimensional reconstruction of motor functions from surface EMG data

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2008-12-01

    We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles by voltage measurements from an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization. This imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, brachialis and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.
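
    The inverse step can be sketched as a generalized Tikhonov least-squares problem. The forward matrix, noise level and smoothness operator below are synthetic placeholders for the finite element lead field and the muscle-interior penalty described in the paper.

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Generalized Tikhonov: minimize ||A x - b||^2 + lam * ||L x||^2 by
    solving the stacked least-squares system [A; sqrt(lam) L] x = [b; 0]."""
    A_aug = np.vstack([A, np.sqrt(lam) * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

rng = np.random.default_rng(1)
n_sensors, n_sources = 20, 50          # under-determined, as in sEMG inversion
A = rng.normal(size=(n_sensors, n_sources))     # toy forward (lead-field) matrix
x_true = np.zeros(n_sources)
x_true[10:15] = 1.0                             # one "active muscle" region
b = A @ x_true + 0.01 * rng.normal(size=n_sensors)

L = np.eye(n_sources) - np.eye(n_sources, k=1)  # first-difference smoother
x_hat = tikhonov_solve(A, b, L[:-1], lam=0.1)
print("recovered activity near the active region:", x_hat[10:15].round(2))
```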

  8. Computational genomic identification and functional reconstitution of plant natural product biosynthetic pathways

    PubMed Central

    2016-01-01

    Covering: 2003 to 2016. The last decade has seen the first major discoveries regarding the genomic basis of plant natural product biosynthetic pathways. Four key computationally driven strategies have been developed to identify such pathways, which make use of physical clustering, co-expression, evolutionary co-occurrence and epigenomic co-regulation of the genes involved in producing a plant natural product. Here, we discuss how these approaches can be used for the discovery of plant biosynthetic pathways encoded by both chromosomally clustered and non-clustered genes. Additionally, we will discuss opportunities to prioritize plant gene clusters for experimental characterization, and end with a forward-looking perspective on how synthetic biology technologies will allow effective functional reconstitution of candidate pathways using a variety of genetic systems. PMID:27321668

  9. Computational design of intrinsic molecular rectifiers based on asymmetric functionalization of N-phenylbenzamide

    SciTech Connect

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.

    2015-12-08

    Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  10. Computational design of intrinsic molecular rectifiers based on asymmetric functionalization of N-phenylbenzamide

    DOE PAGES

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.

    2015-11-03

    Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  11. Computational design of intrinsic molecular rectifiers based on asymmetric functionalization of N-phenylbenzamide

    SciTech Connect

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.

    2015-11-03

    Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  12. Computational Simulation of a Simple Pendulum Driven by a Natural Chaotic Function

    NASA Astrophysics Data System (ADS)

    Tomesh, Trevor

    2010-03-01

    A simple pendulum is computationally modeled and driven according to the natural non-linear dynamical functions that arise out of the Hodgkin-Huxley membrane model of squid giant axons. Driving a neural membrane with a sinusoidal current can stimulate chaotic potential oscillations that can be modeled mathematically. The solution of the Hodgkin-Huxley membrane model provides the amplitude of the impulse applied to the simple pendulum at the lowest point in its swing. The phase-space plots of a simple harmonic oscillator, a randomly driven chaotic oscillator, and a Hodgkin-Huxley-driven chaotic oscillator are compared. The similarities and differences between the motion of the pendulum resulting from the Hodgkin-Huxley driving impulse and from a random impulse are explored.
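
    A minimal sketch of the driving scheme, assuming the Hodgkin-Huxley impulse amplitudes have already been computed; a random placeholder sequence is used here, since integrating the full membrane model is beyond a short example.

```python
import numpy as np

def kicked_pendulum(impulses, g_over_l=9.81, damping=0.1, dt=1e-3, t_max=60.0):
    """Damped pendulum whose angular velocity receives the next impulse in
    the sequence each time it swings through its lowest point (theta = 0)."""
    theta, omega, k = 0.5, 0.0, 0
    for _ in range(int(t_max / dt)):
        prev = theta
        omega += (-g_over_l * np.sin(theta) - damping * omega) * dt
        theta += omega * dt
        if prev * theta < 0 and k < len(impulses):   # crossed the lowest point
            omega += impulses[k]
            k += 1
    return theta, omega

# Placeholder impulse amplitudes; the paper derives them from the chaotic
# oscillations of a sinusoidally driven Hodgkin-Huxley membrane model.
rng = np.random.default_rng(2)
print(kicked_pendulum(rng.normal(0.0, 0.5, size=200)))
```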

  13. Localization of functional adrenal tumors by computed tomography and venous sampling

    SciTech Connect

    Dunnick, N.R.; Doppman, J.L.; Gill, J.R. Jr.; Strott, C.A.; Keiser, H.R.; Brennan, M.F.

    1982-02-01

    Fifty-eight patients with functional lesions of the adrenal glands underwent radiographic evaluation. Twenty-eight patients had primary aldosteronism (Conn syndrome), 20 had Cushing syndrome, and 10 had pheochromocytoma. Computed tomography (CT) correctly identified adrenal tumors in 11 (61%) of 18 patients with aldosteronomas, 6 of 6 patients with benign cortisol-producing adrenal tumors, and 5 (83%) of 6 patients with pheochromocytomas. No false-positive diagnoses were encountered among patients with adrenal adenomas. Bilateral adrenal hyperplasia appeared on CT scans as normal or prominent adrenal glands with a normal configuration; however, CT was not able to exclude the presence of small adenomas. Adrenal venous sampling was correct in each case, and reliably distinguished adrenal tumors from hyperplasia. Recurrent pheochromocytomas were the most difficult to localize on CT due to the surgical changes in the region of the adrenals and the frequent extra-adrenal locations.

  14. Computational Design of Intrinsic Molecular Rectifiers Based on Asymmetric Functionalization of N-Phenylbenzamide.

    PubMed

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F A; Brudvig, Gary W; Crabtree, Robert H; Schmuttenmaer, Charles A; Batista, Victor S

    2015-12-01

    We report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  15. Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons

    NASA Astrophysics Data System (ADS)

    Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.

    1998-03-01

    Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells, ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.

  16. A computationally efficient double hybrid density functional based on the random phase approximation.

    PubMed

    Grimme, Stefan; Steinmetz, Marc

    2016-08-01

    We present a revised form of a double hybrid density functional (DHDF) dubbed PWRB95. It contains semi-local Perdew-Wang exchange and Becke95 correlation with a fixed amount of 50% non-local Fock exchange. New features are that the robust random phase approximation (RPA) is used to calculate the non-local correlation part instead of a second-order perturbative treatment as in standard DHDF, and that the Fock exchange is evaluated non-self-consistently with KS orbitals at the GGA level, which leads to a significant reduction of the computational effort. To account for London dispersion effects we include the non-local VV10 dispersion functional. Only three empirical scaling parameters were adjusted. The PWRB95 results for extensive standard thermochemical benchmarks (GMTKN30 database) are compared to those of well-known functionals from the classes of (meta-)GGAs, (meta-)hybrid functionals, and DHDFs, as well as to standard (direct) RPA. The new method is furthermore tested on prototype bond activations with (Ni/Pd)-based transition metal catalysts, and on two difficult cases for DHDF, namely the isomerization reaction of the [Cu2(en)2O2](2+) complex and the singlet-triplet energy difference in highly unsaturated cyclacenes. The results show that PWRB95 is almost as accurate as standard DHDF for main-group thermochemistry but has similar or better performance for non-covalent interactions, more difficult transition-metal-containing molecules and other electronically problematic cases. Because of its relatively weak basis set dependence, PWRB95 can be applied even in combination with AO basis sets of only triple-zeta quality, which yields huge overall computational savings by a factor of about 40 compared to standard DHDF/'quadruple-zeta' calculations. Structure optimizations of small molecules with PWRB95 indicate an accurate description of bond distances superior to that provided by TPSS-D3, PBE0-D3, or other RPA-type methods. PMID:26695184

  17. Multi-Rate Mass Transfer : Computing the Memory Function Using Micro-Tomographic Images

    NASA Astrophysics Data System (ADS)

    Gouze, P.; Melean, Y.; Leborgne, T.; Carrera, J.

    2006-12-01

    Several in situ and laboratory experiments display strongly asymmetrical breakthrough curves (BTC), ending with a concentration decrease with time close to C(t) ~ t^{-γ}. Matrix diffusion is a widely recognized process producing this class of non-Fickian transport behavior, characterized by an apparently infinite variance of the temporal distribution. The matrix diffusion sink/source term in the macroscopic advection-dispersion transport equation can be expressed by the convolution product of a memory function G(t) with the concentration measured in the mobile (advective) part of the aquifer. Memory functions displaying a power-law decrease G(t) ~ t^{1-γ} at early times can be obtained by assuming an immobile domain made of single-diffusion-length structures, such as spheres or slabs. Indeed, diffusion in a distribution of spheres of different sizes may produce a large spectrum of power-law memory functions. However, the structure of the immobile domain of real rocks is generally completely different from a packing of spheres. Here, we present a method for calculating the true memory function of heterogeneous structures (reef calcareous rocks) using 3D X-ray micro-tomography images of rock samples. Several steps of data processing are required to quantify precisely the structure, the porosity distribution and the properties of the mobile/immobile interface, before solving the diffusion problem (here using a random-walk approach). In parallel, tracer experiments (at meter scale) were performed in the same medium. The obtained BTCs display a long-tailed decrease over several orders of magnitude. Using very few assumptions, one computes memory functions (measured on centimeter-scale samples) similar to those expected to control the BTCs at meter scale. Results show that the memory function is strongly controlled by the diffusivity distribution in the matrix and, to a lesser extent, by the mobile-immobile interface geometry, so that power-law exponents of the BTCs tail
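
    Numerically, once a memory function is in hand, the sink/source term enters the transport equation as a convolution. The discretization below is a rough sketch, with a truncated power-law memory function standing in for one computed from micro-tomography; the exponent and cutoff are illustrative.

```python
import numpy as np

def exchange_term(c_mobile, t, gamma=1.5, t_cut=1e-2):
    """Mobile-immobile exchange term of multi-rate mass transfer, evaluated
    as the convolution of a truncated power-law memory function
    G(t) ~ t^(1-gamma) with the rate of change of the mobile concentration."""
    dt = t[1] - t[0]
    G = np.maximum(t, t_cut) ** (1.0 - gamma)   # truncated below t_cut
    dcdt = np.gradient(c_mobile, dt)
    return np.convolve(G, dcdt)[: t.size] * dt

t = np.linspace(0.0, 10.0, 1000)
c_mobile = np.exp(-((t - 2.0) ** 2))            # a passing tracer pulse
sink = exchange_term(c_mobile, t)
print("peak exchange rate:", sink.max().round(3))
```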

  18. Brain-computer interface using a simplified functional near-infrared spectroscopy system.

    PubMed

    Coyle, Shirley M; Ward, Tomás E; Markham, Charles M

    2007-09-01

    A brain-computer interface (BCI) is a device that allows a user to communicate with external devices through thought processes alone. A novel signal acquisition tool for BCIs is near-infrared spectroscopy (NIRS), an optical technique to measure localized cortical brain activity. The benefits of using this non-invasive modality are safety, portability and accessibility. A number of commercial multi-channel NIRS systems are available; however, we have developed a straightforward custom-built system to investigate the functionality of an fNIRS-BCI system. This work describes the construction of the device, the principles of operation and the implementation of an fNIRS-BCI application, 'Mindswitch', that harnesses motor imagery for control. Analysis is performed online and feedback of performance is presented to the user. Mindswitch presents a basic 'on/off' switching option to the user, where selection of either state takes 1 min. Initial results show that fNIRS can support simple BCI functionality and shows much potential. Although performance may currently be inferior to many EEG systems, there is much scope for development, particularly with more sophisticated signal processing and classification techniques. We hope that by presenting fNIRS as an accessible and affordable option, a new avenue of exploration will open within the BCI research community and stimulate further research in fNIRS-BCIs. PMID:17873424

  19. Characterizing Molecular Structure by Combining Experimental Measurements with Density Functional Theory Computations

    NASA Astrophysics Data System (ADS)

    Lopez-Encarnacion, Juan M.

    2016-06-01

    In this talk, the power and synergy of combining experimental measurements with density functional theory computations as a single tool to unambiguously characterize the molecular structure of complex atomic systems is shown. Here, we present three beautiful cases where the interaction between experiment and theory is in very good agreement for both finite and extended systems: 1) Characterizing Metal Coordination Environments in Porous Organic Polymers: A Joint Density Functional Theory and Experimental Infrared Spectroscopy Study; 2) Characterization of Rhenium Compounds Obtained by Electrochemical Synthesis After Aging Process; and 3) Infrared Study of H(D)2 + Co4+ Chemical Reaction: Characterizing Molecular Structures. References: J.M. López-Encarnación, K.K. Tanabe, M.J.A. Johnson, J. Jellinek, Chemistry - A European Journal 19 (41), 13646-13651; A. Vargas-Uscategui, E. Mosquera, J.M. López-Encarnación, B. Chornik, R.S. Katiyar, L. Cifuentes, Journal of Solid State Chemistry 220, 17-21.

  20. Computational identification of riboswitches based on RNA conserved functional sequences and conformations.

    PubMed

    Chang, Tzu-Hao; Huang, Hsien-Da; Wu, Li-Ching; Yeh, Chi-Ta; Liu, Baw-Jhiune; Horng, Jorng-Tzong

    2009-07-01

    Riboswitches are cis-acting genetic regulatory elements within a specific mRNA that can regulate both transcription and translation by interacting with their corresponding metabolites. Recently, an increasing number of riboswitches have been identified in different species and investigated for their roles in regulatory functions. Both the sequence contexts and the structural conformations are important characteristics of riboswitches. None of the previously developed tools, such as covariance models (CMs), Riboswitch finder, and RibEx, provides a web server for efficiently searching homologous instances of known riboswitches or considers the two crucial characteristics of each riboswitch, namely the structural conformations and the sequence contexts of functional regions. Therefore, we developed a systematic method for identifying 12 kinds of riboswitches. The method is implemented and provided as a web server, RiboSW, to efficiently and conveniently identify riboswitches within messenger RNA sequences. The predictive accuracy of the proposed method is comparable with that of previous tools. The efficiency of the proposed method for identifying riboswitches was improved to achieve a reasonable computational time for prediction, making it possible to offer an accurate and convenient web server for biologists to obtain the results of their analysis of a given mRNA sequence. RiboSW is now available on the web at http://RiboSW.mbc.nctu.edu.tw/. PMID:19460868

  1. Brain computer interface using a simplified functional near-infrared spectroscopy system

    NASA Astrophysics Data System (ADS)

    Coyle, Shirley M.; Ward, Tomás E.; Markham, Charles M.

    2007-09-01

    A brain-computer interface (BCI) is a device that allows a user to communicate with external devices through thought processes alone. A novel signal acquisition tool for BCIs is near-infrared spectroscopy (NIRS), an optical technique to measure localized cortical brain activity. The benefits of using this non-invasive modality are safety, portability and accessibility. A number of commercial multi-channel NIRS systems are available; however, we have developed a straightforward custom-built system to investigate the functionality of an fNIRS-BCI system. This work describes the construction of the device, the principles of operation and the implementation of an fNIRS-BCI application, 'Mindswitch', that harnesses motor imagery for control. Analysis is performed online and feedback of performance is presented to the user. Mindswitch presents a basic 'on/off' switching option to the user, where selection of either state takes 1 min. Initial results show that fNIRS can support simple BCI functionality and shows much potential. Although performance may currently be inferior to many EEG systems, there is much scope for development, particularly with more sophisticated signal processing and classification techniques. We hope that by presenting fNIRS as an accessible and affordable option, a new avenue of exploration will open within the BCI research community and stimulate further research in fNIRS-BCIs.

  2. Can ultrasound and computed tomography replace high-dose urography in patients with impaired renal function?

    PubMed

    Webb, J A; Reznek, R H; White, F E; Cattell, W R; Fry, I K; Baker, L R

    1984-01-01

    Ninety-one patients with unexplained impaired renal function were investigated by high-dose urography, ultrasound and computed tomography (CT) without contrast. The aim was to evaluate the role of ultrasound and CT in renal failure, in particular their ability to define renal length and to show collecting system dilatation. In the majority of patients, renal length could be measured accurately by ultrasound. Measurements were less than those at urography because of the absence of magnification. Renal measurement by CT was not a sufficiently accurate indicator of renal length to be of clinical use. Both ultrasound and CT were sensitive detectors of collecting system dilatation: neither technique missed any case diagnosed by urography. However, in the presence of staghorn calculi or multiple cysts, neither ultrasound nor CT could exclude collecting system dilatation. CT was the only technique which demonstrated retroperitoneal nodes or fibrosis causing obstruction. It is proposed that the first investigation when renal function is impaired should be ultrasound, with plain films and renal tomograms to show calculi. CT should be reserved for those patients in whom ultrasound is not diagnostic or in whom ultrasound shows collecting system dilatation but does not demonstrate the cause. Using this scheme, ultrasound, plain radiography and CT would have demonstrated collecting system dilatation and, where appropriate, shown the cause of obstruction in 84 per cent of patients in this series. Only 16 per cent of patients would have required either high-dose urography or retrograde ureterograms.

  3. Combining regression trees and radial basis function networks.

    PubMed

    Orr, M; Hallam, J; Takezawa, K; Murra, A; Ninomiya, S; Oide, M; Leonard, T

    2000-12-01

    We describe a method for non-parametric regression which combines regression trees with radial basis function networks. The method is similar to that of Kubat, who was first to suggest such a combination, but has some significant improvements. We demonstrate the features of the new method, compare its performance with other methods on DELVE data sets and apply it to a real world problem involving the classification of soybean plants from digital images.
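
    In the spirit of the combination described, a compact sketch: a regression tree partitions the input space, its leaves provide RBF centers and widths, and the output weights are then fit linearly. This is a simplified illustration, not the authors' exact algorithm (which includes further refinements).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

# 1. Grow a regression tree; each leaf carves out a region of input space.
tree = DecisionTreeRegressor(max_leaf_nodes=12).fit(X, y)
leaf = tree.apply(X)
leaves = np.unique(leaf)
centers = np.array([X[leaf == l].mean(axis=0) for l in leaves])
widths = np.array([X[leaf == l].std(axis=0) + 0.5 for l in leaves])

# 2. Place one Gaussian RBF per leaf and fit the output weights linearly.
def design(X):
    d = (X[:, None, :] - centers[None, :, :]) / widths[None, :, :]
    return np.exp(-0.5 * (d ** 2).sum(axis=2))

w = np.linalg.lstsq(design(X), y, rcond=None)[0]
rmse = np.sqrt(np.mean((y - design(X) @ w) ** 2))
print("training RMSE:", rmse.round(3))
```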

  4. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  5. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain-computer interface (BCI) can be defined as any system that can track a person's intent, embedded in his/her brain activity, and from it alone translate that intention into commands for a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable, and MEG can provide more specific information that could later be exploited through EEG signals as well. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal, and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to give an overview of a new procedure we recently developed, named functional source separation (FSS). As it derives from blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, while remaining blind to the biophysical nature of the signal sources. FSS returns the single-trial source activity, estimates the time course of a neuronal pool across different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, which allows the exploitation of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features (spatial and time-frequency properties) are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful for developing BCI feedback control systems.
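
    A toy sketch of the optimization step, assuming synthetic multichannel data: SciPy's dual_annealing (a simulated-annealing variant) maximizes a non-differentiable functional criterion over mixing weights. The criterion, data and response window here are invented for illustration only, not the FSS implementation.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(4)
n_ch, n_t = 8, 300
source = np.zeros(n_t)
source[100:120] = 1.0                        # evoked response window
mixing = rng.normal(size=n_ch)
X = np.outer(mixing, source) + rng.normal(size=(n_ch, n_t))   # sensor data

def neg_functional(w):
    """Negated functional criterion: evoked amplitude in the response window
    relative to baseline. It need not be differentiable in w."""
    s = w @ X
    return -(np.abs(s[100:120]).mean() - np.abs(s[:100]).mean())

res = dual_annealing(neg_functional, bounds=[(-1, 1)] * n_ch, maxiter=200)
print("estimated un-mixing weights:", res.x.round(2))
```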

  6. Computed Tomography-Based Centrilobular Emphysema Subtypes Relate with Pulmonary Function

    PubMed Central

    Takahashi, Mamoru; Yamada, Gen; Koba, Hiroyuki; Takahashi, Hiroki

    2013-01-01

    Introduction: Centrilobular emphysema (CLE) is recognized as low attenuation areas (LAA) with a centrilobular pattern on high-resolution computed tomography (CT). However, several shapes of LAA are observed. Our preliminary study identified three types of LAA in CLE by CT-pathologic correlation. This study was performed to investigate whether the morphological features of LAA affect pulmonary function. Materials and Methods: A total of 73 Japanese patients with stable CLE (63 males, 10 females) were evaluated visually by CT and classified into three subtypes based on the morphology of LAA, including shape and sharpness of the border: patients showing round or oval LAA with a well-defined border (Subtype A), polygonal or irregular-shaped LAA with an ill-defined border (Subtype B), and irregular-shaped LAA with an ill-defined border coalescing with each other (Subtype C). CT score, pulmonary function tests and smoking index were compared among the three subtypes. Results: Twenty (27%), 45 (62%) and 8 (11%) of the patients were grouped into Subtype A, Subtype B and Subtype C, respectively. In CT score and smoking index, both Subtype B and Subtype C were significantly higher than Subtype A. In FEV1%, Subtype C was significantly lower than both Subtype A and Subtype B. In the diffusing capacity of the lung for carbon monoxide, Subtype B was significantly lower than Subtype A. Conclusion: The morphological differences of LAA may relate to airflow limitation and alveolar diffusing capacity. Assessing the morphological features of LAA may be helpful for predicting respiratory function. PMID:23935765

  7. Indices of cognitive function measured in rugby union players using a computer-based test battery.

    PubMed

    MacDonald, Luke A; Minahan, Clare L

    2016-09-01

    The purpose of this study was to investigate the intra- and inter-day reliability of cognitive performance using a computer-based test battery in team-sport athletes. Eighteen elite male rugby union players (age: 19 ± 0.5 years) performed three experimental trials (T1, T2 and T3) of the test battery: T1 and T2 on the same day, and T3 on the following day, 24 h later. The test battery comprised four cognitive tests assessing the cognitive domains of executive function (Groton Maze Learning Task), psychomotor function (Detection Task), vigilance (Identification Task), and visual learning and memory (One Card Learning Task). The intraclass correlation coefficients (ICCs) for the Detection Task, the Identification Task and the One Card Learning Task performance variables ranged from 0.75 to 0.92 when comparing T1 to T2 to assess intra-day reliability, and 0.76 to 0.83 when comparing T1 and T3 to assess inter-day reliability. The ICCs for the Groton Maze Learning Task intra- and inter-day reliability were 0.67 and 0.57, respectively. We concluded that the Detection Task, the Identification Task and the One Card Learning Task are reliable measures of psychomotor function, vigilance, and visual learning and memory in rugby union players. The reliability of the Groton Maze Learning Task is questionable (mean coefficient of variation (CV) = 19.4%) and, therefore, its results should be interpreted with caution.

  8. COPD phenotypes on computed tomography and its correlation with selected lung function variables in severe patients

    PubMed Central

    da Silva, Silvia Maria Doria; Paschoal, Ilma Aparecida; De Capitani, Eduardo Mello; Moreira, Marcos Mello; Palhares, Luciana Campanatti; Pereira, Mônica Corso

    2016-01-01

    Background Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were the end-tidal concentration of carbon dioxide (ETCO2), the slope of phase 2 (Slp2), and the slope of phase 3 (Slp3) of the capnogram, the curve which represents the total amount of CO2 eliminated by the lungs during each breath. Objective To investigate, in a group of patients with severe COPD, whether phenotypic analysis by CT could identify different subsets of patients, and whether there was an association between CT findings and functional variables. Subjects and methods Sixty-five patients with COPD Gold III-IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomographic findings were evaluated, and the patients were then identified as having an emphysema (EMP) or airway disease (AWD) phenotype. The EMP and AWD groups were compared, and tomographic finding scores were evaluated against spirometric, 6MWT, and VC variables. Results Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of the airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD groups, but EMP patients had significantly lower body mass index, worse obstruction, and shorter walked distance on the 6MWT. Concerning VC, EMP patients had significantly lower ETCO2, Slp2 and Slp3. Increases in Slp3 characterize heterogeneous involvement of the distal air spaces, as in AWD. Conclusion Visual assessment and

  9. Tools for Computing the AGN Feedback: Radio-loudness Distribution and the Kinetic Luminosity Function

    NASA Astrophysics Data System (ADS)

    La Franca, F.; Melini, G.; Fiore, F.

    2010-07-01

    We studied the active galactic nucleus (AGN) radio emission from a compilation of hard X-ray-selected samples, all observed in the 1.4 GHz band. A total of more than 1600 AGNs with 2-10 keV de-absorbed luminosities higher than 10^42 erg s^-1 were used. For a sub-sample of about fifty z <~ 0.1 AGNs, it was possible to reach ~80% radio detections and therefore, for the first time, it was possible to almost completely measure the probability distribution function of the ratio between the radio and the X-ray luminosity, RX = log(L_1.4/L_X), where L_1.4/L_X = νL_ν(1.4 GHz)/L_X(2-10 keV). The probability distribution function of RX was functionally fitted as dependent on the X-ray luminosity and redshift, P(RX | L_X, z). It roughly spans six decades (-7 < RX < -1) and does not show any sign of bi-modality. The result is that the probability of finding large values of the RX ratio increases with decreasing X-ray luminosities and (possibly) with increasing redshift. No statistically significant difference was found between the radio properties of the X-ray absorbed (N_H > 10^22 cm^-2) and un-absorbed AGNs. Measurement of the probability distribution function of RX allowed us to compute the kinetic luminosity function and the kinetic energy density which, at variance with what is assumed in many galaxy evolution models, is observed to decrease by about a factor of 5 at redshifts below 0.5. About half of the kinetic energy density turns out to be produced by the more radio-quiet (RX < -4) AGNs. In agreement with previous estimates, the AGN efficiency ε_kin in converting the accreted mass energy into kinetic power (L_K = ε_kin ṁ c^2) is, on average, ε_kin ≈ 5 × 10^-3. The data suggest a possible increase of ε_kin at low redshifts.

  10. Density functional theory computation of Nuclear Magnetic Resonance parameters in light and heavy nuclei

    NASA Astrophysics Data System (ADS)

    Sutter, Kiplangat

    This thesis illustrates the utilization of density functional theory (DFT) in calculations of gas- and solution-phase Nuclear Magnetic Resonance (NMR) properties of light and heavy nuclei. Computing NMR properties is still a challenge, and many factors are still being explored, for instance the influence of hydrogen bonding, thermal motion, vibration, rotation and solvent effects. In the theoretical studies of the 195Pt NMR chemical shift in cisplatin and its derivatives presented in Chapters 2 and 3 of this thesis, the importance of representing explicit solvent molecules around the Pt center in cisplatin complexes was outlined. In the same complexes, solvent effects contributed about half of the J(Pt-N) coupling constant, indicating the significance of considering the surrounding solvent molecules when elucidating NMR measurements of cisplatin binding to DNA. In Chapter 4, we explore the spin-orbit (SO) effects on the 29Si and 13C chemical shifts induced by surrounding metal and ligands. The unusual Ni, Pd, Pt trends in SO effects on the 29Si shifts in metallasilatrane complexes X-Si-(mu-mt)4-M-Y were interpreted based on electronic and relativistic effects rather than on structural differences between the complexes. In addition, we develop a non-linear model for predicting NMR SO effects in a series of organic molecules bonded to heavy-element halides. In Chapter 5, we extend the idea of "Chemist's orbitals" LMO analysis to the quantum chemical proton NMR computation of systems with internal resonance-assisted hydrogen bonds. Consequently, we explicitly link the NMR parameters of H-bonded systems to the intuitive picture of a chemical bond from quantum calculations. The analysis shows how NMR signatures characteristic of the H-bond can be explained by local bonding and electron delocalization concepts. One shortcoming of some of the anti-cancer agents like cisplatin is that they are toxic and researchers are looking for

  11. Response functions for computing absorbed dose to skeletal tissues from neutron irradiation

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Johnson, Perry; Jokisch, Derek W.; Eckerman, Keith F.; Bolch, Wesley E.

    2011-11-01

    Spongiosa in the adult human skeleton consists of three tissues: active marrow (AM), inactive marrow (IM) and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM50), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40-year-old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM50 targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM50 and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM50 DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 MeV
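
    The DRF formalism ultimately folds an incident neutron fluence spectrum with an energy-dependent response to obtain absorbed dose. As a rough illustration of that folding step only, the sketch below numerically integrates the product of a fluence spectrum and a response function over energy; the spectrum shape, response shape, and constants are hypothetical stand-ins, not the paper's tabulated DRFs.

      import numpy as np

      # Hypothetical energy grid spanning the 10 eV to 100 MeV range discussed above (in MeV)
      energy = np.logspace(-5, 2, 400)
      fluence = 1.0e6 * energy**-0.8                # made-up fluence spectrum (n cm^-2 MeV^-1)
      response = 1.0e-11 * energy / (energy + 0.5)  # made-up dose-response function (Gy cm^2)

      # Absorbed dose is the energy integral of fluence times response
      dose = np.trapz(fluence * response, energy)
      print(f"absorbed dose ~ {dose:.3e} Gy")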

  12. Micro-computed tomography assessment of fracture healing: relationships among callus structure, composition, and mechanical function.

    PubMed

    Morgan, Elise F; Mason, Zachary D; Chien, Karen B; Pfeiffer, Anthony J; Barnes, George L; Einhorn, Thomas A; Gerstenfeld, Louis C

    2009-02-01

    Non-invasive characterization of fracture callus structure and composition may facilitate development of surrogate measures of the regain of mechanical function. As such, quantitative computed tomography (CT)-based analyses of fracture calluses could enable more reliable clinical assessments of bone healing. Although previous studies have used CT to quantify and predict fracture healing, it is unclear which of the many CT-derived metrics of callus structure and composition are the most predictive of callus mechanical properties. The goal of this study was to identify the changes in fracture callus structure and composition that occur over time and that are most closely related to the regain of mechanical function. Micro-computed tomography (microCT) imaging and torsion testing were performed on murine fracture calluses (n=188) at multiple post-fracture timepoints and under different experimental conditions that alter fracture healing. Total callus volume (TV), mineralized callus volume (BV), callus mineralized volume fraction (BV/TV), bone mineral content (BMC), tissue mineral density (TMD), standard deviation of mineral density (sigma(TMD)), effective polar moment of inertia (J(eff)), torsional strength, and torsional rigidity were quantified. Multivariate statistical analyses, including multivariate analysis of variance, principal components analysis, and stepwise regression, were used to identify differences in callus structure and composition among experimental groups and to determine which of the microCT outcome measures were the strongest predictors of mechanical properties. Although calluses varied greatly in the absolute and relative amounts of mineralized tissue (BV, BMC, and BV/TV), differences among timepoints were most strongly associated with changes in tissue mineral density. Torsional strength and rigidity were dependent on mineral density as well as the amount of mineralized tissue: TMD, BV, and sigma(TMD) explained 62% of the variation in
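
    To make the kind of relationship the stepwise regressions quantify concrete, the sketch below fits torsional strength on three microCT metrics by ordinary least squares. All numbers are synthetic stand-ins generated for the example; the coefficients and the resulting R^2 bear no relation to the study's data.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 188                                  # same sample size as the study; data synthetic
      TMD = rng.normal(600.0, 50.0, n)         # tissue mineral density (mg HA/cm^3)
      BV = rng.normal(5.0, 1.0, n)             # mineralized callus volume (mm^3)
      sigma_TMD = rng.normal(120.0, 15.0, n)   # heterogeneity of mineral density

      # Synthetic torsional strength loosely driven by the three predictors
      strength = 0.01 * TMD + 2.0 * BV - 0.02 * sigma_TMD + rng.normal(0.0, 2.0, n)

      X = np.column_stack([TMD, BV, sigma_TMD])
      model = LinearRegression().fit(X, strength)
      print("R^2 =", model.score(X, strength))  # analogue of the "explained variation" above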

  13. Functional Analysis of Metabolic Channeling and Regulation in Lignin Biosynthesis: A Computational Approach

    PubMed Central

    Lee, Yun; Escamilla-Treviño, Luis; Dixon, Richard A.; Voit, Eberhard O.

    2012-01-01

    Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid. PMID:23144605

  14. Insights into the function of ion channels by computational electrophysiology simulations.

    PubMed

    Kutzner, Carsten; Köpfer, David A; Machtens, Jan-Philipp; de Groot, Bert L; Song, Chen; Zachariae, Ulrich

    2016-07-01

    Ion channels are of universal importance for all cell types and play key roles in cellular physiology and pathology. Increased insight into their functional mechanisms is crucial to enable drug design on this important class of membrane proteins, and to enhance our understanding of some of the fundamental features of cells. This review presents the concepts behind the recently developed simulation protocol Computational Electrophysiology (CompEL), which facilitates the atomistic simulation of ion channels in action. In addition, the review provides guidelines for its application in conjunction with the molecular dynamics software package GROMACS. We first lay out the rationale for designing CompEL as a method that models the driving force for ion permeation through channels the way it is established in cells, i.e., by electrochemical ion gradients across the membrane. This is followed by an outline of its implementation and a description of key settings and parameters helpful to users wishing to set up and conduct such simulations. In recent years, key mechanistic and biophysical insights have been obtained by employing the CompEL protocol to address a wide range of questions on ion channels and permeation. We summarize these recent findings on membrane proteins, which span a spectrum from highly ion-selective, narrow channels to wide diffusion pores. Finally we discuss the future potential of CompEL in light of its limitations and strengths. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov.

  15. Computer-mediated communication preferences predict biobehavioral measures of social-emotional functioning.

    PubMed

    Babkirk, Sarah; Luehring-Jones, Peter; Dennis-Tiwary, Tracy A

    2016-12-01

    The use of computer-mediated communication (CMC) as a form of social interaction has become increasingly prevalent, yet few studies examine individual differences that may shed light on implications of CMC for adjustment. The current study examined neurocognitive individual differences associated with preferences to use technology in relation to social-emotional outcomes. In Study 1 (N = 91), a self-report measure, the Social Media Communication Questionnaire (SMCQ), was evaluated as an assessment of preferences for communicating positive and negative emotions on a scale ranging from purely via CMC to purely face-to-face. In Study 2, SMCQ preferences were examined in relation to event-related potentials (ERPs) associated with early emotional attention capture and reactivity (the frontal N1) and later sustained emotional processing and regulation (the late positive potential (LPP)). Electroencephalography (EEG) was recorded while 22 participants passively viewed emotional and neutral pictures and completed an emotion regulation task with instructions to increase, decrease, or maintain their emotional responses. A greater preference for CMC was associated with reduced size of and satisfaction with social support, greater early (N1) attention capture by emotional stimuli, and reduced LPP amplitudes to unpleasant stimuli in the increase emotion regulatory task. These findings are discussed in the context of possible emotion- and social-regulatory functions of CMC.

  16. [Functional multispiral computed tomography of sound-transmitting structures in the middle ear].

    PubMed

    Bodrova, I V; Rusektskiĭ, Iu Iu; Kulakova, L A; Lopatin, A S; Ternovoĭ, S K

    2011-01-01

    The objective of this work was to estimate the potential of functional multispiral computed tomography (fMSCT) for the choice and planning of the treatment strategy and the extent of surgical intervention in patients presenting with fibroosseous diseases of the middle ear associated with pathologically altered mobility of the auditory ossicles. Studies with the use of MSCT and fMSCT for the examination of temporal bones in 21 patients (25 observations) provided information about normal CT anatomy of the middle ear and a basis for the development of the fMSCT protocol; moreover, they allowed the range of mobility of the auditory ossicles to be determined in healthy subjects and in patients with middle ear disorders. It is concluded that fMSCT of the temporal bones may be recommended for patients suffering from otosclerosis, tympanosclerosis, and adhesive otitis media. The use of this technique improves the accuracy of diagnosis and facilitates the choice and planning of the treatment strategy and the extent of surgical intervention in patients presenting with middle ear diseases.

  17. Cognition and control in schizophrenia: a computational model of dopamine and prefrontal function.

    PubMed

    Braver, T S; Barch, D M; Cohen, J D

    1999-08-01

    Behavioral deficits suffered by patients with schizophrenia in a wide array of cognitive domains can be conceptualized as failures of cognitive control, due to an impaired ability to internally represent, maintain, and update context information. A theory is described that postulates a single neurobiological mechanism for these disturbances, involving dysfunctional interactions between the dopamine neurotransmitter system and the prefrontal cortex. Specifically, it is hypothesized that in schizophrenia, there is increased noise in the activity of the dopamine system, leading to abnormal "gating" of information into prefrontal cortex. The theory is implemented as an explicit connectionist computational model that incorporates the roles of both dopamine and prefrontal cortex in cognitive control. A simulation is presented of behavioral performance in a version of the Continuous Performance Test specifically adapted to measure critical aspects of cognitive control function. Schizophrenia patients exhibit clear behavioral deficits on this task that reflect impairments in both the maintenance and updating of context information. The simulation results suggest that the model can successfully account for these impairments in terms of abnormal dopamine activity. This theory provides a potential point of contact between research on the neurobiological and psychological aspects of schizophrenia, by illustrating how a particular physiological disturbance might lead to precise and quantifiable consequences for behavior.

  18. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGES

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; et al

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
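
    As a concrete reading of this setup, the sketch below encodes a time-varying utility that decays after a soft deadline and a greedy mapper that drops tasks whose best earnable utility falls below a threshold. The decay shape, field names, and drop rule are illustrative assumptions, not the heuristics evaluated in the paper.

      import math

      def earned_utility(max_utility, soft_deadline, decay, finish_time):
          # Time-varying utility: full value up to a soft deadline, exponential decay after
          if finish_time <= soft_deadline:
              return max_utility
          return max_utility * math.exp(-decay * (finish_time - soft_deadline))

      def greedy_map(tasks, machines, now, drop_frac=0.1):
          # Map each task to the machine where it earns the most utility; drop
          # low utility-earning tasks instead of wasting machine time on them.
          ready = {m: now for m in machines}           # next free time per machine
          schedule, dropped = [], []
          for t in sorted(tasks, key=lambda t: -t["max_utility"]):
              def u_on(m, t=t):
                  finish = ready[m] + t["runtime"][m]
                  return earned_utility(t["max_utility"], t["deadline"], t["decay"], finish)
              best = max(machines, key=u_on)
              if u_on(best) < drop_frac * t["max_utility"]:
                  dropped.append(t["name"])
              else:
                  schedule.append((t["name"], best, u_on(best)))
                  ready[best] += t["runtime"][best]
          return schedule, dropped

      tasks = [{"name": "t1", "max_utility": 10.0, "deadline": 5.0, "decay": 1.0,
                "runtime": {"m1": 3.0, "m2": 6.0}},
               {"name": "t2", "max_utility": 2.0, "deadline": 1.0, "decay": 2.0,
                "runtime": {"m1": 4.0, "m2": 8.0}}]
      print(greedy_map(tasks, ["m1", "m2"], now=0.0))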

  19. A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function

    PubMed Central

    Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.

    2015-01-01

    Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120

  20. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    SciTech Connect

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; Rambharos, R.

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.

  1. Restoring unassisted natural gait to paraplegics via functional neuromuscular stimulation: a computer simulation study.

    PubMed

    Yamaguchi, G T; Zajac, F E

    1990-09-01

    Functional neuromuscular stimulation (FNS) of paralyzed muscles has enabled spinal-cord-injured patients to regain a semblance of lower-extremity control, for example to ambulate while relying heavily on the use of walkers. Given the limitations of FNS, specifically low muscle strengths, high rates of fatigue, and a limited ability to modulate muscle excitations, it remains unclear, however, whether FNS can be developed as a practical means to control the lower extremity musculature to restore aesthetic, unsupported gait to paraplegics. A computer simulation of FNS-assisted bipedal gait shows that it is difficult, but possible to attain undisturbed, level gait at normal speeds provided the electrically-stimulated ankle plantarflexors exhibit either near-normal strengths or are augmented by an orthosis, and at least seven muscle-groups in each leg are stimulated. A combination of dynamic programming and an open-loop, trial-and-error adjustment process was used to find a suboptimal set of discretely-varying muscle stimulation patterns needed for a 3-D, 8 degree-of-freedom dynamic model to sustain a step. An ankle-foot orthosis was found to be especially useful, as it helped to stabilize the stance leg and simplified the task of controlling the foot during swing. It is believed that the process of simulating natural gait with this model will serve to highlight difficulties to be expected during laboratory and clinical trials.

  2. Evaluation of Coupled Perturbed and Density Functional Methods of Computing the Parity-Violating Energy Difference between Enantiomers

    NASA Astrophysics Data System (ADS)

    MacDermott, A. J.; Hyde, G. O.; Cohen, A. J.

    2009-03-01

    We present new coupled-perturbed Hartree-Fock (CPHF) and density functional theory (DFT) computations of the parity-violating energy difference (PVED) between enantiomers for H2O2 and H2S2. Our DFT PVED computations are the first for H2S2 and the first with the new HCTH and OLYP functionals. Like other “second generation” PVED computations, our results are an order of magnitude larger than the original “first generation” uncoupled-perturbed Hartree-Fock computations of Mason and Tranter. We offer an explanation for the dramatically larger size in terms of cancellation of contributions of opposing signs, which also explains the basis set sensitivity of the PVED, and its conformational hypersensitivity (addressed in the following paper). This paper also serves as a review of the different types of “second generation” PVED computations: we set our work in context, comparing our results with those of four other groups, and noting the good agreement between results obtained by very different methods. DFT PVEDs tend to be somewhat inflated compared to the CPHF values, but this is not a problem when only sign and order of magnitude are required. Our results with the new OLYP functional are less inflated than those with other functionals, and OLYP is also more efficient computationally. We therefore conclude that DFT computation offers a promising approach for low-cost extension to larger biosystems, especially polymers. The following two papers extend to terrestrial and extra-terrestrial amino acids respectively, and later work will extend to polymers.

  3. Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument

    PubMed Central

    Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.

    2012-01-01

    Background. Having psychometrically strong disability measures that minimize response burden is important in the assessment of older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veteran's Short Form-36 and assessed test-retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multi-dimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error of approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test-retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). Strength and pattern of correlations with Veteran's Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve
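
    The heart of a CAT is the item-selection loop: at each step, administer the calibrated item that is most informative at the current ability estimate. A minimal sketch under a two-parameter logistic (2PL) IRT model follows; the item parameters are randomly generated for illustration and are not the instrument's calibration.

      import numpy as np

      def fisher_information(theta, a, b):
          # 2PL information: I(theta) = a^2 * p * (1 - p), with p the response probability
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          return a**2 * p * (1.0 - p)

      rng = np.random.default_rng(1)
      a = rng.uniform(0.8, 2.0, 141)    # illustrative discriminations (141-item bank)
      b = rng.normal(0.0, 1.0, 141)     # illustrative difficulties

      theta = 0.0                       # current ability estimate
      administered = []
      for _ in range(10):               # a 10-item CAT, as in the simulations above
          info = fisher_information(theta, a, b)
          info[administered] = -np.inf  # never repeat an item
          administered.append(int(np.argmax(info)))
          # ...score the response and update theta (e.g. EAP or MLE) before the next pick...
      print(administered)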

  4. Sensory processing during viewing of cinematographic material: computational modeling and functional neuroimaging.

    PubMed

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-02-15

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom-up signals on brain activity

  5. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  6. Isotonic Modeling with Non-Differentiable Loss Functions with Application to Lasso Regularization.

    PubMed

    Painsky, Amichai; Rosset, Saharon

    2016-02-01

    In this paper we present an algorithmic approach for fitting isotonic models under convex, yet non-differentiable, loss functions. It is a generalization of the greedy non-regret approach proposed by Luss and Rosset (2014) for differentiable loss functions, taking into account the required subgradient extensions. We prove that our suggested algorithm solves the isotonic modeling problem while maintaining favorable computational and statistical properties. As our suggested algorithm may be used for any non-differentiable loss function, we focus our interest on isotonic modeling for either regression or two-class classification with the appropriate log-likelihood loss and a lasso penalty on the fitted values. This combination allows us to maintain the non-parametric nature of isotonic modeling while controlling model complexity through regularization. We demonstrate the efficiency and usefulness of this approach on both synthetic and real-world data. An implementation of our suggested solution is publicly available from the first author's website (https://sites.google.com/site/amichaipainsky/software).
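
    For the squared-loss special case, scikit-learn's IsotonicRegression illustrates the basic monotone-fit constraint that the paper generalizes to non-differentiable losses with lasso regularization; the data below are synthetic.

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      rng = np.random.default_rng(0)
      x = np.arange(50, dtype=float)
      y = np.log1p(x) + rng.normal(0.0, 0.3, 50)  # noisy signal, increasing in the mean

      iso = IsotonicRegression(increasing=True)
      y_fit = iso.fit_transform(x, y)             # non-decreasing stepwise fit
      assert np.all(np.diff(y_fit) >= 0)          # the isotonic constraint holds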

  7. The application of computer assisted technologies (CAT) in the rehabilitation of cognitive functions in psychiatric disorders of childhood and adolescence.

    PubMed

    Srebnicki, Tomasz; Bryńska, Anita

    2016-01-01

    The first applications of computer-assisted technologies (CAT) in the rehabilitation of cognitive deficits, including those in child and adolescent psychiatric disorders, date back to the 1980s. Recent developments in computer technologies, wide access to the Internet and the vast expansion of electronic devices have resulted in a dynamic increase in therapeutic software as well as supporting devices. The aim of computer-assisted technologies is the improvement of comfort and quality of life as well as the rehabilitation of impaired functions. The goal of this article is to present the most common computer-assisted technologies used in the therapy of children and adolescents with cognitive deficits, together with a literature review of their effectiveness, including the challenges and limitations of implementing such interventions. PMID:27556116

  8. Older Children and Adolescents with High-Functioning Autism Spectrum Disorders Can Comprehend Verbal Irony in Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Glenwright, Melanie; Agbayewa, Abiola S.

    2012-01-01

    We compared the comprehension of verbal irony presented in computer-mediated conversations for older children and adolescents with high-functioning autism spectrum disorders (HFASD) and typically developing (TD) controls. We also determined whether participants' interpretations of irony were affected by the relationship between characters in the…

  9. Content Range and Precision of a Computer Adaptive Test of Upper Extremity Function for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.

    2011-01-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…

  10. Density Functional Computations and Mass Spectrometric Measurements. Can this Coupling Enlarge the Knowledge of Gas-Phase Chemistry?

    NASA Astrophysics Data System (ADS)

    Marino, T.; Russo, N.; Sicilia, E.; Toscano, M.; Mineva, T.

    A series of gas-phase properties of the systems has been investigated using different exchange-correlation potentials and basis sets of increasing size in the framework of density functional theory, with the aim of determining a strategy able to give reliable results with reasonable computational effort.

  11. Utilization of high resolution computed tomography to visualize the three dimensional structure and function of plant vasculature

    Technology Transfer Automated Retrieval System (TEKTRAN)

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...

  12. Investigating the Potential of Computer Environments for the Teaching and Learning of Functions: A Double Analysis from Two Research Traditions

    ERIC Educational Resources Information Center

    Lagrange, Jean-Baptiste; Psycharis, Giorgos

    2014-01-01

    The general goal of this paper is to explore the potential of computer environments for the teaching and learning of functions. To address this, different theoretical frameworks and corresponding research traditions are available. In this study, we aim to network different frameworks by following a "double analysis" method to analyse two…

  13. Differential Item Functioning (DIF) Analysis of Computation, Word Problem and Geometry Questions across Gender and SES Groups.

    ERIC Educational Resources Information Center

    Berberoglu, Giray

    1995-01-01

    Item characteristic curves were compared across gender and socioeconomic status (SES) groups for the university entrance mathematics examination in Turkey to see if any group had an advantage in solving computation, word-problem, or geometry questions. Differential item functioning was found, and patterns are discussed. (SLD)

  14. INTERP3: A computer routine for linear interpolation of trivariate functions defined by nondistinct unequally spaced variables

    NASA Technical Reports Server (NTRS)

    Hill, D. C.; Morris, S. J., Jr.

    1979-01-01

    A report on the computer routine INTERP3 is presented. The routine is designed to linearly interpolate a variable which is a function of three independent variables. The variables within the parameter arrays do not have to be distinct or equally spaced, and the array variables can be in increasing or decreasing order.
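
    INTERP3 itself is a FORTRAN routine; as a minimal modern analogue, the sketch below performs trilinear interpolation on monotonically increasing, unequally spaced grids (the original also accepts decreasing and non-distinct grid values, which this simplified version does not handle).

      import numpy as np

      def interp3(xg, yg, zg, f, x, y, z):
          # Trilinear interpolation of f[i, j, k] tabulated on increasing grids xg, yg, zg
          def locate(g, v):
              i = int(np.clip(np.searchsorted(g, v) - 1, 0, len(g) - 2))
              return i, (v - g[i]) / (g[i + 1] - g[i])
          i, tx = locate(xg, x)
          j, ty = locate(yg, y)
          k, tz = locate(zg, z)
          value = 0.0
          for di, wx in ((0, 1 - tx), (1, tx)):
              for dj, wy in ((0, 1 - ty), (1, ty)):
                  for dk, wz in ((0, 1 - tz), (1, tz)):
                      value += wx * wy * wz * f[i + di, j + dj, k + dk]
          return value

      xg = np.array([0.0, 1.0, 2.5, 7.0])   # unequally spaced grids
      yg = np.array([0.0, 0.3, 1.0])
      zg = np.array([0.0, 2.0, 4.0, 8.0])
      F = xg[:, None, None] + yg[None, :, None] * zg[None, None, :]
      print(interp3(xg, yg, zg, F, 1.7, 0.5, 3.0))   # exact for this multilinear table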

  15. Implementation of the AES as a Hash Function for Confirming the Identity of Software on a Computer System

    SciTech Connect

    Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.; Mileson, Nicholas D.

    2003-01-20

    This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm-1 (SHA-1), the Message Digest-5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former that could be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
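
    The abstract does not spell out the exact construction, but a standard way to turn a block cipher such as the AES into a hash is the Davies-Meyer compression function, which chains H_i = E_{m_i}(H_{i-1}) XOR H_{i-1} over the message blocks. The sketch below, built on the Python cryptography package with deliberately simplified padding, is an illustrative assumption, not PNNL's tool.

      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def aes_davies_meyer(data: bytes) -> bytes:
          # Pad to whole 16-byte blocks (real designs also encode the message length)
          data += b"\x80" + b"\x00" * ((15 - len(data)) % 16)
          h = bytes(16)                      # all-zero initial chaining value
          for i in range(0, len(data), 16):
              block = data[i:i + 16]         # message block doubles as the AES-128 key
              enc = Cipher(algorithms.AES(block), modes.ECB()).encryptor()
              e = enc.update(h) + enc.finalize()
              h = bytes(x ^ y for x, y in zip(e, h))   # Davies-Meyer feed-forward
          return h

      print(aes_davies_meyer(b"software image bytes").hex())   # 128-bit digest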

  16. On One Unusual Method of Computation of Limits of Rational Functions in the Program Mathematica[R]

    ERIC Educational Resources Information Center

    Hora, Jaroslav; Pech, Pavel

    2005-01-01

    Computing limits of functions is a traditional part of mathematical analysis which is very difficult for students. Now an algorithm for the elimination of quantifiers in the field of real numbers is implemented in the program Mathematica. This offers a non-traditional view on this classical theme. (Contains 1 table.)

  17. Computational and functional analyses of a small-molecule binding site in ROMK.

    PubMed

    Swale, Daniel R; Sheehan, Jonathan H; Banerjee, Sreedatta; Husni, Afeef S; Nguyen, Thuy T; Meiler, Jens; Denton, Jerod S

    2015-03-10

    The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val(168) or Asn(171) in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321

  18. Computational and Functional Analyses of a Small-Molecule Binding Site in ROMK

    PubMed Central

    Swale, Daniel R.; Sheehan, Jonathan H.; Banerjee, Sreedatta; Husni, Afeef S.; Nguyen, Thuy T.; Meiler, Jens; Denton, Jerod S.

    2015-01-01

    The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val168 or Asn171 in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321

  19. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  20. Using High Resolution Computed Tomography to Visualize the Three Dimensional Structure and Function of Plant Vasculature

    PubMed Central

    McElrone, Andrew J.; Choat, Brendan; Parkinson, Dilworth Y.; MacDowell, Alastair A.; Brodersen, Craig R.

    2013-01-01

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans for coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants. On occasion some shrinkage of hydrated green plant tissues will cause

  1. Bayesian estimation of the hemodynamic response function in functional MRI

    NASA Astrophysics Data System (ADS)

    Marrelec, G.; Benali, H.; Ciuciu, P.; Poline, J.-B.

    2002-05-01

    Functional MRI (fMRI) is a recent, non-invasive technique allowing for the evolution of brain processes to be dynamically followed in various cognitive or behavioral tasks. In BOLD fMRI, what is actually measured is only indirectly related to neuronal activity through a process that is still under investigation. A convenient way to analyze BOLD fMRI data consists of considering the whole brain as a system characterized by a transfer response function, called the Hemodynamic Response Function (HRF). Precise and robust estimation of the HRF has not been achieved yet: parametric methods tend to be robust but require too strong constraints on the shape of the HRF, whereas non-parametric models are not reliable since the problem is badly conditioned. We therefore propose a full Bayesian, non-parametric method that makes use of basic but relevant a priori knowledge about the underlying physiological process to make robust inference about the HRF. We show that this model is very robust to decreasing signal-to-noise ratio and to the actual noise sampling distribution. We finally apply the method to real data, revealing a wide variety of HRF shapes.
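
    The estimation problem can be written as a linear model: the measured BOLD signal is, to a first approximation, the stimulus sequence convolved with the unknown HRF plus noise. The sketch below recovers an HRF by ridge-regularized least-squares deconvolution on synthetic data; the quadratic penalty is only a crude stand-in for the physiologically informed priors of the full Bayesian method described above.

      import numpy as np

      rng = np.random.default_rng(3)
      n, hrf_len = 200, 20                            # number of scans, HRF support in TRs
      onsets = (rng.random(n) < 0.1).astype(float)    # hypothetical binary stimulus train

      # Design matrix of lagged onsets, so that y = X @ hrf + noise
      X = np.zeros((n, hrf_len))
      for lag in range(hrf_len):
          X[lag:, lag] = onsets[:n - lag]

      true_hrf = np.exp(-0.5 * ((np.arange(hrf_len) - 6) / 2.0) ** 2)  # toy HRF shape
      y = X @ true_hrf + rng.normal(0.0, 0.5, n)

      lam = 10.0   # ridge penalty: a (very) simple surrogate for a smoothness prior
      hrf_hat = np.linalg.solve(X.T @ X + lam * np.eye(hrf_len), X.T @ y)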

  2. Efficient computation of the angularly resolved chord length distributions and lineal path functions in large microstructure datasets

    NASA Astrophysics Data System (ADS)

    Turner, David M.; Niezgoda, Stephen R.; Kalidindi, Surya R.

    2016-10-01

    Chord length distributions (CLDs) and lineal path functions (LPFs) have been successfully utilized in prior literature as measures of the size and shape distributions of the important microscale constituents in the material system. Typically, these functions are parameterized only by line lengths, and thus calculated and derived independent of the angular orientation of the chord or line segment. We describe in this paper computationally efficient methods for estimating chord length distributions and lineal path functions for 2D (two-dimensional) and 3D microstructure images defined on any number of arbitrary chord orientations. These so-called fully angularly resolved distributions can be computed for over 1000 orientations on large microstructure images (500^3 voxels) in minutes on modest hardware. We present these methods as new tools for characterizing microstructures in a statistically meaningful way.
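
    A chord length distribution collects the lengths of same-phase segments along test lines of a given orientation. The sketch below measures solid-phase chords along the 0-degree (row) direction of a synthetic 2D binary image; the paper's contribution is doing this efficiently for over 1000 orientations on far larger 3D images.

      import numpy as np

      def chord_lengths_rows(img):
          # Lengths of runs of 1s along each row (the 0-degree chord orientation)
          lengths = []
          for row in img:
              padded = np.concatenate(([0], row, [0]))  # ensure every run has a start/end
              d = np.diff(padded)
              lengths.extend(np.flatnonzero(d == -1) - np.flatnonzero(d == 1))
          return np.asarray(lengths)

      rng = np.random.default_rng(4)
      micro = (rng.random((500, 500)) < 0.4).astype(int)   # synthetic two-phase image
      cld, edges = np.histogram(chord_lengths_rows(micro), bins=30, density=True)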

  3. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
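
    FIT's central computation, a least-squares polynomial surface in two independent variables, can be sketched in a few lines; the degree and the synthetic data below are illustrative only.

      import numpy as np

      def fit_poly_surface(x, y, z, deg=2):
          # Least-squares fit of z ~ sum c_ij x^i y^j over all terms with i + j <= deg
          terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
          A = np.column_stack([x**i * y**j for i, j in terms])
          coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
          return dict(zip(terms, coeffs))

      rng = np.random.default_rng(5)
      x, y = rng.random(100), rng.random(100)
      z = 1.0 + 2.0 * x - 3.0 * y + 0.5 * x * y + rng.normal(0.0, 0.01, 100)
      print(fit_poly_surface(x, y, z))   # recovers roughly (0,0)=1, (1,0)=2, (0,1)=-3, (1,1)=0.5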

  4. Functional Assessment for Human-Computer Interaction: A Method for Quantifying Physical Functional Capabilities for Information Technology Users

    ERIC Educational Resources Information Center

    Price, Kathleen J.

    2011-01-01

    The use of information technology is a vital part of everyday life, but for a person with functional impairments, technology interaction may be difficult at best. Information technology is commonly designed to meet the needs of a theoretical "normal" user. However, there is no such thing as a "normal" user. A user's capabilities will vary over…

  5. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1983-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055
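
    As an illustration of the inverse-hyperbolic-sine family, the sketch below implements a common sinh-based interior-clustering map from the CFD literature (uniform xi in [0, 1] to x in [0, 1], clustered near a chosen interior point); it follows the spirit of the report rather than reproducing its exact derivation.

      import numpy as np

      def interior_cluster(n, xc=0.5, beta=5.0):
          # n grid points on [0, 1] clustered near x = xc; larger beta = tighter clustering
          xi = np.linspace(0.0, 1.0, n)
          A = np.log((1.0 + (np.exp(beta) - 1.0) * xc) /
                     (1.0 + (np.exp(-beta) - 1.0) * xc)) / (2.0 * beta)
          return xc * (1.0 + np.sinh(beta * (xi - A)) / np.sinh(beta * A))

      x = interior_cluster(21, xc=0.3, beta=6.0)
      print(np.round(np.diff(x), 4))   # spacing is smallest around x = 0.3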

  6. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.

  7. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    NASA Astrophysics Data System (ADS)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting, and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, based on a bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied, and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
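
    At its core, the NPFDA forecast is a Nadaraya-Watson-type kernel regression in which whole daily curves act as predictors. The bare-bones sketch below uses a plain L2 distance between curves, a Gaussian-shaped kernel, and synthetic data; the study instead uses an asymmetrical quadratic kernel, derivative- and FPCA-based semi-metrics, and a cross-validated bandwidth.

      import numpy as np

      def functional_kernel_forecast(curves, targets, new_curve, h):
          # Weight each past day by how close its daily profile is to the new day's
          d = np.sqrt(((curves - new_curve) ** 2).sum(axis=1))  # L2 curve distances
          w = np.exp(-((d / h) ** 2))                           # kernel weights
          return (w[:, None] * targets).sum(axis=0) / w.sum()

      rng = np.random.default_rng(6)
      past = rng.random((365, 24))       # 365 past days x 24 hourly values (synthetic)
      next_day = rng.random((365, 24))   # the day that followed each past day
      today = rng.random(24)
      forecast = functional_kernel_forecast(past, next_day, today, h=1.0)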

  8. Evaluating the Appropriateness of a New Computer-Administered Measure of Adaptive Function for Children and Youth with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng

    2016-01-01

    The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…

  9. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    PubMed

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-01

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical ensemble to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.

  10. The use of computer graphic techniques for the determination of ventricular function.

    NASA Technical Reports Server (NTRS)

    Sandler, H.; Rasmussen, D.

    1972-01-01

    Description of computer techniques employed to increase the speed, accuracy, reliability, and scope of angiocardiographic analyses determining human heart dimensions. Chamber margins are traced with a Calma 303 digitizer from projections of the angiographic films. The digitized margins of the ventricular images are filed in a computer for subsequent analysis. The margins can be displayed on the television screen of a graphics unit for individual study or they can be viewed in real time (or at any selected speed) to study dynamic changes in the chamber outline. The construction of three dimensional images of the ventricle is described.

  11. Real-time functional magnetic imaging-brain-computer interface and virtual reality: promising tools for the treatment of pedophilia.

    PubMed

    Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels

    2011-01-01

    This chapter proposes a prospective view on using a real-time functional magnetic imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed.

  12. Computation of dynamical correlation functions for many-fermion systems with auxiliary-field quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Vitali, Ettore; Shi, Hao; Qin, Mingpu; Zhang, Shiwei

    2016-08-01

    We address the calculation of dynamical correlation functions for many-fermion systems at zero temperature, using the auxiliary-field quantum Monte Carlo method. The two-dimensional Hubbard Hamiltonian is used as a model system. Although most of the calculations performed here are for cases where the sign problem is absent, the discussions are kept general for applications to physical problems when the sign problem does arise. We study the use of twisted boundary conditions to improve the extrapolation of the results to the thermodynamic limit. A strategy is proposed to drastically reduce finite-size effects by a minimization among the twist angles. This approach is demonstrated by computing the charge gap at half filling. We obtain accurate results showing the scaling of the gap with the interaction strength U in two dimensions, connecting to the scaling of the unrestricted Hartree-Fock method at small U and to the exact Bethe ansatz result in one dimension at large U. An alternative algorithm, which explicitly varies the number of particles during the random walks in the manifold of Slater determinants, is then proposed to compute dynamical Green functions and correlation functions. In dilute systems, such as ultracold Fermi gases, this algorithm enables calculations with much more favorable complexity, with computational cost proportional to the basis size or the number of lattice sites.
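    The twist-minimization idea can be illustrated on an exactly solvable model. The sketch below is a free-fermion stand-in, not the AFQMC algorithm itself: it scans the twist angle of a 1D tight-binding chain and picks the twist whose finite-size energy per site is closest to the thermodynamic-limit value.

    ```python
    import numpy as np

    def gs_energy_per_site(L, n_f, theta):
        """Ground-state energy per site of a 1D tight-binding chain of L
        sites holding n_f spinless fermions, with twisted boundary
        condition c_{j+L} = exp(1j * theta) * c_j."""
        k = (2.0 * np.pi * np.arange(L) + theta) / L   # allowed momenta
        eps = -2.0 * np.cos(k)                         # single-particle band
        return np.sort(eps)[:n_f].sum() / L            # fill the lowest levels

    L, n_f = 10, 5                        # half filling
    exact = -2.0 / np.pi                  # thermodynamic-limit value
    thetas = np.linspace(0.0, np.pi, 201)
    errors = [abs(gs_energy_per_site(L, n_f, t) - exact) for t in thetas]
    best = thetas[int(np.argmin(errors))]
    print(f"best twist {best:.3f} rad, finite-size error {min(errors):.1e}")
    ```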

  13. Variability in Reading Ability Gains as a Function of Computer-Assisted Instruction Method of Presentation

    ERIC Educational Resources Information Center

    Johnson, Erin Phinney; Perry, Justin; Shamir, Haya

    2010-01-01

    This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…

  14. Discourse Functions and Vocabulary Use in English Language Learners' Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Rabab'ah, Ghaleb

    2013-01-01

    This study explores the discourse generated by English as a foreign language (EFL) learners using synchronous computer-mediated communication (CMC) as an approach to help English language learners to create social interaction in the classroom. It investigates the impact of synchronous CMC mode on the quantity of total words, lexical range and…

  15. Computational insights into function and inhibition of fatty acid amide hydrolase.

    PubMed

    Palermo, Giulia; Rothlisberger, Ursula; Cavalli, Andrea; De Vivo, Marco

    2015-02-16

    The Fatty Acid Amide Hydrolase (FAAH) enzyme is a membrane-bound serine hydrolase responsible for the deactivating hydrolysis of a family of naturally occurring fatty acid amides. FAAH is a critical enzyme of the endocannabinoid system, being mainly responsible for regulating the level of its main cannabinoid substrate, anandamide. For this reason, pharmacological inhibition of FAAH, which increases the level of endogenous anandamide, is a promising strategy for treating a variety of conditions including pain, inflammation, and cancer. A large body of structural, mutagenesis, and kinetic data on FAAH has been generated over the past two decades. This has prompted several informative computational investigations to elucidate, at the atomic level, mechanistic details of the catalysis and inhibition of this pharmaceutically relevant enzyme. Here, we review how these computational studies - based on classical molecular dynamics, full quantum mechanics, and hybrid QM/MM methods - have clarified the binding and reactivity of some relevant substrates and inhibitors of FAAH. We also discuss the experimental implications of these computational insights, which have provided a thoughtful elucidation of the complex physical and chemical steps of the enzymatic mechanism of FAAH. Finally, we discuss how computations have been helpful for building structure-activity relationships of potent FAAH inhibitors. PMID:25240419

  16. Integrating computational modeling and functional assays to decipher the structure-function relationship of influenza virus PB1 protein

    PubMed Central

    Li, Chunfeng; Wu, Aiping; Peng, Yousong; Wang, Jingfeng; Guo, Yang; Chen, Zhigao; Zhang, Hong; Wang, Yongqiang; Dong, Jiuhong; Wang, Lulan; Qin, F. Xiao-Feng; Cheng, Genhong; Deng, Tao; Jiang, Taijiao

    2014-01-01

    The influenza virus PB1 protein is the core subunit of the heterotrimeric polymerase complex (PA, PB1 and PB2), in which PB1 is responsible for catalyzing RNA polymerization and for binding to the viral RNA promoter. Among the three subunits, PB1 is so far the least characterized in terms of structural information. In this work, by integrating a template-based structural modeling approach with all known sequence and functional information about the PB1 protein, we constructed a model structure of PB1. Based on this model, we performed mutagenesis analysis, in an RNP reconstitution system, of the key residues that constitute the RNA template binding and catalytic (TBC) channel. The results correlated well with the model and further identified new residues of PB1 that are critical for RNA synthesis. Moreover, we derived 5 peptides from the regions of PB1 that form the TBC channel, 4 of which can inhibit the viral RNA polymerase activity. Interestingly, we found that one of them, named PB1(491–515), can inhibit influenza virus replication by disrupting the viral RNA promoter binding activity of the polymerase. Therefore, this study has not only deepened our understanding of the structure-function relationship of PB1, but has also promoted the development of novel therapeutics against influenza virus. PMID:25424584

  17. Head sinuses, melon, and jaws of bottlenose dolphins, Tursiops truncatus, observed with computed tomography structural and single photon emission computed tomography functional imaging

    NASA Astrophysics Data System (ADS)

    Ridgway, Sam; Houser, Dorian; Finneran, James J.; Carder, Don; van Bonn, William; Smith, Cynthia; Hoh, Carl; Corbeil, Jacqueline; Mattrey, Robert

    2003-04-01

    The head sinuses, melon, and lower jaws of dolphins have been studied extensively with various methods, including radiography, chemical analysis, and imaging of dead specimens. Here we report the first structural and functional imaging of live dolphins. Two animals were imaged, one male and one female. Computed tomography (CT) revealed extensive air cavities posterior and medial to the ear, as well as between the ear and the sound-producing nasal structures. Single photon emission computed tomography (SPECT), employing 50 mCi of the intravenously injected ligand technetium [Tc-99m] biscisate (Neurolite), revealed extensive uptake in the core of the melon as well as near the pan bone area of the lower jaw. Count density on SPECT images was four times greater in the melon than in the surrounding tissue and blubber layer, suggesting that the melon is an active rather than a passive tissue. Since the dolphin temporal bone is not attached to the skull except by fibrous suspensions, the air cavities medial and posterior to the ear, as well as the abutment of the temporal bone to the acoustic fat bodies of each lower jaw, should be considered in modeling the mechanism of sound transmission from the environment to the dolphin ear.

  18. High-Throughput Computational Design of Advanced Functional Materials: Topological Insulators and Two-Dimensional Electron Gas Systems

    NASA Astrophysics Data System (ADS)

    Yang, Kesong

    As a rapidly growing area of materials science, high-throughput (HT) computational materials design is playing a crucial role in accelerating the discovery and development of novel functional materials. In this presentation, I will first introduce the strategy of HT computational materials design, and take the HT discovery of topological insulators (TIs) as a practical example to illustrate the use of such an approach. Topological insulators are one of the most studied classes of novel materials because of their great potential for applications ranging from spintronics to quantum computers. Here I will show that, by defining a reliable and accessible descriptor, which represents the topological robustness or feasibility of the candidate, and by searching the quantum materials repository aflowlib.org, we have automatically discovered 28 TIs (some of them already known) in five different symmetry families. Next, I will talk about our recent research work on the HT computational design of perovskite-based two-dimensional electron gas (2DEG) systems. The 2DEG formed at a perovskite oxide heterostructure (HS) has potential applications in next-generation nanoelectronic devices. In order to achieve practical implementation of the 2DEG in device design, desired physical properties such as high charge carrier density and mobility are necessary. Here I show that, using the same strategy as for the HT discovery of TIs, and by introducing a series of combinatorial descriptors, we have successfully identified a series of candidate 2DEG systems based on the perovskite oxides. This work provides another example of applying the HT computational design approach to the discovery of advanced functional materials.

  19. A Computation of the Frequency Dependent Dielectric Function for Energetic Materials

    NASA Astrophysics Data System (ADS)

    Zwitter, D. E.; Kuklja, M. M.; Kunz, A. B.

    1999-06-01

    The imaginary part of the dielectric function as a function of frequency is calculated for the solids RDX, TATB, ADN, and PETN. Calculations have been performed including the effects of isotropic and uniaxial pressure. Simple lattice defects are included in some of the calculations.

  20. Computer analysis of protein functional sites projection on exon structure of genes in Metazoa

    PubMed Central

    2015-01-01

    Background: Study of the relationship between the structural and functional organization of proteins and their coding genes is necessary for understanding the evolution of molecular systems, and can provide new knowledge for many applications aimed at designing proteins with improved medical and biological properties. It is well known that the functional properties of proteins are determined by their functional sites. Functional sites are usually represented by a small number of amino acid residues that are distantly located from each other in the amino acid sequence. They are highly conserved within their functional group and vary significantly in structure between such groups. Given these facts, analysis of the general properties of the structural organization of functional sites, both at the protein level and at the level of the exon-intron structure of the coding gene, remains an open problem. Results: One approach to this analysis is the projection of the amino acid residue positions of the functional sites, along with the exon boundaries, onto the gene structure. In this paper, we examined the discontinuity of the functional sites in the exon-intron structure of genes and the distribution of lengths and phases of the exons encoding functional sites in vertebrate genes. We have shown that the DNA fragments coding the functional sites tend to lie in the same exon or in closely spaced exons. This observed tendency of the exons that code functional sites to cluster suggests that such clusters could be considered units of protein evolution. We studied the characteristics of the boundaries of exons that do and do not code functional sites in 11 Metazoa species. This is accompanied by a reduced frequency of intercodon gaps (phase 0) in exons encoding functional-site residues, which may be evidence of evolutionary limitations on exon shuffling. Conclusions: These results characterize the features of the coding exon-intron structure that affect the

  1. Cosmic Reionization on Computers: The Faint End of the Galaxy Luminosity Function

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.

    2016-07-01

    Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at z ≳ 6. A commonly used Schechter function approximation with the magnitude cut at M_cut ∼ -13 provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut M_cut is found to vary between -12 and -14 with a mild redshift dependence. An analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
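    For reference, a Schechter luminosity function in absolute magnitude with an abrupt faint-end cut can be evaluated as in the sketch below; the parameter values are placeholders, not the fitted values from the simulations.

    ```python
    import numpy as np

    def schechter_mag(M, M_star=-20.0, phi_star=1e-3, alpha=-2.0, M_cut=-13.0):
        """Schechter UV luminosity function in absolute magnitude with an
        abrupt faint-end cut at M_cut (placeholder parameter values)."""
        x = 10.0 ** (0.4 * (M_star - M))                 # L / L*
        phi = 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)
        return np.where(M <= M_cut, phi, 0.0)            # no galaxies fainter than M_cut

    M = np.linspace(-22.0, -10.0, 25)
    print(schechter_mag(M))
    ```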

  2. Cosmic reionization on computers: The faint end of the galaxy luminosity function

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-07-01

    Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at z ≳ 6. A commonly used Schechter function approximation with the magnitude cut at M_cut ∼ -13 provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut M_cut is found to vary between -12 and -14 with a mild redshift dependence. Here, an analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.

  3. Towards a fully automated computation of RG functions for the three-dimensional O(N) vector model: parametrizing amplitudes

    NASA Astrophysics Data System (ADS)

    Guida, Riccardo; Ribeca, Paolo

    2006-02-01

    Within the framework of the field-theoretical description of second-order phase transitions via the three-dimensional O(N) vector model, accurate predictions for critical exponents can be obtained from (resummation of) the perturbative series of renormalization-group functions, which are in turn derived, following Parisi's approach, from the expansions of appropriate field correlators evaluated at zero external momenta. Such a technique was fully exploited 30 years ago in two seminal works of Baker, Nickel, Green and Meiron, which led to the knowledge of the β-function up to the six-loop level; they succeeded in obtaining a precise numerical evaluation of all needed Feynman amplitudes in momentum space by lowering the dimensionality of each integration with a cleverly arranged set of computational simplifications. Extending this computation is not straightforward, due both to the factorial proliferation of relevant diagrams and to the increasing dimensionality of their associated integrals; in any case, this task can reasonably be carried out only within an automated environment. On the road towards the creation of such an environment, we here show how a strategy closely inspired by that of Nickel and co-workers can be stated in algorithmic form, and successfully implemented on a computer. As an application, we plot the minimized distributions of residual integrations for the sets of diagrams needed to obtain RG functions to the full seven-loop level; they represent a good estimate of the computational effort which will be required to improve the currently available estimates of critical exponents.

  4. Computing diffuse reflection from particulate planetary surface with a new function.

    PubMed

    Wolff, M

    1981-07-15

    An equation is derived to compute the amount of diffuse light reflected by a particulate surface such as on Mars or an asteroid. The method traces the paths of rays within an ensemble of randomly shaped grains and finds the eventual probability of emission. The amount of diffuse, unpolarized emitted light is obtained in terms of the real index of refraction, the imaginary index, and the average diameter of particles making up the surface. The equation is used to compute the empirical rule for obtaining the planetary albedo from the slope of its polarization curve. Accuracy of the equation, estimated at +/-4%, seems justified because of quantitative agreement with experimental measures of the empirical rule. It is also shown that the equation can be applied to bubble-enclosing surfaces such as volcanic foams. Results for the indices of the moon, Mars, Io, and Europa are obtained and compared with other data.

  5. Using Speech Recognition to Enhance the Tongue Drive System Functionality in Computer Access

    PubMed Central

    Huo, Xueliang; Ghovanloo, Maysam

    2013-01-01

    Tongue Drive System (TDS) is a wireless tongue-operated assistive technology (AT) that can enable people with severe physical disabilities to access computers and drive powered wheelchairs using their volitional tongue movements. TDS offers six discrete commands, simultaneously available to the users, for pointing and typing as a substitute for mouse and keyboard in computer access, respectively. To enhance the TDS performance in typing, we have added a microphone, an audio codec, and a wireless audio link to its readily available 3-axial magnetic sensor array, and combined it with commercially available speech recognition software, Dragon NaturallySpeaking, which is regarded as one of the most efficient tools for text entry. Our preliminary evaluations indicate that the combined TDS and speech recognition technologies can provide end users with significantly higher performance than using each technology alone, particularly in completing tasks that require both pointing and text entry, such as web surfing. PMID:22255801

  6. Substrate tunnels in enzymes: structure-function relationships and computational methodology.

    PubMed

    Kingsley, Laura J; Lill, Markus A

    2015-04-01

    In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein, and, in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years because of implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with buried active sites and to provide a brief summary of the computational tools used to identify and evaluate these tunnels.

  7. Meta-Analysis of Diagnostic Performance of Coronary Computed Tomography Angiography, Computed Tomography Perfusion, and Computed Tomography-Fractional Flow Reserve in Functional Myocardial Ischemia Assessment Versus Invasive Fractional Flow Reserve.

    PubMed

    Gonzalez, Jorge A; Lipinski, Michael J; Flors, Lucia; Shaw, Peter W; Kramer, Christopher M; Salerno, Michael

    2015-11-01

    We sought to compare the diagnostic performance of coronary computed tomography angiography (CCTA), computed tomography perfusion (CTP), and computed tomography (CT)-fractional flow reserve (FFR) for assessing the functional significance of coronary stenosis as defined by invasive FFR in patients with known or suspected coronary artery disease (CAD). CCTA has proved clinically useful for excluding obstructive CAD because of its high sensitivity and negative predictive value (NPV); however, the ability of CCTA to identify functionally significant CAD has remained challenging. We searched PubMed/Medline for studies evaluating CCTA, CTP, or CT-FFR for the noninvasive detection of obstructive CAD compared with catheter-derived FFR as the reference standard. Pooled sensitivity, specificity, positive predictive value (PPV), NPV, likelihood ratios, and odds ratio of all diagnostic tests were assessed. Eighteen studies involving a total of 1,535 patients were included. CCTA demonstrated a pooled sensitivity of 0.92, specificity of 0.43, PPV of 0.56, and NPV of 0.87 on a per-patient level. CT-FFR and CTP increased the specificity to 0.72 and 0.77, respectively (p = 0.004 and p = 0.0009), resulting in higher point estimates for PPV of 0.70 and 0.83, respectively. There was no improvement in sensitivity. The CTP protocol involved more radiation (3.5 mSv for CCTA vs 9.6 mSv for CTP) and a higher volume of iodinated contrast (145 ml). In conclusion, CTP and CT-FFR improve the specificity of CCTA for detecting functionally significant stenosis as defined by invasive FFR on a per-patient level; both techniques could advance the ability to noninvasively detect the functional significance of coronary lesions.
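    The pooled per-patient metrics reported here all derive from a 2x2 table against the invasive-FFR reference standard; a minimal sketch (with hypothetical counts, not the meta-analysis data) is:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Per-patient diagnostic metrics from a 2x2 table against the
        reference standard (here, invasive FFR)."""
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        ppv, npv = tp / (tp + fp), tn / (tn + fn)
        lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
        lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
        return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
                "NPV": npv, "LR+": lr_pos, "LR-": lr_neg,
                "diagnostic_OR": lr_pos / lr_neg}

    # Hypothetical counts, not the pooled data from this meta-analysis:
    print(diagnostic_metrics(tp=92, fp=57, fn=8, tn=43))
    ```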

  8. Noncovalent functionalization of single-walled carbon nanotubes by aromatic diisocyanate molecules: A computational study

    NASA Astrophysics Data System (ADS)

    Goclon, Jakub; Kozlowska, Mariana; Rodziewicz, Pawel

    2014-04-01

    We investigate the noncovalent functionalization of metallic single-walled carbon nanotubes (SWCNT) (6,0) by 4,4′-methylene diphenyl diisocyanate (MDI) and toluene-2,4-diisocyanate (TDI) molecules, using the density functional theory (DFT) method with a van der Waals dispersion correction. The local minima obtained show how the binding energies depend on the molecular arrangement of the adsorbates on the SWCNT surface. We analyze the interplay between the π-π stacking interactions and the isocyanate functional groups. To analyze the changes in the electronic structure, we calculate the density of states (DOS) and charge density plots.

  9. Use of 4-Dimensional Computed Tomography-Based Ventilation Imaging to Correlate Lung Dose and Function With Clinical Outcomes

    SciTech Connect

    Vinogradskiy, Yevgeniy; Castillo, Richard; Castillo, Edward; Tucker, Susan L.; Liao, Zhongxing; Guerrero, Thomas; Martel, Mary K.

    2013-06-01

    Purpose: Four-dimensional computed tomography (4DCT)-based ventilation is an emerging imaging modality that can be used in the thoracic treatment planning process. The clinical benefit of using ventilation images in radiation treatment plans remains to be tested. The purpose of the current work was to test the potential benefit of using ventilation in treatment planning by evaluating whether dose to highly ventilated regions of the lung resulted in increased incidence of clinical toxicity. Methods and Materials: Pretreatment 4DCT data were used to compute pretreatment ventilation images for 96 lung cancer patients. Ventilation images were calculated using 4DCT data, deformable image registration, and a density-change based algorithm. Dose–volume and ventilation-based dose–function metrics were computed for each patient. The ability of the dose–volume and ventilation-based dose–function metrics to predict for severe (grade 3+) radiation pneumonitis was assessed using logistic regression analysis, area under the curve (AUC) metrics, and bootstrap methods. Results: A specific patient example is presented that demonstrates how incorporating ventilation-based functional information can help separate patients with and without toxicity. The logistic regression significance values were all lower for the dose–function metrics (range, P=.093-.250) than for their dose–volume equivalents (range, P=.331-.580). The AUC values were all greater for the dose–function metrics (range, 0.569-0.620) than for their dose–volume equivalents (range, 0.500-0.544). Bootstrap results revealed an improvement in model fit using dose–function metrics compared to dose–volume metrics that approached significance (range, P=.118-.155). Conclusions: To our knowledge, this is the first study that attempts to correlate lung dose and 4DCT ventilation-based function to thoracic toxicity after radiation therapy. Although the results were not significant at the .05 level, our data suggests

  10. Explicit Hilbert-space representations of atomic and molecular photoabsorption spectra - Computational studies of Stieltjes-Tchebycheff functions

    NASA Technical Reports Server (NTRS)

    Hermann, M. R.; Langhoff, P. W.

    1983-01-01

    Computational methods are reported for the construction of discrete and continuum Schroedinger states in atoms and molecules, employing explicit Hilbert-space procedures familiar from bound-state studies. As a theoretical development, the Schroedinger problem of interest is described, the Cauchy-Lanczos bases and orthonormal polynomials used in constructing L-squared Stieltjes-Tchebycheff (ST) approximations to the discrete and continuum states are defined, and certain properties of these functions are indicated. Advantages and limitations of the ST approach to spectral studies relative to more conventional calculations are discussed, and aspects of the approach in single-channel approximations to larger molecules are described. Procedures are indicated for the construction of photoejection anisotropies and for performing coupled-channel calculations employing the ST formalism. Finally, explicit descriptive intercomparisons are made of the nature and diagnostic value of ST functions with more conventional scattering functions.

  11. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations of a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  12. Parallel computers

    SciTech Connect

    Treleaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  13. When can Empirical Green Functions be computed from Noise Cross-Correlations? Hints from different Geographical and Tectonic environments

    NASA Astrophysics Data System (ADS)

    Matos, Catarina; Silveira, Graça; Custódio, Susana; Domingues, Ana; Dias, Nuno; Fonseca, João F. B.; Matias, Luís; Krueger, Frank; Carrilho, Fernando

    2014-05-01

    Noise cross-correlations are now widely used to extract Green functions between station pairs. But do all the routinely computed cross-correlations produce successful Green functions? What is the relationship between the noise recorded at a pair of stations and the cross-correlation between them? During the last decade, we have been involved in the deployment of several temporary dense broadband (BB) networks within the scope of both national projects and international collaborations. From 2000 to 2002, a pool of 8 BB stations continuously operated in the Azores in the scope of the Memorandum of Understanding COSEA (COordinated Seismic Experiment in the Azores). Thanks to the Project WILAS (West Iberia Lithosphere and Astenosphere Structure, PTDC/CTE-GIX/097946/2008) we temporarily increased the number of BB stations deployed in mainland Portugal to more than 50 (permanent + temporary) during the period 2010 - 2012. In 2011/12 a temporary pool of 12 seismometers continuously recorded BB data in the Madeira archipelago, as part of the DOCTAR (Deep Ocean Test Array Experiment) project. Project CV-PLUME (Investigation on the geometry and deep signature of the Cape Verde mantle plume, PTDC/CTE-GIN/64330/2006) covered the archipelago of Cape Verde, North Atlantic, with 40 temporary BB stations in 2007/08. Project MOZART (Mozambique African Rift Tomography, PTDC/CTE-GIX/103249/2008) covered Mozambique, East Africa, with 30 temporary BB stations in the period 2011 - 2013. These networks, located in very distinct geographical and tectonic environments, offer an interesting opportunity to study seasonal and spatial variations of noise sources and their impact on Empirical Green functions computed from noise cross-correlation. Seismic noise recorded at different seismic stations is evaluated by computation of the probability density functions of power spectral density (PSD) of continuous data. To assess seasonal variations of ambient noise sources in frequency content, time-series of
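    As background for how such Empirical Green functions are obtained, the sketch below shows a bare-bones daily noise cross-correlation between two station records. The preprocessing choices and names are illustrative (one-bit normalization and FFT-based correlation are common, but not the only options), and in practice many daily correlations would be stacked.

    ```python
    import numpy as np

    def daily_cross_correlation(u1, u2, max_lag):
        """Cross-correlate one day of noise from two stations.

        u1, u2  : equal-length 1-D traces (already detrended/bandpassed)
        max_lag : number of samples to keep on each side of zero lag
        """
        u1 = np.sign(u1 - u1.mean())                 # one-bit normalization
        u2 = np.sign(u2 - u2.mean())
        nfft = 1 << (2 * len(u1) - 1).bit_length()   # zero-pad: linear, not circular
        s1, s2 = np.fft.rfft(u1, nfft), np.fft.rfft(u2, nfft)
        cc = np.fft.irfft(s1 * np.conj(s2), nfft)
        # Reorder so lags run from -max_lag to +max_lag:
        return np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])
    ```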

  14. A first principle approach using Maximally Localized Wannier Functions for computing and understanding elasto-optic response

    NASA Astrophysics Data System (ADS)

    Liang, Xin; Ismail-Beigi, Sohrab

    Strain-induced changes of optical properties are of use in the design and functioning of devices that couple photons and phonons. The elasto-optic (or photo-elastic) effect describes a general materials property whereby strain induces a change in the dielectric tensor. Despite a number of experimental and computational works, it is fair to say that a basic physical understanding of the effect and its materials dependence is lacking: e.g., we know of no materials design rule for enhancing or suppressing elasto-optic response. Based on our previous work, we find that a real-space representation, as opposed to a k-space description, is a promising way to understand this effect. We have completed the development of a method for computing the dielectric and elasto-optic tensors using Maximally Localized Wannier Functions (MLWFs). By analyzing responses to uniaxial strain, we find that both tensors respond in a localized manner to the perturbation: the dominant optical transitions are between local electronic states on nearby bonds. We describe the method, the resulting physical picture, and computed results for semiconductors. This work is supported by the National Science Foundation through Grant NSF DMR-1104974.

  15. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra.

    PubMed

    Goings, Joshua J; Li, Xiaosong

    2016-06-21

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have rotatory strengths of different sign, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positively additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for the computation of chiroptical properties well into the X-ray regime.
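    The core real-time idea, one propagation yielding the whole spectrum, can be illustrated on a toy model. This is a stand-in for the Kohn-Sham system, not the authors' implementation; for ECD one would Fourier-transform the rotatory response rather than the electric dipole alone.

    ```python
    import numpy as np

    E = np.array([0.0, 0.3, 0.8])               # model eigenenergies (a.u.)
    mu = np.array([[0.0, 1.0, 0.5],
                   [1.0, 0.0, 0.0],
                   [0.5, 0.0, 0.0]])            # model dipole matrix

    dt, nsteps, kick, damp = 0.1, 8192, 1e-3, 0.005
    psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
    psi0 = psi0 + 1j * kick * (mu @ psi0)       # weak delta kick at t = 0

    t = np.arange(nsteps) * dt
    d = np.empty(nsteps)
    for i, ti in enumerate(t):
        p = np.exp(-1j * E * ti) * psi0         # exact propagation (diagonal H)
        d[i] = np.vdot(p, mu @ p).real          # induced dipole signal

    signal = (d - d[0]) * np.exp(-damp * t)     # subtract static part, damp
    spec = np.abs(np.fft.rfft(signal))
    omega = 2.0 * np.pi * np.fft.rfftfreq(nsteps, dt)
    print(omega[spec.argmax()])                 # both transitions (0.3, 0.8) appear in one run
    ```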

  16. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    NASA Astrophysics Data System (ADS)

    Goings, Joshua J.; Li, Xiaosong

    2016-06-01

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have rotatory strengths of different sign, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra, where transitions are always positively additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for the computation of chiroptical properties well into the X-ray regime.

  17. [Covalent chloramine inhibitors of blood platelet functions: computational indices for their reactivity and antiplatelet activity].

    PubMed

    Roshchupkin, D I; Murina, M A; Sergienko, V I

    2011-01-01

    Quantum-mechanical computations of the reactivities of chloramine derivatives of amino acids and taurine have been performed. A pair of computational indices that reflect the predisposition of alpha-amino acid chloramines to chemical decay has been identified. One of the indices is the dihedral angle for the chain of four atoms: the carbons at the beta- and alpha-positions, the carbon of the carboxyl group, and the carbonyl oxygen. The second index is the sum of the partial charges of the three or two carbon atoms in the chain. Amino acid chloramines with high values of these indices showed enhanced stability. Partial charges on the active chlorine in known chloramines of different structures have been computed. The charges correlate with the rate constants of the reaction between the chloramines and the thiol group of reduced glutathione. New derivatives of taurine chloramines have been constructed via the introduction of different substituents into the chloramine part. Among them, the amido derivatives had the greatest charges on the active chlorine (0.19-0.23). In a study of the reactions of N-acetyl-N-chlorotaurine and N-propionyl-N-chlorotaurine with amino acids and peptides possessing thiol, thioester, or disulphide groups, the amido derivatives manifested thiol chemoselectivity. N-Acetyl-N-chlorotaurine and N-propionyl-N-chlorotaurine suppress the aggregation activity of blood platelets under activation by the agonists ADP and collagen. It is possible that the amido derivatives studied prevent platelet aggregation by modifying a critical thiol group in the purine receptor P2Y12. PMID:22117450

  18. Using brain–computer interfaces to induce neural plasticity and restore function

    PubMed Central

    Grosse-Wentrup, Moritz; Mattia, Donatella; Oweiss, Karim

    2015-01-01

    Analyzing neural signals and providing feedback in real time is one of the core characteristics of a brain–computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI technology for therapeutic purposes is increasingly gaining popularity in the BCI community. In this paper, we discuss the state of the art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. We conclude with a list of open questions and recommendations for future research in this field. PMID:21436534

  19. Using brain-computer interfaces to induce neural plasticity and restore function

    NASA Astrophysics Data System (ADS)

    Grosse-Wentrup, Moritz; Mattia, Donatella; Oweiss, Karim

    2011-04-01

    Analyzing neural signals and providing feedback in real time is one of the core characteristics of a brain-computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI technology for therapeutic purposes is increasingly gaining popularity in the BCI community. In this paper, we discuss the state of the art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. We conclude with a list of open questions and recommendations for future research in this field.

  20. Accuracy and computational efficiency of real-time subspace propagation schemes for the time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman

    2016-05-01

    Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
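    A minimal sketch of the subspace idea: a generic Lanczos (Krylov) step for exp(-iH dt) applied to a model Hermitian matrix, not the paper's TDDFT code. Only a small tridiagonal matrix is exponentiated, which is what permits larger time steps than a low-order Taylor expansion at comparable cost.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def lanczos_step(H, psi, dt, m=12):
        """Advance psi by exp(-1j*H*dt) in an m-dimensional Krylov
        (Lanczos) subspace; only the small tridiagonal T is exponentiated."""
        n = psi.size
        V = np.zeros((n, m), dtype=complex)
        T = np.zeros((m, m))
        beta0 = np.linalg.norm(psi)
        V[:, 0] = psi / beta0
        for j in range(m):
            w = H @ V[:, j]
            T[j, j] = np.vdot(V[:, j], w).real
            w = w - T[j, j] * V[:, j]
            if j > 0:
                w = w - T[j, j - 1] * V[:, j - 1]
            if j + 1 < m:
                b = np.linalg.norm(w)
                if b < 1e-12:              # invariant subspace found: done early
                    break
                T[j + 1, j] = T[j, j + 1] = b
                V[:, j + 1] = w / b
        return beta0 * (V @ expm(-1j * dt * T)[:, 0])

    # Compare against the exact propagator on a random Hermitian matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    H = (A + A.T) / (2.0 * np.sqrt(200.0))  # spectrum roughly in [-2, 2]
    psi = rng.standard_normal(200).astype(complex)
    psi /= np.linalg.norm(psi)
    err = np.linalg.norm(lanczos_step(H, psi, 0.5) - expm(-1j * 0.5 * H) @ psi)
    print(err)                              # near machine precision
    ```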

  1. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids.

    PubMed

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
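    A minimal sketch of the offline-training/online-evaluation pattern described here; the state and co-state data are random stand-ins (in the paper they would come from solved TPBVPs), and the class and parameter names are illustrative.

    ```python
    import numpy as np

    class RBFNet:
        """Gaussian RBF network: offline least-squares fit, cheap online
        evaluation as a matrix-vector product."""
        def __init__(self, centers, sigma):
            self.c, self.s = centers, sigma
        def _phi(self, X):
            d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * self.s**2))
        def fit(self, X, Y):                      # offline training
            self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
            return self
        def predict(self, X):                     # fast online guidance query
            return self._phi(X) @ self.W

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, (500, 6))          # sampled initial states
    Y = np.tanh(X @ rng.standard_normal((6, 6)))  # stand-in for initial co-states
    net = RBFNet(centers=X[::10], sigma=0.5).fit(X, Y)
    lam0 = net.predict(X[:1])                     # co-state guess for one state
    ```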

  2. Reverse energy partitioning-An efficient algorithm for computing the density of states, partition functions, and free energy of solids.

    PubMed

    Do, Hainam; Wheatley, Richard J

    2016-08-28

    A robust and model free Monte Carlo simulation method is proposed to address the challenge in computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing energy direction ("upward") into subdivisions whose integrated density of states is known. When combined with the density of states computed from the "downward" energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated in the context of the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known. PMID:27586913
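    Once the density of states is available, equilibrium properties at any temperature follow by quadrature. A minimal sketch with a toy density of states (this is the downstream evaluation step, not the paper's algorithm, which is about constructing g(E) itself):

    ```python
    import numpy as np

    def trap(y, x):
        """Trapezoidal quadrature."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def thermo_from_dos(E, g, T, kB=1.0):
        """Partition function, mean energy, and free energy at temperature T
        from a tabulated density of states g(E)."""
        beta = 1.0 / (kB * T)
        w = g * np.exp(-beta * (E - E[0]))     # shift by E_min to avoid overflow
        Z_shifted = trap(w, E)
        U = trap(E * w, E) / Z_shifted         # mean energy <E>
        F = E[0] - kB * T * np.log(Z_shifted)  # the energy shift cancels here
        return Z_shifted, U, F

    E = np.linspace(0.0, 10.0, 2001)
    g = E**2                                   # toy density of states
    print(thermo_from_dos(E, g, T=2.0))
    ```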

  3. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids.

    PubMed

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm.

  4. Reverse energy partitioning—An efficient algorithm for computing the density of states, partition functions, and free energy of solids

    NASA Astrophysics Data System (ADS)

    Do, Hainam; Wheatley, Richard J.

    2016-08-01

    A robust and model free Monte Carlo simulation method is proposed to address the challenge in computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing energy direction ("upward") into subdivisions whose integrated density of states is known. When combined with the density of states computed from the "downward" energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated in the context of the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known.

  5. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382

  6. Single photon emission computed tomography of the heart: a functional image

    SciTech Connect

    Itti, R.; Casset, D.; Philippe, L.; Brochier, M.

    1987-01-01

    Images of radioactive tracer uptake are mainly functional images, since the tracer distribution may be directly related to regional variations in function, such as myocardial perfusion in the case of thallium-201 single photon tomography. Combining pictures obtained under different physiological conditions (stress-rest, for instance) enhances the functional aspects of these studies. For gated cardiac blood pool images, on the contrary, labelling of the circulating blood pool using technetium-99m provides morphological pictures of the heart chambers, and function can only be derived from the dynamic analysis of the image sequence recorded at the successive phases of the cardiac cycle. The technique of thick-slice tomography preserves the relationship between count rates and local volumes of radioactive blood. Parametric imaging therefore applies to tomography as well as to plane projections. In the simplest case, reconstruction of the extreme phases of the heartbeat, end-diastole and end-systole, may be sufficient. But to achieve more sophisticated functional analysis, such as Fourier phase mapping, reconstruction of the whole cardiac cycle is necessary.

  7. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions, excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity.
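    A minimal sketch of the approach, assuming scikit-learn's LedoitWolf estimator (the data here are random stand-ins for BOLD time series): the shrunken covariance is inverted to a precision matrix P, from which partial correlations follow as rho_ij = -P_ij / sqrt(P_ii * P_jj).

    ```python
    import numpy as np
    from sklearn.covariance import LedoitWolf

    def partial_correlation(ts):
        """Partial correlations between regions from BOLD time series
        (shape: time points x regions). Ledoit-Wolf shrinkage keeps the
        covariance invertible even when regions outnumber time points."""
        prec = LedoitWolf().fit(ts).precision_
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)
        np.fill_diagonal(pcorr, 1.0)
        return pcorr

    rng = np.random.default_rng(0)
    ts = rng.standard_normal((150, 300))    # fewer time points than regions
    print(partial_correlation(ts).shape)    # (300, 300)
    ```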

  8. The Environmental Impacts of a Desktop Computer: Influence of Choice of Functional Unit, System Boundary and User Behaviour

    NASA Astrophysics Data System (ADS)

    Simanovska, J.; Šteina, Māra; Valters, K.; Bažbauers, G.

    2009-01-01

    In environmental policy, pollution prevention during the design phase of products and processes is gaining importance over the historically older principle of end-of-pipe pollution reduction. This approach requires predicting the potential environmental impacts to be avoided or reduced and prioritising the areas where action is most efficient. Currently the most appropriate method for this purpose is life cycle assessment (LCA) - a method for accounting and attributing all environmental impacts that arise during the lifetime of a product, starting with the production of raw materials and ending with the disposal or recycling of the wasted product at the end of life. The LCA, however, can be misleading if those performing the study disregard information gaps and the limitations of the chosen methodology. In this study we researched the environmental impact of desktop computers, using a simplified LCA method - Indicators' 99 - and by developing various scenarios (changing service life, user behaviour, energy supply, etc.). The study demonstrates that opportunities for improvement lie in very different areas. The study also concludes that the approach to defining the functional unit must be sufficiently flexible in order to avoid discounting areas of potential action. Therefore, with regard to computers, we agree with other authors in using the functional unit "one computer", but suggest not binding it to service life or usage time, and instead developing several scenarios that vary these parameters. The study also demonstrates the importance of a systemic approach when assessing complex product systems: the more complex the system, the broader the scope for potential action. We conclude that, for computers, which are energy-using and material-intensive products, the measures to reduce environmental impacts lie not only with the producer and user of the particular product, but also with the whole national energy supply and waste management.

  9. An evolutionary computational theory of prefrontal executive function in decision-making

    PubMed Central

    Koechlin, Etienne

    2014-01-01

    The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans yield to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour. PMID:25267817

  10. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.

  11. An evolutionary computational theory of prefrontal executive function in decision-making.

    PubMed

    Koechlin, Etienne

    2014-11-01

    The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans yield to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour.

  12. Studying the Chemistry of Cationized Triacylglycerols Using Electrospray Ionization Mass Spectrometry and Density Functional Theory Computations

    NASA Astrophysics Data System (ADS)

    Grossert, J. Stuart; Herrera, Lisandra Cubero; Ramaley, Louis; Melanson, Jeremy E.

    2014-08-01

    Analysis of triacylglycerols (TAGs), found as complex mixtures in living organisms, is typically accomplished using liquid chromatography, often coupled to mass spectrometry. TAGs, weak bases not protonated using electrospray ionization, are usually ionized by adduct formation with a cation, including those present in the solvent (e.g., Na+). There are relatively few reports on the binding of TAGs with cations or on the mechanisms by which cationized TAGs fragment. This work examines binding efficiencies, determined by mass spectrometry and computations, for the complexation of TAGs to a range of cations (Na+, Li+, K+, Ag+, NH4+). While most cations bind to oxygen, Ag+ binding to unsaturation in the acid side chains is significant. The importance of dimer formation, [2TAG + M]+, was demonstrated using several different types of mass spectrometers. From breakdown curves, it became apparent that two or three acid side chains must be attached to glycerol for strong cationization. Possible mechanisms for fragmentation of lithiated TAGs were modeled by computations on tripropionylglycerol. Viable pathways were found for losses of neutral acids and lithium salts of acids from different positions on the glycerol moiety. Novel lactone structures were proposed for the loss of a neutral acid from one position of the glycerol moiety. These were studied further using triple-stage mass spectrometry (MS3). These lactones can account for all the major product ions in the MS3 spectra in both this work and the literature, which should allow for new insights into the challenging analytical methods needed for naturally occurring TAGs.
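
    As a rough companion to the adduct chemistry above (not taken from the paper), the expected m/z values of the cationized monomers [TAG + M]+ and dimers [2TAG + M]+ can be computed from monoisotopic masses; the sketch below does so for the model compound tripropionylglycerol, C12H20O6.

      # Back-of-envelope m/z values for cation adducts of tripropionylglycerol.
      ATOM = {"C": 12.0, "H": 1.007825, "O": 15.994915}
      CATION = {"Li+": 7.016004, "Na+": 22.989770, "K+": 38.963707,
                "Ag+": 106.905093, "NH4+": 18.034374}   # neutral-species masses
      ELECTRON = 0.000549

      def mono_mass(formula):
          return sum(ATOM[el] * n for el, n in formula.items())

      tag = mono_mass({"C": 12, "H": 20, "O": 6})        # tripropionylglycerol
      for name, m in CATION.items():
          print(f"{name:5s} [TAG+M]+ = {tag + m - ELECTRON:9.4f}"
                f"  [2TAG+M]+ = {2 * tag + m - ELECTRON:9.4f}")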

  13. Fast way to compute functional determinants of radially symmetric partial differential operators in general dimensions

    SciTech Connect

    Hur, Jin; Min, Hyunsoo

    2008-06-15

    Recently, the partial-wave cutoff method was developed as a new calculational scheme for functional determinants of quantum field theory in radial backgrounds. For the contribution given by an infinite sum of large partial waves, we derive explicit radial-WKB series in the angular-momentum cutoff for d=2, 3, 4, and 5 (d is the space-time dimension), which have uniform validity irrespective of the specific values assumed for other parameters. Utilizing this series, precision evaluation of the renormalized functional determinant is possible with a relatively small number of low partial-wave contributions determined separately. We illustrate the power of this scheme in a numerically exact evaluation of the prefactor (expressed as a functional determinant) in the case of the false vacuum decay of 4D scalar field theory.
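
    For context, the usual building block behind such low partial-wave evaluations is the Gel'fand-Yaglom result: for -d^2/dx^2 + V(x) on [0, L] with Dirichlet conditions, the determinant ratio det(-d^2/dx^2 + V)/det(-d^2/dx^2) equals u(L)/L, where u'' = Vu with u(0) = 0 and u'(0) = 1. The sketch below computes that ratio numerically and checks it against the exact constant-potential answer; it does not reproduce the paper's radial-WKB series or renormalization.

      import numpy as np
      from scipy.integrate import solve_ivp

      def det_ratio(V, L):
          """Gel'fand-Yaglom: integrate u'' = V(x) u with u(0)=0, u'(0)=1."""
          sol = solve_ivp(lambda x, y: [y[1], V(x) * y[0]], [0.0, L],
                          [0.0, 1.0], rtol=1e-10, atol=1e-12)
          return sol.y[0, -1] / L        # free Dirichlet solution is u0(x) = x

      # Constant potential V = m^2: exact ratio is sinh(mL)/(mL).
      m, L = 2.0, 1.0
      print(det_ratio(lambda x: m**2, L), np.sinh(m * L) / (m * L))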

  14. Computational models of upper-limb motion during functional reaching tasks for application in FES-based stroke rehabilitation.

    PubMed

    Freeman, Chris; Exell, Tim; Meadmore, Katie; Hallewell, Emma; Hughes, Ann-Marie

    2015-06-01

    Functional electrical stimulation (FES) has been shown to be an effective approach to upper-limb stroke rehabilitation, where it is used to assist arm and shoulder motion. Model-based FES controllers have recently demonstrated significant potential to improve accuracy of functional reaching tasks, but they typically require a reference trajectory to track. Few upper-limb FES control schemes embed a computational model of the task; however, this is critical to ensure the controller reinforces the intended movement with high accuracy. This paper derives computational motor control models of functional tasks that can be directly embedded in real-time FES control schemes, removing the need for a predefined reference trajectory. Dynamic models of the electrically stimulated arm are first derived, and constrained optimisation problems are formulated to encapsulate common activities of daily living. These are solved using iterative algorithms, and results are compared with kinematic data from 12 subjects and found to fit closely (mean fitting between 63.2% and 84.0%). The optimisation is performed iteratively using kinematic variables and hence can be transformed into an iterative learning control algorithm by replacing simulation signals with experimental data. The approach is therefore capable of controlling FES in real time to assist tasks in a manner corresponding to unimpaired natural movement. By ensuring that assistance is aligned with voluntary intention, the controller hence maximises the potential effectiveness of future stroke rehabilitation trials.
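
    The transformation into iterative learning control mentioned above can be illustrated generically. The sketch below applies the standard first-order ILC update, u_{k+1} = u_k + L*e_k, to a toy linear plant standing in for the stimulated arm; the plant and learning gain are hypothetical, not the authors' controller.

      import numpy as np

      def simulate(u, a=0.8, b=0.5):
          """Toy first-order plant y[t+1] = a*y[t] + b*u[t]."""
          y = np.zeros(len(u) + 1)
          for t in range(len(u)):
              y[t + 1] = a * y[t] + b * u[t]
          return y[1:]

      T = 50
      reference = np.sin(np.linspace(0, np.pi, T))   # desired reaching profile
      u = np.zeros(T)
      for trial in range(30):                        # one update per trial
          u = u + 0.6 * (reference - simulate(u))    # hypothetical gain 0.6
      err = reference - simulate(u)
      print("final RMS tracking error:", np.sqrt(np.mean(err**2)))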

  15. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    PubMed

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images and then importing them into the three-dimensional solid modeling software SOLIDWORKS and the motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies, syndesmotic injury and repair and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular
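
    The linear-spring ligament representation used in these models is compact to state in code. The sketch below (with invented stiffness and geometry) returns the tension-only force such a spring contributes between its insertion points.

      import numpy as np

      def ligament_force(p_origin, p_insertion, rest_length, stiffness):
          """Tension-only linear spring: force on the origin point (N)."""
          d = np.asarray(p_insertion, float) - np.asarray(p_origin, float)
          length = np.linalg.norm(d)
          stretch = max(length - rest_length, 0.0)  # slack ligament bears no load
          return stiffness * stretch * d / length

      # ~2.3 mm stretch at 20 kN/m gives roughly 46 N along the ligament axis.
      print(ligament_force([0, 0, 0], [0.0, 0.012, 0.030],
                           rest_length=0.03, stiffness=2e4))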

  16. Toward reliable characterization of functional homogeneity in the human brain: preprocessing, scan duration, imaging resolution and computational space.

    PubMed

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F Xavier; Milham, Michael P

    2013-01-15

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test-retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo's TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497
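
    ReHo itself is Kendall's coefficient of concordance (KCC) computed over the time series of a voxel and its neighbours (Zang et al., 2004). A minimal sketch for a single neighbourhood, on synthetic data, follows.

      import numpy as np
      from scipy.stats import rankdata

      def reho(ts):
          """Kendall's W for ts of shape (K, n): K voxels, n time points."""
          K, n = ts.shape
          ranks = np.apply_along_axis(rankdata, 1, ts)  # rank each voxel in time
          R = ranks.sum(axis=0)                         # rank sums per time point
          S = ((R - K * (n + 1) / 2.0) ** 2).sum()
          return 12.0 * S / (K**2 * (n**3 - n))         # W lies in [0, 1]

      rng = np.random.default_rng(0)
      shared = rng.standard_normal(100)                      # common signal
      cube = shared + 0.5 * rng.standard_normal((27, 100))   # 3x3x3 neighbourhood
      print(reho(cube))   # nearer 1 the more homogeneous the neighbourhood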

  17. Toward reliable characterization of functional homogeneity in the human brain: Preprocessing, scan duration, imaging resolution and computational space

    PubMed Central

    Zuo, Xi-Nian; Xu, Ting; Jiang, Lili; Yang, Zhi; Cao, Xiao-Yan; He, Yong; Zang, Yu-Feng; Castellanos, F. Xavier; Milham, Michael P.

    2013-01-01

    While researchers have extensively characterized functional connectivity between brain regions, the characterization of functional homogeneity within a region of the brain connectome is in early stages of development. Several functional homogeneity measures were proposed previously, among which regional homogeneity (ReHo) was most widely used as a measure to characterize functional homogeneity of resting state fMRI (R-fMRI) signals within a small region (Zang et al., 2004). Despite a burgeoning literature on ReHo in the field of neuroimaging brain disorders, its test–retest (TRT) reliability remains unestablished. Using two sets of public R-fMRI TRT data, we systematically evaluated ReHo’s TRT reliability and further investigated the various factors influencing its reliability and found: 1) nuisance (head motion, white matter, and cerebrospinal fluid) correction of R-fMRI time series can significantly improve the TRT reliability of ReHo while additional removal of global brain signal reduces its reliability, 2) spatial smoothing of R-fMRI time series artificially enhances ReHo intensity and influences its reliability, 3) surface-based R-fMRI computation largely improves the TRT reliability of ReHo, 4) a scan duration of 5 min can achieve reliable estimates of ReHo, and 5) fast sampling rates of R-fMRI dramatically increase the reliability of ReHo. Inspired by these findings and seeking a highly reliable approach to exploratory analysis of the human functional connectome, we established an R-fMRI pipeline to conduct ReHo computations in both 3-dimensions (volume) and 2-dimensions (surface). PMID:23085497

  18. Computer Simulation for Calculating the Second-Order Correlation Function of Classical and Quantum Light

    ERIC Educational Resources Information Center

    Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.

    2011-01-01

    We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
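
    The record's description of the algorithm is truncated above, so the following is only one common way such a simulation can be set up: bunched (chaotic) light modelled as a complex Ornstein-Uhlenbeck field, which has a Lorentzian power spectrum, with g2(tau) estimated as <I(t)I(t+tau)>/<I>^2. For chaotic light, g2(0) should come out near 2 and decay to 1 over the coherence time.

      import numpy as np

      rng = np.random.default_rng(1)
      n, dt, gamma = 200_000, 0.01, 1.0       # samples, time step, linewidth
      E = np.zeros(n, dtype=complex)
      sigma = np.sqrt(gamma * dt)
      for t in range(1, n):                   # Ornstein-Uhlenbeck field update
          E[t] = E[t - 1] * (1 - gamma * dt) + sigma * complex(
              rng.standard_normal(), rng.standard_normal())
      I = np.abs(E) ** 2
      for lag in (0, 10, 100, 1000):          # lags in units of dt
          g2 = np.mean(I[:n - lag] * I[lag:]) / np.mean(I) ** 2
          print(f"g2(tau={lag * dt:5.2f}) = {g2:.3f}")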

  19. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Charon is a software toolkit that enables engineers to develop high-performing message-passing programs in a convenient and piecemeal fashion. Emphasis is on rapid program development and prototyping. In this report a detailed description of the functional design of the toolkit is presented. It is illustrated by the stepwise parallelization of two representative code examples.