Science.gov

Sample records for computing non-parametric function

  1. Parametric and Non-parametric methods for the periodogram analysis: Interrelations and properties of the test functions

    NASA Astrophysics Data System (ADS)

    Andronov, I. L.; Chinarova, L. L.

    Numerical comparison of the methods for periodogram analysis is carried out for the parametric modifications of the Fourier transform by Deeming T.J. (1975, Ap. Space Sci., 36, 137); Lomb N.R. (1976, Ap. Space Sci., 39, 447); Andronov I.L. (1994, Odessa Astron. Publ., 7, 49); parametric modifications based on spline approximations of different order n and defect k by Jurkevich I. (1971, Ap. Space Sci., 13, 154; n = 0, k = 1); Marraco H.G., Muzzio J.C. (1980, P.A.S.P., 92, 700; n = 1, k = 2); Andronov I.L. (1987, Contrib. Astron. Inst. Czechoslovak. 20, 161; n = 3, k = 1); and non-parametric modifications by Lafler J. and Kinman T.D. (1965, Ap.J.Suppl., 11, 216), Burke E.W., Rolland W.W. and Boy W.R. (1970, J.R.A.S.Canada, 64, 353), Deeming T.J. (1970, M.N.R.A.S., 147, 365), Renson P. (1978, As. Ap., 63, 125) and Dworetsky M.M. (1983, M.N.R.A.S., 203, 917). For some numerical models the values of the mean, variance, asymmetry and excess of the test functions are determined, and the correlations between them are discussed. Analytic estimates are derived for the mathematical expectation of the test function for the different methods, and for the dispersion of the test function of Lafler and Kinman (1965) and of the parametric test functions. The statistical distribution of the test functions computed for fixed data and various frequencies is significantly different from that computed for various data realizations. The histogram for the non-parametric test functions is nearly symmetric for normally distributed uncorrelated data and is characterized by a distinctly negative asymmetry for noisy data with periodic components. The non-parametric test functions may be subdivided into two groups - one similar to that of Lafler and Kinman (1965) and one similar to that of Deeming (1970). The correlation coefficients for the test functions within each group are close to unity for a large number of data points. Conditions for a significant influence of the phase difference between the data on the test functions are

  2. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of STP. PMID:18506609
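
    As a rough illustration of the non-parametric half of this approach, the sketch below estimates the kernels of a second-order discrete Volterra series by ordinary least squares from synthetic broadband input-output data. The toy synapse model, the memory length and all numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def volterra_design_matrix(x, memory):
    """Regressors for a second-order discrete Volterra series with the given memory."""
    n = len(x)
    # lagged copies of the input: column m holds x(t - m)
    lags = np.column_stack([np.concatenate([np.zeros(m), x[:n - m]]) for m in range(memory)])
    # second-order terms x(t - i) * x(t - j), i <= j
    quad = [lags[:, i] * lags[:, j] for i in range(memory) for j in range(i, memory)]
    return np.column_stack([np.ones(n), lags] + quad)

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                              # broadband "presynaptic" input
# toy synapse with a linear and a second-order term, plus a little noise
y = 0.8 * x + 0.4 * np.roll(x, 1) - 0.2 * x * np.roll(x, 1) + 0.05 * rng.standard_normal(2000)

memory = 5
X = volterra_design_matrix(x, memory)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)               # least-squares kernel estimates
k0, k1 = coef[0], coef[1:1 + memory]                       # zeroth- and first-order kernels
print(k1.round(2))                                         # roughly [0.8, 0.4, 0, 0, 0] for this toy
```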

  3. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
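
    The multinomial-weighting idea described above can be sketched in a few lines: one weight matrix, one pass of matrix multiplications, all bootstrap replications at once. The example below bootstraps Pearson's correlation in NumPy; it is a minimal sketch of the formulation, not the paper's R code, and the sample data are made up.

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, n_boot=2000, seed=0):
    """Bootstrap Pearson's correlation using multinomial weights instead of resampling rows."""
    n = len(x)
    rng = np.random.default_rng(seed)
    # each row of W holds multinomial counts summing to n: the bootstrap "weights"
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n
    mx, my = W @ x, W @ y                         # weighted first moments, all replications at once
    mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)
    cov = mxy - mx * my
    sx = np.sqrt(mxx - mx ** 2)
    sy = np.sqrt(myy - my ** 2)
    return cov / (sx * sy)                        # vector of n_boot bootstrap correlations

x = np.random.default_rng(1).normal(size=50)
y = 0.6 * x + np.random.default_rng(2).normal(size=50)
reps = vectorized_bootstrap_corr(x, y)
print(np.percentile(reps, [2.5, 97.5]))           # e.g. a percentile confidence interval
```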

  4. Scaling of preferential flow in biopores by parametric or non parametric transfer functions

    NASA Astrophysics Data System (ADS)

    Zehe, E.; Hartmann, N.; Klaus, J.; Palm, J.; Schroeder, B.

    2009-04-01

    finally assign the measured hydraulic capacities to these pores. By combining this population of macropores with observed data on soil hydraulic properties we obtain a virtual reality. Flow and transport are simulated for different rainfall forcings, comparing two models, Hydrus 3D and Catflow. The simulated cumulative travel depth distributions for different forcings will be linked to the cumulative depth distribution of connected flow paths. The latter describes the fraction of connected paths - those along which the flow resistance is always below a selected threshold - that link the surface to a certain critical depth. Systematic variation of the average number of macropores and their depth distributions will show whether a clear link between the simulated travel depth distributions and the depth distribution of connected paths can be identified. The third essential step is to derive a non-parametric transfer function that predicts travel depth distributions of tracers and, in the long term, of pesticides based on easy-to-assess subsurface characteristics (mainly the density and depth distribution of worm burrows, and soil matrix properties), initial conditions and rainfall forcing. Such a transfer function is independent of scale as long as we stay in the same ensemble, i.e. the worm population and soil properties stay the same. Shipitalo, M.J. and Butt, K.R. (1999): Occupancy and geometrical properties of Lumbricus terrestris L. burrows affecting infiltration. Pedobiologia 43:782-794. Zehe, E. and Fluehler, H. (2001b): Slope scale distribution of flow patterns in soil profiles. J. Hydrol. 247:116-132.

  5. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.

    PubMed

    Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego

    2015-10-01

    Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. This approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing wherein a Liquid State Machine is combined with a decoding Feed-Forward Neural Network. This splits the modeling into two parts: first a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained by the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g. by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can

  6. A non-parametric statistical test to compare clusters with applications in functional magnetic resonance imaging data.

    PubMed

    Fujita, André; Takahashi, Daniel Y; Patriota, Alexandre G; Sato, João R

    2014-12-10

    Statistical inference of functional magnetic resonance imaging (fMRI) data is an important tool in neuroscience investigation. One major hypothesis in neuroscience is that the presence or absence of a psychiatric disorder can be explained by differences in how neurons cluster in the brain. Therefore, it is of interest to verify whether the properties of the clusters change between groups of patients and controls. The usual method to show group differences in brain imaging is to carry out a voxel-wise univariate analysis for a difference between the mean group responses using an appropriate test and to assemble the resulting 'significantly different voxels' into clusters, testing again at cluster level. In this approach, of course, the primary voxel-level test is blind to any cluster structure. Direct assessments of differences between groups at the cluster level seem to be missing in brain imaging. For this reason, we introduce a novel non-parametric statistical test called analysis of cluster structure variability (ANOCVA), which statistically tests whether two or more populations are equally clustered. The proposed method allows us to compare the clustering structure of multiple groups simultaneously and also to identify features that contribute to the differential clustering. We illustrate the performance of ANOCVA through simulations and an application to an fMRI dataset composed of children with attention deficit hyperactivity disorder (ADHD) and controls. Results show that there are several differences in the clustering structure of the brain between them. Furthermore, we identify some brain regions not previously described as being involved in ADHD pathophysiology, generating new hypotheses to be tested. The proposed method is general enough to be applied to other types of datasets, not limited to fMRI, where comparison of clustering structures is of interest. PMID:25185759

  7. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  8. Bayesian non-parametrics and the probabilistic approach to modelling

    PubMed Central

    Ghahramani, Zoubin

    2013-01-01

    Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609

  9. Lottery spending: a non-parametric analysis.

    PubMed

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699

  10. Lottery Spending: A Non-Parametric Analysis

    PubMed Central

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales. PMID:25642699

  11. Non-parametric transformation for data correlation and integration: From theory to practice

    SciTech Connect

    Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon

    1997-08-01

    The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, as traditional multiple regression does. The transformations are very general, computationally efficient and can easily handle mixed data types, for example continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation has been illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.

  12. Non-parametric estimation of morphological lopsidedness

    NASA Astrophysics Data System (ADS)

    Giese, Nadine; van der Hulst, Thijs; Serra, Paolo; Oosterloo, Tom

    2016-09-01

    Asymmetries in the neutral hydrogen gas distribution and kinematics of galaxies are thought to be indicators for both gas accretion and gas removal processes. These are of fundamental importance for galaxy formation and evolution. Upcoming large blind H I surveys will provide tens of thousands of galaxies for a study of these asymmetries in a proper statistical way. Due to the large number of expected sources and the limited resolution of the majority of objects, detailed modelling is not feasible for most detections. We need fast, automatic and sensitive methods to classify these objects in an objective way. Existing non-parametric methods suffer from effects like the dependence on signal to noise, resolution and inclination. Here we show how to correctly take these effects into account and show ways to estimate the precision of the methods. We will use existing and modelled data to give an outlook on the performance expected for galaxies observed in the various sky surveys planned for e.g. WSRT/APERTIF and ASKAP.

  13. Non-parametric transient classification using adaptive wavelets

    NASA Astrophysics Data System (ADS)

    Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.

    2015-11-01

    Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier against the Supernova Photometric Classification Challenge to correctly classify supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.

  14. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians. PMID:19041946

  15. Non-parametric extraction of implied asset price distributions

    NASA Astrophysics Data System (ADS)

    Healy, Jerome V.; Dixon, Maurice; Read, Brian J.; Cai, Fang Fang

    2007-08-01

    We present a fully non-parametric method for extracting risk neutral densities (RNDs) from observed option prices. The aim is to obtain a continuous, smooth, monotonic, and convex pricing function that is twice differentiable. Thus, irregularities such as negative probabilities that afflict many existing RND estimation techniques are reduced. Our method employs neural networks to obtain a smoothed pricing function, and a central finite difference approximation to the second derivative to extract the required gradients. This novel technique was successfully applied to a large set of FTSE 100 daily European exercise (ESX) put options data and as an Ansatz to the corresponding set of American exercise (SEI) put options. The results of paired t-tests showed significant differences between RNDs extracted from ESX and SEI option data, reflecting the distorting impact of early exercise possibility for the latter. In particular, the results for skewness and kurtosis suggested different shapes for the RNDs implied by the two types of put options. However, both ESX and SEI data gave an unbiased estimate of the realised FTSE 100 closing prices on the options’ expiration date. We confirmed that estimates of volatility from the RNDs of both types of option were biased estimates of the realised volatility at expiration, but less so than the LIFFE tabulated at-the-money implied volatility.
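
    A minimal sketch of the extraction step described above: a smoothed pricing function is differentiated twice with respect to strike to recover the risk-neutral density (the Breeden-Litzenberger relation), using central finite differences. Here a smoothing spline stands in for the paper's neural-network pricing function, and the Black-Scholes put quotes are purely illustrative, not FTSE 100 data.

```python
import numpy as np
from scipy.stats import norm
from scipy.interpolate import UnivariateSpline

def bs_put(S, K, r, T, sigma):
    """Black-Scholes European put price (used only to create illustrative quotes)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def implied_density(strikes, prices, r, T, smoothing=0.0):
    """Risk-neutral density from the second strike-derivative of a smoothed pricing function."""
    price_fn = UnivariateSpline(strikes, prices, k=4, s=smoothing)   # stand-in for the neural net
    grid = np.linspace(strikes.min(), strikes.max(), 400)
    d2 = np.gradient(np.gradient(price_fn(grid), grid), grid)        # central finite differences
    return grid, np.exp(r * T) * d2                                  # Breeden-Litzenberger relation

S0, r, T, sigma = 6200.0, 0.04, 0.25, 0.18
K = np.linspace(5000, 7400, 49)
P = bs_put(S0, K, r, T, sigma)        # noise-free toy quotes; real quotes would need smoothing > 0
grid, rnd = implied_density(K, P, r, T)
print(np.trapz(rnd, grid))            # should be close to 1 over the covered strike range
```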

  16. Probabilistic streamflow forecasting for hydroelectricity production: A comparison of two non-parametric system identification algorithms

    NASA Astrophysics Data System (ADS)

    Pande, Saket; Sharma, Ashish

    2014-05-01

    This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. It at least requires the identification of significant predictors of runoff generation and the influence of each such significant predictor on runoff response. To this end, we compare two non-parametric algorithms of predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other algorithm is based on a frequentist approach that uses the bounds-on-probability-of-error concept of Pande (2005), assesses all possible predictor subsets on-the-go and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT-USA, 148p.

  17. Non-parametric combination and related permutation tests for neuroimaging.

    PubMed

    Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E

    2016-04-01

    In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions, surface and/or volume-based representations of the brain, including non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm and large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction. Hum Brain Mapp 37:1486-1511, 2016. © 2016 Wiley Periodicals, Inc. PMID:26848101
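
    The core of the NPC idea, synchronized permutations of several partial tests combined with Tippett's rule (the minimum partial p-value), can be sketched as follows. This is a generic two-sample illustration with made-up data and partial statistics, not the neuroimaging implementation evaluated in the paper.

```python
import numpy as np

def npc_tippett(group_a, group_b, partial_stats, n_perm=2000, seed=0):
    """Non-parametric combination (NPC) of several partial tests on the same data.

    One synchronized relabelling is shared by all partial tests; Tippett's rule
    (the minimum partial p-value) combines them. `partial_stats` are functions
    returning statistics that are large under their alternatives.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    T = np.empty((n_perm + 1, len(partial_stats)))
    T[0] = [s(group_a, group_b) for s in partial_stats]           # observed statistics
    for b in range(1, n_perm + 1):
        perm = rng.permutation(pooled)                            # synchronized permutation
        T[b] = [s(perm[:n_a], perm[n_a:]) for s in partial_stats]
    # partial p-values of every (permuted and observed) statistic in the shared distribution
    P = (T[:, None, :] <= T[None, :, :]).mean(axis=1)
    combined = P.min(axis=1)                                      # Tippett combining function
    return (combined <= combined[0]).mean(), P[0]                 # combined p-value, partial p-values

a = np.random.default_rng(1).normal(0.5, 1.0, size=40)
b = np.random.default_rng(2).normal(0.0, 1.6, size=40)
mean_shift = lambda u, v: abs(u.mean() - v.mean())
scale_shift = lambda u, v: abs(np.log(u.var(ddof=1) / v.var(ddof=1)))
p_comb, p_part = npc_tippett(a, b, [mean_shift, scale_shift])
print(p_comb, p_part)
```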

  18. Bayesian Semi- and Non-parametric Models for Longitudinal Data with Multiple Membership Effects in R

    PubMed Central

    Savitsky, Terrance D.; Paddock, Susan M.

    2014-01-01

    We introduce growcurves for R that performs analysis of repeated measures multiple membership (MM) data. This data structure arises in studies under which an intervention is delivered to each subject through the subject's participation in a set of multiple elements that characterize the intervention. In our motivating study design under which subjects receive a group cognitive behavioral therapy (CBT) treatment, an element is a group CBT session and each subject attends multiple sessions that, together, comprise the treatment. The sets of elements, or group CBT sessions, attended by subjects will partly overlap with some of those from other subjects to induce a dependence in their responses. The growcurves package offers two alternative sets of hierarchical models: 1. Separate terms are specified for multivariate subject and MM element random effects, where the subject effects are modeled under a Dirichlet process prior to produce a semi-parametric construction; 2. A single term is employed to model joint subject-by-MM effects. A fully non-parametric dependent Dirichlet process formulation allows exploration of differences in subject responses across different MM elements. This model allows for borrowing information among subjects who express similar longitudinal trajectories for flexible estimation. growcurves deploys “estimation” functions to perform posterior sampling under a suite of prior options. An accompanying set of “plot” functions allow the user to readily extract by-subject growth curves. The design approach intends to anticipate inferential goals with tools that fully extract information from repeated measures data. Computational efficiency is achieved by performing the sampling for estimation functions using compiled C++. PMID:25400517

  19. Testing for predator dependence in predator-prey dynamics: a non-parametric approach.

    PubMed Central

    Jost, C; Ellner, S P

    2000-01-01

    The functional response is a key element in all predator-prey interactions. Although functional responses are traditionally modelled as being a function of prey density only, evidence is accumulating that predator density also has an important effect. However, much of the evidence comes from artificial experimental arenas under conditions not necessarily representative of the natural system, and neglecting the temporal dynamics of the organism (in particular the effects of prey depletion on the estimated functional response). Here we present a method that removes these limitations by reconstructing the functional response non-parametrically from predator-prey time-series data. This method is applied to data on a protozoan predator-prey interaction, and we obtain significant evidence of predator dependence in the functional response. A crucial element in this analysis is to include time-lags in the prey and predator reproduction rates, and we show that these delays improve the fit of the model significantly. Finally, we compare the non-parametrically reconstructed functional response to parametric forms, and suggest that a modified version of the Hassell-Varley predator interference model provides a simple and flexible function for theoretical investigation and applied modelling. PMID:11467423

  20. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.

  1. Tremor Detection Using Parametric and Non-Parametric Spectral Estimation Methods: A Comparison with Clinical Assessment

    PubMed Central

    Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.

    2016-01-01

    In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
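
    As a rough illustration of a non-parametric spectral detector of the kind compared in the study, the sketch below flags accelerometer segments whose relative Welch-periodogram power in a nominal tremor band exceeds a threshold. The band limits, segment length, threshold, sampling rate and simulated signal are all assumptions made for illustration, not the tuned parameters or data from the paper.

```python
import numpy as np
from scipy.signal import welch

def detect_tremor_segments(acc, fs, seg_len_s=3.0, band=(4.0, 12.0), threshold=0.6):
    """Flag segments whose relative spectral power in the tremor band exceeds a threshold."""
    seg_len = int(seg_len_s * fs)
    flags = []
    for start in range(0, len(acc) - seg_len + 1, seg_len):
        f, pxx = welch(acc[start:start + seg_len], fs=fs, nperseg=min(256, seg_len))
        in_band = (f >= band[0]) & (f <= band[1])
        rel_power = pxx[in_band].sum() / pxx.sum()    # fraction of power in the tremor band
        flags.append(rel_power > threshold)
    return np.array(flags)

fs = 100.0                                            # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)
acc = 0.05 * np.random.default_rng(0).standard_normal(t.size)
acc[t > 30] += 0.2 * np.sin(2 * np.pi * 6.0 * t[t > 30])   # simulated 6 Hz tremor in second half
print(detect_tremor_segments(acc, fs).astype(int))
```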

  2. Non-Parametric Collision Probability for Low-Velocity Encounters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2007-01-01

    An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.

  3. AWclust: point-and-click software for non-parametric population structure analysis

    PubMed Central

    Gao, Xiaoyi; Starmer, Joshua D

    2008-01-01

    Background Population structure analysis is important to genetic association studies and evolutionary investigations. Parametric approaches, e.g. STRUCTURE and L-POP, usually assume Hardy-Weinberg equilibrium (HWE) and linkage equilibrium among loci in sample population individuals. However, the assumptions may not hold and allele frequency estimation may not be accurate in some data sets. The improved version of STRUCTURE (version 2.1) can incorporate linkage information among loci but is still sensitive to high background linkage disequilibrium. Nowadays, large-scale single nucleotide polymorphisms (SNPs) are becoming popular in genetic studies. Therefore, it is imperative to have software that makes full use of these genetic data to generate inference even when model assumptions do not hold or allele frequency estimation suffers from high variation. Results We have developed point-and-click software for non-parametric population structure analysis distributed as an R package. The software takes advantage of the large number of SNPs available to categorize individuals into ethnically similar clusters and it does not require assumptions about population models. Nor does it estimate allele frequencies. Moreover, this software can also infer the optimal number of populations. Conclusion Our software tool employs non-parametric approaches to assign individuals to clusters using SNPs. It provides efficient computation and an intuitive way for researchers to explore ethnic relationships among individuals. It can be complementary to parametric approaches in population structure analysis. PMID:18237431

  4. Non-Parametric Bayesian Registration (NParBR) of Body Tumors in DCE-MRI Data.

    PubMed

    Pilutti, David; Strumia, Maddalena; Buchert, Martin; Hadjidemetriou, Stathis

    2016-04-01

    The identification of tumors in the internal organs of chest, abdomen, and pelvis anatomic regions can be performed with the analysis of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data. The contrast agent is accumulated differently by pathologic and healthy tissues and that results in a temporally varying contrast in an image series. The internal organs are also subject to potentially extensive movements mainly due to breathing, heart beat, and peristalsis. This contributes to making the analysis of DCE-MRI datasets challenging as well as time consuming. To address this problem we propose a novel pairwise non-rigid registration method with a Non-Parametric Bayesian Registration (NParBR) formulation. The NParBR method uses a Bayesian formulation that assumes a model for the effect of the distortion on the joint intensity statistics, a non-parametric prior for the restored statistics, and also applies a spatial regularization for the estimated registration with Gaussian filtering. A minimally biased intra-dataset atlas is computed for each dataset and used as reference for the registration of the time series. The time series registration method has been tested with 20 datasets of liver, lungs, intestines, and prostate. It has been compared to the B-Splines and to the SyN methods with results that demonstrate that the proposed method improves both accuracy and efficiency. PMID:26672032

  5. Robust non-parametric one-sample tests for the analysis of recurrent events.

    PubMed

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. PMID:21170908

  6. Point matching based on non-parametric model

    NASA Astrophysics Data System (ADS)

    Liu, Renfeng; Zhang, Cong; Tian, Jinwen

    2015-12-01

    Establishing reliable feature correspondence between two images is a fundamental problem in vision analysis and it is a critical prerequisite in a wide range of applications including structure-from-motion, 3D reconstruction, tracking, image retrieval, registration, and object recognition. The feature could be point, line, curve or surface, among which the point feature is primary and is the foundation of all features. Numerous techniques related to point matching have been proposed within a rich and extensive literature, which are typically studied under rigid/affine or non-rigid motion, corresponding to parametric and non-parametric models for the underlying image relations. In this paper, we provide a review of our previous work on point matching, focusing on nonparametric models. We also make an experimental comparison of the introduced methods, and discuss their advantages and disadvantages as well.

  7. Binary Classifier Calibration Using a Bayesian Non-Parametric Approach

    PubMed Central

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2015-01-01

    Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in data mining. This paper presents two new non-parametric methods for calibrating outputs of binary classification models: a method based on Bayes optimal selection and a method based on Bayesian model averaging. The advantage of these methods is that they are independent of the algorithm used to learn a predictive model, and they can be applied in a post-processing step, after the model is learned. This makes them applicable to a wide variety of machine learning models and methods. These calibration methods, as well as other methods, are tested on a variety of datasets in terms of both discrimination and calibration performance. The results show the methods either outperform or are comparable in performance to the state-of-the-art calibration methods. PMID:26613068

  8. A non-parametric segmentation methodology for oral videocapillaroscopic images.

    PubMed

    Bellavia, Fabio; Cacioppo, Antonino; Lupaşcu, Carmen Alina; Messina, Pietro; Scardina, Giuseppe; Tegolo, Domenico; Valenti, Cesare

    2014-05-01

    We aim to describe a new non-parametric methodology to support the clinician during the diagnostic process of oral videocapillaroscopy to evaluate peripheral microcirculation. Our methodology, mainly based on wavelet analysis and mathematical morphology to preprocess the images, segments them by minimizing the within-class luminosity variance of both capillaries and background. Experiments were carried out on a set of real microphotographs to validate this approach versus handmade segmentations provided by physicians. By using a leave-one-patient-out approach, we pointed out that our methodology is robust, according to precision-recall criteria (average precision and recall are equal to 0.924 and 0.923, respectively) and it acts as a physician in terms of the Jaccard index (mean and standard deviation equal to 0.858 and 0.064, respectively). PMID:24657094

  9. A Bayesian non-parametric Potts model with application to pre-surgical FMRI data.

    PubMed

    Johnson, Timothy D; Liu, Zhuqing; Bartsch, Andreas J; Nichols, Thomas E

    2013-08-01

    The Potts model has enjoyed much success as a prior model for image segmentation. Given the individual classes in the model, the data are typically modeled as Gaussian random variates or as random variates from some other parametric distribution. In this article, we present a non-parametric Potts model and apply it to a functional magnetic resonance imaging study for the pre-surgical assessment of peritumoral brain activation. In our model, we assume that the Z-score image from a patient can be segmented into activated, deactivated, and null classes, or states. Conditional on the class, or state, the Z-scores are assumed to come from some generic distribution which we model non-parametrically using a mixture of Dirichlet process priors within the Bayesian framework. The posterior distribution of the model parameters is estimated with a Markov chain Monte Carlo algorithm, and Bayesian decision theory is used to make the final classifications. Our Potts prior model includes two parameters, the standard spatial regularization parameter and a parameter that can be interpreted as the a priori probability that each voxel belongs to the null, or background state, conditional on the lack of spatial regularization. We assume that both of these parameters are unknown, and jointly estimate them along with other model parameters. We show through simulation studies that our model performs on par, in terms of posterior expected loss, with parametric Potts models when the parametric model is correctly specified and outperforms parametric models when the parametric model is misspecified. PMID:22627277

  10. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
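
    A minimal sketch of the idea, assuming a simple Nadaraya-Watson smoother and a residual bootstrap; the paper's exact regression model and bootstrap scheme may differ, and the data and tuning values below are illustrative. An observed output falling outside the returned interval would be flagged as anomalous.

```python
import numpy as np

def nw_smoother(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression estimate at x_query."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def bootstrap_prediction_interval(x, y, x_query, bandwidth=0.3, n_boot=500, alpha=0.05, seed=0):
    """Residual-bootstrap prediction interval for a non-parametric regression model."""
    rng = np.random.default_rng(seed)
    fit = nw_smoother(x, y, x, bandwidth)
    resid = y - fit
    preds = np.empty((n_boot, len(x_query)))
    for b in range(n_boot):
        y_star = fit + rng.choice(resid, size=len(y), replace=True)      # resample residuals
        preds[b] = nw_smoother(x, y_star, x_query, bandwidth) \
                   + rng.choice(resid, size=len(x_query), replace=True)  # add future noise
    return np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 3, 200))
y = np.sin(2 * x) + 0.2 * rng.standard_normal(200)
lo, hi = bootstrap_prediction_interval(x, y, x_query=np.array([1.5]))
print(lo, hi)
```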

  11. Non-parametric star formation histories for four dwarf spheroidal galaxies of the Local Group

    NASA Astrophysics Data System (ADS)

    Hernandez, X.; Gilmore, Gerard; Valls-Gabaud, David

    2000-10-01

    We use recent Hubble Space Telescope colour-magnitude diagrams of the resolved stellar populations of a sample of local dSph galaxies (Carina, Leo I, Leo II and Ursa Minor) to infer the star formation histories of these systems, SFR(t). Applying a new variational calculus maximum likelihood method, which includes a full Bayesian analysis and allows a non-parametric estimate of the function one is solving for, we infer the star formation histories of the systems studied. This method has the advantage of yielding an objective answer, as one need not assume a priori the form of the function one is trying to recover. The results are checked independently using Saha's W statistic. The total luminosities of the systems are used to normalize the results into physical units and derive SN type II rates. We derive the luminosity-weighted mean star formation history of this sample of galaxies.

  12. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2015-04-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of the voltage logarithm ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  13. Non-parametric reconstruction of cosmological matter perturbations

    NASA Astrophysics Data System (ADS)

    González, J. E.; Alcaniz, J. S.; Carvalho, J. C.

    2016-04-01

    Perturbative quantities, such as the growth rate (f) and index (γ), are powerful tools to distinguish different dark energy models or modified gravity theories even if they produce the same cosmic expansion history. In this work, without any assumption about the dynamics of the Universe, we apply a non-parametric method to current measurements of the expansion rate H(z) from cosmic chronometers and high-z quasar data and reconstruct the growth factor and rate of linearised density perturbations in the non-relativistic matter component. Assuming realistic values for the matter density parameter Ωm0, as provided by current CMB experiments, we also reconstruct the evolution of the growth index γ with redshift. We show that the reconstruction of current H(z) data constrains the growth index to γ=0.56 ± 0.12 (2σ) at z = 0.09, which is in full agreement with the prediction of the ΛCDM model and some of its extensions.

  14. Non-parametric and least squares Langley plot methods

    NASA Astrophysics Data System (ADS)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of the voltage logarithm ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
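
    The Langley calibration itself reduces to a straight-line fit of ln(V) against air mass; a minimal least-squares sketch with synthetic measurements is shown below. The site values, noise level and the later example measurement are invented for illustration only.

```python
import numpy as np

def langley_fit(air_mass, voltage):
    """Least-squares Langley plot: regress ln(V) on air mass m to recover ln(V0) and tau.

    Follows the Bouguer-Lambert-Beer law V = V0 * exp(-tau * m), so ln(V) = ln(V0) - tau * m.
    """
    slope, intercept = np.polyfit(air_mass, np.log(voltage), 1)
    return intercept, -slope                      # ln(V0) and optical depth tau

# hypothetical morning of measurements at a clean site
rng = np.random.default_rng(0)
m = np.linspace(2.0, 6.0, 40)                     # air mass values
true_v0, true_tau = 1.25, 0.12
v = true_v0 * np.exp(-true_tau * m) * np.exp(0.005 * rng.standard_normal(m.size))
ln_v0, tau = langley_fit(m, v)
print(np.exp(ln_v0), tau)

# once ln(V0) is known, tau for any later measurement follows directly:
tau_new = (ln_v0 - np.log(0.9)) / 3.5             # e.g. a reading V = 0.9 at air mass m = 3.5
```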

  15. A non-parametric probabilistic model for soil-structure interaction

    NASA Astrophysics Data System (ADS)

    Laudarin, F.; Desceliers, C.; Bonnet, G.; Argoul, P.

    2013-07-01

    The paper investigates the effect of soil-structure interaction on the dynamic response of structures. A non-parametric probabilistic formulation for the modelling of an uncertain soil impedance is used to account for the usual lack of information on soil properties. Such a probabilistic model introduces the physical coupling stemming from the soil heterogeneity around the foundation. Considering this effect, even a symmetrical building displays a torsional motion when submitted to earthquake loading. The study focuses on a multi-story building modeled by using equivalent Timoshenko beam models which have different mass distributions. The probability density functions of the maximal internal forces and moments in a given building are estimated by Monte Carlo simulations. Some results on the stochastic modal analysis of the structure are also given.

  16. Non-parametric frequency analysis of extreme values for integrated disaster management considering probable maximum events

    NASA Astrophysics Data System (ADS)

    Takara, K. T.

    2015-12-01

    This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis methods of extreme values include the following steps: Step 1: Collecting and checking extreme-value data; Step 2: Enumerating probability distributions that would fit the data well; Step 3: Parameter estimation; Step 4: Testing goodness of fit; Step 5: Checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: Selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. The bootstrap resampling can perform bias correction for the NPM and can also give the estimation accuracy as the bootstrap standard error. The NPM thus has the advantage of avoiding various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as such upper-bound values combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
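
    A minimal sketch of the non-parametric idea: read the T-year quantile directly from the empirical distribution and use bootstrap resampling for bias correction and a standard error, with no distribution fitting. The synthetic rainfall record and the return period below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def np_quantile_with_bootstrap(sample, return_period=100, n_boot=2000, seed=0):
    """Plotting-position (non-parametric) T-year quantile with a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    p = 1.0 - 1.0 / return_period                     # non-exceedance probability of the T-year event
    q_hat = np.quantile(sample, p)
    boot = np.array([np.quantile(rng.choice(sample, size=len(sample), replace=True), p)
                     for _ in range(n_boot)])
    bias = boot.mean() - q_hat
    return q_hat - bias, boot.std(ddof=1)             # bias-corrected estimate, bootstrap SE

annual_max_rainfall = np.random.default_rng(1).gumbel(120.0, 35.0, size=120)  # synthetic record
q100, se = np_quantile_with_bootstrap(annual_max_rainfall)
print(round(q100, 1), round(se, 1))
```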

  17. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    NASA Astrophysics Data System (ADS)

    González, Adriana; Delouille, Véronique; Jacques, Laurent

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  18. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods - A comparison

    NASA Astrophysics Data System (ADS)

    Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José

    2015-10-01

    Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination of the form (ρ560 − ρ1610 − ρ2190)/(ρ560 + ρ1610 + ρ2190) with a 10-fold cross-validation R²CV of 0.82 (RMSECV: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Particularly kernel-based MLRAs lead to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with an R²CV of 0.90 (RMSECV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R² of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
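
    A minimal sketch of the parametric-index step described above, assuming synthetic reflectances and a synthetic LAI-index relation; the band centres follow the abstract, everything else is a placeholder.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 60

      # Synthetic reflectances for S2-like bands centred at 560, 1610 and 2190 nm (placeholders).
      r560 = rng.uniform(0.05, 0.40, n)
      r1610 = rng.uniform(0.05, 0.40, n)
      r2190 = rng.uniform(0.05, 0.40, n)

      # Three-band normalized-difference index of the form reported in the abstract.
      index = (r560 - r1610 - r2190) / (r560 + r1610 + r2190)

      # Synthetic LAI loosely tied to the index, plus noise (illustration only).
      lai = 3.0 + 4.0 * index + rng.normal(0, 0.3, n)

      # Linear regression fitting function: LAI ~ a * index + b.
      a, b = np.polyfit(index, lai, deg=1)
      rmse = np.sqrt(np.mean((a * index + b - lai) ** 2))
      print(f"LAI ~ {a:.2f} * index + {b:.2f}  (RMSE = {rmse:.2f})")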

  19. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereovision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is said to be true when such a probability is maximum. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF) which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable. PMID:18238122
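
    A minimal sketch of a one-dimensional Parzen-window (kernel) density estimate of the kind used to obtain a matching probability; the Gaussian kernel, bandwidth and attribute-difference samples are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def parzen_pdf(train, x, h=0.25):
          """Parzen-window estimate of a 1-D PDF with a Gaussian kernel of width h."""
          train = np.asarray(train)[:, None]          # shape (n, 1)
          x = np.asarray(x)[None, :]                  # shape (1, m)
          kernels = np.exp(-0.5 * ((x - train) / h) ** 2) / (h * np.sqrt(2 * np.pi))
          return kernels.mean(axis=0)                 # average kernel response at each x

      rng = np.random.default_rng(2)
      # Hypothetical attribute differences for true matches (small differences).
      true_diffs = rng.normal(0.0, 0.2, 200)
      query = np.array([0.05, 0.5, 1.0])
      print(parzen_pdf(true_diffs, query))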

  20. Survival probabilities with time-dependent treatment indicator: quantities and non-parametric estimators.

    PubMed

    Bernasconi, Davide Paolo; Rebora, Paola; Iacobelli, Simona; Valsecchi, Maria Grazia; Antolini, Laura

    2016-03-30

    The 'landmark' and 'Simon and Makuch' non-parametric estimators of the survival function are commonly used to contrast the survival experience of time-dependent treatment groups in applications such as stem cell transplant versus chemotherapy in leukemia. However, the theoretical survival functions corresponding to the second approach were not clearly defined in the literature, and the use of the 'Simon and Makuch' estimator was criticized in the biostatistical community. Here, we review the 'landmark' approach, showing that it focuses on the average survival of patients conditional on being failure free and on the treatment status assessed at the landmark time. We argue that the 'Simon and Makuch' approach represents counterfactual survival probabilities where treatment status is forced to be fixed: the patient is thought of as under chemotherapy without the possibility to switch treatment, or as under transplant since the beginning of the follow-up. We argue that the 'Simon and Makuch' estimator leads to valid estimates only under the Markov assumption, which is however less likely to occur in practical applications. This motivates the development of a novel approach based on time rescaling, which leads to suitable estimates of the counterfactual probabilities in a semi-Markov process. The method is also extended to deal with a fixed landmark time of interest. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26503800

  1. Non-parametric 3D map of the intergalactic medium using the Lyman-alpha forest

    NASA Astrophysics Data System (ADS)

    Cisewski, Jessi; Croft, Rupert A. C.; Freeman, Peter E.; Genovese, Christopher R.; Khandai, Nishikanta; Ozbek, Melih; Wasserman, Larry

    2014-05-01

    Visualizing the high-redshift Universe is difficult due to the dearth of available data; however, the Lyman-alpha forest provides a means to map the intergalactic medium at redshifts not accessible to large galaxy surveys. Large-scale structure surveys, such as the Baryon Oscillation Spectroscopic Survey (BOSS), have collected quasar (QSO) spectra that enable the reconstruction of H I density fluctuations. The data fall on a collection of lines defined by the lines of sight (LOS) of the QSO, and a major issue with producing a 3D reconstruction is determining how to model the regions between the LOS. We present a method that produces a 3D map of this relatively uncharted portion of the Universe by employing local polynomial smoothing, a non-parametric methodology. The performance of the method is analysed on simulated data that mimics the varying number of LOS expected in real data, and then is applied to a sample region selected from BOSS. Evaluation of the reconstruction is assessed by considering various features of the predicted 3D maps including visual comparison of slices, probability density functions (PDFs), counts of local minima and maxima, and standardized correlation functions. This 3D reconstruction allows for an initial investigation of the topology of this portion of the Universe using persistent homology.

  2. Non-parametric inferences on climate change of high-resolution spatial patterns of precipitation extremes in Iberia

    NASA Astrophysics Data System (ADS)

    Melo-Gonçalves, Paulo; Rocha, Alfredo; Pinto, Joaquim; Santos, João; Corte-Real, João

    2013-04-01

    Precipitation daily-total data, obtained from a multi-model ensemble of Regional Climate Model (RCM) simulations provided by the EU FP6 Integrated Project ENSEMBLES, is analysed at a horizontal spatial resolution of 25 km in the Iberian Peninsula (IP). ENSEMBLES' RCMs were driven by boundary conditions imposed by General Circulation Models (GCMs) that ran under historic conditions from 1961 to 2000, and under the SRES A1B scenario from 2001 to 2100. Annual and seasonal indices of precipitation extremes, proposed by the CCI/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices (ETCCDI), were derived from the daily precipitation ensemble. The ensemble of ETCCDI indices is subjected to climate detection methods in order to identify Iberian regions projected to experience higher climate change. Non-parametric climate change detection methods are applied to each member of the ETCCDI multi-model ensemble (ETCCDI-MME) and to its median (ETCCDI-MMEM). The resulting statistics are used to infer climate change projections and associated uncertainties. Climate change projections are evaluated from the statistics obtained from the ETCCDI-MMEM, while the uncertainties of those projections are evaluated by a rank-based measure of the spread of these statistics across the ETCCDI-MME. All methods consist of an estimator whose realization, or estimate, is tested by a non-parametric hypothesis test: (i) Theil-Sen linear trend, from 1961 to 2100, tested by the Mann-Kendall test; (ii) differences between the climatologies, estimated by the time median, of near-future (2021-2050) and distant-future (2071-2100) climates from the climatology of a recent-past reference climate (1961-1990), tested by the Mann-Whitney test; and (iii) differences between the probability distributions of the near and distant climates from that of the reference climate, tested by the Kolmogorov-Smirnov test. IP regions with statistically significant, at 0.05 level, projected climate change
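
    A small sketch of the three estimator/test pairs listed above, applied to a synthetic annual index series; SciPy's Kendall tau test stands in for the Mann-Kendall trend test, and the series itself is a placeholder.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      years = np.arange(1961, 2101)
      index = 0.02 * (years - 1961) + rng.normal(0, 1, years.size)   # synthetic ETCCDI-like index

      # (i) Theil-Sen slope with a Kendall-tau trend test (stand-in for Mann-Kendall).
      slope, intercept, lo_slope, up_slope = stats.theilslopes(index, years)
      tau, p_trend = stats.kendalltau(years, index)

      # (ii) Difference between future and reference climatologies (Mann-Whitney test).
      ref = index[(years >= 1961) & (years <= 1990)]
      near = index[(years >= 2021) & (years <= 2050)]
      u, p_mw = stats.mannwhitneyu(near, ref, alternative="two-sided")

      # (iii) Difference between the full distributions (Kolmogorov-Smirnov test).
      ks, p_ks = stats.ks_2samp(near, ref)

      print(f"Theil-Sen slope={slope:.3f}, MK p={p_trend:.3g}, MW p={p_mw:.3g}, KS p={p_ks:.3g}")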

  3. The Dark Matter Profile of the Milky Way: A Non-parametric Reconstruction

    NASA Astrophysics Data System (ADS)

    Pato, Miguel; Iocco, Fabio

    2015-04-01

    We present the results of a new, non-parametric method to reconstruct the Galactic dark matter profile directly from observations. Using the latest kinematic data to track the total gravitational potential and the observed distribution of stars and gas to set the baryonic component, we infer the dark matter contribution to the circular velocity across the Galaxy. The radial derivative of this dynamical contribution is then estimated to extract the dark matter profile. The innovative feature of our approach is that it makes no assumption on the functional form or shape of the profile, thus allowing for a clean determination with no theoretical bias. We illustrate the power of the method by constraining the spherical dark matter profile between 2.5 and 25 kpc away from the Galactic center. The results show that the proposed method, free of widely used assumptions, can already be applied to pinpoint the dark matter distribution in the Milky Way with competitive accuracy, and paves the way for future developments.

  4. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  5. The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches

    NASA Astrophysics Data System (ADS)

    Bucher, Martin; Racine, Benjamin; van Tent, Bartjan

    2016-05-01

    We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called fNL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.

  6. Bayesian non-parametric approaches to reconstructing oscillatory systems and the Nyquist limit

    NASA Astrophysics Data System (ADS)

    Žurauskienė, Justina; Kirk, Paul; Thorne, Thomas; Stumpf, Michael P. H.

    Reconstructing continuous signals from discrete time-points is a challenging inverse problem encountered in many scientific and engineering applications. For oscillatory signals classical results due to Nyquist set the limit below which it becomes impossible to reliably reconstruct the oscillation dynamics. Here we revisit this problem for vector-valued outputs and apply Bayesian non-parametric approaches in order to solve the function estimation problem. The main aim of the current paper is to map how we can make use of correlations among different outputs to reconstruct signals at a sampling rate that lies below the Nyquist rate. We show that it is possible to use multiple-output Gaussian processes to capture dependences between outputs that facilitate reconstruction of signals in situations where conventional Gaussian processes (i.e. those aimed at describing scalar signals) fail, and we delineate the phase and frequency dependence of the reliability of this type of approach. In addition to simple toy-models we also consider the dynamics of the tumour suppressor gene p53, which exhibits oscillations under physiological conditions, and which can be reconstructed more reliably in our new framework.

  7. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    PubMed

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties for each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. PMID:25524862

  8. Non-parametric seismic hazard analysis in the presence of incomplete data

    NASA Astrophysics Data System (ADS)

    Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush

    2016-07-01

    The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.

  9. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  10. Real time air quality forecasting using integrated parametric and non-parametric regression techniques

    NASA Astrophysics Data System (ADS)

    Donnelly, Aoife; Misstear, Bruce; Broderick, Brian

    2015-02-01

    This paper presents a model for producing real time air quality forecasts with both high accuracy and high computational efficiency. Temporal variations in nitrogen dioxide (NO2) levels and historical correlations between meteorology and NO2 levels are used to estimate air quality 48 h in advance. Non-parametric kernel regression is used to produce linearized factors describing variations in concentrations with wind speed and direction and, furthermore, to produce seasonal and diurnal factors. The basis for the model is a multiple linear regression which uses these factors together with meteorological parameters and persistence as predictors. The model was calibrated at three urban sites and one rural site and the final fitted model achieved R values of between 0.62 and 0.79 for hourly forecasts and between 0.67 and 0.84 for daily maximum forecasts. Model validation using four model evaluation parameters, an index of agreement (IA), the correlation coefficient (R), the fraction of values within a factor of 2 (FAC2) and the fractional bias (FB), yielded good results. The IA for 24 hr forecasts of hourly NO2 was between 0.77 and 0.90 at urban sites and 0.74 at the rural site, while for daily maximum forecasts it was between 0.89 and 0.94 for urban sites and 0.78 for the rural site. R values of up to 0.79 and 0.81 and FAC2 values of 0.84 and 0.96 were observed for hourly and daily maximum predictions, respectively. The model requires only simple input data and very low computational resources. It was found to be an accurate and efficient means of producing real time air quality forecasts.
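
    A minimal sketch of the non-parametric kernel regression step, here a Nadaraya-Watson estimator producing a wind-speed factor from synthetic NO2 data; the bandwidth, units and data are assumptions, not the paper's calibration.

      import numpy as np

      def kernel_regression(x_train, y_train, x_query, h=1.0):
          """Nadaraya-Watson estimator with a Gaussian kernel of bandwidth h."""
          w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
          return (w * y_train).sum(axis=1) / w.sum(axis=1)

      rng = np.random.default_rng(4)
      wind_speed = rng.uniform(0, 15, 500)
      no2 = 60 * np.exp(-wind_speed / 5) + rng.normal(0, 5, wind_speed.size)  # synthetic NO2

      grid = np.linspace(0, 15, 16)
      factor = kernel_regression(wind_speed, no2, grid, h=1.5)  # linearized wind-speed factor
      print(np.round(factor, 1))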

  11. A general non-parametric classifier applied to discriminating surface water from terrain shadows

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.

    1975-01-01

    A general non-parametric classifier is described in the context of discriminating surface water from terrain shadows. In addition to using non-parametric statistics, this classifier permits the use of a cost matrix to assign different penalties to various types of misclassifications. The approach also differs from conventional classifiers in that it applies the maximum-likelihood criterion to overall class probabilities as opposed to the standard practice of choosing the most likely individual subclass. The classifier performance is evaluated using two different effectiveness measures for a specific set of ERTS data.

  12. Program Computes Thermodynamic Functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1994-01-01

    PAC91 is the latest in the PAC (Properties and Coefficients) series. Its two principal features are to provide a means of (1) generating theoretical thermodynamic functions from molecular constants and (2) least-squares fitting of these functions to empirical equations. PAC91 is written in FORTRAN 77 to be machine-independent.

  13. Symbolic functions from neural computation.

    PubMed

    Smolensky, Paul

    2012-07-28

    Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance. PMID:22711873

  14. A non-parametric peak calling algorithm for DamID-Seq.

    PubMed

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality check and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width. PMID:25785608
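
    A toy sketch of the resampling, scaling and thresholding idea behind such a bootstrap-based peak caller, on synthetic binned read counts; the bin counts, scaling rule and threshold are placeholders and not the published NPPC algorithm.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy binned read counts for the Dam-fusion sample and the Dam-only control.
      fusion = rng.poisson(5, 1000)
      fusion[400:410] += 40                       # a synthetic enriched region
      control = rng.poisson(5, 1000)

      # 1) Bootstrap the control bins to estimate the background level and its variability.
      B = 500
      boot_means = np.array([rng.choice(control, control.size, replace=True).mean()
                             for _ in range(B)])
      bg_mean, bg_se = boot_means.mean(), boot_means.std(ddof=1)

      # 2) Scale for sequencing depth and compute per-bin signal-to-noise fold changes.
      scale = fusion.sum() / control.sum()
      fold = (fusion + 1) / (scale * (control + 1))

      # 3)-4) Filter weak bins and call peaks above a (placeholder) fold-change threshold.
      candidate = fusion > bg_mean + 3 * bg_se
      peaks = np.flatnonzero(candidate & (fold > 4))
      print(f"{peaks.size} candidate peak bins, e.g. {peaks[:5]}")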

  15. Performances and Spending Efficiency in Higher Education: A European Comparison through Non-Parametric Approaches

    ERIC Educational Resources Information Center

    Agasisti, Tommaso

    2011-01-01

    The objective of this paper is an efficiency analysis concerning higher education systems in European countries. Data have been extracted from OECD data-sets (Education at a Glance, several years), using a non-parametric technique--data envelopment analysis--to calculate efficiency scores. This paper represents the first attempt to conduct such an…

  16. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.

  17. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  18. A non-parametric approach for co-analysis of multi-modal brain imaging data: Application to Alzheimer’s disease

    PubMed Central

    Hayasaka, Satoru; Du, An-Tao; Duarte, Audrey; Kornak, John; Jahng, Geon-Ho; Weiner, Michael W.; Schuff, Norbert

    2007-01-01

    We developed a new flexible approach for a co-analysis of multimodal brain imaging data using a non-parametric framework. In this approach, results from separate analyses on different modalities are combined using a combining function and assessed with a permutation test. This approach identifies several cross-modality relationships, such as concordance and dissociation, without explicitly modeling the correlation between modalities. We applied our approach to structural and perfusion MRI data from an Alzheimer’s disease (AD) study. Our approach identified areas of concordance, where both gray matter (GM) density and perfusion decreased together, and areas of dissociation, where GM density and perfusion did not decrease together. In conclusion, these results demonstrate the utility of this new non-parametric method to quantitatively assess the relationships between multiple modalities. PMID:16412666
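
    A minimal sketch of combining per-modality group differences with a simple additive combining function and assessing it by permutation; the two synthetic 'modalities', the statistic and the number of permutations are illustrative assumptions, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic group differences for two modalities (e.g. GM density and perfusion).
      n = 40
      group = np.repeat([0, 1], n // 2)
      gm = rng.normal(0, 1, n) - 0.8 * group        # modality 1: lower in patients
      perf = rng.normal(0, 1, n) - 0.6 * group      # modality 2: lower in patients

      def combined_stat(g):
          """Additive combining function over the two per-modality mean differences."""
          t1 = gm[g == 0].mean() - gm[g == 1].mean()
          t2 = perf[g == 0].mean() - perf[g == 1].mean()
          return t1 + t2

      obs = combined_stat(group)
      perm = np.array([combined_stat(rng.permutation(group)) for _ in range(5000)])
      p_value = (np.sum(perm >= obs) + 1) / (perm.size + 1)
      print(f"combined statistic = {obs:.2f}, permutation p = {p_value:.4f}")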

  19. Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation

    NASA Astrophysics Data System (ADS)

    Pentaris, Fragkiskos P.; Fouskitakis, George N.

    2014-05-01

    The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which 'monitors' the broad region of the south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and accuracy seismographs - providing acceleration measurements - established at the basement (structure's foundation), presently considered as the ground's acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic, both non-parametric (frequency-based) and parametric, methods for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogenous input (ARX) and AutoRegressive Moving-Average with eXogenous input (ARMAX) models [4, 5
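
    A minimal sketch of the non-parametric (Welch spectrum) step on a synthetic acceleration record; the sampling rate, modal frequencies and peak-picking rule are placeholders, not the instrumentation described in this record.

      import numpy as np
      from scipy import signal

      rng = np.random.default_rng(7)
      fs = 200.0                                   # assumed sampling rate in Hz
      t = np.arange(0, 60, 1 / fs)

      # Synthetic floor acceleration: two lightly damped modes plus broadband noise.
      acc = (np.sin(2 * np.pi * 3.2 * t) * np.exp(-0.02 * t)
             + 0.5 * np.sin(2 * np.pi * 9.7 * t) * np.exp(-0.03 * t)
             + 0.3 * rng.normal(0, 1, t.size))

      f, pxx = signal.welch(acc, fs=fs, nperseg=2048)
      peaks, _ = signal.find_peaks(pxx, height=pxx.max() * 0.05)
      print("candidate natural frequencies (Hz):", np.round(f[peaks], 2))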

  20. Automatic computation of transfer functions

    DOEpatents

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.

  1. Software to use the non-parametric k-nearest neighbor approach to estimate soil water retention

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-parametric approaches are being used in various fields to address classification-type problems, as well as to estimate continuous variables. One type of non-parametric lazy learning algorithm, the k-Nearest Neighbor (k-NN) algorithm, has been applied as a pedotransfer technique to estimate wat...
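
    A minimal sketch of the k-NN idea as a pedotransfer function, using scikit-learn's KNeighborsRegressor on placeholder soil properties; this is not the released software, and the variables are assumptions.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(8)

      # Placeholder predictors: sand %, clay %, organic matter %, bulk density.
      X = rng.uniform([5, 5, 0.1, 1.0], [80, 60, 5.0, 1.8], size=(300, 4))
      # Placeholder target: water content at -33 kPa (synthetic relation plus noise).
      y = 0.4 - 0.002 * X[:, 0] + 0.003 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 0.01, 300)

      knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X[:250], y[:250])
      print("predicted water retention:", np.round(knn.predict(X[250:255]), 3))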

  2. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
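
    A small sketch contrasting Monte Carlo and Latin hypercube designs and fitting a support-vector surrogate to a simple test function; SciPy's qmc module and scikit-learn's SVR are used as stand-ins for the sampling and approximation methods named above.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.svm import SVR

      def test_function(x):
          """Simple 2-D test function standing in for an expensive simulation."""
          return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

      rng = np.random.default_rng(9)
      n = 64
      x_test = rng.uniform(size=(200, 2))

      # Two sampling plans over the unit square: plain Monte Carlo and a Latin hypercube.
      designs = {"Monte Carlo": rng.uniform(size=(n, 2)),
                 "Latin hypercube": qmc.LatinHypercube(d=2, seed=9).random(n)}

      # Fit a support-vector surrogate to each design and check its accuracy off-design.
      for name, design in designs.items():
          surrogate = SVR(kernel="rbf", C=10.0).fit(design, test_function(design))
          err = np.sqrt(np.mean((surrogate.predict(x_test) - test_function(x_test)) ** 2))
          print(f"{name:16s} surrogate RMSE = {err:.3f}")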

  3. Non-Parametric Bayesian Human Motion Recognition Using a Single MEMS Tri-Axial Accelerometer

    PubMed Central

    Ahmed, M. Ejaz; Song, Ju Bin

    2012-01-01

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Mean (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method. PMID:23201992
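
    A minimal sketch of infinite-mixture-style clustering using scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior as a stand-in for the IGMM/collapsed Gibbs sampler described above; the two-dimensional 'accelerometer features' are synthetic.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(10)

      # Synthetic 2-D features (e.g. mean and variance of acceleration) for three motions.
      features = np.vstack([rng.normal([0, 1], 0.2, (100, 2)),
                            rng.normal([3, 2], 0.3, (100, 2)),
                            rng.normal([1, 5], 0.2, (100, 2))])

      # Truncated Dirichlet-process mixture: the number of active clusters is inferred.
      dpgmm = BayesianGaussianMixture(n_components=10,
                                      weight_concentration_prior_type="dirichlet_process",
                                      max_iter=500, random_state=0).fit(features)
      labels = dpgmm.predict(features)
      print("motions found:", np.unique(labels).size)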

  4. Non-parametric Bayesian human motion recognition using a single MEMS tri-axial accelerometer.

    PubMed

    Ahmed, M Ejaz; Song, Ju Bin

    2012-01-01

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Mean (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method. PMID:23201992

  5. Non-parametric determination of H and He interstellar fluxes from cosmic-ray data

    NASA Astrophysics Data System (ADS)

    Ghelfi, A.; Barao, F.; Derome, L.; Maurin, D.

    2016-06-01

    Context. Top-of-atmosphere (TOA) cosmic-ray (CR) fluxes from satellites and balloon-borne experiments are snapshots of the solar activity imprinted on the interstellar (IS) fluxes. Given a series of snapshots, the unknown IS flux shape and the level of modulation (for each snapshot) can be recovered. Aims: We wish (i) to provide the most accurate determination of the IS H and He fluxes from TOA data alone; (ii) to obtain the associated modulation levels (and uncertainties) while fully accounting for the correlations with the IS flux uncertainties; and (iii) to inspect whether the minimal force-field approximation is sufficient to explain all the data at hand. Methods: Using H and He TOA measurements, including the recent high-precision AMS, BESS-Polar, and PAMELA data, we performed a non-parametric fit of the H and He IS fluxes JIS and of the modulation level φi for each data-taking period. We relied on a Markov chain Monte Carlo (MCMC) engine to extract the probability density function and correlations (hence the credible intervals) of the sought parameters. Results: Although H and He are the most abundant and best measured CR species, several datasets had to be excluded from the analysis because of inconsistencies with other measurements. From the subset of data passing our consistency cut, we provide ready-to-use best-fit and credible intervals for the H and He IS fluxes from MeV/n to PeV/n energy (with a relative precision in the range [2-10%] at 1σ). Given the strong correlation between JIS and φi parameters, the uncertainties on JIS translate into Δφ ≈ ± 30 MV (at 1σ) for all experiments. We also find that the presence of ³He in He data biases φ towards higher φ values by ~30 MV. The force-field approximation, despite its limitation, gives an excellent (χ2/d.o.f. = 1.02) description of the recent high-precision TOA H and He fluxes. Conclusions: The analysis must be extended to different charge species and more realistic modulation models. It would benefit

  6. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  7. Non-Parametric Change-Point Method for Differential Gene Expression Detection

    PubMed Central

    Wang, Yao; Wu, Chunguo; Ji, Zhaohua; Wang, Binghong; Liang, Yanchun

    2011-01-01

    Background: We propose a non-parametric method, named the Non-Parametric Change Point Statistic (NPCPS for short), which uses a single equation for detecting differential gene expression (DGE) in microarray data. NPCPS is based on change point theory to provide effective DGE detection. Methodology: NPCPS uses the data distribution of the normal samples as input, and detects DGE in the cancer samples by locating the change point of the gene expression profile. An estimate of the change point position generated by NPCPS enables the identification of the samples containing DGE. Monte Carlo simulation and an ROC study were applied to examine the detection accuracy of NPCPS, and an experiment on real microarray data of breast cancer was carried out to compare NPCPS with other methods. Conclusions: The simulation study indicated that NPCPS was more effective for detecting DGE in the cancer subset compared with five parametric methods and one non-parametric method. When there were more than 8 cancer samples containing DGE, the type I error of NPCPS was below 0.01. Experiment results showed both good accuracy and reliability of NPCPS. Out of the 30 top genes ranked by using NPCPS, 16 genes were reported as relevant to cancer. Correlations between the detection result of NPCPS and the compared methods were less than 0.05, while between the other methods the values were from 0.20 to 0.84. This indicates that NPCPS works on different features and thus provides DGE identification from a distinct perspective compared with the other mean- or median-based methods. PMID:21655325

  8. Non-parametric trend analysis of water quality data of rivers in Kansas

    NASA Astrophysics Data System (ADS)

    Yu, Yun-Sheng; Zou, Shimin; Whittemore, Donald

    1993-09-01

    Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basinwide trends are non-homogeneous.

  9. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test

    PubMed Central

    Kerschbamer, Rudolf

    2015-01-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure – the Equality Equivalence Test – that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity. PMID:26089571

  10. Factors associated with malnutrition among tribal children in India: a non-parametric approach.

    PubMed

    Debnath, Avijit; Bhattacharjee, Nairita

    2014-06-01

    The purpose of this study is to identify the determinants of malnutrition among the tribal children in India. The investigation is based on secondary data compiled from the National Family Health Survey-3. We used a classification and regression tree model, a non-parametric approach, to address the objective. Our analysis shows that breastfeeding practice, economic status, antenatal care of mother and women's decision-making autonomy are negatively associated with malnutrition among tribal children. We identify maternal malnutrition and urban concentration of household as the two risk factors for child malnutrition. The identified associated factors may be used for designing and targeting preventive programmes for malnourished tribal children. PMID:24415743
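
    A minimal sketch of a classification tree of the kind used in the study, fitted to placeholder covariates; the variable names, coding and synthetic outcome are assumptions and do not represent the NFHS-3 data.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(11)
      n = 500

      # Placeholder covariates: breastfeeding (0/1), wealth index (1-5), antenatal visits,
      # mother's autonomy score, maternal malnutrition (0/1), urban household (0/1).
      X = np.column_stack([rng.integers(0, 2, n), rng.integers(1, 6, n),
                           rng.integers(0, 10, n), rng.integers(0, 4, n),
                           rng.integers(0, 2, n), rng.integers(0, 2, n)])
      risk = 0.5 * X[:, 4] + 0.3 * X[:, 5] - 0.1 * X[:, 1] - 0.2 * X[:, 0]
      y = (risk + rng.normal(0, 0.3, n) > 0.2).astype(int)   # 1 = malnourished (synthetic)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=["breastfed", "wealth", "anc_visits",
                                             "autonomy", "mother_malnourished", "urban"]))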

  11. Computational complexity of Boolean functions

    NASA Astrophysics Data System (ADS)

    Korshunov, Aleksei D.

    2012-02-01

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  12. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    SciTech Connect

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-02-15

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = -1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  13. hiHMM: Bayesian non-parametric joint inference of chromatin state maps

    PubMed Central

    Sohn, Kyung-Ah; Ho, Joshua W. K.; Djordjevic, Djordje; Jeong, Hyun-hwan; Park, Peter J.; Kim, Ju Han

    2015-01-01

    Motivation: Genome-wide mapping of chromatin states is essential for defining regulatory elements and inferring their activities in eukaryotic genomes. A number of hidden Markov model (HMM)-based methods have been developed to infer chromatin state maps from genome-wide histone modification data for an individual genome. To perform a principled comparison of evolutionarily distant epigenomes, we must consider species-specific biases such as differences in genome size, strength of signal enrichment and co-occurrence patterns of histone modifications. Results: Here, we present a new Bayesian non-parametric method called hierarchically linked infinite HMM (hiHMM) to jointly infer chromatin state maps in multiple genomes (different species, cell types and developmental stages) using genome-wide histone modification data. This flexible framework provides a new way to learn a consistent definition of chromatin states across multiple genomes, thus facilitating a direct comparison among them. We demonstrate the utility of this method using synthetic data as well as multiple modENCODE ChIP-seq datasets. Conclusion: The hierarchical and Bayesian non-parametric formulation in our approach is an important extension to the current set of methodologies for comparative chromatin landscape analysis. Availability and implementation: Source codes are available at https://github.com/kasohn/hiHMM. Chromatin data are available at http://encode-x.med.harvard.edu/data_sets/chromatin/. Contact: peter_park@harvard.edu or juhan@snu.ac.kr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25725496

  14. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique in the non-parametric estimation of the ASD.
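
    A minimal sketch of using SciPy's LSQR routine (with damping) on an ill-conditioned linear system of the kind obtained by discretizing an extinction equation; the kernel matrix and the monomodal distribution are generic placeholders, not the ADA kernel from the paper.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(12)

      # Discretized size grid and a generic smooth, ill-conditioned kernel matrix A
      # (placeholder for the extinction kernel of the anomalous diffraction approximation).
      m, n = 40, 60
      wavelengths = np.linspace(0.4, 2.0, m)[:, None]
      radii = np.linspace(0.05, 5.0, n)[None, :]
      A = np.exp(-((wavelengths - radii) ** 2))

      f_true = np.exp(-0.5 * ((radii.ravel() - 1.5) / 0.4) ** 2)     # synthetic monomodal ASD
      b = A @ f_true + rng.normal(0, 1e-3, m)                        # noisy "measurements"

      # LSQR with mild regularization via the damping parameter.
      f_est = lsqr(A, b, damp=1e-3)[0]
      print("relative error:", np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))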

  15. Metacognition: computation, biology and function

    PubMed Central

    Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.

    2012-01-01

    Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746

  16. Metacognition: computation, biology and function.

    PubMed

    Fleming, Stephen M; Dolan, Raymond J; Frith, Christopher D

    2012-05-19

    Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746

  17. Non-parametric reconstruction of an inflaton potential from Einstein-Cartan-Sciama-Kibble gravity with particle production

    NASA Astrophysics Data System (ADS)

    Desai, Shantanu; Popławski, Nikodem J.

    2016-04-01

    The coupling between spin and torsion in the Einstein-Cartan-Sciama-Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about ∼10⁻⁴² s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.

  18. Non-parametric reconstruction of an inflaton potential from Einstein-Cartan-Sciama-Kibble gravity with particle production

    NASA Astrophysics Data System (ADS)

    Desai, Shantanu; Popławski, Nikodem J.

    2016-04-01

    The coupling between spin and torsion in the Einstein-Cartan-Sciama-Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about ∼10⁻⁴² s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.

  19. Comparisons of parametric and non-parametric classification rules for e-nose and e-tongue

    NASA Astrophysics Data System (ADS)

    Mahat, Nor Idayu; Zakaria, Ammar; Shakaff, Ali Yeon Md

    2015-12-01

    This paper evaluates the performance of parametric and non-parametric classification rules in sensor technology. The growth of sensor technologies such as the e-nose and e-tongue has urged engineers to equip themselves with the most recent and advanced statistical approaches. As data collected from the e-nose and e-tongue face some complexities, data pre-processing and transformation are often performed prior to classification. This paper discusses comparisons made on some known parametric and non-parametric classification rules in the application of classifying e-nose and e-tongue data. The comparisons, which are based on leave-one-out accuracy, sensitivity and specificity, show that non-parametric approaches, especially the k-nearest neighbour, are not much distorted by changes of distribution, whereas Naïve Bayes is greatly influenced by the structure of the data.
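
    A minimal sketch of comparing a non-parametric rule (k-NN) with a parametric one (Gaussian Naive Bayes) by leave-one-out accuracy on synthetic sensor-array features; the data and settings are placeholders, not the e-nose/e-tongue measurements.

      import numpy as np
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(13)

      # Synthetic sensor-array responses for two sample classes (placeholder features).
      X = np.vstack([rng.normal(0.0, 1.0, (40, 8)), rng.normal(1.0, 1.5, (40, 8))])
      y = np.repeat([0, 1], 40)

      loo = LeaveOneOut()
      for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=3)),
                        ("Naive Bayes", GaussianNB())]:
          acc = cross_val_score(clf, X, y, cv=loo).mean()
          print(f"{name:12s} leave-one-out accuracy = {acc:.3f}")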

  20. Super-resolution image reconstruction using non-parametric Bayesian INLA approximation.

    PubMed

    Camponez, Marcelo Oliveira; Salles, Evandro Ottoni Teatini; Sarcinelli-Filho, Mário

    2012-08-01

    Super-resolution refers to techniques that enhance the resolution of an image, without changing the camera resolution, through software algorithms. In this context, this paper proposes a fully automatic super-resolution algorithm, using a recent non-parametric Bayesian inference method based on numerical integration, known in the statistical literature as the Integrated Nested Laplace Approximation. By applying such an inference method to the super-resolution problem, this paper shows that all the equations needed to implement this technique can be written in closed form. Moreover, the results of several simulations (three of which are presented here) show that the proposed algorithm performs better than other recently proposed super-resolution algorithms. As far as the authors know, this is the first time that the Integrated Nested Laplace Approximation has been used in the area of image processing, which is a meaningful contribution of this paper. PMID:22562764

  1. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent

    PubMed Central

    Browning, Sharon R.; Browning, Brian L.

    2015-01-01

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365

  2. Developing two non-parametric performance models for higher learning institutions

    NASA Astrophysics Data System (ADS)

    Kasim, Maznah Mat; Kashim, Rosmaini; Rahim, Rahela Abdul; Khan, Sahubar Ali Muhamed Nadhar

    2016-08-01

    Measuring the performance of higher learning institutions (HLIs) is a must for these institutions to improve their excellence. This paper focuses on the formation of two performance models, an efficiency model and an effectiveness model, by utilizing a non-parametric method, Data Envelopment Analysis (DEA). The proposed models are validated by measuring the performance of 16 public universities in Malaysia for the year 2008. However, since data for one of the variables were unavailable, an estimate was used as a proxy to represent the real data. The results show that the average efficiency and effectiveness scores were 0.817 and 0.900 respectively, while six universities were fully efficient and eight universities were fully effective. A total of six universities were both efficient and effective. It is suggested that the two proposed performance models could work as complementary methods to the existing performance appraisal method, or as alternative methods, in monitoring the performance of HLIs, especially in Malaysia.
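
    A minimal sketch of an input-oriented CCR DEA efficiency score solved as a linear program with SciPy; the university inputs and outputs are toy numbers, and this is only one standard DEA formulation, not necessarily the models developed in the paper.

      import numpy as np
      from scipy.optimize import linprog

      # Toy data: 5 universities, 2 inputs (staff, expenditure) and 2 outputs (graduates, papers).
      X = np.array([[60, 5.0], [80, 7.5], [50, 4.0], [90, 9.0], [70, 6.0]], float)       # inputs
      Y = np.array([[1000, 120], [1100, 150], [900, 100], [1200, 130], [1150, 160]], float)  # outputs
      n = X.shape[0]

      def ccr_efficiency(o):
          """Input-oriented CCR efficiency of unit o: minimise theta over [theta, lambda_1..n]."""
          c = np.r_[1.0, np.zeros(n)]
          A_in = np.c_[-X[o], X.T]                     # sum_j lambda_j * x_ij <= theta * x_io
          A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]    # sum_j lambda_j * y_rj >= y_ro
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
          return res.x[0]

      for o in range(n):
          print(f"university {o + 1}: efficiency = {ccr_efficiency(o):.3f}")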

  3. Assessing T cell clonal size distribution: a non-parametric approach.

    PubMed

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    The clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next-generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. The exponent proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity. PMID:25275470

  4. Non-parametric analysis of LANDSAT maps using neural nets and parallel computers

    NASA Technical Reports Server (NTRS)

    Salu, Yehuda; Tilton, James

    1991-01-01

    Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of ground-pixel images obtained by the LANDSAT satellite. Their performance is evaluated by comparing classifications of a scene in the vicinity of Washington, DC. The problem of optimal selection of categories is addressed as a step in the classification process.
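    The Binary Diamond network itself is not described in enough detail here to reproduce, but the nearest-neighbour half of such a pixel classification is standard; below is a minimal sketch using scikit-learn on synthetic "spectral band" features (all values are made up for illustration).

```python
# Minimal sketch of a nearest-neighbour pixel classifier on synthetic band data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 3000, 7, 5            # LANDSAT-like band count, illustrative only
centres = rng.uniform(0, 255, size=(n_classes, n_bands))
labels = rng.integers(0, n_classes, size=n_pixels)
pixels = centres[labels] + rng.normal(0, 20, size=(n_pixels, n_bands))

X_tr, X_te, y_tr, y_te = train_test_split(pixels, labels, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```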

  5. Piezoelectric sensing and non-parametric statistical signal processing for health monitoring of hysteretic dampers used in seismic-resistant structures

    NASA Astrophysics Data System (ADS)

    Gallego, A.; Benavent-Climent, A.; Romo-Melo, L.

    2015-08-01

    The paper proposes a new application of non-parametric statistical processing of signals recorded from vibration tests for damage detection and evaluation on I-section steel segments. The steel segments investigated constitute the energy dissipating part of a new type of hysteretic damper that is used for passive control of buildings and civil engineering structures subjected to earthquake-type dynamic loadings. Two I-section steel segments with different levels of damage were instrumented with piezoceramic sensors and subjected to controlled white noise random vibrations. The signals recorded during the tests were processed using two non-parametric methods (the power spectral density method and the frequency response function method) that had never previously been applied to hysteretic dampers. The appropriateness of these methods for quantifying the level of damage on the I-shape steel segments is validated experimentally. Based on the results of the random vibrations, the paper proposes a new index that predicts the level of damage and the proximity of failure of the hysteretic damper.
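    As a rough illustration of the power-spectral-density side of such an analysis, the sketch below compares Welch PSDs of two synthetic vibration records (a "reference" state and a "damaged" state with a shifted resonance). The damage index proposed in the paper is not reproduced; the summed absolute log-ratio of the spectra is only an illustrative stand-in, and all signal parameters are assumptions.

```python
# Minimal sketch: Welch power spectral densities of a reference and a "damaged" vibration record.
import numpy as np
from scipy import signal

fs = 2000.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

def response(f0):
    # crude single-mode response to white-noise excitation: resonance at f0 plus measurement noise
    b, a = signal.iirpeak(f0, Q=30, fs=fs)
    return signal.lfilter(b, a, rng.normal(size=t.size)) + 0.05 * rng.normal(size=t.size)

x_ref, x_dam = response(120.0), response(112.0)        # damage lowers the resonant frequency
f, Pxx_ref = signal.welch(x_ref, fs=fs, nperseg=4096)
_, Pxx_dam = signal.welch(x_dam, fs=fs, nperseg=4096)
index = np.sum(np.abs(np.log(Pxx_dam / Pxx_ref)))      # grows as the two spectra drift apart
print(f"illustrative PSD-change index: {index:.1f}")
```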

  6. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  7. Decision fusion and non-parametric classifiers for land use mapping using multi-temporal RapidEye data

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Conrad, Christopher; Michel, Ulrich

    2015-10-01

    This study addressed the classification of multi-temporal satellite data from RapidEye by considering different classifier algorithms and decision fusion. Four non-parametric classifier algorithms, decision tree (DT), random forest (RF), support vector machine (SVM), and multilayer perceptron (MLP), were applied to map crop types in various irrigated landscapes in Central Asia. A novel decision fusion strategy to combine the outputs of the classifiers was proposed. This approach is based on randomly selecting subsets of the input dataset and aggregating the probabilistic outputs of the base classifiers with another meta-classifier. During the decision fusion, the reliability of each base classifier algorithm was considered to exclude less reliable inputs at the class-basis. The spatial and temporal transferability of the classifiers was evaluated using data sets from four different agricultural landscapes with different spatial extents and from different years. A detailed accuracy assessment showed that none of the stand-alone classifiers was the single best performing. Despite the very good performance of the base classifiers, there was still up to 50% disagreement in the maps produced by the two single best classifiers, RF and SVM. The proposed fusion strategy, however, increased overall accuracies up to 6%. In addition, it was less sensitive to reduced training set sizes and produced more realistic land use maps with less speckle. The proposed fusion approach was better transferable to data sets from other years, i.e. resulted in higher accuracies for the investigated classes. The fusion approach is computationally efficient and appears well suited for mapping diverse crop categories based on sensors with a similar high repetition rate and spatial resolution like RapidEye, for instance the upcoming Sentinel-2 mission.
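    A heavily simplified sketch of the core idea, combining the probabilistic outputs of several base classifiers with a meta-classifier, is given below using scikit-learn's stacking interface. The paper's reliability weighting, class-wise exclusion of base classifiers, and random subset selection are not reproduced, and the data are synthetic rather than RapidEye imagery.

```python
# Minimal sketch of decision fusion via stacking of probabilistic classifier outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = [("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0))]
fusion = StackingClassifier(estimators=base, final_estimator=LogisticRegression(max_iter=1000),
                            stack_method="predict_proba", cv=5)
fusion.fit(X_tr, y_tr)
print("single RF:", base[0][1].fit(X_tr, y_tr).score(X_te, y_te))
print("fused    :", fusion.score(X_te, y_te))
```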

  8. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  9. Non-parametric photic entrainment of Djungarian hamsters with different rhythmic phenotypes.

    PubMed

    Schöttner, Konrad; Hauer, Jane; Weinert, Dietmar

    2016-01-01

    To investigate the role of non-parametric light effects in entrainment, Djungarian hamsters of two different circadian phenotypes were exposed to skeleton photoperiods, or to light pulses at different circadian times, to compile phase response curves (PRCs). Wild-type (WT) hamsters show daily rhythms of locomotor activity in accord with the ambient light/dark conditions, with activity onset and offset strongly coupled to light-off and light-on, respectively. Hamsters of the delayed activity onset (DAO) phenotype, in contrast, progressively delay their activity onset, whereas activity offset remains coupled to light-on. The present study was performed to better understand the underlying mechanisms of this phenomenon. Hamsters of DAO and WT phenotypes were kept first under standard housing conditions with a 14:10 h light-dark cycle, and then exposed to skeleton photoperiods (one or two 15-min light pulses of 100 lx at the times of the former light-dark and/or dark-light transitions). In a second experiment, hamsters of both phenotypes were transferred to constant darkness and allowed to free-run until the lengths of the active (α) and resting (ρ) periods were equal (α:ρ = 1). At this point, animals were then exposed to light pulses (100 lx, 15 min) at different circadian times (CTs). Phase and period changes were estimated separately for activity onset and offset. When exposed to skeleton-photoperiods with one or two light pulses, the daily activity patterns of DAO and WT hamsters were similar to those obtained under conditions of a complete 14:10 h light-dark cycle. However, in the case of giving only one light pulse at the time of the former light-dark transition, animals temporarily free-ran until activity offset coincided with the light pulse. These results show that photic entrainment of the circadian activity rhythm is attained primarily via non-parametric mechanisms, with the "morning" light pulse being the essential cue. In the second experiment, typical

  10. Two non-parametric methods for derivation of constraints from radiotherapy dose-histogram data

    NASA Astrophysics Data System (ADS)

    Ebert, M. A.; Gulliford, S. L.; Buettner, F.; Foo, K.; Haworth, A.; Kennedy, A.; Joseph, D. J.; Denham, J. W.

    2014-07-01

    Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose-histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization.
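    A minimal sketch of the ROC-based cut-point idea follows: for a single dose-histogram parameter (for example, the percentage volume above some dose level) and a binary complication outcome, the cut-point maximising Youden's J is read off the ROC curve. The free step-down resampling correction for multiple testing described above is not reproduced, and the data are synthetic.

```python
# Minimal sketch: ROC-derived cut-point for a dose-histogram parameter vs. a toxicity outcome.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
n = 400
volume = rng.uniform(0, 60, n)                         # % of organ volume above some dose level
p_tox = 1 / (1 + np.exp(-(volume - 35) / 5))           # toxicity more likely at high volume
toxicity = rng.random(n) < p_tox

fpr, tpr, thresholds = roc_curve(toxicity, volume)
best = np.argmax(tpr - fpr)                            # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(toxicity, volume):.2f}, "
      f"suggested cut-point ≈ {thresholds[best]:.1f} % volume")
```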

  11. Catchment compatibility via copulas: A non-parametric study of the dependence structures of hydrological responses

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Salvadori, G.; De Michele, C.

    2016-04-01

    The similarity of catchment responses is a fundamental issue for regionalization studies, and hydrograph attributes (i.e., Discharge Peak, Volume, and Duration) can reveal the signature and the synthesis of local scale processes. Here, we focus the attention on the "compatibility" between catchments, viz. on the possibility to transfer, from one catchment to another, the information about the dependence structures at play. In particular, we statistically investigate the possible relationships between the features of different Basin Scenarios (characterized via the Concentration Time Tc and the Curve Number CN) and the corresponding dependence structures ruling the joint statistics of Discharge, Volume, and Duration. Given a large set of synthetic runoff time series, generated via a rainfall-runoff model, recent non-parametric tests, based on empirical copulas, are used to compare the dependence structures associated with different soil uses and concentration times. The results indicate how the hydrological properties may affect the dependence structure. The outcomes of the investigation could be particularly effective in two practical applications: (1) for determining the degree of compatibility of the dependence structures associated with different basin scenarios, and (2) for enriching scanty data bases, in order to improve the estimation of multivariate copulas.

  12. Contingency severity assessment for voltage security using non-parametric regression techniques

    SciTech Connect

    Wehenkel, L.

    1996-02-01

    This paper proposes a novel approach to voltage security assessment exploiting non-parametric regression techniques to extract simple and at the same time reliable models of the severity of a contingency, defined as the difference between pre- and post-contingency load power margins. The regression techniques extract information from large sets of possible operating conditions of a power system screened off-line via massive random sampling, whose voltage security with respect to contingencies is pre-analyzed using an efficient voltage stability simulation. In particular, regression trees are used to identify the most salient parameters of the pre-contingency topology and electrical state which influence the severity of a given contingency, and to provide a first-guess transparent approximation of the contingency severity in terms of these latter parameters. Multi-layer perceptrons are exploited to further refine this information. The approach is demonstrated on a realistic model of a large-scale voltage-stability-limited system, where it is shown to provide valuable physical insight and reliable contingency evaluation. Various potential uses in power system planning and operation are discussed.

  13. Water quality analysis in rivers with non-parametric probability distributions and fuzzy inference systems: application to the Cauca River, Colombia.

    PubMed

    Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L

    2013-02-01

    The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such a complex evaluation process. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, with the membership of water quality in the various output fuzzy sets or classes expressed through percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. PMID:23266912
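    A minimal sketch of the Monte Carlo layer described above is given below: non-parametric (kernel) densities are fitted to monitoring records of each variable, resampled, and pushed through an aggregation step. The fuzzy inference system of the paper is not reproduced; a plain weighted average of normalised sub-scores stands in for it, and all monitoring values are synthetic.

```python
# Minimal sketch: kernel-density resampling of water quality variables through a toy index.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
monitoring = {                                     # pretend monthly monitoring records
    "DO_mg_per_L": rng.normal(6.5, 1.2, 48).clip(0, 12),
    "BOD_mg_per_L": rng.lognormal(1.2, 0.5, 48),
}
kdes = {k: gaussian_kde(v) for k, v in monitoring.items()}

n_sim = 10_000
do = kdes["DO_mg_per_L"].resample(n_sim)[0]
bod = kdes["BOD_mg_per_L"].resample(n_sim)[0]
# toy quality index in [0, 1]; the paper's fuzzy inference system would go here instead
score = 0.5 * np.clip(do / 8.0, 0, 1) + 0.5 * np.clip(1 - bod / 10.0, 0, 1)
print("median index:", np.median(score), " 5th-95th percentiles:", np.percentile(score, [5, 95]))
```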

  14. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    PubMed Central

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows for the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be solely explained by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA, and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-TWA-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and was evaluated on three public databases: the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
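    The reshuffling idea is simple to illustrate: the observed even/odd-beat amplitude difference is compared against the distribution of the same statistic over randomly permuted beat orderings, giving a surrogate p-value with no assumption about the noise distribution. The sketch below is a toy single-lead, single-window illustration with made-up amplitudes, not the full multi-lead procedure described above.

```python
# Minimal sketch of a reshuffling-based surrogate significance test for beat-to-beat alternans.
import numpy as np

rng = np.random.default_rng(4)
n_beats = 128
twa_uV = 12.0                                             # alternans amplitude buried in noise
amps = 500 + rng.normal(0, 30, n_beats) + twa_uV / 2 * (-1) ** np.arange(n_beats)

def alt_stat(x):
    return abs(x[::2].mean() - x[1::2].mean())            # even-beat vs odd-beat mean amplitude

observed = alt_stat(amps)
surrogates = np.array([alt_stat(rng.permutation(amps)) for _ in range(2000)])
p_value = (1 + np.sum(surrogates >= observed)) / (1 + surrogates.size)
print(f"observed = {observed:.1f} uV, surrogate p-value = {p_value:.4f}")
```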

  15. Parametric vs. non-parametric daily weather generator: validation and comparison

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin

    2016-04-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of the duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series
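    As a rough illustration of the parametric building block mentioned above, the sketch below fits a first-order, two-state Markov chain for daily precipitation occurrence and a Gamma distribution for wet-day amounts, then simulates a synthetic year. This is not the M&Rfi generator itself, only the textbook idea it builds on; the "observed" record is a synthetic stand-in.

```python
# Minimal sketch: Markov-chain precipitation occurrence plus Gamma-distributed wet-day amounts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# stand-in observed record: wet/dry flags and wet-day amounts (mm)
wet_obs = rng.random(3650) < 0.35
amounts_obs = stats.gamma(a=0.8, scale=8.0).rvs(size=wet_obs.sum(), random_state=rng)

# transition probabilities P(wet today | dry yesterday) and P(wet today | wet yesterday)
prev, curr = wet_obs[:-1], wet_obs[1:]
p01 = curr[~prev].mean()
p11 = curr[prev].mean()
a, loc, scale = stats.gamma.fit(amounts_obs, floc=0)      # Gamma fit to wet-day amounts

# simulate one synthetic year
sim = np.zeros(365)
wet = False
for d in range(365):
    wet = rng.random() < (p11 if wet else p01)
    if wet:
        sim[d] = stats.gamma(a=a, loc=loc, scale=scale).rvs(random_state=rng)
print(f"p01={p01:.2f}, p11={p11:.2f}, simulated annual total = {sim.sum():.0f} mm")
```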

  16. Non-parametric bootstrapping method for measuring the temporal discrimination threshold for movement disorders

    NASA Astrophysics Data System (ADS)

    Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.

    2015-08-01

    Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
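    A minimal sketch of the described analysis on synthetic data follows: per-trial "two stimuli" responses at several inter-stimulus intervals are fitted with a cumulative Gaussian, and a non-parametric bootstrap over trials gives 95% confidence intervals on the PSE (mean) and JND (standard deviation). The interval levels, trial counts and true parameter values below are assumptions for illustration only.

```python
# Minimal sketch: cumulative-Gaussian fit of a temporal discrimination task plus bootstrap CIs.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(6)
isi = np.repeat(np.arange(0, 105, 15), 20)                 # inter-stimulus intervals (ms), 20 trials each
true_pse, true_jnd = 45.0, 18.0
resp = rng.random(isi.size) < norm.cdf(isi, true_pse, true_jnd)   # 1 = "two stimuli" response

def fit(isi_vals, responses):
    # aggregate to proportions per interval, then fit a cumulative Gaussian (PSE, JND)
    levels = np.unique(isi_vals)
    props = np.array([responses[isi_vals == l].mean() for l in levels])
    popt, _ = curve_fit(norm.cdf, levels, props, p0=[50.0, 20.0])
    return popt

pse, jnd = fit(isi, resp)
boot = np.array([fit(isi[idx], resp[idx])
                 for idx in (rng.integers(0, isi.size, isi.size) for _ in range(1000))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"PSE = {pse:.1f} ms (95% CI {lo[0]:.1f}-{hi[0]:.1f}), "
      f"JND = {jnd:.1f} ms (95% CI {lo[1]:.1f}-{hi[1]:.1f})")
```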

  17. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods

    USGS Publications Warehouse

    Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
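    A minimal sketch of the seasonal (monthly) Kendall test idea on synthetic data is shown below: the Mann-Kendall S statistic is computed within each month and summed across months, with a normal approximation for the p-value. The flow adjustment, serial-correlation handling and tie corrections used by SEAKEN and QWTREND are deliberately omitted, and the concentration series are fabricated for illustration.

```python
# Minimal sketch: seasonal Kendall trend test (no tie or flow-adjustment corrections).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
years = np.arange(1976, 2004)
# synthetic monthly concentration series with a weak downward trend
conc = {m: 50 - 0.4 * (years - years[0]) + rng.normal(0, 5, years.size) for m in range(12)}

S, varS = 0.0, 0.0
for m in range(12):
    x = conc[m]
    n = x.size
    S += sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    varS += n * (n - 1) * (2 * n + 5) / 18.0       # no-ties variance for one season

z = (S - np.sign(S)) / np.sqrt(varS)               # continuity-corrected test statistic
print(f"seasonal Kendall S = {S:.0f}, z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
```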

  18. Revisiting the Distance Duality Relation using a non-parametric regression method

    NASA Astrophysics Data System (ADS)

    Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha

    2016-07-01

    The interdependence of luminosity distance, D_L, and angular diameter distance, D_A, given by the distance duality relation (DDR) is very significant in observational cosmology. It is very closely tied to the temperature-redshift relation of the Cosmic Microwave Background (CMB) radiation. Any deviation from η(z) ≡ D_L/[D_A (1+z)^2] = 1 indicates a possible emergence of new physics. Our aim in this work is to check the consistency of these relations using a non-parametric regression method, namely LOESS with SIMEX. This technique avoids dependency on the cosmological model and works with a minimal set of assumptions. Further, to analyze the efficiency of the methodology, we simulate a dataset of 020 points of η(z) data based on a phenomenological model η(z) = (1+z)^ε. The error on the simulated data points is obtained by using the temperature of the CMB radiation at various redshifts. For testing the distance duality relation, we use the JLA SNe Ia data for luminosity distances, while the angular diameter distances are obtained from radio galaxy datasets. Since the DDR is linked with the CMB temperature-redshift relation, we also use the CMB temperature data to reconstruct η(z). It is important to note that with CMB data we are able to study the evolution of the DDR up to a very high redshift, z = 2.418. In this analysis, we find no evidence of deviation from η = 1 within the 1σ region in the entire redshift range used in this analysis (0 < z ≤ 2.418).
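    As a minimal sketch of the LOESS half of the method, the code below uses a local regression to reconstruct η(z) from noisy simulated points without assuming a cosmological model. The SIMEX step, which corrects for measurement error, is not reproduced, and the simulated data (120 points, ε = 0.02, 5% scatter) are assumptions rather than the paper's dataset.

```python
# Minimal sketch: LOESS (lowess) reconstruction of eta(z) from noisy simulated points.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(8)
z = np.sort(rng.uniform(0.0, 2.4, 120))
eta_true = (1 + z) ** 0.02                       # phenomenological eta(z) = (1+z)^epsilon
eta_obs = eta_true + rng.normal(0, 0.05, z.size)

smooth = lowess(eta_obs, z, frac=0.4)            # columns: z, reconstructed eta(z)
dev = np.max(np.abs(smooth[:, 1] - 1.0))
print(f"max |eta - 1| along the reconstruction: {dev:.3f}")
```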

  19. Parametric and non-parametric species delimitation methods result in the recognition of two new Neotropical woody bamboo species.

    PubMed

    Ruiz-Sanchez, Eduardo

    2015-12-01

    The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to estimate divergence times in Otatea. The results for divergence time in Otatea estimated the origin of the speciation events from the Late Miocene to Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified that the two populations of O. acuminata from Chiapas and Hidalgo are from two separate evolutionary lineages and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited the gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata. PMID:26265258

  20. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    PubMed

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems. PMID:25953609
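    A minimal sketch of deriving an HC5 from a kernel-density SSD follows: acute toxicity values are log-transformed, a Gaussian KDE is fitted, and the HC5 is read off as the 5th percentile of the fitted distribution. The toxicity values are illustrative, not the datasets used in the paper, and the default Scott bandwidth stands in for the optimised bandwidth discussed above.

```python
# Minimal sketch: kernel-density species sensitivity distribution and HC5.
import numpy as np
from scipy.stats import gaussian_kde

toxicity_ug_per_L = np.array([45, 120, 310, 560, 890, 1500, 2300, 4100,
                              6800, 9500, 15000, 22000, 41000, 68000], dtype=float)
log_tox = np.log10(toxicity_ug_per_L)

kde = gaussian_kde(log_tox)                        # default (Scott) bandwidth
grid = np.linspace(log_tox.min() - 1, log_tox.max() + 1, 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                                     # numerical CDF of the fitted SSD
hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]
print(f"HC5 ≈ {hc5:.0f} µg/L")
```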

  1. On computation of Hough functions

    NASA Astrophysics Data System (ADS)

    Wang, Houjun; Boyd, John P.; Akmaev, Rashid A.

    2016-04-01

    Hough functions are the eigenfunctions of the Laplace tidal equation governing fluid motion on a rotating sphere with a resting basic state. Several numerical methods have been used in the past. In this paper, we compare two of those methods: normalized associated Legendre polynomial expansion and Chebyshev collocation. Neither method is widely used, but both have some advantages over the commonly used unnormalized associated Legendre polynomial expansion method. Comparable results are obtained using both methods. For the first method we note some details on the numerical implementation. The Chebyshev collocation method was first used for the Laplace tidal problem by Boyd (1976) and is relatively easy to use. A compact MATLAB code is provided for this method. We also illustrate the importance and effect of including a parity factor in Chebyshev polynomial expansions for modes with odd zonal wave numbers.

  2. [Non-Parametric Analysis of Radiation Risks of Mortality among Chernobyl Clean-Up Workers].

    PubMed

    Gorsky, A I; Maksioutov, M A; Tumanov, K A; Shchukina, N V; Chekin, S Yu; Ivanov, V K

    2016-01-01

    An analysis of the relationship between dose and mortality from cancer and circulation diseases in the cohort of Chernobyl clean-up workers, based on data from the National Radiation and Epidemiological Registry, was performed. Medical and dosimetry information on male clean-up workers who received radiation doses between April 26, 1986 and April 26, 1987, accumulated in the registry from 1992 to 2012, was used for the analysis. The total size of the cohort was 42929 people; 12731 deaths were registered in the cohort, among them 1893 deaths from solid cancers and 5230 from circulation diseases. The average age of the workers was 39 years in 1992 and the mean dose was 164 mGy. The dose-effect relationship was estimated with the use of non-parametric analysis of survival with regard to concurrent risks of mortality. The risks were estimated in 6 dose groups of similar size (1-70, 70-130, 130-190, 190-210, 210-230 and 230-1000 mGy). The group "1-70 mGy" was used as control. The estimated dose-effect relationship for cancers and circulation diseases is described approximately by a linear model; the coefficient of determination (the proportion of variability explained by the linear model) for cancers was 23-25% and for circulation diseases 2-13%. The slope coefficient of the dose-effect relationship normalized to 1 Gy for the ratio of risks for cancers in the linear model was 0.47 (95% CI: -0.77, 1.71), and for circulation diseases it was 0.22 (95% CI: -0.58, 1.02). The risk coefficient (slope coefficient of excess mortality at a dose of 1 Gy) for solid cancers was 1.94 (95% CI: -3.10, 7.00) × 10^-2 and for circulation diseases it was 0.67 (95% CI: -9.61, 11.00) × 10^-2. 137 deaths from radiation-induced cancers and 47 deaths from circulation diseases were registered during the follow-up period. PMID:27534064

  3. Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.

    PubMed

    Singh, Sanjeet

    2016-08-01

    The Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilized their resources and generated outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns to scale and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both the technical and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically (managerially) efficient. It has been found that for some states it is necessary to alter the scheme size to perform at par with the best performing states. For inefficient states, optimal input and output targets, along with the resource savings and output gains, are calculated. The analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total amount of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of the best performing states, inefficient states on average need to enhance

  4. Validation of two (parametric vs non-parametric) daily weather generators

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Skalak, P.

    2015-12-01

    As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a 1st-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of the duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series

  5. The Impact of Changing Snowmelt Timing on Non-Irrigated Crop Yield: A Parametric and Non-Parametric Approach

    NASA Astrophysics Data System (ADS)

    Murray, E. M.; Cobourn, K.; Flores, A. N.; Pierce, J. L.

    2014-12-01

    As climate changes, the final date of spring snowmelt is projected to occur earlier in the year within the western United States. This earlier snowmelt timing may impact crop yield in snow-dominated watersheds by changing the timing of water delivery to agricultural fields. There is considerable uncertainty about how agricultural impacts of snowmelt timing may vary by region, crop-type, and practices like irrigation vs. dryland farming. Establishing the relationship between snowmelt timing and agricultural yield is important for understanding how changes in large-scale climatic indices (like snowmelt date) may be associated with changes in agricultural yield. A better understanding of the influence of changes in snowmelt on non-irrigated crop yield may additionally be extrapolated to better understand how climate change may alter biomass production in non-managed ecosystems. We utilized parametric regression techniques to isolate the magnitude of impact snowmelt timing has had on historical crop yield independently of climate and spatial variables that also impact yield. To do this, we examined the historical relationship between snowmelt timing and non-irrigated wheat and barley yield using a multiple linear regression model to predict yield in several Idaho counties as a function of snowmelt date, climate variables (precipitation and growing degree-days), and spatial differences between counties. We utilized non-parametric techniques to determine where snowmelt timing has positively versus negatively impacted yield. To do this, we developed classification and regression trees to identify spatial controls (e.g. latitude, elevation) on the relationship between snowmelt timing and yield. Most trends suggest a decrease in crop yield with earlier snowmelt, but a significant opposite relationship is observed in some regions of Idaho. An earlier snowmelt date occurring at high latitudes corresponds with higher than average wheat yield. Therefore, Northern Idaho may

  6. Non-parametric estimation of seasonal variations in GNSS-derived time series

    NASA Astrophysics Data System (ADS)

    Gruszczynska, Marta; Bogusz, Janusz; Klos, Anna

    2015-04-01

    The seasonal variations in a GNSS station's position may arise from geophysical excitations, thermal changes combined with hydrodynamics, or various errors which, when superimposed, cause seasonal oscillations that are not entirely of real geodynamical origin but still have to be included in time series modelling. These variations, with periods ranging from the Chandler band up to quarter-annual ones, all affect the reliability of a permanent station's velocity, which in turn strictly influences the quality of kinematic reference frames. As shown before by a number of authors, the annual (dominant) sine curve has an amplitude and phase that both change in time for different reasons. In this research we focused on the determination of annual changes in GNSS-derived time series of North, East and Up components. We used the daily position changes from the PPP (Precise Point Positioning) solution obtained by JPL (Jet Propulsion Laboratory), processed in the GIPSY-OASIS software. We analyzed more than 140 globally distributed IGS stations with a minimum data length of 3 years. The longest time series were 17 years long (1996-2014). Each of the topocentric time series (North, East and Up) was divided into years (from January to December), then the observations gathered on the same days of the year were stacked and weighted medians obtained for all of them, such that each time series was represented by a matrix of size 365×n, where n is the data length. In this way we obtained the median annual signal for each of the analyzed stations, which was then decomposed into different frequency bands using wavelet decomposition with the Meyer wavelet. We assumed 7 levels of decomposition, with the annual curve as the last approximation. The signal approximations allowed us to obtain the seasonal peaks that prevail in the North, East and Up data for globally distributed stations. The analysis of annual curves, by means of non-parametric estimation

  7. Characterization and modelling of the spatially- and spectrally-varying point-spread function in hyperspectral imaging systems for computational correction of axial optical aberrations

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Spatial resolution of hyperspectral imaging systems can vary significantly due to axial optical aberrations that originate from wavelength-induced index-of-refraction variations of the imaging optics. For systems that have a broad spectral range, the spatial resolution will vary significantly both with respect to the acquisition wavelength and with respect to the spatial position within each spectral image. Variations of the spatial resolution can be effectively characterized as part of the calibration procedure by a local image-based estimation of the point-spread function (PSF) of the hyperspectral imaging system. The estimated PSF can then be used in image deconvolution methods to improve the spatial resolution of the spectral images. We estimated the PSFs from the spectral images of a line grid geometric caliber. From individual line segments of the line grid, the PSF was obtained by a non-parametric estimation procedure that used an orthogonal series representation of the PSF. By using the non-parametric estimation procedure, the PSFs were estimated at different spatial positions and at different wavelengths. The variations of the spatial resolution were characterized by the radius and the full-width at half-maximum of each PSF and by the modulation transfer function, computed from images of a USAF1951 resolution target. The estimation and characterization of the PSFs and the image-deconvolution-based spatial resolution enhancement were tested on images obtained by a hyperspectral imaging system with an acousto-optic tunable filter in the visible spectral range. The results demonstrate that the spatial resolution of the acquired spectral images can be significantly improved using the estimated PSFs and image deconvolution methods.

  8. On computing special functions in marine engineering

    NASA Astrophysics Data System (ADS)

    Constantinescu, E.; Bogdan, M.

    2015-11-01

    Important modeling applications in marine engineering lead us to a special class of solutions of difficult differential equations with variable coefficients. In order to be able to solve and implement such models (in wave theory, in acoustics, in hydrodynamics, in electromagnetic waves, but also in many other engineering fields), it is necessary to compute so-called special functions: Bessel functions, modified Bessel functions, spherical Bessel functions, Hankel functions. The aim of this paper is to develop numerical solutions in Matlab for the above-mentioned special functions. Taking into account the main properties of Bessel and modified Bessel functions, we briefly present analytical solutions (where possible) in the form of series. In particular, the behavior of these special functions is studied using Matlab facilities: numerical solutions and plotting. Finally, the behavior of the special functions is compared and other directions for investigating the properties of Bessel and spherical Bessel functions are pointed out. The asymptotic forms of Bessel functions and modified Bessel functions allow determination of important properties of these functions. The modified Bessel functions tend to look more like decaying and growing exponentials.
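    The paper works in Matlab; as a stand-in, the same special functions are readily available in Python through scipy.special. The sketch below simply tabulates a few values and illustrates the exponential growth and decay of the modified Bessel functions noted above; it is not the paper's code.

```python
# Minimal sketch: evaluating Bessel, modified Bessel, spherical Bessel and Hankel functions.
import numpy as np
from scipy import special

x = np.array([0.5, 1.0, 5.0, 10.0])
print("J0(x)   :", special.jv(0, x))             # Bessel function of the first kind
print("I0(x)   :", special.iv(0, x))             # modified Bessel, grows roughly like e^x / sqrt(2*pi*x)
print("K0(x)   :", special.kv(0, x))             # modified Bessel, decays roughly like e^-x
print("j1(x)   :", special.spherical_jn(1, x))   # spherical Bessel function
print("H1(1)(x):", special.hankel1(1, x))        # Hankel function of the first kind

# asymptotic check for I0 at large argument
x_large = 50.0
print(special.iv(0, x_large), np.exp(x_large) / np.sqrt(2 * np.pi * x_large))
```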

  9. Computer Games Functioning as Motivation Stimulants

    ERIC Educational Resources Information Center

    Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh

    2011-01-01

    Numerous scholars have recommended computer games can function as influential motivation stimulants of English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…

  10. Deterministic Function Computation with Chemical Reaction Networks*

    PubMed Central

    Chen, Ho-Lin; Doty, David; Soloveichik, David

    2013-01-01

    Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x_1, …, x_k molecules of some “input” species X_1, …, X_k, the CRN is guaranteed to converge to having f(x_1, …, x_k) molecules of the “output” species Y_1, …, Y_l. We show that a function f : ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ‖x‖_1). PMID:25383068

  11. The emerging discipline of Computational Functional Anatomy

    PubMed Central

    Miller, Michael I.; Qiu, Anqi

    2010-01-01

    Computational Functional Anatomy (CFA) is the study of functional and physiological response variables in anatomical coordinates. For this we focus on two things: (i) the construction of bijections (via diffeomorphisms) between the coordinatized manifolds of human anatomy, and (ii) the transfer (group action and parallel transport) of functional information into anatomical atlases via these bijections. We review advances in the unification of the bijective comparison of anatomical submanifolds via point-sets including points, curves and surface triangulations as well as dense imagery. We examine the transfer via these bijections of functional response variables into anatomical coordinates via group action on scalars and matrices in DTI as well as parallel transport of metric information across multiple templates which preserves the inner product. PMID:19103297

  12. New Computer Simulations of Macular Neural Functioning

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.

    1994-01-01

    We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.

  13. Efficient computation of Wigner-Eisenbud functions

    NASA Astrophysics Data System (ADS)

    Raffah, Bahaaudin M.; Abbott, Paul C.

    2013-06-01

    The R-matrix method, introduced by Wigner and Eisenbud (1947) [1], has been applied to a broad range of electron transport problems in nanoscale quantum devices. With the rapid increase in the development and modeling of nanodevices, efficient, accurate, and general computation of Wigner-Eisenbud functions is required. This paper presents the Mathematica package WignerEisenbud, which uses the Fourier discrete cosine transform to compute the Wigner-Eisenbud functions in dimensionless units for an arbitrary potential in one dimension, and two dimensions in cylindrical coordinates. Program summary. Program title: WignerEisenbud. Catalogue identifier: AEOU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Distribution format: tar.gz. Programming language: Mathematica. Operating system: Any platform supporting Mathematica 7.0 and above. Keywords: Wigner-Eisenbud functions, discrete cosine transform (DCT), cylindrical nanowires. Classification: 7.3, 7.9, 4.6, 5. Nature of problem: Computing the 1D and 2D Wigner-Eisenbud functions for arbitrary potentials using the DCT. Solution method: The R-matrix method is applied to the physical problem. Separation of variables is used for eigenfunction expansion of the 2D Wigner-Eisenbud functions. Eigenfunction computation is performed using the DCT to convert the Schrödinger equation with Neumann boundary conditions to a generalized matrix eigenproblem. Limitations: Restricted to uniform (rectangular grid) sampling of the potential. In 1D the number of sample points, n, results in matrix computations involving n×n matrices. Unusual features: Eigenfunction expansion using the DCT is fast and accurate. Users can specify scattering potentials using functions, or interactively using mouse input. Use of dimensionless units permits application to a

  14. Neutron monitor yield function: New improved computations

    NASA Astrophysics Data System (ADS)

    Mishev, A. L.; Usoskin, I. G.; Kovaltsov, G. A.

    2013-06-01

    A ground-based neutron monitor (NM) is a standard tool to measure cosmic ray (CR) variability near Earth, and it is crucially important to know its yield function for primary CRs. Although there are several earlier theoretically calculated yield functions, none of them agrees with experimental data from latitude surveys of sea-level NMs, thus suggesting an inconsistency. A newly computed yield function of the standard sea-level 6NM64 NM is presented here separately for primary CR protons and α-particles, the latter representing also heavier species of CRs. The computations have been done using the GEANT-4 PLANETOCOSMICS Monte-Carlo tool and a realistic curved atmospheric model. For the first time, an effect of the geometrical correction of the NM effective area, related to the finite lateral expansion of the CR-induced atmospheric cascade, is considered, which was neglected in previous studies. This correction slightly enhances the relative impact of higher-energy CRs (energy above 5-10 GeV/nucleon) on the NM count rate. The new computation finally resolves the long-standing problem of disagreement between the theoretically calculated spatial variability of CRs over the globe and experimental latitude surveys. The newly calculated yield function, corrected for this geometrical factor, appears fully consistent with the experimental latitude surveys of NMs performed during three consecutive solar minima in 1976-1977, 1986-1987, and 1996-1997. Thus, we provide a new yield function of the standard sea-level NM 6NM64 that is validated against experimental data.

  15. Computer network defense through radial wave functions

    NASA Astrophysics Data System (ADS)

    Malloy, Ian J.

    The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering, given the unknown position of a landmine. Thus, understanding a logic bomb is important and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling of radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.

  16. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to the neural network or fuzzy system-based adaptive ILC schemes and the classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second even uses a single iterative variable, provided that some bound information on the system dynamics is known. As a result, the memory space in real-time ILC implementations is greatly reduced.

  17. Incorporation of Unreliable Information Into Photogrammetric Reconstruction for Recovery of Scale Using Non-Parametric Belief Propagation

    NASA Astrophysics Data System (ADS)

    Hollick, J.; Helmholz, P.; Belton, D.

    2016-06-01

    The creation of large photogrammetric models often encounters several difficulties with regard to geometric accuracy, scale and geolocation, especially when control points are not used. Geometric accuracy can be a problem when encountering repetitive features, while scale and geolocation can be challenging in GNSS-denied or difficult-to-reach environments. Despite these challenges, scale and location are often highly desirable, even if only approximate, especially when the error bounds are known. Using non-parametric belief propagation, we propose a method of fusing different sensor types to allow robust creation of scaled models without control points. Using this technique we scale models using only the sensor data, sometimes to within 4% of their actual size, even in the presence of poor GNSS coverage.

  18. ON THE ROBUSTNESS OF z = 0-1 GALAXY SIZE MEASUREMENTS THROUGH MODEL AND NON-PARAMETRIC FITS

    SciTech Connect

    Mosleh, Moein; Franx, Marijn; Williams, Rik J.

    2013-11-10

    We present the size-stellar mass relations of nearby (z = 0.01-0.02) Sloan Digital Sky Survey galaxies, for samples selected by color, morphology, Sérsic index n, and specific star formation rate. Several commonly employed size measurement techniques are used, including single Sérsic fits, two-component Sérsic models, and a non-parametric method. Through simple simulations, we show that the non-parametric and two-component Sérsic methods provide the most robust effective radius measurements, while those based on single Sérsic profiles are often overestimates, especially for massive red/early-type galaxies. Using our robust sizes, we show for all sub-samples that the mass-size relations are shallow at low stellar masses and steepen above ∼3-4 × 10^10 M_☉. The mass-size relations for galaxies classified as late-type, low-n, and star-forming are consistent with each other, while blue galaxies follow a somewhat steeper relation. The mass-size relations of early-type, high-n, red, and quiescent galaxies all agree with each other but are somewhat steeper at the high-mass end than previous results. To test potential systematics at high redshift, we artificially redshifted our sample (including surface brightness dimming and degraded resolution) to z = 1 and re-fit the galaxies using single Sérsic profiles. The sizes of these galaxies before and after redshifting are consistent and we conclude that systematic effects in sizes and the size-mass relation at z ∼ 1 are negligible. Interestingly, since the poorer physical resolution at high redshift washes out bright galaxy substructures, single Sérsic fitting appears to provide more reliable and unbiased effective radius measurements at high z than for nearby, well-resolved galaxies.

  19. Computational functions in biochemical reaction networks.

    PubMed Central

    Arkin, A; Ross, J

    1994-01-01

    In prior work we demonstrated the implementation of logic gates, sequential computers (universal Turing machines), and parallel computers by means of the kinetics of chemical reaction mechanisms. In the present article we develop this subject further by first investigating the computational properties of several enzymatic (single and multiple) reaction mechanisms: we show their steady states are analogous to either Boolean or fuzzy logic gates. Nearly perfect digital function is obtained only in the regime in which the enzymes are saturated with their substrates. With these enzymatic gates, we construct combinational chemical networks that execute a given truth-table. The dynamic range of a network's output is strongly affected by "input/output matching" conditions among the internal gate elements. We find a simple mechanism, similar to the interconversion of fructose-6-phosphate between its two bisphosphate forms (fructose-1,6-bisphosphate and fructose-2,6-bisphosphate), that functions analogously to an AND gate. When the simple model is supplanted with one in which the enzyme rate laws are derived from experimental data, the steady state of the mechanism functions as an asymmetric fuzzy aggregation operator with properties akin to a fuzzy AND gate. The qualitative behavior of the mechanism does not change when situated within a large model of glycolysis/gluconeogenesis and the TCA cycle. The mechanism, in this case, switches the pathway's mode from glycolysis to gluconeogenesis in response to chemical signals of low blood glucose (cAMP) and abundant fuel for the TCA cycle (acetyl coenzyme A). PMID:7948674

  20. Discrete Wigner functions and quantum computational speedup

    SciTech Connect

    Galvao, Ernesto F.

    2005-04-01

    Gibbons et al. [Phys. Rev. A 70, 062101 (2004)] have recently defined a class of discrete Wigner functions W to represent quantum states in a finite Hilbert space dimension d. I characterize the set C_d of states having non-negative W simultaneously in all definitions of W in this class. For d ≤ 5 I show C_d is the convex hull of stabilizer states. This supports the conjecture that negativity of W is necessary for exponential speedup in pure-state quantum computation.

  1. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The

  2. Computational based functional analysis of Bacillus phytases.

    PubMed

    Verma, Anukriti; Singh, Vinay Kumar; Gaur, Smriti

    2016-02-01

    Phytase is an enzyme which catalyzes the total hydrolysis of phytate to less phosphorylated myo-inositol derivatives and inorganic phosphate, digesting the otherwise indigestible phytate present in seeds and grains and thereby providing digestible phosphorus, calcium and other mineral nutrients. Phytases are frequently added to the feed of monogastric animals so that the bioavailability of phytic acid-bound phosphate increases, ultimately enhancing the nutritional value of diets. Bacillus phytase is well suited for use in animal feed because of its pH optimum and excellent thermal stability. The present study aims to perform an in silico comparative characterization and functional analysis of phytases from Bacillus amyloliquefaciens to explore their physico-chemical properties using various bio-computational tools. All proteins are acidic and thermostable and can be used as suitable candidates in the feed industry. PMID:26672917

  3. A probabilistic, non-parametric framework for inter-modality label fusion.

    PubMed

    Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen

    2013-01-01

    Multi-atlas techniques are commonplace in medical image segmentation due to their high performance and ease of implementation. Locally weighting the contributions from the different atlases in the label fusion process can improve the quality of the segmentation. However, how to define these weights in a principled way in inter-modality scenarios remains an open problem. Here we propose a label fusion scheme that does not require voxel intensity consistency between the atlases and the target image to segment. The method is based on a generative model of image data in which each intensity in the atlases has an associated conditional distribution of corresponding intensities in the target. The segmentation is computed using variational expectation maximization (VEM) in a Bayesian framework. The method was evaluated with a dataset of eight proton density weighted brain MRI scans with nine labeled structures of interest. The results show that the algorithm outperforms majority voting and a recently published inter-modality label fusion algorithm. PMID:24505808

  4. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    PubMed

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. PMID:25817475
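
    To make the 0D versus 1D distinction concrete, the sketch below builds a non-parametric 1D bootstrap confidence band for a mean trajectory by resampling subjects and using the field-wide maximum deviation, in contrast to a node-by-node (0D-style) band. The simulated trajectories and the particular band construction are assumptions for illustration, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(0)
      n_subjects, n_nodes = 20, 101
      # Simulated 1D trajectories (e.g. time-normalized force curves), one row per subject.
      trajectories = (np.sin(np.linspace(0, np.pi, n_nodes))
                      + 0.2 * rng.standard_normal((n_subjects, n_nodes)))
      mean_traj = trajectories.mean(axis=0)

      n_boot = 2000
      max_dev = np.empty(n_boot)
      for b in range(n_boot):
          idx = rng.integers(0, n_subjects, n_subjects)          # resample subjects with replacement
          max_dev[b] = np.max(np.abs(trajectories[idx].mean(axis=0) - mean_traj))

      # Simultaneous (1D) 95% band from the maximum deviation over the whole trajectory;
      # a pointwise (0D-style) band would instead use node-by-node percentiles.
      half_width = np.quantile(max_dev, 0.95)
      lower, upper = mean_traj - half_width, mean_traj + half_width
      print(round(float(half_width), 4))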

  5. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

    A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires we know the transformation of r given by f(0 | r).
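
    To make the role of g(0) = 1 concrete, the sketch below estimates density from right angle distances using the standard line transect relation D = n f(0) / (2L), with f(0) obtained from a kernel density estimate; the simulated distances, the Gaussian kernel, and the reflection about zero are assumptions for illustration rather than the estimators of the paper.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      L = 10.0                                        # total transect length (km), assumed
      y = np.abs(rng.normal(0.0, 0.05, size=60))      # simulated right angle distances (km)
      n = y.size

      # Reflect the distances about zero so the kernel estimate is not biased downward at y = 0,
      # then recover f(0) of the folded (y >= 0) distribution.
      kde = gaussian_kde(np.concatenate([y, -y]))
      f0 = 2.0 * kde(0.0)[0]

      density = n * f0 / (2.0 * L)                    # objects per unit area
      print(f"f(0) = {f0:.2f} per km, density = {density:.1f} per km^2")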

  6. A non-parametric method for measuring the local dark matter density

    NASA Astrophysics Data System (ADS)

    Silverwood, H.; Sivertsson, S.; Steger, P.; Read, J. I.; Bertone, G.

    2016-04-01

    We present a new method for determining the local dark matter density using kinematic data for a population of tracer stars. The Jeans equation in the z-direction is integrated to yield an equation that gives the velocity dispersion as a function of the total mass density, tracer density, and the `tilt' term that describes the coupling of vertical and radial motions. We then fit a dark matter mass profile to tracer density and velocity dispersion data to derive credible regions on the vertical dark matter density profile. Our method avoids numerical differentiation, leading to lower numerical noise, and is able to deal with the tilt term while remaining one dimensional. In this study we present the method and perform initial tests on idealised mock data. We also demonstrate the importance of dealing with the tilt term for tracers that sample ≳ 1 kpc above the disc plane. If ignored, this results in a systematic underestimation of the dark matter density.

  7. Non-parametric Single View Reconstruction of Curved Objects Using Convex Optimization

    NASA Astrophysics Data System (ADS)

    Oswald, Martin R.; Töppe, Eno; Kolev, Kalin; Cremers, Daniel

    We propose a convex optimization framework delivering intuitive and reasonable 3D meshes from a single photograph. For a given input image, the user can quickly obtain a segmentation of the object in question. Our algorithm then automatically generates an admissible closed surface of arbitrary topology without the requirement of tedious user input. Moreover we provide a tool by which the user is able to interactively modify the result afterwards through parameters and simple operations in a 2D image space. The algorithm targets a limited but relevant class of real world objects. The object silhouette and the additional user input enter a functional which can be optimized globally in a few seconds using recently developed convex relaxation techniques parallelized on state-of-the-art graphics hardware.

  8. A non-parametric method for measuring the local dark matter density

    NASA Astrophysics Data System (ADS)

    Silverwood, H.; Sivertsson, S.; Steger, P.; Read, J. I.; Bertone, G.

    2016-07-01

    We present a new method for determining the local dark matter density using kinematic data for a population of tracer stars. The Jeans equation in the z-direction is integrated to yield an equation that gives the velocity dispersion as a function of the total mass density, tracer density, and the `tilt' term that describes the coupling of vertical and radial motions. We then fit a dark matter mass profile to tracer density and velocity dispersion data to derive credible regions on the vertical dark matter density profile. Our method avoids numerical differentiation, leading to lower numerical noise, and is able to deal with the tilt term while remaining one dimensional. In this study we present the method and perform initial tests on idealized mock data. We also demonstrate the importance of dealing with the tilt term for tracers that sample ≳1 kpc above the disc plane. If ignored, this results in a systematic underestimation of the dark matter density.

  9. Investigation of the dynamic stress–strain response of compressible polymeric foam using a non-parametric analysis

    DOE PAGESBeta

    Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; Sutton, Michael A.

    2016-01-25

    Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.
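
    The non-parametric step described above amounts to integrating the inertial term rho(x)·a(x) along the specimen and superposing it on the stress measured at one end. A one-dimensional sketch of that superposition is shown below; the sample fields, grid, and sign convention are assumptions for illustration.

      import numpy as np

      # Assumed 1-D fields along the loading axis of the specimen (illustrative values only).
      x = np.linspace(0.0, 0.05, 101)                  # axial position (m)
      rho = 300.0 * (1.0 + 0.5 * x / x[-1])            # local density from the mass-conservation step (kg/m^3)
      accel = 2.0e4 * np.exp(-x / 0.02)                # axial acceleration from the DIC displacements (m/s^2)
      sigma_end = 1.5e6                                # stress measured at the impact end (Pa)

      # Inertia stress: cumulative integral of rho * a from the measured end (trapezoidal rule).
      segments = 0.5 * (rho[1:] * accel[1:] + rho[:-1] * accel[:-1]) * np.diff(x)
      inertia = np.concatenate(([0.0], np.cumsum(segments)))

      # Superpose on the boundary stress; the sign depends on which end is instrumented.
      sigma = sigma_end - inertia
      print(np.round(sigma[[0, 50, 100]], 1))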

  10. Non Parametric Determination of Acceleration Characteristics in Supernova Shocks Based on Spectra of Cosmic Rays and Remnant Radiation

    NASA Astrophysics Data System (ADS)

    Petrosian, Vahe

    2016-07-01

    We have developed an inversion method for determination of the characteristics of the acceleration mechanism directly and non-parametrically from observations, in contrast to the usual forward fitting of parametric model variables to observations. This is done in the framework of the so-called leaky box model of acceleration, valid for isotropic momentum distribution and for volume integrated characteristics in a finite acceleration site. We consider both acceleration by shocks and stochastic acceleration, where turbulence plays the primary role in determining the acceleration, scattering and escape rates. Assuming knowledge of the background plasma, the model has essentially two unknown parameters, namely the momentum and pitch angle scattering diffusion coefficients, which can be evaluated given two independent spectral observations. These coefficients are obtained directly from the spectrum of radiation from the supernova remnants (SNRs), which gives the spectrum of accelerated particles, and the observed spectrum of cosmic rays (CRs), which are related to the spectrum of particles escaping the SNRs. The results obtained from application of this method will be presented.

  11. Non-parametric Bayesian graph models reveal community structure in resting state fMRI.

    PubMed

    Andersen, Kasper Winther; Madsen, Kristoffer H; Siebner, Hartwig Roman; Schmidt, Mikkel N; Mørup, Morten; Hansen, Lars Kai

    2014-10-15

    Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM). The models define probabilities of generating links within and between clusters and the difference between the models lies in the restrictions they impose upon the between-cluster link probabilities. IRM is the most flexible model with no restrictions on the probabilities of links between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background noise probability. These probabilistic models are compared against three other approaches for node clustering, namely Infomap, Louvain modularity, and hierarchical clustering. Using 3 different datasets comprising healthy volunteers' rs-fMRI we found that the BCD model was in general the most predictive and reproducible model. This suggests that rs-fMRI data exhibits community structure and furthermore points to the significance of modeling heterogeneous between-cluster link probabilities. PMID:24914522

  12. Functional requirements for gas characterization system computer software

    SciTech Connect

    Tate, D.D.

    1996-01-01

    This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication, and must multi-task to accommodate operation in parallel.

  13. The Signaling Petri Net-Based Simulator: A Non-Parametric Strategy for Characterizing the Dynamics of Cell-Specific Signaling Networks

    PubMed Central

    Ruths, Derek; Muller, Melissa; Tseng, Jen-Te; Nakhleh, Luay; Ram, Prahlad T.

    2008-01-01

    Reconstructing cellular signaling networks and understanding how they work are major endeavors in cell biology. The scale and complexity of these networks, however, render their analysis using experimental biology approaches alone very challenging. As a result, computational methods have been developed and combined with experimental biology approaches, producing powerful tools for the analysis of these networks. These computational methods mostly fall on either end of a spectrum of model parameterization. On one end is a class of structural network analysis methods; these typically use the network connectivity alone to generate hypotheses about global properties. On the other end is a class of dynamic network analysis methods; these use, in addition to the connectivity, kinetic parameters of the biochemical reactions to predict the network's dynamic behavior. These predictions provide detailed insights into the properties that determine aspects of the network's structure and behavior. However, the difficulty of obtaining numerical values of kinetic parameters is widely recognized to limit the applicability of this latter class of methods. Several researchers have observed that the connectivity of a network alone can provide significant insights into its dynamics. Motivated by this fundamental observation, we present the signaling Petri net, a non-parametric model of cellular signaling networks, and the signaling Petri net-based simulator, a Petri net execution strategy for characterizing the dynamics of signal flow through a signaling network using token distribution and sampling. The result is a very fast method, which can analyze large-scale networks, and provide insights into the trends of molecules' activity-levels in response to an external stimulus, based solely on the network's connectivity. We have implemented the signaling Petri net-based simulator in the PathwayOracle toolkit, which is publicly available at http://bioinfo.cs.rice.edu/pathwayoracle. Using
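
    As a rough illustration of the token-distribution idea (not the PathwayOracle implementation), the sketch below repeatedly fires the transitions of a tiny signaling Petri net and averages the token counts over random runs; the toy network, the firing rule, and the run counts are all assumptions.

      import random
      from collections import defaultdict

      # Tiny signaling chain: stimulus -> A -> B, each transition given as (input place, output place).
      transitions = [("stimulus", "A"), ("A", "B")]

      def run(steps=20):
          marking = defaultdict(int, {"stimulus": 5})   # initial token distribution
          history = []
          for _ in range(steps):
              src, dst = random.choice(transitions)
              if marking[src] > 0:                      # a transition fires only if its input holds a token
                  marking[src] -= 1
                  marking[dst] += 1
              history.append(dict(marking))
          return history

      # Average the token count of B over many runs to get an activity-level trend.
      n_runs, steps = 200, 20
      avg_B = [0.0] * steps
      for _ in range(n_runs):
          h = run(steps)
          for t in range(steps):
              avg_B[t] += h[t].get("B", 0) / n_runs
      print([round(v, 2) for v in avg_B])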

  14. TRANSIT TIMING OBSERVATIONS FROM KEPLER. II. CONFIRMATION OF TWO MULTIPLANET SYSTEMS VIA A NON-PARAMETRIC CORRELATION ANALYSIS

    SciTech Connect

    Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Buchhave, Lars A.; Collaboration: Kepler Science Team; and others

    2012-05-10

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
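
    The flavor of the non-parametric significance test described (correlating the two TTV series and assessing that correlation against a shuffled null) can be sketched as below; the synthetic TTV values, the use of the Pearson coefficient, and the simple random-permutation null are assumptions rather than the authors' exact statistic.

      import numpy as np

      rng = np.random.default_rng(2)
      # Synthetic transit timing variations (minutes) for two candidates in the same system.
      phase = np.linspace(0, 4 * np.pi, 30)
      ttv_a = np.sin(phase) + 0.3 * rng.standard_normal(30)
      ttv_b = -0.8 * np.sin(phase) + 0.3 * rng.standard_normal(30)

      obs = np.corrcoef(ttv_a, ttv_b)[0, 1]

      # Null distribution: shuffle one series and recompute the correlation.
      n_perm = 10000
      null = np.array([np.corrcoef(ttv_a, rng.permutation(ttv_b))[0, 1] for _ in range(n_perm)])
      p_value = np.mean(np.abs(null) >= np.abs(obs))
      print(f"r = {obs:+.2f}, permutation p = {p_value:.4f}")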

  15. Methodological study of affine transformations of gene expression data with proposed robust non-parametric multi-dimensional normalization method

    PubMed Central

    Bengtsson, Henrik; Hössjer, Ola

    2006-01-01

    Background Low-level processing and normalization of microarray data are most important steps in microarray analysis, which have profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. Results A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization are revisited in the light of the affine model and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. Conclusion We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is

  16. Transit Timing Observations from Kepler: II. Confirmation of Two Multiplanet Systems via a Non-parametric Correlation Analysis

    SciTech Connect

    Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.

  17. Non-parametric linear regression of discrete Fourier transform convoluted chromatographic peak responses under non-ideal conditions of internal standard method.

    PubMed

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Fahmy, Ossama T; Ragab, Marwa A A

    2010-11-15

    This manuscript discusses the application of chemometrics to the handling of HPLC response data using the internal standard method (ISM). This was performed on a model mixture containing terbutaline sulphate, guaiphenesin, bromhexine HCl, sodium benzoate and propylparaben as an internal standard. Derivative treatment of the chromatographic response data of analyte and internal standard was followed by convolution of the resulting derivative curves using 8-point sin x(i) polynomials (discrete Fourier functions). The response of each analyte signal, its corresponding derivative and convoluted derivative data were divided by that of the internal standard to obtain the corresponding ratio data. This was found beneficial in eliminating different types of interferences. It was successfully applied to handle some of the most common chromatographic problems and non-ideal conditions, namely overlapping chromatographic peaks and very low analyte concentrations. For example, in the case of overlapping peaks the correlation coefficient of sodium benzoate improved significantly from 0.9975 to 0.9998 on moving from the conventional peak area method to the first derivative under Fourier functions method. A significant improvement in the precision and accuracy for the determination of synthetic mixtures and dosage forms in non-ideal cases was also achieved. For example, in the case of overlapping peaks the guaiphenesin mean recovery% and RSD% went from 91.57 and 9.83 to 100.04 and 0.78 on applying the conventional peak area and the first derivative under Fourier functions methods, respectively. This work also compares the application of Theil's method, a non-parametric regression method, in handling the response ratio data, with the least squares parametric regression method, which is considered the de facto standard method used for regression. Theil's method was found to be superior to the method of least squares as it assumes that errors could occur in both x- and y-directions and
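
    For reference, Theil's regression (the Theil-Sen estimator) is available in SciPy; a minimal comparison with ordinary least squares on noisy calibration-style data is sketched below, where the synthetic concentrations, response ratios, and the injected outlier are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      conc = np.linspace(1.0, 10.0, 10)                        # nominal concentrations
      ratio = 0.5 * conc + 0.05 * rng.standard_normal(10)      # peak-area ratio to the internal standard
      ratio[7] += 0.8                                          # one gross outlier

      slope_ts, intercept_ts, lo, hi = stats.theilslopes(ratio, conc)
      slope_ls, intercept_ls, r, p, se = stats.linregress(conc, ratio)

      print(f"Theil-Sen:     slope = {slope_ts:.3f}, intercept = {intercept_ts:.3f}")
      print(f"Least squares: slope = {slope_ls:.3f}, intercept = {intercept_ls:.3f}, r = {r:.4f}")

    The median-of-pairwise-slopes construction makes the Theil-Sen fit resistant to outliers, which is why the least squares line is pulled noticeably by the single bad point while the Theil-Sen line is essentially unaffected.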

  18. Computer program for Bessel and Hankel functions

    NASA Technical Reports Server (NTRS)

    Kreider, Kevin L.; Saule, Arthur V.; Rice, Edward J.; Clark, Bruce J.

    1991-01-01

    A set of FORTRAN subroutines for calculating Bessel and Hankel functions is presented. The routines calculate Bessel and Hankel functions of the first and second kinds, as well as their derivatives, for wide ranges of integer order and real or complex argument in single or double precision. Depending on the order and argument, one of three evaluation methods is used: the power series definition, an Airy function expansion, or an asymptotic expansion. Routines to calculate Airy functions and their derivatives are also included.
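
    Equivalent evaluations are available today in SciPy's special function library; the short sketch below computes Bessel functions of the first and second kinds, a Hankel function, and a derivative for one sample order and argument (the order, argument, and library choice are assumptions, and this is not the FORTRAN package described above).

      import numpy as np
      from scipy import special

      nu, x = 3, 2.5
      j = special.jv(nu, x)            # Bessel function of the first kind, J_nu(x)
      y = special.yv(nu, x)            # Bessel function of the second kind, Y_nu(x)
      h1 = special.hankel1(nu, x)      # Hankel function of the first kind, J + iY
      jp = special.jvp(nu, x)          # first derivative of J_nu

      print(j, y, h1, jp)
      print(np.isclose(h1, j + 1j * y))    # consistency check of the Hankel definition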

  19. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

    An iterative computer-aided procedure was developed which provides for the identification of boiler transfer functions using frequency response data. The method uses frequency response data to obtain a satisfactory transfer function for both high and low vapor exit quality data.

  20. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values are calculated of the model residuals and stored in a 'three dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the larger public is not interested in in-depth information on the uncertainty of the predicted water levels, but is only interested in information on the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with their advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is by probabilistic flood mapping. These maps give a representation of the
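
    A minimal sketch of the error-matrix idea (binning historical forecast residuals by forecasted level and lead time, storing percentiles per class, and interpolating in the resulting matrix for a new forecast) is given below; the synthetic residual archive, the bin edges, and the use of a regular-grid interpolator are assumptions.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      rng = np.random.default_rng(4)
      # Historical archive: forecasted level (m), lead time (h), residual = forecast - observation (m).
      level = rng.uniform(0.5, 4.0, 5000)
      lead = rng.integers(1, 49, 5000).astype(float)
      resid = 0.05 * (lead / 48.0) * (1.0 + level) * rng.standard_normal(5000)

      level_bins = np.linspace(0.5, 4.0, 8)
      lead_bins = np.linspace(1.0, 48.0, 7)
      pcts = [5, 50, 95]

      # Percentiles of the residuals per (level, lead time) class -> the 3-D error matrix.
      error_matrix = np.full((len(level_bins) - 1, len(lead_bins) - 1, len(pcts)), np.nan)
      for i in range(len(level_bins) - 1):
          for j in range(len(lead_bins) - 1):
              sel = ((level >= level_bins[i]) & (level < level_bins[i + 1]) &
                     (lead >= lead_bins[j]) & (lead < lead_bins[j + 1]))
              if sel.any():
                  error_matrix[i, j] = np.percentile(resid[sel], pcts)

      # Interpolate in the matrix to attach uncertainty bounds to a new forecast.
      centers_level = 0.5 * (level_bins[:-1] + level_bins[1:])
      centers_lead = 0.5 * (lead_bins[:-1] + lead_bins[1:])
      new_point = np.array([[2.3, 24.0]])       # forecasted level (m) and lead time (h)
      bounds = [float(RegularGridInterpolator((centers_level, centers_lead), error_matrix[:, :, k],
                                               bounds_error=False, fill_value=None)(new_point)[0])
                for k in range(len(pcts))]
      print(dict(zip(pcts, np.round(bounds, 3))))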

  1. Some computational techniques for estimating human operator describing functions

    NASA Technical Reports Server (NTRS)

    Levison, W. H.

    1986-01-01

    Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
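
    The ensemble statistics the abstract refers to (mean operator gain and phase at the sum-of-sines frequencies, with standard errors across trials) can be sketched as follows; the simulated per-trial describing-function estimates and the chosen frequencies are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      freqs = np.array([0.5, 1.0, 2.0, 4.0])        # sum-of-sines frequencies (rad/s), assumed
      n_trials = 12

      # Simulated per-trial describing-function estimates (complex operator response per frequency).
      true = 2.0 * np.exp(-1j * 0.3 * freqs)
      noise = rng.standard_normal((n_trials, freqs.size)) + 1j * rng.standard_normal((n_trials, freqs.size))
      trials = true + 0.2 * noise

      gain_db = 20.0 * np.log10(np.abs(trials))
      phase_deg = np.degrees(np.angle(trials))

      for name, data in (("gain (dB)", gain_db), ("phase (deg)", phase_deg)):
          mean = data.mean(axis=0)
          sem = data.std(axis=0, ddof=1) / np.sqrt(n_trials)    # standard error of the ensemble mean
          print(name, np.round(mean, 2), "+/-", np.round(sem, 2))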

  2. Computer Use and the Relation between Age and Cognitive Functioning

    ERIC Educational Resources Information Center

    Soubelet, Andrea

    2012-01-01

    This article investigates whether computer use for leisure could mediate or moderate the relations between age and cognitive functioning. Findings supported smaller age differences in measures of cognitive functioning for people who reported spending more hours using a computer. Because of the cross-sectional design of the study, two alternative…

  3. Pair correlation function integrals: Computation and use

    NASA Astrophysics Data System (ADS)

    Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens

    2011-08-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
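
    The quantity of interest is the total correlation function integral G = 4π ∫ r² (g(r) − 1) dr. The sketch below evaluates the plain truncated integral from a tabulated radial distribution function, which is exactly the naive alternative the paper improves upon; the synthetic g(r) and the truncation at the sampling limit are assumptions.

      import numpy as np

      # Tabulated radial distribution function from a (hypothetical) simulation.
      r = np.linspace(0.0, 2.0, 400)                              # distance (nm)
      g = 1.0 + np.exp(-((r - 0.35) / 0.05) ** 2) - np.exp(-((r - 0.20) / 0.10) ** 2)
      g[r < 0.25] = 0.0                                           # excluded-volume core

      # Total correlation function integral, truncated at the largest sampled distance.
      integrand = 4.0 * np.pi * r ** 2 * (g - 1.0)
      G = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))   # trapezoidal rule, nm^3
      print(f"G = {G:.4f} nm^3")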

  4. Singular Function Integration in Computational Physics

    NASA Astrophysics Data System (ADS)

    Hasbun, Javier

    2009-03-01

    In teaching computational methods in the undergraduate physics curriculum, standard integration approaches taught include the rectangular, trapezoidal, Simpson, Romberg, and others. Over time, these techniques have proven to be invaluable and students are encouraged to employ the most efficient method that is expected to perform best when applied to a given problem. However, some physics research applications require techniques that can handle singularities. While decreasing the step size in traditional approaches is an alternative, this may not always work and repetitive processes make this route even more inefficient. Here, I present two existing integration rules designed to handle singular integrals. I compare them to traditional rules as well as to the exact analytic results. I suggest that it is perhaps time to include such approaches in the undergraduate computational physics course.
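
    As a concrete example of the kind of integrand meant, ∫₀¹ x^(−1/2) dx = 2 defeats a naive uniform-grid rule but is handled either by the substitution x = t² or by an adaptive routine; the comparison below is a sketch with an assumed integrand and grid size.

      import numpy as np
      from scipy.integrate import quad

      f = lambda x: 1.0 / np.sqrt(x)     # integrable singularity at x = 0; the exact integral on [0, 1] is 2

      # Naive midpoint rule on a uniform grid converges slowly near the singularity.
      n = 10000
      x = (np.arange(n) + 0.5) / n
      naive = np.mean(f(x))

      # Substitution x = t**2 removes the singularity: the transformed integrand is the constant 2.
      t = (np.arange(n) + 0.5) / n
      substituted = np.mean(2.0 * t * f(t ** 2))

      # Adaptive quadrature copes with the end-point singularity directly.
      adaptive, err = quad(f, 0.0, 1.0)

      print(f"naive = {naive:.6f}, substitution = {substituted:.6f}, quad = {adaptive:.6f}, exact = 2")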

  5. Basic mathematical function libraries for scientific computation

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.

  6. The Computer and Its Functions; How to Communicate with the Computer.

    ERIC Educational Resources Information Center

    Ward, Peggy M.

    A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…

  7. Inaccuracies of trigonometric functions in computer mathematical libraries

    NASA Astrophysics Data System (ADS)

    Ito, Takashi; Kojima, Sadamu

    Recent progress in the development of high speed computers has enabled us to perform larger and faster numerical experiments in astronomy. However, sometimes the high speed of numerical computation is achieved at the cost of accuracy. In this paper we show an example of accuracy loss by some mathematical functions on certain computer platforms at the Astronomical Data Analysis Center, National Astronomical Observatory of Japan. We focus in particular on the numerical inaccuracy in sine and cosine functions, demonstrating how accuracy deterioration emerges. We also describe the measures that we have so far taken against these numerical inaccuracies. In general, computer vendors are not eager to improve the numerical accuracy of the mathematical libraries that they are supposed to be responsible for. Therefore scientists have to be aware of the existence of numerical inaccuracies, and protect their computational results from contamination by the potential errors that many computer platforms inherently contain.
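
    A simple way to probe the accuracy of a platform's sine implementation, in the spirit of the checks described, is to compare the library result against a high-precision reference; the sketch below uses mpmath as that reference, and the test arguments are assumptions.

      import math
      import mpmath

      mpmath.mp.dps = 50                                 # 50-digit reference computation

      for x in (0.5, 1.0e8, 1.0e16, 1.0e22):
          libm = math.sin(x)                             # platform math library, double precision
          ref = float(mpmath.sin(mpmath.mpf(x)))         # correctly rounded reference for the same double
          ulps = abs(libm - ref) / math.ulp(ref) if ref != 0.0 else 0.0
          print(f"x = {x:<10.3e} libm = {libm:+.17g} reference = {ref:+.17g} error ~ {ulps:.1f} ulp")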

  8. When the Single Matters more than the Group (II): Addressing the Problem of High False Positive Rates in Single Case Voxel Based Morphometry Using Non-parametric Statistics

    PubMed Central

    Scarpazza, Cristina; Nichols, Thomas E.; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea

    2016-01-01

    In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that the family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM is much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which do not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of the false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set, and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used

  9. Evaluation of climate change on flood event by using parametric T-test and non-parametric Mann-Kendall test in Barcelonnette basin, France

    NASA Astrophysics Data System (ADS)

    Ramesh, Azadeh; Glade, Thomas; Malet, Jean-Philippe

    2010-09-01

    The existence of a trend in hydrological and meteorological time series is detected by statistical tests. The trend analysis of hydrological and meteorological series is important to consider because of the effects of global climate change. Parametric or non-parametric statistical tests can be used to decide whether there is a statistically significant trend. In this paper, a homogeneity analysis was first performed using the non-parametric Bartlett test. Then, trend detection was estimated using the non-parametric Mann-Kendall test. The null hypothesis in the Mann-Kendall test is that the data are independent and randomly ordered. The result of the Mann-Kendall test was compared with the parametric t-test to check for the existence of a trend. To this end, the significance of trends was analyzed on monthly data of the Ubaye river in the Barcelonnette watershed in the southeast of France, at an elevation of 1132 m (3717 ft), for the period from 1928 to 2009, with the non-parametric Mann-Kendall test and the parametric t-test applied to river discharge and to meteorological data. The result shows that a rainfall event does not necessarily have an immediate impact on discharge. Visual inspection suggests that the correlation between observations made at the same time point is not very strong. In the results of the trend tests the p-value of the discharge is slightly smaller than the p-value of the precipitation, but it seems that in both cases there is no statistically significant trend. In statistical hypothesis testing, a test statistic is a numerical summary of a set of data that reduces the data to one or a small number of values that can be used to perform a hypothesis test; statistical hypothesis testing determines whether there is a significant trend or not. Negative test statistics of the MK test in both the precipitation and discharge data indicate downward trends. In conclusion, we can say that extreme flood events during recent years strongly depend on: 1) the location of the city: It is
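
    A compact version of the Mann-Kendall statistic (the pair-count S, its variance under the no-trend null without tie correction, and the normal-approximation p-value) is sketched below; the synthetic discharge series is an assumption.

      import numpy as np
      from scipy.stats import norm

      def mann_kendall(series):
          """Mann-Kendall trend test (no tie correction). Returns S, Z and a two-sided p-value."""
          x = np.asarray(series, dtype=float)
          n = x.size
          s = 0.0
          for i in range(n - 1):
              s += np.sign(x[i + 1:] - x[i]).sum()       # concordant minus discordant pairs
          var_s = n * (n - 1) * (2 * n + 5) / 18.0
          if s > 0:
              z = (s - 1) / np.sqrt(var_s)
          elif s < 0:
              z = (s + 1) / np.sqrt(var_s)
          else:
              z = 0.0
          p = 2.0 * (1.0 - norm.cdf(abs(z)))
          return s, z, p

      rng = np.random.default_rng(6)
      discharge = 10.0 + 0.02 * np.arange(80) + rng.standard_normal(80)   # synthetic monthly discharge
      s, z, p = mann_kendall(discharge)
      print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")

    For series with many equal values, such as precipitation records with long dry spells, a tie correction is usually added to the variance term.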

  10. Examining Functions in Mathematics and Science Using Computer Interfacing.

    ERIC Educational Resources Information Center

    Walton, Karen Doyle

    1988-01-01

    Introduces microcomputer interfacing as a method for explaining and demonstrating various aspects of the concept of function. Provides three experiments with illustrations and typical computer graphic displays: pendulum motion, pendulum study using two pendulums, and heat absorption and radiation. (YP)

  11. A non-parametric method for automatic determination of P-wave and S-wave arrival times: application to local micro earthquakes

    NASA Astrophysics Data System (ADS)

    Rawles, Christopher; Thurber, Clifford

    2015-08-01

    We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data set
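
    The score function described (summed similarity to positive reference windows divided by summed similarity to negative reference windows, evaluated on sliding windows of scaled absolute amplitudes) can be sketched as below; the synthetic trace, the reference windows, and the choice of inverse squared Euclidean distance as the similarity are assumptions rather than the authors' exact definitions.

      import numpy as np

      def scaled_abs(window):
          w = np.abs(window)
          return w / (w.max() + 1e-12)                  # scaled absolute amplitudes

      def similarity(a, b):
          # Assumed convention: larger value means more similar (inverse squared Euclidean distance).
          return 1.0 / (np.sum((a - b) ** 2) + 1e-6)

      def pick(trace, positives, negatives, width):
          scores = np.zeros(trace.size - width)
          for k in range(scores.size):
              w = scaled_abs(trace[k:k + width])
              scores[k] = (sum(similarity(w, p) for p in positives) /
                           sum(similarity(w, q) for q in negatives))
          return int(np.argmax(scores)) + width // 2    # pick = centre of the best-scoring window

      # Synthetic example: noise with a P-like onset at sample 300.
      rng = np.random.default_rng(7)
      trace = 0.1 * rng.standard_normal(600)
      trace[300:] += np.sin(0.3 * np.arange(300)) * np.exp(-np.arange(300) / 80.0)

      width = 40
      positives = [scaled_abs(trace[300 - width // 2:300 + width // 2])]   # analyst-picked onset window
      negatives = [scaled_abs(trace[50:50 + width]), scaled_abs(trace[500:500 + width])]
      print(pick(trace, positives, negatives, width))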

  12. The vibro-acoustic analysis of built-up systems using a hybrid method with parametric and non-parametric uncertainties

    NASA Astrophysics Data System (ADS)

    Cicirello, Alice; Langley, Robin S.

    2013-04-01

    An existing hybrid finite element (FE)/statistical energy analysis (SEA) approach to the analysis of the mid- and high frequency vibrations of a complex built-up system is extended here to a wider class of uncertainty modeling. In the original approach, the constituent parts of the system are considered to be either deterministic, and modeled using FE, or highly random, and modeled using SEA. A non-parametric model of randomness is employed in the SEA components, based on diffuse wave theory and the Gaussian Orthogonal Ensemble (GOE), and this enables the mean and variance of second order quantities such as vibrational energy and response cross-spectra to be predicted. In the present work the assumption that the FE components are deterministic is relaxed by the introduction of a parametric model of uncertainty in these components. The parametric uncertainty may be modeled either probabilistically, or by using a non-probabilistic approach such as interval analysis, and it is shown how these descriptions can be combined with the non-parametric uncertainty in the SEA subsystems to yield an overall assessment of the performance of the system. The method is illustrated by application to an example built-up plate system which has random properties, and benchmark comparisons are made with full Monte Carlo simulations.

  13. Computer-Intensive Algebra and Students' Conceptual Knowledge of Functions.

    ERIC Educational Resources Information Center

    O'Callaghan, Brian R.

    1998-01-01

    Describes a research project that examined the effects of the Computer-Intensive Algebra (CIA) and traditional algebra curricula on students' (N=802) understanding of the function concept. Results indicate that CIA students achieved a better understanding of functions and were better at the components of modeling, interpreting, and translating.…

  14. Convergence rate for numerical computation of the lattice Green's function.

    PubMed

    Ghazisaeidi, M; Trinkle, D R

    2009-03-01

    Flexible boundary-condition methods couple an isolated defect to bulk through the bulk lattice Green's function. Direct computation of the lattice Green's function requires projecting out the singular subspace of uniform displacements and forces for the infinite lattice. We calculate the convergence rates for elastically isotropic and anisotropic cases for three different techniques: relative displacement, elastic Green's function correction, and discontinuity correction. The discontinuity correction has the most rapid convergence for the general case. PMID:19392089

  15. Wigner Function Negativity and Contextuality in Quantum Computation on Rebits

    NASA Astrophysics Data System (ADS)

    Delfosse, Nicolas; Allard Guerin, Philippe; Bian, Jacob; Raussendorf, Robert

    2015-04-01

    We describe a universal scheme of quantum computation by state injection on rebits (states with real density matrices). For this scheme, we establish contextuality and Wigner function negativity as computational resources, extending results of M. Howard et al. [Nature (London) 510, 351 (2014), 10.1038/nature13460] to two-level systems. For this purpose, we define a Wigner function suited to systems of n rebits and prove a corresponding discrete Hudson's theorem. We introduce contextuality witnesses for rebit states and discuss the compatibility of our result with state-independent contextuality.

  16. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1971-01-01

    An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function to the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
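
    The general idea (adjusting the parameters of a candidate transfer function until its frequency-response locus resembles the measured locus, via nonlinear minimization) can be sketched as below. The first-order-plus-dead-time candidate model, the synthetic measurements, and the plain least-squares objective are assumptions; the paper uses an objective penalized performance measure rather than this simple residual.

      import numpy as np
      from scipy.optimize import least_squares

      # Synthetic frequency-response "measurements" of an unknown system.
      w = np.logspace(-2, 1, 30)                                   # rad/s
      true = 2.0 / (1.0 + 1j * w * 5.0) * np.exp(-1j * w * 0.7)
      meas = true * (1.0 + 0.02 * np.random.default_rng(8).standard_normal(w.size))

      def model(params, w):
          k, tau, delay = params                                   # gain, time constant, dead time
          return k / (1.0 + 1j * w * tau) * np.exp(-1j * w * delay)

      def residuals(params):
          diff = model(params, w) - meas
          return np.concatenate([diff.real, diff.imag])            # fit real and imaginary parts together

      fit = least_squares(residuals, x0=[1.0, 1.0, 0.1], bounds=([0, 0, 0], [10, 50, 5]))
      print(np.round(fit.x, 3))                                    # recovered gain, time constant, dead time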

  17. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  18. The flight telerobotic servicer: From functional architecture to computer architecture

    NASA Technical Reports Server (NTRS)

    Lumia, Ronald; Fiala, John

    1989-01-01

    After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.

  19. Computational approaches for rational design of proteins with novel functionalities

    PubMed Central

    Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul

    2012-01-01

    Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded in engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643

  20. A large-scale evaluation of computational protein function prediction.

    PubMed

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools. PMID:23353650

  1. DegPack: a web package using a non-parametric and information theoretic algorithm to identify differentially expressed genes in multiclass RNA-seq samples.

    PubMed

    An, Jaehyun; Kim, Kwangsoo; Chae, Heejoon; Kim, Sun

    2014-10-01

    Gene expression across the whole cell can be measured routinely by microarray technologies or, more recently, by sequencing technologies. Using these technologies, identifying differentially expressed genes (DEGs) among multiple phenotypes is the very first step toward understanding differences between phenotypes, and many methods for detecting DEGs between two groups have been developed; for example, the t-test and relative entropy are used to detect differences between two probability distributions. When more than two phenotypes are considered, these methods are not applicable, and other methods such as the ANOVA F-test and the Kruskal-Wallis test are used to find DEGs in multiclass data. However, the ANOVA F-test assumes a normal distribution and is not designed to identify DEGs whose expression is distinctive in each phenotype. The Kruskal-Wallis method, a non-parametric method, is more robust but sensitive to outliers. In this paper, we propose a non-parametric, information-theoretic approach for identifying DEGs. Our method identified DEGs effectively and was shown to be less sensitive to outliers in two data sets: a three-class drought-resistant rice data set and a three-class breast cancer data set. In extensive experiments with simulated and real data, our method outperformed existing tools in terms of the accuracy of characterizing phenotypes using DEGs. A web service is implemented at http://biohealth.snu.ac.kr/software/degpack for the analysis of multi-class data; it includes the SAMseq and PoissonSeq methods in addition to the method described in this paper. PMID:24981074
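
    For orientation, the two multiclass baselines named above can be run directly with SciPy; a minimal sketch on hypothetical three-class expression values for a single gene follows (DegPack's own information-theoretic statistic is not reproduced here).

```python
# Sketch: the two multiclass baseline tests mentioned above, applied to one gene.
# Hypothetical data; DegPack's own information-theoretic statistic is not shown.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(0)
# expression of one gene in three phenotype classes (replicates per class)
class_a = rng.normal(5.0, 1.0, size=8)
class_b = rng.normal(5.2, 1.0, size=8)
class_c = rng.normal(9.0, 1.0, size=8)          # distinctively expressed class

f_stat, p_anova = f_oneway(class_a, class_b, class_c)   # parametric, assumes normality
h_stat, p_kw = kruskal(class_a, class_b, class_c)       # non-parametric, rank-based

print(f"ANOVA F-test:   F = {f_stat:.2f}, p = {p_anova:.3g}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3g}")
```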

  2. Computational design of proteins with novel structure and functions

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Lu-Hua, Lai

    2016-01-01

    Computational design of proteins is a relatively new field, in which scientists search the enormous sequence space for sequences that can fold into a desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, as novel enzymes, or as biotherapeutics. These approaches not only provide valuable information for understanding sequence-structure-function relationships in proteins, but also hold promise for applications to protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize the recent progress in this field, including de novo protein design, enzyme design, and design of protein-protein interactions. Challenges and future prospects of this field are also discussed. Project supported by the National Basic Research Program of China (Grant No. 2015CB910300), the National High Technology Research and Development Program of China (Grant No. 2012AA020308), and the National Natural Science Foundation of China (Grant No. 11021463).

  3. Quantum Computing Without Wavefunctions: Time-Dependent Density Functional Theory for Universal Quantum Computation

    PubMed Central

    Tempel, David G.; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483

  4. Quantum computing without wavefunctions: time-dependent density functional theory for universal quantum computation.

    PubMed

    Tempel, David G; Aspuru-Guzik, Alán

    2012-01-01

    We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms. PMID:22553483

  5. SNAP: A computer program for generating symbolic network functions

    NASA Technical Reports Server (NTRS)

    Lin, P. M.; Alderson, G. E.

    1970-01-01

    The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
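
    As a small illustration of what a symbolic network function is (not the SNAP algorithm itself), the transfer function of a hypothetical single-node RC network can be derived symbolically, here sketched with SymPy.

```python
# Illustration only: a symbolic network function for a simple RC low-pass divider,
# derived by nodal analysis with SymPy.  This is not the SNAP algorithm itself.
import sympy as sp

s, R, C, Vin, Vout = sp.symbols('s R C V_in V_out')

# Single node equation at the output: (Vout - Vin)/R + Vout*s*C = 0
node_eq = sp.Eq((Vout - Vin) / R + Vout * s * C, 0)
Vout_sol = sp.solve(node_eq, Vout)[0]

H = sp.simplify(Vout_sol / Vin)      # symbolic network (transfer) function
print(H)                             # -> 1/(C*R*s + 1)
```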

  6. Robust Computation of Morse-Smale Complexes of Bilinear Functions

    SciTech Connect

    Norgard, G; Bremer, P T

    2010-11-30

    The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.

  7. Computer program for calculating and fitting thermodynamic functions

    NASA Technical Reports Server (NTRS)

    Mcbride, Bonnie J.; Gordon, Sanford

    1992-01-01

    A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.
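
    A minimal sketch of step (2), fitting a thermodynamic function to an empirical equation by least squares, is shown below; the synthetic heat-capacity data and the simple polynomial form are illustrative assumptions, not the program's actual functional forms.

```python
# Minimal sketch of least-squares fitting of a thermodynamic function.
# Synthetic Cp(T) data and a quadratic form are assumptions for illustration.
import numpy as np

T = np.linspace(300.0, 2000.0, 18)                        # temperatures, K
cp = 29.1 + 8.0e-3 * T - 1.5e-6 * T**2                    # synthetic heat capacity, J/(mol K)
cp += np.random.default_rng(1).normal(0.0, 0.05, T.size)  # small measurement noise

# Fit Cp(T) = a0 + a1*T + a2*T^2 by linear least squares
A = np.vander(T, 3, increasing=True)                      # columns: 1, T, T^2
coeffs, *_ = np.linalg.lstsq(A, cp, rcond=None)
print("fitted coefficients:", coeffs)
```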

  8. Computing the hadronic vacuum polarization function by analytic continuation

    DOE PAGES Beta

    Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; Jansen, Karl; Petschlies, Marcus; Renner, Dru B.

    2013-08-29

    We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta, bridging between the spacelike and timelike regions. We provide two independent demonstrations to show that this method leads to the desired hadronic vacuum polarization function in Minkowski spacetime. Using the example of the leading-order QCD correction to the muon anomalous magnetic moment, we show that this approach can provide a valuable alternative method for calculations of physical quantities where the hadronic vacuum polarization function enters.

  9. A Survey of Computational Intelligence Techniques in Protein Function Prediction

    PubMed Central

    Tiwari, Arvind Kumar; Srivastava, Rajeev

    2014-01-01

    In the recent past, knowledge of previously uncharacterized proteins has grown massively with the advancement of high-throughput microarray technologies. Protein function prediction is the most challenging problem in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein bears little resemblance to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in recent years. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, used in wide areas of applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers who have addressed these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction. PMID:25574395

  10. Computation of three-dimensional flows using two stream functions

    NASA Technical Reports Server (NTRS)

    Greywall, Mahesh S.

    1991-01-01

    An approach to compute 3-D flows using two stream functions is presented. The method generates a boundary-fitted grid as part of its solution. The two steps commonly used for computing flow fields, (1) boundary-fitted grid generation and (2) solution of the Navier-Stokes equations on the generated grid, are combined into a single step in the present approach. The presented method can be used to directly compute 3-D viscous flows, or the potential-flow approximation of the method can be used to generate grids for other algorithms that compute 3-D viscous flows. The independent variables used are chi, a spatial coordinate, and xi and eta, the values of the stream functions along two sets of suitably chosen intersecting stream surfaces. The dependent variables used are the streamwise velocity and two functions that describe the stream surfaces. Since for a 3-D flow there is no unique way to define two sets of intersecting stream surfaces that cover the given flow, different types of intersecting stream-surface pairs are considered. First, the metric of the (chi, xi, eta) curvilinear coordinate system associated with each type is presented. Next, equations for the steady-state transport of mass, momentum, and energy are presented in terms of the metric of the (chi, xi, eta) coordinate system. Also included are the inviscid and the parabolized approximations to the general transport equations.
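
    For context, dual-stream-function formulations of this kind commonly represent the mass flux as the cross product of the gradients of the two stream functions, as sketched below (a standard identity, not quoted from the paper).

```latex
% Common dual-stream-function representation of the mass flux:
\rho\,\mathbf{V} \;=\; \nabla \xi \times \nabla \eta ,
% so that continuity, \nabla\cdot(\rho\mathbf{V}) = 0, is satisfied identically and the
% surfaces \xi = \mathrm{const} and \eta = \mathrm{const} are stream surfaces whose
% intersections trace the streamlines.
```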

  11. Integrated command, control, communications and computation system functional architecture

    NASA Technical Reports Server (NTRS)

    Cooley, C. G.; Gilbert, L. E.

    1981-01-01

    The functional architecture for an integrated command, control, communications, and computation system applicable to the command and control portion of the NASA End-to-End Data System is described, including the downlink data processing and analysis functions required to support the uplink processes. The functional architecture is composed of four elements: (1) the functional hierarchy, which provides the decomposition and allocation of the command and control functions to the system elements; (2) the key system features, which summarize the major system capabilities; (3) the operational activity threads, which illustrate the interrelationship between the system elements; and (4) the interfaces, which illustrate those elements that originate or generate data and those elements that use the data. The interfaces also provide a description of the data and the data utilization and access techniques.

  12. Optimization of removal function in computer controlled optical surfacing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Guo, Peiji; Ren, Jianfeng

    2010-10-01

    The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of the removal function, is proposed to address two problems encountered in the planet-motion and translation-rotation modes: a removal function far from the Gaussian type and slow convergence of the removal-function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center in turn to complete a cycle. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the duration of the six cycles, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of the time-sharing synthesis removal functions are obtained. The evaluation function used in the optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, it is found that the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian-type removal function than those obtained by the traditional methods. The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high

  13. Time-Dependent Density Functional Theory for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Tempel, David

    2015-03-01

    In this talk, I will discuss how the theorems of TDDFT can be applied to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, I will discuss how TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions.

  14. Computational predictions of energy materials using density functional theory

    NASA Astrophysics Data System (ADS)

    Jain, Anubhav; Shin, Yongwoo; Persson, Kristin A.

    2016-01-01

    In the search for new functional materials, quantum mechanics is an exciting starting point. The fundamental laws that govern the behaviour of electrons have the possibility, at the other end of the scale, to predict the performance of a material for a targeted application. In some cases, this is achievable using density functional theory (DFT). In this Review, we highlight DFT studies predicting energy-related materials that were subsequently confirmed experimentally. The attributes and limitations of DFT for the computational design of materials for lithium-ion batteries, hydrogen production and storage materials, superconductors, photovoltaics and thermoelectric materials are discussed. In the future, we expect that the accuracy of DFT-based methods will continue to improve and that growth in computing power will enable millions of materials to be virtually screened for specific applications. Thus, these examples represent a first glimpse of what may become a routine and integral step in materials discovery.

  15. Optimized Kaiser-Bessel Window Functions for Computed Tomography.

    PubMed

    Nilchian, Masih; Ward, John Paul; Vonesch, Cedric; Unser, Michael

    2015-11-01

    Kaiser-Bessel window functions are frequently used to discretize tomographic problems because they have two desirable properties: 1) their short support leads to a low computational cost and 2) their rotational symmetry makes their imaging transform independent of the direction. In this paper, we aim at optimizing the parameters of these basis functions. We present a formalism based on the theory of approximation and point out the importance of the partition-of-unity condition. While we prove that, for compact-support functions, this condition is incompatible with isotropy, we show that minimizing the deviation from the partition of unity condition is highly beneficial. The numerical results confirm that the proposed tuning of the Kaiser-Bessel window functions yields the best performance. PMID:26151939
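
    A minimal sketch of evaluating a radially symmetric Kaiser-Bessel window follows, assuming the basic form I0(beta*sqrt(1-(r/a)^2))/I0(beta) on its support; the tomography literature often uses a generalized ("blob") variant with an additional smoothness order, which is not reproduced here.

```python
# Minimal sketch of a radially symmetric Kaiser-Bessel window.  The generalized
# "blob" variant used in tomography adds a smoothness order; parameters below
# (support radius a, shape parameter beta) are illustrative assumptions.
import numpy as np
from scipy.special import i0

def kaiser_bessel(r, a=2.0, beta=10.4):
    """Evaluate the window at radius r; zero outside the support |r| <= a."""
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= a
    w = np.zeros_like(r)
    w[inside] = i0(beta * np.sqrt(1.0 - (r[inside] / a) ** 2)) / i0(beta)
    return w

print(kaiser_bessel([0.0, 1.0, 2.0, 2.5]))   # decays smoothly to 0 at the support edge
```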

  16. Computer Code For Calculation Of The Mutual Coherence Function

    NASA Astrophysics Data System (ADS)

    Bugnolo, Dimitri S.

    1986-05-01

    We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.

  17. Computations involving differential operators and their actions on functions

    NASA Technical Reports Server (NTRS)

    Crouch, Peter E.; Grossman, Robert; Larson, Richard

    1991-01-01

    The algorithms derived by Grossmann and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.

  18. Functional imaging of the brain using computed tomography.

    PubMed

    Berninger, W H; Axel, L; Norman, D; Napel, S; Redington, R W

    1981-03-01

    Data from rapid-sequence CT scans of the same cross section, obtained following bolus injection of contrast material, were analyzed by functional imaging. The information contained in a large number of images can be compressed into one or two gray-scale images which can be evaluated both qualitatively and quantitatively. The computational techniques are described and applied to the generation of images depicting bolus transit time, arrival time, peak time, and effective width. PMID:7465851
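
    A rough sketch of how such functional images can be computed from per-pixel time-density curves is given below; the parameter definitions used (threshold-based arrival time, area-over-peak "effective width") are illustrative assumptions rather than the paper's exact definitions.

```python
# Sketch of functional-image computation from per-pixel time-density curves.
# Synthetic curves and simplified parameter definitions; assumptions for illustration.
import numpy as np

t = np.arange(0.0, 30.0, 1.0)                              # scan times after injection, s
rng = np.random.default_rng(3)
peak_t = 8.0 + 4.0 * rng.random((64, 64, 1))               # per-pixel bolus peak time
curves = (t / peak_t) ** 3 * np.exp(3.0 * (1.0 - t / peak_t))   # synthetic enhancement curves

peak_val = curves.max(axis=-1)
peak_time = t[np.argmax(curves, axis=-1)]                   # time of peak enhancement
arrival = t[np.argmax(curves > 0.1 * peak_val[..., None], axis=-1)]  # first 10%-of-peak crossing
eff_width = curves.sum(axis=-1) / peak_val                  # area / peak, in sampling units

print(peak_time.shape, arrival.shape, eff_width.shape)      # three 64x64 functional images
```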

  19. Computational aspects of the continuum quaternionic wave functions for hydrogen

    SciTech Connect

    Morais, J.

    2014-10-15

    Over the past few years considerable attention has been given to the role played by the Hydrogen Continuum Wave Functions (HCWFs) in quantum theory. The HCWFs arise via the method of separation of variables for the time-independent Schrödinger equation in spherical coordinates. The HCWFs are composed of products of a radial part involving associated Laguerre polynomials multiplied by exponential factors and an angular part that is the spherical harmonics. In the present paper we introduce the continuum wave functions for hydrogen within quaternionic analysis ((R)QHCWFs), a result which is not available in the existing literature. In particular, the underlying functions are of three real variables and take on either values in the reduced and full quaternions (identified, respectively, with R{sup 3} and R{sup 4}). We prove that the (R)QHCWFs are orthonormal to one another. The representation of these functions in terms of the HCWFs are explicitly given, from which several recurrence formulae for fast computer implementations can be derived. A summary of fundamental properties and further computation of the hydrogen-like atom transforms of the (R)QHCWFs are also discussed. We address all the above and explore some basic facts of the arising quaternionic function theory. As an application, we provide the reader with plot simulations that demonstrate the effectiveness of our approach. (R)QHCWFs are new in the literature and have some consequences that are now under investigation.

  20. INTEGRATING COMPUTATIONAL PROTEIN FUNCTION PREDICTION INTO DRUG DISCOVERY INITIATIVES

    PubMed Central

    Grant, Marianne A.

    2014-01-01

    Pharmaceutical researchers must evaluate vast numbers of protein sequences and formulate innovative strategies for identifying valid targets and discovering leads against them as a way of accelerating drug discovery. The ever increasing number and diversity of novel protein sequences identified by genomic sequencing projects and the success of worldwide structural genomics initiatives have spurred great interest and impetus in the development of methods for accurate, computationally empowered protein function prediction and active site identification. Previously, in the absence of direct experimental evidence, homology-based protein function annotation remained the gold-standard for in silico analysis and prediction of protein function. However, with the continued exponential expansion of sequence databases, this approach is not always applicable, as fewer query protein sequences demonstrate significant homology to protein gene products of known function. As a result, several non-homology based methods for protein function prediction that are based on sequence features, structure, evolution, biochemical and genetic knowledge have emerged. Herein, we review current bioinformatic programs and approaches for protein function prediction/annotation and discuss their integration into drug discovery initiatives. The development of such methods to annotate protein functional sites and their application to large protein functional families is crucial to successfully utilizing the vast amounts of genomic sequence information available to drug discovery and development processes. PMID:25530654

  1. Preprocessing functions for computed radiography images in a PACS environment

    NASA Astrophysics Data System (ADS)

    McNitt-Gray, Michael F.; Pietka, Ewa; Huang, H. K.

    1992-05-01

    In a picture archiving and communications system (PACS), images are acquired from several modalities including computed radiography (CR). This modality has unique image characteristics and presents several problems that need to be resolved before the image is available for viewing at a display workstation. A set of preprocessing functions has been applied to all CR images in a PACS environment to enhance the display of images. The first function reformats CR images that are acquired with different plate sizes to a standard size for display. Another function removes the distracting white background caused by the collimation used at the time of exposure. A third function determines the orientation of each image and rotates those images that are in nonstandard positions into a standard viewing position. Another function creates a default look-up table based on the gray levels actually used by the image (instead of allocated gray levels). Finally, there is a function which creates (for chest images only) the piece-wise linear look-up tables that can be applied to enhance different tissue densities. These functions have all been implemented in a PACS environment. Each of these functions has been very successful in improving the viewing conditions of CR images and contributes to the clinical acceptance of PACS by reducing the effort required to display CR images.
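
    A minimal sketch of the "default look-up table from used gray levels" idea follows, mapping the occupied intensity range (with a small percentile margin) onto the 8-bit display range; the percentile choices and bit depth are illustrative assumptions.

```python
# Sketch: build a display LUT from the gray levels actually used by a CR image.
# The 1st/99th percentile margins and 12-bit input depth are assumptions.
import numpy as np

def default_lut(image, bits_stored=12, low_pct=1.0, high_pct=99.0):
    lo, hi = np.percentile(image, [low_pct, high_pct])      # gray levels actually used
    levels = np.arange(2 ** bits_stored, dtype=float)
    lut = np.clip((levels - lo) / max(hi - lo, 1.0), 0.0, 1.0) * 255.0
    return lut.astype(np.uint8)

cr_image = np.random.default_rng(4).integers(800, 3200, size=(2048, 2494))
lut = default_lut(cr_image)
display = lut[cr_image]          # apply the LUT by indexing with raw pixel values
print(display.min(), display.max())
```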

  2. Non-parametric representation and prediction of single- and multi-shell diffusion-weighted MRI data using Gaussian processes

    PubMed Central

    Andersson, Jesper L.R.; Sotiropoulos, Stamatios N.

    2015-01-01

    Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of “Kriging”. We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell. PMID:26236030
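
    The prediction step can be sketched as standard Gaussian-process regression over gradient directions; the covariance used below, an exponential in the (antipodally symmetric) angle between directions, is a stand-in assumption rather than the spherical covariance family derived in the paper.

```python
# Sketch of GP prediction of a diffusion signal from neighbouring gradient directions.
# Covariance k(theta) = s2*exp(-theta/ell) is an illustrative stand-in assumption.
import numpy as np

def angle(a, b):
    """Angle between unit gradient directions (antipodally symmetric)."""
    return np.arccos(np.clip(np.abs(a @ b.T), 0.0, 1.0))

rng = np.random.default_rng(5)
bvecs = rng.normal(size=(60, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)        # 60 gradient directions
y = np.cos(2.0 * angle(bvecs, bvecs[:1])).ravel() + 0.02 * rng.normal(size=60)

s2, ell, noise = 1.0, 0.6, 0.02
K = s2 * np.exp(-angle(bvecs, bvecs) / ell) + noise * np.eye(60)
x_new = np.array([[0.0, 0.0, 1.0]])                          # direction to predict
k_star = s2 * np.exp(-angle(x_new, bvecs) / ell)

y_pred = k_star @ np.linalg.solve(K, y)                      # standard GP posterior mean
print(y_pred[0])
```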

  3. An Exhaustive, Non-Euclidean, Non-Parametric Data Mining Tool for Unraveling the Complexity of Biological Systems – Novel Insights into Malaria

    PubMed Central

    Loucoubar, Cheikh; Paul, Richard; Bar-Hen, Avner; Huret, Augustin; Tall, Adama; Sokhna, Cheikh; Trape, Jean-François; Ly, Alioune Badara; Faye, Joseph; Badiane, Abdoulaye; Diakhaby, Gaoussou; Sarr, Fatoumata Diène; Diop, Aliou; Sakuntabhai, Anavaj; Bureau, Jean-François

    2011-01-01

    Complex, high-dimensional data sets pose significant analytical challenges in the post-genomic era. Such data sets are not exclusive to genetic analyses and are also pertinent to epidemiology. There has been considerable effort to develop hypothesis-free data mining and machine learning methodologies. However, current methodologies lack exhaustivity and general applicability. Here we use a novel non-parametric, non-euclidean data mining tool, HyperCube®, to explore exhaustively a complex epidemiological malaria data set by searching for over density of events in m-dimensional space. Hotspots of over density correspond to strings of variables, rules, that determine, in this case, the occurrence of Plasmodium falciparum clinical malaria episodes. The data set contained 46,837 outcome events from 1,653 individuals and 34 explanatory variables. The best predictive rule contained 1,689 events from 148 individuals and was defined as: individuals present during 1992–2003, aged 1–5 years old, having hemoglobin AA, and having had previous Plasmodium malariae malaria parasite infection ≤10 times. These individuals had 3.71 times more P. falciparum clinical malaria episodes than the general population. We validated the rule in two different cohorts. We compared and contrasted the HyperCube® rule with the rules using variables identified by both traditional statistical methods and non-parametric regression tree methods. In addition, we tried all possible sub-stratified quantitative variables. No other model with equal or greater representativity gave a higher Relative Risk. Although three of the four variables in the rule were intuitive, the effect of number of P. malariae episodes was not. HyperCube® efficiently sub-stratified quantitative variables to optimize the rule and was able to identify interactions among the variables, tasks not easy to perform using standard data mining methods. Search of local over density in m-dimensional space, explained by easily

  4. Green's Function Analysis of Periodic Structures in Computational Electromagnetics

    NASA Astrophysics Data System (ADS)

    Van Orden, Derek

    2011-12-01

    Periodic structures are used widely in electromagnetic devices, including filters, waveguiding structures, and antennas. Their electromagnetic properties may be analyzed computationally by solving an integral equation, in which an unknown equivalent current distribution in a single unit cell is convolved with a periodic Green's function that accounts for the system's boundary conditions. Fast computation of the periodic Green's function is therefore essential to achieve high accuracy solutions of complicated periodic structures, including analysis of modal wave propagation and scattering from external sources. This dissertation first presents alternative spectral representations of the periodic Green's function of the Helmholtz equation for cases of linear periodic systems in 2D and 3D free space and near planarly layered media. Although there exist multiple representations of the periodic Green's function, most are not efficient in the important case where the fields are observed near the array axis. We present spectral-spatial representations for rapid calculation of the periodic Green's functions for linear periodic arrays of current sources residing in free space as well as near a planarly layered medium. They are based on the integral expansion of the periodic Green's functions in terms of the spectral parameters transverse to the array axis. These schemes are important for the rapid computation of the interaction among unit cells of a periodic array, and, by extension, the complex dispersion relations of guided waves. Extensions of this approach to planar periodic structures are discussed. With these computation tools established, we study the traveling wave properties of linear resonant arrays placed near surfaces, and examine the coupling mechanisms that lead to radiation into guided waves supported by the surface. This behavior is especially important to understand the properties of periodic structures printed on dielectric substrates, such as periodic

  5. On the Hydrodynamic Function of Sharkskin: A Computational Investigation

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Sotiropoulos, Fotis

    2014-11-01

    Denticles (placoid scales) are small structures that cover the epidermis of some sharks. The hydrodynamic function of denticles is unclear. Because they resemble riblets, they have been thought to passively reduce skin friction, for which there is some experimental evidence. Others have experimentally shown that denticles increase skin friction and have hypothesized that denticles act as vortex generators to delay separation. To help clarify their function, we use high-resolution large eddy and direct numerical simulations, with an immersed boundary method, to simulate flow patterns past and calculate the drag force on Mako Short Fin denticles. Simulations are carried out for the denticles placed in a canonical turbulent boundary layer as well as in the vicinity of a separation bubble. The computed results elucidate the three-dimensional structure of the flow around denticles and provide insights into the hydrodynamic function of sharkskin.

  6. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in information geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on information geometry and a sparse representation in orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map, and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Rényi entropy H_{1/2} of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e., the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation. PMID:20426075
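
    A minimal sketch of the square-root (unit-hypersphere) representation of discretized ODFs, the closed-form geodesic distance, and a GA-style distance to the isotropic ODF follows; note that some formulations of the Fisher-Rao distance carry an extra factor of 2, so this is a convention-dependent sketch.

```python
# Sketch: square-root representation of discretized ODFs on the unit hypersphere,
# geodesic distance between them, and a GA-style distance to the isotropic ODF.
# Convention note: some formulations carry an extra factor of 2.
import numpy as np

def sqrt_map(odf):
    """Map a discretized ODF (non-negative, sums to 1) to the unit hypersphere."""
    psi = np.sqrt(np.asarray(odf, dtype=float))
    return psi / np.linalg.norm(psi)

def geodesic_distance(odf1, odf2):
    return float(np.arccos(np.clip(sqrt_map(odf1) @ sqrt_map(odf2), -1.0, 1.0)))

n = 64                                            # number of sphere samples
odf = np.random.default_rng(6).random(n); odf /= odf.sum()
iso = np.full(n, 1.0 / n)                         # isotropic ODF

ga = geodesic_distance(odf, iso)                  # Geometric-Anisotropy-style measure
print(f"GA = {ga:.3f}")
```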

  7. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  8. Analog computation of auto and cross-correlation functions

    NASA Technical Reports Server (NTRS)

    1974-01-01

    For analysis of the data obtained from the cross beam systems it was deemed desirable to compute the auto- and cross-correlation functions by both digital and analog methods to provide a cross-check of the analysis methods and an indication as to which of the two methods would be most suitable for routine use in the analysis of such data. It is the purpose of this appendix to provide a concise description of the equipment and procedures used for the electronic analog analysis of the cross beam data. A block diagram showing the signal processing and computation set-up used for most of the analog data analysis is provided. The data obtained at the field test sites were recorded on magnetic tape using wide-band FM recording techniques. The data as recorded were band-pass filtered by electronic signal processing in the data acquisition systems.
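
    The digital counterpart of this analog computation is straightforward; a minimal sketch of estimating a cross-correlation function (and the corresponding delay) from two sampled signals is shown below, with synthetic data and one common normalization convention.

```python
# Sketch: digital estimate of the cross-correlation function of two band-limited
# signals, with a synthetic common fluctuation and a known delay.  Normalizing by
# the overlap length is one common convention among several.
import numpy as np

rng = np.random.default_rng(7)
fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
common = rng.normal(size=t.size)          # shared fluctuation seen by both beams
x = common + 0.5 * rng.normal(size=t.size)
y = np.roll(common, 25) + 0.5 * rng.normal(size=t.size)   # delayed copy plus noise

lags = np.arange(-100, 101)
cc = np.array([np.mean((x - x.mean())[max(0, -k):t.size - max(0, k)] *
                       (y - y.mean())[max(0, k):t.size - max(0, -k)])
               for k in lags])
print("estimated delay (samples):", lags[np.argmax(cc)])
```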

  9. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Astrophysics Data System (ADS)

    Pohorille, Andrew

    2000-03-01

    In the absence of extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures and developing designs for molecules that perform protocellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g. channels), and (c) by what mechanisms such aggregates perform essential protocellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.

  10. Computer Modeling of the Earliest Cellular Structures and Functions

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
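
    The iterative solution of Newton's equations referred to above can be illustrated with a velocity-Verlet integrator; the sketch below uses a single particle in a harmonic well with illustrative units, not a membrane-peptide force field.

```python
# Minimal velocity-Verlet sketch of iteratively solving Newton's equations of motion,
# for a single particle in a harmonic well.  Force field, units and time step are
# illustrative assumptions, not a membrane/peptide simulation.
import numpy as np

def force(x, k=1.0):
    return -k * x                     # harmonic restoring force

dt, n_steps = 0.01, 10_000            # time step and number of iterations
x, v, m = 1.0, 0.0, 1.0
f = force(x)
for _ in range(n_steps):
    x += v * dt + 0.5 * (f / m) * dt ** 2     # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt           # velocity update with averaged force
    f = f_new

print(f"x = {x:.4f}, v = {v:.4f}")            # remains on the constant-energy orbit
```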

  11. Complete RNA inverse folding: computational design of functional hammerhead ribozymes

    PubMed Central

    Dotu, Ivan; Garcia-Martin, Juan Antonio; Slinger, Betty L.; Mechery, Vinodh; Meyer, Michelle M.; Clote, Peter

    2014-01-01

    Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. PMID:25209235

  12. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    PubMed

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one QTL per chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample. PMID:12220133
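
    For readers unfamiliar with the resampling step, a generic sketch of a non-selective (case-resampling) percentile bootstrap CI for an estimated peak location follows; the quadratic peak estimator and the synthetic profile are stand-ins, not a QTL interval-mapping model.

```python
# Generic sketch of a non-parametric (case-resampling) percentile bootstrap CI for an
# estimated peak location.  The quadratic peak estimator and the synthetic profile
# are illustrative stand-ins, not a QTL mapping model.
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0.0, 100.0, 200)                              # e.g. map position in cM
y = -0.002 * (x - 62.0) ** 2 + rng.normal(0.0, 0.8, x.size)   # noisy profile, peak near 62

def peak_position(xs, ys):
    a, b, _ = np.polyfit(xs, ys, 2)                    # quadratic fit
    return -b / (2.0 * a)                              # vertex of the parabola

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, x.size, x.size)              # resample cases with replacement
    boot[i] = peak_position(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {peak_position(x, y):.1f} cM, 95% CI [{lo:.1f}, {hi:.1f}] cM")
```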

  13. SOPIE: an R package for the non-parametric estimation of the off-pulse interval of a pulsar light curve

    NASA Astrophysics Data System (ADS)

    Schutte, Willem D.; Swanepoel, Jan W. H.

    2016-09-01

    An automated tool to derive the off-pulse interval of a light curve originating from a pulsar is needed. First, we derive a powerful and accurate non-parametric sequential estimation technique to estimate the off-pulse interval of a pulsar light curve in an objective manner. This is in contrast to the subjective `eye-ball' (visual) technique, and complementary to the Bayesian Block method which is currently used in the literature. The second aim involves the development of a statistical package, necessary for the implementation of our new estimation technique. We develop a statistical procedure to estimate the off-pulse interval in the presence of noise. It is based on a sequential application of p-values obtained from goodness-of-fit tests for uniformity. The Kolmogorov-Smirnov, Cramér-von Mises, Anderson-Darling and Rayleigh test statistics are applied. The details of the newly developed statistical package SOPIE (Sequential Off-Pulse Interval Estimation) are discussed. The developed estimation procedure is applied to simulated and real pulsar data. Finally, the SOPIE estimated off-pulse intervals of two pulsars are compared to the estimates obtained with the Bayesian Block method and yield very satisfactory results. We provide the code to implement the SOPIE package, which is publicly available at http://CRAN.R-project.org/package=SOPIE (Schutte).
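
    The core ingredient, a goodness-of-fit p-value for uniformity of the photon phases inside a candidate window, can be sketched with SciPy as below; SOPIE's sequential rule for combining such p-values is not reproduced here.

```python
# Sketch: test candidate off-pulse windows for uniformity of photon phases with a
# Kolmogorov-Smirnov goodness-of-fit test.  SOPIE's sequential combination rule and
# the other test statistics (Cramer-von Mises, etc.) are not reproduced here.
import numpy as np
from scipy.stats import kstest, uniform

rng = np.random.default_rng(9)
# synthetic phases: a pulse concentrated near phase 0.3 on top of a uniform background
phases = np.concatenate([rng.normal(0.3, 0.03, 400) % 1.0, rng.random(1600)])

def uniformity_pvalue(phases, lo, hi):
    """KS p-value for uniformity of the phases falling inside [lo, hi)."""
    sel = phases[(phases >= lo) & (phases < hi)]
    rescaled = (sel - lo) / (hi - lo)                 # map the window onto [0, 1)
    return kstest(rescaled, uniform.cdf).pvalue

print("window containing the pulse :", uniformity_pvalue(phases, 0.2, 0.5))
print("candidate off-pulse window  :", uniformity_pvalue(phases, 0.5, 1.0))
```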

  14. HOMOGENEOUS UGRIZ PHOTOMETRY FOR ACS VIRGO CLUSTER SURVEY GALAXIES: A NON-PARAMETRIC ANALYSIS FROM SDSS IMAGING

    SciTech Connect

    Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.

    2010-11-15

    We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sersic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ≈ 0.13 mag for the brightest galaxies, rising to ≈ 0.3 mag for galaxies at the faint end of our sample (B_T ≈ 16). The distribution of axial ratios of low-mass ('dwarf') galaxies bears a strong resemblance to the one observed for the higher-mass ('giant') galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
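
    For reference, the Sérsic surface-brightness profile used in the parametric analysis can be evaluated as below, using the common approximation b_n ≈ 2n - 1/3 for the scaling constant; parameter values are illustrative.

```python
# Minimal evaluation of a Sersic surface-brightness profile, with the common
# approximation b_n ~ 2n - 1/3.  Parameter values are illustrative assumptions.
import numpy as np

def sersic_mu(r, mu_e=22.0, r_e=10.0, n=4.0):
    """Surface brightness mu(r) in mag/arcsec^2 for a Sersic-n profile."""
    b_n = 2.0 * n - 1.0 / 3.0        # approximate; the exact value solves a gamma-function relation
    return mu_e + 2.5 * b_n / np.log(10.0) * ((r / r_e) ** (1.0 / n) - 1.0)

r = np.array([1.0, 5.0, 10.0, 30.0])          # radii in arcsec
print(sersic_mu(r))                           # brightens toward the centre; mu(r_e) = mu_e
```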

  15. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  16. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  17. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  18. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  19. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...

  20. Non-functioning adrenal adenomas discovered incidentally on computed tomography

    SciTech Connect

    Mitnick, J.S.; Bosniak, M.A.; Megibow, A.J.; Naidich, D.P.

    1983-08-01

    Eighteen patients with unilateral non-metastatic non-functioning adrenal masses were studied with computed tomography (CT). Pathological examination, in the cases where it was performed, revealed benign adrenal adenomas; the others were followed up with serial CT scans and showed no change in tumor size over a period of six months to three years. On the basis of these findings, the authors suggest certain criteria for a benign adrenal mass, including (a) diameter less than 5 cm, (b) smooth contour, (c) well-defined margin, and (d) no change in size on follow-up. Serial CT scanning can be used as an alternative to surgery in the management of many of these patients.

  1. Computing the effective action with the functional renormalization group

    NASA Astrophysics Data System (ADS)

    Codello, Alessandro; Percacci, Roberto; Rachwał, Lesław; Tonero, Alberto

    2016-04-01

    The "exact" or "functional" renormalization group equation describes the renormalization group flow of the effective average action Γ_k. The ordinary effective action Γ_0 can be obtained by integrating the flow equation from an ultraviolet scale k = Λ down to k = 0. We give several examples of such calculations at one-loop, both in renormalizable and in effective field theories. We reproduce the four-point scattering amplitude in the case of a real scalar field theory with quartic potential and in the case of the pion chiral Lagrangian. In the case of gauge theories, we reproduce the vacuum polarization of QED and of Yang-Mills theory. We also compute the two-point functions for scalars and gravitons in the effective field theory of scalar fields minimally coupled to gravity.
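
    For reference, the flow equation referred to here is commonly written in the Wetterich form, sketched below with R_k the infrared regulator and Γ_k^{(2)} the second functional derivative of the effective average action.

```latex
% Commonly quoted (Wetterich) form of the functional renormalization group equation:
\partial_k \Gamma_k \;=\; \frac{1}{2}\,
  \mathrm{Tr}\!\left[\left(\Gamma_k^{(2)} + R_k\right)^{-1} \partial_k R_k\right],
\qquad
\Gamma_0 \;=\; \lim_{k \to 0} \Gamma_k ,
% where R_k is the infrared regulator and \Gamma_k^{(2)} the second functional
% derivative of the effective average action with respect to the fields.
```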

  2. An Atomistic Statistically Effective Energy Function for Computational Protein Design.

    PubMed

    Topham, Christopher M; Barbe, Sophie; André, Isabelle

    2016-08-01

    Shortcomings in the definition of effective free-energy surfaces of proteins are recognized to be a major contributory factor responsible for the low success rates of existing automated methods for computational protein design (CPD). The formulation of an atomistic statistically effective energy function (SEEF) suitable for a wide range of CPD applications and its derivation from structural data extracted from protein domains and protein-ligand complexes are described here. The proposed energy function comprises nonlocal atom-based and local residue-based SEEFs, which are coupled using a novel atom connectivity number factor to scale short-range, pairwise, nonbonded atomic interaction energies and a surface-area-dependent cavity energy term. This energy function was used to derive additional SEEFs describing the unfolded-state ensemble of any given residue sequence based on computed average energies for partially or fully solvent-exposed fragments in regions of irregular structure in native proteins. Relative thermal stabilities of 97 T4 bacteriophage lysozyme mutants were predicted from calculated energy differences for folded and unfolded states with an average unsigned error (AUE) of 0.84 kcal mol^-1 when compared to experiment. To demonstrate the utility of the energy function for CPD, further validation was carried out in tests of its capacity to recover cognate protein sequences and to discriminate native and near-native protein folds, loop conformers, and small-molecule ligand binding poses from non-native benchmark decoys. Experimental ligand binding free energies for a diverse set of 80 protein complexes could be predicted with an AUE of 2.4 kcal mol^-1 using an additional energy term to account for the loss in ligand configurational entropy upon binding. The atomistic SEEF is expected to improve the accuracy of residue-based coarse-grained SEEFs currently used in CPD and to extend the range of applications of extant atom-based protein statistical

  3. Computation of the lattice Green function for a dislocation

    NASA Astrophysics Data System (ADS)

    Tan, Anne Marie Z.; Trinkle, Dallas R.

    2016-08-01

    Modeling isolated dislocations is challenging due to their long-ranged strain fields. Flexible boundary condition methods capture the correct long-range strain field of a defect by coupling the defect core to an infinite harmonic bulk through the lattice Green function (LGF). To improve the accuracy and efficiency of flexible boundary condition methods, we develop a numerical method to compute the LGF specifically for a dislocation geometry; in contrast to previous methods, where the LGF was computed for the perfect bulk as an approximation for the dislocation. Our approach directly accounts for the topology of a dislocation, and the errors in the LGF computation converge rapidly for edge dislocations in a simple cubic model system as well as in BCC Fe with an empirical potential. When used within the flexible boundary condition approach, the dislocation LGF relaxes dislocation core geometries in fewer iterations than when the perfect bulk LGF is used as an approximation for the dislocation, making a flexible boundary condition approach more efficient.

  4. Enzymatic Halogenases and Haloperoxidases: Computational Studies on Mechanism and Function.

    PubMed

    Timmins, Amy; de Visser, Sam P

    2015-01-01

    Despite the fact that halogenated compounds are rare in biology, a number of organisms have developed processes to utilize halogens, and in recent years a string of enzymes have been identified that selectively insert halogen atoms into, for instance, an aliphatic C-H bond. Thus, a number of natural products, including antibiotics, contain halogenated functional groups. This unusual process has great relevance to the chemical industry for stereoselective and regiospecific synthesis of haloalkanes. Currently, however, industry makes little use of biological haloperoxidases and halogenases, but efforts are underway to understand their catalytic mechanisms so that their catalytic function can be scaled up. In this review, we summarize experimental and computational studies on the catalytic mechanism of a range of haloperoxidases and halogenases with structurally very different catalytic features and cofactors. This chapter gives an overview of heme-dependent haloperoxidases, nonheme vanadium-dependent haloperoxidases, and flavin adenine dinucleotide-dependent haloperoxidases. In addition, we discuss the S-adenosyl-l-methionine fluorinase and nonheme iron/α-ketoglutarate-dependent halogenases. In particular, computational efforts have been applied extensively to several of these haloperoxidases and halogenases and have given insight into the essential structural features that enable these enzymes to perform the unusual halogen atom transfer to substrates. PMID:26415843

  5. An Evolutionary Computation Approach to Examine Functional Brain Plasticity.

    PubMed

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A; Hillary, Frank G

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels, and therefore functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity within an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  6. An Evolutionary Computation Approach to Examine Functional Brain Plasticity

    PubMed Central

    Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.

    2016-01-01

    One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited to the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback of this approach is that much information is lost by averaging heterogeneous voxels, and therefore functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity within an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength

  7. Computer Modeling of Protocellular Functions: Peptide Insertion in Membranes

    NASA Technical Reports Server (NTRS)

    Rodriquez-Gomez, D.; Darve, E.; Pohorille, A.

    2006-01-01

    Lipid vesicles became the precursors to protocells by acquiring the capabilities needed to survive and reproduce. These include transport of ions, nutrients and waste products across cell walls and capture of energy and its conversion into a chemically usable form. In modern organisms these functions are carried out by membrane-bound proteins (about 30% of the genome codes for this kind of protein). A number of properties of alpha-helical peptides suggest that their associations are excellent candidates for protobiological precursors of proteins. In particular, some simple alpha-helical peptides can aggregate spontaneously and form functional channels. This process can be described conceptually by a three-step thermodynamic cycle: 1 - folding of helices at the water-membrane interface, 2 - helix insertion into the lipid bilayer and 3 - specific interactions of these helices that result in functional tertiary structures. Although a crucial step, helix insertion has not been adequately studied because of the insolubility and aggregation of hydrophobic peptides. In this work, we use computer simulation methods (Molecular Dynamics) to characterize the energetics of helix insertion, and we discuss its importance in an evolutionary context. Specifically, helices could self-assemble only if their interactions were sufficiently strong to compensate for the unfavorable free energy of insertion of individual helices into membranes, providing a selection mechanism for protobiological evolution.

  8. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    NASA Astrophysics Data System (ADS)

    Constantinescu, C. C.; Yoder, K. K.; Kareken, D. A.; Bouman, C. A.; O'Connor, S. J.; Normandin, M. D.; Morris, E. D.

    2008-03-01

    We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest & activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (FDA(t)) and the change in binding potential (ΔBP). The veracity of the FDA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between FDA(t) and the free raclopride (FRAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover FDA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the FDA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of FDA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.

  9. Computation of Multimodal Size-Velocity-Temperature Spray Distribution Functions

    NASA Astrophysics Data System (ADS)

    Archambault, Mark R.

    2002-09-01

    An alternative approach to modeling spray flows, one which does not involve simulation or stochastic integration, is to directly compute the evolution of the probability density function (PDF) describing the drops. The purpose of this paper is to continue exploring an alternative method of solving the spray flow problem. The approach is to derive and solve a set of Eulerian moment transport equations for the quantities of interest in the spray, coupled with the appropriate gas-phase (Eulerian) equations. A second purpose is to continue to explore how a maximum-entropy criterion may be used to provide closure for such a moment-based model. The hope is to further develop an Eulerian-Eulerian model that will permit one to solve for detailed droplet statistics directly, without the use of stochastic integration or post-averaging of simulations.

  10. Imaging local brain function with emission computed tomography

    SciTech Connect

    Kuhl, D.E.

    1984-03-01

    Positron emission tomography (PET) using 18F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in depressed patients and is studded with multiple metabolic defects in patients with multiple infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.

  11. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296

  12. Computational Effective Fault Detection by Means of Signature Functions

    PubMed Central

    Baranski, Przemyslaw; Pietrzak, Piotr

    2016-01-01

    The paper presents a computationally effective method for fault detection. A system's responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system's response is projected into this space, and the signal's location in this space readily determines the fault. No classifier such as a neural network, hidden Markov model, etc. is required. The advantage of the proposed method is its efficiency, as computing the projections amounts to calculating dot products. Therefore, this method is suitable for real-time embedded systems due to its simplicity and undemanding processing requirements, which permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application in which an industrial moulding process is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during operation. Typically, the process is stopped periodically to manually check the alignment. The applied algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die. This enables failures to be detected at an early stage, thus prolonging the machine's life. PMID:26949942

  13. Computational Effective Fault Detection by Means of Signature Functions.

    PubMed

    Baranski, Przemyslaw; Pietrzak, Piotr

    2016-01-01

    The paper presents a computationally effective method for fault detection. A system's responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system's response is projected into this space, and the signal's location in this space readily determines the fault. No classifier such as a neural network, hidden Markov model, etc. is required. The advantage of the proposed method is its efficiency, as computing the projections amounts to calculating dot products. Therefore, this method is suitable for real-time embedded systems due to its simplicity and undemanding processing requirements, which permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application in which an industrial moulding process is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during operation. Typically, the process is stopped periodically to manually check the alignment. The applied algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die. This enables failures to be detected at an early stage, thus prolonging the machine's life. PMID:26949942
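
    A minimal sketch of the projection step described above, assuming the signature functions are stored as unit-norm vectors and that the nearest signature labels the fault; the function names and the nearest-signature decision rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def diagnose(response, signatures, labels):
    """Project the current response onto each signature function by a dot
    product and report the label of the best-matching signature."""
    sigs = np.asarray(signatures, dtype=float)
    sigs = sigs / np.linalg.norm(sigs, axis=1, keepdims=True)  # unit-norm signatures
    coords = sigs @ np.asarray(response, dtype=float)          # projections = dot products
    return labels[int(np.argmax(coords))], coords
```

    Because the whole computation reduces to a handful of dot products, it fits comfortably on low-cost embedded hardware, which is the point the authors emphasize.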

  14. Assessing executive function using a computer game: computational modeling of cognitive processes.

    PubMed

    Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha

    2014-07-01

    Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT), which is known to measure executive function as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and to use the parameter values to estimate TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home. PMID:25014944

  15. Chemical Visualization of Boolean Functions: A Simple Chemical Computer

    NASA Astrophysics Data System (ADS)

    Blittersdorf, R.; Müller, J.; Schneider, F. W.

    1995-08-01

    We present a chemical realization of the Boolean functions AND, OR, NAND, and NOR with a neutralization reaction carried out in three coupled continuous-flow stirred tank reactors (CSTRs). Two of these CSTRs are used as input reactors; the third reactor marks the output. The chemical reaction is the neutralization of hydrochloric acid (HCl) with sodium hydroxide (NaOH) in the presence of phenolphthalein as an indicator, which is red in alkaline solutions and colorless in acidic solutions, representing the two binary states 1 and 0, respectively. The time required for a "chemical computation" is determined by the flow rate of reactant solutions into the reactors, since the neutralization reaction itself is very fast. While the acid flow to all reactors is equal and constant, the flow rate of NaOH solution controls the states of the input reactors. The connectivities between the input and output reactors determine the flow rate of NaOH solution into the output reactor, according to the chosen Boolean function. Thus the state of the output reactor depends on the states of the input reactors.

  16. Computing black hole partition functions from quasinormal modes

    NASA Astrophysics Data System (ADS)

    Arnold, Peter; Szepietowski, Phillip; Vaman, Diana

    2016-07-01

    We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.

  17. Non-parametric analysis of the rest-frame UV sizes and morphological disturbance amongst L* galaxies at 4 < z < 8

    NASA Astrophysics Data System (ADS)

    Curtis-Lake, E.; McLure, R. J.; Dunlop, J. S.; Rogers, A. B.; Targett, T.; Dekel, A.; Ellis, R. S.; Faber, S. M.; Ferguson, H. C.; Grogin, N. A.; Kocevski, D. D.; Koekemoer, A. M.; Lai, K.; Mármol-Queraltó, E.; Robertson, B. E.

    2016-03-01

    We present the results of a study investigating the sizes and morphologies of redshift 4 < z < 8 galaxies in the CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey) GOODS-S (Great Observatories Origins Deep Survey southern field), HUDF (Hubble Ultra-Deep Field) and HUDF parallel fields. Based on non-parametric measurements and incorporating a careful treatment of measurement biases, we quantify the typical size of galaxies at each redshift as the peak of the lognormal size distribution, rather than the arithmetic mean size. Parametrizing the evolution of galaxy half-light radius as r50 ∝ (1 + z)^n, we find n = -0.20 ± 0.26 at bright UV-luminosities (0.3L*(z = 3) < L < L*) and n = -0.47 ± 0.62 at faint luminosities (0.12L* < L < 0.3L*). Furthermore, simulations based on artificially redshifting our z ˜ 4 galaxy sample show that we cannot reject the null hypothesis of no size evolution. We show that this result is caused by a combination of the size-dependent completeness of high-redshift galaxy samples and the underestimation of the sizes of the largest galaxies at a given epoch. To explore the evolution of galaxy morphology we first compare asymmetry measurements to those from a large sample of simulated single Sérsic profiles, in order to robustly categorize galaxies as either `smooth' or `disturbed'. Comparing the disturbed fraction amongst bright (M1500 ≤ -20) galaxies at each redshift to that obtained by artificially redshifting our z ˜ 4 galaxy sample, while carefully matching the size and UV-luminosity distributions, we find no clear evidence for evolution in galaxy morphology over the redshift interval 4 < z < 8. Therefore, based on our results, a bright (M1500 ≤ -20) galaxy at z ˜ 6 is no more likely to be measured as `disturbed' than a comparable galaxy at z ˜ 4, given the current observational constraints.
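
    The parametrization r50 ∝ (1 + z)^n can be fit by simple linear regression in log space; a hedged sketch (variable names are illustrative and none of the measurement biases discussed above are modeled):

```python
import numpy as np

def size_evolution_index(z, r50):
    """Fit log r50 = n * log(1 + z) + const and return the power-law index n."""
    n, _ = np.polyfit(np.log1p(np.asarray(z, dtype=float)),
                      np.log(np.asarray(r50, dtype=float)), 1)
    return n
```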

  18. Analysis of long term meteorological trends in the middle and lower Indus Basin of Pakistan-A non-parametric statistical approach

    NASA Astrophysics Data System (ADS)

    Ahmad, Waqas; Fatima, Aamira; Awan, Usman Khalid; Anwar, Arif

    2014-11-01

    The Indus basin of Pakistan is vulnerable to climate change, which would directly affect the livelihoods of poor people engaged in irrigated agriculture. The situation could be worse in the middle and lower parts of this basin, which occupy 90% of the irrigated area. The objective of this research is to analyze the long term meteorological trends in the middle and lower parts of the Indus basin of Pakistan. We used monthly data from 1971 to 2010 and applied the non-parametric seasonal Kendall test for trend detection, in combination with the seasonal Kendall slope estimator to quantify the magnitude of trends. The meteorological parameters considered were mean maximum and mean minimum air temperature, and rainfall, from 12 meteorological stations located in the study region. We examined the reliability and spatial integrity of the data by mass-curve analysis and spatial correlation matrices, respectively. Analysis was performed for four seasons (spring: March to May, summer: June to August, fall: September to November and winter: December to February). The results show that maximum temperature has an average increasing trend of magnitude +0.16, +0.03, 0.0 and +0.04 °C/decade in the four seasons, respectively. The average trend of minimum temperature in the four seasons is also increasing, with magnitudes of +0.29, +0.12, +0.36 and +0.36 °C/decade, respectively. Persistence of the increasing trend is more pronounced in the minimum temperature than in the maximum temperature on an annual basis. Analysis of the rainfall data has not shown any noteworthy trend during winter, fall or on an annual basis. During the spring and summer seasons, however, the rainfall trends vary from -1.15 to +0.93 and -3.86 to +2.46 mm/decade, respectively. It is further revealed that rainfall trends during all seasons are statistically non-significant. Overall the study area is under a significant warming trend with no change in rainfall.
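
    A minimal sketch of the seasonal Kendall statistic and a Sen-type slope, assuming a complete monthly series with no ties or missing values; the published analysis uses the full test with variance and tie corrections, which are omitted here:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic for one season: sum of signs of all later-minus-earlier differences."""
    x = np.asarray(x, dtype=float)
    return int(sum(np.sign(x[i + 1:] - x[i]).sum() for i in range(len(x) - 1)))

def seasonal_kendall(monthly, period=12):
    """Seasonal Kendall sketch: compute S within each calendar month and sum, so only
    like seasons are compared; the slope is the median of within-season pairwise slopes
    (per time step; conversion to per-decade units is left to the caller)."""
    x = np.asarray(monthly, dtype=float)
    s_total = sum(mann_kendall_s(x[m::period]) for m in range(period))
    slopes = [(xs[j] - xs[i]) / (j - i)
              for m in range(period)
              for xs in [x[m::period]]
              for i in range(len(xs) - 1)
              for j in range(i + 1, len(xs))]
    return s_total, float(np.median(slopes))
```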

  19. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers

    PubMed Central

    2012-01-01

    Background Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12) – when binary scored – were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech’s “well-being” and “distress” clinical scales). An illustration of ordinal item analysis confirmed that all 14

  20. Has DRG payment influenced the technical efficiency and productivity of diagnostic technologies in Portuguese public hospitals? An empirical analysis using parametric and non-parametric methods.

    PubMed

    Dismuke, C E; Sena, V

    1999-05-01

    The use of Diagnosis Related Groups (DRG) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to gradually phase in the use of the DRG system for budget allocation by using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995 when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992-1994, using both parametric and non-parametric frontier models. We find evidence

  1. HANOIPC3: a computer program to evaluate executive functions.

    PubMed

    Guevara, M A; Rizo, L; Ruiz-Díaz, M; Hernández-González, M

    2009-08-01

    This article describes a computer program (HANOIPC3) based on the Tower of Hanoi game that, by analyzing a series of parameters during execution, allows fast and accurate evaluation of data related to certain executive functions, especially planning, organizing and problem-solving. This computerized version has only one level of difficulty, based on the use of 3 disks, but it stipulates an additional rule: only one disk may be moved at a time, and only to an adjacent peg (i.e., no peg can be skipped over). In the original version, without this stipulation, the minimum number of movements required to complete the task is 7, but under the conditions of this computerized version it increases to 26. HANOIPC3 has three important advantages: (1) it allows a researcher or clinician to modify the rules by adding or removing certain conditions, increasing flexibility in test execution and in the interpretation of results; (2) it can provide on-line feedback to subjects about their performance; and (3) it creates a specific file to store the scores that correspond to the parameters obtained during trials. The parameters that can be measured include latencies (time taken for each movement, measured in seconds), total test time, total number of movements, and the number of correct and incorrect movements. The efficacy and adaptability of this program have been confirmed. PMID:19303660
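
    The figure of 26 moves follows from the adjacency rule alone; a small breadth-first search over game states reproduces it (a sketch independent of HANOIPC3 itself, with peg 0 as the start peg and peg 2 as the goal):

```python
from collections import deque

def min_moves_adjacent_hanoi(n_disks=3):
    """Shortest solution of 3-peg Tower of Hanoi when a disk may only move to an
    adjacent peg. A state records, for each disk (0 = smallest), the peg it sits on."""
    start, goal = (0,) * n_disks, (2,) * n_disks
    seen = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return seen[state]
        tops = {}                                 # top (smallest) disk on each peg
        for disk, peg in enumerate(state):
            tops.setdefault(peg, disk)
        for peg, disk in tops.items():
            for adj in (peg - 1, peg + 1):        # adjacent pegs only
                if 0 <= adj <= 2 and tops.get(adj, n_disks) > disk:
                    nxt = list(state)
                    nxt[disk] = adj
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen[nxt] = seen[state] + 1
                        queue.append(nxt)

print(min_moves_adjacent_hanoi(3))   # 26
```

    Without the adjacency rule the 3-disk minimum is 2^3 - 1 = 7; with it, the end-to-end minimum becomes 3^3 - 1 = 26, matching the values quoted above.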

  2. AUTO-IK: a 2D indicator kriging program for the automated non-parametric modeling of local uncertainty in earth sciences

    PubMed Central

    Goovaerts, P.

    2008-01-01

    Indicator kriging provides a flexible interpolation approach that is well suited to datasets where: 1) many observations are below the detection limit, 2) the histogram is strongly skewed, or 3) specific classes of attribute values are better connected in space than others (e.g. low pollutant concentrations). Applying indicator kriging to its full potential requires, however, the tedious inference and modeling of multiple indicator semivariograms, as well as post-processing of the results to retrieve attribute estimates and associated measures of uncertainty. This paper presents a computer code that automatically performs the following tasks: selection of thresholds for binary coding of continuous data, computation and modeling of indicator semivariograms, modeling of probability distributions at unmonitored locations (regular or irregular grids), and estimation of the mean and variance of these distributions. The program also offers tools for quantifying the goodness of the model of uncertainty within cross-validation and jack-knife frameworks. The different functionalities are illustrated using heavy metal concentrations from the well-known Jura soil dataset. A sensitivity analysis demonstrates the benefit of using more thresholds when indicator kriging is implemented with a linear interpolation model, in particular for variables with positively skewed histograms. PMID:20161335
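
    The first step automated by the program, binary (indicator) coding of continuous observations at a set of thresholds, is simple to sketch; threshold selection, semivariogram modeling and the kriging itself are the substantive parts of AUTO-IK and are not reproduced here:

```python
import numpy as np

def indicator_transform(values, thresholds):
    """Indicator coding: i_k(u) = 1 if z(u) <= z_k, else 0, for each threshold z_k.
    Returns an (n_observations, n_thresholds) array of 0/1 codes."""
    z = np.asarray(values, dtype=float)[:, None]
    zk = np.asarray(thresholds, dtype=float)[None, :]
    return (z <= zk).astype(int)
```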

  3. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    PubMed

    Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn

    2015-01-01

    -stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed. PMID:26301872

  4. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus)

    PubMed Central

    Carr, Steven M.; Duggan, Ana T.; Stenson, Garry B.; Marshall, H. Dawn

    2015-01-01

    -stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed. PMID:26301872

  5. Enhancing functionality and performance in the PVM network computing system

    SciTech Connect

    Sunderam, V.

    1996-09-01

    The research funded by this grant is part of an ongoing research project in heterogeneous distributed computing with the PVM system, at Emory as well as at Oak Ridge Labs and the University of Tennessee. This grant primarily supports research at Emory that continues to evolve new concepts and systems in distributed computing, but it also includes the PI's ongoing interaction with the other groups in terms of collaborative research as well as software systems development and maintenance. We have continued our second year efforts (July 1995 - June 1996) on the same topics as during the first year, namely (a) visualization of PVM programs to complement XPVM displays; (b) I/O and generalized distributed computing in PVM; and (c) evolution of a multithreaded concurrent computing model. 12 refs.

  6. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
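
    A sketch of the underlying idea: second-order descriptors of offset pixel pairs can be computed from running sums, sums of squares, and cross products, without ever forming the co-occurrence matrix. The specific descriptors and offsets below are illustrative choices, not the paper's exact set:

```python
import numpy as np

def cooccurrence_moments(image, dx=1, dy=0):
    """Compute pair mean, variance and correlation for pixels offset by (dx, dy),
    using only sums, sums of squares and cross products of the pair values."""
    img = np.asarray(image, dtype=float)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # offset neighbours
    n = a.size
    mean_a, mean_b = a.sum() / n, b.sum() / n
    var_a = (a * a).sum() / n - mean_a ** 2
    var_b = (b * b).sum() / n - mean_b ** 2
    cov = (a * b).sum() / n - mean_a * mean_b
    corr = cov / np.sqrt(var_a * var_b) if var_a > 0 and var_b > 0 else 0.0
    return {"mean": mean_a, "variance": var_a, "correlation": corr}
```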

  7. Challenges in computational studies of enzyme structure, function and dynamics.

    PubMed

    Carvalho, Alexandra T P; Barrozo, Alexandre; Doron, Dvir; Kilshtain, Alexandra Vardi; Major, Dan Thomas; Kamerlin, Shina Caroline Lynn

    2014-11-01

    In this review we give an overview of the field of computational enzymology. We start by describing the birth of the field, with emphasis on the work of the 2013 Chemistry Nobel Laureates. We then present key features of the state of the art in the field, showing what theory, accompanied by experiments, has taught us so far about enzymes. We also briefly describe computational methods, such as quantum mechanics-molecular mechanics approaches, reaction coordinate treatments, and free energy simulation approaches. We conclude by discussing open questions and challenges. PMID:25306098

  8. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.

    1983-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)

  9. Computer routines for probability distributions, random numbers, and related functions

    USGS Publications Warehouse

    Kirby, W.H.

    1980-01-01

    Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I0, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
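
    For orientation, most of the distributions listed in these reports have present-day equivalents in SciPy; the calls below are a modern analogy, not the USGS Fortran interface:

```python
from scipy import stats

p3  = stats.pearson3(skew=1.0).cdf(2.5)    # Pearson Type III CDF
wbl = stats.weibull_min(c=1.5).ppf(0.99)   # Weibull quantile
ksd = stats.kstwobign.sf(1.36)             # tail probability of Kolmogorov's D (asymptotic)
nct = stats.nct(df=10, nc=1.0).cdf(2.0)    # noncentral t CDF
```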

  10. A Functional Analytic Approach to Computer-Interactive Mathematics

    ERIC Educational Resources Information Center

    Ninness, Chris; Rumph, Robin; McCuller, Glen; Harrison, Carol; Ford, Angela M.; Ninness, Sharon K.

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on…

  11. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    NASA Astrophysics Data System (ADS)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of real space (R³) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (⋃_{i=1}^{m} Ω_i = R³), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions, which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary: Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer

  12. Spaceborne computer executive routine functional design specification. Volume 2: Computer executive design for space station/base

    NASA Technical Reports Server (NTRS)

    Kennedy, J. R.; Fitzpatrick, W. S.

    1971-01-01

    The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.

  13. Functional requirements for design of the Space Ultrareliable Modular Computer (SUMC) system simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.; Hornfeck, W. A.

    1972-01-01

    The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.

  14. A high-radix CORDIC architecture dedicated to compute the Gaussian potential function in neural networks

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe H.; Meyer-Baese, Anke; Ramirez, Javier; Garcia, Antonio

    2003-08-01

    In this paper, a new parallel hardware architecture dedicated to computing the Gaussian potential function is proposed. This function is commonly utilized in neural radial-basis classifiers for pattern recognition, as described by Lee; Girosi and Poggio; and Musavi et al. Attention is confined to a simplified Gaussian potential function which processes uncorrelated features. The operations of most interest in the Gaussian potential function are the exponential and the square function. Our hardware computes the exponential function and its exponent at the same time. The contributions of all features to the exponent are computed in parallel. This parallelism reduces the computational delay of the output function, and the duration does not depend on the number of features processed. Software and hardware case studies are presented to evaluate the new CORDIC architecture.
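
    The simplified Gaussian potential function for uncorrelated features is an exponential of a sum of independent per-feature terms, which is what the architecture accumulates in parallel; a plain software sketch (names are illustrative):

```python
import numpy as np

def gaussian_potential(x, center, sigma):
    """Simplified Gaussian potential function for uncorrelated features,
    exp(-sum_i ((x_i - c_i) / sigma_i)**2), as used in radial-basis classifiers."""
    x, center, sigma = (np.asarray(v, dtype=float) for v in (x, center, sigma))
    exponent = np.sum(((x - center) / sigma) ** 2)   # independently computable terms
    return np.exp(-exponent)
```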

  15. Using computational models to relate structural and functional brain connectivity

    PubMed Central

    Hlinka, Jaroslav; Coombes, Stephen

    2012-01-01

    Modern imaging methods allow a non-invasive assessment of both structural and functional brain connectivity. This has led to the identification of disease-related alterations affecting functional connectivity. The mechanism by which such alterations in functional connectivity arise in a structured network of interacting neural populations is as yet poorly understood. Here we use a modeling approach to explore the way in which this can arise and to highlight the important role that local population dynamics can have in shaping emergent spatial functional connectivity patterns. The local dynamics for a neural population is taken to be of the Wilson-Cowan type, whilst the structural connectivity patterns used, describing long-range anatomical connections, cover both realistic scenarios (from the CoCoMac database) and idealized ones that allow for more detailed theoretical study. We have calculated graph-theoretic measures of functional network topology from numerical simulations of model networks. The effect of the form of local dynamics on the observed network state is quantified by examining the correlation between structural and functional connectivity. We document a profound and systematic dependence of the simulated functional connectivity patterns on the parameters controlling the dynamics. Importantly, we show that a weakly coupled oscillator theory explaining these correlations and their variation across parameter space can be developed. This theoretical development provides a novel way to characterize the mechanisms for the breakdown of functional connectivity in diseases through changes in local dynamics. PMID:22805059

  16. Introduction to Classical Density Functional Theory by a Computational Experiment

    ERIC Educational Resources Information Center

    Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel

    2014-01-01

    We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…

  17. The computational foundations of time dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Whitfield, James

    2014-03-01

    The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. Since a quantum computer can efficiently produce such time-dependent densities, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds. Further, we find that systems do not immediately become non-representable but rather become ill-representable as one approaches this boundary. A representability parameter is defined in our work which quantifies the distance to the boundary of representability and the computational difficulty of finding the Kohn-Sham system.

  18. Computational approaches to identify functional genetic variants in cancer genomes

    PubMed Central

    Gonzalez-Perez, Abel; Mustonen, Ville; Reva, Boris; Ritchie, Graham R.S.; Creixell, Pau; Karchin, Rachel; Vazquez, Miguel; Fink, J. Lynn; Kassahn, Karin S.; Pearson, John V.; Bader, Gary; Boutros, Paul C.; Muthuswamy, Lakshmi; Ouellette, B.F. Francis; Reimand, Jüri; Linding, Rune; Shibata, Tatsuhiro; Valencia, Alfonso; Butler, Adam; Dronov, Serge; Flicek, Paul; Shannon, Nick B.; Carter, Hannah; Ding, Li; Sander, Chris; Stuart, Josh M.; Stein, Lincoln D.; Lopez-Bigas, Nuria

    2014-01-01

    The International Cancer Genome Consortium (ICGC) aims to catalog genomic abnormalities in tumors from 50 different cancer types. Genome sequencing reveals hundreds to thousands of somatic mutations in each tumor, but only a minority drive tumor progression. We present the result of discussions within the ICGC on how to address the challenge of identifying mutations that contribute to oncogenesis, tumor maintenance or response to therapy, and recommend computational techniques to annotate somatic variants and predict their impact on cancer phenotype. PMID:23900255

  19. A brain-computer interface to support functional recovery.

    PubMed

    Kjaer, Troels W; Sørensen, Helge B

    2013-01-01

    Brain-computer interfaces (BCI) register changes in brain activity and utilize this to control computers. The most widely used method is based on registration of electrical signals from the cerebral cortex using extracranially placed electrodes also called electroencephalography (EEG). The features extracted from the EEG may, besides controlling the computer, also be fed back to the patient for instance as visual input. This facilitates a learning process. BCI allow us to utilize brain activity in the rehabilitation of patients after stroke. The activity of the cerebral cortex varies with the type of movement we imagine, and by letting the patient know the type of brain activity best associated with the intended movement the rehabilitation process may be faster and more efficient. The focus of BCI utilization in medicine has changed in recent years. While we previously focused on devices facilitating communication in the rather few patients with locked-in syndrome, much interest is now devoted to the therapeutic use of BCI in rehabilitation. For this latter group of patients, the device is not intended to be a lifelong assistive companion but rather a 'teacher' during the rehabilitation period. PMID:23859968

  20. Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.

    ERIC Educational Resources Information Center

    Snow, Donald R.

    1989-01-01

    Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
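
    The "table algorithm" amounts to filling each new power-series coefficient from earlier ones, as one would do column by column in a spreadsheet; a sketch for the two sequences named (a guess at the spirit of the article, not its spreadsheet layout):

```python
def series_table(n_terms=10):
    """Fibonacci coefficients come from F(x) = x / (1 - x - x^2); Catalan coefficients
    from the convolution recurrence implied by C(x) = 1 + x * C(x)^2."""
    fib = [0, 1]
    for n in range(2, n_terms):
        fib.append(fib[n - 1] + fib[n - 2])
    cat = [1]
    for n in range(1, n_terms):
        cat.append(sum(cat[i] * cat[n - 1 - i] for i in range(n)))
    return fib[:n_terms], cat[:n_terms]

print(series_table())
# ([0, 1, 1, 2, 3, 5, 8, 13, 21, 34], [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862])
```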

  1. Adaptive, associative, and self-organizing functions in neural computing.

    PubMed

    Kohonen, T

    1987-12-01

    This paper contains an attempt to describe certain adaptive and cooperative functions encountered in neural networks. The approach is a compromise between biological accuracy and mathematical clarity. Two types of differential equation seem to describe the basic effects underlying the formation of these functions: the equation for the electrical activity of the neuron and the adaptation equation that describes changes in its input connectivities. Various phenomena and operations are derivable from them: clustering of activity in a laterally interconnected network; adaptive formation of feature detectors; the autoassociative memory function; and self-organized formation of ordered sensory maps. The discussion attempts to identify which functions are readily amenable to analytical modeling and which phenomena seem to ensue from the more complex interactions that take place in the brain. PMID:20523469

  2. Multiple multiresolution representation of functions and calculus for fast computation

    SciTech Connect

    Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A

    2010-01-01

    We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, the multiresolution adaptive numerical environment for scientific simulation. In MADNESS, each smooth function is represented using an adaptive pseudo-spectral expansion in the multiwavelet basis to an arbitrary but finite precision. This extends the capabilities of most existing net-, mesh- and spectral-based methods, in which the discretization is based on a single adaptive mesh or expansion.

  3. Evaluation of computing systems using functionals of a Stochastic process

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Wu, L. T.

    1980-01-01

    An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, generally referred to as functionals of a Markov process, was considered. A closed form solution of performability for the case where performance is identified with the minimum value of a functional was developed.

  4. Computational strategies for the design of new enzymatic functions.

    PubMed

    Świderek, K; Tuñón, I; Moliner, V; Bertran, J

    2015-09-15

    In this contribution, recent developments in the design of biocatalysts are reviewed, with particular emphasis on the de novo strategy. Studies based on three different reactions (Kemp elimination, Diels-Alder and retro-aldolase) are used to illustrate the different degrees of success achieved during the last years. Finally, a section is devoted to the particular case of designed metalloenzymes. As a general conclusion, the interplay between new and more sophisticated engineering protocols and computational methods, based on molecular dynamics simulations with Quantum Mechanics/Molecular Mechanics potentials and fully flexible models, seems to constitute the bedrock for present and future successful design strategies. PMID:25797438

  5. Applications of a new wall function to turbulent flow computations

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1986-01-01

    A new wall function approach is developed based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients. This wall law was derived from a one-dimensional analysis of the turbulent kinetic energy equation, with a gradient diffusion concept employed in modeling the near-wall shear stress gradient. Numerical test cases for the present wall functions include turbulent separating flows around an airfoil and turbulent recirculating flows in several confined regions. Improvements in the predictions obtained with the present wall functions are illustrated. For cases of internal recirculating flows, a modification factor for improving the performance of the k-epsilon turbulence model in flow recirculation regions is also included.

  6. Bread dough rheology: Computing with a damage function model

    NASA Astrophysics Data System (ADS)

    Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong

    2015-01-01

    We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations (stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching) is used to test the model. With the introduction of a revised strain measure which includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.

  7. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps

    PubMed Central

    2016-01-01

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874

  8. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    SciTech Connect

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott

    2013-01-01

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  9. Memory intensive functional architecture for distributed computer control systems

    SciTech Connect

    Dimmler, D.G.

    1983-10-01

    A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation, a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  10. Frequency domain transfer function identification using the computer program SYSFIT

    SciTech Connect

    Trudnowski, D.J.

    1992-12-01

    Because the primary application of SYSFIT for BPA involves studying power system dynamics, this investigation was geared toward simulating the effects that might be encountered in studying electromechanical oscillations in power systems. Although the intended focus of this work is power system oscillations, the studies are sufficiently generic that the results can be applied to many types of oscillatory systems with closely-spaced modes. In general, there are two possible ways of solving the optimization problem. One is to use a least-squares optimization function and to write the system in such a form that the problem becomes one of linear least-squares. The solution can then be obtained using a standard least-squares technique. The other method involves using a search method to obtain the optimal model. This method allows considerably more freedom in forming the optimization function and model, but it requires an initial guess of the system parameters. SYSFIT employs this second approach. Detailed investigations were conducted into three main areas: (1) fitting to exact frequency response data of a linear system; (2) fitting to the discrete Fourier transformation of noisy data; and (3) fitting to multi-path systems. The first area consisted of investigating the effects of alternative optimization cost function options; using different optimization search methods; incorrect model order; missing response data; closely-spaced poles; and closely-spaced pole-zero pairs. Within the second area, different noise colorations and levels were studied. In the third area, methods were investigated for improving fitting results by incorporating more than one system path. The following is a list of guidelines and properties developed from the study for fitting a transfer function to the frequency response of a system using optimization search methods.
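
    To make the search-based approach above concrete, the following Python sketch (not SYSFIT itself) fits a simple second-order transfer function to noisy frequency-response samples using a nonlinear least-squares search which, as noted above, requires an initial guess of the parameters. The example system, parameter values and noise level are assumed for illustration only.

        import numpy as np
        from scipy.optimize import least_squares

        w = np.logspace(-1, 2, 200)                        # frequency grid (rad/s)
        b0, a1, a0 = 4.0, 0.4, 4.0                         # hypothetical "measured" system
        H_meas = b0 / ((1j * w) ** 2 + a1 * (1j * w) + a0)
        H_meas = H_meas + 0.01 * (np.random.randn(w.size) + 1j * np.random.randn(w.size))

        def residual(p):
            # model H(s) = p0 / (s^2 + p1*s + p2), evaluated at s = jw
            H = p[0] / ((1j * w) ** 2 + p[1] * (1j * w) + p[2])
            err = H - H_meas
            return np.concatenate([err.real, err.imag])    # stack real and imaginary residuals

        fit = least_squares(residual, x0=[1.0, 1.0, 1.0])  # the search needs an initial guess
        print(fit.x)                                       # close to (4.0, 0.4, 4.0)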

  11. Computational properties of three-term recurrence relations for Kummer functions

    NASA Astrophysics Data System (ADS)

    Deaño, Alfredo; Segura, Javier; Temme, Nico M.

    2010-01-01

    Several three-term recurrence relations for confluent hypergeometric functions are analyzed from a numerical point of view. Minimal and dominant solutions for complex values of the variable z are given, derived from asymptotic estimates of the Whittaker functions with large parameters. The Laguerre polynomials and the regular Coulomb wave functions are studied as particular cases, with numerical examples of their computation.

  12. A Functional Analytic Approach To Computer-Interactive Mathematics

    PubMed Central

    2005-01-01

    Following a pretest, 11 participants who were naive with regard to various algebraic and trigonometric transformations received an introductory lecture regarding the fundamentals of the rectangular coordinate system. Following the lecture, they took part in a computer-interactive matching-to-sample procedure in which they received training on particular formula-to-formula and formula-to-graph relations as these formulas pertain to reflections and vertical and horizontal shifts. In training A-B, standard formulas served as samples and factored formulas served as comparisons. In training B-C, factored formulas served as samples and graphs served as comparisons. Subsequently, the program assessed for mutually entailed B-A and C-B relations as well as combinatorially entailed C-A and A-C relations. After all participants demonstrated mutual entailment and combinatorial entailment, we employed a test of novel relations to assess 40 different and complex variations of the original training formulas and their respective graphs. Six of 10 participants who completed training demonstrated perfect or near-perfect performance in identifying novel formula-to-graph relations. Three of the 4 participants who made more than three incorrect responses during the assessment of novel relations showed some commonality among their error patterns. Derived transfer of stimulus control using mathematical relations is discussed. PMID:15898471

  13. Toward high-resolution computational design of helical membrane protein structure and function

    PubMed Central

    Barth, Patrick; Senes, Alessandro

    2016-01-01

    The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630

  14. Computer programs for calculation of thermodynamic functions of mixing in crystalline solutions

    NASA Technical Reports Server (NTRS)

    Comella, P. A.; Saxena, S. K.

    1972-01-01

    The computer programs Beta, GEGIM, REGSOL1, REGSOL2, Matrix, and Quasi are presented. The programs are useful in various calculations for the thermodynamic functions of mixing and the activity-composition relations in rock forming minerals.

  15. Functions and Requirements and Specifications for Replacement of the Computer Automated Surveillance System (CASS)

    SciTech Connect

    SCAIEF, C.C.

    1999-12-16

    This functions, requirements and specifications document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of the system to replace the Computer Automated Surveillance System (CASS) alarm monitoring.

  16. Computation of Schenberg response function by using finite element modelling

    NASA Astrophysics Data System (ADS)

    Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.

    2016-05-01

    Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonating sphere, distributed as a half-dodecahedron, are used to monitor the strain amplitude. The development of mechanical impedance matchers that act by increasing the coupling of the transducers with the sphere is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of a simplified mass-spring model to verify whether the latter is suitable for determining the detector sensitivity; both models give the same results.

  17. Computational complexity of time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Whitfield, J. D.; Yung, M.-H.; Tempel, D. G.; Boixo, S.; Aspuru-Guzik, A.

    2014-08-01

    Time-dependent density functional theory (TDDFT) is rapidly emerging as a premier method for solving dynamical many-body problems in physics and chemistry. The mathematical foundations of TDDFT are established through the formal existence of a fictitious non-interacting system (known as the Kohn-Sham system), which can reproduce the one-electron reduced probability density of the actual system. We build upon these works and show that on the interior of the domain of existence, the Kohn-Sham system can be efficiently obtained given the time-dependent density. We introduce a V-representability parameter which diverges at the boundary of the existence domain and serves to quantify the numerical difficulty of constructing the Kohn-Sham potential. For bounded values of V-representability, we present a polynomial time quantum algorithm to generate the time-dependent Kohn-Sham potential with controllable error bounds.

  18. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
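
    A minimal numerical illustration of the point about the direct division method: expanding X(z) in powers of z^-1 (equivalently, computing the impulse response of the corresponding discrete-time system) yields the sample values x(k) but not a closed-form expression. The transfer function below is assumed for illustration.

        import numpy as np
        from scipy import signal

        # X(z) = z / (z - 0.5); its analytical inverse Z-transform is x(k) = 0.5**k
        num, den = [1.0, 0.0], [1.0, -0.5]
        t, y = signal.dimpulse((num, den, 1.0), n=8)        # numeric samples only, no formula
        x = np.squeeze(y[0])
        print(np.allclose(x, 0.5 ** np.arange(8)))          # True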

  19. Effects of Computer versus Paper Administration of an Adult Functional Writing Assessment

    ERIC Educational Resources Information Center

    Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young

    2011-01-01

    This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…

  20. Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...

  1. A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry

    ERIC Educational Resources Information Center

    Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan

    2013-01-01

    A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…

  2. A Functional Specification for a Programming Language for Computer Aided Learning Applications.

    ERIC Educational Resources Information Center

    National Research Council of Canada, Ottawa (Ontario).

    In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…

  3. Theoretical and computational studies in protein folding, design, and function

    NASA Astrophysics Data System (ADS)

    Morrissey, Michael Patrick

    2000-10-01

    In this work, simplified statistical models are used to understand an array of processes related to protein folding and design. In Part I, lattice models are utilized to test several theories about the statistical properties of protein-like systems. In Part II, sequence analysis and all-atom simulations are used to advance a novel theory for the behavior of a particular protein. Part I is divided into five chapters. In Chapter 2, a method of sequence design for model proteins, based on statistical mechanical first-principles, is developed. The cumulant design method uses a mean-field approximation to expand the free energy of a sequence in temperature. The method successfully designs sequences which fold to a target lattice structure at a specific temperature, a feat which was not possible using previous design methods. The next three chapters are computational studies of the double mutant cycle, which has been used experimentally to predict intra-protein interactions. Complete structure prediction is demonstrated for a model system using exhaustive, and also sub-exhaustive, double mutants. Nonadditivity of enthalpy, rather than of free energy, is proposed and demonstrated to be a superior marker for inter-residue contact. Next, a new double mutant protocol, called exchange mutation, is introduced. Although simple statistical arguments predict exchange mutation to be a more accurate contact predictor than standard mutant cycles, this hypothesis was not upheld in lattice simulations. Reasons for this inconsistency will be discussed. Finally, a multi-chain folding algorithm is introduced. Known as LINKS, this algorithm was developed to test a method of structure prediction which utilizes chain-break mutants. While structure prediction was not successful, LINKS should nevertheless be a useful tool for the study of protein-protein and protein-ligand interactions. The last chapter of Part I utilizes the lattice to explore the differences between standard folding, from

  4. A Computer Program for the Computation of Running Gear Temperatures Using Green's Function

    NASA Technical Reports Server (NTRS)

    Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.

    1996-01-01

    A new technique has been developed to study two dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time dependent coefficients that can be solved for by using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.

  5. Software to estimate –33 and –1500 kPa soil water retention using the non-parametric k-Nearest Neighbor technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A computer tool has been developed that uses a k-Nearest Neighbor (k-NN) lazy learning algorithm to estimate soil water retention at –33 and –1500 kPa matric potentials and its uncertainty. The user can customize the provided source data collection to accommodate specific local needs. Ad hoc calcula...
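
    A minimal sketch of the k-NN estimation idea in Python with scikit-learn; the predictor variables, source data values and neighbor settings below are hypothetical placeholders, not the tool's actual source data collection.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        # columns: sand %, clay %, organic matter %, bulk density (g/cm^3) -- hypothetical
        X_source = np.array([[65, 10, 1.2, 1.50],
                             [30, 35, 2.5, 1.35],
                             [45, 20, 1.8, 1.42],
                             [20, 45, 3.0, 1.30],
                             [55, 15, 1.0, 1.48]])
        # targets: volumetric water content at -33 and -1500 kPa -- hypothetical values
        y_source = np.array([[0.22, 0.10],
                             [0.34, 0.21],
                             [0.28, 0.14],
                             [0.38, 0.25],
                             [0.24, 0.11]])

        knn = KNeighborsRegressor(n_neighbors=3, weights="distance")
        knn.fit(X_source, y_source)
        print(knn.predict([[40, 25, 2.0, 1.40]]))   # estimated retention at -33 and -1500 kPa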

  6. A mesh-decoupled height function method for computing interface curvature

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Desjardins, Olivier

    2015-01-01

    In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.
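
    For reference, a minimal Python sketch of the standard height-function curvature estimate that the proposed method generalizes: column heights are finite-differenced to give kappa = h'' / (1 + h'^2)^(3/2). The circular test interface is assumed for illustration.

        import numpy as np

        def height_function_curvature(h_minus, h_0, h_plus, dx):
            """Curvature at the central column of a 3-column stencil of heights."""
            h1 = (h_plus - h_minus) / (2.0 * dx)            # central first derivative
            h2 = (h_plus - 2.0 * h_0 + h_minus) / dx ** 2   # central second derivative
            return h2 / (1.0 + h1 ** 2) ** 1.5

        # columns sampled from a circle of radius 2, whose curvature is 1/2
        dx = 0.1
        x = np.array([-dx, 0.0, dx])
        h = np.sqrt(4.0 - x ** 2)                           # interface height above y = 0
        print(abs(height_function_curvature(h[0], h[1], h[2], dx)))   # approximately 0.5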

  7. PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS

    PubMed Central

    Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.

    2013-01-01

    Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures, however rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390

  8. Functional Competency Development Model for Academic Personnel Based on International Professional Qualification Standards in Computing Field

    ERIC Educational Resources Information Center

    Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon

    2016-01-01

    This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…

  9. Incompressible flow computations based on the vorticity-stream function and velocity-pressure formulations

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Ganjoo, D. K.

    1990-01-01

    Finite element procedures and computations based on the velocity-pressure and vorticity-stream function formulations of incompressible flows are presented. Two new multistep velocity-pressure formulations are proposed and compared with the vorticity-stream function and one-step formulations. The example problems chosen are the standing vortex problem and flow past a circular cylinder. Benchmark quality computations are performed for the cylinder problem. The numerical results indicate that the vorticity-stream function formulation and one of the two new multistep formulations involve much less numerical dissipation than the one-step formulation.

  10. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease overall computational time requirements of spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.

  11. Functional Specifications for Computer Aided Training Systems Development and Management (CATSDM) Support Functions. Final Report.

    ERIC Educational Resources Information Center

    Hughes, John; And Others

    This report provides a description of a Computer Aided Training System Development and Management (CATSDM) environment based on state-of-the-art hardware and software technology, and including recommendations for off the shelf systems to be utilized as a starting point in addressing the particular systematic training and instruction design and…

  12. The Lung Physiome: merging imaging-based measures with predictive computational models of structure and function

    PubMed Central

    Tawhai, Merryn H; Hoffman, Eric A; Lin, Ching-Long

    2009-01-01

    Global measurements of the lung provided by standard pulmonary function tests do not give insight into the regional basis of lung function and lung disease. Advances in imaging methodologies, computer technologies, and subject-specific simulations are creating new opportunities for studying structure-function relationships in the lung through multi-disciplinary research. The digital Human Lung Atlas is an imaging-based resource compiled from male and female subjects spanning several decades of age. The Atlas comprises both structural and functional measures, and includes computational models derived to match individual subjects for personalized prediction of function. The computational models in the Atlas form part of the Lung Physiome project, which is an international effort to develop integrative models of lung function at all levels of biological organization. The computational models provide mechanistic interpretation of imaging measures; the Atlas provides structural data upon which to base model geometry, and functional data against which to test hypotheses. The example of simulating air flow on a subject-specific basis is considered. Methods for deriving multi-scale models of the airway geometry for individual subjects in the Atlas are outlined, and methods for modeling turbulent flows in the airway are reviewed. PMID:20835982

  13. Peak functions for modeling high resolution soil profile data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Parametric and non-parametric depth functions have been used to estimate continuous soil profile properties. However, some soil properties, such as those seen in weathered loess, have complex peaked and anisotropic depth distributions. These distributions are poorly handled by common parametric func...

  14. Computer/gaming station use in youth: Correlations among use, addiction and functional impairment

    PubMed Central

    Baer, Susan; Saran, Kelly; Green, David A

    2012-01-01

    OBJECTIVE: Computer/gaming station use is ubiquitous in the lives of youth today. Overuse is a concern, but it remains unclear whether problems arise from addictive patterns of use or simply excessive time spent on use. The goal of the present study was to evaluate computer/gaming station use in youth and to examine the relationship between amounts of use, addictive features of use and functional impairment. METHOD: A total of 110 subjects (11 to 17 years of age) from local schools participated. Time spent on television, video gaming and non-gaming recreational computer activities was measured. Addictive features of computer/gaming station use were ascertained, along with emotional/behavioural functioning. Multiple linear regressions were used to understand how youth functioning varied with time of use and addictive features of use. RESULTS: Mean (± SD) total screen time was 4.5±2.4 h/day. Addictive features of use were consistently correlated with functional impairment across multiple measures and informants, whereas time of use, after controlling for addiction, was not. CONCLUSIONS: Youth are spending many hours each day in front of screens. In the absence of addictive features of computer/gaming station use, time spent is not correlated with problems; however, youth with addictive features of use show evidence of poor emotional/ behavioural functioning. PMID:24082802

  15. Analytic computation of energy derivatives - Relationships among partial derivatives of a variationally determined function

    NASA Technical Reports Server (NTRS)

    King, H. F.; Komornicki, A.

    1986-01-01

    Formulas are presented relating Taylor series expansion coefficients of three functions of several variables, the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.

  16. Renormalization group improved computation of correlation functions in theories with nontrivial phase diagram

    NASA Astrophysics Data System (ADS)

    Codello, Alessandro; Tonero, Alberto

    2016-07-01

    We present a simple and consistent way to compute correlation functions in interacting theories with nontrivial phase diagram. As an example we show how to consistently compute the four-point function in three dimensional Z2 -scalar theories. The idea is to perform the path integral by weighting the momentum modes that contribute to it according to their renormalization group (RG) relevance, i.e. we weight each mode according to the value of the running couplings at that scale. In this way, we are able to encode in a loop computation the information regarding the RG trajectory along which we are integrating. We show that depending on the initial condition, or initial point in the phase diagram, we obtain different behaviors of the four-point function at the endpoint of the flow.

  17. Extended Krylov subspaces approximations of matrix functions. Application to computational electromagnetics

    SciTech Connect

    Druskin, V.; Lee, Ping; Knizhnerman, L.

    1996-12-31

    There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems, obtained after discretization of partial differential equations by method of lines. In the event that the cost of computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, originated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
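
    A minimal Python sketch of the standard (non-extended) Krylov approximation of the action of a matrix function, f(A)b ≈ ||b|| V_k f(H_k) e_1 with f = exp; the extended subspaces discussed above would additionally use actions of negative matrix powers, which is not shown here. The test matrix is assumed for illustration.

        import numpy as np
        from scipy.linalg import expm

        def krylov_expm_action(A, b, k):
            """Approximate exp(A) @ b from a k-dimensional Krylov (Arnoldi) subspace."""
            n = b.size
            V = np.zeros((n, k + 1))
            H = np.zeros((k + 1, k))
            beta = np.linalg.norm(b)
            V[:, 0] = b / beta
            for j in range(k):                              # Arnoldi process
                w = A @ V[:, j]
                for i in range(j + 1):
                    H[i, j] = V[:, i] @ w
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] > 1e-12:
                    V[:, j + 1] = w / H[j + 1, j]
            e1 = np.zeros(k)
            e1[0] = 1.0
            return beta * V[:, :k] @ (expm(H[:k, :k]) @ e1)

        rng = np.random.default_rng(0)
        A = -np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.standard_normal((100, 100))
        b = rng.standard_normal(100)
        print(np.linalg.norm(krylov_expm_action(A, b, 30) - expm(A) @ b))   # small error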

  18. Performance of computational tools in evaluating the functional impact of laboratory-induced amino acid mutations.

    PubMed

    Gray, Vanessa E; Kukurba, Kimberly R; Kumar, Sudhir

    2012-08-15

    Site-directed mutagenesis is frequently used by scientists to investigate the functional impact of amino acid mutations in the laboratory. Over 10,000 such laboratory-induced mutations have been reported in the UniProt database along with the outcomes of functional assays. Here, we explore the performance of state-of-the-art computational tools (Condel, PolyPhen-2 and SIFT) in correctly annotating the function-altering potential of 10,913 laboratory-induced mutations from 2372 proteins. We find that computational tools are very successful in diagnosing laboratory-induced mutations that elicit significant functional change in the laboratory (up to 92% accuracy). But, these tools consistently fail in correctly annotating laboratory-induced mutations that show no functional impact in the laboratory assays. Therefore, the overall accuracy of computational tools for laboratory-induced mutations is much lower than that observed for the naturally occurring human variants. We tested and rejected the possibilities that the preponderance of changes to alanine and the presence of multiple base-pair mutations in the laboratory were the reasons for the observed discordance between the performance of computational tools for natural and laboratory mutations. Instead, we discover that the laboratory-induced mutations occur predominately at the highly conserved positions in proteins, where the computational tools have the lowest accuracy of correct prediction for variants that do not impact function (neutral). Therefore, the comparisons of experimental-profiling results with those from computational predictions need to be sensitive to the evolutionary conservation of the positions harboring the amino acid change. PMID:22685075

  19. Computation of determinant expansion coefficients within the graphically contracted function method.

    SciTech Connect

    Gidofalvi, G.; Shepard, R.; Chemical Sciences and Engineering Division

    2009-11-30

    Most electronic structure methods express the wavefunction as an expansion of N-electron basis functions that are chosen to be either Slater determinants or configuration state functions. Although the expansion coefficient of a single determinant may be readily computed from configuration state function coefficients for small wavefunction expansions, traditional algorithms are impractical for systems with a large number of electrons and spatial orbitals. In this work, we describe an efficient algorithm for the evaluation of a single determinant expansion coefficient for wavefunctions expanded as a linear combination of graphically contracted functions. Each graphically contracted function has significant multiconfigurational character and depends on a relatively small number of variational parameters called arc factors. Because the graphically contracted function approach expresses the configuration state function coefficients as products of arc factors, a determinant expansion coefficient may be computed recursively more efficiently than with traditional configuration interaction methods. Although the cost of computing determinant coefficients scales exponentially with the number of spatial orbitals for traditional methods, the algorithm presented here exploits two levels of recursion and scales polynomially with system size. Hence, as demonstrated through applications to systems with hundreds of electrons and orbitals, it may readily be applied to very large systems.

  20. Computation of determinant expansion coefficients within the graphically contracted function method.

    PubMed

    Gidofalvi, Gergely; Shepard, Ron

    2009-11-30

    Most electronic structure methods express the wavefunction as an expansion of N-electron basis functions that are chosen to be either Slater determinants or configuration state functions. Although the expansion coefficient of a single determinant may be readily computed from configuration state function coefficients for small wavefunction expansions, traditional algorithms are impractical for systems with a large number of electrons and spatial orbitals. In this work, we describe an efficient algorithm for the evaluation of a single determinant expansion coefficient for wavefunctions expanded as a linear combination of graphically contracted functions. Each graphically contracted function has significant multiconfigurational character and depends on a relatively small number of variational parameters called arc factors. Because the graphically contracted function approach expresses the configuration state function coefficients as products of arc factors, a determinant expansion coefficient may be computed recursively more efficiently than with traditional configuration interaction methods. Although the cost of computing determinant coefficients scales exponentially with the number of spatial orbitals for traditional methods, the algorithm presented here exploits two levels of recursion and scales polynomially with system size. Hence, as demonstrated through applications to systems with hundreds of electrons and orbitals, it may readily be applied to very large systems. PMID:19360796

  1. Use of global functions for improvement in efficiency of nonlinear analysis. [in computer structural displacement estimation

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Stehlin, P.; Brogan, F. A.

    1981-01-01

    A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.

  2. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
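
    The general idea can be illustrated with a short Python sketch (the patented mechanism itself is not reproduced): collect timing data for several implementations of the same function over one input dimension and record which variant is fastest in each regime, so that later calls can be dispatched accordingly. The two implementations and input sizes are assumed for illustration.

        import timeit
        import numpy as np

        def sum_loop(x):              # naive scalar-loop implementation
            total = 0.0
            for v in x:
                total += v
            return total

        def sum_vectorized(x):        # library implementation
            return float(np.sum(x))

        implementations = {"loop": sum_loop, "vectorized": sum_vectorized}
        selection = {}
        for size in (10, 1_000, 100_000):          # one input dimension: array length
            data = np.random.rand(size)
            timings = {name: timeit.timeit(lambda f=f: f(data), number=50)
                       for name, f in implementations.items()}
            selection[size] = min(timings, key=timings.get)

        print(selection)   # e.g. {10: 'loop', 1000: 'vectorized', 100000: 'vectorized'}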

  3. The Krigifier: A Procedure for Generating Pseudorandom Nonlinear Objective Functions for Computational Experimentation

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.

    1999-01-01

    Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
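
    A minimal Python sketch of the underlying idea, under the assumption of a stationary Gaussian process with a squared-exponential covariance (the report's S-PLUS implementation and exact process are not reproduced): one realization on a grid, plus a smooth trend, serves as a pseudorandom one-dimensional objective function.

        import numpy as np

        rng = np.random.default_rng(42)
        x = np.linspace(0.0, 10.0, 200)[:, None]               # design sites
        K = np.exp(-0.5 * ((x - x.T) / 0.8) ** 2)               # squared-exponential kernel
        L = np.linalg.cholesky(K + 1e-10 * np.eye(x.size))      # jitter for numerical stability
        objective = 0.05 * (x.ravel() - 5.0) ** 2 + L @ rng.standard_normal(x.size)

        print(x.ravel()[np.argmin(objective)])   # location of this realization's minimum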

  4. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    SciTech Connect

    Ranken, D.; George, J.

    1993-12-31

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  5. Maple (Computer Algebra System) in Teaching Pre-Calculus: Example of Absolute Value Function

    ERIC Educational Resources Information Center

    Tuluk, Güler

    2014-01-01

    Modules in Computer Algebra Systems (CAS) make Mathematics interesting and easy to understand. The present study focused on the implementation of the algebraic, tabular (numerical), and graphical approaches used for the construction of the concept of absolute value function in teaching mathematical content knowledge along with Maple 9. The study…

  6. Effects of a Computer-Based Intervention Program on the Communicative Functions of Children with Autism

    ERIC Educational Resources Information Center

    Hetzroni, Orit E.; Tannous, Juman

    2004-01-01

    This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant…

  7. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
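
    A worked example (assumed, not taken from the article) of the kind of decomposition in question, computed with SymPy for comparison with hand calculation; the denominator contains the irreducible quadratic factor x^2 + x + 1.

        from sympy import symbols, apart

        x = symbols('x')
        f = (3*x**2 + 2*x + 5) / ((x - 1) * (x**2 + x + 1))
        # decomposition of the form A/(x - 1) + (B*x + C)/(x**2 + x + 1)
        print(apart(f, x))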

  8. PuFT: Computer-Assisted Program for Pulmonary Function Tests.

    ERIC Educational Resources Information Center

    Boyle, Joseph

    1983-01-01

    PuFT computer program (Microsoft Basic) is designed to help in understanding/interpreting pulmonary function tests (PFT). The program provides predicted values for common PFT after entry of patient data, calculates/plots a graph simulating forced vital capacity (FVC), and allows observations of effects on predicted PFT values and FVC curve when…

  9. Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Gierl, Mark J.; Lai, Hollis; Li, Johnson

    2013-01-01

    The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…

  10. A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function

    ERIC Educational Resources Information Center

    Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.; Blemker, Silvia S.

    2015-01-01

    Purpose: This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method: We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy…

  11. A fast computation method for MUSIC spectrum function based on circular arrays

    NASA Astrophysics Data System (ADS)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computation amount of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array. Then, in the process of calculating the MUSIC spectrum, the cyclic characteristics of the steering vector are exploited so that the inner product in the spatial-spectrum calculation is realised by cyclic convolution. The computational amount of the MUSIC spectrum is obviously less than that of the conventional method. It is a very practical way to compute the MUSIC spectrum for circular arrays.
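
    For reference, a minimal Python sketch of the conventional (direct) MUSIC spectrum for a uniform circular array with azimuth-only scanning; the cyclic-convolution speed-up proposed in the paper is not reproduced here, and the array geometry, source directions and noise level are assumed for illustration.

        import numpy as np

        M, r_over_lambda = 8, 0.5                        # sensors, radius in wavelengths
        phi_m = 2 * np.pi * np.arange(M) / M             # sensor angular positions

        def steering(theta):
            return np.exp(2j * np.pi * r_over_lambda * np.cos(theta - phi_m))

        rng = np.random.default_rng(1)
        doas = np.deg2rad([40.0, 110.0])                 # two hypothetical sources
        N = 500
        S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
        A = np.column_stack([steering(t) for t in doas])
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

        R = X @ X.conj().T / N                           # sample covariance matrix
        _, V = np.linalg.eigh(R)
        En = V[:, : M - 2]                               # noise subspace (smallest eigenvalues)

        grid = np.deg2rad(np.arange(0.0, 180.0, 0.5))
        P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
        peaks = [i for i in range(1, P.size - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
        top2 = sorted(peaks, key=lambda i: P[i])[-2:]
        print(sorted(np.round(np.rad2deg(grid[top2]), 1)))   # approximately [40.0, 110.0]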

  12. A new Fortran 90 program to compute regular and irregular associated Legendre functions

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2010-12-01

    We present a modern Fortran 90 code to compute the regular Plm(x) and irregular Qlm(x) associated Legendre functions for all x∈(-1,+1) (on the cut) and |x|>1 and integer degree (l) and order (m). The code applies either forward or backward recursion in (l) and (m) in the stable direction, starting with analytically known values for forward recursion and considering both a Wronskian-based and a modified Miller's method for backward recursion. While some Fortran 77 codes existed for computing the functions off the cut, no Fortran 90 code was available for accurately computing the functions for all real values of x different from x=±1 where the irregular functions are not defined. Program summary: Program title: Associated Legendre Functions Catalogue identifier: AEHE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6722 No. of bytes in distributed program, including test data, etc.: 310 210 Distribution format: tar.gz Programming language: Fortran 90 Computer: Linux systems Operating system: Linux RAM: bytes Classification: 4.7 Nature of problem: Compute the regular and irregular associated Legendre functions for integer values of the degree and order and for all real arguments. The computation of the interaction of two electrons, 1/|r-r'|, in prolate spheroidal coordinates is used as one example where these functions are required for all values of the argument and we are able to easily compare the series expansion in associated Legendre functions and the exact value. Solution method: The code evaluates the regular and irregular associated Legendre functions using forward recursion when |x|<1, starting the recursion with the analytically known values of the first two members of the sequence. For values of
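
    A minimal Python sketch of forward recursion in the degree l for fixed order m on the cut (|x| < 1), checked against scipy.special.lpmv (both include the Condon-Shortley phase); the backward, Wronskian/Miller-type recursion needed where forward recursion is unstable is not shown.

        import numpy as np
        from scipy.special import lpmv

        def plm_forward(l_max, m, x):
            p = np.zeros(l_max + 1)
            # starting values: P_m^m and P_{m+1}^m
            p[m] = (-1.0) ** m * np.prod(np.arange(1, 2 * m, 2)) * (1.0 - x * x) ** (m / 2.0)
            if m + 1 <= l_max:
                p[m + 1] = x * (2 * m + 1) * p[m]
            for l in range(m + 1, l_max):
                # (l - m + 1) P_{l+1}^m = (2l + 1) x P_l^m - (l + m) P_{l-1}^m
                p[l + 1] = ((2 * l + 1) * x * p[l] - (l + m) * p[l - 1]) / (l - m + 1)
            return p

        x, m, l_max = 0.3, 2, 10
        mine = plm_forward(l_max, m, x)
        ref = np.array([lpmv(m, l, x) for l in range(l_max + 1)])
        print(np.allclose(mine[m:], ref[m:]))   # True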

  13. Computer generation of symbolic network functions - A new theory and implementation.

    NASA Technical Reports Server (NTRS)

    Alderson, G. E.; Lin, P.-M.

    1972-01-01

    A new method is presented for obtaining network functions in which some, none, or all of the network elements are represented by symbolic parameters (i.e., symbolic network functions). Unlike the topological tree enumeration or signal flow graph methods generally used to derive symbolic network functions, the proposed procedure employs fast, efficient, numerical-type algorithms to determine the contribution of those network branches that are not represented by symbolic parameters. A computer program called NAPPE (for Network Analysis Program using Parameter Extractions) and incorporating all of the concepts discussed has been written. Several examples illustrating the usefulness and efficiency of NAPPE are presented.

  14. On computation and use of Fourier coefficients for associated Legendre functions

    NASA Astrophysics Data System (ADS)

    Gruber, Christian; Abrykosov, Oleh

    2016-06-01

    The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside a numerical range of the used data type during calculations as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide an essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond [5 × 5] arcmin resolution.

  15. Radial subsampling for fast cost function computation in intensity-based 3D image registration

    NASA Astrophysics Data System (ADS)

    Boettger, Thomas; Wolf, Ivo; Meinzer, Hans-Peter; Celi, Juan Carlos

    2007-03-01

    Image registration is always a trade-off between accuracy and speed. Looking towards clinical scenarios, the time for bringing two or more images into registration should be around a few seconds only. We present a new scheme for subsampling 3D-image data to allow for efficient computation of cost functions in intensity-based image registration. Starting from an arbitrary center point, voxels are sampled along scan lines which radially extend from the center point. We analyzed the characteristics of different cost functions computed on the sub-sampled data and compared them to known cost functions with respect to local optima. Results show the cost functions are smooth and give high peaks at the expected optima. Furthermore, we investigated the capture range of cost functions computed under the new subsampling scheme. Capture range was remarkably better for the new scheme compared to metrics using all voxels or different subsampling schemes, and high registration accuracy was achieved as well. The most important result is the improvement in terms of speed, making this scheme very interesting for clinical scenarios. We conclude that using the new subsampling scheme, intensity-based 3D image registration can be performed much faster than using other approaches while maintaining high accuracy. A variety of different extensions of the new approach is conceivable, e.g. non-regular distribution of the scan lines, or letting the scan lines start not from a center point only but from the surface of an organ model, for example.
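
    An illustrative 2-D Python sketch of the sampling idea (the paper works on 3-D volumes): sample both images along scan lines radiating from a center point and evaluate a mean-squared-difference cost on those samples only. The image sizes, line counts and the 2-pixel misalignment are assumed for illustration.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def radial_samples(center, n_lines=32, n_per_line=40, max_radius=60.0):
            angles = np.linspace(0.0, 2.0 * np.pi, n_lines, endpoint=False)
            radii = np.linspace(1.0, max_radius, n_per_line)
            rr, aa = np.meshgrid(radii, angles)
            ys = center[0] + rr * np.sin(aa)
            xs = center[1] + rr * np.cos(aa)
            return np.vstack([ys.ravel(), xs.ravel()])      # (2, n_lines * n_per_line)

        def msd_cost(fixed, moving, coords):
            f = map_coordinates(fixed, coords, order=1, mode="nearest")
            m = map_coordinates(moving, coords, order=1, mode="nearest")
            return np.mean((f - m) ** 2)                    # cost evaluated on the subsample only

        rng = np.random.default_rng(3)
        fixed = rng.random((128, 128))
        moving = np.roll(fixed, shift=2, axis=1)            # hypothetical 2-pixel misalignment
        coords = radial_samples(center=(64.0, 64.0))
        print(msd_cost(fixed, fixed, coords), msd_cost(fixed, moving, coords))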

  16. On computation and use of Fourier coefficients for associated Legendre functions

    NASA Astrophysics Data System (ADS)

    Gruber, Christian; Abrykosov, Oleh

    2016-02-01

    The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside a numerical range of the used data type during calculations as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide an essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond [5 × 5] arcmin resolution.

  17. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
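
    A minimal Python sketch of why normal-mode (modal) coordinates make each frequency point cheap: with a diagonal state matrix the resolvent (jωI - A)^(-1) is evaluated element-wise, so the cost per frequency point grows linearly with the number of modes rather than cubically. The modal data below are assumed for illustration and ignore the closed-loop coupling treated in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        n_modes, n_in, n_out = 400, 2, 3
        lam = -0.01 * np.arange(1, n_modes + 1) + 1j * np.arange(1, n_modes + 1)   # modal eigenvalues
        B = rng.standard_normal((n_modes, n_in))
        C = rng.standard_normal((n_out, n_modes))

        def freq_response(w):
            g = 1.0 / (1j * w - lam)            # diagonal resolvent: O(n) per frequency point
            return C @ (g[:, None] * B)         # n_out x n_in transfer matrix at frequency w

        H = np.array([freq_response(w) for w in np.linspace(0.1, 500.0, 1000)])
        print(H.shape)                          # (1000, 3, 2)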

  18. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Mehra, R. K.

    1974-01-01

    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  19. Can Expanded Bacteriochlorins Act as Photosensitizers in Photodynamic Therapy? Good News from Density Functional Theory Computations.

    PubMed

    Mazzone, Gloria; Alberto, Marta E; De Simone, Bruna C; Marino, Tiziana; Russo, Nino

    2016-01-01

    The main photophysical properties of a series of expanded bacteriochlorins, recently synthesized, have been investigated by means of DFT and TD-DFT methods. Absorption spectra computed with different exchange-correlation functionals, B3LYP, M06 and ωB97XD, have been compared with the experimental ones. In good agreement, all the considered systems show a maximum absorption wavelength that falls in the therapeutic window (600-800 nm). The obtained singlet-triplet energy gaps are large enough to ensure the production of cytotoxic singlet molecular oxygen. The computed spin-orbit matrix elements suggest a good probability of intersystem spin-crossing between singlet and triplet excited states, since they are higher than those computed for 5,10,15,20-tetrakis-(m-hydroxyphenyl)chlorin (Foscan©) already used in the photodynamic therapy (PDT) protocol. Because of the investigated properties, these expanded bacteriochlorins can be proposed as PDT agents. PMID:26938516

  20. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
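
    As a rough illustration of the idea only (not the patented design), the sketch below lets a user-supplied parser decide which files are kept and what metadata is stored with them before placement on storage nodes; parse_fn and the node interface are hypothetical.

      def store_files(files, parse_fn, nodes):
          """files: iterable of (name, data); parse_fn(name, data) -> (keep, metadata)."""
          for i, (name, data) in enumerate(files):
              keep, metadata = parse_fn(name, data)
              if not keep:                      # parser enforces its semantic requirements
                  continue
              node = nodes[i % len(nodes)]      # naive round-robin placement
              node.write(name, data, metadata)  # extracted metadata kept for later search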

  1. Neuromotor recovery from stroke: computational models at central, functional, and muscle synergy level

    PubMed Central

    Casadio, Maura; Tamagnone, Irene; Summa, Susanna; Sanguineti, Vittorio

    2013-01-01

    Computational models of neuromotor recovery after a stroke might help to unveil the underlying physiological mechanisms and might suggest how to make recovery faster and more effective. At least in principle, these models could serve: (i) To provide testable hypotheses on the nature of recovery; (ii) To predict the recovery of individual patients; (iii) To design patient-specific “optimal” therapy, by setting the treatment variables for maximizing the amount of recovery or for achieving a better generalization of the learned abilities across different tasks. Here we review the state of the art of computational models for neuromotor recovery through exercise, and their implications for treatment. We show that to properly account for the computational mechanisms of neuromotor recovery, multiple levels of description need to be taken into account. The review specifically covers models of recovery at central, functional and muscle synergy level. PMID:23986688

  2. Time Utility Functions for Modeling and Evaluating Resource Allocations in a Heterogeneous Computing System

    SciTech Connect

    Briceno, Luis Diego; Khemka, Bhavesh; Siegel, Howard Jay; Maciejewski, Anthony A; Groer, Christopher S; Koenig, Gregory A; Okonski, Gene D; Poole, Stephen W

    2011-01-01

    This study considers a heterogeneous computing system and corresponding workload being investigated by the Extreme Scale Systems Center (ESSC) at Oak Ridge National Laboratory (ORNL). The ESSC is part of a collaborative effort between the Department of Energy (DOE) and the Department of Defense (DoD) to deliver research, tools, software, and technologies that can be integrated, deployed, and used in both DOE and DoD environments. The heterogeneous system and workload described here are representative of a prototypical computing environment being studied as part of this collaboration. Each task can exhibit a time-varying importance or utility to the overall enterprise. In this system, an arriving task has an associated priority and precedence. The priority is used to describe the importance of a task, and precedence is used to describe how soon the task must be executed. These two metrics are combined to create a utility function curve that indicates how valuable it is for the system to complete a task at any given moment. This research focuses on using time-utility functions to generate a metric that can be used to compare the performance of different resource schedulers in a heterogeneous computing system. The contributions of this paper are: (a) a mathematical model of a heterogeneous computing system where tasks arrive dynamically and need to be assigned based on their priority, precedence, utility characteristic class, and task execution type, (b) the use of priority and precedence to generate time-utility functions that describe the value a task has at any given time, (c) the derivation of a metric based on the total utility gained from completing tasks to measure the performance of the computing environment, and (d) a comparison of the performance of resource allocation heuristics in this environment.
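
    A minimal sketch of such a time-utility function follows, assuming purely for illustration that priority sets the maximum utility and precedence sets an exponential decay rate after arrival; the curve shapes actually used in the ESSC workload are not reproduced here.

      import numpy as np

      def time_utility(t, arrival, priority, decay):
          """Utility of completing a task at time t under an assumed exponential decay."""
          return priority * np.exp(-decay * np.maximum(t - arrival, 0.0))

      def total_utility(completions):
          """completions: iterable of (finish_time, arrival, priority, decay) tuples."""
          return sum(time_utility(tf, ta, p, d) for tf, ta, p, d in completions)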

  3. Structure, dynamics, and function of the monooxygenase P450 BM-3: insights from computer simulations studies

    NASA Astrophysics Data System (ADS)

    Roccatano, Danilo

    2015-07-01

    The monooxygenase P450 BM-3 is a NADPH-dependent fatty acid hydroxylase enzyme isolated from soil bacterium Bacillus megaterium. As a pivotal member of cytochrome P450 superfamily, it has been intensely studied for the comprehension of structure-dynamics-function relationships in this class of enzymes. In addition, due to its peculiar properties, it is also a promising enzyme for biochemical and biomedical applications. However, despite the efforts, the full understanding of the enzyme structure and dynamics is not yet achieved. Computational studies, particularly molecular dynamics (MD) simulations, have importantly contributed to this endeavor by providing new insights at an atomic level regarding the correlations between structure, dynamics, and function of the protein. This topical review summarizes computational studies based on MD simulations of the cytochrome P450 BM-3 and gives an outlook on future directions.

  4. Coal-seismic, desktop computer programs in BASIC; Part 6, Develop rms velocity functions and apply mute and normal moveout

    USGS Publications Warehouse

    Hasbrouck, W.P.

    1983-01-01

    Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
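
    For readers unfamiliar with the processing steps, the sketch below applies normal moveout with an rms-velocity function and mutes samples stretched past the end of the trace; it is a generic Python illustration, not a port of the BASIC programs.

      import numpy as np

      def nmo_correct(trace, dt, offset, vrms):
          """trace: one seismic trace sampled at dt; vrms: rms velocity per time sample."""
          n = len(trace)
          t0 = np.arange(n) * dt                          # zero-offset two-way times
          tx = np.sqrt(t0**2 + (offset / vrms)**2)        # moveout times t(x)
          idx = tx / dt
          i0 = np.floor(idx).astype(int)
          frac = idx - i0
          out = np.zeros_like(trace)                      # samples past the trace end are muted
          ok = i0 < n - 1
          out[ok] = (1 - frac[ok]) * trace[i0[ok]] + frac[ok] * trace[i0[ok] + 1]
          return out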

  5. Broadband transmission functions for atmospheric IR flux computations and climate studies

    NASA Technical Reports Server (NTRS)

    Chou, M.-D.

    1983-01-01

    In order to reduce the size of precomputed tables which are used in the emissivity approach to computing IR radiation in a climate model, the three-dimensional transmission function in the water vapor bands is defined in this study by a simple regression equation consisting of three two-dimensional parameters. The transmittances in the 9.6 and 15 micron bands are individually parameterized as functions of the amount of scaled absorber. This approach can thus be applied to atmospheres with a variable CO2 concentration.

  6. Method, systems, and computer program products for implementing function-parallel network firewall

    DOEpatents

    Fulp, Errin W.; Farley, Ryan J.

    2011-10-11

    Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
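
    A toy sketch of function-parallel filtering (illustrative only, not the patented method): the ordered rule set is split across nodes with its original indices, each node reports its first local match, and the match with the lowest original rule number wins, preserving the first-match semantics of a single-list firewall.

      def split_rules(rules, n_nodes):
          """rules: ordered list of (match_fn, action); distribute while keeping original indices."""
          parts = [[] for _ in range(n_nodes)]
          for idx, rule in enumerate(rules):
              parts[idx % n_nodes].append((idx, rule))
          return parts

      def node_first_match(packet, part):
          for idx, (match_fn, action) in part:
              if match_fn(packet):
                  return idx, action
          return None

      def firewall_decision(packet, parts, default="deny"):
          hits = [h for h in (node_first_match(packet, p) for p in parts) if h]
          return min(hits)[1] if hits else default        # lowest rule number wins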

  7. Assessment of cognitive function in alcoholics by computer: a control study.

    PubMed

    Acker, C; Acker, W; Shaw, G K

    1984-01-01

    Results are presented of the performance by 103 alcoholics and 90 controls on six computer-administered tests of cognitive function. The main analysis compared performance of the two groups when pre-existing differences in intellectual capacity, as estimated by NART, were accounted for statistically. The performance of the alcoholics was worse, at a statistically significant level, on 18 of 23 measures. Procedurally, the tests were found to offer practical advantages over conventional procedures. PMID:6508877

  8. A comparison of computational methods and algorithms for the complex gamma function

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1974-01-01

    A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421 published in the Communications of ACM by H. Kuki is the best program either for individual application or for the inclusion in subroutine libraries.
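
    For orientation, here is a compact Python sketch of the Stirling-series approach surveyed above (not Kuki's Algorithm 421): the argument is shifted with the recurrence Gamma(z) = Gamma(z+1)/z until its real part is large enough, then the asymptotic series is applied; the shift threshold and number of terms control the accuracy, and the reflection formula for Re(z) <= 0 is omitted.

      import cmath

      _COEF = [1/12, -1/360, 1/1260, -1/1680, 1/1188, -691/360360, 1/156]  # B_2k / (2k(2k-1))

      def clgamma(z, shift=10.0):
          """log Gamma(z) for complex z with Re(z) > 0, via Stirling's asymptotic series."""
          z = complex(z)
          corr = 0j
          while z.real < shift:                     # recurrence shift: Gamma(z) = Gamma(z+1)/z
              corr -= cmath.log(z)
              z += 1
          series = sum(c / z**(2 * k + 1) for k, c in enumerate(_COEF))
          return (z - 0.5) * cmath.log(z) - z + 0.5 * cmath.log(2 * cmath.pi) + series + corr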

  9. Economic probes of mental function and the extraction of computational phenotypes☆

    PubMed Central

    Kishida, Kenneth T.; Montague, P. Read

    2013-01-01

    Economic games are now routinely used to characterize human cognition across multiple dimensions. These games allow for effective computational modeling of mental function because they typically come equipped with notions of optimal play, which provide quantitatively prescribed target functions that can be tracked throughout an experiment. The combination of these games, computational models, and neuroimaging tools open up the possibility for new ways to characterize normal cognition and associated brain function. We propose that these tools may also be used to characterize mental dysfunction, such as that found in a range of psychiatric illnesses. We describe early efforts using a multi-round trust game to probe brain responses associated with healthy social exchange and review how this game has provided a novel and useful characterization of autism spectrum disorder. Lastly, we use the multi-round trust game as an example to discuss how these kinds of games could produce novel bases for representing healthy behavior and brain function and thus provide objectively identifiable subtypes within a broad spectrum of mental function. PMID:24926112

  10. An updated weight of evidence approach to the aquatic hazard assessment of Bisphenol A and the derivation of a new predicted no effect concentration (PNEC) using a non-parametric methodology.

    PubMed

    Wright-Walters, Maxine; Volz, Conrad; Talbott, Evelyn; Davis, Devra

    2011-01-15

    An aquatic hazard assessment establishes a derived predicted no effect concentration (PNEC) below which it is assumed that aquatic organisms will not suffer adverse effects from exposure to a chemical. An aquatic hazard assessment of the endocrine disruptor Bisphenol A [BPA; 2,2-bis(4-hydroxyphenyl)propane] was conducted using a weight of evidence approach, based on the ecotoxicological endpoints of survival, growth and development, and reproduction. New evidence has emerged that suggests that the aquatic system may not be sufficiently protected from adverse effects of BPA exposure at the current PNEC value of 100 μg/L. Against this background: 1) an aquatic hazard assessment for BPA using a weight of evidence approach was conducted; 2) a PNEC value was derived using a non-parametric hazardous concentration for 5% of species (HC5) approach; and 3) the derived BPA hazard assessment values were compared to aquatic environmental concentrations for BPA to determine whether aquatic species are sufficiently protected from BPA exposure. A total of 61 studies yielded 94 no-observed-effect concentrations (NOECs), a toxicity dataset which suggests that adverse aquatic effects on mortality, growth and development, and reproduction are most likely to occur between concentrations of 0.0483 μg/L and 2280 μg/L. This finding is within the range for aquatic adverse estrogenic effects reported in the literature. A PNEC of 0.06 μg/L was calculated. The 95% confidence interval was found to be (0.02, 3.40) μg/L. Thus, using the weight of evidence approach based on repeated measurements of these endpoints, the results indicate that currently observed BPA concentrations in surface waters exceed this newly derived PNEC value of 0.06 μg/L. This indicates that some aquatic receptors may be at risk for adverse effects on survival, growth and development, and reproduction from BPA exposure at environmentally relevant concentrations. PMID:21130487
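
    A minimal sketch of a non-parametric HC5 of the kind described above, taken here simply as the empirical 5th percentile of species-level NOECs with a bootstrap confidence interval; this illustrates the calculation, not the authors' exact procedure or dataset.

      import numpy as np

      def hc5(noecs, q=0.05):
          return float(np.quantile(np.asarray(noecs, dtype=float), q))

      def hc5_bootstrap_ci(noecs, n_boot=10000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(noecs, dtype=float)
          reps = [hc5(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
          return np.quantile(reps, [alpha / 2, 1 - alpha / 2])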

  11. Clinical Validation of 4-Dimensional Computed Tomography Ventilation With Pulmonary Function Test Data

    SciTech Connect

    Brennan, Douglas; Schubert, Leah; Diot, Quentin; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Martel, Mary K.; Linderman, Derek; Gaspar, Laurie E.; Miften, Moyed; Kavanagh, Brian D.; Vinogradskiy, Yevgeniy

    2015-06-01

    Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change–based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean), ventilation V20 (volume of lung ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT ventilation data could predict for normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT
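
    The two summary metrics named above are straightforward to compute from a voxel-wise ventilation map. The sketch below assumes a NumPy array and a boolean lung mask, and reads "V20" as the percentage of lung voxels at or below 20% of the mean ventilation, which is one plausible interpretation rather than the study's exact definition.

      import numpy as np

      def ventilation_metrics(vent_map, lung_mask):
          v = vent_map[lung_mask]                         # ventilation values inside the lung
          cov = v.std() / v.mean()                        # coefficient of variation (SD/mean)
          v20 = 100.0 * np.mean(v <= 0.20 * v.mean())     # % of lung at <= 20% of mean ventilation
          return cov, v20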

  12. Computer-Based Cognitive Training for Executive Functions after Stroke: A Systematic Review

    PubMed Central

    van de Ven, Renate M.; Murre, Jaap M. J.; Veltman, Dick J.; Schmand, Ben A.

    2016-01-01

    Background: Stroke commonly results in cognitive impairments in working memory, attention, and executive function, which may be restored with appropriate training programs. Our aim was to systematically review the evidence for computer-based cognitive training of executive dysfunctions. Methods: Studies were included if they concerned adults who had suffered stroke or other types of acquired brain injury, if the intervention was computer training of executive functions, and if the outcome was related to executive functioning. We searched in MEDLINE, PsycINFO, Web of Science, and The Cochrane Library. Study quality was evaluated based on the CONSORT Statement. Treatment effect was evaluated based on differences compared to pre-treatment and/or to a control group. Results: Twenty studies were included. Two were randomized controlled trials that used an active control group. The other studies included multiple baselines, a passive control group, or were uncontrolled. Improvements were observed in tasks similar to the training (near transfer) and in tasks dissimilar to the training (far transfer). However, these effects were not larger in trained than in active control groups. Two studies evaluated neural effects and found changes in both functional and structural connectivity. Most studies suffered from methodological limitations (e.g., lack of an active control group and no adjustment for multiple testing) hampering differentiation of training effects from spontaneous recovery, retest effects, and placebo effects. Conclusions: The positive findings of most studies, including neural changes, warrant continuation of research in this field, but only if its methodological limitations are addressed. PMID:27148007

  13. Using computational fluid dynamics to test functional and ecological hypotheses in fossil taxa

    NASA Astrophysics Data System (ADS)

    Rahman, Imran

    2016-04-01

    Reconstructing how ancient organisms moved and fed is a major focus of study in palaeontology. Traditionally, this has been hampered by a lack of objective data on the functional morphology of extinct species, especially those without a clear modern analogue. However, cutting-edge techniques for characterizing specimens digitally and in three dimensions, coupled with state-of-the-art computer models, now provide a robust framework for testing functional and ecological hypotheses even in problematic fossil taxa. One such approach is computational fluid dynamics (CFD), a method for simulating fluid flows around objects that has primarily been applied to complex engineering-design problems. Here, I will present three case studies of CFD applied to fossil taxa, spanning a range of specimen sizes, taxonomic groups and geological ages. First, I will show how CFD enabled a rigorous test of hypothesized feeding modes in an enigmatic Ediacaran organism with three-fold symmetry, revealing previously unappreciated complexity of pre-Cambrian ecosystems. Second, I will show how CFD was used to evaluate hydrodynamic performance and feeding in Cambrian stem-group echinoderms, shedding light on the probable feeding strategy of the latest common ancestor of all deuterostomes. Third, I will show how CFD allowed us to explore the link between form and function in Mesozoic ichthyosaurs. These case studies serve to demonstrate the enormous potential of CFD for addressing long-standing hypotheses for a variety of fossil taxa, opening up an exciting new avenue in palaeontological studies of functional morphology.

  14. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients. Revised

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2002-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.

  15. CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients

    NASA Technical Reports Server (NTRS)

    Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.

    2001-01-01

    For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.

  16. Simulation of Preterm Neonatal Brain Metabolism During Functional Neuronal Activation Using a Computational Model.

    PubMed

    Hapuarachchi, T; Scholkmann, F; Caldwell, M; Hagmann, C; Kleiser, S; Metz, A J; Pastewski, M; Wolf, M; Tachtsidis, I

    2016-01-01

    We present a computational model of metabolism in the preterm neonatal brain. The model has the capacity to mimic haemodynamic and metabolic changes during functional activation and simulate functional near-infrared spectroscopy (fNIRS) data. As an initial test of the model's efficacy, we simulate data obtained from published studies investigating functional activity in preterm neonates. In addition we simulated recently collected data from preterm neonates during visual activation. The model is well able to predict the haemodynamic and metabolic changes from these observations. In particular, we found that changes in cerebral blood flow and blood pressure may account for the observed variability of the magnitude and sign of stimulus-evoked haemodynamic changes reported in preterm infants. PMID:26782202

  17. Effective electron displacements: A tool for time-dependent density functional theory computational spectroscopy

    SciTech Connect

    Guido, Ciro A.; Cortona, Pietro; Adamo, Carlo; Institut Universitaire de France, 103 Bd Saint-Michel, F-75005 Paris

    2014-03-14

    We extend our previous definition of the metric Δr for electronic excitations in the framework of the time-dependent density functional theory [C. A. Guido, P. Cortona, B. Mennucci, and C. Adamo, J. Chem. Theory Comput. 9, 3118 (2013)], by including a measure of the difference of electronic position variances in passing from occupied to virtual orbitals. This new definition, called Γ, permits applications in those situations where the Δr-index is not helpful: transitions in centrosymmetric systems and Rydberg excitations. The Γ-metric is then extended by using the Natural Transition Orbitals, thus providing an intuitive picture of how locally the electron density changes during the electronic transitions. Furthermore, the Γ values give insight into the performance of functionals in reproducing different types of transitions, and allow one to define a “confidence radius” for GGA and hybrid functionals.

  18. Computing the three-point correlation function of galaxies in O(N^2) time

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2015-12-01

    We present an algorithm that computes the multipole coefficients of the galaxy three-point correlation function (3PCF) without explicitly considering triplets of galaxies. Rather, centring on each galaxy in the survey, it expands the radially binned density field in spherical harmonics and combines these to form the multipoles without ever requiring the relative angle between a pair about the central galaxy. This approach scales with number and number density in the same way as the two-point correlation function, allowing run-times that are comparable, and 500 times faster than a naive triplet count. It is exact in angle and easily handles edge correction. We demonstrate the algorithm on the LasDamas SDSS-DR7 mock catalogues, computing an edge corrected 3PCF out to 90 h^-1 Mpc in under an hour on modest computing resources. We expect this algorithm will render it possible to obtain the large-scale 3PCF for upcoming surveys such as Euclid, Large Synoptic Survey Telescope (LSST), and Dark Energy Spectroscopic Instrument.
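
    A heavily simplified sketch of the ingredient that avoids the triplet count: around one central galaxy, the neighbours in each radial bin are expanded in spherical harmonics, and the coefficients of two bins are combined over m to give the multipole contribution for that bin pair. Binning, weighting, normalisation, and edge correction of the published algorithm are omitted, and the angle arguments follow the convention of scipy.special.sph_harm.

      import numpy as np
      from scipy.special import sph_harm

      def bin_alm(polar, azimuth, weights, lmax):
          """a_lm of the weighted neighbours in one radial bin about the central galaxy."""
          return {(l, m): np.sum(weights * np.conj(sph_harm(m, l, azimuth, polar)))
                  for l in range(lmax + 1) for m in range(-l, l + 1)}

      def multipole_contribution(alm1, alm2, lmax):
          """Per-l sum over m of a_lm(r1) * conj(a_lm(r2)) for a pair of radial bins."""
          return np.array([sum(alm1[(l, m)] * np.conj(alm2[(l, m)])
                               for m in range(-l, l + 1)).real
                           for l in range(lmax + 1)])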

  19. Computation of diffusion function measures in q-space using magnetic resonance hybrid diffusion imaging.

    PubMed

    Wu, Yu-Chien; Field, Aaron S; Alexander, Andrew L

    2008-06-01

    The distribution of water diffusion in biological tissues may be estimated by a 3-D Fourier transform (FT) of diffusion-weighted measurements in q-space. In this study, methods for estimating diffusion spectrum measures (the zero-displacement probability, the mean-squared displacement, and the orientation distribution function) directly from the q-space signals are described. These methods were evaluated using both computer simulations and hybrid diffusion imaging (HYDI) measurements on a human brain. The HYDI method obtains diffusion-weighted measurements on concentric spheres in q-space. Monte Carlo computer simulations were performed to investigate effects of noise, q-space truncation, and sampling interval on the measures. This new direct computation approach reduces HYDI data processing time and image artifacts arising from 3-D FT and regridding interpolation. In addition, it is less sensitive to noise and q-space truncation effects than the conventional approach. Although this study focused on data using the HYDI scheme, this computation approach may be applied to other diffusion sampling schemes including Cartesian diffusion spectrum imaging. PMID:18541492
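
    As one concrete instance of a "direct" measure, the zero-displacement (return-to-origin) probability is proportional to the q-space integral of the normalised signal, so it can be accumulated as a weighted sum without a 3-D Fourier transform. The sketch below assumes normalised signals E(q) = S(q)/S(0) and a q-space volume element per sample; the HYDI-specific shell weighting is not reproduced.

      import numpy as np

      def zero_displacement_probability(E, dq_volume):
          """E: normalised signals at the sampled q-points; dq_volume: volume element per sample."""
          return float(np.sum(np.asarray(E) * np.asarray(dq_volume)))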

  20. Choosing a proper exchange-correlation functional for the computational catalysis on surface.

    PubMed

    Teng, Bo-Tao; Wen, Xiao-Dong; Fan, Maohong; Wu, Feng-Min; Zhang, Yulong

    2014-09-14

    To choose a proper functional among the diverse density functional approximations of the electronic exchange-correlation energy for a given system is the basis for obtaining accurate results of theoretical calculations. In this work, we first propose an approach by comparing the calculated ΔE0 with the theoretical reference data based on the corresponding experimental results in a gas phase reaction. With ΔE0 being a criterion, the three most typical and popular exchange-correlation functionals (PW91, PBE and RPBE) were systematically compared in terms of the typical Fischer-Tropsch synthesis reactions in the gas phase. In addition, verifications of the geometrical and electronic properties of modeling catalysts, as well as the adsorption behavior of a typical probe molecule on modeling catalysts are also suggested for further screening of proper functionals. After a systematic comparison of CO adsorption behavior on Co(0001) calculated by PW91, PBE, and RPBE, the RPBE functional was found to be better than the other two in view of FTS reactions in gas phase and CO adsorption behaviors on a cobalt surface. The present work shows the general implications for choosing a reliable exchange-correlation functional in the computational catalysis of a surface. PMID:25072632

  1. Computational Study of Acidic and Basic Functionalized Crystalline Silica Surfaces as a Model for Biomaterial Interfaces.

    PubMed

    Corno, Marta; Delle Piane, Massimo; Monti, Susanna; Moreno-Couranjou, Maryline; Choquet, Patrick; Ugliengo, Piero

    2015-06-16

    In silico modeling of acidic (CH2COOH) or basic (CH2NH2) functionalized silica surfaces has been carried out by means of a density functional approach based on a gradient-corrected functional to provide insight into the characterization of experimentally functionalized surfaces via a plasma method. Hydroxylated surfaces of crystalline cristobalite (sporting 4.8 OH/nm(2)) mimic an amorphous silica interface as unsubstituted material. To functionalize the silica surface we transformed the surface Si-OH groups into Si-CH2COOH and Si-CH2NH2 moieties to represent acidic/basic chemical character for the substitution. Structures, energetics, electronic, and vibrational properties were computed and compared as a function of the increasing loading of the functional groups (from 1 to 4 per surface unit cell). Classical molecular dynamics simulations of selected cases have been performed through a Reax-FF reactive force field to assess the mobility of the surface added chains. Both DFT and force field calculations identify the CH2NH2 moderate surface loading (1 group per unit cell) as the most stable functionalization, at variance with the case of the CH2COOH group, where higher loadings are preferred (2 groups per unit cell). The vibrational fingerprints of the surface functionalities, which are the ν(C═O) stretching and δ(NH2) bending modes for acidic/basic cases, have been characterized as a function of substitution percentage in order to guide the assignment of the experimental data. The final results highlighted the different behavior of the two types of functionalization. On the one hand, the frequency associated with the ν(C═O) mode shifts to lower wavenumbers as a function of the H-bond strength between the surface functionalities (both COOH and SiOH groups), and on the other hand, the δ(NH2) frequency shift seems to be caused by a subtle balance between the H-bond donor and acceptor abilities of the NH2 moiety. Both sets of data are in general agreement with

  2. On fast computation of finite-time coherent sets using radial basis functions

    NASA Astrophysics Data System (ADS)

    Froyland, Gary; Junge, Oliver

    2015-08-01

    Finite-time coherent sets inhibit mixing over finite times. The most expensive part of the transfer operator approach to detecting coherent sets is the construction of the operator itself. We present a numerical method based on radial basis function collocation and apply it to a recent transfer operator construction [G. Froyland, "Dynamic isoperimetry and the geometry of Lagrangian coherent structures," Nonlinearity (unpublished); preprint arXiv:1411.7186] that has been designed specifically for purely advective dynamics. The construction [G. Froyland, "Dynamic isoperimetry and the geometry of Lagrangian coherent structures," Nonlinearity (unpublished); preprint arXiv:1411.7186] is based on a "dynamic" Laplace operator and minimises the boundary size of the coherent sets relative to their volume. The main advantage of our new approach is a substantial reduction in the number of Lagrangian trajectories that need to be computed, leading to large speedups in the transfer operator analysis when this computation is costly.

  3. On fast computation of finite-time coherent sets using radial basis functions.

    PubMed

    Froyland, Gary; Junge, Oliver

    2015-08-01

    Finite-time coherent sets inhibit mixing over finite times. The most expensive part of the transfer operator approach to detecting coherent sets is the construction of the operator itself. We present a numerical method based on radial basis function collocation and apply it to a recent transfer operator construction [G. Froyland, "Dynamic isoperimetry and the geometry of Lagrangian coherent structures," Nonlinearity (unpublished); preprint arXiv:1411.7186] that has been designed specifically for purely advective dynamics. The construction [G. Froyland, "Dynamic isoperimetry and the geometry of Lagrangian coherent structures," Nonlinearity (unpublished); preprint arXiv:1411.7186] is based on a "dynamic" Laplace operator and minimises the boundary size of the coherent sets relative to their volume. The main advantage of our new approach is a substantial reduction in the number of Lagrangian trajectories that need to be computed, leading to large speedups in the transfer operator analysis when this computation is costly. PMID:26328580
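
    The basic collocation ingredient is simple to state: function values at the trajectory points are represented as a sum of radial basis functions, and the expansion coefficients come from solving the collocation system. The Python sketch below shows Gaussian-RBF interpolation only; the dynamic-Laplacian discretisation and eigenproblem of the record above are not reproduced, and the shape parameter eps is illustrative.

      import numpy as np

      def gaussian_rbf_matrix(X, Y, eps):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-(eps ** 2) * d2)

      def rbf_interpolate(X, f, X_eval, eps=1.0):
          A = gaussian_rbf_matrix(X, X, eps)          # collocation matrix at the data sites
          coeffs = np.linalg.solve(A, f)              # may need regularisation in practice
          return gaussian_rbf_matrix(X_eval, X, eps) @ coeffs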

  4. Boolean Combinations of Implicit Functions for Model Clipping in Computer-Assisted Surgical Planning

    PubMed Central

    2016-01-01

    This paper proposes an interactive method of model clipping for computer-assisted surgical planning. The model is separated by a data filter that is defined by the implicit function of the clipping path. Being interactive to surgeons, the clipping path that is composed of the plane widgets can be manually repositioned along the desirable presurgical path, which means that surgeons can produce any accurate shape of the clipped model. The implicit function is acquired through a recursive algorithm based on the Boolean combinations (including Boolean union and Boolean intersection) of a series of plane widgets’ implicit functions. The algorithm is evaluated as highly efficient because the best time performance of the algorithm is linear, which applies to most of the cases in the computer-assisted surgical planning. Based on the above stated algorithm, a user-friendly module named SmartModelClip is developed on the basis of Slicer platform and VTK. A number of arbitrary clipping paths have been tested. Experimental results of presurgical planning for three types of Le Fort fractures and for tumor removal demonstrate the high reliability and efficiency of our recursive algorithm and robustness of the module. PMID:26751685
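
    The Boolean combination of implicit functions can be illustrated with the common min/max construction: each plane widget contributes a signed-distance-like function, and unions and intersections of the kept half-spaces are taken pointwise. This is a generic sketch of the ingredient, not the recursive scheme of SmartModelClip.

      import numpy as np

      def plane(point, normal):
          """Implicit function of a half-space: f(x) >= 0 on the side the normal points to."""
          p, n = np.asarray(point, float), np.asarray(normal, float)
          return lambda x: float(np.dot(np.asarray(x, float) - p, n))

      def union(f, g):
          return lambda x: max(f(x), g(x))

      def intersection(f, g):
          return lambda x: min(f(x), g(x))

      # Clipping path from two plane widgets: keep points on the positive side of both.
      clip = intersection(plane((0, 0, 0), (1, 0, 0)), plane((0, 0, 0), (0, 1, 0)))
      keeps_point = clip((0.5, 2.0, -1.0)) >= 0      # True: this point survives the clip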

  5. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp: I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
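
    For readers unfamiliar with the operator itself, a plain (network-free) sketch of the single-step operator T_P and its fixed-point iteration for a propositional program is given below; the RBF-network encoding and training of the record above are not shown.

      def tp(clauses, interpretation):
          """clauses: (head, positive_body, negative_body) triples of atom names."""
          return {head for head, pos, neg in clauses
                  if all(a in interpretation for a in pos)
                  and all(a not in interpretation for a in neg)}

      def fixed_point(clauses, start=frozenset(), max_iter=100):
          I = set(start)
          for _ in range(max_iter):            # iterate T_P until a steady state is reached
              nxt = tp(clauses, I)
              if nxt == I:
                  return I
              I = nxt
          return I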

  6. Computer-aided analyses of transport protein sequences: gleaning evidence concerning function, structure, biogenesis, and evolution.

    PubMed Central

    Saier, M H

    1994-01-01

    Three-dimensional structures have been elucidated for very few integral membrane proteins. Computer methods can be used as guides for estimation of solute transport protein structure, function, biogenesis, and evolution. In this paper the application of currently available computer programs to over a dozen distinct families of transport proteins is reviewed. The reliability of sequence-based topological and localization analyses and the importance of sequence and residue conservation to structure and function are evaluated. Evidence concerning the nature and frequency of occurrence of domain shuffling, splicing, fusion, deletion, and duplication during evolution of specific transport protein families is also evaluated. Channel proteins are proposed to be functionally related to carriers. It is argued that energy coupling to transport was a late occurrence, superimposed on preexisting mechanisms of solute facilitation. It is shown that several transport protein families have evolved independently of each other, employing different routes, at different times in evolutionary history, to give topologically similar transmembrane protein complexes. The possible significance of this apparent topological convergence is discussed. PMID:8177172

  7. Computing single step operators of logic programming in radial basis function neural networks

    SciTech Connect

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp: I→I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used the recurrent radial basis function neural networks to get to the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  8. An effective method to verify line and point spread functions measured in computed tomography

    SciTech Connect

    Ohkubo, Masaki; Wada, Sinichi; Matsumoto, Toru; Nishizawa, Kanae

    2006-08-15

    This study describes an effective method for verifying line spread function (LSF) and point spread function (PSF) measured in computed tomography (CT). The CT image of an assumed object function is known to be calculable using LSF or PSF based on a model for the spatial resolution in a linear imaging system. Therefore, the validities of LSF and PSF would be confirmed by comparing the computed images with the images obtained by scanning phantoms corresponding to the object function. Differences between computed and measured images will depend on the accuracy of the LSF and PSF used in the calculations. First, we measured LSF in our scanner, and derived the two-dimensional PSF in the scan plane from the LSF. Second, we scanned the phantom including uniform cylindrical objects parallel to the long axis of a patient's body (z direction). Measured images of such a phantom were characterized according to the spatial resolution in the scan plane, and did not depend on the spatial resolution in the z direction. Third, images were calculated by two-dimensionally convolving the true object as a function of space with the PSF. As a result of comparing computed images with measured ones, good agreement was found and was demonstrated by image subtraction. As a criterion for evaluating quantitatively the overall differences of images, we defined the normalized standard deviation (SD) in the differences between computed and measured images. These normalized SDs were less than 5.0% (ranging from 1.3% to 4.8%) for three types of image reconstruction kernels and for various diameters of cylindrical objects, indicating the high accuracy of PSF and LSF that resulted in successful measurements. Further, we also obtained another LSF in an inappropriate manner and calculated the images as above. This time, the computed images did not agree with the measured ones. The normalized SDs were 6.0% or more (ranging from 6.0% to 13.8%), indicating the inaccuracy of the PSF and LSF. We
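
    The verification step itself amounts to a 2-D convolution of the known object function with the measured PSF followed by a normalised difference measure. The sketch below uses one plausible normalisation (standard deviation of the difference relative to the measured image range); the exact normalisation of the study is not reproduced.

      import numpy as np
      from scipy.signal import fftconvolve

      def predicted_image(object_function, psf):
          """Image predicted by the linear model: object convolved with the in-plane PSF."""
          return fftconvolve(object_function, psf, mode="same")

      def normalized_sd_percent(computed, measured):
          diff = computed - measured
          return 100.0 * diff.std() / (measured.max() - measured.min())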

  9. Computer simulation on the cooperation of functional molecules during the early stages of evolution.

    PubMed

    Ma, Wentao; Hu, Jiming

    2012-01-01

    It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with "micro-resolution" at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a "habitat-shift" into bulk water. Second, the protocell stage started with a substage of "pseudo-protocells", with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent "true-protocell" substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the emergence of single

  10. Distinct Quantitative Computed Tomography Emphysema Patterns Are Associated with Physiology and Function in Smokers

    PubMed Central

    San José Estépar, Raúl; Mendoza, Carlos S.; Hersh, Craig P.; Laird, Nan; Crapo, James D.; Lynch, David A.; Silverman, Edwin K.; Washko, George R.

    2013-01-01

    Rationale: Emphysema occurs in distinct pathologic patterns, but little is known about the epidemiologic associations of these patterns. Standard quantitative measures of emphysema from computed tomography (CT) do not distinguish between distinct patterns of parenchymal destruction. Objectives: To study the epidemiologic associations of distinct emphysema patterns with measures of lung-related physiology, function, and health care use in smokers. Methods: Using a local histogram-based assessment of lung density, we quantified distinct patterns of low attenuation in 9,313 smokers in the COPDGene Study. To determine if such patterns provide novel insights into chronic obstructive pulmonary disease epidemiology, we tested for their association with measures of physiology, function, and health care use. Measurements and Main Results: Compared with percentage of low-attenuation area less than −950 Hounsfield units (%LAA-950), local histogram-based measures of distinct CT low-attenuation patterns are more predictive of measures of lung function, dyspnea, quality of life, and health care use. These patterns are strongly associated with a wide array of measures of respiratory physiology and function, and most of these associations remain highly significant (P < 0.005) after adjusting for %LAA-950. In smokers without evidence of chronic obstructive pulmonary disease, the mild centrilobular disease pattern is associated with lower FEV1 and worse functional status (P < 0.005). Conclusions: Measures of distinct CT emphysema patterns provide novel information about the relationship between emphysema and key measures of physiology, physical function, and health care use. Measures of mild emphysema in smokers with preserved lung function can be extracted from CT scans and are significantly associated with functional measures. PMID:23980521

  11. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although there already exist various algorithms and codes, most of them are only reliable in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts at a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results at the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, which should find numerous applications in many fields such as strong field physics. Program summary: Program title: HContinuumGautchi Catalogue identifier: AEHD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1233 No. of bytes in distributed program, including test data, etc.: 7405 Distribution format: tar.gz Programming language: Fortran90 in fixed format Computer: AMD Processors Operating system: Linux RAM: 20 MBytes Classification: 2.7, 4.5 Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although there already exist various algorithms and codes, most of them can only be applicable and reliable in a certain range of parameters. We present here an accurate FORTRAN program for

  12. Computational principles of syntax in the regions specialized for language: integrating theoretical linguistics and functional neuroimaging

    PubMed Central

    Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.

    2013-01-01

    The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. Future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
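
    Read simply as the maximum depth of merged subtrees, the DoM of a sentence represented as nested binary structures can be computed with a short recursion; the sketch below is purely illustrative of the metric and says nothing about the imaging analysis.

      def degree_of_merger(tree):
          """DoM taken here as the maximum nesting depth of Merge applications."""
          if not isinstance(tree, tuple):          # a single word: no Merge applied
              return 0
          return 1 + max(degree_of_merger(child) for child in tree)

      example = (("the", "cat"), ("chased", ("the", "mouse")))
      assert degree_of_merger(example) == 3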

  13. Talking while Computing in Groups: The Not-so-Private Functions of Computational Private Speech in Mathematical Discussions

    ERIC Educational Resources Information Center

    Zahner, William; Moschkovich, Judit

    2010-01-01

    Students often voice computations during group discussions of mathematics problems. Yet, this type of private speech has received little attention from mathematics educators or researchers. In this article, we use excerpts from middle school students' group mathematical discussions to illustrate and describe "computational private speech." We…

  14. Using an iterative eigensolver to compute vibrational energies with phase-spaced localized basis functions

    SciTech Connect

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.

  15. Using an iterative eigensolver to compute vibrational energies with phase-spaced localized basis functions.

    PubMed

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis. PMID:26233104

  16. Using an iterative eigensolver to compute vibrational energies with phase-spaced localized basis functions

    NASA Astrophysics Data System (ADS)

    Brown, James; Carrington, Tucker

    2015-07-01

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.

  17. Computational solution of the defect stream-function equation for nonequilibrium turbulent boundary layers

    NASA Technical Reports Server (NTRS)

    Barnwell, Richard W.

    1993-01-01

    The derivation of the accurate, second-order, almost linear, approximate equation governing the defect stream function for nonequilibrium compressible turbulent boundary layers is reviewed. The similarity of this equation to the heat conduction equation is exploited in the development of an unconditionally stable, tridiagonal computational method which is second-order accurate in the marching direction and fourth-order accurate in the surface-normal direction. Results compare well with experimental data. Nonlinear effects are shown to be small. This two-dimensional method is simple and has been implemented on a programmable calculator.
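
    Each marching step of such a scheme reduces to a tridiagonal solve, for which the Thomas algorithm is the standard tool; the sketch below shows that solver only, with the boundary-layer-specific coefficients of the method left out.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c,
          and right-hand side d (all length n; a[0] and c[-1] are ignored)."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x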

  18. Functional Priorities, Assistive Technology, and Brain-Computer Interfaces after Spinal Cord Injury

    PubMed Central

    Collinger, Jennifer L.; Boninger, Michael L.; Bruns, Tim M.; Curley, Kenneth; Wang, Wei; Weber, Douglas J.

    2012-01-01

    Spinal cord injury often impacts a person’s ability to perform critical activities of daily living and can have a negative impact on their quality of life. Assistive technology aims to bridge this gap to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies, like brain-computer interfaces (BCIs), are developed. In a survey study of fifty-seven veterans with spinal cord injury who were participating in the National Veterans Wheelchair Games, we found that restoration of bladder/bowel control, walking, and arm/hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of individuals in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criteria. Interestingly, many participants reported that they would be willing to consider surgery to implant a BCI even though non-invasiveness was a high priority design requirement. This survey demonstrates the interest of individuals with spinal cord injury in receiving and contributing to the design of BCI. PMID:23760996

  19. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    SciTech Connect

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-01-01

    A set of physical and functional system components and their interconnection protocols have been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons and additional segments will be implemented during the continuing construction efforts which are adding heavy ion capability to our facility. Included in our efforts are the following computer and control system elements: a broad band local area network, which embodies MODEMS; transmission systems and branch interface units; a hierarchical layer, which performs certain data base and watchdog/alarm functions; a group of work station processors (Apollo's) which perform the function of traditional minicomputer host(s) and a layer, which provides both real time control and standardization functions for accelerator devices and instrumentation. Data base and other accelerator functionality is assigned to the most correct level within our network for both real time performance, long-term utility, and orderly growth.

  20. Distribution of computer functionality for accelerator control at the Brookhaven AGS

    SciTech Connect

    Stevens, A.; Clifford, T.; Frankel, R.

    1985-10-01

    A set of physical and functional system components and their interconnection protocols have been established for all controls work at the AGS. Portions of these designs were tested as part of enhanced operation of the AGS as a source of polarized protons and additional segments will be implemented during the continuing construction efforts which are adding heavy ion capability to our facility. Included in our efforts are the following computer and control system elements: a broad band local area network, which embodies MODEMS; transmission systems and branch interface units; a hierarchical layer, which performs certain data base and watchdog/alarm functions; a group of work station processors (Apollo's) which perform the function of traditional minicomputer host(s) and a layer, which provides both real time control and standardization functions for accelerator devices and instrumentation. Data base and other accelerator functionality is assigned to the most correct level within our network for both real time performance, long-term utility, and orderly growth.

  1. Understanding entangled cerebral networks: a prerequisite for restoring brain function with brain-computer interfaces

    PubMed Central

    Mandonnet, Emmanuel; Duffau, Hugues

    2014-01-01

    Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restore brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiate brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in the BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on the cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved. PMID:24834030

  2. Temporal Expression of Peripheral Blood Leukocyte Biomarkers in a Macaca fascicularis Infection Model of Tuberculosis; Comparison with Human Datasets and Analysis with Parametric/Non-parametric Tools for Improved Diagnostic Biomarker Identification

    PubMed Central

    Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham

    2016-01-01

    A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge and labelled, purified RNAs hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically-significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an ‘early’ FOS-associated response, to a ‘late’ predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype e.g. CD163. The animals in the study were of different lineages and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons

  3. Temporal Expression of Peripheral Blood Leukocyte Biomarkers in a Macaca fascicularis Infection Model of Tuberculosis; Comparison with Human Datasets and Analysis with Parametric/Non-parametric Tools for Improved Diagnostic Biomarker Identification.

    PubMed

    Javed, Sajid; Marsay, Leanne; Wareham, Alice; Lewandowski, Kuiama S; Williams, Ann; Dennis, Michael J; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham; Kempsell, Karen E

    2016-01-01

    A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge and labelled, purified RNAs hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically-significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an 'early' FOS-associated response, to a 'late' predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype e.g. CD163. The animals in the study were of different lineages and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons between

  4. Study of space shuttle orbiter system management computer function. Volume 1: Analysis, baseline design

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A system analysis of the shuttle orbiter baseline system management (SM) computer function is performed. This analysis results in an alternative SM design which is also described. The alternative design exhibits several improvements over the baseline, some of which are increased crew usability, improved flexibility, and improved growth potential. The analysis consists of two parts: an application assessment and an implementation assessment. The former is concerned with the SM user needs and design functional aspects. The latter is concerned with design flexibility, reliability, growth potential, and technical risk. The system analysis is supported by several topical investigations. These include: treatment of false alarms, treatment of off-line items, significant interface parameters, and a design evaluation checklist. An in-depth formulation of techniques, concepts, and guidelines for design of automated performance verification is discussed.

  5. Computing exact p-values for a cross-correlation shotgun proteomics score function.

    PubMed

    Howbert, J Jeffry; Noble, William Stafford

    2014-09-01

    The core of every protein mass spectrometry analysis pipeline is a function that assesses the quality of a match between an observed spectrum and a candidate peptide. We describe a procedure for computing exact p-values for the oldest and still widely used score function, SEQUEST XCorr. The procedure uses dynamic programming to enumerate efficiently the full distribution of scores for all possible peptides whose masses are close to that of the spectrum precursor mass. Ranking identified spectra by p-value rather than XCorr significantly reduces variance because of spectrum-specific effects on the score. In combination with the Percolator postprocessor, the XCorr p-value yields more spectrum and peptide identifications at a fixed false discovery rate than Mascot, X!Tandem, Comet, and MS-GF+ across a variety of data sets. PMID:24895379
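
    A minimal sketch of the dynamic-programming idea described above, not the published SEQUEST/Tide implementation: with toy integer residue masses and an additive integer per-residue score (both assumptions made for illustration), the full null distribution of scores over all peptides matching the precursor mass can be enumerated exactly and the p-value read off its tail.

    ```python
    # Minimal sketch (not the published implementation): exact p-value of an
    # integer-valued additive score over all peptides whose total (integer)
    # mass equals the precursor mass, via dynamic programming.
    from collections import defaultdict

    def exact_pvalue(residue_mass, residue_score, target_mass, observed_score):
        """residue_mass/residue_score: dicts keyed by amino acid (toy, discretized).
        Returns P(score >= observed_score) over all peptides of mass target_mass,
        treating every peptide of that mass as equally likely (null model)."""
        # counts[m][s] = number of peptide sequences with total mass m and score s
        counts = [defaultdict(float) for _ in range(target_mass + 1)]
        counts[0][0] = 1.0
        for m in range(1, target_mass + 1):
            for aa, dm in residue_mass.items():
                if dm <= m:
                    ds = residue_score[aa]
                    for s, c in counts[m - dm].items():
                        counts[m][s + ds] += c
        dist = counts[target_mass]
        total = sum(dist.values())
        if total == 0:
            return None  # no peptide matches the precursor mass exactly
        hits = sum(c for s, c in dist.items() if s >= observed_score)
        return hits / total
    ```

    In the real setting the per-residue score contributions depend on the observed spectrum, so the table is rebuilt for every spectrum; the recursion itself is unchanged.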

  6. Perturbation method to compute 1-D anisotropic P and S receiver functions

    NASA Astrophysics Data System (ADS)

    Çakır, Özcan

    2013-09-01

    We propose a new algorithm to compute the teleseismic P and S receiver function synthetics for a multilayered Cartesian structure with anisotropic flat layers. The algorithm is based on first-order perturbation theory, in which the layered background structure is assumed to be one-dimensional with isotropic variations in the vertical direction. Anisotropic velocity perturbations acting as secondary sources constitute the heterogeneities in the medium. The total wavefield is solved using a convolutional-type integral equation along with the Green's function of the one-dimensional reference medium extracted using the reflectivity method. The integral equation involves a five-fold integration in the space and wavenumber domains. Four of these integrals are evaluated analytically; the fifth, which is the spatial integral in the vertical direction, is performed numerically, for which the Born single-scattering approximation suffices. The proposed algorithm is demonstrated on selected numerical examples adapted from published work in the literature.

  7. Finite difference computation of head-related transfer function for human hearing

    NASA Astrophysics Data System (ADS)

    Xiao, Tian; Huo Liu, Qing

    2003-05-01

    Modeling the head-related transfer function (HRTF) is a key to many applications in spatial audio. To understand and predict the effects of head geometry and the surrounding environment on the HRTF, a three-dimensional finite-difference time domain model (3D FDTD) has been developed to simulate acoustic wave interaction with a human head. A perfectly matched layer (PML) is used to absorb outgoing waves at the truncated boundary of an unbounded medium. An external source is utilized to reduce the computational domain size through the scattered-field/total-field formulation. This numerical model has been validated by analytical solutions for a spherical head model. The 3D FDTD code is then used as a computational tool to predict the HRTF for various scenarios. In particular, a simplified spherical head model is compared to a realistic head model up to about 7 kHz. The HRTF is also computed for a realistic head model in the presence of a wall. It is demonstrated that this 3D FDTD model can be a useful tool for spatial audio applications.
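
    For readers unfamiliar with the FDTD scheme itself, the following is a minimal one-dimensional pressure-velocity leapfrog update on a staggered grid. It only illustrates the time-stepping idea; the study's model is three-dimensional and adds a PML absorbing boundary and a scattered-/total-field source, none of which is reproduced here, and all grid parameters below are illustrative.

    ```python
    # Minimal 1-D illustration of the acoustic FDTD update (the study uses a
    # 3-D staggered grid with a PML and a scattered-/total-field source).
    import numpy as np

    c, rho = 343.0, 1.2          # sound speed (m/s), air density (kg/m^3)
    nx, dx = 400, 1e-3           # grid points and spacing (m)
    dt = 0.5 * dx / c            # time step satisfying the 1-D CFL condition

    p = np.zeros(nx)             # pressure at integer grid points
    u = np.zeros(nx + 1)         # particle velocity at staggered half points

    for n in range(600):
        # update velocity from the pressure gradient
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
        # update pressure from the velocity divergence
        p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
        # soft source: a short Gaussian pulse injected at one grid point
        p[nx // 4] += np.exp(-((n - 60) / 15.0) ** 2)
    ```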

  8. Finite difference computation of head-related transfer function for human hearing.

    PubMed

    Xiao, Tian; Liu, Qing Huo

    2003-05-01

    Modeling the head-related transfer function (HRTF) is a key to many applications in spatial audio. To understand and predict the effects of head geometry and the surrounding environment on the HRTF, a three-dimensional finite-difference time domain model (3D FDTD) has been developed to simulate acoustic wave interaction with a human head. A perfectly matched layer (PML) is used to absorb outgoing waves at the truncated boundary of an unbounded medium. An external source is utilized to reduce the computational domain size through the scattered-field/total-field formulation. This numerical model has been validated by analytical solutions for a spherical head model. The 3D FDTD code is then used as a computational tool to predict the HRTF for various scenarios. In particular, a simplified spherical head model is compared to a realistic head model up to about 7 kHz. The HRTF is also computed for a realistic head model in the presence of a wall. It is demonstrated that this 3D FDTD model can be a useful tool for spatial audio applications. PMID:12765362

  9. Morphological and Functional Evaluation of Quadricuspid Aortic Valves Using Cardiac Computed Tomography

    PubMed Central

    Song, Inyoung; Park, Jung Ah; Choi, Bo Hwa; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok

    2016-01-01

    Objective The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). Materials and Methods We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT was compared with that of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. Results All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane and 1 of them showed a subaortic stenosis. One QAV was misdiagnosed as a tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), but only a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Conclusion Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV. PMID:27390538

  10. Poisson Green's function method for increased computational efficiency in numerical calculations of Coulomb coupling elements

    NASA Astrophysics Data System (ADS)

    Zimmermann, Anke; Kuhn, Sandra; Richter, Marten

    2016-01-01

    Often, the calculation of Coulomb coupling elements for quantum dynamical treatments, e.g., in cluster or correlation expansion schemes, requires the evaluation of a six dimensional spatial integral. Therefore, it represents a significant limiting factor in quantum mechanical calculations. If the size or the complexity of the investigated system increases, many coupling elements need to be determined. The resulting computational constraints require an efficient method for a fast numerical calculation of the Coulomb coupling. We present a computational method to reduce the numerical complexity by decreasing the number of spatial integrals for arbitrary geometries. We use a Green's function formulation of the Coulomb coupling and introduce a generalized scalar potential as solution of a generalized Poisson equation with a generalized charge density as the inhomogeneity. That enables a fast calculation of Coulomb coupling elements and, additionally, a straightforward inclusion of boundary conditions and arbitrarily spatially dependent dielectrics through the Coulomb Green's function. Particularly, if many coupling elements are included, the presented method, which is not restricted to specific symmetries of the model, presents a promising approach for increasing the efficiency of numerical calculations of the Coulomb interaction. To demonstrate the wide range of applications, we calculate internanostructure couplings, such as the Förster coupling, and illustrate the inclusion of symmetry considerations in the method for the Coulomb coupling between bound quantum dot states and unbound continuum states.
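
    The saving the abstract describes can be illustrated with a small sketch: instead of the six-dimensional integral J = ∫∫ ρ1(r) ρ2(r') / |r − r'| dr dr', one solves a Poisson equation for the potential generated by ρ2 and then performs a single three-dimensional integral against ρ1. The periodic FFT-based Poisson solver below (Gaussian units) is an assumption made for brevity; the paper's formulation uses the Coulomb Green's function and handles general boundary conditions and spatially varying dielectrics.

    ```python
    # Sketch of the 6-D -> 3-D reduction: solve a Poisson equation for the
    # potential of one (generalized) charge density, then integrate it against
    # the other density. Periodic FFT solver used purely for brevity.
    import numpy as np

    def coulomb_coupling(rho1, rho2, dx):
        """rho1, rho2: 3-D arrays on the same cubic grid (spacing dx, Gaussian units)."""
        n = rho2.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
        phi_k = 4.0 * np.pi * np.fft.fftn(rho2) / k2
        phi_k[0, 0, 0] = 0.0                   # drop the mean (neutralizing) component
        phi = np.real(np.fft.ifftn(phi_k))     # generalized scalar potential of rho2
        return np.sum(rho1 * phi) * dx**3      # single 3-D quadrature instead of 6-D
    ```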

  11. Acidity of the amidoxime functional group in aqueous solution: a combined experimental and computational study.

    PubMed

    Mehio, Nada; Lashely, Mark A; Nugent, Joseph W; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D; Bryantsev, Vyacheslav S

    2015-02-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pK(a) values that have been reported for the amidoxime functional group. To resolve this existing controversy we investigated the pK(a) values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pK(a) values of representative amidoximes, acetamidoxime, and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pK(a) values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance, with a root-mean-square deviation of 0.46 pK(a) units and 0.45 pK(a) units, respectively. Finally, we employ our two best methods to predict the pK(a) values of promising, uncharacterized amidoxime ligands, which provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents. PMID:25621618
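
    As background, the relation that typically links computed aqueous deprotonation free energies to pKa values in such protocols is pKa = ΔG_aq / (RT ln 10); the snippet below only encodes that conversion, and the input free energy is a placeholder rather than a value from the paper.

    ```python
    # Thermodynamic relation behind computed pKa values:
    # for HA(aq) -> H+(aq) + A-(aq), pKa = dG_aq / (RT ln 10).
    import math

    R = 1.98720425864083e-3      # gas constant in kcal/(mol K)
    T = 298.15                   # temperature in K

    def pka_from_deltaG(dG_aq_kcal):
        """Aqueous deprotonation free energy (kcal/mol) -> pKa."""
        return dG_aq_kcal / (R * T * math.log(10.0))

    print(pka_from_deltaG(16.0))  # ~11.7 for this illustrative input
    ```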

  12. Rsite2: an efficient computational method to predict the functional sites of noncoding RNAs.

    PubMed

    Zeng, Pan; Cui, Qinghua

    2016-01-01

    Noncoding RNAs (ncRNAs) represent a large class of important RNA molecules. Given the large number of ncRNAs, identifying their functional sites is becoming one of the most important topics in the post-genomic era, but available computational methods are limited. For the above purpose, we previously presented a tertiary structure based method, Rsite, which first calculates the distance metrics defined in Methods with the tertiary structure of an ncRNA and then identifies the nucleotides located within the extreme points in the distance curve as the functional sites of the given ncRNA. However, the application of Rsite is largely limited because of the limited availability of RNA tertiary structures. Here we present a secondary structure based computational method, Rsite2, based on the observation that the secondary structure based nucleotide distance is strongly positively correlated with that derived from tertiary structure. This makes it reasonable to replace tertiary structure with secondary structure, which is much easier to obtain and process. Moreover, we applied Rsite2 to three ncRNAs (tRNA (Lys), Diels-Alder ribozyme, and RNase P) and a list of human mitochondrial transcripts. The results show that Rsite2 works well, with accuracy nearly equivalent to that of Rsite, but is much more feasible and efficient. Finally, a web-server, the source codes, and the dataset of Rsite2 are available at http://www.cuialb.cn/rsite2. PMID:26751501
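
    A generic sketch of the final step the abstract describes, locating extreme points of a per-nucleotide distance curve; how that curve is derived from the secondary (or tertiary) structure is not reproduced, and the curve is simply taken as an input array here.

    ```python
    # Generic sketch of the extreme-point step only: given a per-nucleotide
    # distance curve (an arbitrary input array here), report its local extrema
    # as candidate functional sites.
    import numpy as np
    from scipy.signal import argrelextrema

    def candidate_sites(distance_curve, order=3):
        d = np.asarray(distance_curve, dtype=float)
        maxima = argrelextrema(d, np.greater_equal, order=order)[0]
        minima = argrelextrema(d, np.less_equal, order=order)[0]
        return sorted(set(maxima) | set(minima))   # 0-based nucleotide indices
    ```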

  13. Range, Doppler and astrometric observables computed from Time Transfer Functions: a survey

    NASA Astrophysics Data System (ADS)

    Hees, A.; Bertone, S.; Le Poncin-Lafitte, C.; Teyssandier, P.

    2015-08-01

    Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function, called a reception (or emission) time transfer function, has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant G for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in G when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to on-going or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.
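
    Schematically, and leaving aside the proper-time factors and higher-order terms treated in the works surveyed, the range follows directly from the time transfer function, while the Doppler ratio follows from differentiating the implicit relation t_B = t_A + T_e(x_A(t_A), t_A, x_B(t_B)) with respect to t_A:

    ```latex
    % Schematic relations only; proper-time factors and relativistic details
    % are treated in the papers surveyed.
    \begin{align}
      R &= c\, \mathcal{T}_e(\mathbf{x}_A, t_A, \mathbf{x}_B), \\
      \frac{\mathrm{d}t_B}{\mathrm{d}t_A}
        &= \frac{1 + \partial_{t_A}\mathcal{T}_e
               + \nabla_{\mathbf{x}_A}\mathcal{T}_e \cdot \mathbf{v}_A}
              {1 - \nabla_{\mathbf{x}_B}\mathcal{T}_e \cdot \mathbf{v}_B}.
    \end{align}
    ```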

  14. The Time Transfer Functions: an efficient tool to compute range, Doppler and astrometric observables

    NASA Astrophysics Data System (ADS)

    Hees, A.; Bertone, S.; Le Poncin-Lafitte, C.; Teyssandier, P.

    2015-12-01

    Determining range, Doppler and astrometric observables is of crucial interest for modelling and analyzing space observations. We recall how these observables can be computed when the travel time of a light ray is known as a function of the positions of the emitter and the receiver for a given instant of reception (or emission). For a long time, such a function--called a reception (or emission) time transfer function--has been almost exclusively calculated by integrating the null geodesic equations describing the light rays. However, other methods avoiding such an integration have been considerably developed in the last twelve years. We give a survey of the analytical results obtained with these new methods up to the third order in the gravitational constant G for a mass monopole. We briefly discuss the case of quasi-conjunctions, where higher-order enhanced terms must be taken into account for correctly calculating the effects. We summarize the results obtained at the first order in G when the multipole structure and the motion of an axisymmetric body are taken into account. We present some applications to on-going or future missions like Gaia and Juno. We give a short review of the recent works devoted to the numerical estimates of the time transfer functions and their derivatives.

  15. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    PubMed

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
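
    The basic band-averaging operation behind assessing an SRF effect can be sketched as follows; the wavelength grid, spectrum and SRF arrays are placeholders, and the actual study works from precomputed Rayleigh radiance lookup tables (and may also weight by the solar irradiance).

    ```python
    # Generic band averaging with a sensor spectral response function (SRF).
    # Arrays are placeholders; the study itself works from Rayleigh lookup tables.
    import numpy as np

    def band_average(wavelength_nm, spectrum, srf):
        """Trapezoidal band average of `spectrum` weighted by the SRF."""
        w = np.asarray(wavelength_nm, dtype=float)
        num = np.trapz(np.asarray(spectrum) * np.asarray(srf), w)
        den = np.trapz(np.asarray(srf), w)
        return num / den
    ```

    Comparing such a band average against the value at the nominal band centre is one simple way to quantify the SRF effect for a given band.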

  16. Multiscale Theoretical and Computational Modeling of the Synthesis, Structure and Performance of Functional Carbon Materials

    NASA Astrophysics Data System (ADS)

    Mushrif, Samir Hemant

    2010-09-01

    Functional carbon-based/supported materials, including those doped with transition metal, are widely applied in hydrogen mediated catalysis and are currently being designed for hydrogen storage applications. This thesis focuses on acquiring a fundamental understanding and quantitative characterization of: (i) the chemistry of their synthesis procedure, (ii) their microstructure and chemical composition and (iii) their functionality, using multiscale modeling and simulation methodologies. Palladium and palladium(II) acetylacetonate are the transition metal and its precursor of interest, respectively. A first-principles modeling approach consisting of the planewave-pseudopotential implementation of the Kohn-Sham density functional theory, combined with the Car-Parrinello molecular dynamics, is implemented to model the palladium doping step in the synthesis of carbon-based/supported material and its interaction with hydrogen. The electronic structure is analyzed using the electron localization function and, when required, the hydrogen interaction dynamics are accelerated and the energetics are computed using the metadynamics technique. Palladium pseudopotentials are tested and validated for their use in a hydrocarbon environment by successfully computing the experimentally observed crystal structure of palladium(II) acetylacetonate. Long-standing hypotheses related to the palladium doping process are confirmed and new fundamental insights about its molecular chemistry are revealed. The dynamics, mechanism and energy landscape and barriers of hydrogen adsorption and migration on and desorption from the carbon-based/supported palladium clusters are reported for the first time. The effects of palladium doping and of the synthesis procedure on the pore structure of palladium-doped activated carbon fibers are quantified by applying novel statistical mechanical based methods to the experimental physisorption isotherms. The drawbacks of the conventional adsorption-based pore

  17. A Computational Method Designed to Aid in the Teaching of Copolymer Composition and Microstructure as a Function of Conversion.

    ERIC Educational Resources Information Center

    Coleman, M. M.; Varnell, W. D.

    1982-01-01

    Describes a computer program (FORTRAN and APPLESOFT) demonstrating the effect of copolymer composition as a function of conversion, providing theoretical background and examples of types of information gained from computer calculations. Suggests that the program enhances undergraduate students' understanding of basic copolymerization theory.…
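
    The calculation such a teaching program performs is, in the terminal (Mayo-Lewis) model, the instantaneous copolymer composition together with its drift as the monomer feed is consumed. The sketch below (with illustrative reactivity ratios, not values from the article, and in Python rather than the original FORTRAN/Applesoft) integrates the standard feed-drift equation df1/dX = (f1 - F1)/(1 - X).

    ```python
    # Terminal-model (Mayo-Lewis) copolymer composition and its drift with
    # conversion. Reactivity ratios below are illustrative only.
    def mayo_lewis(f1, r1, r2):
        """Instantaneous copolymer composition F1 from monomer feed fraction f1."""
        f2 = 1.0 - f1
        return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2)

    def composition_vs_conversion(f1_0, r1, r2, dX=1e-4, X_max=0.95):
        """Euler integration of the feed-drift equation df1/dX = (f1 - F1)/(1 - X)."""
        f1, X, history = f1_0, 0.0, []
        while X < X_max:
            F1 = mayo_lewis(f1, r1, r2)
            history.append((X, f1, F1))
            f1 += (f1 - F1) / (1.0 - X) * dX
            X += dX
        return history

    curve = composition_vs_conversion(f1_0=0.5, r1=0.5, r2=1.5)
    ```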

  18. Application of the new neutron monitor yield function computed for different altitudes to an analysis of GLEs

    NASA Astrophysics Data System (ADS)

    Mishev, Alexander; Usoskin, Ilya

    2016-07-01

    A precise analysis of SEP (solar energetic particle) spectral and angular characteristics using neutron monitor (NM) data requires realistic modeling of propagation of those particles in the Earth's magnetosphere and atmosphere. On the basis of the method including a sequence of consecutive steps, namely a detailed computation of the SEP asymptotic cones of acceptance, and application of a neutron monitor yield function and a convenient optimization procedure, we derived the rigidity spectra and anisotropy characteristics of several major GLEs. Here we present several major GLEs of solar cycle 23: the Bastille Day event on 14 July 2000 (GLE 59), GLE 69 on 20 January 2005, and GLE 70 on 13 December 2006. The SEP spectra and pitch angle distributions were computed in their dynamical development. For the computation we use the newly computed yield function of the standard 6NM64 neutron monitor for primary proton and alpha CR nuclei. In addition, we present new computations of the NM yield function for the altitudes of 3000 m and 5000 m above sea level. The computations were carried out with the Planetocosmics and CORSIKA codes as standardized Monte-Carlo tools for atmospheric cascade simulations. The flux of secondary neutrons and protons was computed using the Planetocosmics code applying a realistic curved atmosphere. Updated information concerning the NM registration efficiency for secondary neutrons and protons was used. The derived results for spectral and angular characteristics using the newly computed NM yield function at several altitudes are compared with the previously obtained ones using the double attenuation method.

  19. Intersections between the Autism Spectrum and the Internet: Perceived Benefits and Preferred Functions of Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted

    2014-01-01

    An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…

  20. Computer Simulations Reveal Multiple Functions for Aromatic Residues in Cellulase Enzymes (Fact Sheet)

    SciTech Connect

    Not Available

    2012-07-01

    NREL researchers use high-performance computing to demonstrate fundamental roles of aromatic residues in cellulase enzyme tunnels. National Renewable Energy Laboratory (NREL) computer simulations of a key industrial enzyme, the Trichoderma reesei Family 6 cellulase (Cel6A), predict that aromatic residues near the enzyme's active site and at the entrance and exit tunnel perform different functions in substrate binding and catalysis, depending on their location in the enzyme. These results suggest that nature employs aromatic-carbohydrate interactions with a wide variety of binding affinities for diverse functions. Outcomes also suggest that protein engineering strategies in which mutations are made around the binding sites may require tailoring specific to the enzyme family. Cellulase enzymes ubiquitously exhibit tunnels or clefts lined with aromatic residues for processing carbohydrate polymers to monomers, but the molecular-level role of these aromatic residues remains unknown. In silico mutation of the aromatic residues near the catalytic site of Cel6A has little impact on the binding affinity, but simulation suggests that these residues play a major role in the glucopyranose ring distortion necessary for cleaving glycosidic bonds to produce fermentable sugars. Removal of aromatic residues at the entrance and exit of the cellulase tunnel, however, dramatically impacts the binding affinity. This suggests that these residues play a role in acquiring cellulose chains from the cellulose crystal and stabilizing the reaction product, respectively. These results illustrate that the role of aromatic-carbohydrate interactions varies dramatically depending on the position in the enzyme tunnel. As aromatic-carbohydrate interactions are present in all carbohydrate-active enzymes, the results have implications for understanding protein structure-function relationships in carbohydrate metabolism and recognition, carbon turnover in nature, and protein engineering strategies for

  1. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    SciTech Connect

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  2. Acidity of the amidoxime functional group in aqueous solution. A combined experimental and computational study

    DOE PAGESBeta

    Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.

    2015-01-26

    Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.

  3. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions which allows systems of any kind to be treated. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for what concerns standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level parallelisation MPI scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It allows the number of distributed processes to be increased and could not be achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (openMP/openACC) or porting some time-consuming code sections to Graphics Processing Units (GPU). As no simple performance model exists, the complexity of use has been increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performances of several process distributions and automatically choose the most favourable one. On the other hand, a big effort has been carried out to analyse the performances of the code on petascale architectures, showing which sections of codes have to be improved; they all are related to Matrix Algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization

  4. [Computer-assisted phonetography as a diagnostic aid in functional dysphonia].

    PubMed

    Airainer, R; Klingholz, F

    1991-07-01

    A total of 160 voice-trained and untrained subjects with functional dysphonia were given a "clinical rating" according to their clinical findings. This was a certain value on a scale that recorded the degree of functional voice disorder ranging from a marked hypofunction to an extreme hyperfunction. The phonetograms of these patients were approximated by ellipses, whereby the definition and quantitative recording of several phonetogram parameters were rendered possible. By means of a linear combination of phonetogram parameters, a "calculated assessment" was obtained for each patient that was expected to tally with the "clinical rating". This paper demonstrates that a graduation of the dysphonic clinical picture with regard to the presence of hypofunctional or hyperfunctional components is possible via computerised phonetogram evaluation. In this case, the "calculated assessments" for both male and female singers and non-singers must be computed using different linear combinations. The method can be introduced as a supplementary diagnostic procedure in the diagnosis of functional dysphonia. PMID:1910366

  5. Computational Multiscale Toxicodynamic Modeling of Silver and Carbon Nanoparticle Effects on Mouse Lung Function

    PubMed Central

    Mukherjee, Dwaipayan; Botelho, Danielle; Gow, Andrew J.; Zhang, Junfeng; Georgopoulos, Panos G.

    2013-01-01

    A computational, multiscale toxicodynamic model has been developed to quantify and predict pulmonary effects due to uptake of engineered nanomaterials (ENMs) in mice. The model consists of a collection of coupled toxicodynamic modules that were independently developed and tested using information obtained from the literature. The modules were developed to describe the dynamics of tissue with explicit focus on the cells and the surfactant chemicals that regulate the process of breathing, as well as the response of the pulmonary system to xenobiotics. Alveolar type I and type II cells, and alveolar macrophages were included in the model, along with surfactant phospholipids and surfactant proteins, to account for processes occurring at multiple biological scales, coupling cellular and surfactant dynamics affected by nanoparticle exposure, and linking the effects to tissue-level lung function changes. Nanoparticle properties such as size, surface chemistry, and zeta potential were explicitly considered in modeling the interactions of these particles with biological media. The model predictions were compared with in vivo lung function response measurements in mice and analysis of mice lung lavage fluid following exposures to silver and carbon nanoparticles. The predictions were found to follow the trends of observed changes in mouse surfactant composition over 7 days post dosing, and are in good agreement with the observed changes in mouse lung function over the same period of time. PMID:24312506

  6. Novel hold-release functionality in a P300 brain-computer interface

    NASA Astrophysics Data System (ADS)

    Alcaide-Aguirre, R. E.; Huggins, J. E.

    2014-12-01

    Assistive technology control interface theory describes interface activation and interface deactivation as distinct properties of any control interface. Separating control of activation and deactivation allows precise timing of the duration of the activation. Objective. We propose a novel P300 brain-computer interface (BCI) functionality with separate control of the initial activation and the deactivation (hold-release) of a selection. Approach. Using two different layouts and off-line analysis, we tested the accuracy with which subjects could (1) hold their selection and (2) quickly change between selections. Main results. Mean accuracy across all subjects for the hold-release algorithm was 85% with one hold-release classification and 100% with two hold-release classifications. Using a layout designed to lower perceptual errors, accuracy increased to a mean of 90% and the time subjects could hold a selection was 40% longer than with the standard layout. Hold-release functionality provides improved response time (6-16 times faster) over the initial P300 BCI selection by allowing the BCI to make hold-release decisions from very few flashes instead of after multiple sequences of flashes. Significance. For the BCI user, hold-release functionality allows for faster, more continuous control with a P300 BCI, creating new options for BCI applications.

  7. A Computational Framework Discovers New Copy Number Variants with Functional Importance

    PubMed Central

    Banerjee, Samprit; Oldridge, Derek; Poptsova, Maria; Hussain, Wasay M.; Chakravarty, Dimple; Demichelis, Francesca

    2011-01-01

    Structural variants which cause changes in copy numbers constitute an important component of genomic variability. They account for 0.7% of genomic differences in two individual genomes, of which copy number variants (CNVs) are the largest component. A recent population-based CNV study revealed the need for better characterization of CNVs, especially the small ones (<500 bp). We propose a three-step computational framework (Identification of germline Changes in Copy Number or IgC2N) to discover and genotype germline CNVs. First, we detect candidate CNV loci by combining information across multiple samples without imposing restrictions on the number of coverage markers or on the variant size. Secondly, we fine-tune the detection of rare variants and infer the putative copy number classes for each locus. Last, for each variant we combine the relative distance between consecutive copy number classes with genetic information in a novel attempt to estimate the reference model bias. This computational approach is applied to genome-wide data from 1250 HapMap individuals. Novel variants were discovered and characterized in terms of size, minor allele frequency, type of polymorphism (gains, losses or both), and mechanism of formation. Using data generated for a subset of individuals by a 42 million marker platform, we validated the majority of the variants; the highest validation rate (66.7%) was for variants larger than 1 kb. Finally, we queried transcriptomic data from 129 individuals determined by RNA-sequencing as further validation and to assess the functional role of the new variants. We investigated the possible enrichment for variants' regulatory effects and found that smaller variants (<1 kb) are more likely to regulate gene transcripts than larger variants (p-value = 2.04e-08). Our results support the validity of the computational framework to detect novel variants relevant to disease susceptibility studies and provide evidence of the importance of

  8. Computational genomic identification and functional reconstitution of plant natural product biosynthetic pathways.

    PubMed

    Medema, Marnix H; Osbourn, Anne

    2016-08-27

    Covering: 2003 to 2016. The last decade has seen the first major discoveries regarding the genomic basis of plant natural product biosynthetic pathways. Four key computationally driven strategies have been developed to identify such pathways, which make use of physical clustering, co-expression, evolutionary co-occurrence and epigenomic co-regulation of the genes involved in producing a plant natural product. Here, we discuss how these approaches can be used for the discovery of plant biosynthetic pathways encoded by both chromosomally clustered and non-clustered genes. Additionally, we will discuss opportunities to prioritize plant gene clusters for experimental characterization, and end with a forward-looking perspective on how synthetic biology technologies will allow effective functional reconstitution of candidate pathways using a variety of genetic systems. PMID:27321668

  9. Computing frequency by using generalized zero-crossing applied to intrinsic mode functions

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2006-01-01

    This invention presents a method for computing Instantaneous Frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct, local, and also the most accurate in the mean. Furthermore, this approach will also give a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency localized down to a quarter of a wave period is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of the cubic spline fitting, this invention constructs the upper envelope and the lower envelope by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
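
    A minimal illustration of the zero-crossing part of the idea: estimate a mean frequency of an IMF from the spacing of its sign changes. The full generalized zero-crossing method also uses the extrema and combines several local period estimates in a weighted mean, which is not reproduced here.

    ```python
    # Minimal zero-crossing frequency estimate for an intrinsic mode function.
    import numpy as np

    def zero_crossing_frequency(imf, fs):
        """Mean frequency (Hz) of an IMF sampled at fs, from sign changes."""
        s = np.signbit(np.asarray(imf, dtype=float)).astype(int)
        crossings = np.nonzero(np.diff(s))[0]      # indices just before a sign change
        if len(crossings) < 2:
            return 0.0
        # adjacent zero crossings are, on average, half a period apart
        half_periods = np.diff(crossings) / fs
        return 1.0 / (2.0 * np.mean(half_periods))

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    print(zero_crossing_frequency(np.sin(2 * np.pi * 7 * t), fs))   # ~7 Hz
    ```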

  10. Localization of functional adrenal tumors by computed tomography and venous sampling

    SciTech Connect

    Dunnick, N.R.; Doppman, J.L.; Gill, J.R. Jr.; Strott, C.A.; Keiser, H.R.; Brennan, M.F.

    1982-02-01

    Fifty-eight patients with functional lesions of the adrenal glands underwent radiographic evaluation. Twenty-eight patients had primary aldosteronism (Conn syndrome), 20 had Cushing syndrome, and 10 had pheochromocytoma. Computed tomography (CT) correctly identified adrenal tumors in 11 (61%) of 18 patients with aldosteronomas, 6 of 6 patients with benign cortisol-producing adrenal tumors, and 5 (83%) of 6 patients with pheochromocytomas. No false-positive diagnoses were encountered among patients with adrenal adenomas. Bilateral adrenal hyperplasia appeared on CT scans as normal or prominent adrenal glands with a normal configuration; however, CT was not able to exclude the presence of small adenomas. Adrenal venous sampling was correct in each case, and reliably distinguished adrenal tumors from hyperplasia. Recurrent pheochromocytomas were the most difficult to localize on CT due to the surgical changes in the region of the adrenals and the frequent extra-adrenal locations.

  11. Computational design of intrinsic molecular rectifiers based on asymmetric functionalization of N-phenylbenzamide

    DOE PAGESBeta

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.

    2015-11-03

    Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  12. Computational design of intrinsic molecular rectifiers based on asymmetric functionalization of N-phenylbenzamide

    SciTech Connect

    Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.

    2015-11-03

    Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.

  13. Computed myography: three-dimensional reconstruction of motor functions from surface EMG data

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2008-12-01

    We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles by voltage measurements from an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization. This imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, brachialis, and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.
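
    The inverse step can be sketched generically: a (generalized) Tikhonov solution minimizes ||A x - b||^2 + lam ||L x||^2, whose normal equations are (A^T A + lam L^T L) x = A^T b. The operator, penalty and data below are random placeholders, not the paper's finite element model.

    ```python
    # Generic (generalized) Tikhonov-regularized least-squares solve.
    import numpy as np

    def tikhonov_solve(A, b, L, lam):
        lhs = A.T @ A + lam * (L.T @ L)
        return np.linalg.solve(lhs, A.T @ b)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 200))          # surface sensors x source parameters
    x_true = np.zeros(200); x_true[90:110] = 1  # a compact "active muscle" patch
    b = A @ x_true + 0.01 * rng.standard_normal(64)
    L = np.eye(200)                             # simplest smoothness/size penalty
    x_hat = tikhonov_solve(A, b, L, lam=1.0)
    ```

    In the paper the penalty additionally distinguishes source locations inside and outside the muscle volumes, which is what suppresses spurious activity outside the muscles.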

  14. Functional near-infrared spectroscopy for adaptive human-computer interfaces

    NASA Astrophysics Data System (ADS)

    Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.

    2015-03-01

    We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.

  15. Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?

    PubMed Central

    Chan, Micaela Y.; Haber, Sara; Drew, Linda M.; Park, Denise C.

    2016-01-01

    Purpose of the Study: Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications. We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills. Design and Methods: A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning; and a Social group that had regular social interaction, but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement. Results: Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing. Implications: iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs. PMID:24928557

  16. Development of computer-aided functions in clinical neurosurgery with PACS

    NASA Astrophysics Data System (ADS)

    Mukasa, Minoru; Aoki, Makoto; Satoh, Minoru; Kowada, Masayoshi; Kikuchi, K.

    1991-07-01

    The introduction of the "Picture Archiving and Communications System" (known as PACS) provides many benefits, including the application of C.A.D. (Computer Aided Diagnosis). Clinically, this allows for the measurement and design of an operation to be easily completed with the CRT monitors of PACS rather than with film, as has been customary in the past. Under the leadership of the Department of Neurosurgery, Akita University School of Medicine, and Southern Tohoku Research Institute for Neuroscience, Koriyama, new computer aided functions with EFPACS (Fuji Electric's PACS) have been developed for use in clinical neurosurgery. This image processing is composed of three parts as follows: (1) automatic mapping of small lesions depicted on Magnetic Resonance (MR) images onto the brain atlas; (2) superimposition of two angiographic films onto a single synthesized image; (3) automatic mapping of the lesion's position (as shown on the CT images) onto the processed image referred to in clause (2). The processing in clause (1) provides a reference for anatomical estimation. The processing in clause (2) is used for general analysis of the condition of a disease. The processing in clause (3) is used to design the operation. This image processing is currently being used with good results.

  17. A computational approach to identify genes for functional RNAs in genomic sequences

    PubMed Central

    Carter, Richard J.; Dubchak, Inna; Holbrook, Stephen R.

    2001-01-01

    Currently there is no successful computational approach for identification of genes encoding novel functional RNAs (fRNAs) in genomic sequences. We have developed a machine learning approach using neural networks and support vector machines to extract common features among known RNAs for prediction of new RNA genes in the unannotated regions of prokaryotic and archaeal genomes. The Escherichia coli genome was used for development, but we have applied this method to several other bacterial and archaeal genomes. Networks based on nucleotide composition were 80–90% accurate in jackknife testing experiments for bacteria and 90–99% for hyperthermophilic archaea. We also achieved a significant improvement in accuracy by combining these predictions with those obtained using a second set of parameters consisting of known RNA sequence motifs and the calculated free energy of folding. Several known fRNAs not included in the training datasets were identified as well as several hundred predicted novel RNAs. These studies indicate that there are many unidentified RNAs in simple genomes that can be predicted computationally as a precursor to experimental study. Public access to our RNA gene predictions and an interface for user predictions is available via the web. PMID:11574674
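
    A sketch of the compositional-feature idea only: represent each sequence window by mono- and dinucleotide frequencies and train a classifier on labelled examples. The study used neural networks and support vector machines and also combined such features with RNA motif and folding free-energy terms; the sequences and labels below are synthetic placeholders.

    ```python
    # Sketch of nucleotide-composition features plus a classifier; data are synthetic.
    import random
    from itertools import product
    from sklearn.svm import SVC

    ALPHABET = "ACGT"
    FEATURES = list(ALPHABET) + ["".join(p) for p in product(ALPHABET, repeat=2)]

    def composition(seq):
        counts = {f: 0 for f in FEATURES}
        for ch in seq:
            if ch in counts:
                counts[ch] += 1
        for i in range(len(seq) - 1):
            di = seq[i:i + 2]
            if di in counts:
                counts[di] += 1
        n = max(len(seq), 1)
        return [counts[f] / n for f in FEATURES]

    random.seed(0)
    def synthetic_seq(n, gc_prob):
        return "".join(random.choice("GC") if random.random() < gc_prob
                       else random.choice("AT") for _ in range(n))

    seqs = [synthetic_seq(120, 0.6) for _ in range(20)] + \
           [synthetic_seq(120, 0.4) for _ in range(20)]
    labels = [1] * 20 + [0] * 20                 # placeholder labels, not real data
    clf = SVC(kernel="rbf").fit([composition(s) for s in seqs], labels)
    ```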

  18. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
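
    As a rough illustration of the approach described above, the following sketch nests one-dimensional adaptive quadrature to evaluate a four-dimensional integral and cross-checks it with a quasi-Monte Carlo estimate. SciPy's QUADPACK wrappers stand in for the GSL one-dimensional solvers, and the integrand is a placeholder, not the SNS intensity model.

```python
# Minimal sketch: a four-dimensional integral evaluated by nesting one-dimensional
# adaptive quadrature. The integrand is a hypothetical smooth test function.
import numpy as np
from scipy.integrate import nquad

def integrand(x, y, z, w):
    # Placeholder smooth function on [0, 1]^4 (not the SNS intensity model).
    return np.exp(-(x**2 + y**2 + z**2 + w**2)) * np.cos(x * y)

bounds = [[0.0, 1.0]] * 4
value, abs_err = nquad(integrand, bounds)
print(f"nested quadrature: {value:.8f} (estimated error {abs_err:.1e})")

# A quasi-Monte Carlo cross-check, as suggested in the abstract.
from scipy.stats import qmc

sampler = qmc.Sobol(d=4, scramble=True, seed=0)
pts = sampler.random_base2(m=14)           # 2^14 low-discrepancy points in [0, 1]^4
qmc_estimate = integrand(*pts.T).mean()    # unit hypercube, so the mean is the integral
print("quasi-Monte Carlo estimate:", qmc_estimate)
```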

  19. A computationally efficient double hybrid density functional based on the random phase approximation.

    PubMed

    Grimme, Stefan; Steinmetz, Marc

    2016-08-01

    We present a revised form of a double hybrid density functional (DHDF) dubbed PWRB95. It contains semi-local Perdew-Wang exchange and Becke95 correlation with a fixed amount of 50% non-local Fock exchange. New features are that the robust random phase approximation (RPA) is used to calculate the non-local correlation part instead of a second-order perturbative treatment as in standard DHDFs, and that the Fock exchange is evaluated non-self-consistently with KS orbitals at the GGA level, which leads to a significant reduction of the computational effort. To account for London dispersion effects we include the non-local VV10 dispersion functional. Only three empirical scaling parameters were adjusted. The PWRB95 results for extensive standard thermochemical benchmarks (GMTKN30 data base) are compared to those of well-known functionals from the classes of (meta-)GGAs, (meta-)hybrid functionals, and DHDFs, as well as to standard (direct) RPA. The new method is furthermore tested on prototype bond activations with (Ni/Pd)-based transition metal catalysts, and two difficult cases for DHDF, namely the isomerization reaction of the [Cu2(en)2O2](2+) complex and the singlet-triplet energy difference in highly unsaturated cyclacenes. The results show that PWRB95 is almost as accurate as standard DHDF for main-group thermochemistry but has a similar or better performance for non-covalent interactions, more difficult transition-metal-containing molecules, and other electronically problematic cases. Because of its relatively weak basis set dependence, PWRB95 can be applied even in combination with AO basis sets of only triple-zeta quality, which yields huge overall computational savings by a factor of about 40 compared to standard DHDF/'quadruple-zeta' calculations. Structure optimizations of small molecules with PWRB95 indicate an accurate description of bond distances superior to that provided by TPSS-D3, PBE0-D3, or other RPA-type methods. PMID:26695184

  20. Influence of non-invasive X-ray computed tomography (XRCT) on the microbial community structure and function in soil.

    PubMed

    Fischer, Doreen; Pagenkemper, Sebastian; Nellesen, Jens; Peth, Stephan; Horn, Rainer; Schloter, Michael

    2013-05-01

    In this study the influence of X-ray computed tomography (XRCT) on the microbial community structure and function in soils has been investigated. Our results clearly indicate that XRCT of soil samples has a strong impact on microbial communities and changes structure and function significantly due to the death of selected microbial groups as a result of the treatment. PMID:23499670

  1. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward. PMID:22979846
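
    The following sketch illustrates only the low-rank (SVD) compression idea mentioned above, applied to a generic discretized two-particle amplitude; the grid, the amplitude, and the truncation tolerance are hypothetical, and this is not the authors' multiresolution implementation.

```python
# Minimal sketch of SVD-based low-rank compression of a discretized pair amplitude
# psi(r1, r2). Purely illustrative; not the multiresolution code from the paper.
import numpy as np

n = 200
r = np.linspace(0.05, 6.0, n)
# Hypothetical smooth pair amplitude: a product form plus a weak correlation term.
psi = np.exp(-np.add.outer(r, r)) * (1.0 + 0.1 * np.exp(-np.abs(np.subtract.outer(r, r))))

U, s, Vt = np.linalg.svd(psi, full_matrices=False)
tol = 1e-8                                     # relative singular-value cutoff (assumed)
rank = int(np.sum(s > tol * s[0]))
psi_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

print("kept rank:", rank, "of", n)
print("relative error:", np.linalg.norm(psi - psi_lr) / np.linalg.norm(psi))
print("storage ratio (low rank vs full):", rank * 2 * n / n**2)
```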

  2. Feasibility of a Hybrid Brain-Computer Interface for Advanced Functional Electrical Therapy

    PubMed Central

    Savić, Andrej M.; Malešević, Nebojša M.; Popović, Mirjana B.

    2014-01-01

    We present a feasibility study of a novel hybrid brain-computer interface (BCI) system for advanced functional electrical therapy (FET) of grasp. FET procedure is improved with both automated stimulation pattern selection and stimulation triggering. The proposed hybrid BCI comprises the two BCI control signals: steady-state visual evoked potentials (SSVEP) and event-related desynchronization (ERD). The sequence of the two stages, SSVEP-BCI and ERD-BCI, runs in a closed-loop architecture. The first stage, SSVEP-BCI, acts as a selector of electrical stimulation pattern that corresponds to one of the three basic types of grasp: palmar, lateral, or precision. In the second stage, ERD-BCI operates as a brain switch which activates the stimulation pattern selected in the previous stage. The system was tested in 6 healthy subjects who were all able to control the device with accuracy in a range of 0.64–0.96. The results provided the reference data needed for the planned clinical study. This novel BCI may promote further restoration of the impaired motor function by closing the loop between the “will to move” and contingent temporally synchronized sensory feedback. PMID:24616644

  3. Use of time space Green's functions in the computation of transient eddy current fields

    SciTech Connect

    Davey, K.; Turner, L.

    1988-12-01

    The utility of integral equations to solve eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper attempts to examine the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is used to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiments (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used basically only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin.
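
    A minimal sketch of the two quadrature rules named in the abstract is given below: Gauss-Laguerre for a semi-infinite (temporal) integral with an exp(-t) weight, and Gauss-Legendre for a finite (spatial) integral. The integrands are simple placeholders with known closed forms, not the FELIX eddy-current Green's-function kernels.

```python
# Minimal sketch of Gauss-Laguerre and Gauss-Legendre quadrature with NumPy's
# built-in node/weight generators. Integrands are placeholders, not the paper's kernels.
import numpy as np

# Temporal part: integral over [0, inf) of exp(-t) * f(t) dt.
t_nodes, t_weights = np.polynomial.laguerre.laggauss(20)
f = lambda t: np.cos(0.5 * t)                      # hypothetical decaying-response factor
temporal = np.sum(t_weights * f(t_nodes))
print("Gauss-Laguerre:", temporal, "exact:", 1 / (1 + 0.5**2))

# Spatial part: integral of g(x) over [a, b], mapped from the reference interval [-1, 1].
a, b = 0.0, 2.0
x_nodes, x_weights = np.polynomial.legendre.leggauss(16)
x = 0.5 * (b - a) * x_nodes + 0.5 * (b + a)
g = lambda x: 1.0 / (1.0 + x**2)                   # hypothetical spatial kernel
spatial = 0.5 * (b - a) * np.sum(x_weights * g(x))
print("Gauss-Legendre:", spatial, "exact:", np.arctan(2.0))
```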

  4. The use of time space Green's functions in the computation of transient eddy current fields

    NASA Astrophysics Data System (ADS)

    Davey, Kent; Turner, Larry

    1988-12-01

    The utility of integral equations to solve eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper attempts to examine the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is used to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiments (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used basically only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin.

  5. Brain-computer interface using a simplified functional near-infrared spectroscopy system.

    PubMed

    Coyle, Shirley M; Ward, Tomás E; Markham, Charles M

    2007-09-01

    A brain-computer interface (BCI) is a device that allows a user to communicate with external devices through thought processes alone. A novel signal acquisition tool for BCIs is near-infrared spectroscopy (NIRS), an optical technique to measure localized cortical brain activity. The benefits of using this non-invasive modality are safety, portability and accessibility. A number of commercial multi-channel NIRS systems are available; however, we have developed a straightforward custom-built system to investigate the functionality of an fNIRS-BCI system. This work describes the construction of the device, the principles of operation and the implementation of an fNIRS-BCI application, 'Mindswitch', which harnesses motor imagery for control. Analysis is performed online and feedback of performance is presented to the user. Mindswitch presents a basic 'on/off' switching option to the user, where selection of either state takes 1 min. Initial results show that fNIRS can support simple BCI functionality and shows much potential. Although performance may be currently inferior to many EEG systems, there is much scope for development, particularly with more sophisticated signal processing and classification techniques. We hope that by presenting fNIRS as an accessible and affordable option, a new avenue of exploration will open within the BCI research community and stimulate further research in fNIRS-BCIs. PMID:17873424

  6. Computational identification of riboswitches based on RNA conserved functional sequences and conformations.

    PubMed

    Chang, Tzu-Hao; Huang, Hsien-Da; Wu, Li-Ching; Yeh, Chi-Ta; Liu, Baw-Jhiune; Horng, Jorng-Tzong

    2009-07-01

    Riboswitches are cis-acting genetic regulatory elements within a specific mRNA that can regulate both transcription and translation by interacting with their corresponding metabolites. Recently, an increasing number of riboswitches have been identified in different species and investigated for their roles in regulatory functions. Both the sequence contexts and structural conformations are important characteristics of riboswitches. None of the previously developed tools, such as covariance models (CMs), Riboswitch finder, and RibEx, provides a web server for efficiently searching homologous instances of known riboswitches or considers two crucial characteristics of each riboswitch, namely the structural conformations and sequence contexts of functional regions. Therefore, we developed a systematic method for identifying 12 kinds of riboswitches. The method is implemented and provided as a web server, RiboSW, to efficiently and conveniently identify riboswitches within messenger RNA sequences. The predictive accuracy of the proposed method is comparable with that of previous tools. The efficiency of the proposed method for identifying riboswitches was improved in order to achieve a reasonable computational time required for the prediction, which makes it possible to have an accurate and convenient web server for biologists to obtain the results of their analysis of a given mRNA sequence. RiboSW is now available on the web at http://RiboSW.mbc.nctu.edu.tw/. PMID:19460868

  7. Advancing Understanding and Design of Functional Materials Through Theoretical and Computational Chemical Physics

    SciTech Connect

    Fuentes-Cabrera, Miguel A; Huang, Jingsong; Jakowski, Jacek; Meunier, V.; Lopez-Benzanilla, Alejandro; Cruz Silva, Eduardo; Sumpter, Bobby G; Beste, Ariana

    2012-01-01

    Theoretical and computational chemical physics and materials science offers great opportunity toward helping solve some of the grand challenges in science and engineering, because structure and properties of molecules, solids, and liquids are direct reflections of the underlying quantum motion of their electrons. With the advent of semilocal and especially nonlocal descriptions of exchange and correlation effects, density functional theory (DFT) can now describe bonding in molecules and solids with an accuracy which, for many classes of systems, is sufficient to compare quantitatively to experiments. It is therefore becoming possible to develop a semiquantitative description of a large number of systems and processes. In this chapter, we briefly review DFT and its various extensions to include nonlocal terms that are important for long-range dispersion interactions that dominate many self-assembly processes, molecular surface adsorption processes, solution processes, and biological and polymeric materials. Applications of DFT toward problems relevant to energy systems, including energy storage materials, functional nanoelectronics/optoelectronics, and energy conversion, are highlighted.

  8. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain–computer interface (BCI) can be defined as any system that can track the person's intent which is embedded in his/her brain activity and, from it alone, translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable and MEG could provide more specific information that could be later exploited also through EEG signals. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to provide an overview of a new procedure we recently developed, named functional source separation (FSS). As it comes from the blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, remaining blind to the biophysical nature of the signal sources. FSS returns the single trial source activity, estimates the time course of a neuronal pool along different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, allowing the use of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This

  9. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    USGS Publications Warehouse

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously

  10. Indices of cognitive function measured in rugby union players using a computer-based test battery.

    PubMed

    MacDonald, Luke A; Minahan, Clare L

    2016-09-01

    The purpose of this study was to investigate the intra- and inter-day reliability of cognitive performance using a computer-based test battery in team-sport athletes. Eighteen elite male rugby union players (age: 19 ± 0.5 years) performed three experimental trials (T1, T2 and T3) of the test battery: T1 and T2 on the same day, and T3 on the following day, 24 h later. The test battery comprised four cognitive tests assessing the cognitive domains of executive function (Groton Maze Learning Task), psychomotor function (Detection Task), vigilance (Identification Task), and visual learning and memory (One Card Learning Task). The intraclass correlation coefficients (ICCs) for the Detection Task, the Identification Task and the One Card Learning Task performance variables ranged from 0.75 to 0.92 when comparing T1 to T2 to assess intraday reliability, and 0.76 to 0.83 when comparing T1 and T3 to assess inter-day reliability. The ICCs for the Groton Maze Learning Task intra- and inter-day reliability were 0.67 and 0.57, respectively. We concluded that the Detection Task, the Identification Task and the One Card Learning Task are reliable measures of psychomotor function, vigilance, visual learning and memory in rugby union players. The reliability of the Groton Maze Learning Task is questionable (mean coefficient of variation (CV) = 19.4%) and, therefore, results should be interpreted with caution. PMID:26756946
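
    The abstract does not state which ICC form was used; the sketch below computes two common two-way ANOVA forms, ICC(2,1) and ICC(3,1) (Shrout and Fleiss), on hypothetical test-retest scores, simply to make the reliability calculation concrete.

```python
# Minimal sketch of two-way ANOVA intraclass correlations for test-retest data.
# Data and ICC form are assumptions for illustration, not the paper's analysis.
import numpy as np

def icc_two_way(x):
    """x: (n_subjects, k_sessions) array of scores; returns (ICC(2,1), ICC(3,1))."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)     # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)     # between sessions
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    msr, msc, mse = ss_rows / (n - 1), ss_cols / (k - 1), ss_err / ((n - 1) * (k - 1))
    icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc31 = (msr - mse) / (msr + (k - 1) * mse)
    return icc21, icc31

rng = np.random.default_rng(0)
truth = rng.normal(50, 10, size=18)                          # 18 hypothetical athletes
scores = truth[:, None] + rng.normal(0, 3, size=(18, 2))     # two sessions (e.g. T1 vs T2)
print(icc_two_way(scores))
```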

  11. Progressive adaptation in regional parenchyma mechanics following extensive lung resection assessed by functional computed tomography

    PubMed Central

    Yilmaz, Cuneyt; Tustison, Nicholas J.; Dane, D. Merrill; Ravikumar, Priya; Takahashi, Masaya; Gee, James C.

    2011-01-01

    In adult canines following major lung resection, the remaining lobes expand asymmetrically, associated with alveolar tissue regrowth, remodeling, and progressive functional compensation over many months. To permit noninvasive longitudinal assessment of regional growth and function, we performed serial high-resolution computed tomography (HRCT) on six male dogs (∼9 mo old, 25.0 ± 4.5 kg, ±SD) at 15 and 30 cmH2O transpulmonary pressure (Ptp) before resection (PRE) and 3 and 15 mo postresection (POST3 and POST15, respectively) of 65–70% of lung units. At POST3, lobar air volume increased 83–148% and tissue (including microvascular blood) volume 120–234% above PRE values without further changes at POST15. Lobar-specific compliance (Cs) increased 52–137% from PRE to POST3 and 28–79% from POST3 to POST15. Inflation-related parenchyma strain and shear were estimated by detailed registration of corresponding anatomical features at each Ptp. Within each lobe, regional displacement was most pronounced at the caudal region, whereas strain was pronounced in the periphery. Regional three-dimensional strain magnitudes increased heterogeneously from PRE to POST3, with further medial-lateral increases from POST3 to POST15. Lobar principal strains (PSs) were unchanged or modestly elevated postresection; changes in lobar maximum PS correlated inversely with changes in lobar air and tissue volumes. Lobar shear distortion increased in coronal and transverse planes at POST3 without further changes thereafter. These results establish a novel use of functional HRCT to map heterogeneous regional deformation during compensatory lung growth and illustrate a stimulus-response feedback loop whereby postresection mechanical stress initiates differential lobar regrowth and sustained remodeling, which in turn, relieves parenchyma stress and strain, resulting in progressive increases in lobar Cs and a delayed increase in whole lung Cs. PMID:21799134

  12. COPD phenotypes on computed tomography and its correlation with selected lung function variables in severe patients

    PubMed Central

    da Silva, Silvia Maria Doria; Paschoal, Ilma Aparecida; De Capitani, Eduardo Mello; Moreira, Marcos Mello; Palhares, Luciana Campanatti; Pereira, Mônica Corso

    2016-01-01

    Background Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were end-tidal concentration of carbon dioxide (ETCO2), Slope of phase 2 (Slp2), and Slope of phase 3 (Slp3) of the capnogram, the curve that represents the total amount of CO2 eliminated by the lungs during each breath. Objective To investigate, in a group of patients with severe COPD, if the phenotypic analysis by CT could identify different subsets of patients, and if there was an association between CT findings and functional variables. Subjects and methods Sixty-five patients with COPD Gold III–IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomography findings were evaluated, and later, the patients were identified as having emphysema (EMP) or airway disease (AWD) phenotype. EMP and AWD groups were compared; tomography findings scores were evaluated versus spirometric, 6MWT, and VC variables. Results Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD groups, but EMP patients had significantly lower body mass index, worse obstruction, and a shorter walked distance on the 6MWT. Concerning VC, EMP patients had significantly lower ETCO2, Slp2 and Slp3. Increases in Slp3 characterize heterogeneous involvement of the distal air spaces, as in AWD. Conclusion Visual assessment and

  13. Computer simulation of respiratory impedance and flow transfer functions during high frequency oscillations.

    PubMed

    Peslin, R

    1989-01-01

    The usefulness of measuring respiratory flow in the airway and at the chest wall and of measuring respiratory input impedance (Z) to monitor high frequency ventilation was investigated by computer simulation using a monoalveolar 10-coefficient model. The latter included a central airway with its resistance (Rc) and inertance (lc), a resistive peripheral airway (Rp), a lumped bronchial compliance (Cb), alveolar gas compliance (Cgas), lung tissue with its resistance (RL) and compliance (CL), and chest wall resistance (RW), inertance (lw) and compliance (Cw). Gas flow in the peripheral airway (Vp), shunt flow through Cb (Vb), gas compression flow (Vgas) and rate of volume change of the lung (VL) and of the chest (VW) were computed and expressed as a function of gas flow in the central airway (Vc). For normal values of the coefficients, Vp/Vc was found to decrease moderately with increasing frequency and was still 0.75 at 20 Hz. Peripheral airway obstruction (Rp x 5) considerably decreased Vp/Vc, particularly at high frequency. It did not change the relationship between the two measurable flows, Vc and Vw, but increased the effective resistance at low frequency and shifted the reactance curve to the right. A reduced lung or chest wall compliance produced little change in Vp/Vc and Z except at very low frequencies; however, it decreased the phase lag between Vw and Vc. Finally, an increased airway wall compliance decreased Vp/Vc, but had little effect on Z and Vw/Vc. It is concluded that measuring respiratory impedance may help in detecting some, but not all of the conditions in which peripheral flow convection is decreased during high frequency oscillations. PMID:2611083
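
    As a simplified illustration of this kind of lumped-parameter simulation, the sketch below combines resistive, inertial, and compliant elements into a frequency-dependent input impedance. The element values and the series/shunt arrangement are hypothetical and do not reproduce the paper's 10-coefficient topology.

```python
# Minimal sketch: input impedance of a lumped-parameter respiratory model versus
# frequency. Element values and topology are illustrative assumptions only.
import numpy as np

def z_r(r, w):  return np.full_like(w, r, dtype=complex)    # resistance
def z_i(i, w):  return 1j * w * i                            # inertance
def z_c(c, w):  return 1.0 / (1j * w * c)                    # compliance

def parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

f = np.linspace(0.5, 20.0, 200)          # 0.5-20 Hz, the oscillation range of interest
w = 2 * np.pi * f

# Hypothetical coefficient values (cmH2O.s/L, cmH2O.s^2/L, L/cmH2O).
z_central = z_r(1.5, w) + z_i(0.01, w)
z_peripheral = z_r(0.5, w)
z_gas = z_c(0.001, w)
z_tissue_wall = z_r(1.0, w) + z_i(0.005, w) + z_c(0.1, w)

# One plausible series/shunt arrangement (not the paper's exact circuit):
z_in = z_central + parallel(z_gas, z_peripheral + z_tissue_wall)
print("effective resistance at 5 Hz:", np.interp(5.0, f, z_in.real))
print("reactance at 5 Hz:", np.interp(5.0, f, z_in.imag))
```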

  14. Density functional theory computation of Nuclear Magnetic Resonance parameters in light and heavy nuclei

    NASA Astrophysics Data System (ADS)

    Sutter, Kiplangat

    This thesis illustrates the utilization of density functional theory (DFT) in calculations of gas- and solution-phase Nuclear Magnetic Resonance (NMR) properties of light and heavy nuclei. Computing NMR properties is still a challenge, and there are many unknown factors that are still being explored, for instance the influence of hydrogen bonding, thermal motion, vibration, rotation, and solvent effects. In the theoretical studies of the 195Pt NMR chemical shift in cisplatin and its derivatives presented in Chapters 2 and 3 of this thesis, the importance of representing solvent molecules explicitly around the Pt center in cisplatin complexes was outlined. In the same complexes, the solvent effect contributed about half of the J(Pt-N) coupling constant, indicating the significance of considering the surrounding solvent molecules in elucidating the NMR measurements of cisplatin binding to DNA. In Chapter 4, we explore the Spin-Orbit (SO) effects on the 29Si and 13C chemical shifts induced by the surrounding metal and ligands. The unusual Ni, Pd, Pt trends in SO effects on the 29Si shifts in metallasilatrane complexes X-Si-(mu-mt)4-M-Y were interpreted based on electronic and relativistic effects rather than on structural differences between the complexes. In addition, we develop a non-linear model for predicting NMR SO effects in a series of organics bonded to heavy-nuclei halides. In Chapter 5, we extend the idea of "Chemist's orbitals" LMO analysis to the quantum chemical proton NMR computation of systems with internal resonance-assisted hydrogen bonds. Consequently, we explicitly link the NMR parameters of H-bonded systems to the intuitive picture of a chemical bond from quantum calculations. The analysis shows how NMR signatures characteristic of H-bonding can be explained by local bonding and electron delocalization concepts. One shortcoming of some of the anti-cancer agents like cisplatin is that they are toxic and researchers are looking for

  15. Response functions for computing absorbed dose to skeletal tissues from neutron irradiation.

    PubMed

    Bahadori, Amir A; Johnson, Perry; Jokisch, Derek W; Eckerman, Keith F; Bolch, Wesley E

    2011-11-01

    Spongiosa in the adult human skeleton consists of three tissues-active marrow (AM), inactive marrow (IM) and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM(50)), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40 year old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections, were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM(50) targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM(50) and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM(50) DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 Me

  16. Response functions for computing absorbed dose to skeletal tissues from neutron irradiation

    NASA Astrophysics Data System (ADS)

    Bahadori, Amir A.; Johnson, Perry; Jokisch, Derek W.; Eckerman, Keith F.; Bolch, Wesley E.

    2011-11-01

    Spongiosa in the adult human skeleton consists of three tissues—active marrow (AM), inactive marrow (IM) and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM50), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40 year old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections, were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM50 targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM50 and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM50 DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 Me

  17. How to teach mono-unary algebras and functional graphs with the use of computers in secondary schools

    NASA Astrophysics Data System (ADS)

    Binterová, Helena; Fuchs, Eduard

    2014-07-01

    In this paper, alternative descriptions of functions are demonstrated with the use of a computer. If we understand functions as mono-unary algebraic functions or functional graphs, it is possible, even at the school level, to suitably present many of their characteristics. First, we describe cyclic graphs of constant and linear functions, which are a part of the upper-secondary level educational curriculum. Students are usually surprised by the unexpected characteristics of such simple functions, which cannot be revealed using traditional Cartesian graphing. The next part of the paper deals with the characteristics of functional graphs of quadratic functions, which play an important role in school mathematics and in applications, for instance, in the description of non-linear processes. We show that their description is much more complicated and that, in contrast to functional graphs of linear functions, it is necessary to use computers. Students can find space for their own individual exploration, revealing interesting characteristics of quadratic functions that give them a new view of this part of school mathematics.
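
    A small sketch of the underlying object is given below: the functional graph of a map on a finite set and its cycle structure, using a linear function modulo n as a stand-in for the classroom examples (which are not reproduced here).

```python
# Minimal sketch: functional graph of a map on {0, ..., n-1} and its cycles.
# The linear map used here is a hypothetical example, not one from the article.
def functional_graph(f, n):
    """Edges x -> f(x) for x in {0, ..., n-1}."""
    return {x: f(x) % n for x in range(n)}

def cycles(graph):
    """All cycles of the functional graph (each connected component has exactly one)."""
    seen, result = set(), []
    for start in graph:
        path, x = [], start
        while x not in seen:
            seen.add(x)
            path.append(x)
            x = graph[x]
        if x in path:                        # this walk closed a new cycle
            result.append(path[path.index(x):])
    return result

n = 12
g = functional_graph(lambda x: 5 * x + 3, n)   # hypothetical linear map on Z_12
print(g)
print("cycles:", cycles(g))
```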

  18. Functional analysis of metabolic channeling and regulation in lignin biosynthesis: a computational approach.

    PubMed

    Lee, Yun; Escamilla-Treviño, Luis; Dixon, Richard A; Voit, Eberhard O

    2012-01-01

    Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid. PMID:23144605

  19. Functional Analysis of Metabolic Channeling and Regulation in Lignin Biosynthesis: A Computational Approach

    PubMed Central

    Lee, Yun; Escamilla-Treviño, Luis; Dixon, Richard A.; Voit, Eberhard O.

    2012-01-01

    Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid. PMID:23144605

  20. Insights into the function of ion channels by computational electrophysiology simulations.

    PubMed

    Kutzner, Carsten; Köpfer, David A; Machtens, Jan-Philipp; de Groot, Bert L; Song, Chen; Zachariae, Ulrich

    2016-07-01

    Ion channels are of universal importance for all cell types and play key roles in cellular physiology and pathology. Increased insight into their functional mechanisms is crucial to enable drug design on this important class of membrane proteins, and to enhance our understanding of some of the fundamental features of cells. This review presents the concepts behind the recently developed simulation protocol Computational Electrophysiology (CompEL), which facilitates the atomistic simulation of ion channels in action. In addition, the review provides guidelines for its application in conjunction with the molecular dynamics software package GROMACS. We first lay out the rationale for designing CompEL as a method that models the driving force for ion permeation through channels the way it is established in cells, i.e., by electrochemical ion gradients across the membrane. This is followed by an outline of its implementation and a description of key settings and parameters helpful to users wishing to set up and conduct such simulations. In recent years, key mechanistic and biophysical insights have been obtained by employing the CompEL protocol to address a wide range of questions on ion channels and permeation. We summarize these recent findings on membrane proteins, which span a spectrum from highly ion-selective, narrow channels to wide diffusion pores. Finally we discuss the future potential of CompEL in light of its limitations and strengths. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26874204

  1. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    DOE PAGESBeta

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; et al

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
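
    A toy sketch of the core ideas follows: tasks whose utility decays with completion time, greedy mapping onto heterogeneous machines, and dropping tasks whose attainable utility falls below a threshold. The decay form, threshold, and machine speeds are invented for illustration and are not the ORNL heuristics or workloads.

```python
# Toy sketch: time-decaying task utility, greedy heterogeneous mapping, and dropping
# low utility-earning tasks. All parameter choices are illustrative assumptions.
import heapq
import math

class Task:
    def __init__(self, name, work, max_utility, decay):
        self.name, self.work = name, work
        self.max_utility, self.decay = max_utility, decay

    def utility(self, completion_time):
        # Hypothetical exponential decay of earned utility with completion time.
        return self.max_utility * math.exp(-self.decay * completion_time)

def schedule(tasks, machine_speeds, drop_threshold=0.5):
    ready = [(0.0, m) for m in range(len(machine_speeds))]    # (free-at time, machine id)
    heapq.heapify(ready)
    earned, dropped = 0.0, []
    for task in sorted(tasks, key=lambda t: -t.max_utility):  # most valuable first
        free_at, m = heapq.heappop(ready)
        finish = free_at + task.work / machine_speeds[m]
        u = task.utility(finish)
        if u < drop_threshold:                                # drop low utility earners
            dropped.append(task.name)
            heapq.heappush(ready, (free_at, m))               # machine stays free
        else:
            earned += u
            heapq.heappush(ready, (finish, m))
    return earned, dropped

tasks = [Task(f"t{i}", work=10 + 5 * i, max_utility=10.0, decay=0.05 * (i + 1))
         for i in range(8)]
print(schedule(tasks, machine_speeds=[1.0, 2.0]))             # oversubscribed 2-machine toy system
```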

  2. Computed versus measured ion velocity distribution functions in a Hall effect thruster

    SciTech Connect

    Garrigues, L.; Mazouffre, S.; Bourgeois, G.

    2012-06-01

    We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser-induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady-state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion mean velocity. Yet the model fails to reproduce the time evolution of the ion velocity. This fact reveals complex underlying physics that necessitates accounting for the electron dynamics over a short time scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.

  3. Effects of a computer-based intervention program on the communicative functions of children with autism.

    PubMed

    Hetzroni, Orit E; Tannous, Juman

    2004-04-01

    This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant speech, and communicative initiations. Multiple-baseline design across settings was used to examine the effects of the exposure of five children with autism to activities in a structured and controlled simulated environment on the communication manifested in their natural environment. Results indicated that after exposure to the simulations, all children produced fewer sentences with delayed and irrelevant speech. Most of the children engaged in fewer sentences involving immediate echolalia and increased the number of communication intentions and the amount of relevant speech they produced. Results indicated that after practicing in a controlled and structured setting that provided the children with opportunities to interact in play, food, and hygiene activities, the children were able to transfer their knowledge to the natural classroom environment. Implications and future research directions are discussed. PMID:15162930

  4. Synchrotron-based dynamic computed tomography of tissue motion for regional lung function measurement

    PubMed Central

    Dubsky, Stephen; Hooper, Stuart B.; Siu, Karen K. W.; Fouras, Andreas

    2012-01-01

    During breathing, lung inflation is a dynamic process involving a balance of mechanical factors, including trans-pulmonary pressure gradients, tissue compliance and airway resistance. Current techniques lack the capacity for dynamic measurement of ventilation in vivo at sufficient spatial and temporal resolution to allow the spatio-temporal patterns of ventilation to be precisely defined. As a result, little is known of the regional dynamics of lung inflation, in either health or disease. Using fast synchrotron-based imaging (up to 60 frames s−1), we have combined dynamic computed tomography (CT) with cross-correlation velocimetry to measure regional time constants and expansion within the mammalian lung in vivo. Additionally, our new technique provides estimation of the airflow distribution throughout the bronchial tree during the ventilation cycle. Measurements of lung expansion and airflow in mice and rabbit pups are shown to agree with independent measures. The ability to measure lung function at a regional level will provide invaluable information for studies into normal and pathological lung dynamics, and may provide new pathways for diagnosis of regional lung diseases. Although proof-of-concept data were acquired on a synchrotron, the methodology developed potentially lends itself to clinical CT scanning and therefore offers translational research opportunities. PMID:22491972

  5. Utility functions and resource management in an oversubscribed heterogeneous computing environment

    SciTech Connect

    Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; Rambharos, R.

    2014-09-26

    We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.

  6. A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function

    PubMed Central

    Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.

    2015-01-01

    Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120

  7. Practical Steps toward Computational Unification: Helpful Perspectives for New Systems, Adding Functionality to Existing Ones

    NASA Astrophysics Data System (ADS)

    Troy, R. M.

    2005-12-01

    and functions may be integrated into a system efficiently, with minimal effort, and with an eye toward an eventual Computational Unification of the Earth Sciences. Fundamental to such systems are meta-data, which describe not only the content of data but also how intricate relationships are represented and used to good advantage. Retrieval techniques will be discussed, including trade-offs in using externally managed meta-data versus embedded meta-data, how the two may be integrated, and how "simplifying assumptions" may or may not actually be helpful. The perspectives presented in this talk or poster session are based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, which sought to unify NASA's Mission To Planet Earth's EOS-DIS, and on-going experience developed by Science Tools corporation, of which the author is a principal. NOTE: These ideas are most easily shared in the form of a talk, and we suspect that this session will generate a lot of interest. We would therefore prefer to have this session accepted as a talk as opposed to a poster session.

  8. Exploring the cognitive and motor functions of the basal ganglia: an integrative review of computational cognitive neuroscience models

    PubMed Central

    Helie, Sebastien; Chakravarthy, Srinivasa; Moustafa, Ahmed A.

    2013-01-01

    Many computational models of the basal ganglia (BG) have been proposed over the past twenty-five years. While computational neuroscience models have focused on closely matching the neurobiology of the BG, computational cognitive neuroscience (CCN) models have focused on how the BG can be used to implement cognitive and motor functions. This review article focuses on CCN models of the BG and how they use the neuroanatomy of the BG to account for cognitive and motor functions such as categorization, instrumental conditioning, probabilistic learning, working memory, sequence learning, automaticity, reaching, handwriting, and eye saccades. A total of 19 BG models accounting for one or more of these functions are reviewed and compared. The review concludes with a discussion of the limitations of existing CCN models of the BG and prescriptions for future modeling, including the need for computational models of the BG that can simultaneously account for cognitive and motor functions, and the need for a more complete specification of the role of the BG in behavioral functions. PMID:24367325

  9. Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument

    PubMed Central

    Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.

    2012-01-01

    Background. Having psychometrically strong disability measures that minimize response burden is important in the assessment of older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veteran's Short Form-36 and assessed test–retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multi-dimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test–retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). Strength and pattern of correlations with Veteran's Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve

  10. GammaCHI: A package for the inversion and computation of the gamma and chi-square cumulative distribution functions (central and noncentral)

    NASA Astrophysics Data System (ADS)

    Gil, Amparo; Segura, Javier; Temme, Nico M.

    2015-06-01

    A Fortran 90 module GammaCHI for computing and inverting the gamma and chi-square cumulative distribution functions (central and noncentral) is presented. The main novelty of this package is the reliable and accurate inversion routines for the noncentral cumulative distribution functions. Additionally, the package also provides routines for computing the gamma function, the error function and other functions related to the gamma function. The module includes the routines cdfgamC, invcdfgamC, cdfgamNC, invcdfgamNC, errorfunction, inverfc, gamma, loggam, gamstar and quotgamm for the computation of the central gamma distribution function (and its complementary function), the inversion of the central gamma distribution function, the computation of the noncentral gamma distribution function (and its complementary function), the inversion of the noncentral gamma distribution function, the computation of the error function and its complementary function, the inversion of the complementary error function, the computation of: the gamma function, the logarithm of the gamma function, the regulated gamma function and the ratio of two gamma functions, respectively.
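
    The Fortran routines themselves are not reproduced here; as a point of comparison, the sketch below computes and inverts the same kinds of quantities (central and noncentral chi-square and central gamma distribution functions) with SciPy's distributions, using hypothetical parameter values.

```python
# Not the GammaCHI Fortran routines; an analogous computation with SciPy, shown
# only to make the CDF/inverse-CDF relationships concrete. Parameters are assumed.
from scipy.stats import chi2, gamma, ncx2

df, nc = 5.0, 2.5              # degrees of freedom and noncentrality (hypothetical)
x = 7.0

p_central = chi2.cdf(x, df)              # central chi-square CDF (a special gamma)
p_noncentral = ncx2.cdf(x, df, nc)       # noncentral chi-square CDF
print(p_central, p_noncentral)

# Inversion: recover the quantile from the probability.
print(chi2.ppf(p_central, df))           # -> 7.0
print(ncx2.ppf(p_noncentral, df, nc))    # -> 7.0

# Central gamma distribution function and its inverse (shape a, unit scale).
a = 3.2
p = gamma.cdf(x, a)
print(gamma.ppf(p, a))                   # -> 7.0
```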

  11. Evaluation of Coupled Perturbed and Density Functional Methods of Computing the Parity-Violating Energy Difference between Enantiomers

    NASA Astrophysics Data System (ADS)

    MacDermott, A. J.; Hyde, G. O.; Cohen, A. J.

    2009-03-01

    We present new coupled-perturbed Hartree-Fock (CPHF) and density functional theory (DFT) computations of the parity-violating energy difference (PVED) between enantiomers for H2O2 and H2S2. Our DFT PVED computations are the first for H2S2 and the first with the new HCTH and OLYP functionals. Like other “second generation” PVED computations, our results are an order of magnitude larger than the original “first generation” uncoupled-perturbed Hartree-Fock computations of Mason and Tranter. We offer an explanation for the dramatically larger size in terms of cancellation of contributions of opposing signs, which also explains the basis set sensitivity of the PVED, and its conformational hypersensitivity (addressed in the following paper). This paper also serves as a review of the different types of “second generation” PVED computations: we set our work in context, comparing our results with those of four other groups, and noting the good agreement between results obtained by very different methods. DFT PVEDs tend to be somewhat inflated compared to the CPHF values, but this is not a problem when only sign and order of magnitude are required. Our results with the new OLYP functional are less inflated than those with other functionals, and OLYP is also more efficient computationally. We therefore conclude that DFT computation offers a promising approach for low-cost extension to larger biosystems, especially polymers. The following two papers extend to terrestrial and extra-terrestrial amino acids respectively, and later work will extend to polymers.

  12. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  13. Sensory processing during viewing of cinematographic material: computational modeling and functional neuroimaging.

    PubMed

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-02-15

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified "sensory" networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom-up signals on brain activity

  14. Supervised learning with decision tree-based methods in computational and systems biology.

    PubMed

    Geurts, Pierre; Irrthum, Alexandre; Wehenkel, Louis

    2009-12-01

    At the intersection between artificial intelligence and statistics, supervised learning allows algorithms to automatically build predictive models from just observations of a system. During the last twenty years, supervised learning has been a tool of choice to analyze the ever-increasing and increasingly complex data generated in the context of molecular biology, with successful applications in genome annotation, function prediction, or biomarker discovery. Among supervised learning methods, decision tree-based methods stand out as non-parametric methods that have the unique feature of combining interpretability, efficiency, and, when used in ensembles of trees, excellent accuracy. The goal of this paper is to provide an accessible and comprehensive introduction to this class of methods. The first part of the review is devoted to an intuitive but complete description of decision tree-based methods and a discussion of their strengths and limitations with respect to other supervised learning methods. The second part of the review provides a survey of their applications in the context of computational and systems biology. PMID:20023720
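
    As a concrete, hedged illustration of the tree-ensemble approach discussed above (not code from the paper), the short scikit-learn sketch below fits a random forest to synthetic "omics-like" data; the data, parameters, and feature indices are invented for the example.

    ```python
    # Illustrative sketch (not from the paper): a tree ensemble on synthetic "omics-like" data,
    # using scikit-learn. Feature importances give the kind of interpretability discussed above.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))            # 200 samples, 50 "genes"
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # outcome driven by two informative features

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

    model.fit(X, y)
    top = np.argsort(model.feature_importances_)[::-1][:5]
    print("most important features:", top)
    ```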

  15. Computing alignment and orientation of non-linear molecules at room temperatures using random phase wave functions

    NASA Astrophysics Data System (ADS)

    Kallush, Shimshon; Fleischer, Sharly; Ultrafast terahertz molecular dynamics Collaboration

    2015-05-01

    Quantum simulation of large open systems is a hard task that demands huge computation and memory costs. The rotational dynamics of non-linear molecules at high temperature under external fields is such an example. At room temperature, the initial density matrix populates ~10^4 rotational states, and the whole coupled Hilbert space can reach ~10^6 states. Neither simulation with the direct density matrix nor with the full basis set of populated wavefunctions is feasible. We employ the random phase wave function method to represent the initial state and compute several time-dependent and time-independent observables, such as the orientation and the alignment of the molecules. The error of the method was found to scale as N^(-1/2), where N is the number of wave function realizations employed. Scaling vs. the temperature was computed for weak and strong fields. As expected, the convergence of the method improves rapidly with the temperature and the field intensity.
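
    A minimal sketch of the random-phase idea, applied to a small synthetic diagonal system rather than the molecular problem of the abstract: a thermal expectation value Tr(rho*A) is estimated by averaging over random-phase states, and the error shrinks roughly as N^(-1/2). All quantities below are toy assumptions.

    ```python
    # Toy sketch of the random-phase wave function idea (not the molecular system of the paper):
    # a thermal average Tr(rho*A) is estimated from random-phase states
    # |psi> = sum_n exp(i*theta_n) * sqrt(p_n) |n>, and the error shrinks roughly as N^(-1/2).
    import numpy as np

    rng = np.random.default_rng(1)
    d = 200                                          # toy Hilbert-space dimension
    E = np.sort(rng.uniform(0, 10, d))               # toy energy levels
    p = np.exp(-E); p /= p.sum()                     # Boltzmann weights (k_B*T = 1)
    A = rng.normal(size=(d, d)); A = (A + A.T) / 2   # a Hermitian "observable"
    exact = np.sum(p * np.diag(A))                   # Tr(rho * A) for diagonal rho

    def rpwf_estimate(N):
        est = 0.0
        for _ in range(N):
            theta = rng.uniform(0, 2 * np.pi, d)
            psi = np.exp(1j * theta) * np.sqrt(p)
            est += np.real(psi.conj() @ A @ psi)
        return est / N

    for N in (10, 100, 1000):
        print(N, abs(rpwf_estimate(N) - exact))      # error decreases roughly as N**-0.5
    ```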

  16. INTERP3: A computer routine for linear interpolation of trivariate functions defined by nondistinct unequally spaced variables

    NASA Technical Reports Server (NTRS)

    Hill, D. C.; Morris, S. J., Jr.

    1979-01-01

    A report on the computer routine INTERP3 is presented. The routine is designed to linearly interpolate a variable which is a function of three independent variables. The variables within the parameter arrays do not have to be distinct, or equally spaced, and the array variables can be in increasing or decreasing order.
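
    INTERP3 itself is a Fortran routine; as a loose Python analogue (an assumption of this sketch, not the original code), SciPy's RegularGridInterpolator performs the same kind of trivariate linear interpolation on unequally spaced grids, although unlike INTERP3 it requires strictly increasing coordinate arrays.

    ```python
    # Analogous trivariate linear interpolation in Python (not the INTERP3 routine itself).
    # Note: SciPy's RegularGridInterpolator requires strictly increasing coordinate arrays,
    # whereas INTERP3 also tolerates nondistinct and decreasing ones.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    x = np.array([0.0, 0.5, 2.0, 3.0])        # unequally spaced coordinates
    y = np.array([0.0, 1.0, 4.0])
    z = np.array([0.0, 0.1, 0.2, 1.0])
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    values = np.sin(X) + Y**2 + Z             # tabulated function values

    f = RegularGridInterpolator((x, y, z), values, method="linear")
    print(f([[1.2, 2.5, 0.15]]))              # interpolate at one point
    ```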

  17. Investigating the Potential of Computer Environments for the Teaching and Learning of Functions: A Double Analysis from Two Research Traditions

    ERIC Educational Resources Information Center

    Lagrange, Jean-Baptiste; Psycharis, Giorgos

    2014-01-01

    The general goal of this paper is to explore the potential of computer environments for the teaching and learning of functions. To address this, different theoretical frameworks and corresponding research traditions are available. In this study, we aim to network different frameworks by following a "double analysis" method to analyse two…

  18. Utilization of high resolution computed tomography to visualize the three dimensional structure and function of plant vasculature

    Technology Transfer Automated Retrieval System (TEKTRAN)

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...

  19. Density Functional Computations and Mass Spectrometric Measurements. Can this Coupling Enlarge the Knowledge of Gas-Phase Chemistry?

    NASA Astrophysics Data System (ADS)

    Marino, T.; Russo, N.; Sicilia, E.; Toscano, M.; Mineva, T.

    A series of gas-phase properties of the systems has been investigated by using different exchange-correlation potentials and basis sets of increasing size in the framework of Density Functional theory with the aim to determine a strategy able to give reliable results with reasonable computational efforts.

  20. Content Range and Precision of a Computer Adaptive Test of Upper Extremity Function for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.

    2011-01-01

    This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…

  1. Implementation of the AES as a Hash Function for Confirming the Identity of Software on a Computer System

    SciTech Connect

    Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.; Mileson, Nicholas D.

    2003-01-20

    This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm - 1 (SHA-1), the Message Digest - 5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former that could be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
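
    The paper does not spell out its exact construction, so the sketch below is only a generic illustration of how a block cipher such as the AES can be used as a hash: a Matyas-Meyer-Oseas-style chaining over 16-byte blocks, written in Python and assuming a recent version of the pyca/cryptography package. The file path and zero-padding choice are placeholders, not details from the PNNL tool.

    ```python
    # Illustrative only: one standard way to build a hash from a block cipher
    # (a Matyas-Meyer-Oseas-style construction over 16-byte blocks using AES-128).
    # This is NOT the PNNL tool's exact construction; the pyca/cryptography package is assumed.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def aes_block_hash(data: bytes) -> bytes:
        # pad with zero bytes to a multiple of the 16-byte AES block size (placeholder choice)
        if len(data) % 16:
            data += b"\x00" * (16 - len(data) % 16)
        h = b"\x00" * 16                       # initial chaining value
        for i in range(0, len(data), 16):
            block = data[i:i + 16]
            enc = Cipher(algorithms.AES(h), modes.ECB()).encryptor()
            c = enc.update(block) + enc.finalize()
            # chaining: H_i = E_{H_{i-1}}(m_i) XOR m_i
            h = bytes(a ^ b for a, b in zip(c, block))
        return h

    with open("/bin/ls", "rb") as f:           # hypothetical target binary
        print(aes_block_hash(f.read()).hex())  # short digest confirming software identity
    ```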

  2. On One Unusual Method of Computation of Limits of Rational Functions in the Program Mathematica[R

    ERIC Educational Resources Information Center

    Hora, Jaroslav; Pech, Pavel

    2005-01-01

    Computing limits of functions is a traditional part of mathematical analysis which is very difficult for students. Now an algorithm for the elimination of quantifiers in the field of real numbers is implemented in the program Mathematica. This offers a non-traditional view on this classical theme. (Contains 1 table.)
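
    For comparison, the same kind of limit can be computed symbolically outside Mathematica; the short SymPy sketch below (an illustration, not the article's quantifier-elimination method) evaluates the limit of a rational function at a removable singularity.

    ```python
    # Analogous computation in Python with SymPy (the article itself uses Mathematica).
    import sympy as sp

    x = sp.symbols("x")
    expr = (x**3 - 1) / (x**2 - 1)        # a rational function with a removable singularity
    print(sp.limit(expr, x, 1))           # -> 3/2
    ```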

  3. Using High Resolution Computed Tomography to Visualize the Three Dimensional Structure and Function of Plant Vasculature

    PubMed Central

    McElrone, Andrew J.; Choat, Brendan; Parkinson, Dilworth Y.; MacDowell, Alastair A.; Brodersen, Craig R.

    2013-01-01

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans for coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants. On occasion some shrinkage of hydrated green plant tissues will cause

  4. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
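
    As a hedged illustration of the per-pixel kinetic parameter estimation described above (not the thesis code, and using a simplified one-tissue model with invented numbers), the Python sketch below fits two rate parameters to a synthetic time-activity curve with SciPy.

    ```python
    # Hedged illustration (not the thesis code): fitting a one-tissue compartment model
    #   C_t(t) = K1 * integral( Ca(tau) * exp(-k2*(t - tau)) dtau )
    # to a noisy time-activity curve, the kind of per-pixel estimation described above.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 10, 60)                       # minutes
    Ca = t * np.exp(-t)                              # assumed arterial input function

    def model(t, K1, k2):
        out = np.zeros_like(t)
        for i, ti in enumerate(t):                   # discrete convolution with the input
            tau = t[: i + 1]
            out[i] = K1 * np.trapz(Ca[: i + 1] * np.exp(-k2 * (ti - tau)), tau)
        return out

    true = (0.8, 0.4)
    rng = np.random.default_rng(0)
    Ct = model(t, *true) + rng.normal(0, 0.002, t.size)  # noisy "tissue" curve

    (K1, k2), _ = curve_fit(model, t, Ct, p0=(0.5, 0.5))
    print(K1, k2)                                    # estimates close to (0.8, 0.4)
    ```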

  5. Using high resolution computed tomography to visualize the three dimensional structure and function of plant vasculature.

    PubMed

    McElrone, Andrew J; Choat, Brendan; Parkinson, Dilworth Y; MacDowell, Alastair A; Brodersen, Craig R

    2013-01-01

    High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D) (e.g. Brodersen et al. 2010; 2011; 2012a,b). HRCT imaging is based on the same principles as medical CT systems, but a high intensity synchrotron x-ray source results in higher spatial resolution and decreased image acquisition time. Here, we demonstrate in detail how synchrotron-based HRCT (performed at the Advanced Light Source-LBNL Berkeley, CA, USA) in combination with Avizo software (VSG Inc., Burlington, MA, USA) is being used to explore plant xylem in excised tissue and living plants. This new imaging tool allows users to move beyond traditional static, 2D light or electron micrographs and study samples using virtual serial sections in any plane. An infinite number of slices in any orientation can be made on the same sample, a feature that is physically impossible using traditional microscopy methods. Results demonstrate that HRCT can be applied to both herbaceous and woody plant species, and a range of plant organs (i.e. leaves, petioles, stems, trunks, roots). Figures presented here help demonstrate both a range of representative plant vascular anatomy and the type of detail extracted from HRCT datasets, including scans for coast redwood (Sequoia sempervirens), walnut (Juglans spp.), oak (Quercus spp.), and maple (Acer spp.) tree saplings to sunflowers (Helianthus annuus), grapevines (Vitis spp.), and ferns (Pteridium aquilinum and Woodwardia fimbriata). Excised and dried samples from woody species are easiest to scan and typically yield the best images. However, recent improvements (i.e. more rapid scans and sample stabilization) have made it possible to use this visualization technique on green tissues (e.g. petioles) and in living plants. On occasion some shrinkage of hydrated green plant tissues will cause

  6. High-throughput optogenetic functional magnetic resonance imaging with parallel computations

    PubMed Central

    Fang, Zhongnan; Lee, Jin Hyung

    2013-01-01

    Optogenetic functional magnetic resonance imaging (ofMRI) technology enables cell-type specific, temporally precise neuronal control and accurate, in vivo readout of resulting activity across the whole brain. With the ability to precisely control excitation and inhibition parameters, and to accurately record the resulting activity, there is an increased need for a high-throughput method to bring the ofMRI studies to their full potential. In this paper, an advanced system that can allow real-time fMRI with interactive control and analysis in a fraction of the MRI acquisition repetition time (TR) is proposed. With such high processing speed, sufficient time will be available for integration of future developments that can further enhance ofMRI data quality or better streamline the study. We designed and implemented a highly optimized, massively parallel system using graphics processing units (GPUs), which achieves reconstruction, motion correction, and analysis of 3D volume data in approximately 12.80 ms. As a result, with a 750 ms TR and 4 interleaf fMRI acquisition, we can now conduct sliding window reconstruction, motion correction, analysis and display in approximately 1.7% of the TR. Therefore, a significant amount of time can now be allocated to integrating advanced but computationally intensive methods that can enable higher image quality and better analysis results all within a TR. Utilizing the proposed high-throughput imaging platform with sliding window reconstruction, we were also able to observe the much-debated initial dips in our ofMRI data. Combined with methods to further improve SNR, the proposed system will enable efficient real-time, interactive, high-throughput ofMRI studies. PMID:23747482

  7. Computational and functional analyses of a small-molecule binding site in ROMK.

    PubMed

    Swale, Daniel R; Sheehan, Jonathan H; Banerjee, Sreedatta; Husni, Afeef S; Nguyen, Thuy T; Meiler, Jens; Denton, Jerod S

    2015-03-10

    The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val(168) or Asn(171) in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321

  8. Computational and Functional Analyses of a Small-Molecule Binding Site in ROMK

    PubMed Central

    Swale, Daniel R.; Sheehan, Jonathan H.; Banerjee, Sreedatta; Husni, Afeef S.; Nguyen, Thuy T.; Meiler, Jens; Denton, Jerod S.

    2015-01-01

    The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val168 or Asn171 in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321

  9. The Reaction Coordinate of a Functional Model of Tyrosinase: Spectroscopic and Computational Characterization

    PubMed Central

    Op’t Holt, Bryan T.; Vance, Michael A.; Mirica, Liviu M.; Stack, T. Daniel P.; Solomon, Edward I.

    2009-01-01

    The μ-η2:η2-peroxodicopper(II) complex synthesized by reacting the Cu(I) complex of the bis-diamine ligand N,N′-di-tert-butyl-ethylenediamine (DBED) with O2 is a functional and spectroscopic model of the coupled binuclear copper protein tyrosinase. This complex reacts with 2,4-di-tert-butylphenolate at low temperature to produce a mixture of the catechol and quinone products, which proceeds through three intermediates (A – C) that have been characterized. A, stabilized at 153K, is characterized as a phenolate-bonded bis-μ-oxo dicopper(III) species, which proceeds at 193K to B, presumably a catecholate-bridged coupled bis-copper(II) species via an electrophilic aromatic substitution mechanism wherein aromatic ring distortion is the rate-limiting step. Isotopic labeling shows that the oxygen inserted into the aromatic substrate during hydroxylation derives from dioxygen, and a late-stage ortho-H+ transfer to an exogenous base is associated with C-O bond formation. Addition of a proton to B produces C, determined from resonance Raman spectra to be a Cu(II)-semiquinone complex. The formation of C (the oxidation of catecholate and reduction to Cu(I)) is governed by the protonation state of the distal bridging oxygen ligand of B. Parallels and contrasts are drawn between the spectroscopically and computationally supported mechanism of the DBED system, presented here, and the experimentally-derived mechanism of the coupled binuclear copper protein tyrosinase. PMID:19368383

  10. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    SciTech Connect

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-21

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate ‘‘sub-ecosystem-scale’’ parameterizations.

  11. Complex functionality with minimal computation. Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    DOE PAGESBeta

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; et al

    2015-12-21

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate ‘‘sub-ecosystem-scale’’ parameterizations.

  12. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  13. The application of computer assisted technologies (CAT) in the rehabilitation of cognitive functions in psychiatric disorders of childhood and adolescence.

    PubMed

    Srebnicki, Tomasz; Bryńska, Anita

    2016-01-01

    The first applications of computer-assisted technologies (CAT) in the rehabilitation of cognitive deficits, including those in child and adolescent psychiatric disorders, date back to the 1980s. Recent developments in computer technologies, wide access to the Internet, and the vast expansion of electronic devices have resulted in a dynamic increase in therapeutic software as well as supporting devices. The aim of computer-assisted technologies is the improvement of comfort and quality of life as well as the rehabilitation of impaired functions. The goal of the article is to present the most common computer-assisted technologies used in the therapy of children and adolescents with cognitive deficits, together with a review of the literature on their effectiveness, including the challenges and limitations regarding the implementation of such interventions. PMID:27556116

  14. Accelerating Scientific Discovery Through Computation and Visualization III. Tight-Binding Wave Functions for Quantum Dots

    PubMed Central

    Sims, James S.; George, William L.; Griffin, Terence J.; Hagedorn, John G.; Hung, Howard K.; Kelso, John T.; Olano, Marc; Peskin, Adele P.; Satterfield, Steven G.; Terrill, Judith Devaney; Bryant, Garnett W.; Diaz, Jose G.

    2008-01-01

    This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology. PMID:27096116

  15. On the computation of structure functions and mass spectra in a relativistic Hamiltonian formalism: A lattice point of view

    NASA Astrophysics Data System (ADS)

    Scheu, Norbert

    1998-11-01

    A non-perturbative computation of hadronic structure functions for deep inelastic lepton hadron scattering has not been achieved yet. In this thesis we investigate the viability of the Hamiltonian approach in order to compute hadronic structure functions. In the literature, the so-called front form (FF) approach is favoured over the instant form (IF), the conventional Hamiltonian approach, due to claims (a) that structure functions are related to light-like correlation functions and (b) that the front form is much simpler for numerical computations. We dispel both claims using general arguments as well as practical computations (in the case of the scalar model and two-dimensional QED), demonstrating (a) that structure functions are related to space-like correlations and (b) that the IF is better suited for practical computations if appropriate approximations are introduced. Moreover, we show that the FF is unphysical in general, for the following reasons: (1) the FF constitutes an incomplete quantisation of field theories; (2) the FF 'predicts' an infinite speed of light in one space dimension, a complete breakdown of microcausality and the ubiquity of time-travel. Additionally we demonstrate that the FF cannot be approached by so-called ɛ co-ordinates. We demonstrate that these co-ordinates are but the instant form in disguise. The FF cannot be legitimated to be an effective theory. Finally, we demonstrate that the so-called infinite momentum frame is neither physical nor equivalent to the FF.

  16. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
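
    A minimal modern analogue of the surface-fitting step (illustrative only; the degree, data, and names are assumptions of this sketch, not details of FIT): a least-squares polynomial fit of z as a function of two independent variables with NumPy.

    ```python
    # Minimal sketch of a least-squares polynomial surface fit z = f(x, y),
    # analogous in spirit to FIT (names and polynomial degree here are illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
    z = 1.0 + 2.0 * x - 0.5 * y + 0.3 * x * y + rng.normal(0, 0.01, 100)

    # design matrix for a bilinear polynomial: 1, x, y, x*y
    A = np.column_stack([np.ones_like(x), x, y, x * y])
    coeffs, residuals, rank, _ = np.linalg.lstsq(A, z, rcond=None)
    print(coeffs)        # ~ [1.0, 2.0, -0.5, 0.3]
    ```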

  17. [Home computer stabilography: technical level, functional potentialities and spheres of application].

    PubMed

    Sliva, S S

    2005-01-01

    The paper describes, compares, and analyzes data on stabilographic computer equipment manufactured serially by the leading foreign and Russian companies. Potential spheres of application of stabilographic equipment are discussed. PMID:15757091

  18. Functional Assessment for Human-Computer Interaction: A Method for Quantifying Physical Functional Capabilities for Information Technology Users

    ERIC Educational Resources Information Center

    Price, Kathleen J.

    2011-01-01

    The use of information technology is a vital part of everyday life, but for a person with functional impairments, technology interaction may be difficult at best. Information technology is commonly designed to meet the needs of a theoretical "normal" user. However, there is no such thing as a "normal" user. A user's capabilities will vary over…

  19. Biological master games: using biologists' reasoning to guide algorithm development for integrated functional genomics.

    PubMed

    Breitling, Rainer; Herzyk, Pawel

    2005-01-01

    We review some powerful new algorithms that build on the intuitive biological interpretation techniques for statistical analysis of functional genomics experiments. Although they were originally designed for transcriptomics, we argue that these algorithms are applicable to any type of -omics study (transcriptomics, proteomics, metabolomics). The first is Rank Products (RP), a strictly non-parametric test statistic for detecting differentially regulated elements (genes, proteins, metabolites) in genome-wide screens; RP is particularly powerful for noisy data and low numbers of replicates and makes full use of the large number of parallel measurements that is typical of modern large-scale experiments. The second is Iterative Group Analysis (iGA), a statistical method that makes the transition from regulated single elements to significant classes of elements, and thus provides an automatic functional annotation of an experiment. The third is Graph-based iGA (GiGA), an extension of iGA that combines experimental data with a broad variety of biological annotations to highlight physiologically relevant regions in a given "evidence graph" (e.g., metabolic networks, signaling pathway diagrams, protein interaction maps). The sequential application of these techniques yields an increasingly abstract interpretation of experimental data that is at the same time quantitative, statistically rigorous, and biologically significant. The results can be used either as helpful tools to guide data visualization and exploration, or as the input for downstream computational applications in a systems biology framework. PMID:16209637
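
    As a hedged sketch of the Rank Products idea only (not the authors' implementation, with invented data and assuming a recent NumPy), the snippet below computes the geometric mean of within-replicate ranks and estimates significance by permutation.

    ```python
    # Hedged sketch of the Rank Products idea (not the authors' implementation):
    # rank genes within each replicate, take the geometric mean of ranks,
    # and assess significance by permuting values within replicates.
    import numpy as np

    def rank_products(data):
        # data: genes x replicates matrix (e.g. fold changes); small values = up-regulated
        ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
        return np.exp(np.mean(np.log(ranks), axis=1))   # geometric mean of ranks per gene

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 4))
    data[:10] -= 2.0                       # 10 truly "regulated" genes

    rp = rank_products(data)
    perm = np.array([rank_products(rng.permuted(data, axis=0)) for _ in range(200)])
    p_est = (perm <= rp[None, :]).mean(axis=0)   # rough permutation p-values
    print(np.argsort(p_est)[:10])                # mostly the first 10 genes
    ```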

  20. On one-dimensional stretching functions for finite-difference calculations. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1979-01-01

    The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
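
    As an illustration of interior clustering (a common sinh-based transform with an explicit clustering location and strength; not necessarily the exact function derived in the report), the short Python sketch below maps a uniform coordinate to a grid concentrated around an interior point.

    ```python
    # Illustrative interior stretching function (a common sinh-based clustering transform;
    # not necessarily the exact form derived in the report). Uniform xi in [0,1] is mapped
    # to x in [0,1] with grid points clustered around x_c; beta controls the clustering strength.
    import numpy as np

    def cluster_interior(n, x_c=0.3, beta=5.0):
        xi = np.linspace(0.0, 1.0, n)
        A = (1.0 / (2.0 * beta)) * np.log(
            (1.0 + (np.exp(beta) - 1.0) * x_c) / (1.0 + (np.exp(-beta) - 1.0) * x_c))
        return x_c * (1.0 + np.sinh(beta * (xi - A)) / np.sinh(beta * A))

    x = cluster_interior(21)
    print(np.round(x, 3))            # grid points in [0, 1]
    print(np.round(np.diff(x), 3))   # spacing is smallest near x = 0.3
    ```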

  1. Enhancing functionality and performance in the PVM network computing system. Period 1 progress report

    SciTech Connect

    Sunderam, V.

    1995-08-01

    The research funded by this grant is part of an ongoing research project in heterogeneous distributed computing with the PVM system, at Emory as well as at Oak Ridge Labs and the University of Tennessee. This grant primarily supports research at Emory that continues to evolve new concepts and systems in distributed computing, but it also includes the PI's ongoing interaction with the other groups in terms of collaborative research as well as software systems development and maintenance. The research effort at Emory has, in this first project period of the renewal (September 1994-June 1995), focused on (a) I/O frameworks for supporting data management in PVM; (b) evolution of a multithreaded concurrent computing model; and (c) responsive and portable graphical profiling tools for PVM.

  2. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    PubMed

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-01

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, with a computer cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently. PMID:26273876

  3. The use of computer graphic techniques for the determination of ventricular function.

    NASA Technical Reports Server (NTRS)

    Sandler, H.; Rasmussen, D.

    1972-01-01

    Description of computer techniques employed to increase the speed, accuracy, reliability, and scope of angiocardiographic analyses determining human heart dimensions. Chamber margins are traced with a Calma 303 digitizer from projections of the angiographic films. The digitized margins of the ventricular images are filed in a computer for subsequent analysis. The margins can be displayed on the television screen of a graphics unit for individual study or they can be viewed in real time (or at any selected speed) to study dynamic changes in the chamber outline. The construction of three dimensional images of the ventricle is described.

  4. Integrating computational modeling and functional assays to decipher the structure-function relationship of influenza virus PB1 protein

    PubMed Central

    Li, Chunfeng; Wu, Aiping; Peng, Yousong; Wang, Jingfeng; Guo, Yang; Chen, Zhigao; Zhang, Hong; Wang, Yongqiang; Dong, Jiuhong; Wang, Lulan; Qin, F. Xiao-Feng; Cheng, Genhong; Deng, Tao; Jiang, Taijiao

    2014-01-01

    The influenza virus PB1 protein is the core subunit of the heterotrimeric polymerase complex (PA, PB1 and PB2) in which PB1 is responsible for catalyzing RNA polymerization and binding to the viral RNA promoter. Among the three subunits, PB1 is the least known subunit so far in terms of its structural information. In this work, by integrating template-based structural modeling approach with all known sequence and functional information about the PB1 protein, we constructed a modeled structure of PB1. Based on this model, we performed mutagenesis analysis for the key residues that constitute the RNA template binding and catalytic (TBC) channel in an RNP reconstitution system. The results correlated well with the model and further identified new residues of PB1 that are critical for RNA synthesis. Moreover, we derived 5 peptides from the sequence of PB1 that form the TBC channel and 4 of them can inhibit the viral RNA polymerase activity. Interestingly, we found that one of them named PB1(491–515) can inhibit influenza virus replication by disrupting viral RNA promoter binding activity of polymerase. Therefore, this study has not only deepened our understanding of structure-function relationship of PB1, but also promoted the development of novel therapeutics against influenza virus. PMID:25424584

  5. Evaluating the Appropriateness of a New Computer-Administered Measure of Adaptive Function for Children and Youth with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng

    2016-01-01

    The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…

  6. Effective thermionic work function measurements of zirconium carbide using a computer-processed image of a thermionic projection microscope pattern

    SciTech Connect

    Mackie, W.A.; Hinrichs, C.H.; Cohen, I.M.; Alin, J.S.; Schnitzler, D.T.; Carleson, P.; Ginn, R.; Krueger, P.; Vetter, C.G. ); Davis, P.R. )

    1990-05-01

    We report on a unique experimental method to determine thermionic work functions of major crystal planes of single crystal zirconium carbide. Applications for transition metal carbides could include cathodes for advanced thermionic energy conversion, radiation immune microcircuitry, β-SiC substrates or high current density field emission cathodes. The primary emphasis of this paper is the analytical method used, that of computer processing a digitized image. ZrC single crystal specimens were prepared by floating zone arc refinement from sintered stock, yielding an average bulk stoichiometry of C/Zr=0.92. A 0.075 cm hemispherical cathode was prepared and mounted in a thermionic projection microscope (TPM) tube. The imaged patterns of thermally emitted electrons taken at various extraction voltages were digitized and computer analyzed to yield currents and corresponding emitting areas for major crystallographic planes. These data were taken at pyrometrically measured temperatures in the range 1700 < T < 2200 K. Schottky plots were then used to determine effective thermionic work functions as a function of crystallographic direction and temperature. Work function ordering for various crystal planes is reported through the TPM image processing method. Comparisons are made with effective thermionic and absolute (FERP) work function methods. To support the TPM image processing method, clean tungsten surfaces were examined and results are listed with accepted values.
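
    A hedged sketch of the Schottky-plot step with synthetic numbers (not the paper's data): thermionic current densities are generated from the Richardson-Dushman equation with Schottky barrier lowering, ln(J) is fitted against the square root of the applied field, and the zero-field extrapolation returns the effective work function.

    ```python
    # Hedged illustration (synthetic numbers, not the paper's data): recovering an effective
    # work function from a Schottky plot, i.e. extrapolating ln(J) vs sqrt(E) to zero field
    # and applying the Richardson-Dushman equation J0 = A_R * T^2 * exp(-phi / (k_B * T)).
    import numpy as np

    k_B = 8.617e-5          # Boltzmann constant, eV/K
    e = 1.602e-19           # elementary charge, C
    eps0 = 8.854e-12        # vacuum permittivity, F/m
    A_R = 120.0             # nominal Richardson constant, A cm^-2 K^-2
    T = 1900.0              # temperature, K
    phi_true = 3.4          # assumed work function, eV

    E = np.linspace(1e5, 1e7, 20)                          # applied field, V/m
    lowering = np.sqrt(e**3 * E / (4 * np.pi * eps0)) / e  # Schottky barrier lowering, eV
    J = A_R * T**2 * np.exp(-(phi_true - lowering) / (k_B * T))

    slope, intercept = np.polyfit(np.sqrt(E), np.log(J), 1)  # Schottky plot fit
    J0 = np.exp(intercept)                                   # zero-field extrapolation
    phi_eff = k_B * T * np.log(A_R * T**2 / J0)
    print(phi_eff)    # ~3.4 eV, recovering the assumed work function
    ```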

  7. Computational Modeling of Airway and Pulmonary Vascular Structure and Function: Development of a “Lung Physiome”

    PubMed Central

    Tawhai, M. H.; Clark, A. R.; Donovan, G. M.; Burrowes, K. S.

    2011-01-01

    Computational models of lung structure and function necessarily span multiple spatial and temporal scales, i.e., dynamic molecular interactions give rise to whole organ function, and the link between these scales cannot be fully understood if only molecular or organ-level function is considered. Here, we review progress in constructing multiscale finite element models of lung structure and function that are aimed at providing a computational framework for bridging the spatial scales from molecular to whole organ. These include structural models of the intact lung, embedded models of the pulmonary airways that couple to model lung tissue, and models of the pulmonary vasculature that account for distinct structural differences at the extra- and intra-acinar levels. Biophysically based functional models for tissue deformation, pulmonary blood flow, and airway bronchoconstriction are also described. The development of these advanced multiscale models has led to a better understanding of complex physiological mechanisms that govern regional lung perfusion and emergent heterogeneity during bronchoconstriction. PMID:22011236

  8. Hybrid pattern recognition method using evolutionary computing techniques applied to the exploitation of hyperspectral imagery and medical spectral data

    NASA Astrophysics Data System (ADS)

    Burman, Jerry A.

    1999-12-01

    Hyperspectral image sets are three-dimensional data volumes that are difficult to exploit by manual means because they are composed of multiple bands of image data that are not easily visualized or assessed. GTE Government Systems Corporation has developed a system that utilizes Evolutionary Computing techniques to automatically identify materials in terrain hyperspectral imagery. The system employs sophisticated signature preprocessing and a unique combination of non-parametric search algorithms guided by a model-based cost function to achieve rapid convergence and pattern recognition. The system is scalable and is capable of discriminating and identifying pertinent materials that comprise a specific object of interest in the terrain and estimating the percentage of materials present within a pixel of interest (spectral unmixing). The method has been applied and evaluated against real hyperspectral imagery data from the AVIRIS sensor. In addition, the process has been applied to remotely sensed infrared spectra collected at the microscopic level to assess the amounts of DNA, RNA and protein present in human tissue samples as an aid to the early detection of cancer.
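
    As a much simpler stand-in for the evolutionary search described above (illustrative only, with synthetic spectra), linear spectral unmixing by non-negative least squares recovers per-pixel material fractions from a small endmember library.

    ```python
    # Hedged sketch of linear spectral unmixing (a simpler stand-in for the evolutionary
    # search described): estimate per-pixel material abundances from known endmember spectra
    # by non-negative least squares, then normalize to fractions.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    bands, n_materials = 50, 3
    endmembers = rng.uniform(0.1, 1.0, size=(bands, n_materials))   # library spectra

    true_frac = np.array([0.6, 0.3, 0.1])
    pixel = endmembers @ true_frac + rng.normal(0, 0.005, bands)    # observed mixed pixel

    abund, _ = nnls(endmembers, pixel)
    print(abund / abund.sum())     # recovered fractions, close to [0.6, 0.3, 0.1]
    ```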

  9. Discourse Functions and Vocabulary Use in English Language Learners' Synchronous Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Rabab'ah, Ghaleb

    2013-01-01

    This study explores the discourse generated by English as a foreign language (EFL) learners using synchronous computer-mediated communication (CMC) as an approach to help English language learners to create social interaction in the classroom. It investigates the impact of synchronous CMC mode on the quantity of total words, lexical range and…

  10. Computational insights into function and inhibition of fatty acid amide hydrolase.

    PubMed

    Palermo, Giulia; Rothlisberger, Ursula; Cavalli, Andrea; De Vivo, Marco

    2015-02-16

    The Fatty Acid Amide Hydrolase (FAAH) enzyme is a membrane-bound serine hydrolase responsible for the deactivating hydrolysis of a family of naturally occurring fatty acid amides. FAAH is a critical enzyme of the endocannabinoid system, being mainly responsible for regulating the level of its main cannabinoid substrate anandamide. For this reason, pharmacological inhibition of FAAH, which increases the level of endogenous anandamide, is a promising strategy to cure a variety of diseases including pain, inflammation, and cancer. Much structural, mutagenesis, and kinetic data on FAAH has been generated over the last couple of decades. This has prompted several informative computational investigations to elucidate, at the atomic-level, mechanistic details on catalysis and inhibition of this pharmaceutically relevant enzyme. Here, we review how these computational studies - based on classical molecular dynamics, full quantum mechanics, and hybrid QM/MM methods - have clarified the binding and reactivity of some relevant substrates and inhibitors of FAAH. We also discuss the experimental implications of these computational insights, which have provided a thoughtful elucidation of the complex physical and chemical steps of the enzymatic mechanism of FAAH. Finally, we discuss how computations have been helpful for building structure-activity relationships of potent FAAH inhibitors. PMID:25240419

  11. Variability in Reading Ability Gains as a Function of Computer-Assisted Instruction Method of Presentation

    ERIC Educational Resources Information Center

    Johnson, Erin Phinney; Perry, Justin; Shamir, Haya

    2010-01-01

    This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…

  12. Computers, Mass Media, and Schooling: Functional Equivalence in Uses of New Media.

    ERIC Educational Resources Information Center

    Lieberman, Debra A.; And Others

    1988-01-01

    Presents a study of 156 California eighth grade students which contrasted their recreational and intellectual computer use in terms of academic performance and use of other media. Among the conclusions were that recreational users watched television heavily and performed poorly in school, whereas intellectual users watched less television,…

  13. A Computation of the Frequency Dependent Dielectric Function for Energetic Materials

    NASA Astrophysics Data System (ADS)

    Zwitter, D. E.; Kuklja, M. M.; Kunz, A. B.

    1999-06-01

    The imaginary part of the dielectric function as a function of frequency is calculated for the solids RDX, TATB, ADN, and PETN. Calculations have been performed including the effects of isotropic and uniaxial pressure. Simple lattice defects are included in some of the calculations.
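
    As a minimal analogue of a frequency-dependent dielectric function (a single Lorentz oscillator with assumed parameters, not the electronic-structure calculation of the abstract), the sketch below evaluates Im ε(ω) over a frequency grid.

    ```python
    # Minimal analogue (not the electronic-structure calculation of the abstract):
    # the imaginary part of a Lorentz-oscillator dielectric function,
    #   eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w).
    import numpy as np

    w = np.linspace(0.1, 10.0, 500)      # frequency (arbitrary units)
    wp, w0, gamma = 3.0, 4.0, 0.3        # plasma frequency, resonance, damping (assumed)

    eps = 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)
    print(w[np.argmax(eps.imag)])        # peak of Im(eps) sits near the resonance w0
    ```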

  14. Computer analysis of protein functional sites projection on exon structure of genes in Metazoa

    PubMed Central

    2015-01-01

    Background. Study of the relationship between the structural and functional organization of proteins and their coding genes is necessary for understanding the evolution of molecular systems and can provide new knowledge for many applications, such as designing proteins with improved medical and biological properties. It is well known that the functional properties of proteins are determined by their functional sites. Functional sites are usually represented by a small number of amino acid residues that are distantly located from each other in the amino acid sequence. They are highly conserved within their functional group and vary significantly in structure between such groups. Given these facts, analysis of the general properties of the structural organization of functional sites at the protein level and at the level of the exon-intron structure of the coding gene remains an important problem. Results. One approach to this analysis is the projection of the amino acid residue positions of the functional sites, along with the exon boundaries, onto the gene structure. In this paper, we examined the discontinuity of the functional sites in the exon-intron structure of genes and the distribution of lengths and phases of the functional-site-encoding exons in vertebrate genes. We show that the DNA fragments coding the functional sites were in the same exons or in close exons. This observed tendency of the exons that code functional sites to cluster could be considered a unit of protein evolution. We studied the characteristics of the structure of the exon boundaries that code, and do not code, functional sites in 11 Metazoa species. This is accompanied by a reduced frequency of intercodon gaps (phase 0) in exons encoding functional-site residues, which may be evidence of evolutionary limitations on exon shuffling. Conclusions. These results characterize the features of the coding exon-intron structure that affect the

  15. Cosmic Reionization on Computers: The Faint End of the Galaxy Luminosity Function

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.

    2016-07-01

    Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at z ≳ 6. A commonly used Schechter function approximation with the magnitude cut at M_cut ~ -13 provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut M_cut is found to vary between -12 and -14 with a mild redshift dependence. An analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
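
    For illustration, the Schechter parameterization with a magnitude cut discussed above can be evaluated directly; the parameter values in the sketch below are placeholders, not the fitted values from the simulations.

    ```python
    # Illustrative evaluation of a Schechter luminosity function in magnitudes with a cut
    # at M_cut = -13 (parameter values below are placeholders, not the paper's fits).
    import numpy as np

    def schechter_mag(M, phi_star, M_star, alpha):
        x = 10.0 ** (0.4 * (M_star - M))
        return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

    M = np.linspace(-22, -10, 200)
    phi = schechter_mag(M, phi_star=1e-3, M_star=-20.5, alpha=-2.0)
    phi[M > -13] = 0.0                   # impose the magnitude cut M_cut = -13
    print(phi[::40])
    ```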

  16. Cosmic reionization on computers: The faint end of the galaxy luminosity function

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-07-01

    Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at z ≳ 6. A commonly used Schechter function approximation with the magnitude cut at M_cut ∼ −13 provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut M_cut is found to vary between −12 and −14 with a mild redshift dependence. Here, an analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.

  17. High-Throughput Computational Design of Advanced Functional Materials: Topological Insulators and Two-Dimensional Electron Gas Systems

    NASA Astrophysics Data System (ADS)

    Yang, Kesong

    As a rapidly growing area of materials science, high-throughput (HT) computational materials design is playing a crucial role in accelerating the discovery and development of novel functional materials. In this presentation, I will first introduce the strategy of HT computational materials design, and take the HT discovery of topological insulators (TIs) as a practical example to illustrate the use of such an approach. Topological insulators are one of the most studied classes of novel materials because of their great potential for applications ranging from spintronics to quantum computers. Here I will show that, by defining a reliable and accessible descriptor, which represents the topological robustness or feasibility of the candidate, and by searching the quantum materials repository aflowlib.org, we have automatically discovered 28 TIs (some of them already known) in five different symmetry families. Next, I will talk about our recent research work on the HT computational design of the perovskite-based two-dimensional electron gas (2DEG) systems. The 2DEG formed on the perovskite oxide heterostructure (HS) has potential applications in next-generation nanoelectronic devices. In order to achieve practical implementation of the 2DEG in the device design, desired physical properties such as high charge carrier density and mobility are necessary. Here I show that, using the same strategy as in the HT discovery of TIs, by introducing a series of combinatorial descriptors, we have successfully identified a series of candidate 2DEG systems based on the perovskite oxides. This work provides another example of applying the HT computational design approach to the discovery of advanced functional materials.

  18. Head sinuses, melon, and jaws of bottlenose dolphins, Tursiops truncatus, observed with computed tomography structural and single photon emission computed tomography functional imaging

    NASA Astrophysics Data System (ADS)

    Ridgway, Sam; Houser, Dorian; Finneran, James J.; Carder, Don; van Bonn, William; Smith, Cynthia; Hoh, Carl; Corbeil, Jacqueline; Mattrey, Robert

    2003-04-01

    The head sinuses, melon, and lower jaws of dolphins have been studied extensively with various methods including radiography, chemical analysis, and imaging of dead specimens. Here we report the first structural and functional imaging of live dolphins. Two animals were imaged, one male and one female. Computed tomography (CT) revealed extensive air cavities posterior and medial to the ear as well as between the ear and sound-producing nasal structures. Single photon emission computed tomography (SPECT) employing 50 mCi of the intravenously injected ligand technetium [Tc-99m] biscisate (Neurolite) revealed extensive uptake in the core of the melon as well as near the pan bone area of the lower jaw. Count density on SPECT images was four times greater in the melon than in the surrounding tissue and blubber layer, suggesting that the melon is an active rather than a passive tissue. Since the dolphin temporal bone is not attached to the skull except by fibrous suspensions, the air cavities medial and posterior to the ear, as well as the abutment of the temporal bone to the acoustic fat bodies of each lower jaw, should be considered in modeling the mechanism of sound transmission from the environment to the dolphin ear.

  19. Substrate Tunnels in Enzymes: Structure-Function Relationships and Computational Methodology

    PubMed Central

    Kingsley, Laura J.; Lill, Markus A.

    2015-01-01

    In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein and in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years due to implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with tunnels and to provide a brief summary of the computational tools used to identify and evaluate these tunnels. PMID:25663659

  20. Computationally efficient approach for the minimization of volume constrained vector-valued Ginzburg-Landau energy functional

    NASA Astrophysics Data System (ADS)

    Tavakoli, Rouhollah

    2015-08-01

    The minimization of a volume-constrained, vector-valued Ginzburg-Landau energy functional is considered in the present study. It has many applications in computational science and engineering, such as conservative phase separation in multiphase systems (e.g., spinodal decomposition), phase coarsening in multiphase systems, color image segmentation, and optimal space partitioning. A computationally efficient algorithm is presented to solve the space-discretized form of the original optimization problem. The algorithm is based on a constrained nonmonotone L2 gradient flow of the Ginzburg-Landau functional followed by a regularization step, which results from a Tikhonov regularization term added to the objective functional and lifts the solution from the L2 function space into the H1 space. The regularization step not only improves the convergence rate of the presented algorithm but also increases its stability bound. Step-size selection based on the Barzilai-Borwein approach is adopted to improve the convergence rate of the introduced algorithm. The success and performance of the presented approach are demonstrated through several numerical experiments. To make it possible to reproduce the results presented in this work, the MATLAB implementation of the presented algorithm is provided as supplementary material.
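
    The gradient-flow-plus-Barzilai-Borwein idea summarized above can be illustrated with a much-reduced sketch. The Python snippet below is an illustrative toy, not the paper's MATLAB algorithm: it minimizes a scalar, one-dimensional Ginzburg-Landau energy under a fixed-mean ("volume") constraint by projecting the L2 gradient onto zero-mean directions and choosing step sizes with the BB1 formula; the vector-valued functional, the H1 regularization step, and the nonmonotone safeguard are omitted, and all parameters are assumptions.

```python
# Toy sketch: volume-constrained minimization of a scalar Ginzburg-Landau energy on a
# 1-D periodic grid with projected L2 gradient steps and Barzilai-Borwein (BB1) step sizes.
import numpy as np

def gl_energy_and_grad(u, dx, eps):
    """Energy E(u) = sum( eps^2/2 |u_x|^2 + (1 - u^2)^2 / 4 ) dx and its L2 gradient."""
    ux = (np.roll(u, -1) - u) / dx                        # forward difference, periodic
    energy = np.sum(0.5 * eps**2 * ux**2 + 0.25 * (1.0 - u**2)**2) * dx
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    grad = -eps**2 * lap - u * (1.0 - u**2)               # continuum variational derivative
    return energy, grad

def minimize_bb(u0, dx, eps, n_iter=500):
    """Projected gradient descent; the projection keeps the mean of u (the 'volume') fixed."""
    u = u0.copy()
    energy, grad = gl_energy_and_grad(u, dx, eps)
    grad -= grad.mean()                                    # project onto zero-mean directions
    step = 1e-3
    for _ in range(n_iter):
        u_new = u - step * grad
        e_new, g_new = gl_energy_and_grad(u_new, dx, eps)
        g_new -= g_new.mean()
        s, y = u_new - u, g_new - grad
        step = abs(np.dot(s, s) / (np.dot(s, y) + 1e-30))  # BB1 step size
        u, grad, energy = u_new, g_new, e_new
    return u, energy

if __name__ == "__main__":
    n, L, eps = 256, 1.0, 0.02
    u0 = 0.1 * np.random.default_rng(0).standard_normal(n)  # mean ~ 0 sets the "volume"
    u, e = minimize_bb(u0, L / n, eps)
    print(f"final energy: {e:.6f}, mean(u) = {u.mean():+.3e}")
```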

  1. Use of 4-Dimensional Computed Tomography-Based Ventilation Imaging to Correlate Lung Dose and Function With Clinical Outcomes

    SciTech Connect

    Vinogradskiy, Yevgeniy; Castillo, Richard; Castillo, Edward; Department of Computational and Applied Mathematics, Rice University, Houston, Texas ; Tucker, Susan L.; Liao, Zhongxing; Guerrero, Thomas; Department of Computational and Applied Mathematics, Rice University, Houston, Texas ; Martel, Mary K.

    2013-06-01

    Purpose: Four-dimensional computed tomography (4DCT)-based ventilation is an emerging imaging modality that can be used in the thoracic treatment planning process. The clinical benefit of using ventilation images in radiation treatment plans remains to be tested. The purpose of the current work was to test the potential benefit of using ventilation in treatment planning by evaluating whether dose to highly ventilated regions of the lung resulted in increased incidence of clinical toxicity. Methods and Materials: Pretreatment 4DCT data were used to compute pretreatment ventilation images for 96 lung cancer patients. Ventilation images were calculated using 4DCT data, deformable image registration, and a density-change based algorithm. Dose–volume and ventilation-based dose function metrics were computed for each patient. The ability of the dose–volume and ventilation-based dose–function metrics to predict for severe (grade 3+) radiation pneumonitis was assessed using logistic regression analysis, area under the curve (AUC) metrics, and bootstrap methods. Results: A specific patient example is presented that demonstrates how incorporating ventilation-based functional information can help separate patients with and without toxicity. The logistic regression significance values were all lower for the dose–function metrics (range P=.093-.250) than for their dose–volume equivalents (range, P=.331-.580). The AUC values were all greater for the dose–function metrics (range, 0.569-0.620) than for their dose–volume equivalents (range, 0.500-0.544). Bootstrap results revealed an improvement in model fit using dose–function metrics compared to dose–volume metrics that approached significance (range, P=.118-.155). Conclusions: To our knowledge, this is the first study that attempts to correlate lung dose and 4DCT ventilation-based function to thoracic toxicity after radiation therapy. Although the results were not significant at the .05 level, our data suggests
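
    As a rough illustration of the statistical comparison described above (logistic regression and AUC for dose-volume versus ventilation-weighted dose-function metrics), the following sketch uses entirely synthetic data; the metric names, the data-generating model, and the sample size are assumptions for demonstration only and are not the study's cohort or results.

```python
# Synthetic comparison of a dose-volume metric and a ventilation-weighted "dose-function"
# metric as predictors of a binary toxicity label, scored with logistic regression + AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_patients = 96

v20_dose_volume = rng.uniform(10, 40, n_patients)           # % lung volume above 20 Gy (toy)
ventilation_weight = rng.uniform(0.5, 1.5, n_patients)       # relative ventilation of irradiated lung
fv20_dose_function = v20_dose_volume * ventilation_weight    # toy "dose-function" metric

# Toxicity is made to depend on dose to *functional* lung, so the function metric should score higher.
p_tox = 1.0 / (1.0 + np.exp(-(fv20_dose_function - 30.0) / 5.0))
toxicity = (rng.uniform(size=n_patients) < p_tox).astype(int)

for name, metric in [("dose-volume V20", v20_dose_volume),
                     ("dose-function fV20", fv20_dose_function)]:
    X = metric.reshape(-1, 1)
    model = LogisticRegression().fit(X, toxicity)
    auc = roc_auc_score(toxicity, model.predict_proba(X)[:, 1])
    print(f"{name:20s}  AUC = {auc:.3f}")
```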

  2. Development of microgravity, full body functional reach envelope using 3-D computer graphic models and virtual reality technology

    NASA Technical Reports Server (NTRS)

    Lindsey, Patricia F.

    1994-01-01

    In microgravity conditions, mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body, functional reach envelope for microgravity environments are imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body, functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.

  3. When can Empirical Green Functions be computed from Noise Cross-Correlations? Hints from different Geographical and Tectonic environments

    NASA Astrophysics Data System (ADS)

    Matos, Catarina; Silveira, Graça; Custódio, Susana; Domingues, Ana; Dias, Nuno; Fonseca, João F. B.; Matias, Luís; Krueger, Frank; Carrilho, Fernando

    2014-05-01

    Noise cross-correlations are now widely used to extract Green functions between station pairs. But, do all the cross-correlations routinely computed produce successful Green Functions? What is the relationship between noise recorded in a couple of stations and the cross-correlation between them? During the last decade, we have been involved in the deployment of several temporary dense broadband (BB) networks within the scope of both national projects and international collaborations. From 2000 to 2002, a pool of 8 BB stations continuously operated in the Azores in the scope of the Memorandum of Understanding COSEA (COordinated Seismic Experiment in the Azores). Thanks to the Project WILAS (West Iberia Lithosphere and Astenosphere Structure, PTDC/CTE-GIX/097946/2008) we temporarily increased the number of BB deployed in mainland Portugal to more than 50 (permanent + temporary) during the period 2010 - 2012. In 2011/12 a temporary pool of 12 seismometers continuously recorded BB data in the Madeira archipelago, as part of the DOCTAR (Deep Ocean Test Array Experiment) project. Project CV-PLUME (Investigation on the geometry and deep signature of the Cape Verde mantle plume, PTDC/CTE-GIN/64330/2006) covered the archipelago of Cape Verde, North Atlantic, with 40 temporary BB stations in 2007/08. Project MOZART (Mozambique African Rift Tomography, PTDC/CTE-GIX/103249/2008), covered Mozambique, East Africa, with 30 temporary BB stations in the period 2011 - 2013. These networks, located in very distinct geographical and tectonic environments, offer an interesting opportunity to study seasonal and spatial variations of noise sources and their impact on Empirical Green functions computed from noise cross-correlation. Seismic noise recorded at different seismic stations is evaluated by computation of the probability density functions of power spectral density (PSD) of continuous data. To assess seasonal variations of ambient noise sources in frequency content, time-series of
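
    The core processing step behind the study described above is the cross-correlation of ambient-noise records from two stations. The sketch below is a minimal illustration with synthetic traces, not the project's processing chain: spectral whitening, temporal normalization, and stacking over many days are omitted, and the sampling rate, travel time, and noise levels are assumptions.

```python
# Minimal noise cross-correlation sketch: the correlation of two synthetic noise traces
# sharing a delayed common source peaks near the inter-station travel time.
import numpy as np

def noise_cross_correlation(trace_a, trace_b, max_lag, fs):
    """Frequency-domain cross-correlation, returned for lags in [-max_lag, +max_lag] seconds."""
    n = len(trace_a) + len(trace_b) - 1
    nfft = 1 << (n - 1).bit_length()                      # next power of two for the FFT
    spec = np.fft.rfft(trace_a, nfft) * np.conj(np.fft.rfft(trace_b, nfft))
    cc = np.fft.irfft(spec, nfft)
    cc = np.concatenate([cc[-(len(trace_b) - 1):], cc[:len(trace_a)]])  # reorder to physical lags
    lags = np.arange(-(len(trace_b) - 1), len(trace_a)) / fs
    keep = np.abs(lags) <= max_lag
    return lags[keep], cc[keep]

if __name__ == "__main__":
    fs, hours = 10.0, 1.0                                  # 10 Hz sampling, 1 hour of noise
    n = int(fs * 3600 * hours)
    rng = np.random.default_rng(0)
    source = rng.standard_normal(n)
    delay = int(5.0 * fs)                                  # 5 s travel time between stations
    sta_a = source + 0.5 * rng.standard_normal(n)
    sta_b = np.roll(source, delay) + 0.5 * rng.standard_normal(n)
    lags, cc = noise_cross_correlation(sta_a, sta_b, max_lag=20.0, fs=fs)
    print("peak at lag %.1f s" % lags[np.argmax(cc)])      # expect about -5 s with this convention
```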

  4. Computing texture boundaries from images.

    PubMed

    Voorhees, H; Poggio, T

    1988-05-26

    Recent computational and psychological theories of human texture vision assert that texture discrimination is based on first-order differences in geometric and luminance attributes of texture elements, called 'textons'. Significant differences in the density, orientation, size, or contrast of line segments or other small features in an image have been shown to cause immediate perception of texture boundaries. However, the psychological theories, which are based on the perception of synthetic images composed of lines and symbols, neglect two important issues. First, how can textons be computed from grey-level images of natural scenes? And second, how, exactly, can texture boundaries be found? Our analysis of these two issues has led to an algorithm that is fully implemented and which successfully detects boundaries in natural images. We propose that blobs computed by a centre-surround operator are useful as texture elements, and that a simple non-parametric statistic can be used to compare local distributions of blob attributes to locate texture boundaries. Although designed for natural images, our computation agrees with some psychophysical findings, in particular, those of Adelson and Bergen (described in the preceding article), which cast doubt on the hypothesis that line segment crossings or termination points are textons. PMID:3374570
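
    In the spirit of the approach described above (centre-surround blobs as texture elements, compared across a boundary with a simple non-parametric statistic), the following sketch builds a synthetic two-texture image, detects blob-like elements with a difference-of-Gaussians operator, and compares blob-size distributions on either side of a candidate boundary with a Kolmogorov-Smirnov test. The image, thresholds, and the specific statistic are illustrative assumptions, not the authors' implementation.

```python
# Toy texture-boundary check: DoG blob detection plus a non-parametric two-sample test
# on a blob attribute (size) across a candidate vertical boundary.
import numpy as np
from scipy import ndimage
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic texture: left half has small dots, right half has larger dots.
img = np.zeros((128, 256))
for _ in range(300):
    r, c = rng.integers(4, 124), rng.integers(4, 124)
    img[r - 1:r + 2, c - 1:c + 2] = 1.0                    # small blobs on the left
for _ in range(300):
    r, c = rng.integers(4, 124), rng.integers(132, 252)
    img[r - 3:r + 4, c - 3:c + 4] = 1.0                    # larger blobs on the right
img += 0.1 * rng.standard_normal(img.shape)

# Centre-surround operator: difference of Gaussians, then label connected blobs.
dog = ndimage.gaussian_filter(img, 1.0) - ndimage.gaussian_filter(img, 3.0)
blobs, n_blobs = ndimage.label(dog > 0.2)
sizes = ndimage.sum(np.ones_like(img), blobs, index=range(1, n_blobs + 1))
centers = ndimage.center_of_mass(np.ones_like(img), blobs, index=range(1, n_blobs + 1))
cols = np.array([c for _, c in centers])

# Compare blob-size distributions on the two sides of a candidate vertical boundary.
left, right = sizes[cols < 128], sizes[cols >= 128]
stat, p = ks_2samp(left, right)
print(f"{len(left)} vs {len(right)} blobs, KS statistic = {stat:.2f}, p = {p:.2e}")
```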

  5. A first-principles approach using Maximally Localized Wannier Functions for computing and understanding elasto-optic response

    NASA Astrophysics Data System (ADS)

    Liang, Xin; Ismail-Beigi, Sohrab

    Strain-induced changes of optical properties are of use in the design and functioning of devices that couple photons and phonons. The elasto-optic (or photo-elastic) effect describes a general materials property where strain induces a change in the dielectric tensor. Despite a number of experimental and computational works, it is fair to say that a basic physical understanding of the effect and its materials dependence is lacking: e.g., we know of no materials design rule for enhancing or suppressing elasto-optic response. Based on our previous work, we find that a real space representation, as opposed to a k-space description, is a promising way to understand this effect. We have finished the development of a method of computing the dielectric and elasto-optic tensors using Maximally Localized Wannier Functions (MLWFs). By analyzing responses to uniaxial strain, we find that both tensors respond in a localized manner to the perturbation: the dominant optical transitions are between local electronic states on nearby bonds. We describe the method, the resulting physical picture and computed results for semiconductors. This work is supported by the National Science Foundation through Grant NSF DMR-1104974.

  6. An atomic orbital based real-time time-dependent density functional theory for computing electronic circular dichroism band spectra

    NASA Astrophysics Data System (ADS)

    Goings, Joshua J.; Li, Xiaosong

    2016-06-01

    One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.

  7. Tetralogy of Fallot Cardiac Function Evaluation and Intelligent Diagnosis Based on Dual-Source Computed Tomography Cardiac Images.

    PubMed

    Cai, Ken; Rongqian, Yang; Li, Lihua; Xie, Zi; Ou, Shanxing; Chen, Yuke; Dou, Jianhong

    2016-05-01

    Tetralogy of Fallot (TOF) is the most common complex congenital heart disease (CHD) of the cyanotic type. Studies on ventricular functions have received an increasing amount of attention as the development of diagnosis and treatment technology for CHD continues to advance. Reasonable options for imaging examination and accurate assessment of preoperative and postoperative left ventricular functions of TOF patients are important in improving the cure rate of TOF radical operation, therapeutic evaluation, and prognostic judgment. Therefore, with the aid of dual-source computed tomography (DSCT), which provides cardiac images with high temporal resolution and high definition, we measured the left ventricular time-volume curve from the image data and calculated the left ventricular function parameters to conduct a preliminary evaluation of TOF patients. To comprehensively evaluate the cardiac function, the segmental ventricular wall function parameters were measured, and the measurement results were mapped to a bull's eye diagram to realize the standardization of segmental ventricular wall function evaluation. Finally, we introduced a new clustering method based on auto-regression model parameters and combined this method with Euclidean distance measurements to establish an intelligent diagnosis of TOF. The results of this experiment show that the TOF evaluation and the intelligent diagnostic methods proposed in this article are feasible. PMID:26496001

  8. The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias

    2010-12-01

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self " component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
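
    The "computer simulation" side of the comparison above can be sketched very compactly. The snippet below is a toy stand-in, not the paper's calculation: it runs overdamped Brownian dynamics for a small system of purely repulsive (WCA-type, rather than strictly hard-sphere) particles and histograms particle displacements to estimate the self part of the van Hove function G_s(r, t); the density, potential, and run length are assumptions.

```python
# Brownian dynamics of soft repulsive spheres and a histogram estimate of the self
# van Hove function G_s(r, t) (distribution of single-particle displacements).
import numpy as np

def wca_forces(pos, box, eps=1.0, sigma=1.0):
    """Pairwise purely repulsive (WCA) forces with the minimum image convention."""
    forces = np.zeros_like(pos)
    cutoff2 = 2.0 ** (1.0 / 3.0) * sigma ** 2          # (2^(1/6) sigma)^2
    for i in range(len(pos)):
        d = pos - pos[i]                               # vectors from particle i to all others
        d -= box * np.round(d / box)
        r2 = np.einsum("ij,ij->i", d, d)
        mask = (r2 > 0.0) & (r2 < cutoff2)
        inv6 = (sigma ** 2 / r2[mask]) ** 3
        fmag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2[mask]
        forces[i] -= np.sum(fmag[:, None] * d[mask], axis=0)
    return forces

rng = np.random.default_rng(0)
box, dt, n_steps = 8.0, 1e-4, 2000
side = np.arange(4) * (box / 4.0) + box / 8.0          # 4 x 4 x 4 cubic lattice, 64 particles
pos0 = np.array(np.meshgrid(side, side, side)).reshape(3, -1).T
pos = pos0.copy()

for _ in range(n_steps):                               # overdamped dynamics with kT = D = 1
    pos = pos + dt * wca_forces(pos, box) + np.sqrt(2.0 * dt) * rng.standard_normal(pos.shape)

disp = np.linalg.norm(pos - pos0, axis=1)              # displacements after t = n_steps * dt
hist, edges = np.histogram(disp, bins=15, range=(0.0, 3.0), density=True)
r = 0.5 * (edges[:-1] + edges[1:])
G_s = hist / (4.0 * np.pi * r ** 2)                    # convert P(|dr|) to G_s(r, t)
print(np.round(G_s, 3))
```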

  9. The van Hove distribution function for brownian hard spheres: dynamical test particle theory and computer simulations for bulk dynamics.

    PubMed

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J; Schmidt, Matthias

    2010-12-14

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self " component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities. PMID:21171689

  10. Computing zeros of analytic functions in the complex plane without using derivatives

    NASA Astrophysics Data System (ADS)

    Gillan, C. J.; Schuchinsky, A.; Spence, I.

    2006-08-01

    We present a package in Fortran 90 which solves f(z) = 0, where z ∈ W ⊂ C, without requiring the evaluation of derivatives f′(z). W is bounded by a simple closed curve and f(z) must be holomorphic within W. We have developed and tested the package to support our work in the modeling of high frequency and optical wave guiding and resonant structures. The respective eigenvalue problems are particularly challenging because they require the high precision computation of all multiple complex roots of f(z) confined to the specified finite domain. Generally f(z), despite being holomorphic, does not have an explicit analytical form, thereby inhibiting evaluation of its derivatives. Program summary Title of program: EZERO Catalogue identifier: ADXY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXY_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: IBM compatible desktop PC Operating system: Fedora Core 2 Linux (with 2.6.5 kernel) Programming languages used: Fortran 90 No. of bits in a word: 32 No. of processors used: one Has the code been vectorized: no No. of lines in distributed program, including test data, etc.: 21045 Number of bytes in distributed program including test data, etc.: 223 756 Distribution format: tar.gz Peripherals used: none Method of solution: Our package uses the principle of the argument to count the number of zeros encompassed by a contour and then computes estimates for the zeros. Refined results for each zero are obtained by application of the derivative-free Halley method with or without Aitken acceleration, as the user wishes.
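
    The derivative-free zero counting mentioned in the method of solution rests on the principle of the argument: the number of zeros of a holomorphic function inside a closed contour equals the winding number of f(z) about the origin as z traverses the contour. The sketch below is a toy illustration of that principle only, not the EZERO package's algorithm; the contour, sampling density, and test function are assumptions.

```python
# Derivative-free zero counting via the principle of the argument: track the total
# change of arg f(z) around a circle and divide by 2*pi.
import numpy as np

def count_zeros_in_disk(f, center, radius, n_samples=4096):
    """Count zeros (with multiplicity) of a holomorphic f inside |z - center| < radius."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    values = f(center + radius * np.exp(1j * theta))
    phases = np.angle(values)
    jumps = np.diff(np.concatenate([phases, phases[:1]]))
    jumps = (jumps + np.pi) % (2.0 * np.pi) - np.pi        # unwrap each jump into (-pi, pi]
    winding = np.sum(jumps) / (2.0 * np.pi)
    return int(np.rint(winding))

if __name__ == "__main__":
    f = lambda z: (z - 0.3) * (z + 0.2 - 0.4j) ** 2 * np.exp(z)   # 3 zeros inside the unit disk
    print(count_zeros_in_disk(f, 0.0, 1.0))                       # expect 3
```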

  11. Using brain-computer interfaces to induce neural plasticity and restore function

    NASA Astrophysics Data System (ADS)

    Grosse-Wentrup, Moritz; Mattia, Donatella; Oweiss, Karim

    2011-04-01

    Analyzing neural signals and providing feedback in realtime is one of the core characteristics of a brain-computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI technology for therapeutic purposes is increasingly gaining popularity in the BCI community. In this paper, we discuss the state-of-the-art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. We conclude with a list of open questions and recommendations for future research in this field.

  12. Computer program for supersonic Kernel-function flutter analysis of thin lifting surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, H. J.

    1974-01-01

    This report describes a computer program (program D2180) that has been prepared to implement the analysis described in (N71-10866) for calculating the aerodynamic forces on a class of harmonically oscillating planar lifting surfaces in supersonic potential flow. The planforms treated are the delta and modified-delta (arrowhead) planforms with subsonic leading and supersonic trailing edges, and (essentially) pointed tips. The resulting aerodynamic forces are applied in a Galerkin modal flutter analysis. The required input data are the flow and planform parameters including deflection-mode data, modal frequencies, and generalized masses.

  13. Fast computation of the Gauss hypergeometric function with all its parameters complex with application to the Pöschl-Teller-Ginocchio potential wave functions

    NASA Astrophysics Data System (ADS)

    Michel, N.; Stoitsov, M. V.

    2008-04-01

    The fast computation of the Gauss hypergeometric function 2F1 with all its parameters complex is a difficult task. Although the 2F1 function verifies numerous analytical properties involving power series expansions whose implementation is apparently immediate, their use is thwarted by instabilities induced by cancellations between very large terms. Furthermore, small areas of the complex plane, in the vicinity of z = e^(±iπ/3), are inaccessible using 2F1 power series linear transformations. In order to solve these problems, a generalization of R.C. Forrey's transformation theory has been developed. The latter has been successful in treating the 2F1 function with real parameters. As in the real-case transformation theory, the large canceling terms occurring in 2F1 analytical formulas are rigorously dealt with, but by way of a new method, directly applicable to the complex plane. Taylor series expansions are employed to enter complex areas outside the domain of validity of power series analytical formulas. The proposed algorithm, however, becomes unstable in general when |a|, |b|, |c| are moderate or large. As a physical application, the calculation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential involving 2F1 evaluations is considered. Program summary Program title: hyp_2F1, PTG_wf Catalogue identifier: AEAE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6839 No. of bytes in distributed program, including test data, etc.: 63 334 Distribution format: tar.gz Programming language: C++, Fortran 90 Computer: Intel i686 Operating system: Linux, Windows Word size: 64 bits Classification: 4.7 Nature of problem: The Gauss hypergeometric function 2F1, with all its parameters complex, is uniquely
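
    For readers who only need reference values of 2F1 with complex parameters, an arbitrary-precision evaluation is a convenient cross-check. The snippet below uses mpmath's hyp2f1; it is not the hyp_2F1/PTG_wf code distributed with the paper, and the sample parameters are arbitrary, with z chosen close to exp(iπ/3), one of the regions the abstract identifies as difficult for power-series transformation methods.

```python
# Reference evaluation of 2F1(a, b; c; z) with all parameters complex, via mpmath.
from mpmath import mp, mpc, hyp2f1

mp.dps = 30                                # work with 30 significant digits

a = mpc(1.5, 2.0)
b = mpc(-0.5, 1.0)
c = mpc(2.5, -1.0)
z = mpc(0.50, 0.85)                        # near exp(i*pi/3) ~ 0.5 + 0.866i

print(hyp2f1(a, b, c, z))
```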

  14. Accuracy and computational efficiency of real-time subspace propagation schemes for the time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman

    2016-05-01

    Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales.
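
    The benchmark mentioned above, a fourth-order Taylor expansion of the time propagator, is simple to write down. The sketch below applies it to a small random Hermitian matrix standing in for the Kohn-Sham Hamiltonian; it is an illustration of the Taylor propagator only, not of the Lánczos or adiabatic-eigenbasis subspace schemes studied in the paper, and the matrix size, time step, and step count are assumptions.

```python
# Fourth-order Taylor propagation of a state vector under exp(-i H dt).
import numpy as np

def taylor4_step(H, psi, dt):
    """psi(t + dt) ~= sum_{n=0}^{4} (-i H dt)^n / n! applied to psi(t)."""
    result = psi.copy()
    term = psi.copy()
    for n in range(1, 5):
        term = (-1j * dt / n) * (H @ term)
        result = result + term
    return result

rng = np.random.default_rng(0)
dim = 50
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = 0.5 * (A + A.conj().T)                      # Hermitian test "Hamiltonian"
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

dt, n_steps = 1e-3, 1000
for _ in range(n_steps):
    psi = taylor4_step(H, psi, dt)

# Norm conservation is a quick stability check; exact propagation keeps it at 1.
print(f"norm after {n_steps} steps: {np.linalg.norm(psi):.8f}")
```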

  15. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied to autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros 433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
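
    The offline-training, online-evaluation idea described above can be sketched with a generic radial basis function network. The snippet below is a toy stand-in, not the paper's trained guidance network: the "co-state" mapping is a made-up smooth function replacing solutions of the TPBVP, and the network width, center count, and data sizes are assumptions.

```python
# Generic RBF network: fit (initial state -> initial co-state) pairs offline by least
# squares, then evaluate cheaply for an online query.
import numpy as np

class RBFNetwork:
    def __init__(self, centers, width):
        self.centers = centers                        # (n_centers, n_in)
        self.width = width
        self.weights = None                           # (n_centers, n_out), set by fit()

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))  # Gaussian radial basis functions

    def fit(self, X, Y):
        Phi = self._design(X)
        self.weights, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

    def predict(self, X):
        return self._design(X) @ self.weights

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 3))            # offline database of initial states
costates = np.stack([np.sin(states[:, 0]) + states[:, 1] ** 2,
                     np.cos(states[:, 2]) * states[:, 0]], axis=1)  # surrogate co-states

net = RBFNetwork(centers=states[:50], width=0.5)
net.fit(states, costates)

query = np.array([[0.2, -0.4, 0.7]])                  # "online" guidance query
print(net.predict(query))
```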

  16. Reverse energy partitioning-An efficient algorithm for computing the density of states, partition functions, and free energy of solids.

    PubMed

    Do, Hainam; Wheatley, Richard J

    2016-08-28

    A robust and model free Monte Carlo simulation method is proposed to address the challenge in computing the classical density of states and partition function of solids. Starting from the minimum configurational energy, the algorithm partitions the entire energy range in the increasing energy direction ("upward") into subdivisions whose integrated density of states is known. When combined with the density of states computed from the "downward" energy partitioning approach [H. Do, J. D. Hirst, and R. J. Wheatley, J. Chem. Phys. 135, 174105 (2011)], the equilibrium thermodynamic properties can be evaluated at any temperature and in any phase. The method is illustrated in the context of the Lennard-Jones system and can readily be extended to other molecular systems and clusters for which the structures are known. PMID:27586913

  17. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids.

    PubMed

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied to autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros 433 are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382

  18. Density functional computational studies on the glucose and glycine Maillard reaction: Formation of the Amadori rearrangement products

    NASA Astrophysics Data System (ADS)

    Jalbout, Abraham F.; Roy, Amlan K.; Shipar, Abul Haider; Ahmed, M. Samsuddin

    Theoretical energy changes of various intermediates leading to the formation of the Amadori rearrangement products (ARPs) under different mechanistic assumptions have been calculated, by using open chain glucose (O-Glu)/closed chain glucose (A-Glu and B-Glu) and glycine (Gly) as a model for the Maillard reaction. Density functional theory (DFT) computations have been applied to the proposed mechanisms under different pH conditions. Thus, the possibility of the formation of different compounds and the electronic energy changes for different steps in the proposed mechanisms have been evaluated. B-Glu has been found to be more efficient than A-Glu, and A-Glu has been found more efficient than O-Glu in the reaction. The reaction under basic conditions is the most favorable for the formation of ARPs. Other reaction pathways have been computed and discussed in this work.

  19. Accuracy and computational efficiency of real-time subspace propagation schemes for the time-dependent density functional theory.

    PubMed

    Russakoff, Arthur; Li, Yonghui; He, Shenglai; Varga, Kalman

    2016-05-28

    Time-dependent Density Functional Theory (TDDFT) has become successful for its balance of economy and accuracy. However, the application of TDDFT to large systems or long time scales remains computationally prohibitively expensive. In this paper, we investigate the numerical stability and accuracy of two subspace propagation methods to solve the time-dependent Kohn-Sham equations with finite and periodic boundary conditions. The bases considered are the Lánczos basis and the adiabatic eigenbasis. The results are compared to a benchmark fourth-order Taylor expansion of the time propagator. Our results show that it is possible to use larger time steps with the subspace methods, leading to computational speedups by a factor of 2-3 over Taylor propagation. Accuracy is found to be maintained for certain energy regimes and small time scales. PMID:27250297

  20. Critical assessment of density functional theory for computing vibrational (hyper)polarizabilities

    NASA Astrophysics Data System (ADS)

    Zaleśny, R.; Bulik, I. W.; Mikołajczyk, M.; Bartkowiak, W.; Luis, J. M.; Kirtman, B.; Avramopoulos, A.; Papadopoulos, M. G.

    2012-12-01

    Despite the undisputed success of density functional theory (DFT) in various branches of chemistry and physics, the application of DFT for reliable predictions of the nonlinear optical properties of molecules was questioned a decade ago. As shown by Champagne et al. [1, 2, 3], most conventional DFT schemes were unable to qualitatively predict the response of conjugated oligomers to a static electric field. Long-range corrected (LRC) functionals, such as LC-BLYP or CAM-B3LYP, have been proposed to alleviate this deficiency. The reliability of LRC functionals for evaluating molecular (hyper)polarizabilities is studied for various groups of organic systems, with a special focus on vibrational corrections to the electric properties.

  1. Evaluation of cardiac function and myocardial viability with 16- and 64-slice multidetector computed tomography.

    PubMed

    Kopp, Andreas F; Heuschmid, Martin; Reimann, Anja; Kuettner, Axel; Beck, Thorsten; Ohmer, Martin; Burgstahler, Christoph; Brodoefel, Harald; Claussen, Claus D; Schroeder, Stephen

    2005-11-01

    Retrospectively ECG-gated MDCT shows a high correlation and acceptable agreement of left-ventricular functional parameters compared with MR imaging. Thus, in addition to the non-invasive evaluation of the coronary arteries, important additional information on left-ventricular functional parameters with clinical and prognostic relevance can be obtained from a single MDCT examination. For assessment of myocardial viability, low-dose CT late-enhancement scanning is feasible, and preliminary results look promising. CT late enhancement adds valuable diagnostic information on the haemodynamic significance of coronary stenoses and prior to interventional procedures. PMID:16479639

  2. The Secrets of a Functional Synapse – From a Computational and Experimental Viewpoint

    PubMed Central

    Linial, Michal

    2006-01-01

    Background Neuronal communication is tightly regulated in time and in space. The neuronal transmission takes place in the nerve terminal, at a specialized structure called the synapse. Following neuronal activation, an electrical signal triggers neurotransmitter (NT) release at the active zone. The process starts by the signal reaching the synapse followed by a fusion of the synaptic vesicle and diffusion of the released NT in the synaptic cleft; the NT then binds to the appropriate receptor, and as a result, a potential change at the target cell membrane is induced. The entire process lasts for only a fraction of a millisecond. An essential property of the synapse is its capacity to undergo biochemical and morphological changes, a phenomenon that is referred to as synaptic plasticity. Results In this survey, we consider the mammalian brain synapse as our model. We take a cell biological and a molecular perspective to present fundamental properties of the synapse:(i) the accurate and efficient delivery of organelles and material to and from the synapse; (ii) the coordination of gene expression that underlies a particular NT phenotype; (iii) the induction of local protein expression in a subset of stimulated synapses. We describe the computational facet and the formulation of the problem for each of these topics. Conclusion Predicting the behavior of a synapse under changing conditions must incorporate genomics and proteomics information with new approaches in computational biology. PMID:16723009

  3. Density-functional computation of ⁹³Nb NMR chemical shifts.

    PubMed

    Bühl, Michael; Wrackmeyer, Bernd

    2010-12-01

    93Nb chemical shifts of [NbX6](-) (X = Cl, F, CO), [NbXCl4](-) (X = O, S), Nb2(OMe)10, Cp*2Nb(κ2-BH4), (Cp*Nb)2(µ-B2H6)2, CpNb(CO)4, and Cp2NbH3 are computed at the GIAO (gauge-including atomic orbitals)-, BPW91- and B3LYP-, and CSGT (continuous set of gauge transformations)-CAM-B3LYP, -ωB97, and -ωB97X levels, using BP86-optimized or experimental (X-ray) geometries. Experimental chemical shifts are best reproduced at the GIAO-BPW91 level when δ(93Nb) values of inorganic complexes are referenced directly relative to [NbCl6](-) and those of organometallic species are first calculated relative to [Nb(CO)6](-). An inadvertent error in the reported δ(93Nb) values of cyclopentadiene borane complexes (H. Brunner et al., J. Organomet. Chem.1992, 436, 313) is corrected. Trends in the observed 93Nb NMR linewidths for anionic niobates [Nb(CO)5](3-), [Nb(CO)5H](2-), and [Nb(CO)5(NH3)](-) are rationalized in terms of computed electric field gradients at the metal. PMID:20552575

  4. Studying the Chemistry of Cationized Triacylglycerols Using Electrospray Ionization Mass Spectrometry and Density Functional Theory Computations

    NASA Astrophysics Data System (ADS)

    Grossert, J. Stuart; Herrera, Lisandra Cubero; Ramaley, Louis; Melanson, Jeremy E.

    2014-08-01

    Analysis of triacylglycerols (TAGs), found as complex mixtures in living organisms, is typically accomplished using liquid chromatography, often coupled to mass spectrometry. TAGs, weak bases not protonated using electrospray ionization, are usually ionized by adduct formation with a cation, including those present in the solvent (e.g., Na+). There are relatively few reports on the binding of TAGs with cations or on the mechanisms by which cationized TAGs fragment. This work examines binding efficiencies, determined by mass spectrometry and computations, for the complexation of TAGs to a range of cations (Na+, Li+, K+, Ag+, NH4 +). While most cations bind to oxygen, Ag+ binding to unsaturation in the acid side chains is significant. The importance of dimer formation, [2TAG + M]+ was demonstrated using several different types of mass spectrometers. From breakdown curves, it became apparent that two or three acid side chains must be attached to glycerol for strong cationization. Possible mechanisms for fragmentation of lithiated TAGs were modeled by computations on tripropionylglycerol. Viable pathways were found for losses of neutral acids and lithium salts of acids from different positions on the glycerol moiety. Novel lactone structures were proposed for the loss of a neutral acid from one position of the glycerol moiety. These were studied further using triple-stage mass spectrometry (MS3). These lactones can account for all the major product ions in the MS3 spectra in both this work and the literature, which should allow for new insights into the challenging analytical methods needed for naturally occurring TAGs.

  5. An evolutionary computational theory of prefrontal executive function in decision-making.

    PubMed

    Koechlin, Etienne

    2014-11-01

    The prefrontal cortex subserves executive control and decision-making, that is, the coordination and selection of thoughts and actions in the service of adaptive behaviour. We present here a computational theory describing the evolution of the prefrontal cortex from rodents to humans as gradually adding new inferential Bayesian capabilities for dealing with a computationally intractable decision problem: exploring and learning new behavioural strategies versus exploiting and adjusting previously learned ones through reinforcement learning (RL). We provide a principled account identifying three inferential steps optimizing this arbitration through the emergence of (i) factual reactive inferences in paralimbic prefrontal regions in rodents; (ii) factual proactive inferences in lateral prefrontal regions in primates and (iii) counterfactual reactive and proactive inferences in human frontopolar regions. The theory clarifies the integration of model-free and model-based RL through the notion of strategy creation. The theory also shows that counterfactual inferences in humans yield to the notion of hypothesis testing, a critical reasoning ability for approximating optimal adaptive processes and presumably endowing humans with a qualitative evolutionary advantage in adaptive behaviour. PMID:25267817

  6. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.

  7. Computational functional genomics based analysis of pain-relevant micro-RNAs.

    PubMed

    Lötsch, Jörn; Niederberger, Ellen; Ultsch, Alfred

    2015-11-01

    Micro-ribonucleic acids (miRNAs) play a role in pain, based on studies on models of neuropathic or inflammatory pain and clinical evidence. The present analysis made extensive use of computational biology, knowledge discovery methods, publicly available databases and data mining tools to merge results from genetic and miRNA research into an analysis of the systems biological roles of miRNAs in pain. We identified that about one-third of miRNAs detected through nociceptive research have been associated with a mere 18 regulated genes. Substituting the missing genetic information by computational data mining and based on comprehensive current empirical evidence of gene versus miRNA interactions, we have identified a total of 130 pain genes as being probably regulated by a total of 167 different miRNAs. Particularly pain-relevant roles of miRNAs include the control of gene expression at any level and regulation of interleukin-6-related pain entities. Among the miRNAs regulating pain genes are seven that are brain specific, hinting at their therapeutic utility for modulating central nervous mechanisms of pain. PMID:26385553

  8. Study of space shuttle orbiter system management computer function. Volume 2: Automated performance verification concepts

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The findings are presented of investigations on concepts and techniques in automated performance verification. The investigations were conducted to provide additional insight into the design methodology and to develop a consolidated technology base from which to analyze performance verification design approaches. Other topics discussed include data smoothing, function selection, flow diagrams, data storage, and shuttle hydraulic systems.

  9. Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Charon is a software toolkit that enables engineers to develop high-performing message-passing programs in a convenient and piecemeal fashion. Emphasis is on rapid program development and prototyping. In this report a detailed description of the functional design of the toolkit is presented. It is illustrated by the stepwise parallelization of two representative code examples.

  10. Computer Simulation for Calculating the Second-Order Correlation Function of Classical and Quantum Light

    ERIC Educational Resources Information Center

    Facao, M.; Lopes, A.; Silva, A. L.; Silva, P.

    2011-01-01

    We propose an undergraduate numerical project for simulating the results of the second-order correlation function as obtained by an intensity interference experiment for two kinds of light, namely bunched light with Gaussian or Lorentzian power density spectrum and antibunched light obtained from single-photon sources. While the algorithm for…
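
    A bare-bones version of the kind of numerical exercise described above is shown below: it estimates the normalized second-order correlation function g2(τ) = ⟨I(t)I(t+τ)⟩/⟨I⟩² from a simulated intensity record, modelling thermal (bunched) light as the squared modulus of a Gaussian-smoothed complex field so that g2(0) tends to 2. This is an independent sketch under those assumptions, not a reproduction of the original assignment, and the coherence time and record length are arbitrary.

```python
# Estimate g2(tau) from a simulated thermal-light intensity time series.
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation for lags 0..max_lag (in samples)."""
    mean_sq = intensity.mean() ** 2
    return np.array([np.mean(intensity[:len(intensity) - k] * intensity[k:]) / mean_sq
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
n, coherence_samples = 200_000, 20

# Chaotic (thermal) field: complex white noise smoothed over the coherence time.
field = rng.standard_normal(n) + 1j * rng.standard_normal(n)
kernel = np.exp(-np.arange(-3 * coherence_samples, 3 * coherence_samples + 1) ** 2
                / (2.0 * coherence_samples ** 2))
field = np.convolve(field, kernel, mode="same")
intensity = np.abs(field) ** 2

curve = g2(intensity, max_lag=100)
print(f"g2(0) = {curve[0]:.2f} (thermal light -> 2), g2(large tau) = {curve[-1]:.2f} (-> 1)")
```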

  11. Neural network models of cortical functions based on the computational properties of the cerebral cortex.

    PubMed

    Guigon, E; Grandguillaume, P; Otto, I; Boutkhil, L; Burnod, Y

    1994-01-01

    We describe a biologically plausible modelling framework based on the architectural and processing characteristics of the cerebral cortex. Its key feature is a multicellular processing unit (cortical column) reflecting the modular nature of cortical organization and function. In this framework, we describe a neural network model of the neuronal circuits of the cerebral cortex that learn different functions associated with different parts of the cortex: 1) visual integration for invariant pattern recognition, performed by a cooperation between temporal and parietal areas; 2) visual-to-motor transformation for 3D arm reaching movements, performed by parietal and motor areas; and 3) temporal integration and storage of sensorimotor programs, performed by networks linking the prefrontal cortex to associative sensory and motor areas. The architecture of the network is inspired by the features of the architecture of cortical pathways involved in these functions. We propose two rules which describe neural processing and plasticity in the network. The first rule (adaptive tuning of gating) is an analog of operant conditioning and makes it possible to learn to anticipate an action. The second rule (adaptive timing) is based on a bistable state of activity and makes it possible to learn temporally separate events forming a behavioral sequence. PMID:7787829

  12. A Unit on Slope Functions--Using a Computer in Mathematics Class.

    ERIC Educational Resources Information Center

    Lappan, Glenda; Winter, M. J.

    1982-01-01

    An introductory unit on slope, designed to give students a chance to discover some of the basic relationships between functions and slopes, is described. Programs written in BASIC for PET microcomputers were used. It is felt that students have the background to understand derivatives after experiences with this unit. (MP)

  13. A Mobile Computing Solution for Collecting Functional Analysis Data on a Pocket PC

    ERIC Educational Resources Information Center

    Jackson, James; Dixon, Mark R.

    2007-01-01

    The present paper provides a task analysis for creating a computerized data system using a Pocket PC and Microsoft Visual Basic. With Visual Basic software and any handheld device running the Windows Mobile operating system, this task analysis will allow behavior analysts to program and customize their own functional analysis data-collection…

  14. Computational modeling to predict mechanical function of joints: application to the lower leg with simulation of two cadaver studies.

    PubMed

    Liacouras, Peter C; Wayne, Jennifer S

    2007-12-01

    Computational models of musculoskeletal joints and limbs can provide useful information about joint mechanics. Validated models can be used as predictive devices for understanding joint function and serve as clinical tools for predicting the outcome of surgical procedures. A new computational modeling approach was developed for simulating joint kinematics that are dictated by bone/joint anatomy, ligamentous constraints, and applied loading. Three-dimensional computational models of the lower leg were created to illustrate the application of this new approach. Model development began with generating three-dimensional surfaces of each bone from CT images and then importing them into the three-dimensional solid modeling software SOLIDWORKS and the motion simulation package COSMOSMOTION. Through SOLIDWORKS and COSMOSMOTION, each bone surface file was filled to create a solid object and positioned, necessary components were added, and simulations were executed. Three-dimensional contacts were added to inhibit intersection of the bones during motion. Ligaments were represented as linear springs. Model predictions were then validated by comparison to two different cadaver studies, syndesmotic injury and repair and ankle inversion following ligament transection. The syndesmotic injury model was able to predict tibial rotation, fibular rotation, and anterior/posterior displacement. In the inversion simulation, calcaneofibular ligament extension and angles of inversion compared well. Some experimental data proved harder to simulate accurately, due to certain software limitations and lack of complete experimental data. Other parameters that could not be easily obtained experimentally can be predicted and analyzed by the computational simulations. In the syndesmotic injury study, the force generated in the tibionavicular and calcaneofibular ligaments was reduced with the insertion of the staple, indicating how this repair technique changes joint function. After transection of the calcaneofibular

  15. Computing the partition function and sampling for saturated secondary structures of RNA, with respect to the Turner energy model.

    PubMed

    Waldispühl, J; Clote, P

    2007-03-01

    An RNA secondary structure is saturated if no base pairs can be added without violating the definition of secondary structure. Here we describe a new algorithm, RNAsat, which, for a given RNA sequence a and an integral temperature T, computes the Boltzmann partition function Z_k(T)(a) = Σ_{S ∈ SAT_k(a)} exp(−E(S)/RT), where the sum is over all saturated secondary structures of a which have exactly k base pairs, R is the universal gas constant and E(S) denotes the free energy with respect to the Turner nearest-neighbor energy model. By dynamic programming, we compute Z_k(T) simultaneously for all values of k in time O(n^5) and space O(n^3). Additionally, RNAsat computes the partition function Q_k(T)(a) = Σ_{S ∈ S_k(a)} exp(−E(S)/RT), where the sum is over all secondary structures of a which have k base pairs; the latter computation is performed simultaneously for all values of k in O(n^4) time and O(n^3) space. Lastly, using the partition function Z_k(T) [resp. Q_k(T)] with stochastic backtracking, RNAsat rigorously samples the collection of saturated secondary structures [resp. secondary structures] having k base pairs; for Q_k(T) this provides a parametrized form of Sfold sampling (Ding and Lawrence, 2003). Using RNAsat, (i) we compute the ensemble free energy for saturated secondary structures having k base pairs, (ii) show cooperativity of the Turner model, (iii) demonstrate a temperature-dependent phase transition, (iv) illustrate the predictive advantage of RNAsat for precursor microRNA cel-mir-72 of C. elegans and for the pseudoknot PKB 00152 of Pseudobase (van Batenburg et al., 2001), (v) illustrate the RNA shapes (Giegerich et al., 2004) of sampled secondary structures [resp. saturated structures] having exactly k base pairs. A web server for RNAsat is under construction at bioinformatics.bc.edu/clotelab/RNAsat/. PMID:17456015
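
    The flavour of the dynamic programme behind such partition functions can be conveyed with a drastically simplified model. The sketch below is a toy McCaskill-style recursion, not RNAsat: every base pair contributes a fixed energy, there are no Turner nearest-neighbour terms, no saturation constraint, and no per-k resolution; the pair energy, temperature, and test sequence are assumptions.

```python
# Toy Boltzmann partition function over RNA secondary structures with a fixed
# energy per base pair (no Turner model, no saturation constraint).
import math

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
RT = 0.001987 * 310.15          # kcal/(mol K) * K, about 37 degrees C
E_BP = -1.0                     # toy free energy per base pair, kcal/mol
MIN_LOOP = 3                    # minimum hairpin loop length

def partition_function(seq):
    """Z[i][j] = sum over structures of seq[i..j] of exp(-E/RT), filled by increasing span."""
    n = len(seq)
    Z = [[1.0] * n for _ in range(n)]                     # empty regions contribute weight 1
    w = math.exp(-E_BP / RT)                              # Boltzmann weight of one base pair
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            total = Z[i + 1][j]                           # position i left unpaired
            for k in range(i + MIN_LOOP + 1, j + 1):      # i pairs with k
                if (seq[i], seq[k]) in PAIRS:
                    inside = Z[i + 1][k - 1] if i + 1 <= k - 1 else 1.0
                    outside = Z[k + 1][j] if k + 1 <= j else 1.0
                    total += w * inside * outside
            Z[i][j] = total
    return Z[0][n - 1]

print(f"Z = {partition_function('GGGAAAUCCC'):.3f}")
```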

  16. An analytic function expansion approach to computing perturbations from extreme-mass-ratio binaries with eccentric orbits

    NASA Astrophysics Data System (ADS)

    Evans, Charles; Forseth, Erik; Hopper, Seth

    2015-04-01

    Several groups (Fujita 2012; Shah, Friedman, and Whiting 2014; Shah 2014; Fujita 2014) have recently described results from computing gravitational perturbations and the self-force at extraordinarily high precision for binaries with circular orbits in the extreme-mass-ratio limit. These calculations have allowed comparison with post-Newtonian (PN) theory at the lowest order in the mass ratio and uncovered new terms and coefficients in the PN expansion for circular orbits. We describe a new means of extending this analytic function expansion approach to include binaries with eccentric orbits, thus allowing terms in the known 3PN order expansion to be verified and to discover new terms beyond 3PN.

  17. Linking impulse response functions to reaction time: Rod and cone reaction time data and a computational model

    PubMed Central

    Cao, Dingcai; Zele, Andrew J.; Pokorny, Joel

    2007-01-01

    Reaction times for incremental and decremental stimuli were measured at five suprathreshold contrasts for six retinal illuminance levels where rods alone (0.002–0.2 Trolands), rods and cones (2–20 Trolands) or cones alone (200 Trolands) mediated detection. A 4-primary photostimulator allowed independent control of rod or cone excitations. This is the first report of reaction times to isolated rod or cone stimuli at mesopic light levels under the same adaptation conditions. The main findings are: 1) For rods, responses to decrements were faster than increments, but cone reaction times were closely similar. 2) At light levels where both systems were functional, rod reaction times were ~20 ms longer. The data were fitted with a computational model that incorporates rod and cone impulse response functions and a stimulus-dependent neural sensory component that triggers a motor response. Rod and cone impulse response functions were derived from published psychophysical two-pulse threshold data and temporal modulation transfer functions. The model fits were accomplished with a limited number of free parameters: two global parameters to estimate the irreducible minimum reaction time for each receptor type, and one local parameter for each reaction time versus contrast function. This is the first model to provide a neural basis for the variation in reaction time with retinal illuminance, stimulus contrast, stimulus polarity, and receptor class modulated. PMID:17346763
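
    The general shape of such a model can be illustrated with a toy calculation: a receptor impulse response is convolved with a contrast step, the filtered response is accumulated until it crosses a criterion, and an irreducible minimum reaction time is added. The sketch below follows only that generic recipe under stated assumptions; the impulse-response form, criterion, and constants are illustrative and are not the fitted rod or cone parameters of the study.

```python
# Toy reaction-time model: filtered contrast step, accumulation to a criterion,
# plus an irreducible minimum reaction time.
import numpy as np

def impulse_response(t, n_stages=8, tau=0.01):
    """Cascaded low-pass (gamma-function) impulse response, a common psychophysical form."""
    h = (t / tau) ** (n_stages - 1) * np.exp(-t / tau)
    return h / h.sum()

def reaction_time(contrast, criterion=0.05, t_min=0.18, fs=1000.0, duration=1.0):
    """Time (s) at which the accumulated filtered response first exceeds the criterion."""
    t = np.arange(0.0, duration, 1.0 / fs)
    stimulus = np.full_like(t, contrast)                       # contrast step at t = 0
    response = np.convolve(stimulus, impulse_response(t))[:len(t)]
    accumulated = np.cumsum(response) / fs
    above = np.flatnonzero(accumulated >= criterion)
    if len(above) == 0:
        return np.nan                                          # criterion never reached
    return t_min + t[above[0]]

for c in (0.1, 0.2, 0.4, 0.8):
    print(f"contrast {c:.1f} -> RT = {reaction_time(c) * 1000:.0f} ms")
```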

  18. Effects of a standing and three dynamic workstations on computer task performance and cognitive function tests.

    PubMed

    Commissaris, Dianne A C M; Könemann, Reinier; Hiemstra-van Mastrigt, Suzanne; Burford, Eva-Maria; Botter, Juliane; Douwes, Marjolein; Ellegast, Rolf P

    2014-11-01

    Sedentary work entails health risks. Dynamic (or active) workstations, at which computer tasks can be combined with physical activity, may reduce the risks of sedentary behaviour. The aim of this study was to evaluate short term task performance while working on three dynamic workstations: a treadmill, an elliptical trainer, a bicycle ergometer and a conventional standing workstation. A standard sitting workstation served as control condition. Fifteen Dutch adults performed five standardised but common office tasks in an office-like laboratory setting. Both objective and perceived work performance were measured. With the exception of high precision mouse tasks, short term work performance was not affected by working on a dynamic or a standing workstation. The participant's perception of decreased performance might complicate the acceptance of dynamic workstations, although most participants indicate that they would use a dynamic workstation if available at the workplace. PMID:24951234

  19. Computational Evidence for the Catalytic Mechanism of Tyrosylprotein Sulfotransferases: A Density Functional Theory Investigation.

    PubMed

    Marforio, Tainah Dorina; Giacinto, Pietro; Bottoni, Andrea; Calvaresi, Matteo

    2015-07-21

    In this paper we have examined the mechanism of tyrosine O-sulfonation catalyzed by human TPST-2. Our computations, in agreement with Teramoto's hypothesis, indicate a concerted SN2-like reaction (with an activation barrier of 18.2 kcal mol(-1)) where the tyrosine oxygen is deprotonated by Glu(99) (base catalyst) and simultaneously attacks as a nucleophile the sulfuryl group. For the first time, using a quantum mechanics protocol of alanine scanning, we identified unequivocally the role of the amino acids involved in the catalysis. Arg(78) acts as a shuttle that "assists" the sulfuryl group moving from the 3'-phosphoadenosine-5'-phosphosulfate molecule to threonine and stabilizes the transition state (TS) by electrostatic interactions. The residue Lys(158) keeps close the residues participating in the overall H-bond network, while Ser(285), Thr(81), and Thr(82) stabilize the TS via strong hydrogen interactions and contribute to lower the activation barrier. PMID:26108987

  20. Narrowing the gap in understanding protein structure and function through computer simulations

    NASA Astrophysics Data System (ADS)

    Guo, Hong

    2012-06-01

    Quantum mechanical/molecular mechanical (QM/MM) free energy simulations are applied for understanding mechanisms of different enzyme-catalyzed reactions and for determining some of the key factors in transition state (TS) stabilization and substrate specificity, two of the most important properties of enzymes. It is demonstrated here based on the results of computer simulations on kumamolisin-As, a member of sedolisin family, and DIM-5 and SET7/9, two of protein lysine methyltransferases, that transition state stabilization may be achieved in part through the general acid/base mechanism or the binding of the substrate in the TS-like configuration. Moreover, it is shown that dynamic substrate assisted catalysis may play an important role in the substrate specificity of enzymes.