Non-parametric Estimation of a Survival Function with Two-stage Design Studies.
Li, Gang; Tseng, Chi-Hong
2008-06-01
The two-stage design is popular in epidemiologic studies and clinical trials due to its cost effectiveness. Typically, the first-stage sample contains cheaper and possibly biased information, while the second-stage validation sample consists of a subset of subjects with accurate and complete information. In this paper, we study estimation of a survival function with right-censored survival data from a two-stage design. A non-parametric estimator is derived by combining data from both stages. We also study its large-sample properties and derive pointwise and simultaneous confidence intervals for the survival function. The proposed estimator effectively reduces the variance and finite-sample bias of the Kaplan-Meier estimator based solely on the second-stage validation sample. Finally, we apply our method to a real data set from a medical device post-marketing surveillance study.
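The validation-sample baseline that the proposed estimator improves on is the ordinary Kaplan-Meier estimator. A textbook sketch of that baseline (not the authors' two-stage estimator; the toy data are invented):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.

    times  : observed times (event or censoring)
    events : 1 if the time is an observed event, 0 if right-censored
    Returns a list of (t, S(t)) pairs at distinct event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

# Toy right-censored data: 5 subjects; event indicator 0 marks censoring.
times = [2, 3, 3, 5, 7]
events = [1, 1, 0, 1, 0]
km = kaplan_meier(times, events)
```

With only the small validation sample available, each censoring or event drops the risk set quickly, which is the source of the variance the two-stage estimator reduces.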
Structuring feature space: a non-parametric method for volumetric transfer function generation.
Maciejewski, Ross; Woo, Insoo; Chen, Wei; Ebert, David S
2009-01-01
The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial
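The density-estimation step described above can be sketched generically: smooth the 2D feature-space histogram with a Gaussian kernel, then label each cell with a coarse density bin. This is a minimal illustration of the idea, not the paper's implementation; the grid, bandwidth, and binning rule are invented for the example:

```python
import math

def smooth_histogram(hist, bandwidth=1.0):
    """Gaussian-kernel smoothing of a 2D histogram (a density estimate up
    to normalization): each cell receives kernel-weighted mass from every
    populated cell."""
    rows, cols = len(hist), len(hist[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if hist[i][j] == 0:
                continue
            for r in range(rows):
                for c in range(cols):
                    d2 = (r - i) ** 2 + (c - j) ** 2
                    out[r][c] += hist[i][j] * math.exp(-d2 / (2 * bandwidth ** 2))
    return out

def density_bins(density, n_bins=3):
    """Label each cell with a coarse density bin (0 = lowest), mimicking
    the coloring-by-estimated-density step."""
    flat = sorted(v for row in density for v in row)
    cuts = [flat[len(flat) * k // n_bins] for k in range(1, n_bins)]
    return [[sum(v > c for c in cuts) for v in row] for row in density]

# Toy feature-space histogram (value vs. gradient) with two voxel clusters.
hist = [
    [5, 4, 0, 0, 0],
    [4, 3, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 2, 3],
    [0, 0, 0, 3, 4],
]
density = smooth_histogram(hist)
labels = density_bins(density)
```

Growing or shrinking a binned region then amounts to raising or lowering the density cut for that bin.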
A Non-parametric Approach to Constrain the Transfer Function in Reverberation Mapping
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min; Bai, Jin-Ming
2016-11-01
Broad emission lines of active galactic nuclei stem from a spatially extended region (broad-line region, BLR) that is composed of discrete clouds and photoionized by the central ionizing continuum. The temporal behaviors of these emission lines are blurred echoes of continuum variations (i.e., reverberation mapping, RM) and directly reflect the structures and kinematic information of BLRs through the so-called transfer function (also known as the velocity-delay map). Based on the previous works of Rybicki and Press and Zu et al., we develop an extended, non-parametric approach to determine the transfer function for RM data, in which the transfer function is expressed as a sum of a family of relatively displaced Gaussian response functions. Therefore, arbitrary shapes of transfer functions associated with complicated BLR geometry can be seamlessly included, enabling us to relax the presumption of a specified transfer function frequently adopted in previous studies and to let it be determined by observation data. We formulate our approach in a previously well-established framework that incorporates the statistical modeling of continuum variations as a damped random walk process and takes into account long-term secular variations which are irrelevant to RM signals. The application to RM data shows the fidelity of our approach.
Non-parametric estimation of gap time survival functions for ordered multivariate failure time data.
Schaubel, Douglas E; Cai, Jianwen
2004-06-30
Times between sequentially ordered events (gap times) are often of interest in biomedical studies. For example, in a cancer study, the gap times from incidence-to-remission and remission-to-recurrence may be examined. Such data are usually subject to right censoring, and within-subject failure times are generally not independent. Statistical challenges in the analysis of the second and subsequent gap times include induced dependent censoring and non-identifiability of the marginal distributions. We propose a non-parametric method for constructing one-sample estimators of conditional gap-time specific survival functions. The estimators are uniformly consistent and, upon standardization, converge weakly to a zero-mean Gaussian process, with a covariance function which can be consistently estimated. Simulation studies reveal that the asymptotic approximations are appropriate for finite samples. Methods for confidence bands are provided. The proposed methods are illustrated on a renal failure data set, where the probabilities of transplant wait-listing and kidney transplantation are of interest.
Cadarso-Suárez, Carmen; Roca-Pardiñas, Javier; Figueiras, Adolfo; González-Manteiga, Wenceslao
2005-04-30
The generalized additive model (GAM) is a powerful and widely used tool that allows researchers to fit, non-parametrically, the effect of continuous predictors on a transformation of the mean response variable. Such a transformation is given by a so-called link function, and in GAMs this link function is assumed to be known. Nevertheless, if an incorrect choice is made for the link, the resulting GAM is misspecified and the results obtained may be misleading. In this paper, we propose a modified version of the local scoring algorithm that allows for the non-parametric estimation of the link function, by using local linear kernel smoothers. To better understand the effect that each covariate produces on the outcome, results are expressed in terms of non-parametric odds ratio (OR) curves. Bootstrap techniques were used to correct the bias in the OR estimation and to construct pointwise confidence intervals. A simulation study was carried out to assess the behaviour of the resulting estimates. The proposed methodology was illustrated using data from the AIDS Register of Galicia (NW Spain), with a view to assessing the effect of the CD4 lymphocyte count on the probability of being AIDS-diagnosed via tuberculosis (TB). This application shows how the link's flexibility makes it possible to obtain OR curve estimates that are less sensitive to the presence of outliers and unusual values that are often present in the extremes of the covariate distributions.
Scaling of preferential flow in biopores by parametric or non parametric transfer functions
NASA Astrophysics Data System (ADS)
Zehe, E.; Hartmann, N.; Klaus, J.; Palm, J.; Schroeder, B.
2009-04-01
finally assign the measured hydraulic capacities to these pores. By combining this population of macropores with observed data on soil hydraulic properties we obtain a virtual reality. Flow and transport is simulated for different rainfall forcings comparing two models, Hydrus 3d and Catflow. The simulated cumulative travel depth distributions for different forcings will be linked to the cumulative depth distribution of connected flow paths. The latter describes the fraction of connected paths (where flow resistance is always below a selected threshold) that link the surface to a certain critical depth. Systematic variation of the average number of macropores and their depth distributions will show whether a clear link between the simulated travel depth distributions and the depth distribution of connected paths may be identified. The third essential step is to derive a non-parametric transfer function that predicts travel depth distributions of tracers, and in the long term pesticides, based on easy-to-assess subsurface characteristics (mainly density and depth distribution of worm burrows, and soil matrix properties), initial conditions and rainfall forcing. Such a transfer function is independent of scale, as long as we stay in the same ensemble, i.e. the worm population and soil properties stay the same. Shipitalo, M.J. and Butt, K.R. (1999): Occupancy and geometrical properties of Lumbricus terrestris L. burrows affecting infiltration. Pedobiologia 43:782-794. Zehe, E. and Fluehler, H. (2001b): Slope scale distribution of flow patterns in soil profiles. J. Hydrol. 247:116-132.
Gao, Feng; Manatunga, Amita K; Chen, Shande
2007-02-20
Often in biomedical and epidemiologic studies, estimating the hazard function is of interest. Breslow's estimator is commonly used for estimating the integrated baseline hazard, but this estimator requires the functional form of covariate effects to be correctly specified. It is generally difficult to identify the true functional form of covariate effects in the presence of time-dependent covariates. To provide a complementary method to the traditional proportional hazards model, we propose a tree-type method which enables simultaneous estimation of both the baseline hazard function and the effects of time-dependent covariates. Our interest is focused on exploring potential data structures rather than on formal hypothesis testing. The proposed method approximates the baseline hazard and covariate effects with step functions. The jump points in time and in covariate space are searched via an algorithm based on the improvement of the full log-likelihood function. In contrast to most other estimation methods, the proposed method estimates the hazard function rather than the integrated hazard. The method is applied to model the risk of withdrawal in a clinical trial that evaluates an antidepressant treatment in preventing the development of clinical depression. Finally, the performance of the method is evaluated by several simulation studies.
Kulmala, A; Tenhunen, M
2012-11-07
The signal of a dosimetric detector generally depends on the shape and size of the detector's sensitive volume. To optimize the performance of the detector and the reliability of the output signal, the effect of the detector size should be corrected for or, at least, taken into account. The response of the detector can be modelled using the convolution theorem, which connects the system input (actual dose), output (measured result) and the effect of the detector (response function) by a linear convolution operator. We have developed a super-resolution, non-parametric deconvolution method for determining the radial response function of a cylindrically symmetric ionization chamber. We have demonstrated that the presented deconvolution method is able to determine the radial response of the Roos parallel-plate ionization chamber to better than 0.5 mm correspondence with the physical measures of the chamber. In addition, the performance of the method was demonstrated by the excellent agreement between the output factors of the stereotactic conical collimators (4-20 mm diameter) measured by the Roos chamber, where the detector size is larger than the measured field, and by the reference detector (diode). The presented deconvolution method has potential for providing reference data for more accurate physical models of the ionization chamber, as well as for improving and enhancing the performance of detectors in specific dosimetric problems.
[Non-parametric estimation of survival function for recurrent events data].
González, Juan R; Peña, Edsel A
2004-01-01
Recurrent event data in survival studies demand a methodology different from that of standard survival analysis. The main problem in making inference in these kinds of studies is that the observations may not be independent; biased and inefficient estimators can be obtained if this fact is not taken into account. In the independent case, the interoccurrence survival function can be estimated by a generalization of the product-limit estimator (Peña et al. (2001)). However, if data are correlated, other models should be used, such as frailty models or the estimator proposed by Wang and Chang (1999), which takes into account whether or not interoccurrence times are correlated. The aim of this paper is to illustrate these approaches using two real data sets.
Fujita, André; Takahashi, Daniel Y; Patriota, Alexandre G; Sato, João R
2014-12-10
Statistical inference on functional magnetic resonance imaging (fMRI) data is an important tool in neuroscience investigation. One major hypothesis in neuroscience is that the presence or absence of a psychiatric disorder can be explained by differences in how neurons cluster in the brain. Therefore, it is of interest to verify whether the properties of the clusters change between groups of patients and controls. The usual method to show group differences in brain imaging is to carry out a voxel-wise univariate analysis for a difference between the mean group responses using an appropriate test and to assemble the resulting 'significantly different voxels' into clusters, testing again at cluster level. In this approach, of course, the primary voxel-level test is blind to any cluster structure. Direct assessments of differences between groups at the cluster level seem to be missing in brain imaging. For this reason, we introduce a novel non-parametric statistical test called analysis of cluster structure variability (ANOCVA), which statistically tests whether two or more populations are equally clustered. The proposed method allows us to compare the clustering structure of multiple groups simultaneously and also to identify features that contribute to the differential clustering. We illustrate the performance of ANOCVA through simulations and an application to an fMRI dataset composed of children with attention deficit hyperactivity disorder (ADHD) and controls. Results show that there are several differences in the clustering structure of the brain between the two groups. Furthermore, we identify some brain regions previously not described as being involved in ADHD pathophysiology, generating new hypotheses to be tested. The proposed method is general enough to be applied to other types of datasets, not limited to fMRI, where comparison of clustering structures is of interest.
NON-PARAMETRIC ESTIMATION UNDER STRONG DEPENDENCE
Zhao, Zhibiao; Zhang, Yiyun; Li, Runze
2014-01-01
We study non-parametric regression function estimation for models with strong dependence. Compared with short-range dependent models, long-range dependent models often result in slower convergence rates. We propose a simple differencing-sequence based non-parametric estimator that achieves the same convergence rate as if the data were independent. Simulation studies show that the proposed method has good finite sample performance.
Non-parametric morphologies of mergers in the Illustris simulation
NASA Astrophysics Data System (ADS)
Bignone, L. A.; Tissera, P. B.; Sillero, E.; Pedrosa, S. E.; Pellizza, L. J.; Lambas, D. G.
2017-02-01
We study non-parametric morphologies of merger events in a cosmological context, using the Illustris project. We produce mock g-band images comparable to observational surveys from the publicly available Illustris simulation idealized mock images at z = 0. We then measure non-parametric indicators: asymmetry, Gini, M20, clumpiness, and concentration for a set of galaxies with M* > 10^10 M⊙. We correlate these automated statistics with the recent merger history of galaxies and with the presence of close companions. Our main contribution is to assess, in a cosmological framework, the empirically derived non-parametric demarcation line and the average time-scales used to determine the merger rate observationally. We find that 98 per cent of galaxies above the demarcation line have a close companion or have experienced a recent merger event. On average, merger signatures obtained from the G-M20 criterion anti-correlate clearly with the time elapsed since the last merger event. We also find that the asymmetry correlates with galaxy pair separation and relative velocity, exhibiting the largest enhancements for systems with pair separations d < 50 h^-1 kpc and relative velocities V < 350 km s^-1. We find that G-M20 is most sensitive to recent mergers (∼0.14 Gyr) and to ongoing mergers with stellar mass ratios greater than 0.1. For this indicator, we compute an average merger observability time-scale of ∼0.2 Gyr, in agreement with previous results, and demonstrate that the morphologically derived merger rate recovers the intrinsic total merger rate of the simulation as well as the merger rate as a function of stellar mass.
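Among the indicators listed, the Gini coefficient has a compact closed form over sorted pixel fluxes, commonly written in the morphology literature as G = Σ (2i - n - 1)|x_i| / (x̄ n(n-1)). A minimal sketch (generic, not the paper's pipeline; the fluxes are toy values):

```python
def gini(fluxes):
    """Gini coefficient of a set of pixel fluxes: 0 when flux is spread
    evenly over pixels, approaching 1 when concentrated in few pixels."""
    x = sorted(abs(f) for f in fluxes)  # sort |flux| ascending
    n = len(x)
    mean = sum(x) / n
    return sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1)) / (
        mean * n * (n - 1))

g_uniform = gini([1, 1, 1, 1])       # perfectly even light distribution
g_point = gini([0, 0, 0, 1])         # all light in one pixel
g_mixed = gini([1, 2, 3, 4])
```

Merger remnants tend to have disturbed, concentrated light, which is why Gini enters the G-M20 demarcation criterion.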
Marginally specified priors for non-parametric Bayesian estimation.
Kessler, David C; Hoff, Peter D; Dunson, David B
2015-01-01
Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables.
NASA Astrophysics Data System (ADS)
Conroy, Charlie; van Dokkum, Pieter G.; Villaume, Alexa
2017-03-01
It is now well-established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data quality is high (S/N ≳ 300 Å‑1) and cover a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well-approximated by a power law with an index of ‑2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes.
Jung, S H; Su, J Q
1995-02-15
We propose a non-parametric method to calculate a confidence interval for the difference or ratio of two median failure times for paired observations with censoring. The new method is simple to calculate, does not involve non-parametric density estimates, and is valid asymptotically even when the two underlying distribution functions differ in shape. The method also allows missing observations. We report numerical studies to examine the performance of the new method for practical sample sizes.
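The paper's interval is analytic; for comparison, a percentile-bootstrap interval for the same quantity can be sketched. This is a generic alternative, not the authors' method, and it ignores censoring entirely (the paired data below are invented):

```python
import random
import statistics

def median_diff_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the difference of
    medians of paired samples x and y. Resampling is done on pairs, so
    within-pair dependence is preserved."""
    rng = random.Random(seed)
    n = len(x)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample pairs
        diffs.append(statistics.median(y[i] for i in idx)
                     - statistics.median(x[i] for i in idx))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    point = statistics.median(y) - statistics.median(x)
    return point, (lo, hi)

# Paired failure times; the second member of each pair fails ~5 units later.
x = [3.1, 4.0, 5.2, 6.3, 7.4, 8.0, 9.1, 10.2, 11.0, 12.5]
y = [xi + d for xi, d in zip(x, [4, 5, 5, 5, 5, 5, 5, 5, 5, 6])]
point, (lo, hi) = median_diff_ci(x, y)
```

Unlike the bootstrap, the method in the paper needs no resampling loop and remains valid under censoring, which is its practical appeal.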
Lottery spending: a non-parametric analysis.
Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody
2015-01-01
We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.
Non-parametric transformation for data correlation and integration: From theory to practice
Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon
1997-08-01
The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, as is the case with traditional multiple regression. The transformations are very general, computationally efficient, and can easily handle mixed data types, for example continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation is illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.
Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.
Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J
2010-01-01
We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subjective to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95%, and average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.
Approximately Integrable Linear Statistical Models in Non-Parametric Estimation
1990-08-01
by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models ... models related to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the ...
Non-parametric transient classification using adaptive wavelets
NASA Astrophysics Data System (ADS)
Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.
2015-11-01
Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The classifier is simple to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric, so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier on the Supernova Photometric Classification Challenge, classifying supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our 'model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
Wey, Andrew; Connett, John; Rudser, Kyle
2015-07-01
For estimating conditional survival functions, non-parametric estimators can be preferred to parametric and semi-parametric estimators due to relaxed assumptions that enable robust estimation. Yet, even when misspecified, parametric and semi-parametric estimators can possess better operating characteristics in small sample sizes due to smaller variance than non-parametric estimators. Fundamentally, this is a bias-variance trade-off situation in that the sample size is not large enough to take advantage of the low bias of non-parametric estimation. Stacked survival models estimate an optimally weighted combination of models that can span parametric, semi-parametric, and non-parametric models by minimizing prediction error. An extensive simulation study demonstrates that stacked survival models consistently perform well across a wide range of scenarios by adaptively balancing the strengths and weaknesses of individual candidate survival models. In addition, stacked survival models perform as well as or better than the model selected through cross-validation. Finally, stacked survival models are applied to a well-known German breast cancer study.
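The stacking idea, an optimally weighted combination of candidate models chosen by minimizing prediction error, can be illustrated with two plain predictors. This is a deliberately simplified sketch: a single weight, squared error instead of a survival loss, and no cross-validation; the predictions and truths are toy numbers:

```python
def stack_two(pred_a, pred_b, truth, grid=101):
    """Choose the weight w in [0, 1] minimizing the squared error of the
    combination w*pred_a + (1-w)*pred_b over a grid (the real method uses
    held-out predictions; here a single split stands in for brevity)."""
    best_w, best_err = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        err = sum((w * a + (1 - w) * b - t) ** 2
                  for a, b, t in zip(pred_a, pred_b, truth))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# One biased low-variance "parametric" predictor, one unbiased noisier one.
truth = [1.0, 2.0, 3.0, 4.0, 5.0]
pred_a = [1.5, 2.5, 3.5, 4.5, 5.5]   # constant bias +0.5
pred_b = [0.6, 2.6, 2.5, 4.4, 4.8]   # unbiased but noisy
w = stack_two(pred_a, pred_b, truth)
```

The selected interior weight shows how stacking trades the parametric model's bias against the non-parametric model's variance rather than picking a single winner.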
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
A Non-parametric Bayesian Approach for Predicting RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Sato, Kengo; Hamada, Michiaki; Mituyama, Toutai; Asai, Kiyoshi; Sakakibara, Yasubumi
Since many functional RNAs form stable secondary structures which are related to their functions, RNA secondary structure prediction is a crucial problem in bioinformatics. We propose a novel model for generating RNA secondary structures based on a non-parametric Bayesian approach, called hierarchical Dirichlet processes for stochastic context-free grammars (HDP-SCFGs). Here non-parametric means that some meta-parameters, such as the number of non-terminal symbols and production rules, do not have to be fixed. Instead their distributions are inferred in order to be adapted (in the Bayesian sense) to the training sequences provided. The results of our RNA secondary structure predictions show that HDP-SCFGs are more accurate than the MFE-based and other generative models.
kdetrees: non-parametric estimation of phylogenetic tree distributions
Weyenberg, Grady; Huggins, Peter M.; Schardl, Christopher L.; Howe, Daniel K.; Yoshida, Ruriko
2014-01-01
Motivation: Although the majority of gene histories found in a clade of organisms are expected to be generated by a common process (e.g. the coalescent process), it is well known that numerous other coexisting processes (e.g. horizontal gene transfers, gene duplication and subsequent neofunctionalization) will cause some genes to exhibit a history distinct from those of the majority of genes. Such ‘outlying’ gene trees are considered to be biologically interesting, and identifying these genes has become an important problem in phylogenetics. Results: We propose and implement kdetrees, a non-parametric method for estimating distributions of phylogenetic trees, with the goal of identifying trees that are significantly different from the rest of the trees in the sample. Our method compares favorably with a similar recently published method, featuring an improvement of one polynomial order of computational complexity (to quadratic in the number of trees analyzed), with simulation studies suggesting only a small penalty to classification accuracy. Application of kdetrees to a set of Apicomplexa genes identified several unreliable sequence alignments that had escaped previous detection, as well as a gene independently reported as a possible case of horizontal gene transfer. We also analyze a set of genes from Epichloë species, fungi symbiotic with grasses, successfully identifying a contrived instance of paralogy. Availability and implementation: Our method for estimating tree distributions and identifying outlying trees is implemented as the R package kdetrees and is available for download from CRAN. Contact: ruriko.yoshida@uky.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24764459
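The core idea of kdetrees — kernel density scoring of trees from their pairwise distances, with low-density trees flagged as outliers — can be sketched generically. The sketch below substitutes Euclidean vectors for trees and a Gaussian kernel for the tree-distance kernel, so it is an analogy to, not a reimplementation of, the R package.

```python
import numpy as np

rng = np.random.default_rng(1)

def kde_outlier_scores(dist, bandwidth):
    # Gaussian-kernel density score per item from its pairwise distances;
    # the diagonal (self-distance 0) is zeroed for a leave-one-out estimate
    k = np.exp(-(dist / bandwidth) ** 2 / 2.0)
    np.fill_diagonal(k, 0.0)
    return k.sum(axis=1) / (dist.shape[0] - 1)

# 30 "typical" items tightly clustered, plus one planted outlier (index 30)
pts = np.vstack([rng.normal(0.0, 0.5, size=(30, 4)), np.full((1, 4), 5.0)])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

scores = kde_outlier_scores(dist, bandwidth=1.0)
outlier = int(np.argmin(scores))   # the lowest-density item
```

In kdetrees the distance matrix would come from a tree metric; everything downstream of the distance matrix works the same way.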
Bayesian non-parametric modelling of Higgs pair production
NASA Astrophysics Data System (ADS)
Scarpa, Bruno; Dorigo, Tommaso
2017-03-01
Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as prior for the random effects in a logit model which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion of P-splines into the simple model to relate explanatory variables with the response, and the use of Bayesian additive regression trees (BART) to describe the atoms in the Dirichlet process.
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and the range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01, parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
Non-Parametric Collision Probability for Low-Velocity Encounters
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2007-01-01
An implicit, but not necessarily obvious, assumption in all of the current techniques for assessing satellite collision probability is that the relative position uncertainty is perfectly correlated in time. If there is any mis-modeling of the dynamics in the propagation of the relative position error covariance matrix, time-wise de-correlation of the uncertainty will increase the probability of collision over a given time interval. The paper gives some examples that illustrate this point. This paper argues that, for the present, Monte Carlo analysis is the best available tool for handling low-velocity encounters, and suggests some techniques for addressing the issues just described. One proposal is for the use of a non-parametric technique that is widely used in actuarial and medical studies. The other suggestion is that accurate process noise models be used in the Monte Carlo trials to which the non-parametric estimate is applied. A further contribution of this paper is a description of how the time-wise decorrelation of uncertainty increases the probability of collision.
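The mechanism described above — process noise de-correlating the relative-position uncertainty and raising collision probability over an interval — can be illustrated with a toy Monte Carlo. This is not an orbital propagator and all numbers are invented; it only shows the Monte Carlo counting machinery and where process noise enters the trials.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_collision_probability(n_trials, steps, radius, process_sigma):
    # Each trial propagates a toy relative position as a random walk; a
    # trial counts as a collision if the miss distance ever drops below
    # `radius`. The process noise stands in for dynamics mis-modeling.
    hits = 0
    for _ in range(n_trials):
        pos = np.array([1.0, 0.0, 0.0])   # initial relative position
        for _ in range(steps):
            pos = pos + rng.normal(0.0, process_sigma, size=3)
            if np.linalg.norm(pos) < radius:
                hits += 1
                break
    return hits / n_trials

# stronger process noise (faster de-correlation) should not lower the
# collision probability over the same encounter interval
p_lo = mc_collision_probability(500, 50, radius=0.2, process_sigma=0.01)
p_hi = mc_collision_probability(500, 50, radius=0.2, process_sigma=0.10)
```

The non-parametric actuarial estimate the paper proposes would then be applied to the trial-by-trial first-passage results; that step is not reproduced here.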
Non-Parametric Bayesian Registration (NParBR) of Body Tumors in DCE-MRI Data.
Pilutti, David; Strumia, Maddalena; Buchert, Martin; Hadjidemetriou, Stathis
2016-04-01
The identification of tumors in the internal organs of chest, abdomen, and pelvis anatomic regions can be performed with the analysis of Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) data. The contrast agent is accumulated differently by pathologic and healthy tissues and that results in a temporally varying contrast in an image series. The internal organs are also subject to potentially extensive movements mainly due to breathing, heart beat, and peristalsis. This contributes to making the analysis of DCE-MRI datasets challenging as well as time consuming. To address this problem we propose a novel pairwise non-rigid registration method with a Non-Parametric Bayesian Registration (NParBR) formulation. The NParBR method uses a Bayesian formulation that assumes a model for the effect of the distortion on the joint intensity statistics, a non-parametric prior for the restored statistics, and also applies a spatial regularization for the estimated registration with Gaussian filtering. A minimally biased intra-dataset atlas is computed for each dataset and used as reference for the registration of the time series. The time series registration method has been tested with 20 datasets of liver, lungs, intestines, and prostate. It has been compared to the B-Splines and to the SyN methods with results that demonstrate that the proposed method improves both accuracy and efficiency.
Binary Classifier Calibration Using a Bayesian Non-Parametric Approach.
Naeini, Mahdi Pakdaman; Cooper, Gregory F; Hauskrecht, Milos
Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in data mining. This paper presents two new non-parametric methods for calibrating outputs of binary classification models: a method based on Bayes optimal selection and a method based on Bayesian model averaging. The advantage of these methods is that they are independent of the algorithm used to learn a predictive model, and they can be applied in a post-processing step, after the model is learned. This makes them applicable to a wide variety of machine learning models and methods. These calibration methods, as well as other methods, are tested on a variety of datasets in terms of both discrimination and calibration performance. The results show the methods either outperform or are comparable in performance to the state-of-the-art calibration methods.
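As a concrete baseline for the non-parametric calibration task described above, the sketch below implements simple histogram binning: each raw score is replaced by the empirical positive rate of its bin. The paper's Bayesian methods select or average over many such binning models; this single-model core, run on invented data, only illustrates the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

def histogram_binning(scores, labels, n_bins=10):
    # non-parametric calibration: map each raw score to the empirical
    # positive rate of its bin (0.5 is a placeholder for empty bins)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    rate = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.5
                     for b in range(n_bins)])
    return lambda s: rate[np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)]

# miscalibrated classifier: it outputs s, but the true P(y=1 | s) is s**2
s = rng.uniform(0.0, 1.0, 5000)
y = (rng.uniform(0.0, 1.0, 5000) < s ** 2).astype(int)

calibrate = histogram_binning(s, y)
raw_err = np.abs(s - s ** 2).mean()             # calibration error before
cal_err = np.abs(calibrate(s) - s ** 2).mean()  # calibration error after
```

Because the returned calibrator is just a lookup table, it is independent of the underlying classifier, which is the post-processing property the abstract emphasizes.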
Non-parametric estimation of spatial variation in relative risk.
Kelsall, J E; Diggle, P J
We consider the problem of estimating the spatial variation in relative risks of two diseases, say, over a geographical region. Using an underlying Poisson point process model, we approach the problem as one of density ratio estimation implemented with a non-parametric kernel smoothing method. In order to assess the significance of any local peaks or troughs in the estimated risk surface, we introduce pointwise tolerance contours which can enhance a greyscale image plot of the estimate. We also propose a Monte Carlo test of the null hypothesis of constant risk over the whole region, to avoid possible over-interpretation of the estimated risk surface. We illustrate the capabilities of the methodology with two epidemiological examples.
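A minimal version of the kernel density-ratio estimator described above can be sketched as follows: estimate case and control densities with a 2-D Gaussian kernel and take the log of their ratio as the log relative-risk surface. The data are synthetic, with a planted excess-risk area; the tolerance contours and Monte Carlo test are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def kde2d(points, at, h):
    # 2-D Gaussian kernel density estimate evaluated at locations `at`
    d2 = ((at[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h ** 2)).sum(1) / (len(points) * 2 * np.pi * h ** 2)

controls = rng.normal(0.0, 1.0, size=(2000, 2))            # background population
cases = np.vstack([rng.normal(0.0, 1.0, size=(1000, 2)),   # background cases ...
                   rng.normal([2.0, 2.0], 0.3, size=(1000, 2))])  # ... plus a hotspot

def log_relative_risk(p, h=0.5):
    # log of the case/control kernel density ratio at point p
    p = np.asarray([p], dtype=float)
    return float(np.log(kde2d(cases, p, h)) - np.log(kde2d(controls, p, h)))

rr_hot = log_relative_risk([2.0, 2.0])    # inside the planted excess-risk area
rr_bg = log_relative_risk([-1.0, -1.0])   # plain background
```

Evaluating the log ratio on a grid instead of at single points gives the risk surface that the pointwise tolerance contours would then annotate.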
A Bayesian non-parametric Potts model with application to pre-surgical FMRI data.
Johnson, Timothy D; Liu, Zhuqing; Bartsch, Andreas J; Nichols, Thomas E
2013-08-01
The Potts model has enjoyed much success as a prior model for image segmentation. Given the individual classes in the model, the data are typically modeled as Gaussian random variates or as random variates from some other parametric distribution. In this article, we present a non-parametric Potts model and apply it to a functional magnetic resonance imaging study for the pre-surgical assessment of peritumoral brain activation. In our model, we assume that the Z-score image from a patient can be segmented into activated, deactivated, and null classes, or states. Conditional on the class, or state, the Z-scores are assumed to come from some generic distribution which we model non-parametrically using a mixture of Dirichlet process priors within the Bayesian framework. The posterior distribution of the model parameters is estimated with a Markov chain Monte Carlo algorithm, and Bayesian decision theory is used to make the final classifications. Our Potts prior model includes two parameters, the standard spatial regularization parameter and a parameter that can be interpreted as the a priori probability that each voxel belongs to the null, or background, state, conditional on the lack of spatial regularization. We assume that both of these parameters are unknown, and jointly estimate them along with other model parameters. We show through simulation studies that our model performs on par, in terms of posterior expected loss, with parametric Potts models when the parametric model is correctly specified and outperforms parametric models when the parametric model is misspecified.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
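The residual-bootstrap construction of prediction intervals can be sketched around a simple non-parametric smoother (Nadaraya-Watson here, as a stand-in for whatever regression model is used); observations falling outside the interval are then flagged as anomalous. The data, bandwidth and 90% level are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def nw_smooth(x_train, y_train, x_eval, h):
    # Nadaraya-Watson kernel regression, a simple non-parametric smoother
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# noisy data from an unknown smooth function
x = np.sort(rng.uniform(0.0, 6.0, 200))
y = np.sin(x) + rng.normal(0.0, 0.3, 200)
h = 0.3

fit = nw_smooth(x, y, x, h)
resid = y - fit

# residual bootstrap: refit on pseudo-data, add resampled noise, and take
# empirical quantiles as a 90% prediction interval
B = 200
preds = np.empty((B, x.size))
for b in range(B):
    y_b = fit + rng.choice(resid, size=x.size, replace=True)
    preds[b] = nw_smooth(x, y_b, x, h) + rng.choice(resid, size=x.size, replace=True)
lo, hi = np.quantile(preds, [0.05, 0.95], axis=0)

coverage = ((y >= lo) & (y <= hi)).mean()

# anomaly detection: a point far outside the interval is flagged
y_anom = y.copy()
y_anom[100] += 3.0
flagged = (y_anom < lo) | (y_anom > hi)
```

No distributional assumption on the noise is made anywhere above, which is the property the abstract highlights.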
Non-parametric Algorithm to Isolate Chunks in Response Sequences
Alamia, Andrea; Solopchuk, Oleg; Olivier, Etienne; Zenon, Alexandre
2016-01-01
Chunking consists of grouping items of a sequence into small clusters, named chunks, with the assumed goal of lessening working memory load. Despite extensive research, the current methods used to detect chunks, and to identify different chunking strategies, remain discordant and difficult to implement. Here, we propose a simple and reliable method to identify chunks in a sequence and to determine their stability across blocks. This algorithm is based on a ranking method and its major novelty is that it provides concomitantly both the features of individual chunks in a given sequence, and an overall index that quantifies the chunking pattern consistency across sequences. The analysis of simulated data confirmed the validity of our method in different conditions of noise, chunk lengths and chunk numbers; moreover, we found that this algorithm was particularly efficient in the noise range observed in real data, provided that at least 4 sequence repetitions were included in each experimental block. Furthermore, we applied this algorithm to actual reaction time series gathered from 3 published experiments and were able to confirm the findings obtained in the original reports. In conclusion, this novel algorithm is easy to implement, is robust to outliers and provides concurrent and reliable estimation of chunk position and chunking dynamics, making it useful to study both sequence-specific and general chunking effects. The algorithm is available at: https://github.com/artipago/Non-parametric-algorithm-to-isolate-chunks-in-response-sequences. PMID:27708565
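The rank-based intuition behind the algorithm — long response times mark chunk boundaries — can be sketched as follows. This is a deliberately reduced version: the published algorithm also scores chunk consistency across sequence repetitions, which is omitted here, and the reaction times are simulated.

```python
import numpy as np

rng = np.random.default_rng(9)

def chunk_boundaries(rt, n_boundaries):
    # rank-based sketch: the longest reaction times in a sequence mark
    # chunk starts (retrieval pauses); items between boundaries form a chunk
    ranks = np.argsort(np.argsort(rt))        # 0 = fastest response
    cut = len(rt) - n_boundaries
    return sorted(np.where(ranks >= cut)[0])

# simulated 12-item sequence with slow responses at positions 0, 4 and 8,
# i.e. three chunks of four items each
rt = rng.normal(300.0, 10.0, 12)
rt[[0, 4, 8]] += 150.0

bounds = chunk_boundaries(rt, 3)
```

Working on within-sequence ranks rather than raw times is what makes this kind of detection robust to outliers and to overall speed differences between participants.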
Non-parametric reconstruction of cosmological matter perturbations
González, J.E.; Alcaniz, J.S.; Carvalho, J.C. E-mail: alcaniz@on.br
2016-04-01
Perturbative quantities, such as the growth rate (f) and index (γ), are powerful tools to distinguish different dark energy models or modified gravity theories even if they produce the same cosmic expansion history. In this work, without any assumption about the dynamics of the Universe, we apply a non-parametric method to current measurements of the expansion rate H(z) from cosmic chronometers and high-z quasar data and reconstruct the growth factor and rate of linearised density perturbations in the non-relativistic matter component. Assuming realistic values for the matter density parameter Ω_m0, as provided by current CMB experiments, we also reconstruct the evolution of the growth index γ with redshift. We show that the reconstruction of current H(z) data constrains the growth index to γ = 0.56 ± 0.12 (2σ) at z = 0.09, which is in full agreement with the prediction of the ΛCDM model and some of its extensions.
Non-parametric and least squares Langley plot methods
NASA Astrophysics Data System (ADS)
Kiedron, P. W.; Michalsky, J. J.
2016-01-01
Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of the logged voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
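The two families of fits compared in the paper can be sketched on synthetic Langley data: ordinary least squares on ln(V) vs. air mass, and a Theil-Sen fit (median of pairwise slopes) as a representative non-parametric alternative. The instrument constants below are invented for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

# synthetic Langley data: V = V0 * exp(-tau * m) with small measurement noise
V0_true, tau_true = 1.25, 0.15
m = np.linspace(1.0, 6.0, 25)          # air mass over a calibration morning
lnV = np.log(V0_true) - tau_true * m + rng.normal(0.0, 0.01, m.size)

# least squares: straight-line fit of ln(V) vs. m, intercept = ln(V0)
slope_ls, icept_ls = np.polyfit(m, lnV, 1)

# non-parametric alternative (Theil-Sen): median of all pairwise slopes
slopes = [(lnV[j] - lnV[i]) / (m[j] - m[i])
          for i, j in combinations(range(m.size), 2)]
slope_ts = np.median(slopes)
icept_ts = np.median(lnV - slope_ts * m)

V0_ls, V0_ts = np.exp(icept_ls), np.exp(icept_ts)
```

On clean data the two agree closely; the median-based fit is the kind of estimator that stays usable at the "difficult sites" the abstract mentions, where cloud-contaminated points act as outliers.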
NASA Astrophysics Data System (ADS)
Takara, K. T.
2015-12-01
This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than a design level of flood control. Traditional parametric frequency analysis methods for extreme values include the following steps: Step 1: Collecting and checking extreme-value data; Step 2: Enumerating probability distributions that would fit the data well; Step 3: Parameter estimation; Step 4: Testing goodness of fit; Step 5: Checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: Selection of the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years. Overestimation examples are also demonstrated. Bootstrap resampling can provide bias correction for the NPM and can also quantify the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties encountered in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable. Probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as new parameter values combined with the NPM. An idea of how to incorporate these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
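The NPM core — an empirical quantile in place of a fitted distribution, with bootstrap resampling supplying the estimation accuracy — can be sketched as follows. The annual-maximum record is simulated and the Gumbel parameters are invented; only the skipped-steps workflow is being illustrated.

```python
import numpy as np

rng = np.random.default_rng(7)

def np_quantile_estimate(sample, T):
    # non-parametric T-year quantile: empirical quantile at 1 - 1/T,
    # with no distribution fitting, goodness-of-fit test or model choice
    return np.quantile(sample, 1.0 - 1.0 / T)

# synthetic annual-maximum rainfall record, n > 100 as the method requires
annmax = rng.gumbel(loc=100.0, scale=30.0, size=120)

q100 = np_quantile_estimate(annmax, 100)   # 100-year event estimate

# bootstrap resampling quantifies the estimation accuracy (standard error)
boot = np.array([np_quantile_estimate(rng.choice(annmax, annmax.size, replace=True), 100)
                 for _ in range(1000)])
se = boot.std()
```

Imposing a probable maximum value would amount to capping the resampled quantiles at that physical upper bound before computing the standard error.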
Parametric and non-parametric estimation of speech formants: application to infant cry.
Fort, A; Ismaelli, A; Manfredi, C; Bruscaglioni, P
1996-12-01
The present paper addresses the issue of correctly estimating the peaks in the speech envelope (formants) occurring in newborn infant cry. Clinical studies have shown that the analysis of such spectral characteristics is a helpful noninvasive diagnostic tool. In fact, it can be applied to explore brain function at a very early stage of child development, for a timely diagnosis of neonatal disease and malformation. The paper focuses on the performance comparison between some classical parametric and non-parametric estimation techniques particularly well suited to the present application, specifically the LP, ARX and cepstrum approaches. It is shown that, if the model order is correctly chosen, parametric methods are in general more reliable and robust against noise, but exhibit less uniform behaviour than the cepstrum approach. The methods are also compared in terms of tracking capability, since the signals under study are non-stationary. Both simulated and real signals are used in order to outline the relevant features of the proposed approaches.
A non-parametric model for the cosmic velocity field
NASA Astrophysics Data System (ADS)
Branchini, E.; Teodoro, L.; Frenk, C. S.; Schmoldt, I.; Efstathiou, G.; White, S. D. M.; Saunders, W.; Sutherland, W.; Rowan-Robinson, M.; Keeble, O.; Tadros, H.; Maddox, S.; Oliver, S.
1999-09-01
We present a self-consistent non-parametric model of the local cosmic velocity field derived from the distribution of IRAS galaxies in the PSCz redshift survey. The survey has been analysed using two independent methods, both based on the assumptions of gravitational instability and linear biasing. The two methods, which give very similar results, have been tested and calibrated on mock PSCz catalogues constructed from cosmological N-body simulations. The denser sampling provided by the PSCz survey compared with previous IRAS galaxy surveys allows an improved reconstruction of the density and velocity fields out to large distances. The most striking feature of the model velocity field is a coherent large-scale streaming motion along the baseline connecting Perseus-Pisces, the Local Supercluster, the Great Attractor and the Shapley Concentration. We find no evidence for back-infall on to the Great Attractor. Instead, material behind and around the Great Attractor is inferred to be streaming towards the Shapley Concentration, aided by the compressional push of two large nearby underdensities. The PSCz model velocities compare well with those predicted from the 1.2-Jy redshift survey of IRAS galaxies and, perhaps surprisingly, with those predicted from the distribution of Abell/ACO clusters, out to 140 h^-1 Mpc. Comparison of the real-space density fields (or, alternatively, the peculiar velocity fields) inferred from the PSCz and cluster catalogues gives a relative (linear) bias parameter between clusters and IRAS galaxies of b_c = 4.4 ± 0.6. Finally, we implement a likelihood analysis that uses all the available information on peculiar velocities in our local Universe to estimate β ≡ Ω_0^0.6/b = 0.6 (+0.22/-0.15) (1σ), where b is the bias parameter for IRAS galaxies.
Non-parametric PSF estimation from celestial transit solar images using blind deconvolution
NASA Astrophysics Data System (ADS)
González, Adriana; Delouille, Véronique; Jacques, Laurent
2016-01-01
Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrarily to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems, with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression function. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
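Exponential tilting, one special case of the calibration weights discussed above, can be sketched directly: choose weights w_i ∝ exp(λ'x_i) on the controls, with λ found by Newton iterations on the convex dual so that the weighted control covariate means exactly match the treated means. This two-group sketch (rather than the paper's three-way balance) uses invented data.

```python
import numpy as np

rng = np.random.default_rng(8)

def entropy_balance(X, target, iters=50):
    # Exponential-tilting calibration weights: w_i proportional to
    # exp(lam @ x_i), with lam chosen by Newton steps on the convex dual
    # so that the weighted covariate means of X equal `target`.
    lam = np.zeros(X.shape[1])
    for _ in range(iters):
        logits = X @ lam
        p = np.exp(logits - logits.max())   # shift for numerical stability
        p /= p.sum()
        mean = X.T @ p
        grad = mean - target                       # zero at exact balance
        H = (X.T * p) @ X - np.outer(mean, mean)   # tilted covariance
        lam -= np.linalg.solve(H + 1e-9 * np.eye(len(lam)), grad)
    logits = X @ lam
    p = np.exp(logits - logits.max())
    return p / p.sum()

# invented observational data: treated covariates shifted versus controls
Xc = rng.normal(0.0, 1.0, size=(500, 3))   # controls
Xt = rng.normal(0.4, 1.0, size=(200, 3))   # treated

w = entropy_balance(Xc, Xt.mean(axis=0))
balanced_mean = Xc.T @ w                   # matches the treated means
```

The weighted average of control outcomes under these weights would then estimate the counterfactual mean, with no propensity score or outcome model fitted anywhere.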
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Rivera, Juan Pablo; Veroustraete, Frank; Muñoz-Marí, Jordi; Clevers, Jan G. P. W.; Camps-Valls, Gustau; Moreno, José
2015-10-01
Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination according to (ρ_560 − ρ_1610 − ρ_2190) / (ρ_560 + ρ_1610 + ρ_2190), with a 10-fold cross-validation R²_CV of 0.82 (RMSE_CV: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Kernel-based MLRAs in particular lead to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with an R²_CV of 0.90 (RMSE_CV: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R² of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast
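The parametric-index workflow — compute a band-combination index, then fit a linear regression against LAI — is simple enough to sketch end-to-end. The reflectance-LAI relationships below are invented for illustration and do not reproduce the SPARC data; only the index formulation matches the one reported above.

```python
import numpy as np

rng = np.random.default_rng(10)

def index3(b560, b1610, b2190):
    # the best-performing three-band formulation reported in the comparison
    return (b560 - b1610 - b2190) / (b560 + b1610 + b2190)

# invented band reflectances loosely driven by a hypothetical LAI signal
lai = rng.uniform(0.5, 6.0, 100)
b560 = 0.10 + 0.020 * lai + rng.normal(0.0, 0.005, 100)
b1610 = 0.25 - 0.025 * lai + rng.normal(0.0, 0.005, 100)
b2190 = 0.15 - 0.015 * lai + rng.normal(0.0, 0.005, 100)

# parametric retrieval: linear regression of LAI on the index
vi = index3(b560, b1610, b2190)
slope, icept = np.polyfit(vi, lai, 1)
pred = icept + slope * vi
r2 = 1.0 - ((lai - pred) ** 2).sum() / ((lai - lai.mean()) ** 2).sum()
```

Since mapping reduces to one arithmetic expression per pixel plus a linear transform, the processing-speed advantage over machine-learning and RTM-inversion methods is built in.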
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
The Dark Matter Profile of the Milky Way: A Non-parametric Reconstruction
NASA Astrophysics Data System (ADS)
Pato, Miguel; Iocco, Fabio
2015-04-01
We present the results of a new, non-parametric method to reconstruct the Galactic dark matter profile directly from observations. Using the latest kinematic data to track the total gravitational potential and the observed distribution of stars and gas to set the baryonic component, we infer the dark matter contribution to the circular velocity across the Galaxy. The radial derivative of this dynamical contribution is then estimated to extract the dark matter profile. The innovative feature of our approach is that it makes no assumption on the functional form or shape of the profile, thus allowing for a clean determination with no theoretical bias. We illustrate the power of the method by constraining the spherical dark matter profile between 2.5 and 25 kpc away from the Galactic center. The results show that the proposed method, free of widely used assumptions, can already be applied to pinpoint the dark matter distribution in the Milky Way with competitive accuracy, and paves the way for future developments.
The binned bispectrum estimator: template-based and non-parametric CMB non-Gaussianity searches
NASA Astrophysics Data System (ADS)
Bucher, Martin; Racine, Benjamin; van Tent, Bartjan
2016-05-01
We describe the details of the binned bispectrum estimator as used for the official 2013 and 2015 analyses of the temperature and polarization CMB maps from the ESA Planck satellite. The defining aspect of this estimator is the determination of a map bispectrum (3-point correlation function) that has been binned in harmonic space. For a parametric determination of the non-Gaussianity in the map (the so-called fNL parameters), one takes the inner product of this binned bispectrum with theoretically motivated templates. However, as a complementary approach one can also smooth the binned bispectrum using a variable smoothing scale in order to suppress noise and make coherent features stand out above the noise. This allows one to look in a model-independent way for any statistically significant bispectral signal. This approach is useful for characterizing the bispectral shape of the galactic foreground emission, for which a theoretical prediction of the bispectral anisotropy is lacking, and for detecting a serendipitous primordial signal, for which a theoretical template has not yet been put forth. Both the template-based and the non-parametric approaches are described in this paper.
Bayesian non-parametric approaches to reconstructing oscillatory systems and the Nyquist limit
NASA Astrophysics Data System (ADS)
Žurauskienė, Justina; Kirk, Paul; Thorne, Thomas; Stumpf, Michael P. H.
Reconstructing continuous signals from discrete time points is a challenging inverse problem encountered in many scientific and engineering applications. For oscillatory signals, classical results due to Nyquist set the limit below which it becomes impossible to reliably reconstruct the oscillation dynamics. Here we revisit this problem for vector-valued outputs and apply Bayesian non-parametric approaches in order to solve the function estimation problem. The main aim of the current paper is to map how we can use correlations among different outputs to reconstruct signals at a sampling rate that lies below the Nyquist rate. We show that it is possible to use multiple-output Gaussian processes to capture dependences between outputs, which facilitates reconstruction of signals in situations where conventional Gaussian processes (i.e. those aimed at describing scalar signals) fail, and we delineate the phase and frequency dependence of the reliability of this type of approach. In addition to simple toy models we also consider the dynamics of the tumour suppressor gene p53, which exhibits oscillations under physiological conditions, and which can be reconstructed more reliably in our new framework.
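The multiple-output idea can be sketched with an intrinsic coregionalization model, the simplest multi-output GP construction; the RBF kernel, the output-correlation matrix B and the toy signals below are assumptions for illustration, not the authors' model:

```python
import numpy as np

def rbf(x1, x2, ell):
    # Squared-exponential kernel on scalar inputs
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def icm_posterior_mean(X_list, y_list, X_star, star_out, B, ell=0.25, noise=1e-4):
    # Intrinsic coregionalization model: cov((x,i),(x',j)) = B[i,j] * rbf(x,x').
    # Predicts output `star_out` at X_star from observations of all outputs.
    X = np.concatenate(X_list)
    y = np.concatenate(y_list)
    out = np.concatenate([np.full(len(x), i, dtype=int) for i, x in enumerate(X_list)])
    K = B[np.ix_(out, out)] * rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = B[star_out][out] * rbf(X_star, X, ell)
    return Ks @ np.linalg.solve(K, y)

# Two correlated oscillatory outputs, each sampled sparsely (toy data)
f = lambda t: np.sin(2 * np.pi * t)
t0 = np.array([0.1, 0.6, 1.1])
t1 = np.array([0.2, 0.35, 0.7, 0.85, 1.2])
B = np.array([[1.0, 0.9], [0.9, 1.0]])  # assumed output-correlation matrix
grid = np.linspace(0.0, 1.3, 50)
mu0 = icm_posterior_mean([t0, t1], [f(t0), f(t1)], grid, 0, B)
```

The point of the construction is that samples of output 1 carry information about output 0 through B, so output 0 can be reconstructed from fewer of its own samples than a scalar GP would require.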
Scene Parsing With Integration of Parametric and Non-Parametric Models
NASA Astrophysics Data System (ADS)
Shuai, Bing; Zuo, Zhen; Wang, Gang; Wang, Bing
2016-05-01
We adopt Convolutional Neural Networks (CNNs) as our parametric model to learn discriminative features and classifiers for local patch classification. Based on the occurrence frequency distribution of classes, an ensemble of CNNs (CNN-Ensemble) is learned, in which each CNN component focuses on learning different and complementary visual patterns. The local beliefs of pixels are output by the CNN-Ensemble. Considering that visually similar pixels are indistinguishable under local context, we leverage the global scene semantics to alleviate the local ambiguity. The global scene constraint is mathematically achieved by adding a global energy term to the labeling energy function, and it is practically estimated in a non-parametric framework. A large-margin-based CNN metric learning method is also proposed for better global belief estimation. In the end, the integration of local and global beliefs gives rise to the class likelihood of pixels, based on which maximum marginal inference is performed to generate the label prediction maps. Even without any post-processing, we achieve state-of-the-art results on the challenging SiftFlow and Barcelona benchmarks.
Non-parametric estimators of a monotonic dose-response curve and bootstrap confidence intervals.
Dilleen, Maria; Heimann, Günter; Hirsch, Ian
2003-03-30
In this paper we consider study designs which include a placebo and an active control group as well as several dose groups of a new drug. A monotonically increasing dose-response function is assumed, and the objective is to estimate a dose with response equivalent to that of the active control group, including a confidence interval for this dose. We present different non-parametric methods to estimate the monotonic dose-response curve. These are derived from the isotonic regression estimator, a non-negative least squares estimator, and a bias-adjusted non-negative least squares estimator using linear interpolation. The different confidence intervals are based upon an approach described by Korn, and upon two different bootstrap approaches. One of these bootstrap approaches is standard, and the second ensures that resampling is done from empirical distributions which comply with the order restrictions imposed. In our simulations we did not find any differences between the two bootstrap methods, and both clearly outperform Korn's confidence intervals. The non-negative least squares estimator yields biased results for moderate sample sizes. The bias adjustment for this estimator works well, even for small and moderate sample sizes, and surprisingly outperforms the isotonic regression method in certain situations.
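The isotonic regression estimator underlying these methods is classically computed with the pool-adjacent-violators algorithm (PAVA); a minimal sketch on toy dose-group means, not the paper's implementation:

```python
def pava(y, w=None):
    # Pool Adjacent Violators: weighted least-squares fit of a
    # non-decreasing sequence to the observed dose-group means y.
    w = [1.0] * len(y) if w is None else list(w)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); cnts.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            tot = wts[-2] + wts[-1]
            vals[-2] = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / tot
            wts[-2] = tot; cnts[-2] += cnts[-1]
            vals.pop(); wts.pop(); cnts.pop()
    fit = []
    for v, c in zip(vals, cnts):
        fit.extend([v] * c)
    return fit

# Observed mean responses at four increasing doses (toy numbers)
fitted = pava([0.2, 0.5, 0.4, 0.9])
```

The dose equivalent to the active control would then be read off the fitted monotone curve by linear interpolation, with bootstrap resampling used for the confidence interval.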
Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana
2015-03-01
The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology.
A non-parametric approach to anomaly detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Veracini, Tiziana; Matteoli, Stefania; Diani, Marco; Corsini, Giovanni; de Ceglie, Sergio U.
2010-10-01
In the past few years, spectral analysis of data collected by hyperspectral sensors aimed at automatic anomaly detection has become an interesting area of research. In this paper, we are interested in an Anomaly Detection (AD) scheme for hyperspectral images in which spectral anomalies are defined with respect to a statistical model of the background Probability Density Function (PDF). The characterization of the PDF of hyperspectral imagery is not trivial. We approach the background PDF estimation through the Parzen Windowing PDF estimator (PW). PW is a flexible and valuable tool for accurately modeling unknown PDFs in a non-parametric fashion. Although such an approach is well known and has been widely employed, its use within an AD scheme has not been investigated yet. For practical purposes, the PW ability to estimate PDFs is strongly influenced by the choice of the bandwidth matrix, which controls the degree of smoothing of the resulting PDF approximation. Here, a Bayesian approach is employed to carry out the bandwidth selection. The resulting estimated background PDF is then used to detect spectral anomalies within a detection scheme based on the Neyman-Pearson approach. Real hyperspectral imagery is used for an experimental evaluation of the proposed strategy.
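A minimal sketch of the Parzen-window background model and the density-threshold detection step; the toy data and the fixed diagonal bandwidth are assumptions standing in for the paper's Bayesian bandwidth selection:

```python
import numpy as np

def parzen_scores(background, pixels, h):
    # Gaussian Parzen-window (kernel) estimate of the background PDF,
    # evaluated at test pixels; low density flags a spectral anomaly.
    d = background.shape[1]
    diff = (pixels[:, None, :] - background[None, :, :]) / h
    k = np.exp(-0.5 * np.sum(diff ** 2, axis=2))
    norm = (2 * np.pi) ** (d / 2) * np.prod(h) * background.shape[0]
    return k.sum(axis=1) / norm

rng = np.random.default_rng(0)
bg = rng.normal(0.0, 1.0, size=(500, 3))           # background pixels (3 toy bands)
pixels = np.vstack([np.zeros((1, 3)), np.full((1, 3), 6.0)])  # typical vs outlier
h = np.full(3, 0.5)                                # hypothetical fixed bandwidth
p = parzen_scores(bg, pixels, h)
# Neyman-Pearson style decision: flag pixels whose background density
# falls below a threshold chosen for a desired false-alarm rate.
anomalous = p < 1e-4
```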
Non-parametric seismic hazard analysis in the presence of incomplete data
NASA Astrophysics Data System (ADS)
Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush
2017-01-01
The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distribution, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study aims to introduce an imputation procedure for completing earthquake catalog data that will allow the catalog to be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate earthquake magnitude distribution in Tehran, the capital city of Iran.
Software For Computing Selected Functions
NASA Technical Reports Server (NTRS)
Grant, David C.
1992-01-01
Technical memorandum presents collection of software packages in Ada implementing mathematical functions used in science and engineering. Provides programmer with function support in Pascal and FORTRAN, plus support for extended-precision arithmetic and complex arithmetic. Valuable for testing new computers, writing computer code, or developing new computer integrated circuits.
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D.
2016-01-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. PMID:26993062
Xu, J L; Prorok, P C
1995-12-30
The goal of screening programmes for cancer is early detection and treatment with a consequent reduction in mortality from the disease. Screening programmes need to assess the true benefit of screening, that is, the length of time of extension of survival beyond the time of advancement of diagnosis (lead-time). This paper presents a non-parametric method to estimate the survival function of the post-lead-time survival (or extra survival time) of screen-detected cancer cases based on the observed total life time, namely, the sum of the lead-time and the extra survival time. We apply the method to the well-known data set of the HIP (Health Insurance Plan of Greater New York) breast cancer screening study. We make comparisons with the survival of other groups of cancer cases not detected by screening such as interval cases, cases among individuals who refused screening, and randomized control cases. As compared with Walter and Stitt's model, in which they made parametric assumptions for the extra survival time, our non-parametric method provides a better fit to HIP data in the sense that our estimator for the total survival time has a smaller sum of squares of residuals.
A non-parametric peak calling algorithm for DamID-Seq.
Li, Renhua; Hempel, Leonie U; Jiang, Tingbo
2015-01-01
Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
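The four listed steps can be sketched on binned counts as follows; the toy counts and fixed fold threshold are illustrative simplifications, since the actual NPPC resamples short reads rather than bin totals:

```python
import numpy as np

def call_peaks(dam_fusion, dam_only, n_boot=200, fold=2.0, seed=1):
    # Toy sketch of the listed steps on binned read counts:
    # 1) bootstrap-resample the Dam-only control to estimate the
    #    averaged background signal,
    # 2) scale both tracks to equal depth and compute fold changes,
    # 3-4) filter and call bins whose fold change exceeds a threshold.
    rng = np.random.default_rng(seed)
    boots = rng.choice(dam_only, size=(n_boot, dam_only.size), replace=True)
    background = boots.mean()
    f_norm = dam_fusion / dam_fusion.sum()
    c_norm = dam_only / dam_only.sum()
    fc = (f_norm + 1e-9) / (c_norm + 1e-9)   # small constant avoids /0
    return np.where(fc > fold)[0], background

dam_fusion = np.array([2.0, 3.0, 80.0, 2.0, 100.0, 3.0])  # Dam-fusion counts per bin
dam_only = np.array([5.0, 5.0, 5.0, 5.0, 5.0, 5.0])       # Dam-only control
peaks, background = call_peaks(dam_fusion, dam_only)
```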
Martínez-Camblor, Pablo
2017-02-01
Meta-analyses, broadly defined as the quantitative review and synthesis of the results of related but independent comparable studies, allow one to assess the state of the art of a given topic. Since the amount of available bibliography has increased in almost all fields and, specifically, in biomedical research, their popularity has drastically increased during the last decades. In particular, different methodologies have been developed in order to perform meta-analytic studies of diagnostic tests for both fixed- and random-effects models. From a parametric point of view, these techniques often compute a bivariate estimation for the sensitivity and the specificity by using only one threshold per included study. Frequently, an overall receiver operating characteristic curve based on a bivariate normal distribution is also provided. In this work, the author deals with the problem of estimating an overall receiver operating characteristic curve from a fully non-parametric approach when the data come from a meta-analysis study, i.e. only certain information about the diagnostic capacity is available. Both fixed- and random-effects models are considered. In addition, the proposed methodology allows one to use the information from all available cut-off points (not only one of them) in the selected original studies. The performance of the method is explored through Monte Carlo simulations. The observed results suggest that the proposed estimator is better than the reference one when the reported information is related to a threshold based on the Youden index and when information for two or more points is provided. Real data illustrations are included.
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
Density Estimation Trees as fast non-parametric modelling tools
NASA Astrophysics Data System (ADS)
Anderlini, Lucio
2016-10-01
A Density Estimation Tree (DET) is a decision tree trained on a multivariate dataset to estimate the underlying probability density function. While not competitive with kernel techniques in terms of accuracy, DETs are incredibly fast, embarrassingly parallel and relatively small when stored to disk. These properties make DETs appealing in the resource-expensive horizon of LHC data analysis. Possible applications may include selection optimization, fast simulation and fast detector calibration. In this contribution I describe the algorithm and its implementation, made available to the HEP community as a RooFit object. A set of applications under discussion within the LHCb Collaboration is also briefly illustrated.
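A minimal one-dimensional sketch of the idea, where median splits and a fixed leaf size stand in for the optimized splitting and pruning of a real DET:

```python
import numpy as np

class DET:
    # Toy 1-D Density Estimation Tree: recursively split at the median
    # until leaves are small; each leaf stores a constant density
    # (fraction of points) / (cell width), so the estimate integrates to 1.
    def __init__(self, data, lo, hi, min_leaf=25, n_total=None):
        n_total = data.size if n_total is None else n_total
        self.split = None
        m = float(np.median(data)) if data.size > min_leaf else lo
        if data.size <= min_leaf or not (lo < m < hi):
            self.density = data.size / n_total / (hi - lo)
            return
        self.split = m
        self.left = DET(data[data < m], lo, m, min_leaf, n_total)
        self.right = DET(data[data >= m], m, hi, min_leaf, n_total)

    def pdf(self, x):
        if self.split is None:
            return self.density
        return self.left.pdf(x) if x < self.split else self.right.pdf(x)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 2000)
tree = DET(data, data.min(), data.max())
```

Evaluation is a handful of comparisons per point, which is what makes the structure so fast relative to kernel density estimates.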
Parametric and Non-Parametric Vibration-Based Structural Identification Under Earthquake Excitation
NASA Astrophysics Data System (ADS)
Pentaris, Fragkiskos P.; Fouskitakis, George N.
2014-05-01
The problem of modal identification in civil structures is of crucial importance, and thus has been receiving increasing attention in recent years. Vibration-based methods are quite promising as they are capable of identifying the structure's global characteristics, they are relatively easy to implement and they tend to be time-effective and less expensive than most alternatives [1]. This paper focuses on the off-line structural/modal identification of civil (concrete) structures subjected to low-level earthquake excitations, under which they remain within their linear operating regime. Earthquakes and their details are recorded and provided by the seismological network of Crete [2], which monitors the broad region of the south Hellenic arc, an active seismic region which functions as a natural laboratory for earthquake engineering of this kind. A sufficient number of seismic events are analyzed in order to reveal the modal characteristics of the structures under study, which consist of the two concrete buildings of the School of Applied Sciences, Technological Education Institute of Crete, located in Chania, Crete, Hellas. Both buildings are equipped with high-sensitivity and high-accuracy seismographs - providing acceleration measurements - established at the basement (structure's foundation), presently considered as the ground's acceleration (excitation), and at all levels (ground floor, 1st floor, 2nd floor and terrace). Further details regarding the instrumentation setup and data acquisition may be found in [3]. The present study invokes stochastic methods, both non-parametric (frequency-based) and parametric, for structural/modal identification (natural frequencies and/or damping ratios). Non-parametric methods include Welch-based spectrum and Frequency Response Function (FRF) estimation, while parametric methods include AutoRegressive (AR), AutoRegressive with eXogenous input (ARX) and AutoRegressive Moving-Average with eXogenous input (ARMAX) models [4, 5
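The Welch-based spectrum estimation used in the non-parametric step can be sketched in plain NumPy; the synthetic acceleration record with an assumed 4 Hz mode is illustrative, not the Chania measurements:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    # Welch estimate: average periodograms of 50%-overlapping,
    # Hann-windowed segments of the acceleration record.
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, x.size - nperseg + 1, step)]
    psd = np.zeros(nperseg // 2 + 1)
    for s in segs:
        spec = np.fft.rfft((s - s.mean()) * win)
        psd += np.abs(spec) ** 2 / scale
    psd /= len(segs)
    psd[1:-1] *= 2  # fold negative frequencies into the one-sided spectrum
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

# Synthetic record: a 4 Hz structural mode buried in measurement noise
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0.0, 10.0, 1 / fs)
x = np.sin(2 * np.pi * 4.0 * t) + 0.1 * rng.standard_normal(t.size)
freqs, psd = welch_psd(x, fs)
f_peak = freqs[np.argmax(psd)]  # estimated natural frequency
```

Peaks of the averaged spectrum (or of an FRF computed from basement and floor records) indicate candidate natural frequencies, which the parametric AR/ARX/ARMAX models then refine.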
Program Computes Thermodynamic Functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1994-01-01
PAC91 is latest in PAC (Properties and Coefficients) series. Two principal features are to provide means of (1) generating theoretical thermodynamic functions from molecular constants and (2) least-squares fitting of these functions to empirical equations. PAC91 written in FORTRAN 77 to be machine-independent.
Symbolic functions from neural computation.
Smolensky, Paul
2012-07-28
Is thought computation over ideas? Turing, and many cognitive scientists since, have assumed so, and formulated computational systems in which meaningful concepts are encoded by symbols which are the objects of computation. Cognition has been carved into parts, each a function defined over such symbols. This paper reports on a research program aimed at computing these symbolic functions without computing over the symbols. Symbols are encoded as patterns of numerical activation over multiple abstract neurons, each neuron simultaneously contributing to the encoding of multiple symbols. Computation is carried out over the numerical activation values of such neurons, which individually have no conceptual meaning. This is massively parallel numerical computation operating within a continuous computational medium. The paper presents an axiomatic framework for such a computational account of cognition, including a number of formal results. Within the framework, a class of recursive symbolic functions can be computed. Formal languages defined by symbolic rewrite rules can also be specified, the subsymbolic computations producing symbolic outputs that simultaneously display central properties of both facets of human language: universal symbolic grammatical competence and statistical, imperfect performance.
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Phang, T.L.; Neville, M.C.; Rudolph, M.; Hunter, L.
2008-01-01
Trajectory clustering is a novel and statistically well-founded method for clustering time series data from gene expression arrays. Trajectory clustering uses non-parametric statistics and is hence not sensitive to the particular distributions underlying gene expression data. Each cluster is clearly defined in terms of direction of change of expression for successive time points (its ‘trajectory’), and therefore has easily appreciated biological meaning. Applying the method to a dataset from mouse mammary gland development, we demonstrate that it produces different clusters than Hierarchical, K-means, and Jackknife clustering methods, even when those methods are applied to differences between successive time points. Compared to all of the other methods, trajectory clustering was better able to match a manual clustering by a domain expert, and was better able to cluster groups of genes with known related functions. PMID:12603041
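The core of the method, labelling each gene by its direction of change between successive time points, can be sketched as follows; the toy expression matrix is illustrative, not the mammary gland data:

```python
import numpy as np

def trajectory_labels(expr, tol=0.0):
    # Label each gene by the sign of change between successive time
    # points: 'U' up, 'D' down, 'F' flat (within tolerance tol).
    labels = []
    for row in np.diff(expr, axis=1):
        labels.append(''.join(
            'U' if d > tol else ('D' if d < -tol else 'F') for d in row))
    return labels

expr = np.array([
    [1.0, 2.0, 3.0, 2.0],   # up, up, down
    [5.0, 4.0, 4.0, 6.0],   # down, flat, up
    [2.0, 3.0, 4.0, 1.0],   # up, up, down: same trajectory as gene 0
])
labels = trajectory_labels(expr)
# Genes sharing a label form one trajectory cluster
clusters = {}
for gene, lab in enumerate(labels):
    clusters.setdefault(lab, []).append(gene)
```

Because only the direction of change matters, each cluster has an immediate biological reading (e.g. "rises through mid-development, then falls"), which is the property the abstract emphasizes.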
Automatic computation of transfer functions
Atcitty, Stanley; Watson, Luke Dale
2015-04-14
Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
Bayesian inference for longitudinal data with non-parametric treatment effects.
Müller, Peter; Quintana, Fernando A; Rosner, Gary L; Maitland, Michael L
2014-04-01
We consider inference for longitudinal data based on mixed-effects models with a non-parametric Bayesian prior on the treatment effect. The proposed non-parametric Bayesian prior is a random partition model with a regression on patient-specific covariates. The main feature and motivation for the proposed model is the use of covariates with a mix of different data formats and possibly high-order interactions in the regression. The regression is not explicitly parameterized. It is implied by the random clustering of subjects. The motivating application is a study of the effect of an anticancer drug on a patient's blood pressure. The study involves blood pressure measurements taken periodically over several 24-h periods for 54 patients. The 24-h periods for each patient include a pretreatment period and several occasions after the start of therapy.
Lan, Ling; Datta, Somnath
2010-04-01
As a type of multivariate survival data, multistate models have a wide range of applications, notably in cancer and infectious disease progression studies. In this article, we revisit the problem of estimation of state occupation, entry and exit times in a multistate model, where various estimators have been proposed in the past under a variety of parametric and non-parametric assumptions. We focus on two non-parametric approaches, one using a product limit formula as recently proposed in Datta and Sundaram [1], and a novel approach using a fractional risk set calculation followed by a subtraction formula to calculate the state occupation probability of a transient state. A numerical comparison between the two methods is presented using detailed simulation studies. We show that the new estimators have lower statistical errors of estimation of state occupation probabilities for the distant states. We illustrate the two methods using a pubertal development data set obtained from the NHANES III [2].
A non-parametric approach to estimate the total deviation index for non-normal data.
Perez-Jaume, Sara; Carrasco, Josep L
2015-11-10
Concordance indices are used to assess the degree of agreement between different methods that measure the same characteristic. In this context, the total deviation index (TDI) is an unscaled concordance measure that quantifies to which extent the readings from the same subject obtained by different methods may differ with a certain probability. Common approaches to estimate the TDI assume data are normally distributed and linearity between response and effects (subjects, methods and random error). Here, we introduce a new non-parametric methodology for estimation and inference of the TDI that can deal with any kind of quantitative data. The present study introduces this non-parametric approach and compares it with the already established methods in two real case examples that represent situations of non-normal data (more specifically, skewed data and count data). The performance of the already established methodologies and our approach in these contexts is assessed by means of a simulation study.
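A minimal sketch of such a non-parametric TDI estimate, read as a sample quantile of the absolute paired differences; the toy readings are illustrative and the paper's inference procedure is not reproduced:

```python
import numpy as np

def tdi_nonparametric(x, y, p=0.9):
    # Non-parametric TDI: the p-th sample quantile of the absolute
    # within-subject differences between the two measurement methods.
    return float(np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p))

# Paired readings from two methods on ten subjects (toy numbers)
method_a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
method_b = np.zeros(10)
tdi90 = tdi_nonparametric(method_a, method_b, p=0.9)
# Interpretation: about 90% of paired readings are estimated to differ
# by at most tdi90 units; a bootstrap over subjects would give a CI.
```

Because only an empirical quantile is involved, the estimator requires no normality or linearity assumptions, which is what makes it applicable to skewed and count data.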
System and Method of Use for Non-parametric Circular Autocorrelation for Signal Processing
2012-07-30
[0012] Wald, A. and J. Wolfowitz, An exact test for randomness in the non-parametric case based on serial correlation, Annals of Mathematical Statistics, Vol. 14, No. 4, pages 378-388, 1943 (hereinafter "Wald and Wolfowitz") provides a non-parametric permutations method such that if n is... present disclosure models accurately and efficiently. [0015] Wald and Wolfowitz generally describe the properties of hxxR, in the context
System Availability: Time Dependence and Statistical Inference by (Semi) Non-Parametric Methods
1988-08-01
Technical report, August 1988. ... availability in finite time (not steady-state or long-run), and to non-parametric estimates. ... productivity of commercial nuclear power plants; in that arena it is quantified by probabilistic risk assessment (PRA). Related finite state
Functional Programming in Computer Science
Anderson, Loren James; Davis, Marion Kei
2016-01-19
We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
Non-parametric determination of H and He interstellar fluxes from cosmic-ray data
NASA Astrophysics Data System (ADS)
Ghelfi, A.; Barao, F.; Derome, L.; Maurin, D.
2016-06-01
Context. Top-of-atmosphere (TOA) cosmic-ray (CR) fluxes from satellites and balloon-borne experiments are snapshots of the solar activity imprinted on the interstellar (IS) fluxes. Given a series of snapshots, the unknown IS flux shape and the level of modulation (for each snapshot) can be recovered. Aims: We wish (i) to provide the most accurate determination of the IS H and He fluxes from TOA data alone; (ii) to obtain the associated modulation levels (and uncertainties) while fully accounting for the correlations with the IS flux uncertainties; and (iii) to inspect whether the minimal force-field approximation is sufficient to explain all the data at hand. Methods: Using H and He TOA measurements, including the recent high-precision AMS, BESS-Polar, and PAMELA data, we performed a non-parametric fit of the IS fluxes J_IS^{H,He} and modulation levels φ_i for each data-taking period. We relied on a Markov chain Monte Carlo (MCMC) engine to extract the probability density function and correlations (hence the credible intervals) of the sought parameters. Results: Although H and He are the most abundant and best measured CR species, several datasets had to be excluded from the analysis because of inconsistencies with other measurements. From the subset of data passing our consistency cut, we provide ready-to-use best-fit and credible intervals for the H and He IS fluxes from MeV/n to PeV/n energy (with a relative precision in the range [2-10%] at 1σ). Given the strong correlation between J_IS and φ_i parameters, the uncertainties on J_IS translate into Δφ ≈ ±30 MV (at 1σ) for all experiments. We also find that the presence of 3He in He data biases φ towards higher values by ~30 MV. The force-field approximation, despite its limitation, gives an excellent (χ2/d.o.f. = 1.02) description of the recent high-precision TOA H and He fluxes. Conclusions: The analysis must be extended to different charge species and more realistic modulation models. It would benefit
Kerschbamer, Rudolf
2015-05-01
This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.
Non-parametric trend analysis of water quality data of rivers in Kansas
Yu, Y.-S.; Zou, S.; Whittemore, D.
1993-01-01
Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basin-wide trends are non-homogeneous. © 1993.
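The abstract does not name the four non-parametric methods; the Mann-Kendall test is a standard choice for water-quality trend detection and serves here as an illustrative sketch (the function, the no-ties simplification, and the toy concentration series are ours):

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Mann-Kendall trend test (simplified: no correction for ties).
    Returns the S statistic and an approximate two-sided p-value from
    the normal approximation.  S < 0 suggests a downward trend."""
    n = len(series)
    # S counts concordant minus discordant time-ordered pairs
    s = sum((x2 > x1) - (x2 < x1) for x1, x2 in combinations(series, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, p

# A clearly decreasing toy concentration series (mg/L, illustrative)
s, p = mann_kendall([9.8, 9.1, 8.7, 8.2, 7.9, 7.1, 6.5, 6.0, 5.2, 4.8])
print(s, round(p, 4))
```

Because the test uses only the signs of pairwise differences, it is insensitive to the skewed distributions typical of water-quality data.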
Modeling a MEMS deformable mirror using non-parametric estimation techniques.
Guzmán, Dani; Juez, Francisco Javier de Cos; Myers, Richard; Guesalaga, Andrés; Lasheras, Fernando Sánchez
2010-09-27
Using non-parametric estimation techniques, we have modeled an area of 126 actuators of a micro-electro-mechanical deformable mirror with 1024 actuators. These techniques produce models applicable to open-loop adaptive optics, where the turbulent wavefront is measured before it hits the deformable mirror. The model's input is the wavefront correction to apply to the mirror and its output is the set of voltages to shape the mirror. Our experiments have achieved positioning errors of 3.1% rms of the peak-to-peak wavefront excursion.
FUNCTION GENERATOR FOR ANALOGUE COMPUTERS
Skramstad, H.K.; Wright, J.H.; Taback, L.
1961-12-12
An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)
NASA Astrophysics Data System (ADS)
Kalpathy-Cramer, Jayashree; Ozertem, Umut; Hersh, William; Fuss, Martin; Erdogmus, Deniz
2009-02-01
Radiation therapy is one of the most effective cancer treatments, used in treating about half of all people with cancer. A critical goal in radiation therapy is to deliver optimal radiation doses to the perceived tumor while sparing the surrounding healthy tissues. Radiation oncologists often manually delineate normal and diseased structures on 3D-CT scans, a time-consuming task. We present a segmentation algorithm using non-parametric snakes and principal curves that can be used in an automatic or semi-supervised fashion. It provides fast segmentation that is robust with respect to noisy edges and does not require the user to optimize a variety of parameters, unlike many segmentation algorithms. It allows multiple cues to be incorporated easily for the purposes of estimating the edge probability density. These cues, including texture, intensity and shape priors, can be used simultaneously to delineate tumors and normal anatomy, thereby increasing the robustness of the algorithm. The notion of principal curves is used to interpolate between data points in sparse areas. We compare the results of the non-parametric snake technique with a gold standard consisting of manually delineated structures for tumors as well as normal organs.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the noise robustness of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the experimentally measured ASD over Harbin, China is recovered reasonably. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
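A hedged sketch of this style of damped LSQR inversion follows. The kernel below is a synthetic stand-in (a Gaussian in wavelength minus radius), not the actual ADA kernel, and the grids and noise level are illustrative assumptions:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical linear retrieval: measured extinction tau = K @ n, where
# K[i, j] couples wavelength i to size bin j.  A real kernel would come
# from the Anomalous Diffraction Approximation; this one is a stand-in.
rng = np.random.default_rng(1)
wavelengths = np.linspace(0.2, 2.0, 30)            # um, illustrative grid
radii = np.linspace(0.1, 2.0, 10)                  # um, illustrative bins
K = np.exp(-np.subtract.outer(wavelengths, radii) ** 2)

# Log-normal-like "true" ASD peaked near r = 0.6 um
n_true = np.exp(-0.5 * ((np.log(radii) - np.log(0.6)) / 0.4) ** 2)
tau = K @ n_true
tau_noisy = tau * (1 + 0.01 * rng.standard_normal(tau.size))  # 1% noise

# Damped LSQR stabilises the ill-conditioned inversion
n_est = lsqr(K, tau_noisy, damp=1e-2)[0]
print(np.linalg.norm(n_est - n_true) / np.linalg.norm(n_true))
```

The `damp` parameter plays the role the paper assigns to wavelength selection: it suppresses the poorly constrained directions of the coefficient matrix so that small measurement noise does not blow up in the retrieved distribution.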
Measuring Dark Matter Profiles Non-Parametrically in Dwarf Spheroidals: An Application to Draco
NASA Astrophysics Data System (ADS)
Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Drory, Niv; Williams, Michael J.
2013-02-01
We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 <= r <= 700 pc. The profile for r >= 20 pc is well fit by a power law with slope α = -1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.
Non-parametrically Measuring Dark Matter Profiles in the Milky Way's Dwarf Spheroidals
NASA Astrophysics Data System (ADS)
Jardel, John; Gebhardt, K.
2013-01-01
The Milky Way's population of dwarf spheroidal (dSph) satellites has received much attention as a test site for the Cold Dark Matter (CDM) model for structure formation. Dynamical modeling, using the motions of the stars to trace the unknown mass distribution, is well-suited to test predictions of CDM by measuring the radial density profiles of the dark matter (DM) halos in which the dSphs reside. These studies reveal DM profiles with constant-density cores, in contrast to the cuspy profiles predicted from DM-only simulations. To resolve this discrepancy, many believe that feedback from baryons can alter the DM profiles and turn cusps into cores. Since it is difficult to simulate these complex baryonic processes with high fidelity, there are not many robust predictions for how feedback should affect the dSphs. We therefore do not know the type of DM profile to look for in these systems. This motivates a study to measure the DM profiles of dSphs non-parametrically to detect profiles other than the traditional cored and cuspy profiles most studies explore. I will present early results from a study using orbit-based models to non-parametrically measure the DM profiles of several of the bright Milky Way dSphs. The DM profiles measured will place observational constraints on the effects of feedback in low-mass galaxies.
Non-parametric estimation and doubly-censored data: general ideas and applications to AIDS.
Jewell, N P
In many epidemiologic studies of human immunodeficiency virus (HIV) disease, interest focuses on the distribution of the length of the interval of time between two events. In many such cases, statistical estimation of properties of this distribution is complicated by the fact that observation of the times of both events is subject to interval censoring, so that the length of time between the events is never observed exactly. Following DeGruttola and Lagakos, we call such data doubly-censored. Jewell, Malani and Vittinghoff showed that, with certain assumptions and for a particular doubly-censored data structure, non-parametric maximum likelihood estimation of the interval length distribution is equivalent to non-parametric estimation of a mixing distribution. Here, we extend these ideas to various other kinds of doubly-censored data. We consider application of the methods to various studies generated by investigations into the natural history of HIV disease, with particular attention given to estimation of the distribution of time between infection of an individual (an index case) and transmission of HIV to their sexual partner.
Kvist, Kajsa; Gerster, Mette; Andersen, Per Kragh; Kessing, Lars Vedel
2007-12-30
For recurrent events there is evidence that misspecification of the frailty distribution can cause severe bias in estimated regression coefficients (Am. J. Epidemiol 1998; 149:404-411; Statist. Med. 2006; 25:1672-1684). In this paper we adapt a procedure originally suggested in (Biometrika 1999; 86:381-393) for parallel data for checking the gamma frailty to recurrent events. To apply the model checking procedure, a consistent non-parametric estimator for the marginal gap time distributions is needed. This is in general not possible due to induced dependent censoring in the recurrent events setting, however, in (Biometrika 1999; 86:59-70) a non-parametric estimator for the joint gap time distributions based on the principle of inverse probability of censoring weights is suggested. Here, we attempt to apply this estimator in the model checking procedure and the performance of the method is investigated with simulations and applied to Danish registry data. The method is further investigated using the usual Kaplan-Meier estimator and a marginalized estimator for the marginal gap time distributions. We conclude that the procedure only works when the recurrent event is common and when the intra-individual association between gap times is weak.
NASA Astrophysics Data System (ADS)
Cotini, Stefano; Ripamonti, Emanuele; Caccianiga, Alessandro; Colpi, Monica; Della Ceca, Roberto; Mapelli, Michela; Severgnini, Paola; Segreto, Alberto
2013-05-01
We investigate the possible link between mergers and the enhanced activity of supermassive black holes (SMBHs) at the centre of galaxies, by comparing the merger fraction of a local sample (0.003 ≤ z < 0.03) of active galaxies - 59 active galactic nuclei host galaxies selected from the All-Sky Swift Burst Alert Telescope (BAT) Survey - with an appropriate control sample (247 sources extracted from the HyperLeda catalogue) that has the same redshift distribution as the BAT sample. We detect the interacting systems in the two samples on the basis of non-parametric structural indexes of concentration (C), asymmetry (A), clumpiness (S), Gini coefficient (G) and second-order momentum of light (M20). In particular, we propose a new morphological criterion, based on a combination of all these indexes, that improves the identification of interacting systems. We also present a new software - PyCASSo (PYTHON CAS software) - for the automatic computation of the structural indexes. After correcting for the completeness and reliability of the method, we find that the fraction of interacting galaxies among the active population (20^{+7}_{-5} per cent) exceeds the merger fraction of the control sample (4^{+1.7}_{-1.2} per cent). Choosing a mass-matched control sample leads to equivalent results, although with slightly lower statistical significance. Our findings support the scenario in which mergers trigger the nuclear activity of SMBHs.
Computational complexity of Boolean functions
NASA Astrophysics Data System (ADS)
Korshunov, Aleksei D.
2012-02-01
Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Popławski, Nikodem J.
2016-04-01
The coupling between spin and torsion in the Einstein-Cartan-Sciama-Kibble theory of gravity generates gravitational repulsion at very high densities, which prevents a singularity in a black hole and may create there a new universe. We show that quantum particle production in such a universe near the last bounce, which represents the Big Bang, gives the dynamics that solves the horizon, flatness, and homogeneity problems in cosmology. For a particular range of the particle production coefficient, we obtain a nearly constant Hubble parameter that gives an exponential expansion of the universe with more than 60 e-folds, which lasts about ~10^-42 s. This scenario can thus explain cosmic inflation without requiring a fundamental scalar field and reheating. From the obtained time dependence of the scale factor, we follow the prescription of Ellis and Madsen to reconstruct in a non-parametric way a scalar field potential which gives the same dynamics of the early universe. This potential gives the slow-roll parameters of cosmic inflation, from which we calculate the tensor-to-scalar ratio, the scalar spectral index of density perturbations, and its running as functions of the production coefficient. We find that these quantities do not significantly depend on the scale factor at the Big Bounce. Our predictions for these quantities are consistent with the Planck 2015 observations.
Titman, Andrew C
2014-07-01
A likelihood based approach to obtaining non-parametric estimates of the failure time distribution is developed for the copula based model of Wang et al. (Lifetime Data Anal 18:434-445, 2012) for current status data under dependent observation. Maximization of the likelihood involves a generalized pool-adjacent violators algorithm. The estimator coincides with the standard non-parametric maximum likelihood estimate under an independence model. Confidence intervals for the estimator are constructed based on a smoothed bootstrap. It is also shown that the non-parametric failure distribution is only identifiable if the copula linking the observation and failure time distributions is fully-specified. The method is illustrated on a previously analyzed tumorigenicity dataset.
Non-parametric analysis of LANDSAT maps using neural nets and parallel computers
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1991-01-01
Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by the LANDSAT satellite. Performance is evaluated by comparing classifications of a scene in the vicinity of Washington, DC. The problem of optimal selection of categories is addressed as a step in the classification process.
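In the spirit of the nearest neighbor approaches mentioned, the sketch below shows a minimal 1-nearest-neighbour classifier over per-pixel spectral features. The two toy "bands", labels, and threshold values are our own illustrative assumptions, not the paper's data:

```python
import numpy as np

def nearest_neighbor_classify(train_X, train_y, query_X):
    """1-nearest-neighbour: each query pixel receives the label of
    the closest training pixel in feature (spectral-band) space."""
    # Squared Euclidean distances: (n_queries, n_train)
    d2 = ((query_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    return train_y[np.argmin(d2, axis=1)]

# Toy 2-band "pixels": water is dark in both bands,
# vegetation is bright in the near-infrared band
train_X = np.array([[0.1, 0.1], [0.2, 0.1], [0.2, 0.8], [0.3, 0.9]])
train_y = np.array(["water", "water", "vegetation", "vegetation"])
query_X = np.array([[0.15, 0.12], [0.25, 0.85]])
print(nearest_neighbor_classify(train_X, train_y, query_X))
```

Per-pixel classification like this parallelizes trivially across pixels, which is why such workloads suit the parallel computers the title mentions.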
The application of non-parametric statistical techniques to an ALARA programme.
Moon, J H; Cho, Y H; Kang, C S
2001-01-01
For the cost-effective reduction of occupational radiation dose (ORD) at nuclear power plants, it is necessary to identify which processes repeatedly incur high ORD during maintenance and repair operations. To identify these processes, point values such as the mean and median are generally used, but they sometimes lead to misjudgement since they cannot show other important characteristics such as dose distributions and the frequencies of radiation jobs. As an alternative, a non-parametric analysis method is proposed that effectively identifies the processes of repetitive high ORD. As a case study, the method is applied to ORD data from maintenance and repair processes at Kori Units 3 and 4, pressurised water reactors with 950 MWe capacity that have been operating in Korea since 1986 and 1987 respectively, and the method is demonstrated to be an efficient way of analysing the data.
Browning, Sharon R; Browning, Brian L
2015-09-03
Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package.
Developing two non-parametric performance models for higher learning institutions
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat; Kashim, Rosmaini; Rahim, Rahela Abdul; Khan, Sahubar Ali Muhamed Nadhar
2016-08-01
Measuring the performance of higher learning institutions (HLIs) is a must for these institutions to improve their excellence. This paper focuses on the formation of two performance models: efficiency and effectiveness models, by utilizing a non-parametric method, Data Envelopment Analysis (DEA). The proposed models are validated by measuring the performance of 16 public universities in Malaysia for the year 2008. However, since data for one of the variables were unavailable, an estimate was used as a proxy for the real data. The results show that the average efficiency and effectiveness scores were 0.817 and 0.900 respectively, while six universities were fully efficient and eight universities were fully effective. A total of six universities were both efficient and effective. It is suggested that the two proposed performance models would work as complementary methods to the existing performance appraisal method or as alternative methods in monitoring the performance of HLIs, especially in Malaysia.
Assessing T cell clonal size distribution: a non-parametric approach.
Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V
2014-01-01
Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power-law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
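A standard way to obtain a quantitatively meaningful decay exponent for power-law-distributed clone sizes is the continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)). The sketch below applies it to synthetic data (the function name, x_min choice, and sampling scheme are ours, not the authors' pipeline):

```python
import math
import random

def powerlaw_exponent_mle(sizes, x_min=1.0):
    """Continuous maximum-likelihood estimate of the decay exponent
    alpha, assuming clone sizes follow p(x) ~ x^(-alpha) for x >= x_min."""
    xs = [x for x in sizes if x >= x_min]
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

# Synthetic clone sizes drawn from a power law with alpha = 2.5
# via inverse-CDF sampling: x = u^(-1/(alpha - 1)) for u in (0, 1]
random.seed(42)
alpha_true = 2.5
samples = [(1 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]
alpha_hat = powerlaw_exponent_mle(samples)
print(round(alpha_hat, 2))
```

The MLE avoids the well-known bias of fitting a straight line to a log-log histogram, which matters when comparing exponents between donors.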
Zerzucha, Piotr; Boguszewska, Dominika; Zagdańska, Barbara; Walczak, Beata
2012-03-16
Spot detection is a mandatory step in all available software packages dedicated to the analysis of 2D gel images. As the majority of spots do not represent individual proteins, spot detection can obscure the results of data analysis significantly. This problem can be overcome by a pixel-level analysis of 2D images. Differences between the spot and the pixel-level approaches are demonstrated by variance analysis for real data sets (part of a larger research project initiated to investigate the molecular mechanism of the response of the potato to drought stress). As the method of choice for the analysis of data variation, the non-parametric MANOVA was chosen. NP-MANOVA is recommended as a flexible and very fast tool for the evaluation of the statistical significance of the factor(s) studied.
Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling.
Karsch, Kevin; Liu, Ce; Kang, Sing Bing
2014-11-01
We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.
Yaghotipoor, Anita; Farshadfar, E
2007-08-15
In order to determine the phenotypic stability and the contribution of yield components to the phenotypic stability of grain yield, 21 genotypes of chickpea were evaluated in a randomized complete block design with three replications under rainfed and irrigated conditions at the College of Agriculture, Razi University of Kermanshah, Iran, across 4 years. Non-parametric combined analysis of variance showed highly significant differences for genotypes and genotype-environment interaction, indicating the presence of genetic variation and the possibility of selecting for stable genotypes. Genotype number 8 (Filip92-9c) had the minimum Si(2); ranking yield stability and grain yield in one parameter also revealed that Filip92-9c was the most desirable variety for both yield and yield stability. Component analysis using the Ci-value displayed that the number of shrubs per unit area makes the largest contribution to the phenotypic stability of grain yield.
Bandeen-Roche, Karen; Ning, Jing
2008-03-01
Most research on the study of associations among paired failure times has either assumed time invariance or been based on complex measures or estimators. Little has accommodated competing risks. This paper targets the conditional cause-specific hazard ratio, henceforth called the cause-specific cross ratio, a recent modification of the conditional hazard ratio designed to accommodate competing risks data. Estimation is accomplished by an intuitive, non-parametric method that localizes Kendall's tau. Time variance is accommodated through a partitioning of space into 'bins' between which the strength of association may differ. Inferential procedures are developed, small-sample performance is evaluated and the methods are applied to the investigation of familial association in dementia onset.
Robust non-parametric tests for complex-repeated measures problems in ophthalmology.
Brombin, Chiara; Midena, Edoardo; Salmaso, Luigi
2013-12-01
The NonParametric Combination methodology (NPC) of dependent permutation tests allows the experimenter to face many complex multivariate testing problems and represents a convincing and powerful alternative to standard parametric methods. The main advantage of this approach lies in its flexibility in handling any type of variable (categorical and quantitative, with or without missing values) while at the same time taking dependencies among those variables into account without the need to model them. The NPC methodology can deal with repeated measures, paired data, restricted alternative hypotheses, missing data (completely at random or not), and high-dimensional, small-sample-size data. Hence, NPC methodology can offer a significant contribution to successful research in biomedical studies with several endpoints, since it provides reasonably efficient solutions and clear interpretations of inferential results. Pesarin F. Multivariate permutation tests: with application in biostatistics. Chichester-New York: John Wiley & Sons, 2001; Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester, UK: John Wiley & Sons, 2010. We focus on non-parametric permutation solutions to two real case studies in ophthalmology concerning complex repeated measures problems. For each data set, different analyses are presented, thus highlighting characteristic aspects of the data structure itself. Our goal is to present different solutions to multivariate complex case studies, guiding researchers/readers to choose, from various possible interpretations of a problem, the one that has the highest flexibility and statistical power under a set of less stringent assumptions. MATLAB code has been implemented to carry out the analyses.
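The joint-permutation idea behind NPC can be sketched in a few lines. The following Python fragment is illustrative only; the function name, the two-group design, and the default settings are my assumptions, not Pesarin's notation. Group labels are permuted jointly across all endpoints, and the partial permutation p-values are combined with Fisher's combining function:

```python
import numpy as np

def npc_fisher(x, groups, n_perm=300, seed=0):
    """Minimal NPC sketch for two groups and k endpoints.

    Group labels are permuted jointly across all endpoints, so the
    dependence structure among variables is preserved without
    modelling it; partial p-values are combined with Fisher's function.
    """
    rng = np.random.default_rng(seed)
    x, groups = np.asarray(x, float), np.asarray(groups)

    def stat(g):  # per-endpoint absolute difference in group means
        return np.abs(x[g == 1].mean(axis=0) - x[g == 0].mean(axis=0))

    # Row 0 is the observed statistic; the rest come from permutations.
    pool = np.vstack([stat(groups)] +
                     [stat(rng.permutation(groups)) for _ in range(n_perm)])
    # Partial p-value of every row, endpoint by endpoint.
    pvals = np.stack([(pool[:, j][None, :] >= pool[:, j][:, None]).mean(axis=1)
                      for j in range(pool.shape[1])], axis=1)
    t = -2.0 * np.log(pvals).sum(axis=1)   # Fisher combining function
    return (t >= t[0]).mean()              # global permutation p-value

# Two correlated endpoints, group 1 shifted upwards on both.
rng = np.random.default_rng(1)
base = rng.normal(size=(30, 1))
data = np.hstack([base, base]) + rng.normal(scale=0.5, size=(30, 2))
labels = np.repeat([0, 1], 15)
data[labels == 1] += 3.0
p_global = npc_fisher(data, labels)
```

Because labels are permuted as whole rows, any correlation among the endpoints is carried into the null distribution automatically, which is the key property the abstract emphasizes.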
NASA Astrophysics Data System (ADS)
Gallego, A.; Benavent-Climent, A.; Romo-Melo, L.
2015-08-01
The paper proposes a new application of non-parametric statistical processing of signals recorded from vibration tests for damage detection and evaluation on I-section steel segments. The steel segments investigated constitute the energy dissipating part of a new type of hysteretic damper that is used for passive control of buildings and civil engineering structures subjected to earthquake-type dynamic loadings. Two I-section steel segments with different levels of damage were instrumented with piezoceramic sensors and subjected to controlled white noise random vibrations. The signals recorded during the tests were processed using two non-parametric methods (the power spectral density method and the frequency response function method) that had never previously been applied to hysteretic dampers. The appropriateness of these methods for quantifying the level of damage on the I-shape steel segments is validated experimentally. Based on the results of the random vibrations, the paper proposes a new index that predicts the level of damage and the proximity of failure of the hysteretic damper.
Metacognition: computation, biology and function
Fleming, Stephen M.; Dolan, Raymond J.; Frith, Christopher D.
2012-01-01
Many complex systems maintain a self-referential check and balance. In animals, such reflective monitoring and control processes have been grouped under the rubric of metacognition. In this introductory article to a Theme Issue on metacognition, we review recent and rapidly progressing developments from neuroscience, cognitive psychology, computer science and philosophy of mind. While each of these areas is represented in detail by individual contributions to the volume, we take this opportunity to draw links between disciplines, and highlight areas where further integration is needed. Specifically, we cover the definition, measurement, neurobiology and possible functions of metacognition, and assess the relationship between metacognition and consciousness. We propose a framework in which level of representation, order of behaviour and access consciousness are orthogonal dimensions of the conceptual landscape. PMID:22492746
NASA Astrophysics Data System (ADS)
Löw, Fabian; Conrad, Christopher; Michel, Ulrich
2015-10-01
This study addressed the classification of multi-temporal satellite data from RapidEye by considering different classifier algorithms and decision fusion. Four non-parametric classifier algorithms, decision tree (DT), random forest (RF), support vector machine (SVM), and multilayer perceptron (MLP), were applied to map crop types in various irrigated landscapes in Central Asia. A novel decision fusion strategy to combine the outputs of the classifiers was proposed. This approach is based on randomly selecting subsets of the input dataset and aggregating the probabilistic outputs of the base classifiers with another meta-classifier. During the decision fusion, the reliability of each base classifier algorithm was considered in order to exclude less reliable inputs on a per-class basis. The spatial and temporal transferability of the classifiers was evaluated using data sets from four agricultural landscapes with different spatial extents and from different years. A detailed accuracy assessment showed that none of the stand-alone classifiers was the single best performer. Despite the very good performance of the base classifiers, there was still up to 50% disagreement between the maps produced by the two single best classifiers, RF and SVM. The proposed fusion strategy, however, increased overall accuracies by up to 6%. In addition, it was less sensitive to reduced training set sizes and produced more realistic land use maps with less speckle. The proposed fusion approach was also more transferable to data sets from other years, i.e. it resulted in higher accuracies for the investigated classes. The fusion approach is computationally efficient and appears well suited for mapping diverse crop categories based on sensors with a high repetition rate and spatial resolution similar to RapidEye, for instance the upcoming Sentinel-2 mission.
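A minimal sketch of reliability-weighted decision fusion, under stated assumptions: the per-class probability maps of the base classifiers are averaged after dropping classifiers whose reliability falls below a threshold. The paper's meta-classifier stage is simplified here to soft voting, and all names and the toy outputs are hypothetical:

```python
import numpy as np

def fuse_probabilities(prob_maps, reliabilities, threshold=0.5):
    """Soft-voting fusion of per-classifier class-probability outputs.

    prob_maps     : list of (n_pixels, n_classes) arrays, one per classifier.
    reliabilities : one score per classifier; low scorers are excluded.
    Returns the fused class label per pixel.
    """
    keep = [p for p, r in zip(prob_maps, reliabilities) if r >= threshold]
    if not keep:
        raise ValueError("no classifier met the reliability threshold")
    fused = np.mean(keep, axis=0)   # average the retained probability maps
    return fused.argmax(axis=-1)    # most probable class per pixel

rf  = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])   # toy classifier outputs
svm = np.array([[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]])
labels = fuse_probabilities([rf, svm], [0.9, 0.8])
```

Excluding a classifier (e.g. giving the second one reliability 0.3) changes the fused labels wherever the remaining classifiers disagree with it, which is exactly the per-class exclusion effect the abstract describes.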
ERIC Educational Resources Information Center
Maydeu-Olivares, Albert
2005-01-01
Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…
Karathanasis, Nestoras; Tsamardinos, Ioannis
2016-01-01
Background The advance of omics technologies has made it possible to measure several data modalities on a system of interest. In this work, we illustrate how the Non-Parametric Combination methodology, namely NPC, can be used for simultaneously assessing the association of different molecular quantities with an outcome of interest. We argue that NPC methods have several potential applications in integrating heterogeneous omics technologies, for example identifying genes whose methylation and transcriptional levels are jointly deregulated, or finding proteins whose abundance shows the same trends as the expression of their encoding genes. Results We implemented the NPC methodology within “omicsNPC”, an R function specifically tailored to the characteristics of omics data. We compare omicsNPC against a range of alternative methods on simulated as well as on real data. Comparisons on simulated data point out that omicsNPC produces unbiased/calibrated p-values and performs equally well as, or significantly better than, the other methods included in the study; furthermore, the analysis of real data shows that omicsNPC (a) exhibits higher statistical power than other methods, (b) is easily applicable in a number of different scenarios, and (c) produces results with improved biological interpretability. Conclusions The omicsNPC function behaves competitively in all comparisons conducted in this study. Taking into account that the method (i) requires minimal assumptions, (ii) can be used on different study designs and (iii) captures the dependencies among heterogeneous data modalities, omicsNPC provides a flexible and statistically powerful solution for the integrative analysis of different omics data. PMID:27812137
Non-parametric photic entrainment of Djungarian hamsters with different rhythmic phenotypes.
Schöttner, Konrad; Hauer, Jane; Weinert, Dietmar
To investigate the role of non-parametric light effects in entrainment, Djungarian hamsters of two different circadian phenotypes were exposed to skeleton photoperiods, or to light pulses at different circadian times, to compile phase response curves (PRCs). Wild-type (WT) hamsters show daily rhythms of locomotor activity in accord with the ambient light/dark conditions, with activity onset and offset strongly coupled to light-off and light-on, respectively. Hamsters of the delayed activity onset (DAO) phenotype, in contrast, progressively delay their activity onset, whereas activity offset remains coupled to light-on. The present study was performed to better understand the underlying mechanisms of this phenomenon. Hamsters of DAO and WT phenotypes were kept first under standard housing conditions with a 14:10 h light-dark cycle, and then exposed to skeleton photoperiods (one or two 15-min light pulses of 100 lx at the times of the former light-dark and/or dark-light transitions). In a second experiment, hamsters of both phenotypes were transferred to constant darkness and allowed to free-run until the lengths of the active (α) and resting (ρ) periods were equal (α:ρ = 1). At this point, animals were then exposed to light pulses (100 lx, 15 min) at different circadian times (CTs). Phase and period changes were estimated separately for activity onset and offset. When exposed to skeleton-photoperiods with one or two light pulses, the daily activity patterns of DAO and WT hamsters were similar to those obtained under conditions of a complete 14:10 h light-dark cycle. However, in the case of giving only one light pulse at the time of the former light-dark transition, animals temporarily free-ran until activity offset coincided with the light pulse. These results show that photic entrainment of the circadian activity rhythm is attained primarily via non-parametric mechanisms, with the "morning" light pulse being the essential cue. In the second experiment, typical
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series
Keshmiri, Soheil; Sumioka, Hidenobu; Yamazaki, Ryuji; Ishiguro, Hiroshi
2017-01-01
We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they offer for detection in real-world scenarios (e.g., the difficulty of a conversation). Our approach takes advantage of the intrinsic linearity of the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender difference effect on the performance of the classifiers (with male data exhibiting a higher non-linearity) along with left-lateralized activation in both genders, with higher specificity in females. PMID:28217088
Non-parametric three-way mixed ANOVA with aligned rank tests.
Oliver-Rodríguez, Juan C; Wang, X T
2015-02-01
Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods.
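The alignment step at the heart of the aligned rank test can be sketched as follows. This is a simplified two-way, between-subjects version with ordinal ranks; the paper's three-way repeated-measures generalization and its degrees-of-freedom adjustments are not reproduced, and the function name is invented:

```python
import numpy as np

def align_ranks_interaction(y, a, b):
    """Aligned ranks for the A×B interaction (illustrative sketch).

    y : 1-D response array; a, b : integer factor labels per observation.
    Alignment subtracts the estimated grand mean and both main effects,
    leaving interaction plus error, and the residuals are then ranked.
    """
    y, a, b = np.asarray(y, float), np.asarray(a), np.asarray(b)
    grand = y.mean()
    a_mean = {lv: y[a == lv].mean() for lv in np.unique(a)}
    b_mean = {lv: y[b == lv].mean() for lv in np.unique(b)}
    aligned = np.array([y[i] - a_mean[a[i]] - b_mean[b[i]] + grand
                        for i in range(len(y))])
    # Ordinal ranks (ties broken arbitrarily); mid-ranks are standard.
    return aligned.argsort().argsort() + 1.0

# Pure interaction pattern: only the (a=1, b=1) cell is elevated.
r = align_ranks_interaction([0.0, 0.0, 0.0, 10.0], [0, 0, 1, 1], [0, 1, 0, 1])
```

The returned ranks would then be submitted to an ordinary ANOVA in which only the interaction term is interpreted; in practice mid-ranks (e.g. `scipy.stats.rankdata`) would replace the ordinal ranks used here.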
Trend Analysis of Golestan's Rivers Discharges Using Parametric and Non-parametric Methods
NASA Astrophysics Data System (ADS)
Mosaedi, Abolfazl; Kouhestani, Nasrin
2010-05-01
Climate change and its consequences are among the major problems affecting human life, and climate change is expected to alter river discharges. The aim of this research is to analyze trends in the seasonal and annual river discharges of Golestan province (Iran). Four trend-analysis methods, including conjunction point, linear regression, Wald-Wolfowitz and Mann-Kendall, were applied to river discharges over seasonal and annual periods at significance levels of 95% and 99%. First, daily discharge data from 12 hydrometric stations covering 42 years (1965-2007) were selected; after common statistical checks such as homogeneity tests (G-B and M-W), the four trend-analysis tests were applied. Results show that for the summer time series at all stations there are decreasing trends at the 99% significance level according to the Mann-Kendall (M-K) test. For the autumn time series, all four methods gave similar results. For the other periods, the results of the four tests were broadly similar, although for some stations they differed. Keywords: Trend Analysis, Discharge, Non-parametric methods, Wald-Wolfowitz, The Mann-Kendall test, Golestan Province.
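Of the four tests, the Mann-Kendall statistic is the simplest to state. A minimal standard-library sketch (no tie correction, which real discharge series with repeated values would require) is:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test, no tie correction (illustrative sketch).

    Returns the S statistic and a two-sided p-value from the normal
    approximation with the usual continuity correction."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - 1 if s > 0 else s + 1) / math.sqrt(var_s)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
    return s, p

# A strictly increasing 42-year series attains the maximal S = n(n-1)/2.
s_up, p_up = mann_kendall(list(range(42)))
```

A strictly monotone series attains the extreme S = ±n(n−1)/2, and the significance levels quoted in the abstract (95% and 99%) correspond to thresholds on the p-value returned here.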
Non-parametric estimation of a time-dependent predictive accuracy curve.
Saha-Chaudhuri, P; Heagerty, P J
2013-01-01
A major biomedical goal associated with evaluating a candidate biomarker or developing a predictive model score for event-time outcomes is to accurately distinguish incident cases at each time t from the controls surviving beyond t throughout the entire study period. Extensions of standard binary classification measures such as time-dependent sensitivity, specificity, and receiver operating characteristic (ROC) curves have been developed in this context (Heagerty, P. J., and others, 2000. Time-dependent ROC curves for censored survival data and a diagnostic marker. Biometrics 56, 337-344). We propose a direct, non-parametric method to estimate the time-dependent area under the curve (AUC), which we refer to as the weighted mean rank (WMR) estimator. The proposed estimator performs well relative to the semi-parametric AUC curve estimator of Heagerty and Zheng (2005. Survival model predictive accuracy and ROC curves. Biometrics 61, 92-105). We establish the asymptotic properties of the proposed estimator and show that the accuracy of markers can be compared very simply using the difference in the WMR statistics. Estimators of pointwise standard errors are provided.
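The quantity being estimated can be illustrated with a deliberately naive incident/dynamic AUC at a single time t. This sketch uses no censoring weights and no smoothing, unlike the WMR estimator itself, and all names are hypothetical:

```python
import numpy as np

def incident_auc(marker, event_time, t, window=0.5):
    """Naive incident/dynamic AUC at time t (illustrative sketch).

    Cases fail within [t, t + window); controls survive beyond
    t + window. The AUC is the usual rank (Mann-Whitney) probability
    that a case's marker exceeds a control's, with ties counted half."""
    marker = np.asarray(marker, float)
    event_time = np.asarray(event_time, float)
    cases = marker[(event_time >= t) & (event_time < t + window)]
    controls = marker[event_time >= t + window]
    if len(cases) == 0 or len(controls) == 0:
        return float("nan")
    wins = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (wins + 0.5 * ties) / (len(cases) * len(controls))
```

Evaluating this on a grid of t values traces out the (noisy) accuracy curve that the WMR estimator smooths and weights properly.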
Two non-parametric methods for derivation of constraints from radiotherapy dose-histogram data
NASA Astrophysics Data System (ADS)
Ebert, M. A.; Gulliford, S. L.; Buettner, F.; Foo, K.; Haworth, A.; Kennedy, A.; Joseph, D. J.; Denham, J. W.
2014-07-01
Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose-histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization.
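The ROC-based cut-point search can be sketched with Youden's index, a common optimality criterion; the free step-down resampling correction for multiple testing described in the abstract is deliberately omitted, and all names here are illustrative:

```python
import numpy as np

def best_cutpoint(values, event):
    """Pick the cut-point on a dose-histogram variable that maximizes
    Youden's J = sensitivity + specificity - 1 (illustrative ROC method).

    values : dose-histogram free parameter (e.g. % volume receiving a dose)
    event  : 1/True if the complication occurred."""
    values = np.asarray(values, float)
    event = np.asarray(event, bool)
    best_c, best_j = None, -np.inf
    for c in np.unique(values):
        pred = values >= c        # "high dose" predicts the complication
        sens = (pred & event).sum() / max(event.sum(), 1)
        spec = (~pred & ~event).sum() / max((~event).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

Because this scans every candidate cut-point, the abstract's point stands: many correlated tests are performed, and a resampling-based correction is needed before quoting significance.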
Alternative methods of marginal abatement cost estimation: Non- parametric distance functions
Boyd, G.; Molburg, J.; Prince, R.
1996-12-31
This project implements an economic methodology to measure the marginal abatement costs of pollution by measuring the lost revenue implied by an incremental reduction in pollution. It utilizes the observed performance, or 'best practice', of facilities to infer the marginal abatement cost. The initial stage of the project uses data from an earlier published study on productivity trends and pollution in electric utilities to test this approach and to provide insight into its application in the cost-benefit analysis studies needed by the Department of Energy. The basis for this marginal abatement cost estimation is a relationship between the outputs and the inputs of a firm or plant. Given a fixed set of input resources, including quasi-fixed inputs like plant and equipment and variable inputs like labor and fuel, a firm is able to produce a mix of outputs. This paper uses this theoretical view of the joint production process to implement a methodology and obtain empirical estimates of marginal abatement costs. These estimates are compared to engineering estimates.
Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L
2013-02-01
The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such complex evaluation processes. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status; the membership of water quality in the various output fuzzy sets or classes is reported with percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies.
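Sampling from a kernel-smoothed histogram, the Monte Carlo building block described above, reduces to "pick an observation, add kernel noise". A hedged sketch with a Gaussian kernel and Silverman's rule bandwidth; the variable names and the toy monitoring record are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def kde_sample(data, n, rng=rng):
    """Draw n Monte Carlo samples from a Gaussian kernel density
    estimate of `data`: choose observations at random, then jitter
    each one with kernel noise of bandwidth h (Silverman's rule)."""
    data = np.asarray(data, float)
    h = 1.06 * data.std(ddof=1) * len(data) ** (-0.2)   # Silverman's rule
    picks = rng.choice(data, size=n)
    return picks + rng.normal(0.0, h, size=n)

obs = rng.normal(6.5, 0.8, size=200)   # toy monitoring record (e.g. mg/L)
sim = kde_sample(obs, 5000)            # stochastic inputs for the fuzzy index
```

Each simulated draw would then be pushed through the fuzzy inference system, so the index inherits the full input distribution rather than a single point value.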
Revisiting the Distance Duality Relation using a non-parametric regression method
NASA Astrophysics Data System (ADS)
Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha
2016-07-01
The interdependence of the luminosity distance, D_L, and the angular diameter distance, D_A, given by the distance duality relation (DDR) is very significant in observational cosmology. It is very closely tied to the temperature-redshift relation of the Cosmic Microwave Background (CMB) radiation. Any deviation from η(z) ≡ D_L/[D_A (1+z)^2] = 1 indicates a possible emergence of new physics. Our aim in this work is to check the consistency of these relations using a non-parametric regression method, namely LOESS with SIMEX. This technique avoids dependency on the cosmological model and works with a minimal set of assumptions. Further, to analyze the efficiency of the methodology, we simulate a dataset of 020 points of η(z) data based on a phenomenological model η(z) = (1+z)^ε. The error on the simulated data points is obtained by using the temperature of the CMB radiation at various redshifts. For testing the distance duality relation, we use the JLA SNe Ia data for luminosity distances, while the angular diameter distances are obtained from radio galaxies datasets. Since the DDR is linked with the CMB temperature-redshift relation, we also use the CMB temperature data to reconstruct η(z). It is important to note that with CMB data, we are able to study the evolution of the DDR up to a very high redshift, z = 2.418. In this analysis, we find no evidence of deviation from η = 1 within the 1σ region over the entire redshift range used in this analysis (0 < z <= 2.418).
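The LOESS half of the method (without the SIMEX measurement-error step) is just locally weighted linear regression. A minimal sketch with tricube weights, under the assumption of distinct abscissae; the function name and defaults are illustrative:

```python
import numpy as np

def loess_point(x, y, x0, frac=0.3):
    """Local linear fit at x0 using the nearest frac*len(x) points,
    weighted by the tricube kernel. Minimal LOESS sketch; no SIMEX,
    no robustness iterations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(int(frac * len(x)), 3)
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                 # k nearest neighbours of x0
    dmax = d[idx].max()
    if dmax == 0:
        dmax = 1.0
    w = (1 - (d[idx] / dmax) ** 3) ** 3     # tricube weights
    A = np.vstack([np.ones(k), x[idx]]).T   # local linear design matrix
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
    return beta[0] + beta[1] * x0           # fitted value at x0
```

Sweeping x0 over a redshift grid traces out a smooth, model-independent reconstruction of η(z); on exactly linear data the local fit is exact.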
NASA Astrophysics Data System (ADS)
Hassani, Hossein; Huang, Xu; Gupta, Rangan; Ghodsi, Mansi
2016-10-01
In a recent paper, Gupta et al. (2015) analyzed whether sunspot numbers cause global temperatures based on monthly data covering the period 1880:1-2013:9. The authors find that the standard time domain Granger causality test fails to reject the null hypothesis that sunspot numbers do not cause global temperatures for both the full sample and the sub-samples, namely 1880:1-1936:2, 1936:3-1986:11 and 1986:12-2013:9 (identified based on tests of structural breaks). However, the frequency domain causality test detects predictability for the full sample at short (2-2.6 months) cycle lengths, but not for the sub-samples. But since full-sample causality cannot be relied upon due to structural breaks, Gupta et al. (2015) conclude that the evidence of causality running from sunspot numbers to global temperatures is weak and inconclusive. Given the importance of the issue of global warming, our current paper aims to revisit the question of whether sunspot numbers cause global temperatures, using the same data set and sub-samples used by Gupta et al. (2015), based on a non-parametric Singular Spectrum Analysis (SSA)-based causality test. Based on this test, however, we show that sunspot numbers have predictive ability for global temperatures in the three sub-samples, over and above the full sample. Thus, generally speaking, our non-parametric SSA-based causality test outperformed both the time domain and frequency domain causality tests and highlighted that sunspot numbers have always been important in predicting global temperatures.
NASA Astrophysics Data System (ADS)
Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.
2015-08-01
Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.
2009-01-01
Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
Semi-parametric and non-parametric methods for clinical trials with incomplete data.
O'Brien, Peter C; Zhang, David; Bailey, Kent R
2005-02-15
Last observation carried forward (LOCF) and analysis using only data from subjects who complete a trial (Completers) are commonly used techniques for analysing data in clinical trials with incomplete data when the endpoint is change from baseline at last scheduled visit. We propose two alternative methods. The semi-parametric method, which cumulates changes observed between consecutive time points, is conceptually similar to the familiar life-table method and corresponding Kaplan-Meier estimation when the primary endpoint is time to event. A non-parametric analogue of LOCF is obtained by carrying forward, not the observed value, but the rank of the change from baseline at the last observation for each subject. We refer to this method as the LRCF method. Both procedures retain the simplicity of LOCF and Completers analyses and, like these methods, do not require data imputation or modelling assumptions. In the absence of any incomplete data they reduce to the usual two-sample tests. In simulations intended to reflect chronic diseases that one might encounter in practice, LOCF was observed to produce markedly biased estimates and markedly inflated type I error rates when censoring was unequal in the two treatment arms. These problems did not arise with the Completers, Cumulative Change, or LRCF methods. Cumulative Change and LRCF were more powerful than Completers, and the Cumulative Change test provided more efficient estimates than the Completers analysis, in all simulations. We conclude that the Cumulative Change and LRCF methods are preferable to LOCF and Completers analyses. Mixed model repeated measures (MMRM) performed similarly to Cumulative Change and LRCF and makes somewhat less restrictive assumptions about missingness mechanisms, so that it is also a reasonable alternative to LOCF and Completers analyses.
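For concreteness, LOCF itself (the baseline the abstract argues against) is a one-liner per subject, while the proposed LRCF replaces the carried value with the rank of the last observed change. A sketch of the carrying-forward step, with invented names:

```python
def locf(visits):
    """Last observation carried forward for one subject's visit record;
    None marks a missed visit. Returns the completed series (leading
    missed visits stay None, since nothing has been observed yet)."""
    out, last = [], None
    for v in visits:
        if v is not None:
            last = v
        out.append(last)
    return out

completed = locf([0.0, 1.5, None, None])   # subject drops out after visit 2
```

LRCF would instead rank the subject's last observed change (1.5 here) among all subjects observed at that visit and carry that rank to the final analysis, making the endpoint comparison rank-based rather than value-based.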
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
A Non-parametric approach to measuring the K⁻π⁺ amplitudes in D⁺ → K⁻K⁺π⁺ decay
Link, J.M.; Yager, P.M.; Anjos, J.C.; Bediaga, I.; Castromonte, C.; Machado, A.A.; Magnin, J.; Massafferri, A.; de Miranda, J.M.; Pepe, I.M.; Polycarpo, E.; /Rio de Janeiro, CBPF /CINVESTAV, IPN /Colorado U. /Fermilab /Frascati /Guanajuato U. /Illinois U., Urbana /Indiana U. /Korea U. /Kyungpook Natl. U. /INFN, Milan /Milan U.
2006-12-01
Using a large sample of D⁺ → K⁻K⁺π⁺ decays collected by the FOCUS photoproduction experiment at Fermilab, we present the first non-parametric analysis of the K⁻π⁺ amplitudes in D⁺ → K⁻K⁺π⁺ decay. The technique is similar to that used for our non-parametric measurements of the D⁺ → K̄*⁰e⁺ν form factors. Although these results are in rough agreement with those of E687, we observe a wider S-wave contribution for the K̄*₀⁰(1430) than the standard PDG [1] Breit-Wigner parameterization. We have some weaker evidence for the existence of a new, D-wave component at low values of the K⁻π⁺ mass.
Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie
2015-09-01
Due to the use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation, was developed, and optimal bandwidths and testing methods were proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China, which were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for the three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems.
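A kernel-density SSD and its HC5 can be sketched as follows. This is an illustration with synthetic toxicity values, Silverman's rule-of-thumb bandwidth, and numerical CDF inversion, not the paper's optimized bandwidth selection:

```python
import numpy as np

def kde_hc5(log10_tox, bandwidth=None, grid=2048):
    """HC5 from a Gaussian-kernel SSD fitted to log10 toxicity values.
    Silverman's rule bandwidth by default; HC5 is the 5th percentile of
    the fitted distribution, found by inverting the numerical CDF."""
    x = np.sort(np.asarray(log10_tox, float))
    n = len(x)
    if bandwidth is None:
        bandwidth = 1.06 * x.std(ddof=1) * n ** (-1 / 5)  # Silverman's rule
    g = np.linspace(x[0] - 4 * bandwidth, x[-1] + 4 * bandwidth, grid)
    # mixture-of-Gaussians density: one kernel per species value
    dens = np.exp(-0.5 * ((g[:, None] - x[None, :]) / bandwidth) ** 2).sum(1)
    dens /= n * bandwidth * np.sqrt(2 * np.pi)
    cdf = np.cumsum(dens) * (g[1] - g[0])
    return 10 ** np.interp(0.05, cdf, g)   # back-transform to concentration

# synthetic acute toxicity values for eight species (concentration units)
hc5 = kde_hc5(np.log10([10, 20, 40, 80, 160, 320, 640, 1280]))
```

The HC5 falls below the most sensitive species value here because kernel smoothing places probability mass in the lower tail, which is exactly where the choice of bandwidth matters most.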
Ruiz-Sanchez, Eduardo
2015-12-01
The Neotropical woody bamboo genus Otatea is one of five genera in the subtribe Guaduinae. Of the eight described Otatea species, seven are endemic to Mexico and one is also distributed in Central and South America. Otatea acuminata has the widest geographical distribution of the eight species, and two of its recently collected populations do not match the known species morphologically. Parametric and non-parametric methods were used to delimit the species in Otatea using five chloroplast markers, one nuclear marker, and morphological characters. The parametric coalescent method and the non-parametric analysis supported the recognition of two distinct evolutionary lineages. Molecular clock estimates were used to date divergences in Otatea, placing the origin of the speciation events between the Late Miocene and the Late Pleistocene. The species delimitation analyses (parametric and non-parametric) identified the two populations of O. acuminata from Chiapas and Hidalgo as two separate evolutionary lineages, and these new species have morphological characters that separate them from O. acuminata s.s. The geological activity of the Trans-Mexican Volcanic Belt and the Isthmus of Tehuantepec may have isolated populations and limited gene flow between Otatea species, driving speciation. Based on the results found here, I describe Otatea rzedowskiorum and Otatea victoriae as two new species, morphologically different from O. acuminata.
Evaluation of world's largest social welfare scheme: An assessment using non-parametric approach.
Singh, Sanjeet
2016-08-01
Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) is the world's largest social welfare scheme in India for poverty alleviation through rural employment generation. This paper aims to evaluate and rank the performance of the states in India under the MGNREGA scheme. A non-parametric approach, Data Envelopment Analysis (DEA), is used to calculate the overall technical, pure technical, and scale efficiencies of states in India. The sample data are drawn from the annual official reports published by the Ministry of Rural Development, Government of India. Based on three selected input parameters (expenditure indicators) and five output parameters (employment generation indicators), I apply both input- and output-oriented DEA models to estimate how well the states utilize their resources and generate outputs during the financial year 2013-14. The relative performance evaluation has been made under the assumption of constant returns and also under variable returns to scale to assess the impact of scale on performance. The results indicate that the main sources of inefficiency are both scale and the managerial practices adopted. Eleven states are overall technically efficient and operate at the optimum scale, whereas 18 states are pure technically (managerially) efficient. It has been found that for some states it is necessary to alter the scheme size to perform at par with the best performing states. For inefficient states, optimal input and output targets, along with the resource savings and output gains, are calculated. Analysis shows that if all inefficient states operated at optimal input and output levels, on average 17.89% of total expenditure, a total amount of $780 million, could have been saved in a single year. Most of the inefficient states perform poorly when it comes to the participation of women and disadvantaged sections (SC&ST) in the scheme. In order to catch up with the performance of best performing states, inefficient states on average need to enhance
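The input-oriented, constant-returns (CCR) DEA model underlying such overall technical efficiency scores solves one small linear programme per decision-making unit. A sketch using scipy's linprog; the toy inputs/outputs are illustrative, not the paper's expenditure and employment indicators:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiencies.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). For each DMU k solve:
    min theta  s.t.  X.T @ lam <= theta * X[k],  Y.T @ lam >= Y[k],
    lam >= 0.  Decision variables: [theta, lam_1..lam_n]."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]               # minimise theta
        A_in = np.c_[-X[k][:, None], X.T]         # inputs:  X.T@lam - theta*x_k <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]     # outputs: Y.T@lam >= y_k
        A_ub = np.r_[A_in, A_out]
        b_ub = np.r_[np.zeros(m), -Y[k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# two DMUs, one input, one output: DMU 1 produces the same output with half the input
scores = ccr_efficiency([[2], [4]], [[4], [4]])
```

Adding the convexity constraint sum(lam) = 1 would give the variable-returns (BCC) model; the ratio of the two scores isolates scale efficiency, as in the comparison reported above.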
Validation of two (parametric vs non-parametric) daily weather generators
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Skalak, P.
2015-12-01
As the climate models (GCMs and RCMs) fail to satisfactorily reproduce the real-world surface weather regime, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. The stochastic weather generators are among the most popular downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological and other impact models used in assessing the sensitivity of various ecosystems to climate change/variability. To name their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run in various time steps and for multiple weather variables (the generators reproduce the correlations among variables), (iii) be interpolated (and run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of the daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption on the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes and maxima of duration of hot/cold/dry/wet spells); (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series
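The parametric (M&Rfi-style) precipitation component described above, a first-order Markov chain for wet/dry occurrence plus a Gamma distribution for wet-day amounts, can be sketched as follows. Parameter values are illustrative, and the seasonal and multi-variable structure of the real generator is omitted:

```python
import numpy as np

def simulate_precip(n_days, p_wd, p_ww, shape, scale, rng=None):
    """First-order Markov-chain / Gamma daily precipitation generator.
    p_wd: P(wet | yesterday dry); p_ww: P(wet | yesterday wet);
    wet-day amounts are drawn from Gamma(shape, scale). Day 0 is dry."""
    rng = np.random.default_rng(rng)
    wet = np.zeros(n_days, dtype=bool)
    for t in range(1, n_days):
        p = p_ww if wet[t - 1] else p_wd        # transition probability
        wet[t] = rng.random() < p
    # Gamma amounts on wet days, zero on dry days
    return np.where(wet, rng.gamma(shape, scale, n_days), 0.0)

precip = simulate_precip(10000, p_wd=0.2, p_ww=0.6, shape=0.8, scale=6.0, rng=1)
```

With these transition probabilities the stationary wet-day frequency is p_wd / (1 + p_wd - p_ww) = 1/3, and the mean wet-day amount is shape x scale = 4.8; validating a generator amounts to checking that such statistics (and the extremes and spell lengths mentioned above) match the observed series.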
On computation of Hough functions
NASA Astrophysics Data System (ADS)
Wang, Houjun; Boyd, John P.; Akmaev, Rashid A.
2016-04-01
Hough functions are the eigenfunctions of the Laplace tidal equation governing fluid motion on a rotating sphere with a resting basic state. Several numerical methods have been used in the past. In this paper, we compare two of those methods: normalized associated Legendre polynomial expansion and Chebyshev collocation. Neither method is widely used, but both have some advantages over the commonly used unnormalized associated Legendre polynomial expansion method. Comparable results are obtained using both methods. For the first method we note some details on numerical implementation. The Chebyshev collocation method was first used for the Laplace tidal problem by Boyd (1976) and is relatively easy to use. A compact MATLAB code is provided for this method. We also illustrate the importance and effect of including a parity factor in Chebyshev polynomial expansions for modes with odd zonal wave numbers.
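A full Hough-function solver is beyond the scope of a note, but the Chebyshev collocation machinery can be illustrated on a model eigenproblem, u'' = -λu with u(±1) = 0, whose exact eigenvalues are (nπ/2)². The differentiation matrix follows Trefethen's standard construction; applying the same machinery to the Laplace tidal equation would additionally require the parity factor discussed by the authors:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    x_j = cos(j*pi/N) (Trefethen, Spectral Methods in MATLAB, ch. 6)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

# model problem: u'' = -lambda * u, u(-1) = u(1) = 0
D, x = cheb(32)
D2 = (D @ D)[1:-1, 1:-1]          # drop boundary rows/cols (Dirichlet BCs)
lam = np.sort(-np.linalg.eigvals(D2).real)
```

The leading eigenvalues converge spectrally: with 33 collocation points the first few agree with (nπ/2)² essentially to machine precision.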
Neelakantan, S; Veng-Pedersen, P
2005-11-01
A novel numerical deconvolution method is presented that enables the estimation of drug absorption rates under time-variant disposition conditions. The method involves two components. (1) A disposition decomposition-recomposition (DDR) enabling exact changes in the unit impulse response (UIR) to be constructed based on centrally based clearance changes iteratively determined. (2) A non-parametric, end-constrained cubic spline (ECS) input response function estimated by cross-validation. The proposed DDR-ECS method compensates for disposition changes between the test and the reference administrations by using a "beta" clearance correction based on DDR analysis. The representation of the input response by the ECS method takes into consideration the complex absorption process and also ensures physiologically realistic approximations of the response. The stability of the new method to noisy data was evaluated by comprehensive simulations that considered different UIRs, various input functions, clearance changes and a novel scaling of the input function that includes the "flip-flop" absorption phenomena. The simulated input response was also analysed by two other methods and all three methods were compared for their relative performances. The DDR-ECS method provides better estimation of the input profile under significant clearance changes but tends to overestimate the input when there were only small changes in the clearance.
Frepoli, Cesare; Oriani, Luca
2006-07-01
In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values: peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as the 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation and interpretation of order statistics in safety analysis are not fully consistent within the industry. This has led to an extensive public debate among regulators and researchers which can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of the order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95th percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights on the ASTRUM statistical approach, with a more in-depth analysis of the pros and cons of the order statistics and of the Westinghouse approach in the implementation of this statistical methodology. (authors)
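The 124-run figure follows from the order-statistics (Wilks / Guba-Makai) sample-size formula: with N runs and d outputs each bounded by an extreme observed value, the confidence that the bounds cover the 95th percentile is a binomial tail. A sketch of the minimal-N computation:

```python
from math import comb

def min_runs(gamma=0.95, beta=0.95, d=1):
    """Smallest N such that extreme order statistics of N Monte Carlo runs
    bound the gamma-quantile of d outputs with confidence beta:
    requires P(Binomial(N, gamma) <= N - d) >= beta."""
    N = d
    while True:
        conf = sum(comb(N, j) * gamma**j * (1 - gamma) ** (N - j)
                   for j in range(N - d + 1))
        if conf >= beta:
            return N
        N += 1

runs = [min_runs(d=k) for k in (1, 2, 3)]   # classic 95/95 sample sizes
```

This reproduces the familiar 59 / 93 / 124 sequence; the 124 simulations quoted above are consistent with d = 3 (simultaneously bounding PCT, LMO and CWO).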
Proficiency Scaling Based on Conditional Probability Functions for Attributes
1993-10-01
4.1 Non-parametric regression estimates as probability functions for attributes. Non-parametric estimation of the unknown density function f from a plot ... as construction of confidence intervals for PFAs and further improvement of non-parametric estimation methods are not discussed in this paper. The ... parametric estimation of PFAs will be illustrated with the attribute mastery patterns of SAT M Section 4. In the next section, analysis results will be
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function, and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
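An ABC rejection sampler with a weighted distance between functional statistics can be sketched as follows. Here the functional statistic is an empirical CDF evaluated on a fixed grid, and the weights are simply inverse pointwise variances across simulations, a crude stand-in for the optimized weights proposed in the paper; the toy model and all constants are illustrative:

```python
import numpy as np

def abc_rejection(obs_stat, simulate, prior_sample,
                  n_sim=5000, accept_frac=0.02, rng=None):
    """ABC rejection with a variance-weighted L2 distance between
    observed and simulated functional statistics."""
    rng = np.random.default_rng(rng)
    thetas = np.array([prior_sample(rng) for _ in range(n_sim)])
    stats = np.array([simulate(t, rng) for t in thetas])
    w = 1.0 / (stats.var(axis=0) + 1e-12)       # down-weight noisy components
    d = np.sqrt(((stats - obs_stat) ** 2 * w).sum(axis=1))
    return thetas[d <= np.quantile(d, accept_frac)]  # accepted posterior sample

# toy model: data ~ N(theta, 1); functional statistic = empirical CDF on a grid
grid = np.linspace(-4, 8, 25)
def simulate(theta, rng):
    return (rng.normal(theta, 1.0, size=100)[:, None] <= grid).mean(axis=0)

obs = simulate(2.0, np.random.default_rng(1))   # "observed" data, true theta = 2
post = abc_rejection(obs, simulate, lambda r: r.uniform(-2, 6), rng=0)
```

The accepted parameter values concentrate around the true value; replacing the inverse-variance weights with weights tuned to minimize estimation error is the refinement the paper develops.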
Jewell, Nicholas P; Lei, Xiudong; Ghani, Azra C; Donnelly, Christl A; Leung, Gabriel M; Ho, Lai-Ming; Cowling, Benjamin J; Hedley, Anthony J
2007-04-30
For diseases with some level of associated mortality, the case fatality ratio measures the proportion of diseased individuals who die from the disease. In principle, it is straightforward to estimate this quantity from individual follow-up data that provides times from onset to death or recovery. In particular, in a competing risks context, the case fatality ratio is defined by the limiting value of the sub-distribution function, F(1)(t) = Pr(T
Computer Games Functioning as Motivation Stimulants
ERIC Educational Resources Information Center
Lin, Grace Hui Chin; Tsai, Tony Kung Wan; Chien, Paul Shih Chieh
2011-01-01
Numerous scholars have recommended that computer games can function as influential motivation stimulants for English learning, showing benefits as learning tools (Clarke and Dede, 2007; Dede, 2009; Klopfer and Squire, 2009; Liu and Chu, 2010; Mitchell, Dede & Dunleavy, 2009). This study aimed to further test and verify the above suggestion,…
NASA Astrophysics Data System (ADS)
Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.
2016-07-01
In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to the neural network or fuzzy system-based adaptive ILC schemes and the classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second even uses a single iterative variable provided that some bound information on system dynamics is known. As a result, the memory space in real-time ILC implementations is greatly reduced.
Andersson, Jesper L R; Graham, Mark S; Zsoldos, Enikő; Sotiropoulos, Stamatios N
2016-11-01
Despite its great potential in studying brain anatomy and structure, diffusion magnetic resonance imaging (dMRI) is marred by artefacts more than any other commonly used MRI technique. In this paper we present a non-parametric framework for detecting and correcting dMRI outliers (signal loss) caused by subject motion. Signal loss (dropout) affecting a whole slice, or a large connected region of a slice, is frequently observed in diffusion weighted images, leading to a set of unusable measurements. This is caused by bulk (subject or physiological) motion during the diffusion encoding part of the imaging sequence. We suggest a method to detect slices affected by signal loss and replace them by a non-parametric prediction, in order to minimise their impact on subsequent analysis. The outlier detection and replacement, as well as correction of other dMRI distortions (susceptibility-induced distortions, eddy currents (EC) and subject motion) are performed within a single framework, allowing the use of an integrated approach for distortion correction. Highly realistic simulations have been used to evaluate the method with respect to its ability to detect outliers (types 1 and 2 errors), the impact of outliers on retrospective correction of movement and distortion and the impact on estimation of commonly used diffusion tensor metrics, such as fractional anisotropy (FA) and mean diffusivity (MD). Data from a large imaging project studying older adults (the Whitehall Imaging sub-study) was used to demonstrate the utility of the method when applied to datasets with severe subject movement. The results indicate high sensitivity and specificity for detecting outliers and that their deleterious effects on FA and MD can be almost completely corrected.
Distributed Non-Parametric Representations for Vital Filtering: UW at TREC KBA 2014
2014-11-01
The emerging discipline of Computational Functional Anatomy
Miller, Michael I.; Qiu, Anqi
2010-01-01
Computational Functional Anatomy (CFA) is the study of functional and physiological response variables in anatomical coordinates. For this we focus on two things: (i) the construction of bijections (via diffeomorphisms) between the coordinatized manifolds of human anatomy, and (ii) the transfer (group action and parallel transport) of functional information into anatomical atlases via these bijections. We review advances in the unification of the bijective comparison of anatomical submanifolds via point-sets including points, curves and surface triangulations as well as dense imagery. We examine the transfer via these bijections of functional response variables into anatomical coordinates via group action on scalars and matrices in DTI as well as parallel transport of metric information across multiple templates which preserves the inner product. PMID:19103297
Analysis of Ventricular Function by Computed Tomography
Rizvi, Asim; Deaño, Roderick C.; Bachman, Daniel P.; Xiong, Guanglei; Min, James K.; Truong, Quynh A.
2014-01-01
The assessment of ventricular function, cardiac chamber dimensions and ventricular mass is fundamental for clinical diagnosis, risk assessment, therapeutic decisions, and prognosis in patients with cardiac disease. Although cardiac computed tomography (CT) is a noninvasive imaging technique often used for the assessment of coronary artery disease, it can also be utilized to obtain important data about left and right ventricular function and morphology. In this review, we will discuss the clinical indications for the use of cardiac CT for ventricular analysis, review the evidence on the assessment of ventricular function compared to existing imaging modalities such as cardiac MRI and echocardiography, provide a typical cardiac CT protocol for image acquisition and post-processing for ventricular analysis, and provide step-by-step instructions to acquire multiplanar cardiac views for ventricular assessment from the standard axial, coronal, and sagittal planes. Furthermore, both qualitative and quantitative assessments of ventricular function as well as sample reporting are detailed. PMID:25576407
New Computer Simulations of Macular Neural Functioning
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Doshay, D.; Linton, S.; Parnas, B.; Montgomery, K.; Chimento, T.
1994-01-01
We use high performance graphics workstations and supercomputers to study the functional significance of the three-dimensional (3-D) organization of gravity sensors. These sensors have a prototypic architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, 3-D versions run on a Cray Y-MP supercomputer. A semi-automated method of reconstruction of neural tissue from serial sections studied in a transmission electron microscope has been developed to eliminate tedious conventional photography. The reconstructions use a mesh as a step in generating a neural surface for visualization. Two meshes are required to model calyx surfaces. The meshes are connected and the resulting prisms represent the cytoplasm and the bounding membranes. A finite volume analysis method is employed to simulate voltage changes along the calyx in response to synapse activation on the calyx or on calyceal processes. The finite volume method insures that charge is conserved at the calyx-process junction. These and other models indicate that efferent processes act as voltage followers, and that the morphology of some afferent processes affects their functioning. In a final application, morphological information is symbolically represented in three dimensions in a computer. The possible functioning of the connectivities is tested using mathematical interpretations of physiological parameters taken from the literature. Symbolic, 3-D simulations are in progress to probe the functional significance of the connectivities. This research is expected to advance computer-based studies of macular functioning and of synaptic plasticity.
Computer network defense through radial wave functions
NASA Astrophysics Data System (ADS)
Malloy, Ian J.
The purpose of this research is to synthesize basic and fundamental findings in quantum computing, as applied to the attack and defense of conventional computer networks. The concept focuses on uses of radio waves as a shield for, and an attack against, traditional computers. A logic bomb is analogous to a landmine in a computer network, and if one were to implement it as a non-trivial mitigation, it would aid computer network defense. As has been seen in kinetic warfare, the use of landmines has been devastating to geopolitical regions in that they are severely difficult for a civilian to avoid triggering given the unknown position of a landmine. Thus, the importance of understanding a logic bomb is relevant and has corollaries to quantum mechanics as well. The research synthesizes quantum logic phase shifts in certain respects using the Dynamic Data Exchange protocol in software written for this work, as well as a C-NOT gate applied to a virtual quantum circuit environment by implementing a Quantum Fourier Transform. The research focus applies the principles of coherence and entanglement from quantum physics, the concept of expert systems in artificial intelligence, principles of prime-number-based cryptography with trapdoor functions, and modeling radio wave propagation against an event from unknown parameters. This comes as a program relying on the artificial intelligence concept of an expert system in conjunction with trigger events for a trapdoor function relying on infinite recursion, as well as system mechanics for elliptic curve cryptography along orbital angular momenta. Here trapdoor denotes both the form of cipher and the implied relationship to logic bombs.
Computational functions in biochemical reaction networks.
Arkin, A; Ross, J
1994-01-01
In prior work we demonstrated the implementation of logic gates, sequential computers (universal Turing machines), and parallel computers by means of the kinetics of chemical reaction mechanisms. In the present article we develop this subject further by first investigating the computational properties of several enzymatic (single and multiple) reaction mechanisms: we show their steady states are analogous to either Boolean or fuzzy logic gates. Nearly perfect digital function is obtained only in the regime in which the enzymes are saturated with their substrates. With these enzymatic gates, we construct combinational chemical networks that execute a given truth-table. The dynamic range of a network's output is strongly affected by "input/output matching" conditions among the internal gate elements. We find a simple mechanism, similar to the interconversion of fructose-6-phosphate between its two bisphosphate forms (fructose-1,6-bisphosphate and fructose-2,6-bisphosphate), that functions analogously to an AND gate. When the simple model is supplanted with one in which the enzyme rate laws are derived from experimental data, the steady state of the mechanism functions as an asymmetric fuzzy aggregation operator with properties akin to a fuzzy AND gate. The qualitative behavior of the mechanism does not change when situated within a large model of glycolysis/gluconeogenesis and the TCA cycle. The mechanism, in this case, switches the pathway's mode from glycolysis to gluconeogenesis in response to chemical signals of low blood glucose (cAMP) and abundant fuel for the TCA cycle (acetyl coenzyme A). PMID:7948674
Algorithms for Computing the Lag Function.
1981-03-27
and S. J. Giner. Subject: Algorithms for Computing the Lag Function. References: see p. 27. Abstract: This memorandum provides a scheme for the numerical ... highly oscillatory, and with singularities at the end points.
Non-parametric estimation of relative risk in survival and associated tests.
Wakounig, Samo; Heinze, Georg; Schemper, Michael
2015-12-01
We extend the Tarone and Ware scheme of weighted log-rank tests to cover the associated weighted Mantel-Haenszel estimators of relative risk. Weighting functions previously employed are critically reviewed. The notion of an average hazard ratio is defined and its connection to the effect size measure P(Y > X) is emphasized. The connection makes estimation of P(Y > X) possible also under censoring. Two members of the extended Tarone-Ware scheme accomplish the estimation of intuitively interpretable average hazard ratios, also under censoring and time-varying relative risk which is achieved by an inverse probability of censoring weighting. The empirical properties of the members of the extended Tarone-Ware scheme are demonstrated by a Monte Carlo study. The differential role of the weighting functions considered is illustrated by a comparative analysis of four real data sets.
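The Tarone-Ware scheme weights each event time's observed-minus-expected contribution by w = n^ρ, with ρ = 0 giving the standard log-rank test, ρ = 0.5 the Tarone-Ware test, and ρ = 1 the Gehan-Breslow test. A sketch of the test statistic itself (without the paper's inverse-probability-of-censoring weighting or the associated Mantel-Haenszel hazard-ratio estimator):

```python
import numpy as np

def weighted_logrank(t1, e1, t2, e2, rho=0.0):
    """Tarone-Ware family two-sample weighted log-rank test.
    t*, e*: event/censoring times and event indicators (1 = event) per group.
    Returns (U, var, Z) where U = sum of weighted O - E for group 1."""
    t = np.r_[t1, t2]
    e = np.r_[e1, e2]
    g = np.r_[np.ones(len(t1)), np.zeros(len(t2))]   # 1 = group 1
    U = V = 0.0
    for s in np.unique(t[e == 1]):                   # distinct event times
        at_risk = t >= s
        n, n1 = at_risk.sum(), (at_risk & (g == 1)).sum()
        d = ((t == s) & (e == 1)).sum()              # events at time s
        d1 = ((t == s) & (e == 1) & (g == 1)).sum()
        w = n ** rho                                  # Tarone-Ware weight
        U += w * (d1 - d * n1 / n)                    # observed - expected
        if n > 1:                                     # hypergeometric variance
            V += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return U, V, U / np.sqrt(V)

# tiny example: group 1 fails early (times 1, 2), group 2 late (times 3, 4)
U, V, Z = weighted_logrank([1, 2], [1, 1], [3, 4], [1, 1])
```

Different ρ values emphasise early versus late differences between the hazard functions, which is exactly why the choice of weighting function shapes the average hazard ratio being estimated.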
Propensity score method: a non-parametric technique to reduce model dependence
2017-01-01
Propensity score analysis (PSA) is a powerful technique in that it balances pretreatment covariates, making the causal effect inference from observational data as reliable as possible. The use of PSA in the medical literature has increased exponentially in recent years, and the trend continues to rise. The article introduces the rationale behind PSA, followed by an illustration of how to perform PSA in R with the MatchIt package. There are a variety of methods available for PS matching, such as nearest neighbors, full matching, exact matching and genetic matching. The task can be easily done by simply assigning a string value to the method argument in the matchit() function. The generic summary() and plot() functions can be applied to an object of class matchit to check covariate balance after matching. Furthermore, there is a useful package PSAgraphics that contains several graphical functions to check covariate balance between treatment groups across strata. If covariate balance is not achieved, one can modify model specifications or use other techniques such as random forest and recursive partitioning to better represent the underlying structure between pretreatment covariates and treatment assignment. The process can be repeated until the desired covariate balance is achieved. PMID:28164092
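The abstract demonstrates MatchIt in R; the same nearest-neighbour idea can be sketched in Python with a hand-rolled logistic propensity model (Newton-Raphson) and greedy 1:1 matching without replacement. The data, seed, and model below are illustrative, not from the article:

```python
import numpy as np

def propensity_scores(X, treat, n_iter=50):
    """Logistic-regression propensity scores fitted by Newton-Raphson."""
    X1 = np.c_[np.ones(len(X)), X]                  # add intercept
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        H = X1.T @ (X1 * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X1.shape[1])
        beta += np.linalg.solve(H, X1.T @ (treat - p))
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

def nn_match(ps, treat):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    without replacement; returns (treated, control) index pairs."""
    controls = list(np.where(treat == 0)[0])
    pairs = []
    for i in np.where(treat == 1)[0]:
        j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
        pairs.append((i, j))
        controls.remove(j)
    return pairs

# synthetic example: treatment probability increases with the covariate
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
treat = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X[:, 0] - 1.0)))).astype(int)
ps = propensity_scores(X, treat)
pairs = nn_match(ps, treat)
raw_gap = abs(X[treat == 1, 0].mean() - X[treat == 0, 0].mean())
matched_gap = abs(X[[i for i, _ in pairs], 0].mean()
                  - X[[j for _, j in pairs], 0].mean())
```

After matching, the covariate gap between groups shrinks markedly; this is the same balance check that summary() performs on a matchit object in R.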
Sazykina, Tatiana G; Kryshev, A I; Sanina, K D
2009-11-01
Databases on effects of chronic low-LET radiation exposure were analyzed by non-parametric statistical methods, to estimate the threshold dose rates above which radiation effects can be expected in vertebrate organisms. Data were grouped under three umbrella endpoints: effects on morbidity, reproduction, and life shortening. The data sets were compiled on a simple 'yes' or 'no' basis. Each data set included dose rates at which effects were reported without further details about the size or peculiarity of the effects. In total, the data sets include 84 values for endpoint "morbidity", 77 values for reproduction, and 41 values for life shortening. The dose rates in each set were ranked from low to higher values. The threshold TDR5 for radiation effects of a given umbrella type was estimated as a dose rate below which only a small percentage (5%) of data reported statistically significant radiation effects. The statistical treatment of the data sets was performed using non-parametric order statistics, and the bootstrap method. The resulting thresholds estimated by the order statistics are 8.1 × 10^-4 Gy day^-1 (2.0 × 10^-4 to 1.0 × 10^-3) for morbidity effects, 6.0 × 10^-4 Gy day^-1 (4.0 × 10^-4 to 1.5 × 10^-3) for reproduction effects, and 3.0 × 10^-3 Gy day^-1 (1.0 × 10^-3 to 6.0 × 10^-3) for life shortening, respectively. The bootstrap method gave slightly lower values: 2.1 × 10^-4 Gy day^-1 (1.4 × 10^-4 to 3.2 × 10^-4) for morbidity, 4.1 × 10^-4 Gy day^-1 (3.0 × 10^-4 to 5.7 × 10^-4) for reproduction, and 1.1 × 10^-3 Gy day^-1 (7.9 × 10^-4 to 1.3 × 10^-3) for life shortening. The generic threshold dose rate (based on all umbrella types of effects) was estimated at 1.0 × 10^-3 Gy day^-1.
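The bootstrap component of such a threshold estimate can be sketched as follows: resample the dose-rate set with replacement, take the 5th percentile of each resample, and summarise the resulting distribution. The synthetic dose rates are illustrative; the paper's screening of effect versus no-effect records is not reproduced:

```python
import numpy as np

def bootstrap_p5(dose_rates, n_boot=5000, ci=0.90, rng=None):
    """Bootstrap estimate of the 5th percentile (a TDR5-style threshold)
    with a percentile confidence interval."""
    rng = np.random.default_rng(rng)
    x = np.asarray(dose_rates, float)
    resamples = rng.choice(x, size=(n_boot, x.size), replace=True)
    boots = np.quantile(resamples, 0.05, axis=1)   # 5th percentile per resample
    lo, hi = np.quantile(boots, [(1 - ci) / 2, (1 + ci) / 2])
    return boots.mean(), lo, hi

# synthetic "lowest effect" dose rates, log-uniform between 1e-4 and 1e-1 Gy/day
doses = 10 ** np.random.default_rng(42).uniform(-4, -1, size=80)
est, lo, hi = bootstrap_p5(doses, rng=1)
```

Because the 5th percentile of a small, skewed sample is estimated from its lowest order statistics, the bootstrap interval is typically wide and shifted low relative to an order-statistics point estimate, consistent with the comparison reported above.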
Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A
2015-05-01
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
Out-of-Sample Extensions for Non-Parametric Kernel Methods.
Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang
2017-02-01
Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
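The flavor of such an out-of-sample extension can be sketched by regressing a learned kernel matrix onto a base RKHS. This is a simplified illustration under stand-in choices (a Gaussian base kernel and plain ridge regression), not the authors' hyper reproducing kernel Hilbert space construction:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))                       # training points

# Stand-in for a learned non-parametric kernel matrix (here simply a
# Gaussian kernel evaluated on the training set).
K_np = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))

def base_kernel(A, B, gamma=0.5):
    # Gaussian base kernel used to carry the extension off-sample.
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

# Regression in the base RKHS: find C so that sum_i C[i, j] *
# k_base(x, x_i) reproduces column j of K_np on the training points,
# with a small ridge term for numerical stability.
Kb = base_kernel(X, X)
C = np.linalg.solve(Kb + 1e-6 * np.eye(len(X)), K_np)

# Out-of-sample evaluation between new points and the training set.
X_new = rng.normal(size=(5, 2))
K_out = base_kernel(X_new, X) @ C                  # shape (5, 30)
```

The fitted expansion reproduces the in-sample kernel matrix and extends smoothly to unseen points, which is the inductive-learning property the paper is after.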
Mathematical models for non-parametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
Chiou, Jeng-Min; Liang, Kung-Yee; Chiu, Yen-Feng
2005-01-01
Multipoint linkage analysis using sibpair designs remains a common approach to help investigators narrow chromosomal regions for traits (either qualitative or quantitative) of interest. Despite its popularity, the success of this approach depends heavily on how issues such as genetic heterogeneity, gene-gene interactions, and gene-environment interactions are handled. If addressed properly, the likelihood of detecting genetic linkage and of efficiently estimating the location of the trait locus would be enhanced, sometimes drastically. Previously, we have proposed an approach to deal with these issues by modeling the genetic effect of the target trait locus as a function of covariates pertaining to the sibpairs. Here the genetic effect is simply the probability that a sibpair shares the same allele at the trait locus from their parents. Such modeling helps to divide the sibpairs into more homogeneous subgroups, which in turn helps to enhance the chance of detecting linkage. One limitation of this approach is the need to categorize the covariates so that a small and fixed number of genetic effect parameters are introduced. In this report, we take advantage of the fact that nowadays multiple markers are readily available for genotyping simultaneously. This suggests that one could estimate the dependence of the genetic effect on the covariates nonparametrically. We present an iterative procedure to estimate (1) the genetic effect nonparametrically and (2) the location of the trait locus through estimating functions developed by Liang et al. ([2001a] Hum Hered 51:67-76). We apply this new method to the linkage study of schizophrenia to illustrate how the onset ages of each sibpair may help to address the issue of genetic heterogeneity. This analysis sheds new light on the dependence of the trait effect on onset ages from affected sibpairs, an observation not revealed previously. In addition, we have carried out some simulation work, which suggests that this method provides
Non-parametric Bayesian graph models reveal community structure in resting state fMRI.
Andersen, Kasper Winther; Madsen, Kristoffer H; Siebner, Hartwig Roman; Schmidt, Mikkel N; Mørup, Morten; Hansen, Lars Kai
2014-10-15
Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM). The models define probabilities of generating links within and between clusters and the difference between the models lies in the restrictions they impose upon the between-cluster link probabilities. IRM is the most flexible model with no restrictions on the probabilities of links between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background noise probability. These probabilistic models are compared against three other approaches for node clustering, namely Infomap, Louvain modularity, and hierarchical clustering. Using 3 different datasets comprising healthy volunteers' rs-fMRI we found that the BCD model was in general the most predictive and reproducible model. This suggests that rs-fMRI data exhibits community structure and furthermore points to the significance of modeling heterogeneous between-cluster link probabilities.
Koohbor, Behrad; Kidane, Addis; Lu, Wei -Yang; ...
2016-01-25
Dynamic stress–strain response of rigid closed-cell polymeric foams is investigated in this work by subjecting high toughness polyurethane foam specimens to direct impact with different projectile velocities and quantifying their deformation response with high speed stereo-photography together with 3D digital image correlation. The measured transient displacement field developed in the specimens during high strain rate loading is used to calculate the transient axial acceleration field throughout the specimen. A simple mathematical formulation based on conservation of mass is also proposed to determine the local change of density in the specimen during deformation. By obtaining the full-field acceleration and density distributions, the inertia stresses at each point in the specimen are determined through a non-parametric analysis and superimposed on the stress magnitudes measured at specimen ends to obtain the full-field stress distribution. Furthermore, the process outlined above overcomes a major challenge in high strain rate experiments with low impedance polymeric foam specimens, i.e. the delayed equilibrium conditions can be quantified.
NASA Astrophysics Data System (ADS)
Petrosian, Vahe
2016-07-01
We have developed an inversion method for determination of the characteristics of the acceleration mechanism directly and non-parametrically from observations, in contrast to the usual forward fitting of parametric model variables to observations. This is done in the framework of the so-called leaky box model of acceleration, valid for isotropic momentum distribution and for volume integrated characteristics in a finite acceleration site. We consider both acceleration by shocks and stochastic acceleration where turbulence plays the primary role to determine the acceleration, scattering and escape rates. Assuming a knowledge of the background plasma, the model has essentially two unknown parameters, namely the momentum and pitch angle scattering diffusion coefficients, which can be evaluated given two independent spectral observations. These coefficients are obtained directly from the spectrum of radiation from the supernova remnants (SNRs), which gives the spectrum of accelerated particles, and the observed spectrum of cosmic rays (CRs), which are related to the spectrum of particles escaping the SNRs. The results obtained from application of this method will be presented.
Computing dispersion interactions in density functional theory
NASA Astrophysics Data System (ADS)
Cooper, V. R.; Kong, L.; Langreth, D. C.
2010-02-01
In this article, techniques for including dispersion interactions within density functional theory are examined. In particular, comparisons are made between four popular methods: dispersion-corrected DFT, pseudopotential correction schemes, symmetry-adapted perturbation theory, and a non-local density functional, the so-called Rutgers-Chalmers van der Waals density functional (vdW-DF). The S22 benchmark data set is used to evaluate the relative accuracy of these methods, and factors such as scalability and transferability are also discussed. We demonstrate that vdW-DF presents an excellent compromise between computational speed and accuracy and lends itself most easily to full-scale application in solid materials. This claim is supported through a brief discussion of a recent large-scale application to H2 in a prototype metal organic framework material (MOF), Zn2BDC2TED. The vdW-DF shows overwhelming promise for first-principles studies of physisorbed molecules in porous extended systems, thereby having broad applicability for studies as diverse as molecular adsorption and storage, battery technology, catalysis and gas separations.
Computational based functional analysis of Bacillus phytases.
Verma, Anukriti; Singh, Vinay Kumar; Gaur, Smriti
2016-02-01
Phytase is an enzyme that catalyzes the hydrolysis of phytate to less phosphorylated myo-inositol derivatives and inorganic phosphate, digesting the otherwise indigestible phytate present in seeds and grains and thereby providing digestible phosphorus, calcium and other mineral nutrients. Phytases are frequently added to the feed of monogastric animals to increase the bioavailability of phytic acid-bound phosphate, ultimately enhancing the nutritional value of diets. Bacillus phytase is well suited for use in animal feed because of its optimum pH and excellent thermal stability. The present study aims to perform an in silico comparative characterization and functional analysis of phytases from Bacillus amyloliquefaciens to explore their physico-chemical properties using various bio-computational tools. All the proteins are acidic and thermostable and can be used as suitable candidates in the feed industry.
Wang, Ying; Feng, Chenglian; Liu, Yuedan; Zhao, Yujie; Li, Huixian; Zhao, Tianhui; Guo, Wenjing
2017-02-01
Transition metals in the fourth period of the periodic table are widespread in aquatic environments and often occur at concentrations that cause adverse effects on aquatic life and human health. Parametric models are generally used to construct species sensitivity distributions (SSDs), which means that comparisons of water quality criteria (WQC) among elements in the same period or group of the periodic table might be inaccurate and the results biased. To address this inadequacy, non-parametric kernel density estimation (NPKDE), with its optimal bandwidths and testing methods, was developed for establishing SSDs. The NPKDE gave better fit, greater robustness and better predictions than conventional normal and logistic parametric density estimations for constructing SSDs and deriving acute HC5 and WQC for transition metals in the fourth period of the periodic table. The decreasing sequence of HC5 values for these transition metals was Ti > Mn > V > Ni > Zn > Cu > Fe > Co > Cr(VI); the values were not proportional to atomic number in the periodic table, and the relatively sensitive species also differed among metals. The results indicated that, besides physical and chemical properties, other factors affect the toxicity mechanisms of transition metals. The proposed method enriched the methodological foundation for WQC. Meanwhile, it also provided a relatively innovative, accurate approach for WQC derivation and risk assessment of same-group and same-period metals in aquatic environments to support protection of aquatic organisms.
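The NPKDE-based SSD and HC5 derivation can be sketched as follows. The toxicity values are synthetic placeholders (the paper's species data are not reproduced), and a Gaussian kernel with scipy's rule-of-thumb bandwidth stands in for the optimal-bandwidth selection:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Hypothetical acute toxicity values (log10 concentration) for a set
# of species exposed to one metal.
log_tox = rng.normal(loc=2.0, scale=0.8, size=25)

kde = gaussian_kde(log_tox)          # non-parametric SSD

# HC5 = concentration hazardous to 5% of species, i.e. the 5th
# percentile of the fitted distribution, read off a fine grid.
grid = np.linspace(log_tox.min() - 3, log_tox.max() + 3, 4000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
hc5 = 10 ** grid[np.searchsorted(cdf, 0.05)]
```

Unlike a fitted normal or logistic SSD, the kernel estimate makes no shape assumption, which is the robustness advantage the abstract reports.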
Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; /NASA, Ames /SETI Inst., Mtn. View /San Diego State U., Astron. Dept.
2012-01-01
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the transit timing variations of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
Ford, Eric B.; Moorhead, Althea V.; Morehead, Robert C.; Fabrycky, Daniel C.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew J.; Ragozzine, Darin; Charbonneau, David; Lissauer, Jack J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Burke, Christopher J.; Caldwell, Douglas A.; Welsh, William F.; Allen, Christopher; Buchhave, Lars A.; Collaboration: Kepler Science Team; and others
2012-05-10
We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets. We apply this method to an analysis of the TTVs of two stars with multiple transiting planet candidates identified by Kepler. We confirm four transiting planets in two multiple-planet systems based on their TTVs and the constraints imposed by dynamical stability. An additional three candidates in these same systems are not confirmed as planets, but are likely to be validated as real planets once further observations and analyses are possible. If all were confirmed, these systems would be near 4:6:9 and 2:4:6:9 period commensurabilities. Our results demonstrate that TTVs provide a powerful tool for confirming transiting planets, including low-mass planets and planets around faint stars for which Doppler follow-up is not practical with existing facilities. Continued Kepler observations will dramatically improve the constraints on the planet masses and orbits and provide sensitivity for detecting additional non-transiting planets. If Kepler observations were extended to eight years, then a similar analysis could likely confirm systems with multiple closely spaced, small transiting planets in or near the habitable zone of solar-type stars.
A Tutorial on Analog Computation: Computing Functions over the Reals
NASA Astrophysics Data System (ADS)
Campagnolo, Manuel Lameiras
The best known programmable analog computing device is the differential analyser. The concept for the device dates back to Lord Kelvin and his brother James Thomson in 1876, and it was constructed in 1932 at MIT under the supervision of Vannevar Bush. The MIT differential analyser used wheel-and-disk mechanical integrators and was able to solve sixth-order differential equations. During the 1930s, more powerful differential analysers were built. In 1941, Claude Shannon showed that, given a sufficient number of integrators, the machines could in theory precisely generate the solutions of all differentially algebraic equations. Shannon's mathematical model of the differential analyser is known as the GPAC.
Computational Interpretations of Analysis via Products of Selection Functions
NASA Astrophysics Data System (ADS)
Escardó, Martín; Oliva, Paulo
We show that the computational interpretation of full comprehension via two well-known functional interpretations (dialectica and modified realizability) corresponds to two closely related infinite products of selection functions.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer-aided procedure was developed that identifies boiler transfer functions from frequency response data. The method obtains satisfactory transfer functions for both high and low vapor exit quality data.
Computer program for Bessel and Hankel functions
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Saule, Arthur V.; Rice, Edward J.; Clark, Bruce J.
1991-01-01
A set of FORTRAN subroutines for calculating Bessel and Hankel functions is presented. The routines calculate Bessel and Hankel functions of the first and second kinds, as well as their derivatives, for wide ranges of integer order and real or complex argument in single or double precision. Depending on the order and argument, one of three evaluation methods is used: the power series definition, an Airy function expansion, or an asymptotic expansion. Routines to calculate Airy functions and their derivatives are also included.
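Modern scientific libraries expose the same functionality as these FORTRAN routines; for example, with scipy.special (the identities checked below are standard mathematics, not taken from the report):

```python
import numpy as np
from scipy.special import jv, yv, hankel1, jvp

n, x = 2, 3.7
J = jv(n, x)        # Bessel function of the first kind
Y = yv(n, x)        # Bessel function of the second kind
H1 = hankel1(n, x)  # Hankel function of the first kind

# Hankel functions are complex combinations of J and Y: H1 = J + iY.
assert np.isclose(H1, J + 1j * Y)

# Derivatives follow the recurrence J_n'(x) = (J_{n-1} - J_{n+1}) / 2.
assert np.isclose(jvp(n, x), 0.5 * (jv(n - 1, x) - jv(n + 1, x)))
```

These routines likewise switch internally among series, asymptotic, and other expansions depending on order and argument, as the FORTRAN package described above does.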
Computer Use and the Relation between Age and Cognitive Functioning
ERIC Educational Resources Information Center
Soubelet, Andrea
2012-01-01
This article investigates whether computer use for leisure could mediate or moderate the relations between age and cognitive functioning. Findings supported smaller age differences in measures of cognitive functioning for people who reported spending more hours using a computer. Because of the cross-sectional design of the study, two alternative…
Singular Function Integration in Computational Physics
NASA Astrophysics Data System (ADS)
Hasbun, Javier
2009-03-01
In teaching computational methods in the undergraduate physics curriculum, standard integration approaches taught include the rectangular, trapezoidal, Simpson, Romberg, and others. Over time, these techniques have proven to be invaluable and students are encouraged to employ the most efficient method that is expected to perform best when applied to a given problem. However, some physics research applications require techniques that can handle singularities. While decreasing the step size in traditional approaches is an alternative, this may not always work and repetitive processes make this route even more inefficient. Here, I present two existing integration rules designed to handle singular integrals. I compare them to traditional rules as well as to the exact analytic results. I suggest that it is perhaps time to include such approaches in the undergraduate computational physics course.
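A concrete example of the failure mode and one standard remedy (illustrative, not one of the specific rules presented in the talk): for I = ∫₀¹ x^(-1/2) dx = 2, a fixed-grid trapezoidal rule struggles near the singularity, while the substitution x = t² removes it entirely.

```python
import numpy as np
from scipy.integrate import quad

# Naive trapezoid: cannot evaluate at x = 0, so start just above it;
# the huge integrand values near the singularity ruin the estimate.
x = np.linspace(1e-8, 1.0, 10001)
y = x ** -0.5
trap = float(((y[1:] + y[:-1]) / 2 * np.diff(x)).sum())

# Substitution x = t^2 gives dx = 2t dt, so the integrand becomes the
# constant 2 and any quadrature rule is exact.
sub, _ = quad(lambda t: 2.0, 0.0, 1.0)
```

Shrinking the step size narrows the trapezoid's error only slowly, which is exactly the inefficiency the dedicated singular-integration rules avoid.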
Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea
2016-01-01
In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM are much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM, by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set; and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used
Pair correlation function integrals: Computation and use
NASA Astrophysics Data System (ADS)
Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens
2011-08-01
We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
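The basic truncated integral that the extension method improves upon looks like this; the oscillatory g(r) below is synthetic, standing in for simulation output:

```python
import numpy as np

# Synthetic radial distribution function: zero inside the core, damped
# oscillations about 1 outside it (r in units of the particle diameter).
r = np.linspace(0.01, 6.0, 2000)
g = np.where(r < 0.9, 0.0,
             1 + np.exp(-1.2 * (r - 1)) * np.cos(6.0 * (r - 1)))

# Total correlation function integral, truncated at the box edge:
# G = 4*pi * Int h(r) r^2 dr with h = g - 1. The slow decay of h is
# what makes this simple truncation unreliable for small systems.
h = g - 1.0
f = h * r ** 2
G = 4 * np.pi * float(((f[1:] + f[:-1]) / 2 * np.diff(r)).sum())
```

In the isothermal-isobaric ensemble, such integrals feed directly into the isothermal compressibility, which is how the paper cross-checks them against volume fluctuations.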
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
Computing Partial Transposes and Related Entanglement Functions
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2016-12-01
The partial transpose (PT) is an important function for entanglement testing and quantification and also for the study of geometrical aspects of the quantum state space. In this article, considering general bipartite and multipartite discrete systems, explicit formulas ready for the numerical implementation of the PT and of related entanglement functions are presented and the Fortran code produced for that purpose is described. What is more, we obtain an analytical expression for the Hilbert-Schmidt entanglement of two-qudit systems and for the associated closest separable state. In contrast to previous works on this matter, we only use the properties of the PT, not applying Lagrange multipliers.
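In Python (rather than the paper's Fortran), the PT of a bipartite density matrix is a reshape-transpose-reshape, and the Peres-Horodecki entanglement test follows; a minimal sketch:

```python
import numpy as np

def partial_transpose(rho, dims, sys=1):
    """Partial transpose of a bipartite density matrix rho with
    subsystem dimensions dims = (dA, dB), transposing subsystem sys."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)   # indices (a, b, a', b')
    r = r.transpose(2, 1, 0, 3) if sys == 0 else r.transpose(0, 3, 2, 1)
    return r.reshape(dA * dB, dA * dB)

# Two-qubit Bell state (|00> + |11>)/sqrt(2): its PT has a negative
# eigenvalue, witnessing entanglement.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

evals = np.linalg.eigvalsh(partial_transpose(rho, (2, 2)))
negativity = -evals[evals < 0].sum()   # 1/2 for a Bell state
```

The multipartite case generalizes the reshape to one axis pair per subsystem, which is essentially the explicit index formula the article derives.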
Enumeration of Bent Boolean Functions by Reconfigurable Computer
2010-05-01
Shafer, J. L.; Schneider, S. W.; Butler, J. T.; Stănică, P.
...it yields a new realization of the transeunt triangle that has less complexity and delay. Finally, we show computational results from a
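Bentness is straightforward to test in software via the Walsh-Hadamard spectrum: a Boolean function on n variables is bent iff every spectral value has magnitude 2^(n/2). A small sketch in plain Python (rather than a reconfigurable-computer implementation):

```python
import numpy as np

n = 4
xs = range(2 ** n)

# A classic bent function on 4 variables: f(x) = x0*x1 XOR x2*x3.
def f(v):
    b = [(v >> i) & 1 for i in range(n)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

# Walsh-Hadamard coefficient W(a) = sum_x (-1)^(f(x) XOR a.x),
# where a.x is the inner product of the bit vectors mod 2.
def walsh(a):
    return sum((-1) ** (f(x) ^ (bin(x & a).count("1") % 2)) for x in xs)

spectrum = np.array([walsh(a) for a in xs])
# f is bent iff |W(a)| = 2^(n/2) = 4 for every a.
is_bent = bool(np.all(np.abs(spectrum) == 2 ** (n // 2)))
```

Enumeration by hardware amounts to running this test in parallel over many candidate truth tables, which is where reconfigurable computers pay off.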
Computer-Intensive Algebra and Students' Conceptual Knowledge of Functions.
ERIC Educational Resources Information Center
O'Callaghan, Brian R.
1998-01-01
Describes a research project that examined the effects of the Computer-Intensive Algebra (CIA) and traditional algebra curricula on students' (N=802) understanding of the function concept. Results indicate that CIA students achieved a better understanding of functions and were better at the components of modeling, interpreting, and translating.…
Evaluation of Computer Games for Learning about Mathematical Functions
ERIC Educational Resources Information Center
Tüzün, Hakan; Arkun, Selay; Bayirtepe-Yagiz, Ezgi; Kurt, Funda; Yermeydan-Ugur, Benlihan
2008-01-01
In this study, researchers evaluated the usability of game environments for teaching and learning about mathematical functions. A 3-Dimensional multi-user computer game called as "Quest Atlantis" has been used, and an educational game about mathematical functions has been developed in parallel to the Quest Atlantis' technical and…
Mura, Maria Chiara; De Felice, Marco; Morlino, Roberta; Fuselli, Sergio
2010-01-01
In step with the need to develop statistical procedures to manage small-size environmental samples, in this work we have used concentration values of benzene (C6H6), concurrently detected by seven outdoor and indoor monitoring stations over 12 000 minutes, in order to assess the representativeness of collected data and the impact of the pollutant on the indoor environment. Clearly, the former issue is strictly connected to sampling-site geometry, which proves critical to correctly retrieving information from analysis of pollutants of sanitary interest. Therefore, according to current criteria for network planning, single stations have been interpreted as nodes of a set of adjoining triangles; then, a) node pairs have been taken into account in order to estimate pollutant stationarity on triangle sides, as well as b) node triplets, to statistically associate data from air monitoring with the corresponding territory area, and c) node sextuplets, to assess the impact probability of the outdoor pollutant on the indoor environment for each area. Distributions from the various node combinations are all non-Gaussian; consequently, Kruskal-Wallis (KW) non-parametric statistics have been exploited to test variability of the continuous density functions from each pair, triplet and sextuplet. Results from the above-mentioned statistical analysis have shown randomness of site selection, which has not allowed a reliable generalization of monitoring data to the entire selected territory, except for a single "forced" case (70%); most important, they suggest a possible procedure to optimize network design.
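The KW step can be sketched on synthetic lognormal concentrations (the actual station data are not reproduced here); the rank-based test requires no normality assumption, which is why it suits these skewed distributions:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)
# Hypothetical benzene concentrations from three station groupings;
# lognormal samples stand in for the monitoring data.
a = rng.lognormal(0.0, 0.5, 200)
b = rng.lognormal(0.0, 0.5, 200)
c = rng.lognormal(0.6, 0.5, 200)   # one shifted group

# Kruskal-Wallis compares the groups via ranks; a small p-value
# indicates at least one distribution differs from the others.
stat, p = kruskal(a, b, c)
```

Applied per pair, triplet and sextuplet of nodes, rejections flag non-stationarity of the pollutant over the corresponding triangle geometry.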
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits in which the initial state and all subsequent quantum operations can be represented by positive Wigner functions can be efficiently simulated classically. This is true both for continuous-variable and for discrete-variable systems in odd prime dimensions, two cases treated here on entirely the same footing. Since Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function in separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
Local-basis-function approach to computed tomography
NASA Astrophysics Data System (ADS)
Hanson, K. M.; Wecksung, G. W.
1985-12-01
In the local basis-function approach, a reconstruction is represented as a linear expansion of basis functions, which are arranged on a rectangular grid and possess a local region of support. The basis functions considered here are positive and may overlap. It is found that basis functions based on cubic B-splines offer significant improvements in the calculational accuracy that can be achieved with iterative tomographic reconstruction algorithms. By employing repetitive basis functions, the computational effort involved in these algorithms can be minimized through the use of tabulated values for the line or strip integrals over a single-basis function. The local nature of the basis functions reduces the difficulties associated with applying local constraints on reconstruction values, such as upper and lower limits. Since a reconstruction is specified everywhere by a set of coefficients, display of a coarsely represented image does not require an arbitrary choice of an interpolation function.
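The two properties the reconstruction scheme relies on, positivity with local support and the partition of unity of overlapping cubic B-splines, can be checked directly with SciPy's `BSpline.basis_element`; this is an illustration of the basis, not the tomography code itself:

```python
import numpy as np
from scipy.interpolate import BSpline

# One cubic B-spline basis element on uniform knots {0, 1, 2, 3, 4}:
# positive on its local support (0, 4), zero-valued at the support boundary,
# and peaking at the central knot.
b = BSpline.basis_element(np.arange(5.0))

# Shifted copies N_k (knots {k, ..., k+4}) overlap; on (3, 4) exactly the
# four elements k = 0..3 are non-zero, and they sum to 1 (partition of unity),
# so a reconstruction is defined everywhere by its coefficients.
x = np.linspace(3.0, 3.99, 12)
total = sum(BSpline.basis_element(np.arange(k, k + 5.0))(x) for k in range(4))
assert np.allclose(total, 1.0)

# Peak value of the uniform cubic B-spline is 2/3 at the central knot.
assert np.isclose(b(2.0), 2.0 / 3.0)
```

Because every cell uses the same translated element, line or strip integrals need only be tabulated once for `b`, which is the computational saving the abstract describes.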
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1971-01-01
An iterative computer method is described for identifying boiler transfer functions from frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to make the locus of points generated by a candidate transfer function resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, examples are given, together with results from a study of inlet-impedance measurements of a single-tube forced-flow boiler with inserts.
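The identification idea can be sketched as a nonlinear least-squares fit of a candidate transfer function to frequency-response samples. The first-order model and the noiseless data below are hypothetical stand-ins for the paper's boiler models:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical first-order plant G(s) = K / (tau*s + 1); the paper's boiler
# transfer functions are more elaborate, this only illustrates the fitting loop.
K_true, tau_true = 2.0, 0.5
w = np.logspace(-1, 2, 40)                     # frequencies, rad/s
measured = K_true / (1j * w * tau_true + 1.0)  # "frequency response data"

def residuals(p):
    K, tau = p
    model = K / (1j * w * tau + 1.0)
    err = model - measured
    # Stack real and imaginary parts so the full complex misfit is minimized,
    # i.e. the model locus is pulled onto the measured locus in the plane.
    return np.concatenate([err.real, err.imag])

fit = least_squares(residuals, x0=[1.0, 1.0])
K_hat, tau_hat = fit.x
```

A penalized objective in the spirit of the paper would add terms to `residuals` discouraging unphysical parameter values; trying different model structures amounts to swapping the expression inside `residuals`.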
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein designing has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
A large-scale evaluation of computational protein function prediction.
Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-03-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.
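The protein-centric evaluation behind such assessments can be sketched as a maximum F-measure over decision thresholds. This is a simplified reading of the CAFA-style metric (the official one also tracks coverage and ontology structure), with made-up protein and term names:

```python
def fmax(pred_scores, true_terms, thresholds):
    """Maximum protein-centric F-measure over decision thresholds.

    pred_scores: {protein: {term: score in [0, 1]}}
    true_terms:  {protein: set of annotated terms}
    Simplified sketch; the official CAFA metric additionally accounts for
    prediction coverage and term hierarchy.
    """
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for prot, truth in true_terms.items():
            pred = {term for term, s in pred_scores.get(prot, {}).items() if s >= t}
            if pred:  # precision averaged only over proteins with predictions
                precisions.append(len(pred & truth) / len(pred))
            recalls.append(len(pred & truth) / len(truth))
        if not precisions:
            continue
        pr = sum(precisions) / len(precisions)
        rc = sum(recalls) / len(recalls)
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Hypothetical example: one protein, one correct high-confidence prediction.
score = fmax({"p1": {"T1": 0.9, "T2": 0.4}}, {"p1": {"T1"}}, [0.3, 0.5])
```

At threshold 0.5 only the correct term survives, giving precision = recall = 1 and hence Fmax = 1.0 for this toy case.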
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Robust Computation of Morse-Smale Complexes of Bilinear Functions
Norgard, G; Bremer, P T
2010-11-30
The Morse-Smale (MS) complex has proven to be a useful tool in extracting and visualizing features from scalar-valued data. However, existing algorithms to compute the MS complex are restricted to either piecewise linear or discrete scalar fields. This paper presents a new combinatorial algorithm to compute MS complexes for two dimensional piecewise bilinear functions defined on quadrilateral meshes. We derive a new invariant of the gradient flow within a bilinear cell and use it to develop a provably correct computation which is unaffected by numerical instabilities. This includes a combinatorial algorithm to detect and classify critical points as well as a way to determine the asymptotes of cell-based saddles and their intersection with cell edges. Finally, we introduce a simple data structure to compute and store integral lines on quadrilateral meshes which by construction prevents intersections and enables us to enforce constraints on the gradient flow to preserve known invariants.
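The critical-point analysis inside one bilinear cell is simple enough to state directly. A sketch, assuming the standard form f(x, y) = a + bx + cy + dxy for a cell (the paper's combinatorial machinery on quad meshes is not reproduced here):

```python
def bilinear_critical_point(a, b, c, d):
    """Critical point of f(x, y) = a + b*x + c*y + d*x*y.

    For d != 0 the gradient (b + d*y, c + d*x) vanishes at exactly one
    point, which is always a saddle: the Hessian [[0, d], [d, 0]] has
    determinant -d**2 < 0. For d == 0 the function is linear within the
    cell and has no critical point.
    """
    if d == 0:
        return None
    return (-c / d, -b / d)

# The saddle's asymptotes are the axis-aligned lines x = -c/d and y = -b/d;
# intersecting them with cell edges is what drives a combinatorial,
# numerically robust classification.
pt = bilinear_critical_point(0.0, 1.0, 2.0, 4.0)
```

For these coefficients the gradient (1 + 4y, 2 + 4x) vanishes at (-0.5, -0.25), and whether that point lies inside the cell decides if the cell contains a saddle.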
Computational design of proteins with novel structure and functions
NASA Astrophysics Data System (ADS)
Wei, Yang; Lu-Hua, Lai
2016-01-01
Computational design of proteins is a relatively new field in which scientists search the enormous sequence space for sequences that can fold into a desired structure and perform desired functions. With the computational approach, proteins can be designed, for example, as regulators of biological processes, novel enzymes, or biotherapeutics. These approaches not only provide valuable information for understanding sequence-structure-function relations in proteins, but also hold promise for applications in protein engineering and biomedical research. In this review, we briefly introduce the rationale for computational protein design, then summarize recent progress in the field, including de novo protein design, enzyme design, and design of protein-protein interactions. Challenges and future prospects of the field are also discussed. Project supported by the National Basic Research Program of China (Grant No. 2015CB910300), the National High Technology Research and Development Program of China (Grant No. 2012AA020308), and the National Natural Science Foundation of China (Grant No. 11021463).
Functional Characteristics of Intelligent Computer-Assisted Instruction: Intelligent Features.
ERIC Educational Resources Information Center
Park, Ok-choon
1988-01-01
Examines the functional characteristics of intelligent computer assisted instruction (ICAI) and discusses the requirements of a multidisciplinary cooperative effort of its development. A typical ICAI model is presented and intelligent features of ICAI systems are described, including modeling the student's learning process, qualitative decision…
SNAP: A computer program for generating symbolic network functions
NASA Technical Reports Server (NTRS)
Lin, P. M.; Alderson, G. E.
1970-01-01
The computer program SNAP (symbolic network analysis program) generates symbolic network functions for networks containing R, L, and C type elements and all four types of controlled sources. The program is efficient with respect to program storage and execution time. A discussion of the basic algorithms is presented, together with user's and programmer's guides.
Supporting Executive Functions during Children's Preliteracy Learning with the Computer
ERIC Educational Resources Information Center
Van de Sande, E.; Segers, E.; Verhoeven, L.
2016-01-01
The present study examined how embedded activities to support executive functions helped children to benefit from a computer intervention that targeted preliteracy skills. Three intervention groups were compared on their preliteracy gains in a randomized controlled trial design: an experimental group that worked with software to stimulate early…
Computer program for calculating and fitting thermodynamic functions
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Gordon, Sanford
1992-01-01
A computer program is described which (1) calculates thermodynamic functions (heat capacity, enthalpy, entropy, and free energy) for several optional forms of the partition function, (2) fits these functions to empirical equations by means of a least-squares fit, and (3) calculates, as a function of temperature, heats of formation and equilibrium constants. The program provides several methods for calculating ideal gas properties. For monatomic gases, three methods are given which differ in the technique used for truncating the partition function. For diatomic and polyatomic molecules, five methods are given which differ in the corrections to the rigid-rotator harmonic-oscillator approximation. A method for estimating thermodynamic functions for some species is also given.
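One ingredient of the rigid-rotator harmonic-oscillator approximation can be sketched directly: the vibrational contributions of a single mode to Cp, H, and S. This is textbook statistical mechanics used for illustration, not the program's actual code, and the characteristic temperature below is arbitrary:

```python
import math

R = 8.31446261815324  # molar gas constant, J/(mol K)

def vib_contributions(theta_v, T):
    """Harmonic-oscillator vibrational contributions of one mode to the
    ideal-gas heat capacity Cp, enthalpy H, and entropy S.

    theta_v is the characteristic vibrational temperature h*nu/k_B in K.
    Returns (cp [J/(mol K)], h [J/mol], s [J/(mol K)]).
    """
    u = theta_v / T
    eu = math.exp(u)
    cp = R * u * u * eu / (eu - 1.0) ** 2
    h = R * T * u / (eu - 1.0)
    s = R * (u / (eu - 1.0) - math.log(1.0 - math.exp(-u)))
    return cp, h, s

# High T: the mode is fully excited and contributes R to the heat capacity.
cp_hot = vib_contributions(1.0, 10000.0)[0]
# Low T: the mode is frozen out and contributes almost nothing.
cp_cold, h_cold, s_cold = vib_contributions(3000.0, 300.0)
```

Summing such terms over modes, plus translational and rotational parts, gives the kind of tabulated functions the program then fits by least squares.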
A survey of computational intelligence techniques in protein function prediction.
Tiwari, Arvind Kumar; Srivastava, Rajeev
2014-01-01
In the recent past, there has been massive growth in knowledge of previously uncharacterized proteins with the advancement of high-throughput microarray technologies. Protein function prediction is one of the most challenging problems in bioinformatics. Homology-based approaches have traditionally been used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, across applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers who have addressed these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and integration of multiple heterogeneous data are useful for protein function prediction.
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
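Python exposes close analogues of the proposed analyze/synthesize/scale functions, which makes the exact-scaling guarantee easy to demonstrate (the mapping to `math.frexp`/`math.ldexp` is an illustration, not part of the original proposal):

```python
import math
import sys

# Environment parameters live in sys.float_info; on IEEE-754 hardware the
# floating-point radix is 2.
radix = sys.float_info.radix

# "Analyze": decompose a float into fraction and exponent, 6.25 = 0.78125 * 2**3.
frac, exp = math.frexp(6.25)
assert (frac, exp) == (0.78125, 3)

# "Synthesize": recombine exactly.
assert math.ldexp(frac, exp) == 6.25

# "Scale": multiplying by a power of the radix is exact (no rounding),
# subject only to overflow/underflow; this is the key guarantee the
# proposal relies on for portable codes.
x = 0.1
assert math.ldexp(x, 8) / 256 == x
```

Codes written against such primitives manipulate the representation directly without ever losing precision to decimal round-trips.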
Computing the hadronic vacuum polarization function by analytic continuation
Feng, Xu; Hashimoto, Shoji; Hotzel, Grit; ...
2013-08-29
We propose a method to compute the hadronic vacuum polarization function on the lattice at continuous values of photon momenta, bridging the spacelike and timelike regions. We provide two independent demonstrations that this method leads to the desired hadronic vacuum polarization function in Minkowski spacetime. Using the example of the leading-order QCD correction to the muon anomalous magnetic moment, we show that this approach can provide a valuable alternative method for calculations of physical quantities in which the hadronic vacuum polarization function enters.
Community-Wide Evaluation of Computational Function Prediction.
Friedberg, Iddo; Radivojac, Predrag
2017-01-01
A biological experiment is the most reliable way of assigning function to a protein. However, in the era of high-throughput sequencing, scientists are unable to carry out experiments to determine the function of every single gene product. Therefore, to gain insights into the activity of these molecules and guide experiments, we must rely on computational means to functionally annotate the majority of sequence data. To understand how well these algorithms perform, we have established a challenge involving a broad scientific community in which we evaluate different annotation methods according to their ability to predict the associations between previously unannotated protein sequences and Gene Ontology terms. Here we discuss the rationale, benefits, and issues associated with evaluating computational methods in an ongoing community-wide challenge.
Computation of three-dimensional flows using two stream functions
NASA Technical Reports Server (NTRS)
Greywall, Mahesh S.
1991-01-01
An approach to computing 3-D flows using two stream functions is presented. The method generates a boundary-fitted grid as part of its solution. Two commonly used steps for computing flow fields are combined into a single step in the present approach: (1) boundary-fitted grid generation; and (2) solution of the Navier-Stokes equations on the generated grid. The presented method can be used to directly compute 3-D viscous flows, or its potential-flow approximation can be used to generate grids for other algorithms that compute 3-D viscous flows. The independent variables used are chi, a spatial coordinate, and xi and eta, values of stream functions along two sets of suitably chosen intersecting stream surfaces. The dependent variables used are the streamwise velocity and two functions that describe the stream surfaces. Since for a 3-D flow there is no unique way to define two sets of intersecting stream surfaces to cover the given flow, different types of two sets of intersecting stream surfaces are considered. First, the metric of the (chi, xi, eta) curvilinear coordinate system associated with each type is presented. Next, equations for the steady-state transport of mass, momentum, and energy are presented in terms of the metric of the (chi, xi, eta) coordinate system. Also included are the inviscid and the parabolized approximations to the general transport equations.
Andersson, Jesper L R; Sotiropoulos, Stamatios N
2015-11-15
Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of "Kriging". We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell.
Corvaisier, S; Bleyzac, N; Confesson, M A; Bureau, C; Maire, P
1997-01-01
To establish a reference for MAP Bayesian adaptive control of amikacin therapy in non-insulin-dependent diabetic patients, 30 patients (age: 63.5 +/- 10.1 years) were studied. Weight (84.2 +/- 15.4 kg) and body mass index (28.0 +/- 4.3 kg/m2 for males and 30.5 +/- 6.4 kg/m2 for females) were stable during treatment. Creatinine clearance (CCr) was 70.3 +/- 27.2 ml/min/1.73 m2 before treatment and 69.6 +/- 24.3 ml/min/1.73 m2 (NS) at the end of treatment (2 to 15 days). 129 serum concentrations were drawn (4.8 +/- 2.6 levels per patient). The one-compartment model was parameterized with Vs (l.kg-1) and Kslope (min/ml.h) for each unit of CCr (Kel = Kintercept + Kslope x CCr). The non-renal Kintercept was fixed at 0.00693 h-1. The NPEM algorithm computes the joint probability densities. The mean, median, and SD were, respectively: Vs = 0.3574, 0.3654, 0.0825 l.kg-1; Kslope = 0.0026, 0.0027, 0.0007 min/ml.h. For a priori determination of first doses, precision is higher with the new population. No difference in adaptive control was observed. In addition, the full joint probability density should be used to develop stochastic multiple model linear quadratic (MMLQ) adaptive control strategies.
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimizing method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion or translation-rotation modes: a removal function far from Gaussian type, and slow convergence of the removal function error. Detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center in turn to complete a cycle. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in six cycles to the duration of the six cycles, and depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under two different modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of time-sharing removal functions are obtained. The evaluation function for optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing-tool coverage from the polishing center to the total material removal within the full polishing-tool coverage area. After optimization, it is found that the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian removal function than those obtained by traditional methods. The time-sharing synthesis method of the removal function provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high
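A 1-D toy sketch of the time-sharing idea follows. The paper's removal functions are 2-D tool footprints; the raised-cosine profile, the six center offsets, and the equal dwell times here are all made up for illustration:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 601)

def cycle_removal(center, width=1.5):
    """Toy removal profile of one polishing cycle centred at `center`."""
    return np.where(np.abs(x - center) < width,
                    1.0 + np.cos(np.pi * (x - center) / width), 0.0)

centers = [-0.5, -0.3, -0.1, 0.1, 0.3, 0.5]  # six time-shared tool positions
dwell = 1.0                                   # equal dwell time per cycle

# Overall removal function = total material removal over the six cycles
# divided by the total time of the six cycles.
total_removal = sum(cycle_removal(c) for c in centers)
overall = total_removal / (len(centers) * dwell)

# The synthesized profile is broader and smoother than any single cycle,
# which is how time-sharing pushes the overall function toward Gaussian type.
assert overall.max() < cycle_removal(0.0).max()
```

Optimizing the centers and dwell times against an approaching-factor objective, as the paper does, would replace the fixed `centers` list with a search over distributions.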
Ong, Lee-Ling S; Wang, Mengmeng; Dauwels, Justin; Asada, H Harry
2014-01-01
An approach to jointly estimating 3D shapes and poses of stained nuclei from confocal microscopy images, using statistical prior information, is presented. Extracting nuclei boundaries from our experimental images of cell migration is challenging due to clustered nuclei and variations in their shapes. The problem is formulated as maximum a posteriori estimation. By incorporating statistical prior models of 3D nuclei shapes into level set functions, the active contour evolution applied to the images is constrained. A 3D alignment algorithm is developed to build the training databases and to match contours obtained from the images against them. To address the issue of aligning the model over multiple clustered nuclei, a watershed-like technique is used to detect and separate clustered regions prior to active contour evolution. Our method is tested on confocal images of endothelial cells in microfluidic devices and compared with existing approaches.
Computational design of receptor and sensor proteins with novel functions
NASA Astrophysics Data System (ADS)
Looger, Loren L.; Dwyer, Mary A.; Smith, James J.; Hellinga, Homme W.
2003-05-01
The formation of complexes between proteins and ligands is fundamental to biological processes at the molecular level. Manipulation of molecular recognition between ligands and proteins is therefore important for basic biological studies and has many biotechnological applications, including the construction of enzymes, biosensors, genetic circuits, signal transduction pathways and chiral separations. The systematic manipulation of binding sites remains a major challenge. Computational design offers enormous generality for engineering protein structure and function. Here we present a structure-based computational method that can drastically redesign protein ligand-binding specificities. This method was used to construct soluble receptors that bind trinitrotoluene, L-lactate or serotonin with high selectivity and affinity. These engineered receptors can function as biosensors for their new ligands; we also incorporated them into synthetic bacterial signal transduction pathways, regulating gene expression in response to extracellular trinitrotoluene or L-lactate. The use of various ligands and proteins shows that a high degree of control over biomolecular recognition has been established computationally. The biological and biosensing activities of the designed receptors illustrate potential applications of computational design.
Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity
García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto
2017-01-01
Functional connectivity has proven to be a key concept for unraveling how the brain balances functional segregation and integration while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and graph-theory measures. PLV, PLI, ImC, and wPLI as phase synchronization measures, mutual information as an information-theoretic measure, and generalized synchronization indices are computed much more efficiently than in previously available open-source implementations. Furthermore, network-theory measures such as Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function. PMID:28220071
Efficient Computation of Functional Brain Networks: toward Real-Time Functional Connectivity.
García-Prieto, Juan; Bajo, Ricardo; Pereda, Ernesto
2017-01-01
Functional connectivity has proven to be a key concept for unraveling how the brain balances functional segregation and integration while processing information. This work presents a set of open-source tools that significantly increase the computational efficiency of some well-known connectivity indices and graph-theory measures. PLV, PLI, ImC, and wPLI as phase synchronization measures, mutual information as an information-theoretic measure, and generalized synchronization indices are computed much more efficiently than in previously available open-source implementations. Furthermore, network-theory measures such as Strength, Shortest Path Length, Clustering Coefficient, and Betweenness Centrality are also implemented, showing computational times up to thousands of times faster than most well-known implementations. Altogether, this work significantly expands what can be computed in feasible times, even enabling whole-head real-time network analysis of brain function.
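One of the indices mentioned, the phase-locking value (PLV), is compact enough to sketch with NumPy and SciPy's Hilbert transform. The two test signals below are synthetic, and this straightforward formulation is what the toolbox's vectorized implementations accelerate:

```python
import numpy as np
from scipy.signal import hilbert

fs, f = 500.0, 10.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)

x = np.sin(2 * np.pi * f * t)
y_locked = np.sin(2 * np.pi * f * t + 0.8)  # constant phase lag of 0.8 rad
y_noise = rng.standard_normal(t.size)        # unrelated signal

def plv(a, b):
    """Phase-locking value: |mean of exp(i * instantaneous phase difference)|."""
    phase_diff = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

assert plv(x, y_locked) > 0.95  # near 1 for a phase-locked pair
assert plv(x, y_noise) < 0.5    # typically low for independent noise
```

Computing PLV for all channel pairs at once, rather than in a loop like this, is where most of the reported speedups come from.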
Computations involving differential operators and their actions on functions
NASA Technical Reports Server (NTRS)
Crouch, Peter E.; Grossman, Robert; Larson, Richard
1991-01-01
The algorithms derived by Grossmann and Larson (1989) are further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear dynamical systems. These algorithms are extended in two different directions: the algorithms are generalized so that they apply to differential operators on groups and the data structures and algorithms are developed to compute symbolically the action of differential operators on functions. Both of these generalizations are needed for applications.
Efficient quantum algorithm for computing n-time correlation functions.
Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E
2014-07-11
We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.
Computational prediction of functional abortive RNA in E. coli.
Marcus, Jeremy I; Hassoun, Soha; Nair, Nikhil U
2017-03-24
Failure by RNA polymerase to break contacts with promoter DNA results in release of bound RNA and re-initiation of transcription. These abortive RNAs were assumed to be non-functional but have recently been shown to affect termination in bacteriophage T7. Little is known about the functional role of these RNAs in other genetic models. Using a computational approach, we investigated whether abortive RNA could exert function in E. coli. Fragments generated from 3780 transcription units were used as query sequences within their respective transcription units to search for possible binding sites. Sites that fell within known regulatory features were then ranked based on the free energy of hybridization to the abortive RNA. We further hypothesize about mechanisms of regulatory action for a select number of likely matches. Future experimental validation of these putative abortive-mRNA pairs may confirm our findings and promote exploration of functional abortive RNAs (faRNAs) in natural and synthetic systems.
NASA Astrophysics Data System (ADS)
Schutte, Willem D.; Swanepoel, Jan W. H.
2016-09-01
An automated tool to derive the off-pulse interval of a light curve originating from a pulsar is needed. First, we derive a powerful and accurate non-parametric sequential estimation technique to estimate the off-pulse interval of a pulsar light curve in an objective manner. This is in contrast to the subjective `eye-ball' (visual) technique, and complementary to the Bayesian Block method which is currently used in the literature. The second aim involves the development of a statistical package, necessary for the implementation of our new estimation technique. We develop a statistical procedure to estimate the off-pulse interval in the presence of noise. It is based on a sequential application of p-values obtained from goodness-of-fit tests for uniformity. The Kolmogorov-Smirnov, Cramér-von Mises, Anderson-Darling and Rayleigh test statistics are applied. The details of the newly developed statistical package SOPIE (Sequential Off-Pulse Interval Estimation) are discussed. The developed estimation procedure is applied to simulated and real pulsar data. Finally, the SOPIE estimated off-pulse intervals of two pulsars are compared to the estimates obtained with the Bayesian Block method and yield very satisfactory results. We provide the code to implement the SOPIE package, which is publicly available at http://CRAN.R-project.org/package=SOPIE (Schutte).
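The core statistical step described here, a goodness-of-fit test for uniformity of rotational phases, can be illustrated with scipy. The phase values below are simulated stand-ins (an off-pulse region is pure noise, so its phases should be uniform); SOPIE itself is the R package cited above.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
# Simulated rotational phases in [0, 1): off-pulse is uniform noise,
# on-pulse emission is concentrated near the pulse peak.
off_pulse = rng.uniform(0.0, 1.0, 500)
on_pulse = np.clip(rng.normal(0.5, 0.05, 500), 0.0, 1.0)

p_off = kstest(off_pulse, 'uniform').pvalue  # large: consistent with uniformity
p_on = kstest(on_pulse, 'uniform').pvalue    # tiny: uniformity clearly rejected
```

SOPIE applies such p-values sequentially (and with several test statistics) to locate the interval boundaries objectively.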
Chen, Chin-Wei; Cote, Patrick; Ferrarese, Laura; West, Andrew A.; Peng, Eric W.
2010-11-15
We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~10^3 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sérsic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(B_T) ≈ 0.13 mag for the brightest galaxies, rising to ≈ 0.3 mag for galaxies at the faint end of our sample (B_T ≈ 16). The distribution of axial ratios of low-mass ('dwarf') galaxies bears a strong resemblance to the one observed for the higher-mass ('giant') galaxies. The global structural parameters for the full galaxy sample (profile shape, effective radius, and mean surface brightness) are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
NASA Astrophysics Data System (ADS)
Chen, Chin-Wei; Côté, Patrick; West, Andrew A.; Peng, Eric W.; Ferrarese, Laura
2010-11-01
We present photometric and structural parameters for 100 ACS Virgo Cluster Survey (ACSVCS) galaxies based on homogeneous, multi-wavelength (ugriz), wide-field SDSS (DR5) imaging. These early-type galaxies, which trace out the red sequence in the Virgo Cluster, span a factor of nearly ~103 in g-band luminosity. We describe an automated pipeline that generates background-subtracted mosaic images, masks field sources and measures mean shapes, total magnitudes, effective radii, and effective surface brightnesses using a model-independent approach. A parametric analysis of the surface brightness profiles is also carried out to obtain Sérsic-based structural parameters and mean galaxy colors. We compare the galaxy parameters to those in the literature, including those from the ACSVCS, finding good agreement in most cases, although the sizes of the brightest, and most extended, galaxies are found to be most uncertain and model dependent. Our photometry provides an external measurement of the random errors on total magnitudes from the widely used Virgo Cluster Catalog, which we estimate to be σ(BT )≈ 0.13 mag for the brightest galaxies, rising to ≈ 0.3 mag for galaxies at the faint end of our sample (BT ≈ 16). The distribution of axial ratios of low-mass ("dwarf") galaxies bears a strong resemblance to the one observed for the higher-mass ("giant") galaxies. The global structural parameters for the full galaxy sample—profile shape, effective radius, and mean surface brightness—are found to vary smoothly and systematically as a function of luminosity, with unmistakable evidence for changes in structural homology along the red sequence. As noted in previous studies, the ugriz galaxy colors show a nonlinear but smooth variation over a ~7 mag range in absolute magnitude, with an enhanced scatter for the faintest systems that is likely the signature of their more diverse star formation histories.
Computational approaches for inferring the functions of intrinsically disordered proteins
Varadi, Mihaly; Vranken, Wim; Guharoy, Mainak; Tompa, Peter
2015-01-01
Intrinsically disordered proteins (IDPs) are ubiquitously involved in cellular processes and often implicated in human pathological conditions. The critical biological roles of these proteins, despite not adopting a well-defined fold, encouraged structural biologists to revisit their views on the protein structure-function paradigm. Unfortunately, investigating the characteristics and describing the structural behavior of IDPs is far from trivial, and inferring the function(s) of a disordered protein region remains a major challenge. Computational methods have proven particularly relevant for studying IDPs: on the sequence level their dependence on distinct characteristics determined by the local amino acid context makes sequence-based prediction algorithms viable and reliable tools for large scale analyses, while on the structure level the in silico integration of fundamentally different experimental data types is essential to describe the behavior of a flexible protein chain. Here, we offer an overview of the latest developments and computational techniques that aim to uncover how protein function is connected to intrinsic disorder. PMID:26301226
On the Hydrodynamic Function of Sharkskin: A Computational Investigation
NASA Astrophysics Data System (ADS)
Boomsma, Aaron; Sotiropoulos, Fotis
2014-11-01
Denticles (placoid scales) are small structures that cover the epidermis of some sharks. The hydrodynamic function of denticles is unclear. Because they resemble riblets, they have been thought to passively reduce skin friction, for which there is some experimental evidence. Others have experimentally shown that denticles increase skin friction and have hypothesized that denticles act as vortex generators to delay separation. To help clarify their function, we use high-resolution large eddy and direct numerical simulations, with an immersed boundary method, to simulate flow patterns past Shortfin Mako denticles and to calculate the drag force on them. Simulations are carried out for denticles placed in a canonical turbulent boundary layer as well as in the vicinity of a separation bubble. The computed results elucidate the three-dimensional structure of the flow around denticles and provide insights into the hydrodynamic function of sharkskin.
Structure-based Methods for Computational Protein Functional Site Prediction
Dukka, B KC
2013-01-01
Due to the advent of high-throughput sequencing techniques and structural genomics projects, the number of gene and protein sequences has been ever increasing, making computational methods to annotate these genes and proteins all the more indispensable. Proteins are important macromolecules, and the study of protein function is an important problem in structural bioinformatics. This paper discusses a number of methods to predict protein functional sites, focusing especially on protein-ligand binding site prediction. Initially, a short overview is presented of recent advances in methods for the selection of homologous sequences. Furthermore, a few recent structure-based and sequence-and-structure-based approaches for predicting protein functional sites are discussed in detail. PMID:24688745
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow...
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy.
Schroll, Henning; Hamker, Fred H
2013-12-30
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes only marginally different, assumptions on pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational, models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other.
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes only marginally different, assumptions on pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational, models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
Complete RNA inverse folding: computational design of functional hammerhead ribozymes
Dotu, Ivan; Garcia-Martin, Juan Antonio; Slinger, Betty L.; Mechery, Vinodh; Meyer, Michelle M.; Clote, Peter
2014-01-01
Nanotechnology and synthetic biology currently constitute one of the most innovative, interdisciplinary fields of research, poised to radically transform society in the 21st century. This paper concerns the synthetic design of ribonucleic acid molecules, using our recent algorithm, RNAiFold, which can determine all RNA sequences whose minimum free energy secondary structure is a user-specified target structure. Using RNAiFold, we design ten cis-cleaving hammerhead ribozymes, all of which are shown to be functional by a cleavage assay. We additionally use RNAiFold to design a functional cis-cleaving hammerhead as a modular unit of a synthetic larger RNA. Analysis of kinetics on this small set of hammerheads suggests that cleavage rate of computationally designed ribozymes may be correlated with positional entropy, ensemble defect, structural flexibility/rigidity and related measures. Artificial ribozymes have been designed in the past either manually or by SELEX (Systematic Evolution of Ligands by Exponential Enrichment); however, this appears to be the first purely computational design and experimental validation of novel functional ribozymes. RNAiFold is available at http://bioinformatics.bc.edu/clotelab/RNAiFold/. PMID:25209235
Computer Modeling of the Earliest Cellular Structures and Functions
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl
2000-01-01
In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. We will discuss a series of large-scale, molecular-level computer simulations which demonstrate (a) how small proteins (peptides) organize themselves into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g., channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10^6-10^8 time steps.
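The simulation loop described here, the iterative solution of Newton's equations of motion, can be sketched with a velocity Verlet integrator. The harmonic force below is a toy stand-in chosen so the result can be checked analytically; it assumes nothing about the actual membrane force fields used in the study.

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
    """Iteratively integrate Newton's equations of motion,
    as in molecular dynamics (velocity Verlet scheme)."""
    f = force_fn(pos)
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
        f_new = force_fn(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Toy system: 1-D harmonic oscillator (force = -k x), period 2*pi.
k, m = 1.0, 1.0
pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           lambda x: -k * x, m, dt=0.01, n_steps=628)
# After roughly one period the particle returns near x = 1 with energy conserved.
```

Production MD codes use the same scheme, just with many atoms and far more elaborate force functions.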
NASA Astrophysics Data System (ADS)
Soberaski, J.; Moysey, S.; Bedient, P.
2007-05-01
Remote sensing data and GIS tools have opened the door to simplifying the parameterization of distributed watershed models. However, decisions about the spatial homogeneity of model parameters should also be based on the actual response of a basin to rainfall. For the last 75 years, hydrologists have relied on the unit hydrograph (UH) as a key tool for analyzing watersheds because its shape is directly related to important attributes of the drainage basin controlling runoff (e.g., topography, land use, soil properties, stream network, etc.). Deconvolution of excess rainfall from direct runoff can provide non-parametric estimates of the UH that capture the effects of sub-basin heterogeneity, thereby making these hydrographs particularly useful tools for comparing and classifying watersheds. Due to the mathematical instability of deconvolution, it is unclear whether meaningful UH estimates can be obtained for the purpose of inter-basin comparisons, particularly when processes controlling excess precipitation and direct runoff within the watershed are uncertain. This study evaluates the sensitivity of non-parametric UHs to uncertainty in watershed properties for six gauged sub-basins of the Cypress Creek Watershed, TX. We used MATLAB to conduct a rainfall-runoff analysis of the watershed over a 17-day period during Tropical Storm Allison in 2001. For the six basins analyzed, discharges are available at the outflow of each sub-basin and NEXRAD rainfall data are available throughout the watershed. To determine the direct runoff contributed by each sub-basin, incoming upstream flows were routed by simple advection and then subtracted from the downstream discharge record. Excess precipitation was calculated by applying the Green & Ampt infiltration model to the rainfall record for each basin after accounting for initial abstractions and direct losses due to impervious surfaces. In each step of this procedure, the parameters
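The deconvolution step can be sketched as ordinary least squares on the discrete convolution Q = P * U. This toy version uses hypothetical numbers and omits the regularization a noisy real record would need; it illustrates only the linear algebra, not the authors' MATLAB workflow.

```python
import numpy as np

def estimate_uh(excess_precip, direct_runoff, n_uh):
    """Non-parametric unit hydrograph via least-squares deconvolution
    of the discrete convolution Q[t] = sum_k P[t-k] * U[k]."""
    n_q = len(direct_runoff)
    P = np.zeros((n_q, n_uh))
    for i, p in enumerate(excess_precip):  # each rainfall pulse shifts the UH
        m = min(n_uh, n_q - i)
        P[i:i + m, :m] += p * np.eye(m)
    uh, *_ = np.linalg.lstsq(P, direct_runoff, rcond=None)
    return uh

# Synthetic check: recover a known 5-ordinate UH from noise-free runoff.
uh_true = np.array([0.1, 0.3, 0.4, 0.15, 0.05])
precip = np.array([2.0, 0.0, 1.0])
runoff = np.convolve(precip, uh_true)
uh_est = estimate_uh(precip, runoff, n_uh=5)
```

With noisy data the normal equations become ill-conditioned, which is exactly the instability the abstract refers to.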
Non-functioning adrenal adenomas discovered incidentally on computed tomography
Mitnick, J.S.; Bosniak, M.A.; Megibow, A.J.; Naidich, D.P.
1983-08-01
Eighteen patients with unilateral non-metastatic non-functioning adrenal masses were studied with computed tomography (CT). Pathological examination in cases revealed benign adrenal adenomas. The others were followed up with serial CT scans and found to show no change in tumor size over a period of six months to three years. On the basis of these findings, the authors suggest certain criteria for a benign adrenal mass, including (a) diameter less than 5 cm, (b) smooth contour, (c) well-defined margin, and (d) no change in size on follow-up. Serial CT scanning can be used as an alternative to surgery in the management of many of these patients.
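The criteria (a)-(d) listed above amount to a simple decision rule, which can be encoded directly. This sketch merely restates the abstract's stated criteria in code and is illustrative only, not clinical guidance.

```python
def meets_benign_criteria(diameter_cm, smooth_contour,
                          well_defined_margin, stable_on_followup):
    """Criteria (a)-(d) from the abstract for a likely-benign adrenal mass:
    diameter < 5 cm, smooth contour, well-defined margin, and no change
    in size on follow-up. Illustrative only; not a diagnostic tool."""
    return bool(diameter_cm < 5.0 and smooth_contour
                and well_defined_margin and stable_on_followup)

# A 3 cm smooth, well-defined, stable mass satisfies all four criteria.
example = meets_benign_criteria(3.0, True, True, True)
```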
Material reconstruction for spectral computed tomography with detector response function
NASA Astrophysics Data System (ADS)
Liu, Jiulong; Gao, Hao
2016-11-01
In contrast to conventional computed tomography (CT), spectral CT using energy-resolved photon-counting detectors can provide unprecedented information on material composition. However, accurate spectral CT needs to account for the detector response function (DRF), which is often distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. The simulation results suggest that the proposed methods reconstruct more accurate material compositions than the conventional method without the DRF. Moreover, the proposed linearized method, with a linear data fidelity obtained from spectral resampling, improved reconstruction quality over the nonlinear method based directly on a nonlinear data fidelity.
NASA Astrophysics Data System (ADS)
Constantinescu, C. C.; Yoder, K. K.; Kareken, D. A.; Bouman, C. A.; O'Connor, S. J.; Normandin, M. D.; Morris, E. D.
2008-03-01
We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest & activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (FDA(t)) and the change in binding potential (ΔBP). The veracity of the FDA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between FDA(t) and the free raclopride (FRAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover FDA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the FDA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of FDA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.
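Hypothesis (1) above is a monotone trend, which is exactly what the Kendall rank-correlation analysis tests. The numbers below are made up solely to show the mechanics; they are not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical values illustrating hypothesis (1):
# delta-BP declines as the dopamine peak time increases.
peak_time = np.array([1.0, 2.0, 3.5, 5.0, 7.0])
delta_bp = np.array([0.30, 0.26, 0.20, 0.15, 0.08])
tau, p_value = kendalltau(peak_time, delta_bp)
# tau = -1 for a strictly decreasing relationship (no ties)
```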
Breitling, Rainer; Herzyk, Pawel
2005-10-01
We have recently introduced a rank-based test statistic, RankProducts (RP), as a new non-parametric method for detecting differentially expressed genes in microarray experiments. It has been shown to generate surprisingly good results with biological datasets. The basis for this performance and the limits of the method are, however, little understood. Here we explore the performance of such rank-based approaches under a variety of conditions using simulated microarray data, and compare it with classical Wilcoxon rank sums and t-statistics, which form the basis of most alternative differential gene expression detection techniques. We show that for realistic simulated microarray datasets, RP is more powerful and accurate for sorting genes by differential expression than t-statistics or Wilcoxon rank sums, in particular for replicate numbers below 10, which are most commonly used in biological experiments. Its relative performance is particularly strong when the data are contaminated by non-normal random noise or when the samples are very inhomogeneous, e.g. because they come from different time points or contain a mixture of affected and unaffected cells. However, RP assumes equal measurement variance for all genes and tends to give overly optimistic p-values when this assumption is violated. It is therefore essential that proper variance-stabilizing normalization is performed on the data before calculating the RP values. Where this is impossible, another rank-based variant of RP (average ranks) provides a useful alternative with very similar overall performance. The Perl scripts implementing the simulation and evaluation are available upon request. Implementations of the RP method are available for download from the authors' website (http://www.brc.dcs.gla.ac.uk/glama).
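The RP statistic itself, the geometric mean of each gene's per-replicate up-regulation ranks, fits in a few lines of numpy. This sketch assumes a genes-by-replicates fold-change matrix and omits the permutation-based significance assessment that a full RP analysis requires.

```python
import numpy as np

def rank_products(fold_changes):
    """Rank Product per gene: geometric mean across replicates of the
    gene's rank, where rank 1 is the most up-regulated gene on an array."""
    fc = np.asarray(fold_changes, dtype=float)  # shape (genes, replicates)
    order = np.argsort(-fc, axis=0)             # descending within each array
    ranks = np.empty_like(order)
    for j in range(fc.shape[1]):
        ranks[order[:, j], j] = np.arange(1, fc.shape[0] + 1)
    return np.exp(np.log(ranks).mean(axis=1))   # geometric mean of ranks

# Gene 0 is top-ranked on both arrays, so its rank product is 1.
fc = np.array([[3.0, 2.5],
               [0.5, 0.4],
               [1.2, 1.5]])
rp = rank_products(fc)
```

Genes with consistently small ranks (small RP values) across replicates are the candidate up-regulated genes.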
Computation of the lattice Green function for a dislocation
NASA Astrophysics Data System (ADS)
Tan, Anne Marie Z.; Trinkle, Dallas R.
2016-08-01
Modeling isolated dislocations is challenging due to their long-ranged strain fields. Flexible boundary condition methods capture the correct long-range strain field of a defect by coupling the defect core to an infinite harmonic bulk through the lattice Green function (LGF). To improve the accuracy and efficiency of flexible boundary condition methods, we develop a numerical method to compute the LGF specifically for a dislocation geometry; in contrast to previous methods, where the LGF was computed for the perfect bulk as an approximation for the dislocation. Our approach directly accounts for the topology of a dislocation, and the errors in the LGF computation converge rapidly for edge dislocations in a simple cubic model system as well as in BCC Fe with an empirical potential. When used within the flexible boundary condition approach, the dislocation LGF relaxes dislocation core geometries in fewer iterations than when the perfect bulk LGF is used as an approximation for the dislocation, making a flexible boundary condition approach more efficient.
CMB anisotropy in compact hyperbolic universes. I. Computing correlation functions
NASA Astrophysics Data System (ADS)
Bond, J. Richard; Pogosyan, Dmitry; Souradeep, Tarun
2000-08-01
Cosmic microwave background (CMB) anisotropy measurements have brought the issue of global topology of the universe from the realm of theoretical possibility to within the grasp of observations. The global topology of the universe modifies the correlation properties of cosmic fields. In particular, strong correlations are predicted in CMB anisotropy patterns on the largest observable scales if the size of the universe is comparable to the distance to the CMB last scattering surface. We describe in detail our completely general scheme using a regularized method of images for calculating such correlation functions in models with nontrivial topology, and apply it to the computationally challenging compact hyperbolic spaces. Our procedure directly sums over images within a specified radius, ideally many times the diameter of the space, effectively treats more distant images in a continuous approximation, and uses Cesaro resummation to further sharpen the results. At all levels of approximation the symmetries of the space are preserved in the correlation function. This new technique eliminates the need for the difficult task of spatial eigenmode decomposition on these spaces. Although the eigenspectrum can be obtained by this method if desired, at a given level of approximation the correlation functions are more accurately determined. We use the 3-torus example to demonstrate that the method works very well. We apply it to power spectrum as well as correlation function evaluations in a number of compact hyperbolic (CH) spaces. Application to the computation of CMB anisotropy correlations on CH spaces, and the observational constraints following from them, are given in a companion paper.
Enzymatic Halogenases and Haloperoxidases: Computational Studies on Mechanism and Function.
Timmins, Amy; de Visser, Sam P
2015-01-01
Despite the fact that halogenated compounds are rare in biology, a number of organisms have developed processes to utilize halogens, and in recent years a string of enzymes have been identified that selectively insert halogen atoms into, for instance, an aliphatic C-H bond. Thus, a number of natural products, including antibiotics, contain halogenated functional groups. This unusual process has great relevance to the chemical industry for stereoselective and regiospecific synthesis of haloalkanes. Currently, however, industry makes use of few applications of biological haloperoxidases and halogenases, but efforts are underway to understand their catalytic mechanisms so that their catalytic function can be scaled up. In this review, we summarize experimental and computational studies on the catalytic mechanisms of a range of haloperoxidases and halogenases with structurally very different catalytic features and cofactors. This chapter gives an overview of heme-dependent haloperoxidases, nonheme vanadium-dependent haloperoxidases, and flavin adenine dinucleotide-dependent haloperoxidases. In addition, we discuss the S-adenosyl-l-methionine fluorinase and nonheme iron/α-ketoglutarate-dependent halogenases. In particular, computational efforts have been applied extensively to several of these haloperoxidases and halogenases and have given insight into the essential structural features that enable these enzymes to perform the unusual halogen atom transfer to substrates.
Functional Connectivity’s Degenerate View of Brain Computation
Giron, Alain; Rudrauf, David
2016-01-01
Brain computation relies on effective interactions between ensembles of neurons. In neuroimaging, measures of functional connectivity (FC) aim at statistically quantifying such interactions, often to study normal or pathological cognition. Their capacity to reflect a meaningful variety of patterns as expected from neural computation in relation to cognitive processes remains debated. The relative weights of time-varying local neurophysiological dynamics versus static structural connectivity (SC) in the generation of FC as measured remains unsettled. Empirical evidence features mixed results: from little to significant FC variability and correlation with cognitive functions, within and between participants. We used a unified approach combining multivariate analysis, bootstrap and computational modeling to characterize the potential variety of patterns of FC and SC both qualitatively and quantitatively. Empirical data and simulations from generative models with different dynamical behaviors demonstrated, largely irrespective of FC metrics, that a linear subspace with dimension one or two could explain much of the variability across patterns of FC. On the contrary, the variability across BOLD time-courses could not be reduced to such a small subspace. FC appeared to strongly reflect SC and to be partly governed by a Gaussian process. The main differences between simulated and empirical data related to limitations of DWI-based SC estimation (and SC itself could then be estimated from FC). Above and beyond the limited dynamical range of the BOLD signal itself, measures of FC may offer a degenerate representation of brain interactions, with limited access to the underlying complexity. They feature an invariant common core, reflecting the channel capacity of the network as conditioned by SC, with a limited, though perhaps meaningful residual variability. PMID:27736900
Filter design for molecular factor computing using wavelet functions.
Li, Xiaoyong; Xu, Zhihong; Cai, Wensheng; Shao, Xueguang
2015-06-23
Molecular factor computing (MFC) is a new strategy that employs chemometric methods in an optical instrument to obtain analytical results directly using an appropriate filter without data processing. In the present contribution, a method for designing an MFC filter using wavelet functions was proposed for spectroscopic analysis. In this method, the MFC filter is designed as a linear combination of a set of wavelet functions. A multiple linear regression model relating the concentration to the wavelet coefficients is constructed, so that the wavelet coefficients are obtained by projecting the spectra onto the selected wavelet functions. These wavelet functions are selected by optimizing the model using a genetic algorithm (GA). Once the MFC filter is obtained, the concentration of a sample can be calculated directly by projecting the spectrum onto the filter. With three NIR datasets of corn, wheat and blood, it was shown that the performance of the designed filter is better than that of the optimized partial least squares models, and commonly used signal processing methods, such as background correction and variable selection, were not needed. More importantly, the designed filter can be used as an MFC filter in designing MFC-based instruments.
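The projection step described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the filter is a linear combination of (here, simple Haar-like) wavelet functions, and a concentration estimate is then a single dot product of the spectrum with the filter. Function names and the intercept term are illustrative assumptions.

```python
import numpy as np

def haar_wavelet(n, scale, shift):
    """Discrete Haar-like wavelet of length n: +1 then -1 over a window."""
    w = np.zeros(n)
    half = scale // 2
    w[shift:shift + half] = 1.0
    w[shift + half:shift + scale] = -1.0
    return w

def build_mfc_filter(wavelets, coefficients):
    """An MFC filter as a linear combination of selected wavelet functions."""
    return np.sum([c * w for c, w in zip(coefficients, wavelets)], axis=0)

def predict_concentration(spectrum, mfc_filter, intercept=0.0):
    """Concentration obtained directly by projecting the spectrum onto the filter."""
    return float(np.dot(spectrum, mfc_filter)) + intercept

n = 8
wavelets = [haar_wavelet(n, 4, 0), haar_wavelet(n, 4, 4)]
mfc = build_mfc_filter(wavelets, [0.5, -0.2])
spectrum = np.ones(n)  # a flat spectrum projects to zero on every wavelet
print(predict_concentration(spectrum, mfc, intercept=1.0))  # 1.0
```

In the paper, the wavelet selection and regression coefficients come from a GA-optimized multiple linear regression rather than being chosen by hand as here.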
Assessing executive function using a computer game: computational modeling of cognitive processes.
Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha
2014-07-01
Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT), known to measure executive function as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and to use those parameter values to estimate TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home.
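As a purely illustrative sketch of the kind of decomposition described above — the parameter names and functional form here are assumptions for illustration, not the paper's actual model — a completion-time prediction could sum independent per-target search and movement components plus a set-switching cost:

```python
# Toy decomposition of TMT completion time into independent processes.
# Parameter names (search_time, motor_time, switch_cost) are illustrative
# assumptions, not parameters from the paper's model.
def tmt_time(n_targets, search_time, motor_time, switch_cost=0.0):
    """Predicted completion time (s): per-target visual search plus, for each
    of the n_targets - 1 transitions, a movement and a set-switching cost."""
    transitions = n_targets - 1
    return n_targets * search_time + transitions * (motor_time + switch_cost)

# TMT-A (no set switching) vs. TMT-B (alternating letters and numbers)
tmt_a = tmt_time(25, search_time=0.8, motor_time=0.5)
tmt_b = tmt_time(25, search_time=0.8, motor_time=0.5, switch_cost=0.6)
print(tmt_a, tmt_b)  # TMT-B slower because of the added switch cost
```

Fitting such parameters to game play data, rather than fixing them, is what the abstract describes.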
Computer Modeling of Protocellular Functions: Peptide Insertion in Membranes
NASA Technical Reports Server (NTRS)
Rodriquez-Gomez, D.; Darve, E.; Pohorille, A.
2006-01-01
Lipid vesicles became the precursors to protocells by acquiring the capabilities needed to survive and reproduce. These include transport of ions, nutrients and waste products across cell walls and capture of energy and its conversion into a chemically usable form. In modern organisms these functions are carried out by membrane-bound proteins (about 30% of the genome codes for such proteins). A number of properties of alpha-helical peptides suggest that their associations are excellent candidates for protobiological precursors of proteins. In particular, some simple alpha-helical peptides can aggregate spontaneously and form functional channels. This process can be described conceptually by a three-step thermodynamic cycle: 1 - folding of helices at the water-membrane interface, 2 - helix insertion into the lipid bilayer and 3 - specific interactions of these helices that result in functional tertiary structures. Although a crucial step, helix insertion has not been adequately studied because of the insolubility and aggregation of hydrophobic peptides. In this work, we use computer simulation methods (molecular dynamics) to characterize the energetics of helix insertion, and we discuss its importance in an evolutionary context. Specifically, helices could self-assemble only if their interactions were sufficiently strong to compensate for the unfavorable free energy of insertion of individual helices into membranes, providing a selection mechanism for protobiological evolution.
An Evolutionary Computation Approach to Examine Functional Brain Plasticity
Roy, Arnab; Campbell, Colin; Bernier, Rachel A.; Hillary, Frank G.
2016-01-01
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, and therefore functional relationships between an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in the strength
Computational Effective Fault Detection by Means of Signature Functions
Baranski, Przemyslaw; Pietrzak, Piotr
2016-01-01
The paper presents a computationally effective method for fault detection. A system's responses are measured under healthy and faulty conditions. These signals are used to calculate so-called signature functions that create a signal space. The current system's response is projected into this space. The signal's location in this space makes it easy to determine the fault. No classifier such as a neural network, hidden Markov models, etc. is required. The advantage of the proposed method is its efficiency, as computing projections amounts to calculating dot products. Therefore, this method is suitable for real-time embedded systems due to its simplicity and undemanding processing requirements, which permit the use of low-cost hardware and allow rapid implementation. The approach performs well for systems that can be considered linear and stationary. The communication presents an application whereby an industrial process of moulding is supervised. The machine is composed of forms (dies) whose alignment must be precisely set and maintained during the work. Typically, the process is stopped periodically to manually check the alignment. The applied algorithm allows on-line monitoring of the device by analysing the acceleration signal from a sensor mounted on a die. This makes it possible to detect failures at an early stage, thus prolonging the machine's life. PMID:26949942
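The projection idea can be sketched as follows. This is a minimal toy, assuming an orthonormal signature basis built by QR decomposition and nearest-reference classification in the projected coordinates; the paper's actual signature construction may differ:

```python
import numpy as np

def make_signatures(responses):
    """Orthonormal signature functions spanning the signal space, built from
    reference responses (healthy and faulty) via a reduced QR decomposition."""
    q, _ = np.linalg.qr(np.array(responses, dtype=float).T)
    return q  # columns are the signature functions

def classify(signal, signatures, labels, references):
    """Project the current response onto the signature space (dot products
    only) and pick the reference whose projection it lies closest to."""
    coords = signatures.T @ signal
    ref_coords = [signatures.T @ r for r in references]
    dists = [np.linalg.norm(coords - rc) for rc in ref_coords]
    return labels[int(np.argmin(dists))]

healthy = np.array([1.0, 1.0, 1.0, 1.0])    # illustrative reference responses
faulty = np.array([1.0, -1.0, 1.0, -1.0])
sigs = make_signatures([healthy, faulty])
current = np.array([0.9, 1.1, 1.0, 0.95])   # noisy measurement to diagnose
print(classify(current, sigs, ["healthy", "faulty"], [healthy, faulty]))
```

The per-sample cost is just the dot products `signatures.T @ signal`, which is why the method suits low-cost embedded hardware.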
Using computational biophysics to understand protein evolution and function
NASA Astrophysics Data System (ADS)
Ytreberg, F. Marty
2010-10-01
Understanding how proteins evolve and function is vital for human health (e.g., developing better drugs, predicting the outbreak of disease, etc.). In spite of its importance, little is known about the underlying molecular mechanisms behind these biological processes. Computational biophysics has emerged as a useful tool in this area due to its unique ability to obtain a detailed, atomistic view of proteins and how they interact. I will give two examples from our studies where computational biophysics has provided valuable insight: (i) Protein evolution in viruses. Our results suggest that the amino acid changes that occur during high temperature evolution of a virus decrease the binding free energy of the capsid, i.e., these changes increase capsid stability. (ii) Determining realistic structural ensembles for intrinsically disordered proteins. Most methods for determining protein structure rely on the protein folding into a single conformation, and thus are not suitable for disordered proteins. I will describe a new approach that combines experiment and simulation to generate structures for disordered proteins.
Optimizing high performance computing workflow for protein functional annotation.
Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene
2014-09-10
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
Imaging local brain function with emission computed tomography
Kuhl, D.E.
1984-03-01
Positron emission tomography (PET) using ¹⁸F-fluorodeoxyglucose (FDG) was used to map local cerebral glucose utilization in the study of local cerebral function. This information differs fundamentally from structural assessment by means of computed tomography (CT). In normal human volunteers, the FDG scan was used to determine the cerebral metabolic response to controlled sensory stimulation and the effects of aging. Cerebral metabolic patterns are distinctive among depressed and demented elderly patients. The FDG scan appears normal in depressed patients and is studded with multiple metabolic defects in patients with multi-infarct dementia; in patients with Alzheimer disease, metabolism is particularly reduced in the parietal cortex, but only slightly reduced in the caudate and thalamus. The interictal FDG scan effectively detects hypometabolic brain zones that are sites of onset for seizures in patients with partial epilepsy, even though these zones usually appear normal on CT scans. The future prospects of PET are discussed.
NASA Astrophysics Data System (ADS)
Ruppin, F.; Adam, R.; Comis, B.; Ade, P.; André, P.; Arnaud, M.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; D'Addabbo, A.; De Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Leclercq, S.; Macías-Pérez, J. F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Pointecouteau, E.; Ponthieu, N.; Pratt, G. W.; Revéret, V.; Ritacco, A.; Rodriguez, L.; Romero, C.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.
2017-01-01
The determination of the thermodynamic properties of clusters of galaxies at intermediate and high redshift can bring new insights into the formation of large-scale structures. It is essential for a robust calibration of the mass-observable scaling relations and their scatter, which are key ingredients for precise cosmology using cluster statistics. Here we illustrate an application of high resolution (<20 arcsec) thermal Sunyaev-Zel'dovich (tSZ) observations by probing the intracluster medium (ICM) of the Planck-discovered galaxy cluster PSZ1 G045.85+57.71 at redshift z = 0.61, using tSZ data obtained with the NIKA camera, which is a dual-band (150 and 260 GHz) instrument operated at the IRAM 30-m telescope. We jointly deproject NIKA and Planck data to extract the electronic pressure distribution from the cluster core (R ≃ 0.02 R500) to its outskirts (R ≃ 3 R500) non-parametrically for the first time at intermediate redshift. The constraints on the resulting pressure profile allow us to reduce the relative uncertainty on the integrated Compton parameter by a factor of two compared to the Planck value. Combining the tSZ data and the deprojected electronic density profile from XMM-Newton allows us to undertake a hydrostatic mass analysis, for which we study the impact of a spherical model assumption on the total mass estimate. We also investigate the radial temperature and entropy distributions. These data indicate that PSZ1 G045.85+57.71 is a massive (M500 ≃ 5.5 × 10¹⁴ M⊙) cool-core cluster. This work is part of a pilot study aiming at optimizing the treatment of the NIKA2 tSZ large program dedicated to the follow-up of SZ-discovered clusters at intermediate and high redshifts. This study illustrates the potential of NIKA2 to put constraints on the thermodynamic properties and tSZ-scaling relations of these clusters, and demonstrates the excellent synergy between tSZ and X-ray observations of similar angular resolution.
Garashchuk, Sophya
2007-04-21
The de Broglie-Bohm formulation of the Schrödinger equation implies conservation of the wave function probability density associated with each quantum trajectory in closed systems. This conservation property greatly simplifies numerical implementations of the quantum trajectory dynamics and increases its accuracy. The reconstruction of a wave function, however, becomes expensive or inaccurate as it requires fitting or interpolation procedures. In this paper we present a method of computing wave packet correlation functions and wave function projections, which typically contain all the desired information about dynamics, without the full knowledge of the wave function by making quadratic expansions of the wave function phase and amplitude near each trajectory similar to expansions used in semiclassical methods. Computation of the quantities of interest in this procedure is linear with respect to the number of trajectories. The introduced approximations are consistent with the approximate quantum potential dynamics method. The projection technique is applied to model chemical systems and to the H+H₂ exchange reaction in three dimensions.
Computing black hole partition functions from quasinormal modes
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-07
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. Furthermore, we then discuss the application of such techniques to more complicated spacetimes.
Chemical Visualization of Boolean Functions: A Simple Chemical Computer
NASA Astrophysics Data System (ADS)
Blittersdorf, R.; Müller, J.; Schneider, F. W.
1995-08-01
We present a chemical realization of the Boolean functions AND, OR, NAND, and NOR with a neutralization reaction carried out in three coupled continuous flow stirred tank reactors (CSTRs). Two of these CSTRs are used as input reactors; the third reactor marks the output. The chemical reaction is the neutralization of hydrochloric acid (HCl) with sodium hydroxide (NaOH) in the presence of phenolphthalein as an indicator, which is red in alkaline solutions and colorless in acidic solutions, representing the two binary states 1 and 0, respectively. The time required for a "chemical computation" is determined by the flow rate of reactant solutions into the reactors, since the neutralization reaction itself is very fast. While the acid flow to all reactors is equal and constant, the flow rate of NaOH solution controls the states of the input reactors. The connectivities between the input and output reactors determine the flow rate of NaOH solution into the output reactor, according to the chosen Boolean function. Thus the state of the output reactor depends on the states of the input reactors.
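The logic realized by the coupled reactors is that of an ordinary two-input truth table; a tiny sketch (using the abstract's encoding: 1 = red/alkaline, 0 = colorless/acidic):

```python
# Output reactor state as a Boolean function of the two input reactors'
# states. The gate set matches the four functions realized chemically.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def truth_table(gate):
    """Map each pair of input reactor states to the output reactor state."""
    return {(a, b): GATES[gate](a, b) for a in (0, 1) for b in (0, 1)}

print(truth_table("NAND"))  # output is 0 only when both inputs are 1
```

In the chemical computer, choosing a gate corresponds to choosing the connectivities (NaOH flow rates) between the input and output reactors rather than a lambda.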
A computer vision based candidate for functional balance test.
Nalci, Alican; Khodamoradi, Alireza; Balkan, Ozgur; Nahab, Fatta; Garudadri, Harinath
2015-08-01
Balance in humans is a motor skill based on complex multimodal sensing, processing and control. The ability to maintain balance in activities of daily living (ADL) is compromised by aging, diseases, injuries and environmental factors. The Centers for Disease Control and Prevention (CDC) estimated the cost of falls among older adults at $34 billion in 2013, a figure expected to reach $54.9 billion in 2020. In this paper, we present a brief review of balance impairments followed by subjective and objective tools currently used in clinical settings for human balance assessment. We propose a novel computer vision (CV) based approach as a candidate for a functional balance test. The test takes less than a minute to administer and is expected to be objective, repeatable and highly discriminative in quantifying the ability to maintain posture and balance. We present an informal study with preliminary data from 10 healthy volunteers, and compare performance with a balance assessment system called the BTrackS Balance Assessment Board. Our results show a high degree of correlation with BTrackS. The proposed system promises to be a good candidate for objective functional balance tests and warrants further investigation to assess validity in clinical settings, including acute care, long term care and assisted living care facilities. Our long term goals include non-intrusive approaches to assess balance competence during ADL in independent living environments.
Computing black hole partition functions from quasinormal modes
NASA Astrophysics Data System (ADS)
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-01
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. We then discuss the application of such techniques to more complicated spacetimes.
Goovaerts, P.
2008-01-01
Indicator kriging provides a flexible interpolation approach that is well suited for datasets where: 1) many observations are below the detection limit, 2) the histogram is strongly skewed, or 3) specific classes of attribute values are better connected in space than others (e.g. low pollutant concentrations). To apply indicator kriging at its full potential requires, however, the tedious inference and modeling of multiple indicator semivariograms, as well as the post-processing of the results to retrieve attribute estimates and associated measures of uncertainty. This paper presents a computer code that automatically performs the following tasks: selection of thresholds for binary coding of continuous data, computation and modeling of indicator semivariograms, modeling of probability distributions at unmonitored locations (regular or irregular grids), and estimation of the mean and variance of these distributions. The program also offers tools for quantifying the goodness of the model of uncertainty within cross-validation and jack-knife frameworks. The different functionalities are illustrated using heavy metal concentrations from the well-known soil Jura dataset. A sensitivity analysis demonstrates the benefit of using more thresholds when indicator kriging is implemented with a linear interpolation model, in particular for variables with positively skewed histograms. PMID:20161335
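The binary indicator coding step can be sketched as follows — a minimal illustration of the coding itself (with made-up data and thresholds), not of the semivariogram modeling or kriging:

```python
import numpy as np

def indicator_code(values, thresholds):
    """Binary indicator coding of continuous data: entry (j, k) is 1 if
    value_j <= z_k. Averaging each column gives the empirical cdf F(z_k);
    indicator kriging would instead krige these 0/1 values spatially to
    model the probability distribution at an unmonitored location."""
    v = np.asarray(values, dtype=float)[:, None]
    z = np.asarray(thresholds, dtype=float)[None, :]
    return (v <= z).astype(int)

conc = [0.2, 0.5, 1.1, 2.3, 4.0]   # e.g. heavy-metal concentrations (made up)
thresholds = [0.5, 1.0, 2.5]       # selected cutoffs (deciles in practice)
ind = indicator_code(conc, thresholds)
print(ind.mean(axis=0))            # empirical cdf at each threshold
```

With observations below the detection limit, only the indicators at thresholds above that limit are affected, which is one reason the coding handles censored data gracefully.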
Goovaerts, P
2009-06-01
Indicator kriging provides a flexible interpolation approach that is well suited for datasets where: 1) many observations are below the detection limit, 2) the histogram is strongly skewed, or 3) specific classes of attribute values are better connected in space than others (e.g. low pollutant concentrations). To apply indicator kriging at its full potential requires, however, the tedious inference and modeling of multiple indicator semivariograms, as well as the post-processing of the results to retrieve attribute estimates and associated measures of uncertainty. This paper presents a computer code that automatically performs the following tasks: selection of thresholds for binary coding of continuous data, computation and modeling of indicator semivariograms, modeling of probability distributions at unmonitored locations (regular or irregular grids), and estimation of the mean and variance of these distributions. The program also offers tools for quantifying the goodness of the model of uncertainty within cross-validation and jack-knife frameworks. The different functionalities are illustrated using heavy metal concentrations from the well-known soil Jura dataset. A sensitivity analysis demonstrates the benefit of using more thresholds when indicator kriging is implemented with a linear interpolation model, in particular for variables with positively skewed histograms.
Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn
2015-01-01
-stone biogeographic models, but not a simple 1-step trans-Atlantic model. Plots of the cumulative pairwise sequence difference curves among seals in each of the four populations provide continuous proxies for phylogenetic diversification within each. Non-parametric Kolmogorov-Smirnov (K-S) tests of maximum pairwise differences between these curves indicate that the Greenland Sea population has a markedly younger phylogenetic structure than either the White Sea population or the two Northwest Atlantic populations, which are of intermediate age and homogeneous structure. The Monte Carlo and K-S assessments provide sensitive quantitative tests of within-species mitogenomic phylogeography. This is the first study to indicate that the White Sea and Greenland Sea populations have different population genetic histories. The analysis supports the hypothesis that Harp Seals comprise three genetically distinguishable breeding populations, in the White Sea, Greenland Sea, and Northwest Atlantic. Implications for an ice-dependent species during ongoing climate change are discussed.
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
Recursive Definitions of Partial Functions and Their Computations
1972-03-01
allows its familiar simplification rules, such as those for the sequential 'if-then-else' connective. Now, 'if-then-else' only has one x-set in g. This means intuitively that computing in
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.H.
1980-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F tests. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions and the exponential integral. Auxiliary services include sorting and printer plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
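For comparison, a few of the facilities this report provided in Fortran now exist in the Python standard library; a brief sketch (not a port of the USGS routines):

```python
# Modern stdlib analogues of some of the report's routines: the Gaussian
# distribution and seeded uniform/normal random number generators.
import random
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)
p = std_normal.cdf(1.96)       # P(Z <= 1.96), approx. 0.975
q = std_normal.inv_cdf(0.975)  # inverse cdf, approx. 1.96

rng = random.Random(42)        # seeded generator for reproducibility
u = rng.random()               # uniform on [0, 1)
n = rng.gauss(0.0, 1.0)        # standard normal deviate
print(round(p, 4), round(q, 2))
```

As in the report, the uniform generator underlies the other deviates, and the distribution objects pair a cdf with its inverse for quantile work.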
Computer routines for probability distributions, random numbers, and related functions
Kirby, W.
1983-01-01
Use of previously coded and tested subroutines simplifies and speeds up program development and testing. This report presents routines that can be used to calculate various probability distributions and other functions of importance in statistical hydrology. The routines are designed as general-purpose Fortran subroutines and functions to be called from user-written main programs. The probability distributions provided include the beta, chi-square, gamma, Gaussian (normal), Pearson Type III (tables and approximation), and Weibull. Also provided are the distributions of the Grubbs-Beck outlier test, Kolmogorov's and Smirnov's D, Student's t, noncentral t (approximate), and Snedecor F. Other mathematical functions include the Bessel function I₀, gamma and log-gamma functions, error functions, and the exponential integral. Auxiliary services include sorting and printer-plotting. Random number generators for uniform and normal numbers are provided and may be used with some of the above routines to generate numbers from other distributions. (USGS)
NASA Astrophysics Data System (ADS)
Francisco, E.; Pendás, A. Martín; Blanco, M. A.
2008-04-01
Given an N-electron molecule and an exhaustive partition of the real space (R) into m arbitrary regions Ω₁, Ω₂, …, Ω_m (⋃_{i=1}^{m} Ω_i = R), the edf program computes all the probabilities P(n₁, n₂, …, n_m) of having exactly n₁ electrons in Ω₁, n₂ electrons in Ω₂, …, and n_m electrons (n₁ + n₂ + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n₁, n₂, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n₁, n₂, …, n_m) probabilities into α and β spin components. Program summary Program title: edf Catalogue identifier: AEAJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5387 No. of bytes in distributed program, including test data, etc.: 52 381 Distribution format: tar.gz Programming language: Fortran 77 Computer
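The quantity P(n₁, …, n_m) can be illustrated with a deliberately simplified toy: if the N electrons were independent, with per-electron probabilities p_i of being found in region Ω_i, the counts would be multinomial. This independence assumption is an illustration only; edf itself works from the correlated wave function and the domain overlap integrals, not from independence.

```python
# Toy electron-count probabilities P(n1, ..., nm) under an assumed
# independent-electron model (NOT edf's actual correlated treatment).
from math import factorial, prod
from itertools import product

def count_probabilities(N, p):
    """Return {(n1, ..., nm): P} over all ways to split N electrons among
    m regions, with p[i] the per-electron probability of region i."""
    m = len(p)
    probs = {}
    for ns in product(range(N + 1), repeat=m):
        if sum(ns) != N:
            continue  # counts must exhaust all N electrons
        coef = factorial(N) / prod(factorial(n) for n in ns)
        probs[ns] = coef * prod(pi ** n for pi, n in zip(p, ns))
    return probs

P = count_probabilities(2, [0.5, 0.5])  # 2 electrons, two equal domains
print(P[(1, 1)])  # 0.5: one electron in each domain half the time
```

Electron correlation shifts these probabilities away from the multinomial values, and that deviation is precisely what localization/delocalization indices quantify.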
Introduction to Classical Density Functional Theory by a Computational Experiment
ERIC Educational Resources Information Center
Jeanmairet, Guillaume; Levy, Nicolas; Levesque, Maximilien; Borgis, Daniel
2014-01-01
We propose an in silico experiment to introduce the classical density functional theory (cDFT). Density functional theories, whether quantum or classical, rely on abstract concepts that are nonintuitive; however, they are at the heart of powerful tools and active fields of research in both physics and chemistry. They led to the 1998 Nobel Prize in…
Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.
ERIC Educational Resources Information Center
Snow, Donald R.
1989-01-01
Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
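A plausible reading of the table algorithm, sketched here in Python rather than as the article's spreadsheet formulas (the function names are mine): each row entry is built from previous entries exactly as a spreadsheet column of generating-function coefficients would be.

```python
def fibonacci_row(n):
    """First n coefficients of x/(1 - x - x^2): the Fibonacci numbers."""
    row = [0, 1]
    for _ in range(n - 1):
        row.append(row[-1] + row[-2])
    return row[1:n + 1]

def catalan_row(n):
    """First n Catalan numbers via the convolution recurrence of their generating function."""
    row = [1]
    for k in range(1, n):
        row.append(sum(row[i] * row[k - 1 - i] for i in range(k)))
    return row
```

Each `append` corresponds to one new spreadsheet cell referencing the cells above it.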
Multiple multiresolution representation of functions and calculus for fast computation
Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A
2010-01-01
We describe the mathematical representations, data structures, and the implementation of the numerical calculus of functions in MADNESS, a multiresolution analysis environment for scientific simulations. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis, to an arbitrary but finite precision. This extends the capabilities of most existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or expansion.
Evaluation of computing systems using functionals of a Stochastic process
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Wu, L. T.
1980-01-01
An intermediate model was used to represent the probabilistic nature of a total system at a level which is higher than the base model and thus closer to the performance variable. A class of intermediate models, generally referred to as functionals of a Markov process, was considered. A closed-form solution of performability was developed for the case where performance is identified with the minimum value of a functional.
Computational strategies for the design of new enzymatic functions.
Świderek, K; Tuñón, I; Moliner, V; Bertran, J
2015-09-15
In this contribution, recent developments in the design of biocatalysts are reviewed, with particular emphasis on the de novo strategy. Studies based on three different reactions, Kemp elimination, Diels-Alder and retro-aldolase, are used to illustrate the successes achieved during the last years. Finally, a section is devoted to the particular case of designed metalloenzymes. As a general conclusion, the interplay between new and more sophisticated engineering protocols and computational methods, based on molecular dynamics simulations with Quantum Mechanics/Molecular Mechanics potentials and fully flexible models, seems to constitute the bedrock for present and future successful design strategies. PMID:25797438
A Functional Level Preprocessor for Computer Aided Digital Design.
1980-12-01
the parsing of user input, is based on that for the computer language PASCAL [J2,1]. The procedure is the author's original design. Each line of input... NIKLAUS WIRTH, PASCAL: User Manual and Report. New York, NY: Springer-Verlag, 1978. LANCASTER, DON, CMOS Cookbook. Indianapolis, IND: Howard... messages generated by SISL during its last run. Each message is of the format: subroutine generating message, format number, and
Bread dough rheology: Computing with a damage function model
NASA Astrophysics Data System (ADS)
Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong
2015-01-01
We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations (stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching) is used to test the model. With the introduction of a revised strain measure which includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing, and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.
Accurate Computation of Divided Differences of the Exponential Function,
1983-06-01
differences are not for arbitrary smooth functions f but for well-known analytic functions such as exp, sin and cos. Thus we can exploit their properties in... have a bad name in practice. However, in a number of applications the functional form of f is known (e.g. exp) and can be exploited to obtain accurate... [the remainder of the abstract, an OCR fragment of the algorithm listing, is not recoverable]
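The cancellation problem, and one standard remedy for exp, can be illustrated as follows (a sketch, not this report's algorithm): the naive recursive table subtracts nearly equal values when nodes are close, whereas the two-point identity exp[x0, x1] = e^((x0+x1)/2) · sinh(h)/h with h = (x1 − x0)/2 is cancellation-free.

```python
import math

def dd_naive(xs):
    """Standard in-place divided-difference table for f = exp; returns exp[x0,...,xn]."""
    n = len(xs)
    d = [math.exp(x) for x in xs]
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - level])  # cancels for close nodes
    return d[-1]

def dd_exp_pair(x0, x1):
    """Cancellation-free two-point divided difference of exp."""
    h = (x1 - x0) / 2.0
    ratio = 1.0 if h == 0.0 else math.sinh(h) / h
    return math.exp((x0 + x1) / 2.0) * ratio
```

For well-separated nodes the two routes agree; for nearly coincident nodes the pair formula stays accurate where the naive subtraction loses digits.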
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented. PMID:26854874
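In the single-determinant special case, the overlap reduces to a determinant of occupied-orbital overlaps. A minimal NumPy sketch of that case (real orbitals and the function name are my assumptions; the paper's algorithm additionally handles CI expansions through recurring intermediates):

```python
import numpy as np

def slater_overlap(c_a, c_b, s_ao):
    """Overlap <Psi_A|Psi_B> of two single-determinant wave functions.

    c_a, c_b : (n_ao, n_occ) MO coefficient matrices (real orbitals assumed)
    s_ao     : (n_ao, n_ao) atomic-orbital overlap matrix
    """
    s_mo = c_a.T @ s_ao @ c_b      # overlaps between the occupied MOs
    return np.linalg.det(s_mo)     # determinant of the occupied-occupied block
```

For CI wave functions the total overlap becomes a coefficient-weighted sum of such determinants over determinant pairs, which is where reuse of intermediates pays off.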
Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J; Sayre, Kirk D; Ankrum, Scott
2013-01-01
Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.
Determining Roots of Complex Functions with Computer Graphics.
ERIC Educational Resources Information Center
Skala, Helen; Kowalski, Robert
1990-01-01
Describes a graphical method of approximating roots of complex functions that uses the multicolor display capabilities of microcomputers. Theorems and proofs are presented that illustrate the method, and uses in undergraduate mathematics courses are suggested, including numerical analysis and complex variables. (six references) (LRW)
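The display idea can be mimicked without graphics: color each grid point by the sign pair (Re f, Im f); all four "colors" meet only where f has a zero. A hedged sketch (the grid scan and function name are mine, not the article's exact procedure):

```python
def root_cells(f, re_lo, re_hi, im_lo, im_hi, n=101):
    """Centers of grid cells where Re(f) and Im(f) both change sign: root candidates."""
    dx = (re_hi - re_lo) / n
    dy = (im_hi - im_lo) / n
    cells = []
    for i in range(n):
        for j in range(n):
            corners = [f(complex(re_lo + (i + a) * dx, im_lo + (j + b) * dy))
                       for a in (0, 1) for b in (0, 1)]
            re_parts = [w.real for w in corners]
            im_parts = [w.imag for w in corners]
            # All four sign combinations meet only near a zero of f.
            if min(re_parts) < 0 < max(re_parts) and min(im_parts) < 0 < max(im_parts):
                cells.append(complex(re_lo + (i + 0.5) * dx, im_lo + (j + 0.5) * dy))
    return cells
```

On a color display each sign pair gets its own color; here the candidate cells are simply returned for refinement.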
A general computational framework for modeling cellular structure and function.
Schaff, J; Fink, C C; Slepchenko, B; Carson, J H; Loew, L M
1997-01-01
The "Virtual Cell" provides a general system for testing cell biological mechanisms and creates a framework for encapsulating the burgeoning knowledge base comprising the distribution and dynamics of intracellular biochemical processes. It approaches the problem by associating biochemical and electrophysiological data describing individual reactions with experimental microscopic image data describing their subcellular localizations. Individual processes are collected within a physical and computational infrastructure that accommodates any molecular mechanism expressible as rate equations or membrane fluxes. An illustration of the method is provided by a dynamic simulation of IP3-mediated Ca2+ release from endoplasmic reticulum in a neuronal cell. The results can be directly compared to experimental observations and provide insight into the role of experimentally inaccessible components of the overall mechanism. PMID:9284281
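Any mechanism expressible as rate equations can be integrated numerically. A deliberately minimal sketch (first-order decay with forward Euler; this is illustrative only and is not the Virtual Cell's actual solver or API):

```python
import math

def euler_decay(k, a0, t_end, dt=1e-3):
    """Forward-Euler integration of the rate equation d[A]/dt = -k[A]."""
    a, t = a0, 0.0
    while t < t_end - 1e-12:   # guard against float drift in the time accumulator
        a -= k * a * dt
        t += dt
    return a
```

For dt small the result tracks the analytic solution a0·exp(-k·t); production tools use adaptive, stiff-capable solvers instead.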
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.
ERIC Educational Resources Information Center
Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel
2017-01-01
The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…
Frequency domain transfer function identification using the computer program SYSFIT
Trudnowski, D.J.
1992-12-01
Because the primary application of SYSFIT for BPA involves studying power system dynamics, this investigation was geared toward simulating the effects that might be encountered in studying electromechanical oscillations in power systems. Although the intended focus of this work is power system oscillations, the studies are sufficiently generic that the results can be applied to many types of oscillatory systems with closely-spaced modes. In general, there are two possible ways of solving the optimization problem. One is to use a least-squares optimization function and to write the system in such a form that the problem becomes one of linear least-squares. The solution can then be obtained using a standard least-squares technique. The other method involves using a search method to obtain the optimal model. This method allows considerably more freedom in forming the optimization function and model, but it requires an initial guess of the system parameters. SYSFIT employs this second approach. Detailed investigations were conducted into three main areas: (1) fitting to exact frequency response data of a linear system; (2) fitting to the discrete Fourier transformation of noisy data; and (3) fitting to multi-path systems. The first area consisted of investigating the effects of alternative optimization cost function options; the use of different optimization search methods; incorrect model order; missing response data; closely-spaced poles; and closely-spaced pole-zero pairs. Within the second area, different noise colorations and levels were studied. In the third area, methods were investigated for improving fitting results by incorporating more than one system path. The following is a list of guidelines and properties developed from the study for fitting a transfer function to the frequency response of a system using optimization search methods.
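The first route mentioned above, recasting the fit as linear least-squares, can be sketched for a first-order model H(s) = b0/(s + a0) using Levy's linearization (my example; SYSFIT itself takes the search-based route):

```python
import numpy as np

def levy_fit_first_order(omega, h):
    """Linear least-squares fit of H(s) = b0 / (s + a0) to frequency-response samples.

    Levy's linearization: b0 - a0*H(jw) = jw*H(jw), solved for [b0, a0].
    omega : real frequencies (rad/s); h : complex response samples H(jw).
    """
    a_mat = np.column_stack([np.ones_like(h), -h])   # coefficients of [b0, a0]
    rhs = 1j * omega * h
    # Stack real and imaginary parts to obtain a real least-squares problem.
    a_real = np.vstack([a_mat.real, a_mat.imag])
    rhs_real = np.concatenate([rhs.real, rhs.imag])
    sol, *_ = np.linalg.lstsq(a_real, rhs_real, rcond=None)
    return sol  # [b0, a0]
```

With exact data from a first-order system, the linearized equations are consistent and the parameters are recovered directly, no initial guess required.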
A computational interactome and functional annotation for the human proteome
Garzón, José Ignacio; Deng, Lei; Murray, Diana; Shapira, Sagi; Petrey, Donald; Honig, Barry
2016-01-01
We present a database, PrePPI (Predicting Protein-Protein Interactions), of more than 1.35 million predicted protein-protein interactions (PPIs). Of these at least 127,000 are expected to constitute direct physical interactions although the actual number may be much larger (~500,000). The current PrePPI, which contains predicted interactions for about 85% of the human proteome, is related to an earlier version but is based on additional sources of interaction evidence and is far larger in scope. The use of structural relationships allows PrePPI to infer numerous previously unreported interactions. PrePPI has been subjected to a series of validation tests including reproducing known interactions, recapitulating multi-protein complexes, analysis of disease associated SNPs, and identifying functional relationships between interacting proteins. We show, using Gene Set Enrichment Analysis (GSEA), that predicted interaction partners can be used to annotate a protein’s function. We provide annotations for most human proteins, including many annotated as having unknown function. DOI: http://dx.doi.org/10.7554/eLife.18715.001 PMID:27770567
Toward high-resolution computational design of helical membrane protein structure and function
Barth, Patrick; Senes, Alessandro
2016-01-01
The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
Technology Transfer Automated Retrieval System (TEKTRAN)
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry
ERIC Educational Resources Information Center
Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan
2013-01-01
A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions
ERIC Educational Resources Information Center
Moreira, M. V.; Basilio, J. C.
2012-01-01
All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
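The contrast the abstract draws can be made concrete on a toy rational function X(z) = z/((z − p1)(z − p2)): partial fractions of X(z)/z yield a closed-form x(k), while direct division only generates terms one at a time. A hedged sketch (the example and names are mine, not the article's):

```python
def x_partial_fractions(p1, p2, n_terms):
    """x(k) for X(z) = z / ((z - p1)(z - p2)) via partial fractions of X(z)/z.

    For distinct poles the residues give x(k) = (p1**k - p2**k) / (p1 - p2), k >= 0.
    """
    return [(p1 ** k - p2 ** k) / (p1 - p2) for k in range(n_terms)]

def x_long_division(p1, p2, n_terms):
    """Same sequence via the direct-division route: expand X(z) in powers of 1/z.

    X(z) = z**-1 / (1 - s*z**-1 + p*z**-2) gives x(k) = s*x(k-1) - p*x(k-2) + delta(k-1).
    """
    s, p = p1 + p2, p1 * p2
    x = []
    for k in range(n_terms):
        xk = 1.0 if k == 1 else 0.0
        if k >= 1:
            xk += s * x[k - 1]
        if k >= 2:
            xk -= p * x[k - 2]
        x.append(xk)
    return x
```

Both routes agree term by term, but only the first produces an analytical expression for arbitrary k.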
Effects of Computer versus Paper Administration of an Adult Functional Writing Assessment
ERIC Educational Resources Information Center
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young
2011-01-01
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Computer programs for calculation of thermodynamic functions of mixing in crystalline solutions
NASA Technical Reports Server (NTRS)
Comella, P. A.; Saxena, S. K.
1972-01-01
The computer programs Beta, GEGIM, REGSOL1, REGSOL2, Matrix, and Quasi are presented. The programs are useful in various calculations for the thermodynamic functions of mixing and the activity-composition relations in rock forming minerals.
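The Fortran sources are not reproduced in the abstract; as a hedged stand-in, a symmetric regular-solution model illustrates the kind of mixing and activity-composition functions such programs evaluate (the one-parameter model and function names are my choices):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mixing_regular(x1, temp, w):
    """Gibbs free energy of mixing (J/mol) for a binary symmetric regular solution.

    Ideal configurational term plus a single interaction parameter w (J/mol).
    """
    x2 = 1.0 - x1
    g_ideal = R * temp * (x1 * math.log(x1) + x2 * math.log(x2))
    return g_ideal + w * x1 * x2

def activity(x1, temp, w):
    """Activity a1 = gamma1 * x1, with ln(gamma1) = w * x2**2 / (R*T)."""
    x2 = 1.0 - x1
    return x1 * math.exp(w * x2 * x2 / (R * temp))
```

With w = 0 the model collapses to ideal mixing (activity equals mole fraction), a useful sanity check.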
Density functional computations for inner-shell excitation spectroscopy
NASA Astrophysics Data System (ADS)
Hu, Ching-Han; Chong, Delano P.
1996-11-01
The 1s → π* inner-shell excitation spectra of seven molecules have been studied using density functional theory along with the unrestricted generalized transition state (uGTS) approach. The exchange-correlation potential is based on a combined functional of Becke's exchange (B88) and Perdew's correlation (P86). A scaling procedure based on Clementi and Raimondi's rules for atomic screening is applied to the cc-pVTZ basis set of atoms where a partial core-hole is created in the uGTS calculations. The average absolute deviation between our predicted 1s → π* excitation energies and experimental values is only 0.16 eV. Singlet-triplet splittings of C 1s → π* transitions of CO, C2H2, C2H4, and C6H6 also agree with experimental observations. The average absolute deviation of our predicted core-electron binding energies and term values is 0.23 and 0.29 eV, respectively.
Computational characterization of sodium selenite using density functional theory.
Barraza-Jiménez, Diana; Flores-Hidalgo, Manuel Alberto; Galvan, Donald H; Sánchez, Esteban; Glossman-Mitnik, Daniel
2011-04-01
In this theoretical study we used density functional theory to calculate the molecular and crystalline structures of sodium selenite. Our structural results were compared with experimental data. From the molecular structure we determined the ionization potential, electronic affinity, and global reactivity parameters like electronegativity, hardness, softness and global electrophilic index. A significant difference in the IP and EA values was observed, and this difference was dependent on the calculation method used (employing either vertical or adiabatic energies). Thus, values obtained for the electrophilic index (2.186 eV from vertical energies and 2.188 eV from adiabatic energies) were not significantly different. Selectivity was calculated using the Fukui functions. Since the Mulliken charge study predicted a negative value, it is recommended that AIM should be used in selectivity characterization. It was evident from the selectivity index that sodium atoms are the most sensitive sites to nucleophilic attack. The results obtained in this work provide data that will aid the characterization of compounds used in crop biofortification.
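The global descriptors named above follow from IP and EA by the standard finite-difference formulas of conceptual DFT (the numeric values in the sketch are illustrative, not the paper's selenite results; the softness convention S = 1/(2η) is one common choice):

```python
def reactivity_indices(ip, ea):
    """Global reactivity descriptors (eV) from vertical or adiabatic IP and EA."""
    chi = (ip + ea) / 2.0          # electronegativity (minus the chemical potential)
    eta = (ip - ea) / 2.0          # chemical hardness
    softness = 1.0 / (2.0 * eta)   # global softness, S = 1/(2*eta) convention
    omega = chi ** 2 / (2.0 * eta) # Parr's global electrophilicity index
    return {"chi": chi, "eta": eta, "softness": softness, "omega": omega}
```

Feeding in vertical versus adiabatic IP/EA values, as the paper does, changes the inputs but not these formulas.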
Computation of Schenberg response function by using finite element modelling
NASA Astrophysics Data System (ADS)
Frajuca, C.; Bortoli, F. S.; Magalhaes, N. S.
2016-05-01
Schenberg is a resonant-mass gravitational wave detector with a central operating frequency of 3200 Hz. Transducers located on the surface of the resonant sphere, according to a half-dodecahedron distribution, are used to monitor its strain amplitude. The development of mechanical impedance matchers, which act by increasing the coupling of the transducers to the sphere, is a major challenge because of the high frequency and small size involved. The objective of this work is to study the Schenberg response function obtained by finite element modeling (FEM). Finally, the result is compared with that of the simplified mass-spring model to verify whether the latter is suitable for determining the detector's sensitivity; we conclude that both models give the same results.
A Computer Program for the Computation of Running Gear Temperatures Using Green's Function
NASA Technical Reports Server (NTRS)
Koshigoe, S.; Murdock, J. W.; Akin, L. S.; Townsend, D. P.
1996-01-01
A new technique has been developed to study two-dimensional heat transfer problems in gears. This technique consists of transforming the heat equation into a line integral equation with the use of Green's theorem. The equation is then expressed in terms of eigenfunctions that satisfy the Helmholtz equation, and their corresponding eigenvalues, for an arbitrarily shaped region of interest. The eigenfunctions are obtained by solving an integral equation. Once the eigenfunctions are found, the temperature is expanded in terms of the eigenfunctions with unknown time-dependent coefficients that can be solved for by using Runge-Kutta methods. The time integration is extremely efficient. Therefore, any changes in the time-dependent coefficients or source terms in the boundary conditions do not impose a great computational burden on the user. The method is demonstrated by applying it to a sample gear tooth. Temperature histories at representative surface locations are given.
NASA Astrophysics Data System (ADS)
Borgis, Daniel; Assaraf, Roland; Rotenberg, Benjamin; Vuilleumier, Rodolphe
2013-12-01
No fancy statistical objects here; we go back to the computation of one of the most basic and fundamental quantities in the statistical mechanics of fluids, namely the pair distribution functions. Those functions are usually computed in molecular simulations by using histogram techniques. We show here that they can be estimated using global information on the instantaneous forces acting on the particles, and that this leads to a reduced variance compared to the standard histogram estimators. The technique is extended successfully to the computation of three-dimensional solvent densities around tagged molecular solutes, quantities that are noisy and very slow to converge when estimated using histograms.
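The baseline the authors improve on, the histogram estimator of g(r), can be sketched as follows (a minimal version for a cubic periodic box; the paper's force-based, reduced-variance estimator is not shown):

```python
import numpy as np

def rdf_histogram(positions, box, r_max, n_bins):
    """Standard histogram estimator of g(r) for particles in a cubic periodic box."""
    n = len(positions)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)            # minimum-image convention
        r = np.sqrt((d * d).sum(axis=1))
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    rho = n / box ** 3
    shell = (4.0 / 3.0) * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    # Normalize pair counts by the ideal-gas expectation in each spherical shell.
    g = 2.0 * counts / (n * rho * shell)
    return 0.5 * (edges[1:] + edges[:-1]), g
```

The statistical noise of this estimator, bin by bin, is precisely what the force-based estimator reduces.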
Astrocytes, Synapses and Brain Function: A Computational Approach
NASA Astrophysics Data System (ADS)
Nadkarni, Suhita
2006-03-01
Modulation of synaptic reliability is one of the leading mechanisms involved in long-term potentiation (LTP) and long-term depression (LTD) and therefore has implications for information processing in the brain. A recently discovered mechanism for modulating synaptic reliability critically involves the recruitment of astrocytes, star-shaped cells that outnumber the neurons in most parts of the central nervous system. Astrocytes until recently were thought to be subordinate cells merely participating in supporting neuronal functions. New evidence, however, made available by advances in imaging technology has changed the way we envision the role of these cells in synaptic transmission and as modulators of neuronal excitability. We put forward a novel mathematical framework based on the biophysics of the bidirectional neuron-astrocyte interactions that quantitatively accounts for two distinct experimental manifestations of the recruitment of astrocytes in synaptic transmission: a) a low-fidelity synapse is transformed into a high-fidelity synapse and b) postsynaptic spontaneous currents are enhanced when astrocytes are activated. Such a framework is not only useful for modeling neuronal dynamics in a realistic environment but also provides a conceptual basis for interpreting experiments. Based on this modeling framework, we explore the role of astrocytes in neuronal network behavior such as synchrony and correlations, and compare with experimental data from cultured networks.
Fast computation of functional networks from fMRI activity: a multi-platform comparison
NASA Astrophysics Data System (ADS)
Rao, A. Ravishankar; Bordawekar, Rajesh; Cecchi, Guillermo
2011-03-01
The recent deployment of functional networks to analyze fMRI images has been very promising. In this method, the spatio-temporal fMRI data is converted to a graph-based representation, where the nodes are voxels and edges indicate the relationship between the nodes, such as the strength of correlation or causality. Graph-theoretic measures can then be used to compare different fMRI scans. However, there is a significant computational bottleneck, as the computation of functional networks with directed links takes several hours on conventional machines with single CPUs. The study in this paper shows that a GPU can be advantageously used to accelerate the computation, such that the network computation takes a few minutes. Though GPUs have been used for the purposes of displaying fMRI images, their use in computing functional networks is novel. We describe specific techniques such as load balancing, and the use of a large number of threads to achieve the desired speedup. Our experience in utilizing the GPU for functional network computations should prove useful to the scientific community investigating fMRI as GPUs are a low-cost platform for addressing the computational bottleneck.
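Setting the GPU machinery aside, the core step is an all-pairs comparison over voxel time series. A hedged NumPy sketch of the undirected, correlation-thresholded variant (the directed/causal measures the paper accelerates are more involved):

```python
import numpy as np

def functional_network(ts, threshold):
    """Adjacency matrix of a functional network from voxel time series.

    ts : (n_voxels, n_time) array; an edge connects voxels whose Pearson
    correlation exceeds `threshold` in absolute value.
    """
    z = ts - ts.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)  # unit-norm rows
    corr = z @ z.T                                  # all-pairs Pearson correlations
    adj = np.abs(corr) > threshold
    np.fill_diagonal(adj, False)                    # no self-loops
    return adj
```

The `z @ z.T` product is the O(n² · t) bottleneck that maps naturally onto many GPU threads.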
A mesh-decoupled height function method for computing interface curvature
NASA Astrophysics Data System (ADS)
Owkes, Mark; Desjardins, Olivier
2015-01-01
In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.
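At its core, a height function gives interface curvature via finite differences of column heights. This three-column 2D sketch illustrates the formula κ = h'' / (1 + h'²)^(3/2); it is not the paper's mesh-decoupled method, which additionally requires the geometric toolbox to evaluate heights in tilted columns:

```python
def height_function_curvature(h, dx):
    """Curvature of a 2D interface from three adjacent column heights.

    h  : [h_left, h_center, h_right], column spacing dx
    kappa = h'' / (1 + h'**2)**1.5 via central differences.
    """
    hl, hc, hr = h
    h1 = (hr - hl) / (2.0 * dx)           # first derivative of the height
    h2 = (hr - 2.0 * hc + hl) / dx ** 2   # second derivative of the height
    return h2 / (1.0 + h1 * h1) ** 1.5
```

Sampling heights from a circle of radius R recovers a curvature of magnitude 1/R to second order in dx, the standard verification for such schemes.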
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials, however its performance in these settings has not been systematically evaluated. Design The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full scale multicenter clinical trial (N=1635). We describe relationships that test scores have with those from interviewer-administered cognitive function tests and risk factors for cognitive deficits and describe performance measures (completeness, intra-class correlations). Results Computer-based assessments of cognitive function had consistent relationships across the pilot and full scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures, however rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
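The wall-function idea replaces sublayer resolution with an analytic wall law. As an illustrative sketch only (incompressible log law with fixed-point iteration; Barnwell's compressible defect formulation is more elaborate), the friction velocity can be recovered from a single off-wall velocity sample:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, b_const=5.0, iters=50):
    """Solve the log law u/u_tau = (1/kappa)*ln(y*u_tau/nu) + B for u_tau.

    u  : velocity sample at wall distance y (in the log layer)
    nu : kinematic viscosity; fixed-point iteration on u_tau.
    """
    u_tau = max(1e-6, 0.05 * u)   # crude initial guess
    for _ in range(iters):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + b_const)
    return u_tau
```

From u_tau the wall shear stress follows as tau_w = rho * u_tau**2, which is what the marching code uses in place of a resolved sublayer.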
Peak functions for modeling high resolution soil profile data
Technology Transfer Automated Retrieval System (TEKTRAN)
Parametric and non-parametric depth functions have been used to estimate continuous soil profile properties. However, some soil properties, such as those seen in weathered loess, have complex peaked and anisotropic depth distributions. These distributions are poorly handled by common parametric func...
Computation of fractional integrals via functions of hypergeometric and Bessel type
NASA Astrophysics Data System (ADS)
Kilbas, A. A.; Trujillo, J. J.
2000-06-01
The paper is devoted to the computation of fractional integrals of power exponential functions. A function λ_{γ,σ}^{(β)}(z) is considered, with positive β and complex γ, σ and z such that Re(γ) > (1/β) - 1 and Re(z) > 0. Special cases are discussed in which λ_{γ,σ}^{(β)}(z) is expressed in terms of the Tricomi confluent hypergeometric function Ψ(a,c;x) and of the modified Bessel function of the third kind K_γ(x). Representations of these functions via fractional integrals are proved. The results obtained are applied to compute fractional integrals of power exponential functions in terms of λ_{γ,σ}^{(β)}(x), Ψ(a,c;x) and K_γ(x). Examples are considered.
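For orientation, the left-sided Riemann-Liouville fractional integral of order α is the standard object behind such computations (the paper's precise normalization is an assumption here):

```latex
\left(I_{0+}^{\alpha} f\right)(x)
  = \frac{1}{\Gamma(\alpha)} \int_0^x (x-t)^{\alpha-1} f(t)\,\mathrm{d}t,
  \qquad \operatorname{Re}(\alpha) > 0,
```

applied in this setting to power exponential integrands of the form f(t) = t^γ exp(-σ t^β).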
Locating and computing in parallel all the simple roots of special functions using PVM
NASA Astrophysics Data System (ADS)
Plagianakos, V. P.; Nousis, N. K.; Vrahatis, M. N.
2001-08-01
An algorithm is proposed for locating and computing in parallel, and with certainty, all the simple roots of any twice continuously differentiable function in any specific interval. To compute all the roots with certainty, the proposed method relies heavily on knowledge of the total number of roots within the given interval. To obtain this information we use results from topological degree theory and, in particular, the Kronecker-Picard approach. This theory gives a formula for the computation of the total number of roots of a system of equations within a given region, which can be computed in parallel. With this tool in hand, we construct a parallel procedure for the localization and isolation of all the roots by dividing the given region successively and applying the above formula to these subregions until the final domains contain at most one root. The subregions with no roots are discarded, while for the rest a modification of the well-known bisection method is employed for the computation of the contained root. The new aspect of the present contribution is that the computation of the total number of zeros using the Kronecker-Picard integral, as well as the localization and computation of all the roots, is performed in parallel using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of varied architectures. The proposed algorithm has large granularity and low synchronization, and is robust. It has been implemented and tested, and our experience is that it can compute with certainty all the roots in a given interval, even in massive computations. Performance information from massive computations related to a recently proposed conjecture due to Elbert (this issue, J. Comput. Appl. Math. 133 (2001) 65-83) is reported.
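A minimal serial sketch of the localize-isolate-bisect structure described above: a sign-change scan over subintervals stands in for the Kronecker-Picard root count (which the paper computes rigorously and in parallel under PVM), and each bracket is then refined by bisection. The function and interval below are illustrative, not from the paper.

```python
import math

def isolate_and_bisect(f, a, b, pieces=64, tol=1e-12):
    """Locate all simple roots of f in [a, b]: scan subintervals for sign
    changes (a simplistic stand-in for the Kronecker-Picard root count),
    then refine each bracketing subinterval by bisection."""
    roots = []
    xs = [a + (b - a) * i / pieces for i in range(pieces + 1)]
    for lo, hi in zip(xs, xs[1:]):
        flo, fhi = f(lo), f(hi)
        if flo == 0.0:            # grid point happens to be a root
            roots.append(lo)
            continue
        if flo * fhi < 0:         # exactly one simple root bracketed
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    if f(b) == 0.0:
        roots.append(b)
    return roots
```

For example, `isolate_and_bisect(math.sin, 1.0, 7.0)` recovers the two roots pi and 2*pi in that interval.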
Druskin, V.; Lee, Ping; Knizhnerman, L.
1996-12-31
There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems, obtained after discretization of partial differential equations by method of lines. In the event that the cost of computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, originated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
NASA Technical Reports Server (NTRS)
King, H. F.; Komornicki, A.
1986-01-01
Formulas are presented relating Taylor series expansion coefficients of three functions of several variables: the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
Liu, Jia; Yan, Zhengzheng; Pu, Yuehua; Shiu, Wen-Shin; Wu, Jianhuang; Chen, Rongliang; Leng, Xinyi; Qin, Haiqiang; Liu, Xin; Jia, Baixue; Song, Ligang; Wang, Yilong; Miao, Zhongrong; Wang, Yongjun; Liu, Liping; Cai, Xiao-Chuan
2016-10-04
The fractional pressure ratio is introduced to quantitatively assess the hemodynamic significance of severe intracranial stenosis. A computational fluid dynamics-based method is proposed to compute this ratio non-invasively (FPRCFD) and to compare it against the fractional pressure ratio measured by an invasive technique. Eleven patients with severe intracranial stenosis considered for endovascular intervention were recruited, and an invasive procedure was performed to measure the distal and the aortic pressure (Pd and Pa). The fractional pressure ratio was calculated as [Formula: see text] Computed tomography angiography was used to reconstruct three-dimensional (3D) arteries for each patient. Cerebral hemodynamics was then computed for the arteries using a mathematical model governed by the Navier-Stokes equations, with outflow conditions imposed by a model of distal resistance and compliance. The non-invasive [Formula: see text], [Formula: see text], and FPRCFD were then obtained from the computational fluid dynamics calculation using a 16-core parallel computer. The invasive and non-invasive parameters were compared by statistical analysis. For this group of patients, the computational fluid dynamics method achieved results comparable with the invasive measurements. The fractional pressure ratio and FPRCFD are very close and highly correlated, but neither is linearly proportional to the percentage of stenosis. The proposed computational fluid dynamics method can potentially be useful in assessing the functional alteration of cerebral stenosis.
Computation of determinant expansion coefficients within the graphically contracted function method.
Gidofalvi, Gergely; Shepard, Ron
2009-11-30
Most electronic structure methods express the wavefunction as an expansion of N-electron basis functions that are chosen to be either Slater determinants or configuration state functions. Although the expansion coefficient of a single determinant may be readily computed from configuration state function coefficients for small wavefunction expansions, traditional algorithms are impractical for systems with a large number of electrons and spatial orbitals. In this work, we describe an efficient algorithm for the evaluation of a single determinant expansion coefficient for wavefunctions expanded as a linear combination of graphically contracted functions. Each graphically contracted function has significant multiconfigurational character and depends on a relatively small number of variational parameters called arc factors. Because the graphically contracted function approach expresses the configuration state function coefficients as products of arc factors, a determinant expansion coefficient may be computed recursively more efficiently than with traditional configuration interaction methods. Although the cost of computing determinant coefficients scales exponentially with the number of spatial orbitals for traditional methods, the algorithm presented here exploits two levels of recursion and scales polynomially with system size. Hence, as demonstrated through applications to systems with hundreds of electrons and orbitals, it may readily be applied to very large systems.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Stehlin, P.; Brogan, F. A.
1981-01-01
A method for improving the efficiency of nonlinear structural analysis by the use of global displacement functions is presented. The computer programs include options to define the global functions as input or let the program automatically select and update these functions. The program was applied to a number of structures: (1) 'pear-shaped cylinder' in compression, (2) bending of a long cylinder, (3) spherical shell subjected to point force, (4) panel with initial imperfections, (5) cylinder with cutouts. The sample cases indicate the usefulness of the procedure in the solution of nonlinear structural shell problems by the finite element method. It is concluded that the use of global functions for extrapolation will lead to savings in computer time.
Computationally efficient algorithms for the two-dimensional Kolmogorov Smirnov test
NASA Astrophysics Data System (ADS)
Lopes, R. H. C.; Hobson, P. R.; Reid, I. D.
2008-07-01
Goodness-of-fit statistics measure the compatibility of random samples against some theoretical or reference probability distribution function. The classical one-dimensional Kolmogorov-Smirnov test is a non-parametric statistic for comparing two empirical distributions, which defines the largest absolute difference between the two cumulative distribution functions as a measure of disagreement. Adapting this test to more than one dimension is a challenge because there are 2^d - 1 independent ways of ordering a cumulative distribution function in d dimensions. We discuss Peacock's version of the Kolmogorov-Smirnov test for two-dimensional data sets, which computes the differences between cumulative distribution functions in 4n² quadrants. We also examine Fasano and Franceschini's variation of Peacock's test, Cooke's algorithm for Peacock's test, and ROOT's version of the two-dimensional Kolmogorov-Smirnov test. We establish a lower-bound limit of Ω(n² lg n) on the work for computing Peacock's test, introduce optimal algorithms for both this and Fasano and Franceschini's test, and show that Cooke's algorithm is not a faithful implementation of Peacock's test. We also discuss and evaluate parallel algorithms for Peacock's test.
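The classical one-dimensional two-sample statistic referred to above can be sketched in a few lines of pure Python (the d-dimensional variants discussed in the paper are substantially more involved; this naive version is O(n²), not an optimal algorithm):

```python
def ks_statistic(xs, ys):
    """Two-sample 1D Kolmogorov-Smirnov statistic: the largest absolute
    difference between the two empirical CDFs. The supremum is attained
    at one of the sample points, so only those need checking."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for t in xs + ys:
        fx = sum(1 for v in xs if v <= t) / len(xs)
        fy = sum(1 for v in ys if v <= t) / len(ys)
        d = max(d, abs(fx - fy))
    return d
```

Identical samples give 0.0 and fully separated samples give 1.0, the extremes of the statistic's range.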
Networks of spiking neurons that compute linear functions using action potential timing
NASA Astrophysics Data System (ADS)
Ruf, Berthold
1999-03-01
For fast neural computations within the brain it is very likely that the timing of single firing events is relevant. Recently Maass has shown that under certain weak assumptions a weighted sum can be computed in temporal coding by leaky integrate-and-fire neurons. This construction can be extended to approximate arbitrary functions. In comparison to integrate-and-fire neurons, biologically more realistic neurons have several sources of additional nonlinear effects, such as the spatial and temporal interaction of postsynaptic potentials or voltage-gated ion channels at the soma. Here we demonstrate, with the help of computer simulations using GENESIS, that despite these nonlinearities such neurons can compute linear functions in a natural and straightforward way, based on the main principles of the construction given by Maass. One only has to assume that a neuron receives all its inputs in a time interval of approximately the length of the rising segment of its excitatory postsynaptic potentials. We also show that under certain assumptions there exists within this construction some type of activation function being computed by such neurons. Finally we demonstrate that on the basis of these results it is possible to realize pattern analysis with spiking neurons in a simple way. It allows the analysis of a mixture of several learned patterns within a few milliseconds.
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
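One convenient way to realize the "objective functions as realizations of stochastic processes" fiction is a random Fourier series with decaying amplitudes, which is a realization of a stationary Gaussian process on the circle. This is a sketch under assumed kernel choices (the decay rate and term count here are ours), not the report's S-PLUS implementation:

```python
import math
import random

def random_objective(n_terms=10, decay=0.7, seed=0):
    """Return a pseudorandom smooth objective on [0, 2*pi]: a random
    Fourier series with i.i.d. Gaussian coefficients and geometrically
    decaying amplitudes, i.e. one draw from a stationary Gaussian
    process. The same seed always reproduces the same function."""
    rng = random.Random(seed)
    coeffs = [(rng.gauss(0, 1) * decay ** k, rng.gauss(0, 1) * decay ** k)
              for k in range(1, n_terms + 1)]

    def f(x):
        return sum(a * math.cos(k * x) + b * math.sin(k * x)
                   for k, (a, b) in enumerate(coeffs, start=1))

    return f
```

Seeding makes experiments repeatable: two calls with the same seed yield identical objective functions, so different optimizers can be benchmarked on exactly the same test problem.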
MRIVIEW: An interactive computational tool for investigation of brain structure and function
Ranken, D.; George, J.
1993-12-31
MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.
A fast computation method for MUSIC spectrum function based on circular arrays
NASA Astrophysics Data System (ADS)
Du, Zhengdong; Wei, Ping
2015-02-01
The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array. Then, in the process of calculating the MUSIC spectrum, the cyclic characteristics of the steering vector allow the inner product in the spatial spectrum calculation to be realised by cyclic convolution. The computational amount of the MUSIC spectrum is thus substantially less than that of the conventional method, making this a very practical way to compute the MUSIC spectrum for circular arrays.
Implementation of linear-scaling plane wave density functional theory on parallel computers
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Haynes, Peter D.; Mostofi, Arash A.; Payne, Mike C.
We describe the algorithms we have developed for linear-scaling plane wave density functional calculations on parallel computers as implemented in the onetep program. We outline how onetep achieves plane wave accuracy with a computational cost which increases only linearly with the number of atoms by optimising directly the single-particle density matrix expressed in a psinc basis set. We describe in detail the novel algorithms we have developed for computing with the psinc basis set the quantities needed in the evaluation and optimisation of the total energy within our approach. For our parallel computations we use the general Message Passing Interface (MPI) library of subroutines to exchange data between processors. Accordingly, we have developed efficient schemes for distributing data and computational load to processors in a balanced manner. We describe these schemes in detail and in relation to our algorithms for computations with a psinc basis. Results of tests on different materials show that onetep is an efficient parallel code that should be able to take advantage of a wide range of parallel computer architectures.
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
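A small worked instance of the divisions-and-substitutions idea for an irreducible quadratic factor (the example and helper functions are ours, not taken from the note): decompose 1/((x - 1)(x² + 1)) into A/(x - 1) + (Bx + C)/(x² + 1) without ever finding the complex roots of x² + 1.

```python
from fractions import Fraction as F

def poly_eval(p, x):
    """Evaluate p = [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ..."""
    return sum(c * x ** i for i, c in enumerate(p))

def poly_divmod(num, den):
    """Long division of polynomials over the rationals."""
    num = num[:]
    q = [F(0)] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] / den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return q, num[:len(den) - 1]

# 1/((x - 1)(x^2 + 1)) = A/(x - 1) + (Bx + C)/(x^2 + 1)
quad = [F(1), F(0), F(1)]            # x^2 + 1, irreducible over the reals
A = F(1) / poly_eval(quad, 1)        # substitution ("cover-up") at x = 1
# remaining numerator: (1 - A*(x^2 + 1)) divided exactly by (x - 1)
top = [F(1) - A * quad[0], -A * quad[1], -A * quad[2]]
bc, rem = poly_divmod(top, [F(-1), F(1)])
C, B = bc[0], bc[1]                  # quotient is C + B*x; rem must be 0
```

Here A = 1/2 and Bx + C = -(x + 1)/2, and the exact-rational remainder being zero confirms the division came out even, as the method requires.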
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
ERIC Educational Resources Information Center
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.; Blemker, Silvia S.
2015-01-01
Purpose: This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method: We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy…
Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Li, Johnson
2013-01-01
The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…
The nonverbal communication functions of emoticons in computer-mediated communication.
Lo, Shao-Kang
2008-10-01
Most past studies assume that computer-mediated communication (CMC) lacks nonverbal communication cues. However, Internet users have devised and learned to use emoticons to assist their communications. This study examined emoticons as a communication tool that, although presented as verbal cues, perform nonverbal communication functions. We therefore termed emoticons quasi-nonverbal cues.
Maple (Computer Algebra System) in Teaching Pre-Calculus: Example of Absolute Value Function
ERIC Educational Resources Information Center
Tuluk, Güler
2014-01-01
Modules in Computer Algebra Systems (CAS) make Mathematics interesting and easy to understand. The present study focused on the implementation of the algebraic, tabular (numerical), and graphical approaches used for the construction of the concept of absolute value function in teaching mathematical content knowledge along with Maple 9. The study…
ERIC Educational Resources Information Center
Hetzroni, Orit E.; Tannous, Juman
2004-01-01
This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant…
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
PuFT: Computer-Assisted Program for Pulmonary Function Tests.
ERIC Educational Resources Information Center
Boyle, Joseph
1983-01-01
PuFT computer program (Microsoft Basic) is designed to help in understanding/interpreting pulmonary function tests (PFT). The program provides predicted values for common PFT after entry of patient data, calculates/plots a graph simulating forced vital capacity (FVC), and allows observations of effects on predicted PFT values and FVC curve when…
Non-Parametric Model Drift Detection
2016-07-01
Most machine learning methods operate under the assumption that the training and the test data are sampled from the same distribution.
A new Fortran 90 program to compute regular and irregular associated Legendre functions
NASA Astrophysics Data System (ADS)
Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus
2010-12-01
We present a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x∈(-1,+1) (on the cut) and |x|>1 and integer degree (l) and order (m). The code applies either forward or backward recursion in (l) and (m) in the stable direction, starting with analytically known values for forward recursion and considering both a Wronskian-based and a modified Miller's method for backward recursion. While some Fortran 77 codes existed for computing the functions off the cut, no Fortran 90 code was available for accurately computing the functions for all real values of x different from x=±1, where the irregular functions are not defined. Program summary Program title: Associated Legendre Functions Catalogue identifier: AEHE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6722 No. of bytes in distributed program, including test data, etc.: 310 210 Distribution format: tar.gz Programming language: Fortran 90 Computer: Linux systems Operating system: Linux RAM: bytes Classification: 4.7 Nature of problem: Compute the regular and irregular associated Legendre functions for integer values of the degree and order and for all real arguments. The computation of the interaction of two electrons, 1/|r-r'|, in prolate spheroidal coordinates is used as one example where these functions are required for all values of the argument, and we are able to easily compare the series expansion in associated Legendre functions and the exact value. Solution method: The code evaluates the regular and irregular associated Legendre functions using forward recursion when |x|<1, starting the recursion with the analytically known values of the first two members of the sequence. For values of
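A pure-Python sketch of the forward recursion on the cut (|x| < 1), using the standard starting values and three-term recurrence with the Condon-Shortley phase; the published Fortran 90 code's conventions and stability safeguards go well beyond this minimal version.

```python
import math

def assoc_legendre(l, m, x):
    """Regular associated Legendre P_l^m(x) for |x| < 1 by forward
    recursion in l (Condon-Shortley phase included), starting from the
    analytically known first two members of the sequence."""
    # P_m^m(x) = (-1)^m (2m - 1)!! (1 - x^2)^(m/2)
    pmm = 1.0
    s = math.sqrt(1.0 - x * x)
    for k in range(1, m + 1):
        pmm *= -(2 * k - 1) * s
    if l == m:
        return pmm
    # P_{m+1}^m(x) = x (2m + 1) P_m^m(x)
    prev, curr = pmm, x * (2 * m + 1) * pmm
    if l == m + 1:
        return curr
    # (l - m) P_l^m = x (2l - 1) P_{l-1}^m - (l + m - 1) P_{l-2}^m
    for ll in range(m + 2, l + 1):
        prev, curr = curr, (x * (2 * ll - 1) * curr
                            - (ll + m - 1) * prev) / (ll - m)
    return curr
```

For example, P_2^0(x) = (3x² - 1)/2, so assoc_legendre(2, 0, 0.5) returns -0.125.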
On computation and use of Fourier coefficients for associated Legendre functions
NASA Astrophysics Data System (ADS)
Gruber, Christian; Abrykosov, Oleh
2016-06-01
The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside a numerical range of the used data type during calculations as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide an essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond 5 × 5 arcmin resolution.
Computer generation of symbolic network functions - A new theory and implementation.
NASA Technical Reports Server (NTRS)
Alderson, G. E.; Lin, P.-M.
1972-01-01
A new method is presented for obtaining network functions in which some, none, or all of the network elements are represented by symbolic parameters (i.e., symbolic network functions). Unlike the topological tree enumeration or signal flow graph methods generally used to derive symbolic network functions, the proposed procedure employs fast, efficient, numerical-type algorithms to determine the contribution of those network branches that are not represented by symbolic parameters. A computer program called NAPPE (for Network Analysis Program using Parameter Extractions) and incorporating all of the concepts discussed has been written. Several examples illustrating the usefulness and efficiency of NAPPE are presented.
Efficient algorithm for computing exact partition functions of lattice polymer models
NASA Astrophysics Data System (ADS)
Hsieh, Yu-Hsin; Chen, Chi-Ning; Hu, Chin-Kun
2016-12-01
Polymers are important macromolecules in many physical, chemical, biological and industrial problems. Studies on simple lattice polymer models are very helpful for understanding the behavior of polymers. We develop an efficient algorithm for computing exact partition functions of lattice polymer models, and we use this algorithm and personal computers to obtain exact partition functions of the interacting self-avoiding walks with N monomers on the simple cubic lattice up to N = 28 and on the square lattice up to N = 40. Our algorithm can be extended to study other lattice polymer models, such as the HP model for protein folding and the charged HP model for protein aggregation. It also provides references for checking the accuracy of numerical partition functions obtained by simulations.
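For scale, a brute-force depth-first enumeration of self-avoiding walks on the square lattice (in the non-interacting case, the partition function at zero coupling is simply the walk count c_N) shows why naive counting is limited to small N and efficient algorithms like the one described above are needed. This toy enumerator is ours, not the paper's algorithm:

```python
def count_saws(n):
    """Count n-step self-avoiding walks on the square lattice starting
    at the origin, by exhaustive depth-first search. The cost grows
    roughly as mu^n with mu ~ 2.64, so only small n are feasible."""
    visited = {(0, 0)}

    def dfs(x, y, steps):
        if steps == 0:
            return 1
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (nx, ny) not in visited:   # self-avoidance constraint
                visited.add((nx, ny))
                total += dfs(nx, ny, steps - 1)
                visited.remove((nx, ny))
        return total

    return dfs(0, 0, n)
```

The first few counts are c_1 = 4, c_2 = 12, c_3 = 36, c_4 = 100; reaching N = 40 on the square lattice, as the paper does, is far beyond direct enumeration.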
Computing Green's function of elasticity in a half-plane with impedance boundary condition
NASA Astrophysics Data System (ADS)
Durán, Mario; Godoy, Eduardo; Nédélec, Jean-Claude
2006-12-01
This Note presents an effective and accurate method for the numerical calculation of the Green's function G associated with the time-harmonic elasticity system in a half-plane, where an impedance boundary condition is considered. The need to compute this function arises when studying wave propagation in underground mining and seismological engineering. To obtain this Green's function theoretically, we have drawn our inspiration from the paper by Durán et al. (2005), where the Green's function for the Helmholtz equation was computed. The method consists in applying a partial Fourier transform, which allows an explicit calculation of the so-called spectral Green's function. In order to compute its inverse Fourier transform, we separate Ĝ into a sum of two terms. The first is associated with the whole plane, whereas the second takes into account the half-plane and the boundary conditions. The first term corresponds to the Green's function of the well-known time-harmonic elasticity system in the whole plane (cf. J. Dompierre, Thesis). The second term is separated into a sum of three terms, two of which contain singularities in the spectral variable (pseudo-poles and poles), while the other is regular and decreasing at infinity. The inverse Fourier transforms of the singular terms are computed analytically, whereas the regular one is obtained numerically via an FFT algorithm. We present a numerical result. Moreover, we show that, under some conditions, a fourth additional slowness appears, which could produce a new surface wave. To cite this article: M. Durán et al., C. R. Mecanique 334 (2006).
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, still-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies with worst case errors being many orders of magnitude times the correct values.
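The linear-per-frequency-point cost can be illustrated for the special case of a diagonal (modal) state matrix: evaluating H(jω) = C(jωI - A)⁻¹B then reduces to a length-n sum instead of a dense linear solve at every frequency. The pole and gain values below are hypothetical, and this single-input single-output sketch omits the structured-sparsity machinery of the actual scheme:

```python
import math

def modal_freq_response(poles, b, c, omegas):
    """Frequency response H(jw) = C (jwI - A)^{-1} B for a SISO system
    whose A-matrix is diagonal with entries `poles` (normal-mode
    coordinates): each frequency point costs O(n), versus the O(n^3)
    of a dense solve. b and c are the input/output gain vectors."""
    out = []
    for w in omegas:
        s = 1j * w
        out.append(sum(ck * bk / (s - pk)
                       for pk, bk, ck in zip(poles, b, c)))
    return out
```

For a single stable mode with pole -1 and unit gains, H(j0) = 1 and |H(j1)| = 1/sqrt(2), the familiar first-order low-pass response.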
Wright-Walters, Maxine; Volz, Conrad; Talbott, Evelyn; Davis, Devra
2011-01-15
An aquatic hazard assessment establishes a derived predicted no-effect concentration (PNEC) below which it is assumed that aquatic organisms will not suffer adverse effects from exposure to a chemical. An aquatic hazard assessment of the endocrine disruptor Bisphenol A [BPA; 2,2-bis(4-hydroxyphenyl)propane] was conducted using a weight-of-evidence approach, based on the ecotoxicological endpoints of survival, growth and development, and reproduction. New evidence has emerged suggesting that the aquatic system may not be sufficiently protected from adverse effects of BPA exposure at the current PNEC value of 100 μg/L. Against this background: 1) an aquatic hazard assessment for BPA was conducted using a weight-of-evidence approach; 2) a PNEC value was derived using a non-parametric hazardous concentration for 5% of species (HC(5)) approach; and 3) the derived BPA hazard assessment values were compared to aquatic environmental concentrations of BPA to determine whether aquatic species are sufficiently protected from BPA exposure. A total of 61 studies yielded 94 no-observed-effect concentrations (NOECs) and a toxicity dataset which suggests that the aquatic effects on mortality, growth and development, and reproduction are most likely to occur between the concentrations of 0.0483 μg/L and 2280 μg/L. This finding is within the range for adverse aquatic estrogenic effects reported in the literature. A PNEC of 0.06 μg/L was calculated. The 95% confidence interval was found to be (0.02, 3.40) μg/L. Thus, using the weight-of-evidence approach based on repeated measurements of these endpoints, the results indicate that currently observed BPA concentrations in surface waters exceed this newly derived PNEC value of 0.06 μg/L. This indicates that some aquatic receptors may be at risk for adverse effects on survival, growth and development, and reproduction from BPA exposure at environmentally relevant concentrations.
Redox Biology: Computational Approaches to the Investigation of Functional Cysteine Residues
Marino, Stefano M.
2011-01-01
Abstract Cysteine (Cys) residues serve many functions, such as catalysis, stabilization of protein structure through disulfides, metal binding, and regulation of protein function. Cys residues are also subject to numerous post-translational modifications. In recent years, various computational tools aiming at classifying and predicting different functional categories of Cys have been developed, particularly for structural and catalytic Cys. On the other hand, given the complexity of the subject, bioinformatics approaches have been less successful for the investigation of regulatory Cys sites. In this review, we introduce different functional categories of Cys residues. For each category, an overview of state-of-the-art bioinformatics methods and tools is provided, along with examples of successful applications and potential limitations associated with each approach. Finally, we discuss Cys-based redox switches, which modify the view of distinct functional categories of Cys in proteins. Antioxid. Redox Signal. 15, 135–146. PMID:20812876
Mandonnet, Emmanuel; Duffau, Hugues
2014-01-01
Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported the absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding the anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restoring brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiating brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved.
NASA Astrophysics Data System (ADS)
Venturi, Daniele
2016-11-01
The fundamental importance of functional differential equations has been recognized in many areas of mathematical physics, such as fluid dynamics, quantum field theory and statistical physics. For example, in the context of fluid dynamics, the Hopf characteristic functional equation was deemed by Monin and Yaglom to be "the most compact formulation of the turbulence problem", which is the problem of determining the statistical properties of the velocity and pressure fields of the Navier-Stokes equations given statistical information on the initial state. However, no effective numerical method has yet been developed to compute the solution of functional differential equations. In this talk I will provide a new perspective on this general problem, and discuss recent progress in approximation theory for nonlinear functionals and functional equations. The proposed methods will be demonstrated through various examples.
Storing files in a parallel computing system based on user-specified parser function
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron
2014-10-21
Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
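A toy sketch of the scheme: an application-supplied parser rejects files that fail its semantic requirement and returns extracted metadata, which is stored alongside each accepted file. The JSON parser and the round-robin placement below are illustrative assumptions, not the patented implementation.

```python
import json

def store_files(files, parser, nodes):
    """Store only files the parser accepts, distributing them across
    storage nodes and keeping the parser's extracted metadata."""
    stored = {node: [] for node in nodes}
    for k, (name, data) in enumerate(files):
        meta = parser(name, data)
        if meta is None:               # fails the parser's semantic requirement
            continue
        stored[nodes[k % len(nodes)]].append({"name": name, "meta": meta})
    return stored

# hypothetical parser: accept well-formed JSON files, record top-level keys
def json_parser(name, data):
    if not name.endswith(".json"):
        return None
    try:
        return {"keys": sorted(json.loads(data))}
    except json.JSONDecodeError:
        return None

files = [("a.json", '{"x": 1}'), ("b.txt", "hello"),
         ("c.json", "{bad"), ("d.json", '{"y": 2, "z": 3}')]
stored = store_files(files, json_parser, ["n0", "n1"])
```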
Gil L, Alejandro; Valiente, Pedro A; Pascutti, Pedro G; Pons, Tirso
2011-01-01
The development of efficient and selective antimalarials remains a challenge for the pharmaceutical industry. The aspartic proteases plasmepsins, whose inhibition leads to parasite death, are classified as targets for the design of potent drugs. Combinatorial synthesis is currently being used to generate inhibitor libraries for these enzymes, and together with computational methodologies it has been demonstrated capable of selecting lead compounds. The high structural flexibility of plasmepsins, revealed by their X-ray structures and molecular dynamics simulations, makes the prediction of putative binding modes, and therefore the use of common computational tools such as docking and free-energy calculations, even more complicated. In this review, we survey the computational strategies utilized so far in structure-function relationship studies of the plasmepsin family, with special focus on recent advances in the improvement of the linear interaction estimation (LIE) method, which is one of the most successful methodologies for evaluating plasmepsin-inhibitor binding affinity.
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
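The normal-moveout step applied by the programs follows the standard hyperbolic traveltime relation; a minimal sketch (the Tektronix BASIC originals are not reproduced here, and the velocity and offset values are made up):

```python
import math

def nmo_time(t0, offset, v_rms):
    """Two-way traveltime along the hyperbolic moveout curve:
    t(x) = sqrt(t0**2 + (x / v_rms)**2), with t0 the zero-offset time."""
    return math.sqrt(t0 ** 2 + (offset / v_rms) ** 2)

# zero-offset time 1.0 s, 1500 m offset, rms velocity 3000 m/s (illustrative)
t = nmo_time(1.0, 1500.0, 3000.0)
```

The NMO correction shifts each trace sample from t back to t0 before stacking; applying a mute simply zeroes samples whose stretch exceeds a chosen threshold.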
Response functions for computing absorbed dose to skeletal tissues from photon irradiation.
Eckerman, K F; Bolch, W E; Zankl, M; Petoussi-Henss, N
2007-01-01
The calculation of absorbed dose in skeletal tissues at radiogenic risk has been a difficult problem because the relevant structures cannot be represented in conventional geometric terms nor can they be visualised in the tomographic image data used to define the computational models of the human body. The active marrow, the tissue of concern in leukaemia induction, is present within the spongiosa regions of trabecular bone, whereas the osteoprogenitor cells at risk for bone cancer induction are considered to be within the soft tissues adjacent to the mineral surfaces. The International Commission on Radiological Protection (ICRP) recommends averaging the absorbed energy over the active marrow within the spongiosa and over the soft tissues within 10 microm of the mineral surface for leukaemia and bone cancer induction, respectively. In its forthcoming recommendation, it is expected that the latter guidance will be changed to include soft tissues within 50 microm of the mineral surfaces. To address the computational problems, the skeleton of the proposed ICRP reference computational phantom has been subdivided to identify those voxels associated with cortical shell, spongiosa and the medullary cavity of the long bones. It is further proposed that the Monte Carlo calculations with these phantoms compute the energy deposition in the skeletal target tissues as the product of the particle fluence in the skeletal subdivisions and applicable fluence-to-dose-response functions. This paper outlines the development of such response functions for photons.
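The proposed product of particle fluence and a fluence-to-dose response function can be sketched as a sum over energy bins. All numerical values here are hypothetical placeholders, not ICRP data.

```python
def absorbed_dose(fluence, response):
    """Energy deposition in a skeletal target tissue, computed as the
    product of particle fluence and a fluence-to-dose response function,
    summed over energy bins (values are illustrative only)."""
    if len(fluence) != len(response):
        raise ValueError("fluence and response must share the energy grid")
    return sum(phi * r for phi, r in zip(fluence, response))

# photon fluence per energy bin (cm^-2) and response (Gy cm^2), hypothetical
fluence = [1.0e6, 5.0e5, 2.0e5]
response = [1.2e-12, 3.5e-12, 8.0e-12]
dose = absorbed_dose(fluence, response)
```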
Centroids computation and point spread function analysis for reverse Hartmann test
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Linqqin; Zhao, Yuejin
2017-03-01
This paper studies the point spread function (PSF) and centroids computation methods to improve the performance of the reverse Hartmann test (RHT) in poor conditions such as defocus and background noise. In the RHT, we evaluate the PSF in terms of the Lommel function and classify it as a circle of confusion (CoC) rather than an Airy disk. Approximating a CoC spot with a Gaussian or super-Gaussian profile to identify its centroid forms the basis of the centroids algorithm. It is also effective for fringe patterns, where a segmental fringe is treated as a 'spot' with an infinite diameter in one direction. RHT experiments are conducted to test the fitting effects and centroiding performance of the methods with Gaussian and super-Gaussian approximations. The fitting results show that the super-Gaussian obtains more reasonable fitting effects. That the super-Gaussian orders are only slightly larger than 2 means that the CoC has a profile similar to an Airy disk in certain conditions. The results of centroids computation demonstrate that as the signal-to-noise ratio (SNR) falls, the centroid computed by the super-Gaussian method shifts less and the shift grows at a slower pace. This implies that the super-Gaussian has a better anti-noise capability in centroid computation.
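A minimal sketch of the centroiding idea: build a super-Gaussian spot profile (order p; p = 2 recovers an ordinary Gaussian) and take its intensity-weighted centroid. The profile parameters are arbitrary, and the paper's actual algorithm fits the profile to measured spots rather than generating one.

```python
import math

def super_gaussian(x, x0, w, p, a=1.0):
    """Super-Gaussian profile of order p centered at x0; p = 2 is Gaussian."""
    return a * math.exp(-abs((x - x0) / w) ** p)

def centroid(xs, intensities):
    """Intensity-weighted centroid of a sampled spot profile."""
    total = sum(intensities)
    return sum(x * i for x, i in zip(xs, intensities)) / total

xs = [i * 0.01 for i in range(1001)]               # samples on 0 .. 10
spot = [super_gaussian(x, 3.2, 0.5, 4.0) for x in xs]
c = centroid(xs, spot)
```

For this noise-free, symmetric spot the centroid recovers the true center x0 = 3.2; the paper's comparison concerns how robustly that recovery degrades as noise is added.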
NASA Astrophysics Data System (ADS)
Roccatano, Danilo
2015-07-01
The monooxygenase P450 BM-3 is a NADPH-dependent fatty acid hydroxylase enzyme isolated from soil bacterium Bacillus megaterium. As a pivotal member of cytochrome P450 superfamily, it has been intensely studied for the comprehension of structure-dynamics-function relationships in this class of enzymes. In addition, due to its peculiar properties, it is also a promising enzyme for biochemical and biomedical applications. However, despite the efforts, the full understanding of the enzyme structure and dynamics is not yet achieved. Computational studies, particularly molecular dynamics (MD) simulations, have importantly contributed to this endeavor by providing new insights at an atomic level regarding the correlations between structure, dynamics, and function of the protein. This topical review summarizes computational studies based on MD simulations of the cytochrome P450 BM-3 and gives an outlook on future directions.
Two algorithms to compute projected correlation functions in molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Carof, Antoine; Vuilleumier, Rodolphe; Rotenberg, Benjamin
2014-03-01
An explicit derivation of the Mori-Zwanzig orthogonal dynamics of observables is presented and leads to two practical algorithms to compute exactly projected observables (e.g., random noise) and projected correlation function (e.g., memory kernel) from a molecular dynamics trajectory. The algorithms are then applied to study the diffusive dynamics of a tagged particle in a Lennard-Jones fluid, the properties of the associated random noise, and a decomposition of the corresponding memory kernel.
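For contrast with the projected quantities the two algorithms deliver, the standard (unprojected) time correlation function averaged over time origins looks like this; the Mori-Zwanzig orthogonal-dynamics propagation itself is beyond this sketch.

```python
def correlation(series, max_lag):
    """C(t) = <A(0) A(t)>, averaged over all time origins of a trajectory."""
    n = len(series)
    return [sum(series[i] * series[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]
```

The memory kernel of a tagged particle is obtained from such correlations only after projecting out the slow variable, which is exactly what the paper's algorithms make practical.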
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
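Alongside the Chebyshev, Pade and Stirling approaches surveyed, a compact modern option is the Lanczos approximation, which handles complex arguments directly. This is not Kuki's Algorithm 421, just a well-known alternative using the standard g = 7 coefficient set.

```python
import cmath

_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex z via the Lanczos approximation (g = 7)."""
    z = complex(z)
    if z.real < 0.5:
        # reflection formula extends the approximation to the left half-plane
        return cmath.pi / (cmath.sin(cmath.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0] + sum(c / (z + i) for i, c in enumerate(_C[1:], start=1))
    t = z + _G + 0.5
    return cmath.sqrt(2 * cmath.pi) * t ** (z + 0.5) * cmath.exp(-t) * x
```

The approximation is accurate to roughly double precision over most of the complex plane, e.g. cgamma(5) recovers 4! = 24.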
NASA Astrophysics Data System (ADS)
Aspon, Siti Zulaiha; Murid, Ali Hassan Mohamed; Rahmat, Hamisan
2014-07-01
This research concerns computing Green's functions on unbounded doubly connected regions using the method of boundary integral equations. The method depends on solving an exterior Dirichlet problem. The Dirichlet problem is then solved using a uniquely solvable Fredholm integral equation on the boundary of the region. The kernel of this integral equation is the generalized Neumann kernel. The integral equation is solved using the Nyström method with the trapezoidal rule to discretize it to a linear system. The linear system is then solved by the Gaussian elimination method. Mathematica plots of Green's functions for several test regions are also presented.
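The Nyström discretization described above can be sketched for a generic second-kind Fredholm equation u(x) - ∫ K(x,t)u(t)dt = f(x): trapezoidal weights turn it into a linear system, solved here by Gaussian elimination with partial pivoting. The kernel below is a toy example, not the generalized Neumann kernel.

```python
def nystrom_solve(kernel, f, a, b, n):
    """Discretize u(x) - int_a^b K(x,t) u(t) dt = f(x) with the trapezoidal
    rule at n+1 equispaced nodes and solve by Gaussian elimination."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    w = [h / 2 if i in (0, n) else h for i in range(n + 1)]   # trapezoid weights
    m = n + 1
    A = [[(1.0 if i == j else 0.0) - w[j] * kernel(xs[i], xs[j])
          for j in range(m)] for i in range(m)]
    rhs = [f(x) for x in xs]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            factor = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= factor * A[col][c]
            rhs[r] -= factor * rhs[col]
    u = [0.0] * m
    for r in range(m - 1, -1, -1):
        u[r] = (rhs[r] - sum(A[r][c] * u[c] for c in range(r + 1, m))) / A[r][r]
    return xs, u

# toy problem: K(x,t) = x*t on [0,1] with f chosen so the exact solution is u(x) = x
xs, u = nystrom_solve(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, 0.0, 1.0, 200)
```

The trapezoidal rule gives O(h^2) accuracy here; for smooth periodic kernels on closed boundaries, as in the paper, it converges much faster.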
Method, systems, and computer program products for implementing function-parallel network firewall
Fulp, Errin W [Winston-Salem, NC; Farley, Ryan J [Winston-Salem, NC
2011-10-11
Methods, systems, and computer program products for providing function-parallel firewalls are disclosed. According to one aspect, a function-parallel firewall includes a first firewall node for filtering received packets using a first portion of a rule set including a plurality of rules. The first portion includes less than all of the rules in the rule set. At least one second firewall node filters packets using a second portion of the rule set. The second portion includes at least one rule in the rule set that is not present in the first portion. The first and second portions together include all of the rules in the rule set.
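A toy sketch of function-parallel filtering: the ordered rule set is split so each node holds fewer than all rules, every node reports its first match, and the verdicts are combined so the globally first-matching rule wins, as it would in a single sequential firewall. The Rule fields and round-robin partitioning are illustrative assumptions, not the patented design.

```python
from collections import namedtuple

Rule = namedtuple("Rule", "priority predicate action")

def partition_rules(rules, n):
    """Split an ordered rule set across n nodes; each portion keeps the
    original relative order and holds fewer than all rules."""
    return [rules[i::n] for i in range(n)]

def filter_packet(packet, portions, default="DROP"):
    """Each node (portion) finds its first matching rule; the verdict is
    the match with the lowest priority number across all nodes."""
    best = None
    for portion in portions:            # conceptually runs in parallel
        for rule in portion:
            if rule.predicate(packet):
                if best is None or rule.priority < best.priority:
                    best = rule
                break                   # portion is priority-ordered
    return best.action if best else default

rules = [Rule(0, lambda p: p.get("dport") == 22, "ACCEPT"),
         Rule(1, lambda p: p.get("proto") == "icmp", "ACCEPT"),
         Rule(2, lambda p: True, "DROP")]
portions = partition_rules(rules, 2)
```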
Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min
2016-12-20
Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
From machine and tape to structure and function: formulation of a reflexively computing system.
Salzberg, Chris
2006-01-01
The relationship between structure and function is explored via a system of labeled directed graph structures upon which a single elementary read/write rule is applied locally. Boundaries between static (information-carrying) and active (information-processing) objects, imposed by mandate of the rules or physics in earlier models, emerge instead as a result of a structure-function dynamic that is reflexive: objects may operate directly on their own structure. A representation of an arbitrary Turing machine is reproduced in terms of structural constraints by means of a simple mapping from tape squares and machine states to a uniform medium of nodes and links, establishing computation universality. Exploiting flexibility of the formulation, examples of other unconventional "self-computing" structures are demonstrated. A straightforward representation of a kinematic machine system based on the model devised by Laing is also reproduced in detail. Implications of the findings are discussed in terms of their relation to other formal models of computation and construction. It is argued that reflexivity of the structure-function relationship is a critical informational dynamic in biochemical systems, overlooked in previous models but well captured by the proposed formulation.
Computing wave functions of nonlinear Schroedinger equations: A time-independent approach
Chang, S.-L.; Chien, C.-S.; Jeng, B.-W.
2007-09-10
We present a novel algorithm for computing the ground-state and excited-state solutions of M-coupled nonlinear Schroedinger equations (MCNLS). First we transform the MCNLS to stationary-state equations by separation of variables. The energy level of a quantum particle governed by the Schroedinger eigenvalue problem (SEP) is used as an initial guess for computing its counterpart for a nonlinear Schroedinger equation (NLS). We discretize the system via centered difference approximations. A predictor-corrector continuation method is exploited as an iterative method to trace solution curves and surfaces of the MCNLS, where the chemical potentials are treated as continuation parameters. The wave functions can be easily obtained whenever the solution manifolds are numerically traced. The proposed algorithm has the advantage that it is unnecessary to discretize or integrate the partial derivatives of wave functions. Moreover, the wave functions can be computed for any time scale. Numerical results on the ground-state and excited-state solutions are reported, where physical properties of the system such as isotropic and nonisotropic trapping potentials, mass conservation constraints, and strong and weak repulsive interactions are considered in our numerical experiments.
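The predictor-corrector continuation idea can be sketched for a scalar problem G(u, λ) = 0: step the continuation parameter (predictor), then correct with Newton iterations at the new parameter value. The paper traces multi-dimensional solution manifolds with chemical potentials as λ; this scalar version only illustrates the mechanics.

```python
def trace_branch(G, Gu, u, lam, dlam, steps):
    """Predictor-corrector continuation for a scalar G(u, lam) = 0:
    advance lam (predictor), then Newton-correct u at the new lam."""
    path = [(lam, u)]
    for _ in range(steps):
        lam += dlam                 # predictor: step the continuation parameter
        for _ in range(50):         # corrector: Newton iteration at fixed lam
            du = -G(u, lam) / Gu(u, lam)
            u += du
            if abs(du) < 1e-12:
                break
        path.append((lam, u))
    return path

# toy branch u = sqrt(lam) of G(u, lam) = u**2 - lam, traced from lam = 1
path = trace_branch(lambda u, lam: u * u - lam,
                    lambda u, lam: 2 * u,
                    u=1.0, lam=1.0, dlam=0.1, steps=10)
```

The previous converged solution serves as the predictor's starting point at each step, which is why continuation can follow branches that a cold Newton solve would miss.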
NASA Astrophysics Data System (ADS)
Betancourt-Benítez, Ricardo; Ning, Ruola; Liu, Shaohua
2009-11-01
Several factors during the scanning process, image reconstruction, and the geometry of an imaging system influence the spatial resolution of a computed tomography imaging system. In this work, the spatial resolution of a state-of-the-art flat-panel-detector-based cone beam computed tomography breast imaging system is evaluated. First, scattering, exposure level, voltage, voxel size, pixel size, back-projection filter, reconstruction algorithm, and number of projections are varied to evaluate their effect on spatial resolution. Second, its uniformity throughout the whole field of view is evaluated as a function of radius along the x-y plane and as a function of z at the center of rotation. The results of the study suggest that the modulation transfer function is mainly influenced by the pixel size, back-projection filter, and number of projections used. The evaluation of spatial resolution throughout the field of view also suggests that this imaging system has 3-D quasi-isotropic spatial resolution in a cylindrical region of radius equal to 40 mm centered at the axis of rotation. Overall, this study provides a useful tool to determine the optimal parameters for the best possible use of this cone beam computed tomography breast imaging system.
Antibodies: Computer-Aided Prediction of Structure and Design of Function.
Sevy, Alexander M; Meiler, Jens
2014-12-01
With the advent of high-throughput sequencing, and the increased availability of experimental structures of antibodies and antibody-antigen complexes, comes the improvement of computational approaches to predict the structure and design the function of antibodies and antibody-antigen complexes. While antibodies pose formidable challenges for protein structure prediction and design due to their large size and highly flexible loops in the complementarity-determining regions, they also offer exciting opportunities: the central importance of antibodies for human health results in a wealth of structural and sequence information that-as a knowledge base-can drive the modeling algorithms by limiting the conformational and sequence search space to likely regions of success. Further, efficient experimental platforms exist to test predicted antibody structure or designed antibody function, thereby leading to an iterative feedback loop between computation and experiment. We briefly review the history of computer-aided prediction of structure and design of function in the antibody field before we focus on recent methodological developments and the most exciting application examples.
Brennan, Douglas; Schubert, Leah; Diot, Quentin; Castillo, Richard; Castillo, Edward; Guerrero, Thomas; Martel, Mary K.; Linderman, Derek; Gaspar, Laurie E.; Miften, Moyed; Kavanagh, Brian D.; Vinogradskiy, Yevgeniy
2015-06-01
Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density-change-based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean) and ventilation V20 (volume of lung ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT-ventilation data could predict normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and the coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating that 4DCT
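The two summary metrics reduce a ventilation map to single numbers. A sketch, where "V20" is read as the fraction of voxels at or below 20% of the maximum ventilation value — one plausible interpretation; the paper's precise definition may differ.

```python
import statistics

def ventilation_metrics(vent):
    """Summary metrics for a flattened ventilation map: the coefficient of
    variation (SD/mean) and V20, here taken as the fraction of voxels at
    or below 20% of the maximum ventilation (an assumed reading)."""
    mean = statistics.mean(vent)
    cov = statistics.stdev(vent) / mean
    cutoff = 0.20 * max(vent)
    v20 = sum(v <= cutoff for v in vent) / len(vent)
    return cov, v20

# toy five-voxel ventilation map (arbitrary units)
vent = [0.1, 0.2, 1.0, 1.0, 0.5]
cov, v20 = ventilation_metrics(vent)
```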
Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham
2016-01-01
A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an 'early' FOS-associated response to a 'late' predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons
Computer-Based Cognitive Training for Executive Functions after Stroke: A Systematic Review
van de Ven, Renate M.; Murre, Jaap M. J.; Veltman, Dick J.; Schmand, Ben A.
2016-01-01
Background: Stroke commonly results in cognitive impairments in working memory, attention, and executive function, which may be restored with appropriate training programs. Our aim was to systematically review the evidence for computer-based cognitive training of executive dysfunctions. Methods: Studies were included if they concerned adults who had suffered stroke or other types of acquired brain injury, if the intervention was computer training of executive functions, and if the outcome was related to executive functioning. We searched in MEDLINE, PsycINFO, Web of Science, and The Cochrane Library. Study quality was evaluated based on the CONSORT Statement. Treatment effect was evaluated based on differences compared to pre-treatment and/or to a control group. Results: Twenty studies were included. Two were randomized controlled trials that used an active control group. The other studies included multiple baselines, a passive control group, or were uncontrolled. Improvements were observed in tasks similar to the training (near transfer) and in tasks dissimilar to the training (far transfer). However, these effects were not larger in trained than in active control groups. Two studies evaluated neural effects and found changes in both functional and structural connectivity. Most studies suffered from methodological limitations (e.g., lack of an active control group and no adjustment for multiple testing) hampering differentiation of training effects from spontaneous recovery, retest effects, and placebo effects. Conclusions: The positive findings of most studies, including neural changes, warrant continuation of research in this field, but only if its methodological limitations are addressed. PMID:27148007
Using computational fluid dynamics to test functional and ecological hypotheses in fossil taxa
NASA Astrophysics Data System (ADS)
Rahman, Imran
2016-04-01
Reconstructing how ancient organisms moved and fed is a major focus of study in palaeontology. Traditionally, this has been hampered by a lack of objective data on the functional morphology of extinct species, especially those without a clear modern analogue. However, cutting-edge techniques for characterizing specimens digitally and in three dimensions, coupled with state-of-the-art computer models, now provide a robust framework for testing functional and ecological hypotheses even in problematic fossil taxa. One such approach is computational fluid dynamics (CFD), a method for simulating fluid flows around objects that has primarily been applied to complex engineering-design problems. Here, I will present three case studies of CFD applied to fossil taxa, spanning a range of specimen sizes, taxonomic groups and geological ages. First, I will show how CFD enabled a rigorous test of hypothesized feeding modes in an enigmatic Ediacaran organism with three-fold symmetry, revealing previously unappreciated complexity of pre-Cambrian ecosystems. Second, I will show how CFD was used to evaluate hydrodynamic performance and feeding in Cambrian stem-group echinoderms, shedding light on the probable feeding strategy of the latest common ancestor of all deuterostomes. Third, I will show how CFD allowed us to explore the link between form and function in Mesozoic ichthyosaurs. These case studies serve to demonstrate the enormous potential of CFD for addressing long-standing hypotheses for a variety of fossil taxa, opening up an exciting new avenue in palaeontological studies of functional morphology.
Quantitative computed tomography assessment of lung structure and function in pulmonary emphysema.
Madani, A; Keyzer, C; Gevenois, P A
2001-10-01
Accurate diagnosis and quantification of pulmonary emphysema during life is important to understand the natural history of the disease, to assess the extent of the disease, and to evaluate and follow up therapeutic interventions. Since pulmonary emphysema is defined through pathological criteria, new methods of diagnosis and quantification should be validated by comparison against histological references. Recent studies have addressed the capability of computed tomography (CT) to quantify pulmonary emphysema accurately. The studies reviewed in this article have been based on CT scans obtained after deep inspiration or expiration, on subjective visual grading, and on objective measurements of attenuation values. Specially dedicated software was used for this purpose, which provided numerical data based on both two- and three-dimensional approaches and compared CT data with pulmonary function tests. More recently, fractal and textural analyses were applied to computed tomography scans to assess the presence, extent, and types of emphysema. Quantitative computed tomography has already been used in patient selection for surgical treatment of pulmonary emphysema and in pharmacotherapeutic trials. However, despite numerous and extensive studies, this technique has not yet been standardized, and important questions about how best to use computed tomography for the quantification of pulmonary emphysema remain unsolved.
Stable computations with flat radial basis functions using vector-valued rational approximations
NASA Astrophysics Data System (ADS)
Wright, Grady B.; Fornberg, Bengt
2017-02-01
One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better conditioned base in the same RBF-space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
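The ill-conditioning of RBF-Direct in the flat regime is easy to reproduce: as the shape parameter ε → 0 the Gaussian kernels flatten, rows of the interpolation matrix become nearly identical, and the matrix approaches singularity. A 3 × 3 toy case suffices; the RBF-RA algorithm itself is not reproduced here.

```python
import math

def rbf_matrix(xs, eps):
    """Gaussian RBF interpolation matrix A_ij = exp(-(eps*(x_i - x_j))**2)."""
    return [[math.exp(-(eps * (a - b)) ** 2) for b in xs] for a in xs]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

xs = [0.0, 0.5, 1.0]
d_sharp = det3(rbf_matrix(xs, 2.0))    # well-conditioned: O(1) determinant
d_flat = det3(rbf_matrix(xs, 0.01))    # flat regime: determinant collapses
```

The collapse of the determinant as ε shrinks is exactly why stable algorithms such as RBF-QR, RBF-GA and the RBF-RA method of the paper bypass the direct formulation.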
Introducing ONETEP: linear-scaling density functional simulations on parallel computers.
Skylaris, Chris-Kriton; Haynes, Peter D; Mostofi, Arash A; Payne, Mike C
2005-02-22
We present ONETEP (order-N electronic total energy package), a density functional program for parallel computers whose computational cost scales linearly with the number of atoms and the number of processors. ONETEP is based on our reformulation of the plane wave pseudopotential method which exploits the electronic localization that is inherent in systems with a nonvanishing band gap. We summarize the theoretical developments that enable the direct optimization of strictly localized quantities expressed in terms of a delocalized plane wave basis. These same localized quantities lead us to a physical way of dividing the computational effort among many processors to allow calculations to be performed efficiently on parallel supercomputers. We show with examples that ONETEP achieves excellent speedups with increasing numbers of processors and confirm that the time taken by ONETEP as a function of increasing number of atoms for a given number of processors is indeed linear. What distinguishes our approach is that the localization is achieved in a controlled and mathematically consistent manner so that ONETEP obtains the same accuracy as conventional cubic-scaling plane wave approaches and offers fast and stable convergence. We expect that calculations with ONETEP have the potential to provide quantitative theoretical predictions for problems involving thousands of atoms such as those often encountered in nanoscience and biophysics.
CAP: A Computer Code for Generating Tabular Thermodynamic Functions from NASA Lewis Coefficients
NASA Technical Reports Server (NTRS)
Zehe, Michael J.; Gordon, Sanford; McBride, Bonnie J.
2001-01-01
For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
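As context for what CAP tabulates: the classic 7-coefficient NASA (Lewis) polynomial form expresses the dimensionless thermodynamic functions directly in terms of the least-squares coefficients. A sketch of that evaluation (this assumes the older 7-term layout for illustration; the current NASA Glenn file uses a 9-coefficient form with two additional inverse-temperature terms):

```python
import math

def nasa7_props(a, T):
    """Evaluate dimensionless Cp/R, H/(RT), and S/R at temperature T (K)
    from 7-term NASA polynomial coefficients a = [a1, ..., a7]."""
    cp_R = a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4
    h_RT = a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4 + a[4]*T**4/5 + a[5]/T
    s_R = (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2
           + a[3]*T**3/3 + a[4]*T**4/4 + a[6])
    return cp_R, h_RT, s_R

# Hypothetical coefficients for a monatomic ideal gas (Cp/R = 5/2 exactly)
a = [2.5, 0.0, 0.0, 0.0, 0.0, -745.375, 4.366]
cp_R, h_RT, s_R = nasa7_props(a, 1000.0)
print(cp_R)  # 2.5
```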
How to Compute the Fukui Matrix and Function for Systems with (Quasi-)Degenerate States.
Bultinck, Patrick; Cardenas, Carlos; Fuentealba, Patricio; Johnson, Paul A; Ayers, Paul W
2014-01-14
A system in a spatially (quasi-)degenerate ground state responds in a qualitatively different way to a change in the external potential. Consequently, the usual method for computing the Fukui function, namely, taking the difference between the electron densities of the N- and N ± 1 electron systems, cannot be applied directly. It is shown how the Fukui matrix, and thus also the Fukui function, depends on the nature of the perturbation. One thus needs to use degenerate perturbation theory for the given perturbing potential to generate the density matrix whose change with respect to a change in the number of electrons equals the Fukui matrix. Accounting for the degeneracy in the case of nitrous oxide reveals that an average over the degenerate states differs significantly from using the proper density matrix. We further show the differences in Fukui functions depending on whether a Dirac delta perturbation is used or an interaction with a true point charge (leading to the Fukui potential).
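The "usual method" the abstract refers to is a finite difference of ground-state densities, which is well defined only for non-degenerate states; that is precisely the case the paper generalizes. A sketch on a real-space grid (densities assumed given):

```python
import numpy as np

def fukui_functions(rho_Nm1, rho_N, rho_Np1):
    """Finite-difference Fukui functions from densities of the (N-1)-, N-,
    and (N+1)-electron systems on a common grid (non-degenerate case only)."""
    f_plus = rho_Np1 - rho_N    # response relevant to nucleophilic attack
    f_minus = rho_N - rho_Nm1   # response relevant to electrophilic attack
    f_zero = 0.5 * (f_plus + f_minus)
    return f_plus, f_minus, f_zero
```

Each Fukui function integrates to one electron. For (quasi-)degenerate ground states, the abstract's point is that no single density difference is adequate: the perturbation-dependent Fukui matrix must be used instead.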
Guido, Ciro A.; Cortona, Pietro; Adamo, Carlo
2014-03-14
We extend our previous definition of the metric Δr for electronic excitations in the framework of time-dependent density functional theory [C. A. Guido, P. Cortona, B. Mennucci, and C. Adamo, J. Chem. Theory Comput. 9, 3118 (2013)] by including a measure of the difference of electronic position variances in passing from occupied to virtual orbitals. This new definition, called Γ, permits applications in those situations where the Δr-index is not helpful: transitions in centrosymmetric systems and Rydberg excitations. The Γ-metric is then extended by using the Natural Transition Orbitals, thus providing an intuitive picture of how locally the electron density changes during the electronic transitions. Furthermore, the Γ values give insight into functional performance in reproducing different types of transitions, and allow one to define a "confidence radius" for GGA and hybrid functionals.
NASA Astrophysics Data System (ADS)
Gusev, M. I.
2016-10-01
We study the penalty function type methods for computing the reachable sets of nonlinear control systems with state constraints. The state constraints are given by a finite system of smooth inequalities. The proposed methods are based on removing the state constraints by replacing the original system with an auxiliary system without constraints. This auxiliary system is obtained by modifying the set of velocities of the original system around the boundary of constraints. The right-hand side of the system depends on a penalty parameter. We prove that the reachable sets of the auxiliary system approximate in the Hausdorff metric the reachable set of the original system with state constraints as the penalty parameter tends to zero (infinity) and give the estimates of the rate of convergence. The numerical algorithms for computing the reachable sets, based on Pontryagin's maximum principle, are also considered.
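The construction can be caricatured numerically: integrate an auxiliary system whose velocity acquires a restoring term near the boundary of the constraint, scaled by a penalty parameter. A crude Monte-Carlo sketch for a hypothetical double integrator with state constraint x1 ≤ 0.8 (the paper's actual modification of the velocity set and its convergence estimates are more careful than this):

```python
import numpy as np

def reach_penalty(f, g, grad_g, x0, T, eps, n_traj=200, n_steps=100, seed=0):
    """Sample endpoints of the penalized auxiliary system
    x' = f(x, u) - (1/eps) * max(0, g(x)) * grad_g(x)
    under random piecewise-constant controls u in [-1, 1]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    pts = []
    for _ in range(n_traj):
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            u = rng.uniform(-1.0, 1.0)
            x = x + dt * (f(x, u) - max(0.0, g(x)) / eps * grad_g(x))
        pts.append(x)
    return np.array(pts)

# hypothetical example: double integrator, constraint g(x) = x1 - 0.8 <= 0
f = lambda x, u: np.array([x[1], u])
g = lambda x: x[0] - 0.8
grad_g = lambda x: np.array([1.0, 0.0])
end_pts = reach_penalty(f, g, grad_g, [0.0, 0.0], T=2.0, eps=1e-3, n_traj=100)
print(end_pts[:, 0].max())  # endpoints stay at or just below the constraint boundary
```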
NASA Astrophysics Data System (ADS)
Rajavel, A.; Aditya Prasad, A.; Jeyakumar, T.
2017-02-01
The structural features of conformational isomerism in 4-isopropylbenzylidine thiophene-2-carbohydrazide (ITC) polymorphs have been investigated to distinguish strong N–H⋯O and weak C–H⋯S hydrogen bond interactions. The single crystals were grown at constant temperature and were characterized by density functional theory computations using the B3LYP method with the 3-21G basis set. The conformational isomers of ITC were compared and spectroscopically characterized by FT-IR and Raman spectroscopy. The bulk phases were studied by powder X-ray diffraction patterns. The external morphology of ITC was discussed using scanning electron microscopic and transmission electron microscopic studies. Comparisons between the various types of intermolecular interactions in the two polymorphic forms have been quantified via fingerprint and Hirshfeld surface analysis. DFT computations were used to illustrate the molecular electrostatic potential, HOMO-LUMO levels, Mulliken atomic charges, and electron density of states.
ERIC Educational Resources Information Center
Zahner, William; Moschkovich, Judit
2010-01-01
Students often voice computations during group discussions of mathematics problems. Yet, this type of private speech has received little attention from mathematics educators or researchers. In this article, we use excerpts from middle school students' group mathematical discussions to illustrate and describe "computational private…
NASA Astrophysics Data System (ADS)
Lei, Weiwei; Li, Kai
2016-12-01
There are four recursive algorithms used in the computation of the fully normalized associated Legendre functions (FNALFs): the standard forward column algorithm, the standard forward row algorithm, the recursive algorithm between every other degree, and the Belikov algorithm. These algorithms were evaluated in terms of their first and second relative numerical accuracy and their computation speed and efficiency. The results show that when the degree n reaches 3000, both the recursive algorithm between every other degree and the Belikov algorithm are applicable for |cos θ| ∈ [0, 1], with the latter offering better second relative numerical accuracy than the former at a slower computation speed. For |cos θ| ∈ [0, 1], the standard forward column algorithm, the recursive algorithm between every other degree, and the Belikov algorithm are all applicable within degree n of 1900, and the standard forward column algorithm has the highest computation speed. Beyond degree 1900 the standard forward column algorithm's range of applicability decreases as the degree increases; however, it remains applicable within a narrow range when |cos θ| is approximately equal to 1. The standard forward row algorithm has the smallest range of applicability: it is only applicable within degree n of 100 for |cos θ| ∈ [0, 1], and its range of applicability decreases rapidly when the degree exceeds 100. These results are expected to help researchers choose the best algorithms for computing the FNALFs.
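Of the four recursions compared, the standard forward column algorithm is the simplest to write down. A sketch using the usual geodetic 4π-normalized coefficients (an illustration of the recursion, not the implementations evaluated in the study):

```python
import numpy as np

def fnalf(nmax, theta):
    """Fully normalized associated Legendre functions P[n, m] up to degree
    nmax via the standard forward column recursion (4*pi normalization)."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 1] = np.sqrt(3.0) * u
    for n in range(2, nmax + 1):  # sectorial seeds P[n, n]
        P[n, n] = u * np.sqrt((2*n + 1) / (2.0*n)) * P[n-1, n-1]
    for m in range(0, nmax):      # march each column up in degree
        for n in range(m + 1, nmax + 1):
            a = np.sqrt((2*n - 1) * (2*n + 1) / ((n - m) * (n + m)))
            b = 0.0
            if n - m >= 2:
                b = np.sqrt((2*n + 1) * (n + m - 1) * (n - m - 1)
                            / ((n - m) * (n + m) * (2*n - 3.0)))
            P[n, m] = a * t * P[n-1, m] - b * P[n-2, m]
    return P
```

A quick sanity check is the identity Σ_m P̄²_nm = 2n + 1, which holds for every n and θ under this normalization; in double precision the recursion underflows for large n when |cos θ| is near 1, which is exactly the regime the abstract's comparison probes.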
Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.
Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan
2013-01-01
This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.
Boolean Combinations of Implicit Functions for Model Clipping in Computer-Assisted Surgical Planning
Zhan, Qiqin; Chen, Xiaojun
2016-01-01
This paper proposes an interactive method of model clipping for computer-assisted surgical planning. The model is separated by a data filter that is defined by the implicit function of the clipping path. Being interactive to surgeons, the clipping path, which is composed of plane widgets, can be manually repositioned along the desired presurgical path, which means that surgeons can produce any accurate shape of the clipped model. The implicit function is acquired through a recursive algorithm based on Boolean combinations (including Boolean union and Boolean intersection) of a series of plane widgets' implicit functions. The algorithm is highly efficient: its best-case time performance is linear, which applies to most cases in computer-assisted surgical planning. Based on this algorithm, a user-friendly module named SmartModelClip is developed on the Slicer platform with VTK. A number of arbitrary clipping paths have been tested. Experimental results of presurgical planning for three types of Le Fort fractures and for tumor removal demonstrate the high reliability and efficiency of the recursive algorithm and the robustness of the module. PMID:26751685
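Boolean combinations of implicit functions have a standard min/max realization (negative inside the kept region, positive outside). A minimal sketch with hypothetical plane parameters, not the SmartModelClip code:

```python
import numpy as np

def plane(point, normal, offset):
    """Signed implicit function of a plane: negative on the kept side."""
    return np.dot(point, normal) - offset

def union(f, g):          # keep points inside either region
    return lambda p: min(f(p), g(p))

def intersection(f, g):   # keep points inside both regions
    return lambda p: max(f(p), g(p))

# hypothetical clipping path: keep z <= 1 AND (x <= 0 OR y <= 0)
f1 = lambda p: plane(p, np.array([0.0, 0.0, 1.0]), 1.0)
f2 = lambda p: plane(p, np.array([1.0, 0.0, 0.0]), 0.0)
f3 = lambda p: plane(p, np.array([0.0, 1.0, 0.0]), 0.0)
clip = intersection(f1, union(f2, f3))
print(clip(np.array([0.5, -0.5, 0.0])) <= 0)  # True: this point is kept
```

Nesting these combinators recursively, as the abstract describes, yields an implicit function for an arbitrary clipping path built from plane widgets.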
Saier, M H
1994-01-01
Three-dimensional structures have been elucidated for very few integral membrane proteins. Computer methods can be used as guides for estimation of solute transport protein structure, function, biogenesis, and evolution. In this paper the application of currently available computer programs to over a dozen distinct families of transport proteins is reviewed. The reliability of sequence-based topological and localization analyses and the importance of sequence and residue conservation to structure and function are evaluated. Evidence concerning the nature and frequency of occurrence of domain shuffling, splicing, fusion, deletion, and duplication during evolution of specific transport protein families is also evaluated. Channel proteins are proposed to be functionally related to carriers. It is argued that energy coupling to transport was a late occurrence, superimposed on preexisting mechanisms of solute facilitation. It is shown that several transport protein families have evolved independently of each other, employing different routes, at different times in evolutionary history, to give topologically similar transmembrane protein complexes. The possible significance of this apparent topological convergence is discussed. PMID:8177172
Computing single step operators of logic programming in radial basis function neural networks
NASA Astrophysics Data System (ADS)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
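The single step operator T_p is easy to state symbolically before any neural encoding: given an interpretation I (a set of atoms taken to be true), T_p(I) collects the heads of all clauses whose bodies hold in I. A plain sketch for propositional definite programs (the paper's contribution is the RBF-network encoding, not this operator itself):

```python
def tp(program, I):
    """Single step (immediate consequence) operator T_p: program is a list
    of (head, body_atoms) clauses; I is a set of atoms assumed true."""
    return {head for head, body in program if set(body) <= I}

def fixed_point(program, I=frozenset()):
    """Iterate T_p to its fixed point (the least model for definite programs)."""
    I = set(I)
    while True:
        J = tp(program, I)
        if J == I:
            return I
        I = J

program = [("a", []), ("b", ["a"]), ("c", ["b"])]
print(fixed_point(program))  # {'a', 'b', 'c'}
```

Iterating from the empty interpretation reaches the least fixed point, which is the steady state the recurrent network is trained to reproduce.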
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
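Hypothesis I's Degree of Merger admits a toy formalization: treat Merge as pair formation and DoM as the maximum nesting depth of merged subtrees. A simplified reading for illustration (not the authors' formal definition):

```python
def merge(a, b):
    """Merge combines two syntactic objects into a larger structure."""
    return (a, b)

def degree_of_merger(node):
    """Maximum depth of nested Merge applications (0 for a bare word)."""
    if not isinstance(node, tuple):
        return 0
    return 1 + max(degree_of_merger(child) for child in node)

tree = merge(merge("the", "dog"), merge("chased", merge("the", "cat")))
print(degree_of_merger(tree))  # 3
```

On this reading, object-relative clauses yield deeper merged subtrees than subject-relative clauses, matching the activation differences the abstract reports.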
NASA Astrophysics Data System (ADS)
Amin, Ahmed
1986-07-01
A computer-controlled system for measuring bulk resistivity of insulating solids as a function of temperature is described. The measuring circuit is a modification of that given in the ASTM standard D257-66, to allow for a number of operations during the data-acquisition cycle. The bulk resistivity of an acceptor-doped morphotropic lead zirconate-titanate piezoelectric composition has been measured over the temperature range +40 to +200 °C. The activation energy derived from the experimental data is compared to the published values of similar morphotropic compositions.
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
NASA Astrophysics Data System (ADS)
Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing
2016-10-01
We have developed a new numerical ray-tracing approach for computing the LIDAR signal power function, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. In particular, we discuss the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully validated by comparison with real measurements.
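For context, the signal power function being computed instantiates the single-scatter elastic LIDAR equation, in which the geometric overlap factor O(R) carries the biaxial-geometry dependence discussed above. A minimal sketch with a hypothetical ramp-shaped overlap (the paper obtains O(R) by ray tracing instead):

```python
import numpy as np

def overlap(R, full_overlap_range=300.0):
    """Toy geometric overlap factor O(R): ramps from 0 to 1 as the beam
    enters the telescope field of view (hypothetical shape)."""
    return np.clip(R / full_overlap_range, 0.0, 1.0)

def lidar_power(R, C=1.0, beta=1e-6, alpha=1e-4):
    """Single-scatter elastic LIDAR equation with constant backscatter beta
    and extinction alpha: P(R) = C * O(R) * beta * exp(-2*alpha*R) / R**2."""
    return C * overlap(R) * beta * np.exp(-2.0 * alpha * R) / R**2
```

Tilting the transmitter (the inclination angle above) reshapes O(R) at short range, which is what compresses the dynamic range the detector must handle.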
Monte Carlo Computation of the Finite-Size Scaling Function: an Alternative Approach
NASA Astrophysics Data System (ADS)
Kim, Jae-Kwon; de Souza, Adauto J. F.; Landau, D. P.
1996-03-01
We show how to compute numerically a finite-size-scaling function which is particularly effective in extracting accurate infinite-volume-limit values (bulk values) of certain physical quantities [J.-K. Kim, Europhys. Lett. 28, 211 (1994)]. We illustrate our procedure for the two- and three-dimensional Ising models, and report our bulk values for the correlation length, magnetic susceptibility, and renormalized four-point coupling constant. Based on these bulk values we extract the values of various critical parameters.
NASA Astrophysics Data System (ADS)
Barnwell, Richard W.
1993-01-01
The derivation of the accurate, second-order, almost linear, approximate equation governing the defect stream function for nonequilibrium compressible turbulent boundary layers is reviewed. The similarity of this equation to the heat conduction equation is exploited in the development of an unconditionally stable, tridiagonal computational method which is second-order accurate in the marching direction and fourth-order accurate in the surface-normal direction. Results compare well with experimental data. Nonlinear effects are shown to be small. This two-dimensional method is simple and has been implemented on a programmable calculator.
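The abstract does not reproduce the scheme, but tridiagonal marching methods of this kind reduce, at each marching station, to one tridiagonal linear solve, typically done with the Thomas algorithm. A generic sketch of that solver (background, not the paper's specific discretization):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i+1]
    return x
```

The O(n) cost of this solve per marching step is what makes such boundary-layer methods cheap enough to run even on a programmable calculator, as the abstract notes.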
San José Estépar, Raúl; Mendoza, Carlos S.; Hersh, Craig P.; Laird, Nan; Crapo, James D.; Lynch, David A.; Silverman, Edwin K.; Washko, George R.
2013-01-01
Rationale: Emphysema occurs in distinct pathologic patterns, but little is known about the epidemiologic associations of these patterns. Standard quantitative measures of emphysema from computed tomography (CT) do not distinguish between distinct patterns of parenchymal destruction. Objectives: To study the epidemiologic associations of distinct emphysema patterns with measures of lung-related physiology, function, and health care use in smokers. Methods: Using a local histogram-based assessment of lung density, we quantified distinct patterns of low attenuation in 9,313 smokers in the COPDGene Study. To determine if such patterns provide novel insights into chronic obstructive pulmonary disease epidemiology, we tested for their association with measures of physiology, function, and health care use. Measurements and Main Results: Compared with percentage of low-attenuation area less than −950 Hounsfield units (%LAA-950), local histogram-based measures of distinct CT low-attenuation patterns are more predictive of measures of lung function, dyspnea, quality of life, and health care use. These patterns are strongly associated with a wide array of measures of respiratory physiology and function, and most of these associations remain highly significant (P < 0.005) after adjusting for %LAA-950. In smokers without evidence of chronic obstructive pulmonary disease, the mild centrilobular disease pattern is associated with lower FEV1 and worse functional status (P < 0.005). Conclusions: Measures of distinct CT emphysema patterns provide novel information about the relationship between emphysema and key measures of physiology, physical function, and health care use. Measures of mild emphysema in smokers with preserved lung function can be extracted from CT scans and are significantly associated with functional measures. PMID:23980521
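The conventional CT measure that the local-histogram patterns are benchmarked against, %LAA-950, is simply the percentage of lung voxels below −950 Hounsfield units. A one-function sketch (lung segmentation is assumed to be done upstream):

```python
import numpy as np

def percent_laa(hu_voxels, threshold=-950):
    """%LAA: percentage of (already lung-masked) voxels whose CT attenuation
    in Hounsfield units falls below the given threshold."""
    hu = np.asarray(hu_voxels, dtype=float)
    return 100.0 * np.mean(hu < threshold)

print(percent_laa([-1000, -900, -960, -500]))  # 50.0
```

The local-histogram approach in the abstract replaces this single global threshold with densitometric patterns estimated in small neighborhoods, which is why it can separate distinct emphysema subtypes that %LAA-950 conflates.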
Mandonnet, Emmanuel; Duffau, Hugues
2014-01-01
Historically, cerebral processing has been conceptualized as a framework based on statically localized functions. However, a growing amount of evidence supports a hodotopical (delocalized) and flexible organization. A number of studies have reported absence of a permanent neurological deficit after massive surgical resections of eloquent brain tissue. These results highlight the tremendous plastic potential of the brain. Understanding the anatomo-functional correlates underlying this cerebral reorganization is a prerequisite to restoring brain functions through brain-computer interfaces (BCIs) in patients with cerebral diseases, or even to potentiating brain functions in healthy individuals. Here, we review current knowledge of neural networks that could be utilized in BCIs that enable movements and language. To this end, intraoperative electrical stimulation in awake patients provides valuable information on the cerebral functional maps, their connectomics and plasticity. Overall, these studies indicate that the complex cerebral circuitry that underpins interactions between action, cognition and behavior should be thoroughly investigated before progress in BCI approaches can be achieved. PMID:24834030
Functional Priorities, Assistive Technology, and Brain-Computer Interfaces after Spinal Cord Injury
Collinger, Jennifer L.; Boninger, Michael L.; Bruns, Tim M.; Curley, Kenneth; Wang, Wei; Weber, Douglas J.
2012-01-01
Spinal cord injury often impacts a person’s ability to perform critical activities of daily living and can have a negative impact on their quality of life. Assistive technology aims to bridge this gap to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies, like brain-computer interfaces (BCIs), are developed. In a survey study of fifty-seven veterans with spinal cord injury who were participating in the National Veterans Wheelchair Games, we found that restoration of bladder/bowel control, walking, and arm/hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of individuals in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criteria. Interestingly, many participants reported that they would be willing to consider surgery to implant a BCI even though non-invasiveness was a high priority design requirement. This survey demonstrates the interest of individuals with spinal cord injury in receiving and contributing to the design of BCI. PMID:23760996
Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury.
Collinger, Jennifer L; Boninger, Michael L; Bruns, Tim M; Curley, Kenneth; Wang, Wei; Weber, Douglas J
2013-01-01
Spinal cord injury (SCI) often affects a person's ability to perform critical activities of daily living and can negatively affect his or her quality of life. Assistive technology aims to bridge this gap in order to augment function and increase independence. It is critical to involve consumers in the design and evaluation process as new technologies such as brain-computer interfaces (BCIs) are developed. In a survey study of 57 veterans with SCI participating in the 2010 National Veterans Wheelchair Games, we found that restoration of bladder and bowel control, walking, and arm and hand function (tetraplegia only) were all high priorities for improving quality of life. Many of the participants had not used or heard of some currently available technologies designed to improve function or the ability to interact with their environment. The majority of participants in this study were interested in using a BCI, particularly for controlling functional electrical stimulation to restore lost function. Independent operation was considered to be the most important design criteria. Interestingly, many participants reported that they would consider surgery to implant a BCI even though noninvasiveness was a high-priority design requirement. This survey demonstrates the interest of individuals with SCI in receiving and contributing to the design of BCIs.
Roberts, Timothy D; Clatworthy, Mark G; Frampton, Chris M; Young, Simon W
2015-09-01
The objective of this study was to determine whether computer-assisted navigation in total knee arthroplasty (TKA) improves functional outcomes and implant survival, using data from a large national database. We analysed 9054 primary TKA procedures performed between 2006 and 2012 from the New Zealand National Joint Registry. Functional outcomes were assessed using Oxford Knee Questionnaires at six months and five years. On multivariate analysis, there was no significant difference in mean Oxford Knee Scores between the navigated and non-navigated groups at six months (39.0 vs 38.1, P=0.54) or five years (42.2 vs 42.0, P=0.76). At current follow-up, there was no difference in revision rates between navigated and non-navigated TKA (0.46 vs 0.43 revisions per 100 component years, P=0.8).
Purdy, Michael D; Bennett, Brad C; McIntire, William E; Khan, Ali K; Kasson, Peter M; Yeager, Mark
2014-08-01
Three vignettes exemplify the potential of combining EM and X-ray crystallographic data with molecular dynamics (MD) simulation to explore the architecture, dynamics and functional properties of multicomponent, macromolecular complexes. The first two describe how EM and X-ray crystallography were used to solve structures of the ribosome and the Arp2/3-actin complex, which enabled MD simulations that elucidated functional dynamics. The third describes how EM, X-ray crystallography, and microsecond MD simulations of a GPCR:G protein complex were used to explore transmembrane signaling by the β-adrenergic receptor. Recent technical advancements in EM, X-ray crystallography and computational simulation create unprecedented synergies for integrative structural biology to reveal new insights into heretofore intractable biological systems.
NASA Technical Reports Server (NTRS)
1975-01-01
A system analysis of the shuttle orbiter baseline system management (SM) computer function is performed. This analysis results in an alternative SM design, which is also described. The alternative design exhibits several improvements over the baseline, including increased crew usability, improved flexibility, and improved growth potential. The analysis consists of two parts: an application assessment and an implementation assessment. The former is concerned with the SM user needs and design functional aspects. The latter is concerned with design flexibility, reliability, growth potential, and technical risk. The system analysis is supported by several topical investigations, including the treatment of false alarms, the treatment of off-line items, significant interface parameters, and a design evaluation checklist. An in-depth formulation of techniques, concepts, and guidelines for the design of automated performance verification is discussed.
Carrizo, Sebastián; Xie, Xinzhou; Peinado-Peinado, Rafael; Sánchez-Recalde, Angel; Jiménez-Valero, Santiago; Galeote-Garcia, Guillermo; Moreno, Raúl
2014-10-01
Clinical trials have shown that functional assessment of coronary stenosis by fractional flow reserve (FFR) improves clinical outcomes. Intravascular ultrasound (IVUS) complements conventional angiography, and is a powerful tool to assess atherosclerotic plaques and to guide percutaneous coronary intervention (PCI). Computational fluid dynamics (CFD) simulation represents a novel method for the functional assessment of coronary flow. A CFD simulation can be calculated from the data normally acquired by IVUS images. A case of coronary heart disease studied with FFR and IVUS, before and after PCI, is presented. A three-dimensional model was constructed based on IVUS images, to which CFD was applied. A discussion of the literature concerning the clinical utility of CFD simulation is provided.
Integrative computed tomographic imaging of cardiac structure, function, perfusion, and viability.
Thilo, Christian; Hanley, Michael; Bastarrika, Gorka; Ruzsics, Balazs; Schoepf, U Joseph
2010-01-01
Recent advances in multidetector-row computed tomography (MDCT) technology have created new opportunities in cardiac imaging and provided new insights into a variety of disease states. Use of 64-slice coronary computed tomography angiography has been validated for the evaluation of clinically relevant coronary artery stenosis with high negative predictive values for ruling out significant obstructive disease. This technology has also advanced the care of patients with acute chest pain by simultaneous assessment of acute coronary syndrome, pulmonary embolism, and acute aortic syndrome ("triple rule out"). Although MDCT has been instrumental in the advancement of cardiac imaging, there are still limitations in patients with high or irregular heart rates. Newer MDCT scanner generations hold promise to improve some of these limitations for noninvasive cardiac imaging. The evaluation of coronary artery stenosis remains the primary clinical indication for cardiac computed tomography angiography. However, the use of MDCT for simultaneous assessment of coronary artery stenosis, atherosclerotic plaque formation, ventricular function, myocardial perfusion, and viability with a single modality is under intense investigation. Recent technical developments hold promise for accomplishing this goal and establishing MDCT as a comprehensive stand-alone test for integrative imaging of coronary heart disease.
Wang, Hongbo; Shu, Shengjie; Li, Jinping; Jiang, Huijie
2016-02-01
The objective of this study was to observe changes in blood perfusion of liver cancer following argon-helium knife treatment using functional computed tomography perfusion imaging. Twenty-seven patients with primary liver cancer treated with the argon-helium knife were included in this study. Plain computed tomography (CT) and CT perfusion (CTP) imaging were conducted in all patients before and after treatment. Perfusion parameters including blood flow, blood volume, hepatic artery perfusion fraction, hepatic artery perfusion, and hepatic portal venous perfusion were used to evaluate the therapeutic effect. All parameters in liver cancer were significantly decreased after argon-helium knife treatment (p < 0.05 for all). A significant decrease in hepatic artery perfusion was also observed in pericancerous liver tissue, but the other parameters remained constant. CT perfusion imaging is able to detect the decrease in blood perfusion of liver cancer after argon-helium knife therapy. Therefore, CTP imaging could play an important role in the management of liver cancer following argon-helium knife therapy.
Davies, Sherri R.; Chang, Li-Wei; Patra, Debabrata; Xing, Xiaoyun; Posey, Karen; Hecht, Jacqueline; Stormo, Gary D.; Sandell, Linda J.
2007-01-01
Chondrocyte gene regulation is important for the generation and maintenance of cartilage tissues. Several regulatory factors have been identified that play a role in chondrogenesis, including the positive trans-acting factors of the SOX family such as SOX9, SOX5, and SOX6, as well as negative trans-acting factors such as C/EBP and delta EF1. However, a complete understanding of the intricate regulatory network that governs the tissue-specific expression of cartilage genes is not yet available. We have taken a computational approach to identify cis-regulatory, transcription factor (TF) binding motifs in a set of cartilage characteristic genes to better define the transcriptional regulatory networks that regulate chondrogenesis. Our computational methods have identified several TFs, whose binding profiles are available in the TRANSFAC database, as important to chondrogenesis. In addition, a cartilage-specific SOX-binding profile was constructed and used to identify both known, and novel, functional paired SOX-binding motifs in chondrocyte genes. Using DNA pattern-recognition algorithms, we have also identified cis-regulatory elements for unknown TFs. We have validated our computational predictions through mutational analyses in cell transfection experiments. One novel regulatory motif, N1, found at high frequency in the COL2A1 promoter, was found to bind to chondrocyte nuclear proteins. Mutational analyses suggest that this motif binds a repressive factor that regulates basal levels of the COL2A1 promoter. PMID:17785538
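The motif-scanning step described in this abstract can be illustrated with a minimal position-weight-matrix (PWM) scan. The 4-position log-odds matrix, the test sequence, and the threshold below are invented for illustration; they are not the SOX-binding profile or the N1 motif from the study.

```python
# Minimal sliding-window scan of a DNA sequence against a position
# weight matrix (PWM) of log-odds scores. The matrix values and the
# threshold are illustrative only, not the study's SOX profile.
PWM = [
    {"A": 1.2, "C": -0.5, "G": -0.8, "T": 0.1},
    {"A": -0.3, "C": 1.0, "G": -1.0, "T": 0.2},
    {"A": 0.8, "C": -0.2, "G": 0.5, "T": -1.5},
    {"A": -0.9, "C": 0.3, "G": 1.1, "T": -0.4},
]

def scan(sequence, pwm, threshold=2.0):
    """Return (position, score) for every window scoring >= threshold."""
    width = len(pwm)
    hits = []
    for i in range(len(sequence) - width + 1):
        score = sum(pwm[j][sequence[i + j]] for j in range(width))
        if score >= threshold:
            hits.append((i, round(score, 2)))
    return hits

hits = scan("TTACAGGT", PWM)   # one strong site, starting at position 2
```

Real motif-discovery pipelines additionally scan the reverse complement and correct for background base composition; both are omitted here for brevity.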
Point spread function computation in normal incidence for rough optical surfaces
NASA Astrophysics Data System (ADS)
Tayabaly, Kashmira; Spiga, Daniele; Sironi, Giorgia; Canestrari, Rodolfo; Lavagna, Michele; Pareschi, Giovanni
2016-08-01
The Point Spread Function (PSF) specifies the angular resolution of an optical system and is therefore a key parameter in defining the performance of most optics. Predicting a system's PSF is thus a powerful tool for assessing the design and manufacturing requirements of complex optical systems. Currently, well-established ray-tracing routines based on geometrical optics are used for this purpose. However, these routines either neglect real surface defects (figure errors or micro-roughness) in their computation, or they model the resulting scattering separately, which requires assumptions that are difficult to verify. As the demand for tighter angular resolution increases, surface finishing errors can drastically degrade the optical performance of a system, including optical telescope systems. A purely physical-optics approach is more effective, as it remains valid regardless of the shape and size of the defects on the optical surface. However, this computation is time consuming when performed in two-dimensional space, since it requires processing a surface map at micron-level resolution and sometimes extending the propagation to multiple reflections. The computation is significantly simplified in the far-field configuration, as it then involves only a sequence of Fourier transforms. We show how to account for measured surface defects and roughness in order to predict the performance of the optics in single reflection, and we apply and validate the approach on real case studies.
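In the far-field configuration described above, the PSF reduces to the squared modulus of the Fourier transform of the pupil function carrying the surface-error phase. The sketch below assumes a circular aperture, 500 nm illumination, and a synthetic 10 nm rms roughness map; none of these values come from the paper, and a real surface map would replace the random one.

```python
import numpy as np

# Far-field PSF of a circular aperture whose phase is perturbed by a
# surface height-error map (synthetic roughness, illustrative values).
n = 128
wavelength = 500e-9                                  # 500 nm light (assumed)
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
aperture = (x**2 + y**2 <= 1.0).astype(float)

rng = np.random.default_rng(0)
height_error = 10e-9 * rng.standard_normal((n, n))   # ~10 nm rms roughness

# In normal incidence, a reflective height error h delays the wavefront
# by 2h, i.e. a phase of 4*pi*h/lambda.
phase = 4.0 * np.pi * height_error / wavelength
pupil = aperture * np.exp(1j * phase)

# Far-field intensity: squared modulus of the Fourier transform of the pupil.
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()                                     # normalise to unit energy
```

Surface figure errors with longer spatial periods would be added to `height_error` in the same way; the single FFT handles both regimes, which is the advantage over separate scattering models.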
Terrell, Cassidy R; Listenberger, Laura L
2017-02-01
Recognizing that undergraduate students can benefit from analysis of 3D protein structure and function, we have developed a multiweek, inquiry-based molecular visualization project for Biochemistry I students. This project uses a virtual model of cyclooxygenase-1 (COX-1) to guide students through multiple levels of protein structure analysis. The first assignment explores primary structure by generating and examining a protein sequence alignment. Subsequent assignments introduce 3D visualization software to explore secondary, tertiary, and quaternary structure. Students design an inhibitor, based on scrutiny of the enzyme active site, and evaluate the fit of the molecule using computed binding energies. In the last assignment, students introduce a point mutation to model the active site of the related COX-2 enzyme and analyze the impact of the mutation on inhibitor binding. With this project we aim to increase knowledge about, and confidence in using, online databases and computational tools. Here, we share results of our mixed-methods pre- and post-surveys demonstrating student gains in both areas. © 2017 by The International Union of Biochemistry and Molecular Biology.
Do, An H; Wang, Po T; King, Christine E; Schombs, Andrew; Cramer, Steven C; Nenadic, Zoran
2012-01-01
Gait impairment due to foot drop is a common outcome of stroke, and current physiotherapy provides only limited restoration of gait function. Gait function can also be aided by orthoses, but these devices may be cumbersome and their benefits disappear upon removal. Hence, new neuro-rehabilitative therapies are being sought to generate permanent improvements in motor function beyond those of conventional physiotherapies through positive neural plasticity processes. Here, the authors describe an electroencephalogram (EEG) based brain-computer interface (BCI) controlled functional electrical stimulation (FES) system that enabled a stroke subject with foot drop to re-establish foot dorsiflexion. To this end, a prediction model was generated from EEG data collected as the subject alternated between periods of idling and attempted foot dorsiflexion. This prediction model was then used to classify online EEG data into either "idling" or "dorsiflexion" states, and this information was subsequently used to control an FES device to elicit effective foot dorsiflexion. The performance of the system was assessed in online sessions, where the subject was prompted by a computer to alternate between periods of idling and dorsiflexion. The subject demonstrated purposeful operation of the BCI-FES system, with an average cross-correlation between instructional cues and BCI-FES response of 0.60 over 3 sessions. In addition, analysis of the prediction model indicated that non-classical brain areas were activated in the process, suggesting post-stroke cortical re-organization. In the future, these systems may be explored as a potential therapeutic tool that can help promote positive plasticity and neural repair in chronic stroke patients.
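The performance metric reported above, cross-correlation between the instructional cue and the decoded BCI-FES response, can be sketched as follows. The data are synthetic (alternating idle/dorsiflexion cues with a fixed response delay), and the exact correlation procedure of the paper may differ; the 0.60 figure is theirs, not reproduced here.

```python
import numpy as np

def peak_lagged_correlation(cue, response, max_lag):
    """Peak Pearson correlation between cue and response over response lags."""
    best = -1.0
    for lag in range(max_lag + 1):
        a = cue[: len(cue) - lag] if lag else cue
        b = response[lag:]
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

# Synthetic session: alternating 10-sample idle (0) / dorsiflexion (1)
# cues, with the decoded BCI-FES response trailing the cue by 2 samples.
cue = np.tile(np.r_[np.zeros(10), np.ones(10)], 3)
response = np.r_[np.zeros(2), cue[:-2]]
peak = peak_lagged_correlation(cue, response, max_lag=5)
```

Allowing a lag accounts for the physiological and processing delay between the on-screen cue and the elicited dorsiflexion; at the true delay the synthetic sequences match exactly.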
Wang, Menghua
2016-05-30
To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. There are some other important applications and advantages to using the new Rayleigh LUTs for satellite remote sensing, including an efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high altitude
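The SRF effect discussed above arises because a band-averaged radiance, the SRF-weighted mean of a steeply varying lambda^-4 Rayleigh signal, differs from the radiance at the band centre. The sketch below uses an invented Gaussian SRF centred at 410 nm (echoing the VIIRS band named in the abstract); the shape, width, and resulting percentage are illustrative, not the published values.

```python
import numpy as np

# Band-averaging a lambda^-4 Rayleigh-like radiance over a sensor
# spectral response function (SRF). Gaussian SRF and unit band-centre
# radiance are assumptions for illustration.
wl = np.linspace(380.0, 440.0, 601)              # wavelength grid, nm
srf = np.exp(-0.5 * ((wl - 410.0) / 8.0) ** 2)   # assumed Gaussian SRF
radiance = (wl / 410.0) ** -4                    # Rayleigh ~ lambda^-4

band_avg = np.sum(srf * radiance) / np.sum(srf)  # SRF-weighted average
srf_effect_percent = 100.0 * (band_avg - 1.0)    # vs band-centre value
```

Because lambda^-4 is convex, the band average always exceeds the band-centre value, and the discrepancy grows with bandwidth, which is why broad bands at short wavelengths show the largest SRF effect.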
NASA Astrophysics Data System (ADS)
Avanaki, Mohammad R. N.; Xia, Jun; Wang, Lihong V.
2013-03-01
Photoacoustic computed tomography (PACT) is an emerging imaging technique based on the acoustic detection of optical absorption by tissue chromophores, such as oxy-hemoglobin and deoxy-hemoglobin. An important application of PACT is functional brain imaging of small animals. The conversion of light to acoustic waves allows PACT to provide high-resolution images of cortical vasculature through the intact scalp. Here, PACT was utilized to study the activated areas of the mouse brain during forepaw and hindpaw stimulations. Temporal PACT images were acquired, enabling computation of hemodynamic changes during stimulation. The stimulations were performed by trains of pulses at different stimulation currents (between 0.1 and 2 mA) and pulse repetition rates (between 0.01 Hz and 0.05 Hz). The responses at the forelimb and hindlimb regions of the somatosensory cortex were investigated. The Paxinos mouse brain atlas was used to confirm the activated regions. The study shows that PACT is a promising new technology for studying brain function with high spatial resolution.
Song, Inyoung; Park, Jung Ah; Choi, Bo Hwa; Shin, Je Kyoun; Chee, Hyun Keun; Kim, Jun Seok
2016-01-01
Objective: The aim of this study was to identify the morphological and functional characteristics of quadricuspid aortic valves (QAV) on cardiac computed tomography (CCT). Materials and Methods: We retrospectively enrolled 11 patients with QAV. All patients underwent CCT and transthoracic echocardiography (TTE), and 7 patients underwent cardiovascular magnetic resonance (CMR). The presence and classification of QAV assessed by CCT was compared with that of TTE and intraoperative findings. The regurgitant orifice area (ROA) measured by CCT was compared with the severity of aortic regurgitation (AR) by TTE and the regurgitant fraction (RF) by CMR. Results: All of the patients had AR; 9 had pure AR, 1 had combined aortic stenosis and regurgitation, and 1 had combined subaortic stenosis and regurgitation. Two patients had a subaortic fibrotic membrane, and 1 of them showed a subaortic stenosis. One QAV was misdiagnosed as a tricuspid aortic valve on TTE. In accordance with the Hurwitz and Roberts classification, consensus was reached on the QAV classification between the CCT and TTE findings in 7 of 10 patients. The patients were classified as type A (n = 1), type B (n = 3), type C (n = 1), type D (n = 4), and type F (n = 2) on CCT. A very high correlation existed between ROA by CCT and RF by CMR (r = 0.99), but only a good correlation existed between ROA by CCT and regurgitant severity by TTE (r = 0.62). Conclusion: Cardiac computed tomography provides comprehensive anatomical and functional information about the QAV. PMID:27390538
Highly automated computer-aided diagnosis of neurological disorders using functional brain imaging
NASA Astrophysics Data System (ADS)
Spetsieris, P. G.; Ma, Y.; Dhawan, V.; Moeller, J. R.; Eidelberg, D.
2006-03-01
We have implemented a highly automated analytical method for computer-aided diagnosis (CAD) of neurological disorders using functional brain imaging that is based on the Scaled Subprofile Model (SSM). Accurate diagnosis of functional brain disorders such as Parkinson's disease is often difficult clinically, particularly in early stages. Using principal component analysis (PCA) in conjunction with SSM on brain images of patients and normals, we can identify characteristic abnormal network covariance patterns which provide a subject-dependent scalar score that not only discriminates a particular disease but also correlates with independent measures of disease severity. These patterns represent disease-specific brain networks that have been shown to be highly reproducible in distinct groups of patients. Topographic Profile Rating (TPR) is a reverse SSM computational algorithm that can be used to determine subject scores for new patients on a prospective basis. In our implementation, reference values for a full range of patients and controls are automatically accessed for comparison. We also implemented an automated recalibration step to produce reference scores for images generated in a different imaging environment from that used in the initial network derivation. New subjects under the same setting can then be evaluated individually and a simple report is generated indicating the subject's classification. For scores near the normal limits, additional criteria are used to make a definitive diagnosis. With further refinement, automated TPR can be used to efficiently assess disease severity, monitor disease progression and evaluate treatment efficacy.
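The SSM/PCA scoring idea above can be sketched in a few lines: double-centre log-transformed scans, take the leading principal component as the candidate disease-related covariance pattern, and score each subject by projection onto it. The data below are synthetic (a random "pattern" expressed more strongly in simulated patients than controls); this is a toy of the general SSM recipe, not the clinical CAD pipeline or its recalibration steps.

```python
import numpy as np

# Toy Scaled-Subprofile-Model-style analysis on synthetic scans.
rng = np.random.default_rng(1)
n_patients = n_controls = 10
n_voxels = 500
pattern = rng.standard_normal(n_voxels)            # hypothetical network
expression = np.r_[np.full(n_patients, 1.5),       # patients express it...
                   np.full(n_controls, -1.5)]      # ...controls do not
scans = expression[:, None] * pattern + rng.standard_normal((20, n_voxels))

# Double centring: remove each subject's mean (global scaling effects)
# and the mean group profile, as in SSM preprocessing.
srp = scans - scans.mean(axis=1, keepdims=True)
srp -= srp.mean(axis=0, keepdims=True)

# PCA via SVD: the leading right-singular vector is the candidate
# covariance pattern; projections onto it give subject scores.
u, s, vt = np.linalg.svd(srp, full_matrices=False)
scores = srp @ vt[0]

separation = abs(scores[:n_patients].mean() - scores[n_patients:].mean())
```

Prospective scoring of a new subject (the forward-application idea behind TPR) amounts to applying the same centring to the new scan and projecting it onto the frozen pattern `vt[0]`.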
Overstall, Antony M; Woods, David C
2016-08-01
We present a common framework for Bayesian emulation methodologies for multivariate output simulators, or computer models, that employ either parametric linear models or non-parametric Gaussian processes. Novel diagnostics suitable for multivariate covariance separable emulators are developed and techniques to improve the adequacy of an emulator are discussed and implemented. A variety of emulators are compared for a humanitarian relief simulator, modelling aid missions to Sicily after a volcanic eruption and earthquake, and a sensitivity analysis is conducted to determine the sensitivity of the simulator output to changes in the input variables. The results from parametric and non-parametric emulators are compared in terms of prediction accuracy, uncertainty quantification and scientific interpretability.
ERIC Educational Resources Information Center
Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted
2014-01-01
An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…
ERIC Educational Resources Information Center
International Business Machines Corp., White Plains, NY.
The economic and technical feasibility of providing a remote terminal central computing facility to serve a group of 25-75 secondary schools and colleges was investigated. The general functions of a central facility for an educational cluster were defined to include training in computer techniques, the solution of student development problems in…
NASA Astrophysics Data System (ADS)
Mishev, Alexander; Usoskin, Ilya
2016-07-01
A precise analysis of SEP (solar energetic particle) spectral and angular characteristics using neutron monitor (NM) data requires realistic modeling of the propagation of those particles in the Earth's magnetosphere and atmosphere. On the basis of a method comprising a sequence of consecutive steps, namely a detailed computation of the SEP asymptotic cones of acceptance, application of a neutron monitor yield function, and a convenient optimization procedure, we derived the rigidity spectra and anisotropy characteristics of several major GLEs. Here we present several major GLEs of solar cycle 23: the Bastille Day event on 14 July 2000 (GLE 59), GLE 69 on 20 January 2005, and GLE 70 on 13 December 2006. The SEP spectra and pitch angle distributions were computed in their dynamical development. For the computation we use the newly computed yield function of the standard 6NM64 neutron monitor for primary proton and alpha CR nuclei. In addition, we present new computations of the NM yield function for altitudes of 3000 m and 5000 m above sea level. The computations were carried out with the Planetocosmics and CORSIKA codes as standardized Monte Carlo tools for atmospheric cascade simulations. The flux of secondary neutrons and protons was computed using the Planetocosmics code, applying a realistic curved-atmosphere model. Updated information concerning the NM registration efficiency for secondary neutrons and protons was used. The derived results for spectral and angular characteristics using the newly computed NM yield function at several altitudes are compared with those previously obtained using the double attenuation method.
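The role of the yield function in such an analysis is a folding integral: the count-rate increase is the integral of the yield S(R) times the SEP rigidity spectrum J(R) above the geomagnetic cutoff. The functional forms below are mock shapes chosen only to make the folding concrete; they are not the published 6NM64 yield function or a derived GLE spectrum.

```python
import numpy as np

# Folding a (mock) neutron-monitor yield function with a power-law SEP
# rigidity spectrum: N = integral of S(R) * J(R) dR above the cutoff.
R = np.linspace(1.0, 20.0, 2000)     # rigidity grid in GV, cutoff at 1 GV

S = 1e-2 * R ** 1.5                  # mock yield, growing with rigidity
J = 100.0 * R ** -5.0                # steep SEP power-law spectrum

integrand = S * J                    # here simply R^-3.5
dR = R[1] - R[0]
count_rate = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dR  # trapezoid
```

In the optimization step described in the abstract, the spectral parameters (here the power-law amplitude and slope) are the unknowns, adjusted until the folded count rates match the observed increases at many NM stations with different cutoffs and asymptotic directions.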
Corda, Marcella; Tamburrini, Maurizio; De Rosa, Maria C; Sanna, Maria T; Fais, Antonella; Olianas, Alessandra; Pellegrini, Mariagiuseppina; Giardina, Bruno; di Prisco, Guido
2003-01-01
The functional properties of haemoglobin from the Mediterranean whale Balaenoptera physalus have been studied as functions of heterotropic effector concentration and temperature. Particular attention has been given to the effect of carbon dioxide and lactate since the animal is specialised for prolonged dives often in cold water. The molecular basis of the functional behaviour and in particular of the weak interaction with 2,3-diphosphoglycerate is discussed in the light of the primary structure and of computer modelling. On these bases, it is suggested that the A2 (Pro-->Ala) substitution observed in the beta chains of whale haemoglobin may be responsible for the displacement of the A helix known to be a key structural feature in haemoglobins that display an altered interaction with 2,3-diphosphoglycerate as compared with human haemoglobin. The functional and structural results, discussed in the light of a previous study on the haemoglobin from the Arctic whale Balaenoptera acutorostrata, give further insights into the regulatory mechanisms of the interactive effects of temperature, carbon dioxide and lactate.
Not Available
2012-07-01
NREL researchers use high-performance computing to demonstrate fundamental roles of aromatic residues in cellulase enzyme tunnels. National Renewable Energy Laboratory (NREL) computer simulations of a key industrial enzyme, the Trichoderma reesei Family 6 cellulase (Cel6A), predict that aromatic residues near the enzyme's active site and at the entrance and exit tunnel perform different functions in substrate binding and catalysis, depending on their location in the enzyme. These results suggest that nature employs aromatic-carbohydrate interactions with a wide variety of binding affinities for diverse functions. Outcomes also suggest that protein engineering strategies in which mutations are made around the binding sites may require tailoring specific to the enzyme family. Cellulase enzymes ubiquitously exhibit tunnels or clefts lined with aromatic residues for processing carbohydrate polymers to monomers, but the molecular-level role of these aromatic residues remains unknown. In silico mutation of the aromatic residues near the catalytic site of Cel6A has little impact on the binding affinity, but simulation suggests that these residues play a major role in the glucopyranose ring distortion necessary for cleaving glycosidic bonds to produce fermentable sugars. Removal of aromatic residues at the entrance and exit of the cellulase tunnel, however, dramatically impacts the binding affinity. This suggests that these residues play a role in acquiring cellulose chains from the cellulose crystal and stabilizing the reaction product, respectively. These results illustrate that the role of aromatic-carbohydrate interactions varies dramatically depending on the position in the enzyme tunnel. As aromatic-carbohydrate interactions are present in all carbohydrate-active enzymes, the results have implications for understanding protein structure-function relationships in carbohydrate metabolism and recognition, carbon turnover in nature, and protein engineering strategies for
Mehio, Nada; Lashely, Mark A.; Nugent, Joseph W.; Tucker, Lyndsay; Correia, Bruna; Do-Thanh, Chi-Linh; Dai, Sheng; Hancock, Robert D.; Bryantsev, Vyacheslav S.
2015-01-26
Poly(acrylamidoxime) adsorbents are often invoked in discussions of mining uranium from seawater. It has been demonstrated repeatedly in the literature that the success of these materials is due to the amidoxime functional group. While the amidoxime-uranyl chelation mode has been established, a number of essential binding constants remain unclear. This is largely due to the wide range of conflicting pKa values that have been reported for the amidoxime functional group in the literature. To resolve this existing controversy we investigated the pKa values of the amidoxime functional group using a combination of experimental and computational methods. Experimentally, we used spectroscopic titrations to measure the pKa values of representative amidoximes, acetamidoxime and benzamidoxime. Computationally, we report on the performance of several protocols for predicting the pKa values of aqueous oxoacids. Calculations carried out at the MP2 or M06-2X levels of theory combined with solvent effects calculated using the SMD model provide the best overall performance with a mean absolute error of 0.33 pKa units and 0.35 pKa units, respectively, and a root mean square deviation of 0.46 pKa units and 0.45 pKa units, respectively. Finally, we employ our two best methods to predict the pKa values of promising, uncharacterized amidoxime ligands. Hence, our study provides a convenient means for screening suitable amidoxime monomers for future generations of poly(acrylamidoxime) adsorbents used to mine uranium from seawater.
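The two error metrics used above to rank the prediction protocols, mean absolute error and root-mean-square deviation between computed and measured pKa values, are straightforward to compute. The value pairs below are placeholders, not the paper's data.

```python
import math

# MAE and RMSD between computed and measured pKa values.
# These numbers are invented placeholders, not the study's measurements.
measured = [5.78, 6.10, 5.95, 6.33]
computed = [6.05, 5.90, 6.20, 6.60]

errors = [c - m for c, m in zip(computed, measured)]
mae = sum(abs(e) for e in errors) / len(errors)
rmsd = math.sqrt(sum(e * e for e in errors) / len(errors))
```

RMSD weights large outliers more heavily than MAE, which is why reporting both (as the abstract does) gives a fuller picture of a protocol's reliability.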
Computer Simulation on the Cooperation of Functional Molecules during the Early Stages of Evolution
Ma, Wentao; Hu, Jiming
2012-01-01
It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with “micro-resolution” at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a “habitat-shift” into bulk water. Second, the protocell stage started with a substage of “pseudo-protocells”, with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent “true-protocell” substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which makes it possible to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT, especially for standard LDA/GGA ground-state and response-function calculations, several strategies have been followed. A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory; it increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased: the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. In parallel, a substantial effort has been made to analyse the performance of the code on petascale architectures, showing which code sections have to be improved; all of them are related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code's scalability will be described. They are based on an exploration of new diagonalization
Cuny, Jérôme; Sykina, Kateryna; Fontaine, Bruno; Le Pollès, Laurent; Pickard, Chris J; Gautier, Régis
2011-11-21
Solid-state (95)Mo nuclear magnetic resonance (NMR) properties of molybdenum hexacarbonyl have been computed using density functional theory (DFT) based methods. Both quadrupolar coupling and chemical shift parameters were evaluated and compared with parameters determined with high precision using single-crystal (95)Mo NMR experiments. Within a molecular approach, the effects of the major computational parameters, i.e. basis set, exchange-correlation functional, and treatment of relativity, have been evaluated. Except for the isotropic parameter of both chemical shift and chemical shielding, the computed NMR parameters are more sensitive to geometrical variations than to computational details. Relativistic effects do not play a crucial part in the calculations of such parameters for this 4d transition metal, in particular the isotropic chemical shift. Periodic DFT calculations were undertaken to measure the influence of neighbouring molecules in the crystal structure. These effects have to be taken into account to compute accurate solid-state (95)Mo NMR parameters even for such an inorganic molecular compound.
Novel hold-release functionality in a P300 brain-computer interface
NASA Astrophysics Data System (ADS)
Alcaide-Aguirre, R. E.; Huggins, J. E.
2014-12-01
Assistive technology control interface theory describes interface activation and interface deactivation as distinct properties of any control interface. Separating control of activation and deactivation allows precise timing of the duration of the activation. Objective. We propose a novel P300 brain-computer interface (BCI) functionality with separate control of the initial activation and the deactivation (hold-release) of a selection. Approach. Using two different layouts and off-line analysis, we tested the accuracy with which subjects could (1) hold their selection and (2) quickly change between selections. Main results. Mean accuracy across all subjects for the hold-release algorithm was 85% with one hold-release classification and 100% with two hold-release classifications. Using a layout designed to lower perceptual errors, accuracy increased to a mean of 90% and the time subjects could hold a selection was 40% longer than with the standard layout. Hold-release functionality provides improved response time (6-16 times faster) over the initial P300 BCI selection by allowing the BCI to make hold-release decisions from very few flashes instead of after multiple sequences of flashes. Significance. For the BCI user, hold-release functionality allows for faster, more continuous control with a P300 BCI, creating new options for BCI applications.
Deliquescence of NaBH4 computed from density functional theory
NASA Astrophysics Data System (ADS)
Li, Ping; Al-Saidi, Wissam; Johnson, Karl
2012-02-01
Complex hydrides are promising hydrogen storage materials and have received significant attention due to their high hydrogen capacity. The hydrolysis reaction of NaBH4 releases hydrogen with both fast kinetics and a high extent of reaction under technical conditions by using steam deliquescence of NaBH4. This catalyst-free reaction has many advantages over traditional catalytic aqueous-phase hydrolysis. The first step in the reaction is deliquescence, i.e. adsorption of water onto the NaBH4 surface and formation of a liquid layer of concentrated NaBH4 solution, which is quickly followed by hydrogen generation. We have used periodic plane-wave density functional theory to compute the energetics and dynamics of the initial stages of deliquescence on the (001) surface of NaBH4. Comparison of results from standard generalized gradient approximation functionals with a dispersion-corrected density functional shows that dispersion forces are important for adsorption. We used DFT molecular dynamics to assess the elementary steps in the deliquescence process.
Synaptic Efficacy as a Function of Ionotropic Receptor Distribution: A Computational Study
Allam, Sushmita L.; Bouteiller, Jean-Marie C.; Hu, Eric Y.; Ambert, Nicolas; Greget, Renaud; Bischoff, Serge; Baudry, Michel; Berger, Theodore W.
2015-01-01
Glutamatergic synapses are the most prevalent functional elements of information processing in the brain. Changes in pre-synaptic activity and in the function of various post-synaptic elements contribute to generate a large variety of synaptic responses. Previous studies have explored postsynaptic factors responsible for regulating synaptic strength variations, but have given far less importance to synaptic geometry, and more specifically to the subcellular distribution of ionotropic receptors. We analyzed the functional effects resulting from changing the subsynaptic localization of ionotropic receptors by using a hippocampal synaptic computational framework. The present study was performed using the EONS (Elementary Objects of the Nervous System) synaptic modeling platform, which was specifically developed to explore the roles of subsynaptic elements as well as their interactions, and that of synaptic geometry. More specifically, we determined the effects of changing the localization of ionotropic receptors relative to the presynaptic glutamate release site, on synaptic efficacy and its variations following single pulse and paired-pulse stimulation protocols. The results indicate that changes in synaptic geometry do have consequences on synaptic efficacy and its dynamics. PMID:26480028
Parry, David A D
2016-01-01
Experimental and theoretical research aimed at determining the structure and function of the family of intermediate filament proteins has made significant advances over the past 20 years. Much of this has either contributed to or relied on the amino acid sequence databases that are now available online, and the data mining approaches that have been developed to analyze these sequences. As the quality of sequence data is generally high, it follows that it is the design of the computational and graphical methodologies that is of especial importance to researchers who aspire to gain a greater understanding of those sequence features that specify both function and structural hierarchy. However, these techniques are necessarily subject to limitations, and it is important that these be recognized. In addition, no single method is likely to be successful in solving a particular problem, and a coordinated approach using a suite of methods is generally required. A final step in the process involves the interpretation of the results obtained and the construction of a working model or hypothesis that suggests further experimentation. While such methods allow meaningful progress to be made, it is still important that the data are interpreted correctly and conservatively. New data mining methods are continually being developed, and it can be expected that even greater understanding of the relationship between structure and function will be gleaned from sequence data in the coming years.
Reproducibility of physiologic parameters obtained using functional computed tomography in mice
NASA Astrophysics Data System (ADS)
Krishnamurthi, Ganapathy; Stantz, Keith M.; Steinmetz, Rosemary; Hutchins, Gary D.; Liang, Yun
2004-04-01
High-speed X-ray computed tomography (CT) has the potential to observe the transport of iodinated radio-opaque contrast agent (CA) through tissue, enabling the quantification of tissue physiology in organs and tumors. The concentration of iodine in the tissue and in the left ventricle is extracted as a function of time and is fit to a compartmental model for physiologic parameter estimation. The reproducibility of the physiologic parameters depends on (1) the image-sampling rate (according to our simulations, 5-second sampling is required for CA injection rates of 1.0 ml/min) and (2) how well the compartmental model reflects the real tissue function, without which the results are not meaningful. To verify these limits, a functional CT study was carried out in a group of 3 mice. Dynamic CT scans were performed on all the mice with 0.5 ml/min, 1 ml/min and 2 ml/min CA injection rates. The physiologic parameters were extracted using 4-parameter and 6-parameter two-compartmental models (2CM). Single-factor ANOVA did not indicate a significant difference in perfusion in the kidneys for the different injection rates. The physiologic parameters obtained using the 6-parameter 2CM were in line with literature values, and the 6-parameter model significantly improved the chi-square goodness of fit in two cases.
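As a rough illustration of the compartmental-fitting step described above, the sketch below fits a simplified one-tissue compartment model to a synthetic, noise-free enhancement curve. The model, the input function, and the parameter names `K1`/`k2` are illustrative assumptions, not the 4- or 6-parameter 2CM used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def input_fn(t):
    # hypothetical left-ventricle iodine concentration curve (gamma-variate-like)
    return t * np.exp(-t / 2.0)

def tissue_curve(t, K1, k2):
    # one-tissue compartment model: dC_t/dt = K1*C_a(t) - k2*C_t(t),
    # solved as the discrete convolution C_t = K1 * (C_a * exp(-k2 t)) dt
    dt = t[1] - t[0]
    ca = input_fn(t)
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(ca, kernel)[: len(t)] * dt

t = np.arange(0.0, 20.0, 0.1)   # 0.1-min sampling grid
true_params = (0.3, 0.15)       # ground-truth K1 [1/min], k2 [1/min]
data = tissue_curve(t, *true_params)

est, _ = curve_fit(tissue_curve, t, data, p0=(0.1, 0.1))
print(est)  # should recover approximately (0.3, 0.15)
```

On noise-free synthetic data the fit recovers the generating parameters; the abstract's point is that with real, sparsely sampled data the recovered values depend strongly on the sampling rate and model structure.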
Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang
2014-01-01
The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation-intensive, especially when there are many species in a metagenomic sample and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computational analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of metagenomic samples in a high-throughput manner and on a large scale.
Training Older Adults to Use Tablet Computers: Does It Enhance Cognitive Function?
Chan, Micaela Y.; Haber, Sara; Drew, Linda M.; Park, Denise C.
2016-01-01
Purpose of the Study: Recent evidence shows that engaging in learning new skills improves episodic memory in older adults. In this study, older adults who were computer novices were trained to use a tablet computer and associated software applications. We hypothesize that sustained engagement in this mentally challenging training would yield a dual benefit of improved cognition and enhancement of everyday function by introducing useful skills. Design and Methods: A total of 54 older adults (age 60-90) committed 15 hr/week for 3 months. Eighteen participants received extensive iPad training, learning a broad range of practical applications. The iPad group was compared with 2 separate controls: a Placebo group that engaged in passive tasks requiring little new learning; and a Social group that had regular social interaction, but no active skill acquisition. All participants completed the same cognitive battery pre- and post-engagement. Results: Compared with both controls, the iPad group showed greater improvements in episodic memory and processing speed but did not differ in mental control or visuospatial processing. Implications: iPad training improved cognition relative to engaging in social or nonchallenging activities. Mastering relevant technological devices has the added advantage of providing older adults with technological skills useful in facilitating everyday activities (e.g., banking). This work informs the selection of targeted activities for future interventions and community programs. PMID:24928557
Computational modeling of heterogeneity and function of CD4+ T cells
Carbo, Adria; Hontecillas, Raquel; Andrew, Tricity; Eden, Kristin; Mei, Yongguo; Hoops, Stefan; Bassaganya-Riera, Josep
2014-01-01
The immune system is composed of many different cell types and hundreds of intersecting molecular pathways and signals. This large biological complexity requires coordination between distinct pro-inflammatory and regulatory cell subsets to respond to infection while maintaining tissue homeostasis. CD4+ T cells play a central role in orchestrating immune responses and in maintaining a balance between pro- and anti- inflammatory responses. This tight balance between regulatory and effector reactions depends on the ability of CD4+ T cells to modulate distinct pathways within large molecular networks, since dysregulated CD4+ T cell responses may result in chronic inflammatory and autoimmune diseases. The CD4+ T cell differentiation process comprises an intricate interplay between cytokines, their receptors, adaptor molecules, signaling cascades and transcription factors that help delineate cell fate and function. Computational modeling can help to describe, simulate, analyze, and predict some of the behaviors in this complicated differentiation network. This review provides a comprehensive overview of existing computational immunology methods as well as novel strategies used to model immune responses with a particular focus on CD4+ T cell differentiation. PMID:25364738
Computation of the response functions of spiral waves in active media.
Biktasheva, I V; Barkley, D; Biktashev, V N; Bordyugov, G V; Foulkes, A J
2009-05-01
Rotating spiral waves are a form of self-organization observed in spatially extended systems of physical, chemical, and biological natures. A small perturbation causes a gradual change in the spatial location of the spiral's rotation center and frequency, i.e., drift. The response functions (RFs) of a spiral wave are the eigenfunctions of the adjoint linearized operator corresponding to the critical eigenvalues λ = 0, ±iω. The RFs describe the spiral's sensitivity to small perturbations, in the sense that a spiral is insensitive to small perturbations applied where its RFs are close to zero. The velocity of a spiral's drift is proportional to the convolution of the RFs with the perturbation. Here we develop a regular and generic method of computing the RFs of stationary rotating spirals in reaction-diffusion equations. We demonstrate the method on the FitzHugh-Nagumo system and also show convergence of the method with respect to the computational parameters, i.e., discretization steps and size of the medium. The obtained RFs are localized at the spiral's core.
Snyder, Abigail C.; Jiao, Yu
2010-10-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can give scientists the opportunity to more effectively analyze all experimental data.
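The idea of building a 4-D integrator from nested 1-D rules can be sketched compactly. The sketch below uses a tensor product of Gauss-Legendre nodes as a stand-in for the modified GSL solvers described in the abstract; the function name and the smooth test integrand are illustrative assumptions:

```python
import numpy as np

def integrate_4d(f, bounds, n=16):
    """Tensor-product Gauss-Legendre quadrature for a 4-D integral:
    a 1-D rule applied along each axis, combined into a 4-D weight grid."""
    xs, ws = [], []
    for a, b in bounds:
        x, w = np.polynomial.legendre.leggauss(n)
        xs.append(0.5 * (b - a) * x + 0.5 * (b + a))  # map [-1, 1] -> [a, b]
        ws.append(0.5 * (b - a) * w)
    X = np.meshgrid(*xs, indexing="ij")               # 4-D coordinate grids
    W = np.einsum("i,j,k,l->ijkl", *ws)               # outer product of weights
    return np.sum(W * f(*X))

# smooth test integrand: integral of exp(-(x^2+y^2+z^2+t^2)) over [0, 1]^4,
# whose exact value is (sqrt(pi)/2 * erf(1))^4
val = integrate_4d(lambda x, y, z, t: np.exp(-(x**2 + y**2 + z**2 + t**2)),
                   [(0.0, 1.0)] * 4)
```

The cost grows as n^4, which is exactly why the abstract compares against quasi-Monte Carlo methods and pursues a parallelized version for production use.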
Magesh, R; George Priya Doss, C
2014-12-01
Ornithine transcarbamylase (OTC) (E.C. 2.1.3.3) is one of the enzymes of the urea cycle, which involves a sequence of reactions in liver cells. During protein assimilation, surplus nitrogen is produced in the body; this free nitrogen is converted into urea and expelled from the body by the kidneys, and in this cycle OTC helps in the conversion of free toxic nitrogen into urea. Ornithine transcarbamylase deficiency (OTCD: OMIM#311250) is triggered by mutation in this OTC gene. To date more than 200 mutations have been noted. A mutation in the OTC gene can alter enzyme production, which impairs the ability to carry out the chemical reaction. A computational analysis was initiated to identify the deleterious nsSNPs in the OTC gene that cause OTCD, using five different computational tools: SIFT, PolyPhen 2, I-Mutant 3, SNPs&Go, and PhD-SNP. Studies on the molecular basis of the OTC gene and OTCD have so far been only partial. Hence, in silico categorization of functional SNPs in the OTC gene can provide valuable insight in the near future for the diagnosis and treatment of OTCD.
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate, with previous work, that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
Aguilera-Pesantes, Daniel; Méndez, Miguel A
2017-02-08
While Zika virus (ZIKV) outbreaks are a growing concern for global health, a deep understanding of the virus is lacking. Here we report a contribution to the basic science on the virus: a detailed computational analysis of the nonstructural protein NS2b. This protein acts as a cofactor for the NS3 protease (NS3Pro) domain, which is important in the viral life cycle and is an interesting target for drug development. We found that the ZIKV NS2b cofactor is highly similar to those of other viruses within the Flavivirus genus, especially West Nile virus, suggesting that it is completely necessary for the protease complex activity. Furthermore, ZIKV NS2b plays an important role in the function and stability of the ZIKV NS3 protease domain even though it presents a low conservation score. In addition, ZIKV NS2b is mostly rigid, which could imply a non-dynamic nature in substrate recognition. Finally, by performing a computational alanine scanning mutagenesis, we found that residues Gly 52 and Asp 83 in NS2b could be important in substrate recognition.
Smallwood, D.O.
1995-08-07
It is shown that the usual method for computing the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross spectral density matrix of the inputs and outputs. The modified form of Cholesky decomposition used is G_zz = LCL′, where G is the cross spectral density matrix of inputs and outputs, L is a lower triangular matrix with ones on the diagonal, C is a diagonal matrix, and the symbol ′ denotes the conjugate transpose. If a diagonal element of C is zero, the off-diagonal elements in the corresponding column of L are set to zero. It is shown that the results can be equivalently obtained using singular value decomposition (SVD) of G_zz. The formulation as an SVD problem suggests a way to order the inputs when a natural physical order of the inputs is absent.
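The modified Cholesky factorization described above (unit-diagonal L, diagonal C, with zeroing of columns whose pivot vanishes) can be sketched numerically. This is a minimal NumPy sketch assuming a Hermitian positive semi-definite input; the random matrix stands in for a measured cross-spectral density matrix:

```python
import numpy as np

def ldl_hermitian(G):
    """Modified Cholesky factorization G = L C L^H with unit-diagonal lower
    triangular L and real diagonal C. If a pivot C[j] is (numerically) zero,
    the corresponding column of L below the diagonal is set to zero."""
    n = G.shape[0]
    L = np.eye(n, dtype=complex)
    C = np.zeros(n)
    for j in range(n):
        C[j] = (G[j, j] - np.sum(np.abs(L[j, :j]) ** 2 * C[:j])).real
        if np.isclose(C[j], 0.0):
            C[j] = 0.0
            L[j + 1:, j] = 0.0          # zero pivot -> zero column, per the paper
            continue
        for i in range(j + 1, n):
            L[i, j] = (G[i, j]
                       - np.sum(L[i, :j] * np.conj(L[j, :j]) * C[:j])) / C[j]
    return L, C

# check on a random Hermitian positive-definite matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
G = A @ A.conj().T
L, C = ldl_hermitian(G)
assert np.allclose(L @ np.diag(C) @ L.conj().T, G)
```

In the paper's setting G_zz would be estimated per frequency line from input/output spectra; the factorization then yields the partial coherences.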
Contreras-García, J; Pendás, A Martín; Recio, J M; Silvi, B
2009-01-13
We present a novel computational procedure, general, automated, and robust, for the analysis of local and global properties of the electron localization function (ELF) in crystalline solids. Our algorithm successfully addresses the two main shortcomings of ELF analysis in crystals: (i) the automated identification and characterization of the ELF-induced topology in periodic systems, which is impeded by the great number and concentration of critical points in crystalline cells, and (ii) the localization of the zero-flux surfaces and subsequent integration of basins, whose difficulty is due to the diverse (on many occasions very flat or very steep) ELF profiles connecting the set of critical points. Application of the new code to representative crystals exhibiting different bonding patterns is carried out in order to show the performance of the algorithm and the conceptual possibilities offered by the complete characterization of the ELF topology in solids.
Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; Batra, Arunabh; Venkataraman, Latha; Negre, Christian F. A.; Brudvig, Gary W.; Crabtree, Robert H.; Schmuttenmaer, Charles A.; Batista, Victor S.
2015-11-03
Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.
de Almeida, Licurgo; Reiner, Seungdo J; Ennis, Matthew; Linster, Christiane
2015-01-01
Noradrenergic modulation from the locus coeruleus is often associated with the regulation of sensory signal-to-noise ratio. In the olfactory system, noradrenergic modulation affects both bulbar and cortical processing, and has been shown to modulate the detection of low concentration stimuli. Here we implemented a computational model of the olfactory bulb and piriform cortex, based on known experimental results, to explore how noradrenergic modulation in the olfactory bulb and piriform cortex interact to regulate odor processing. We show that, as predicted by behavioral experiments in our lab, norepinephrine can play a critical role in modulating the detection and associative learning of very low odor concentrations. Our simulations show that bulbar norepinephrine serves to pre-process odor representations to facilitate cortical learning, but not recall. We observe the typical non-uniform dose-response functions described for norepinephrine modulation and show that these are imposed mainly by bulbar, but not cortical, processing.
2016-01-01
Covering: 2003 to 2016. The last decade has seen the first major discoveries regarding the genomic basis of plant natural product biosynthetic pathways. Four key computationally driven strategies have been developed to identify such pathways, which make use of physical clustering, co-expression, evolutionary co-occurrence and epigenomic co-regulation of the genes involved in producing a plant natural product. Here, we discuss how these approaches can be used for the discovery of plant biosynthetic pathways encoded by both chromosomally clustered and non-clustered genes. Additionally, we will discuss opportunities to prioritize plant gene clusters for experimental characterization, and end with a forward-looking perspective on how synthetic biology technologies will allow effective functional reconstitution of candidate pathways using a variety of genetic systems. PMID:27321668
Localization of functional adrenal tumors by computed tomography and venous sampling
Dunnick, N.R.; Doppman, J.L.; Gill, J.R. Jr.; Strott, C.A.; Keiser, H.R.; Brennan, M.F.
1982-02-01
Fifty-eight patients with functional lesions of the adrenal glands underwent radiographic evaluation. Twenty-eight patients had primary aldosteronism (Conn syndrome), 20 had Cushing syndrome, and 10 had pheochromocytoma. Computed tomography (CT) correctly identified adrenal tumors in 11 (61%) of 18 patients with aldosteronomas, 6 of 6 patients with benign cortisol-producing adrenal tumors, and 5 (83%) of 6 patients with pheochromocytomas. No false-positive diagnoses were encountered among patients with adrenal adenomas. Bilateral adrenal hyperplasia appeared on CT scans as normal or prominent adrenal glands with a normal configuration; however, CT was not able to exclude the presence of small adenomas. Adrenal venous sampling was correct in each case, and reliably distinguished adrenal tumors from hyperplasia. Recurrent pheochromocytomas were the most difficult to localize on CT due to the surgical changes in the region of the adrenals and the frequent extra-adrenal locations.
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
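One illustrative reading of the NP Windows idea in 1D with linear interpolation: each pair of adjacent samples defines a linear segment y(t) = (1-t)a + tb with t uniform on [0, 1], whose density is uniform on [min(a, b), max(a, b)]; the PDF estimate averages these per-segment densities. The sketch below follows that reading, not the authors' exact formulae:

```python
import numpy as np

def np_windows_1d(samples, grid):
    """Sketch of a 1-D NP-Windows-style PDF estimate for linear interpolation:
    average, over segments between adjacent samples, of the uniform density
    each segment induces on its value range."""
    pdf = np.zeros_like(grid, dtype=float)
    segs = 0
    for a, b in zip(samples[:-1], samples[1:]):
        lo, hi = min(a, b), max(a, b)
        if hi > lo:
            # uniform density 1/(hi - lo) on [lo, hi]
            pdf += ((grid >= lo) & (grid <= hi)) / (hi - lo)
            segs += 1
    return pdf / max(segs, 1)

sig = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))   # densely sampled sinusoid
grid = np.linspace(-1.0, 1.0, 101)
est = np_windows_1d(sig, grid)
# the estimate peaks near ±1, echoing the arcsine density of a sinusoid
```

Note how few samples are needed relative to a histogram, which is the property the abstract emphasizes; the paper's contribution is a simplified, frame-of-reference-based computation of such estimates in 1D, 2D, and 3D.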
Computing frequency by using generalized zero-crossing applied to intrinsic mode functions
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2006-01-01
This invention presents a method for computing Instantaneous Frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct, local, and also the most accurate in the mean. Furthermore, this approach will also give a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency, localized down to a quarter of a wave period, is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which they apply. Through Extrema Sifting, instead of the cubic spline fitting, this invention constructs the upper envelope and the lower envelope by connecting local maxima points and local minima points of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from a signal under consideration.
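A toy version of the zero-crossing part of the idea can be sketched as follows. This uses up-crossings only; the patented GZC method also combines down-crossings and extrema to localize frequency down to quarter periods, so the function name and simplification here are illustrative:

```python
import numpy as np

def zero_crossing_freq(x, fs):
    """Estimate mean frequency from successive upward zero crossings:
    the samples between two up-crossings span one full period.
    (Simplified sketch; GZC also uses down-crossings and extrema.)"""
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]  # indices of up-crossings
    if len(up) < 2:
        return 0.0
    periods = np.diff(up) / fs                     # crossing spacing in seconds
    return 1.0 / np.mean(periods)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 7.0 * t)                  # 7 Hz test tone
print(zero_crossing_freq(x, fs))                   # ≈ 7.0
```

In the patent's pipeline this kind of period measurement is applied to each IMF produced by Empirical Mode Decomposition, so that each estimate reflects a single oscillatory mode.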
On the Exact Evaluation of Certain Instances of the Potts Partition Function by Quantum Computers
NASA Astrophysics Data System (ADS)
Geraci, Joseph; Lidar, Daniel A.
2008-05-01
We present an efficient quantum algorithm for the exact evaluation of either the fully ferromagnetic or anti-ferromagnetic q-state Potts partition function Z for a family of graphs related to irreducible cyclic codes. This problem is related to the evaluation of the Jones and Tutte polynomials. We consider the connection between the weight enumerator polynomial from coding theory and Z and exploit the fact that there exists a quantum algorithm for efficiently estimating Gauss sums in order to obtain the weight enumerator for a certain class of linear codes. In this way we demonstrate that for a certain class of sparse graphs, which we call Irreducible Cyclic Cocycle Code (ICCCɛ) graphs, quantum computers provide a polynomial speed up in the difference between the number of edges and vertices of the graph, and an exponential speed up in q, over the best classical algorithms known to date.
Ding, Wendu; Koepf, Matthieu; Koenigsmann, Christopher; ...
2015-11-03
Here, we report a systematic computational search of molecular frameworks for intrinsic rectification of electron transport. The screening of molecular rectifiers includes 52 molecules and conformers spanning over 9 series of structural motifs. N-Phenylbenzamide is found to be a promising framework with both suitable conductance and rectification properties. A targeted screening performed on 30 additional derivatives and conformers of N-phenylbenzamide yielded enhanced rectification based on asymmetric functionalization. We demonstrate that electron-donating substituent groups that maintain an asymmetric distribution of charge in the dominant transport channel (e.g., HOMO) enhance rectification by raising the channel closer to the Fermi level. These findings are particularly valuable for the design of molecular assemblies that could ensure directionality of electron transport in a wide range of applications, from molecular electronics to catalytic reactions.
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.
Zhou, Peng; Yang, Chao; Ren, Yanrong; Wang, Congcong; Tian, Feifei
2013-12-01
Peptides with antihypertensive potency have long been attractive to the medical and food communities. However, serving as food additives rather than therapeutic agents, peptides should also have a good taste. In the present study, we explore the intrinsic relationship between angiotensin I-converting enzyme (ACE) inhibition and the bitterness of short peptides in the framework of computational peptidology, attempting to identify appropriate properties for functional food peptides with satisfactory bioactivities. As might be expected, quantitative structure-activity relationship modeling reveals a significant positive correlation between the ACE inhibition and bitterness of dipeptides, but this correlation is quite modest for tripeptides and, particularly, tetrapeptides. Moreover, quantum mechanics/molecular mechanics analysis of the structural basis and energetic profile of ACE-peptide complexes reveals that peptides of up to 4 amino acids in length are sufficient for efficient binding to ACE, and additional residues do not substantially enhance their ACE-binding affinity and, thus, antihypertensive capability. Taken together, these results suggest that tripeptides and tetrapeptides could be considered ideal candidates in the search for potential functional food additives with both high antihypertensive activity and low bitterness.
Watanabe, H; Honda, E; Kurabayashi, T
2010-01-01
Objectives The aim was to investigate the possibility of evaluating the modulation transfer function (MTF) of cone beam CT (CBCT) for dental use using the oversampling method. Methods The CBCT apparatus (3D Accuitomo) with an image intensifier was used with a 100 μm tungsten wire placed inside the scanner at a slight angle to the plane perpendicular to the plane of interest and scanned. 200 contiguous reconstructed images were used to obtain the oversampling line-spread function (LSF). The MTF curve was obtained by computing the Fourier transformation from the oversampled LSF. Line pair tests were also performed using Catphan®. Results The oversampling method provided smooth and reproducible MTF curves. The MTF curves revealed that the spatial resolution in the z-axis direction was significantly higher than that in the axial direction. This result was also confirmed by the line pair test. Conclusions MTF analysis was performed successfully using the oversampling method. In addition, this study clarified that the 3D Accuitomo had high spatial resolution, especially in the z-axis direction. PMID:20089741
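The computation described, Fourier transforming an oversampled LSF to obtain the MTF, can be sketched as follows (illustrative code with a synthetic Gaussian LSF; the function names, 0.01 mm sampling pitch, and blur width are our own assumptions, not the study's parameters):

```python
import numpy as np

def mtf_from_lsf(lsf, pitch_mm):
    """MTF = normalized magnitude of the Fourier transform of the LSF."""
    lsf = lsf - lsf.min()              # crude baseline (offset) removal
    lsf = lsf / lsf.sum()              # unit area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pitch_mm)  # cycles/mm
    return freqs, mtf / mtf[0]

# synthetic oversampled LSF: Gaussian blur, 0.01 mm effective sampling pitch
x = np.arange(-2.0, 2.0, 0.01)                 # mm
lsf = np.exp(-0.5 * (x / 0.15) ** 2)
freqs, mtf = mtf_from_lsf(lsf, 0.01)
```

The oversampling trick (the tilted wire and many contiguous slices) only serves to synthesize an LSF sampled finer than the pixel pitch; once that profile exists, the MTF computation itself is this single transform.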
NASA Astrophysics Data System (ADS)
Lopez-Encarnacion, Juan M.
2016-06-01
In this talk, the power and synergy of combining experimental measurements with density functional theory computations as a single tool to unambiguously characterize the molecular structure of complex atomic systems are shown. Here, we present three beautiful cases where experiment and theory are in very good agreement, for both finite and extended systems: 1) Characterizing Metal Coordination Environments in Porous Organic Polymers: A Joint Density Functional Theory and Experimental Infrared Spectroscopy Study; 2) Characterization of Rhenium Compounds Obtained by Electrochemical Synthesis After Aging Process; and 3) Infrared Study of H(D)2 + Co4+ Chemical Reaction: Characterizing Molecular Structures. J.M. López-Encarnación, K.K. Tanabe, M.J.A. Johnson, J. Jellinek, Chemistry-A European Journal 19 (41), 13646-13651; A. Vargas-Uscategui, E. Mosquera, J.M. López-Encarnación, B. Chornik, R. S. Katiyar, L. Cifuentes, Journal of Solid State Chemistry 220, 17-21
Brain-Computer Interface Controlled Functional Electrical Stimulation System for Ankle Movement
2011-01-01
Background Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. Methods A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Results Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77), with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. Conclusions This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is
Ni, Pengsheng; McDonough, Christine M.; Jette, Alan M.; Bogusz, Kara; Marfeo, Elizabeth E.; Rasch, Elizabeth K.; Brandt, Diane E.; Meterko, Mark; Chan, Leighton
2014-01-01
Objectives To develop and test an instrument to assess physical function (PF) for Social Security Administration (SSA) disability programs, the SSA-PF. Item Response Theory (IRT) analyses were used to 1) create a calibrated item bank for each of the factors identified in prior factor analyses, 2) assess the fit of the items within each scale, 3) develop separate Computer-Adaptive Test (CAT) instruments for each scale, and 4) conduct initial psychometric testing. Design Cross-sectional data collection; IRT analyses; CAT simulation. Setting Telephone and internet survey. Participants Two samples: 1,017 SSA claimants, and 999 adults from the US general population. Interventions None. Main Outcome Measures Model fit statistics, correlation and reliability coefficients. Results IRT analyses resulted in five unidimensional SSA-PF scales: Changing & Maintaining Body Position, Whole Body Mobility, Upper Body Function, Upper Extremity Fine Motor, and Wheelchair Mobility, for a total of 102 items. High CAT accuracy was demonstrated by strong correlations between simulated CAT scores and those from the full item banks. Comparing the simulated CATs to the full item banks, very little loss of reliability or precision was noted, except at the lower and upper ranges of each scale. No difference in response patterns by age or sex was noted. The distributions of claimant scores were shifted to the lower end of each scale compared to those of a sample of US adults. Conclusions The SSA-PF instrument contributes important new methodology for measuring the physical function of adults applying to the SSA disability programs. Initial evaluation revealed that the SSA-PF instrument achieved considerable breadth of coverage in each content domain and demonstrated noteworthy psychometric properties. PMID:23578594
Indices of cognitive function measured in rugby union players using a computer-based test battery.
MacDonald, Luke A; Minahan, Clare L
2016-09-01
The purpose of this study was to investigate the intra- and inter-day reliability of cognitive performance using a computer-based test battery in team-sport athletes. Eighteen elite male rugby union players (age: 19 ± 0.5 years) performed three experimental trials (T1, T2 and T3) of the test battery: T1 and T2 on the same day, and T3 on the following day, 24 h later. The test battery comprised four cognitive tests assessing the cognitive domains of executive function (Groton Maze Learning Task), psychomotor function (Detection Task), vigilance (Identification Task), and visual learning and memory (One Card Learning Task). The intraclass correlation coefficients (ICCs) for the Detection Task, the Identification Task and the One Card Learning Task performance variables ranged from 0.75 to 0.92 when comparing T1 to T2 to assess intra-day reliability, and 0.76 to 0.83 when comparing T1 and T3 to assess inter-day reliability. The ICCs for the Groton Maze Learning Task intra- and inter-day reliability were 0.67 and 0.57, respectively. We concluded that the Detection Task, the Identification Task and the One Card Learning Task are reliable measures of psychomotor function, vigilance, visual learning and memory in rugby union players. The reliability of the Groton Maze Learning Task is questionable (mean coefficient of variation (CV) = 19.4%) and, therefore, its results should be interpreted with caution.
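The ICCs reported above are standard ANOVA-based quantities; a minimal sketch of the single-measure consistency form, ICC(3,1), is below (the abstract does not state which ICC model the authors used, so this variant is an assumption, and the scores are made up):

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed model, consistency, single measure.
    scores: (n_subjects, k_trials) array, e.g. T1 vs T2 test scores."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # trials
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# perfectly consistent retest: a constant practice effect (+0.5 on T2)
# does not hurt consistency, so ICC(3,1) = 1
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([base, base + 0.5])
```

The consistency form deliberately ignores systematic trial-to-trial shifts (learning effects), which is one reason the choice of ICC model matters when interpreting reliability coefficients like those quoted in the abstract.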
da Silva, Silvia Maria Doria; Paschoal, Ilma Aparecida; De Capitani, Eduardo Mello; Moreira, Marcos Mello; Palhares, Luciana Campanatti; Pereira, Mônica Corso
2016-01-01
Background Computed tomography (CT) phenotypic characterization helps in understanding the clinical diversity of chronic obstructive pulmonary disease (COPD) patients, but its clinical relevance and its relationship with functional features are not clarified. Volumetric capnography (VC) uses the principle of gas washout and analyzes the pattern of CO2 elimination as a function of expired volume. The main variables analyzed were end-tidal concentration of carbon dioxide (ETCO2), Slope of phase 2 (Slp2), and Slope of phase 3 (Slp3) of capnogram, the curve which represents the total amount of CO2 eliminated by the lungs during each breath. Objective To investigate, in a group of patients with severe COPD, if the phenotypic analysis by CT could identify different subsets of patients, and if there was an association of CT findings and functional variables. Subjects and methods Sixty-five patients with COPD Gold III–IV were admitted for clinical evaluation, high-resolution CT, and functional evaluation (spirometry, 6-minute walk test [6MWT], and VC). The presence and profusion of tomography findings were evaluated, and later, the patients were identified as having emphysema (EMP) or airway disease (AWD) phenotype. EMP and AWD groups were compared; tomography findings scores were evaluated versus spirometric, 6MWT, and VC variables. Results Bronchiectasis was found in 33.8% and peribronchial thickening in 69.2% of the 65 patients. Structural findings of airways had no significant correlation with spirometric variables. Air trapping and EMP were strongly correlated with VC variables, but in opposite directions. There was some overlap between the EMP and AWD groups, but EMP patients had significantly lower body mass index, worse obstruction, and shorter walked distance on 6MWT. Concerning VC, EMP patients had significantly lower ETCO2, Slp2 and Slp3. Increases in Slp3 characterize heterogeneous involvement of the distal air spaces, as in AWD. Conclusion Visual assessment and
NASA Astrophysics Data System (ADS)
Sutter, Kiplangat
This thesis illustrates the utilization of Density functional theory (DFT) in calculations of gas- and solution-phase Nuclear Magnetic Resonance (NMR) properties of light and heavy nuclei. Computing NMR properties is still a challenge, and many factors are still being explored, for instance the influence of hydrogen bonding, thermal motion, vibration, rotation, and solvent effects. In the theoretical studies of the 195Pt NMR chemical shift in cisplatin and its derivatives presented in Chapters 2 and 3 of this thesis, the importance of representing solvent molecules explicitly around the Pt center in cisplatin complexes was outlined. In the same complexes, solvent effects contributed about half of the J(Pt-N) coupling constant, indicating the significance of considering the surrounding solvent molecules when elucidating NMR measurements of cisplatin binding to DNA. In Chapter 4, we explore the Spin-Orbit (SO) effects on the 29Si and 13C chemical shifts induced by surrounding metal and ligands. The unusual Ni, Pd, Pt trends in SO effects on the 29Si shift in metallasilatrane complexes X-Si-(mu-mt)4-M-Y were interpreted in terms of electronic and relativistic effects rather than structural differences between the complexes. In addition, we develop a non-linear model for predicting NMR SO effects in a series of organics bonded to heavy-nuclei halides. In Chapter 5, we extend the idea of "Chemist's orbitals" LMO analysis to the quantum chemical proton NMR computation of systems with internal resonance-assisted hydrogen bonds. Consequently, we explicitly link the NMR parameters of H-bonded systems to the intuitive picture of a chemical bond from quantum calculations. The analysis shows how NMR signatures characteristic of an H-bond can be explained by local bonding and electron delocalization concepts. One shortcoming of some of the anti-cancer agents like cisplatin is that they are toxic, and researchers are looking for
Amyotrophic lateral sclerosis progression and stability of brain-computer interface communication.
Silvoni, Stefano; Cavinato, Marianna; Volpato, Chiara; Ruf, Carolin A; Birbaumer, Niels; Piccione, Francesco
2013-09-01
Our objective was to investigate the relationship between brain-computer interface (BCI) communication skill and disease progression in amyotrophic lateral sclerosis (ALS). We also sought to assess the stability of BCI communication performance over time and whether it is related to the progression of neurological impairment before entering the locked-in state. A three-year follow-up BCI evaluation was conducted in a group of ALS patients (n = 24). For a variety of reasons, only three patients completed the three-year follow-up. BCI communication skill and disability level, assessed using the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised, were evaluated at admission and at each of the three follow-ups. Multiple non-parametric statistical methods were used to ensure reliability of the dependent variables: correlations, paired tests, and factor analysis of variance. Results demonstrated no significant relationship between BCI communication skill (BCI-CS) and disease evolution. The patients who performed the follow-up evaluations preserved their BCI-CS over time. Patients' age at admission correlated positively with the ability to achieve control over a BCI. In conclusion, disease evolution in ALS does not affect the ability to control a BCI for communication. BCI performance can be maintained in the different stages of the illness.
Supervised learning with decision tree-based methods in computational and systems biology.
Geurts, Pierre; Irrthum, Alexandre; Wehenkel, Louis
2009-12-01
At the intersection between artificial intelligence and statistics, supervised learning allows algorithms to automatically build predictive models from observations of a system. During the last twenty years, supervised learning has been a tool of choice for analyzing the ever-increasing and increasingly complex data generated in molecular biology, with successful applications in genome annotation, function prediction, and biomarker discovery. Among supervised learning methods, decision tree-based methods stand out as non-parametric methods with the unique feature of combining interpretability, efficiency, and, when used in ensembles of trees, excellent accuracy. The goal of this paper is to provide an accessible and comprehensive introduction to this class of methods. The first part of the review is devoted to an intuitive but complete description of decision tree-based methods and a discussion of their strengths and limitations with respect to other supervised learning methods. The second part provides a survey of their applications in the context of computational and systems biology.
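To make the "ensembles of trees" idea concrete, here is a minimal bagging sketch using depth-1 trees (decision stumps) on toy data; real applications would use a mature library with full-depth trees, and the data here are synthetic, not biological:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Exhaustive best single-feature threshold split (a depth-1 tree)."""
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):  # which side of the threshold predicts class 1
                err = np.mean((s * (X[:, j] - t) > 0).astype(int) != y)
                if err < best_err:
                    best, best_err = (j, t, s), err
    return best

def predict_stump(stump, X):
    j, t, s = stump
    return (s * (X[:, j] - t) > 0).astype(int)

def bagged_stumps(X, y, n_trees=25):
    """Bagging: fit each stump on a bootstrap resample of the data."""
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), size=len(y))
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_bagged(stumps, X):
    """Majority vote over the ensemble."""
    votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)

# toy data: 80 samples, 5 features, class fully determined by feature 0
X = rng.normal(size=(80, 5))
y = (X[:, 0] > 0).astype(int)
forest = bagged_stumps(X, y)
train_acc = np.mean(predict_bagged(forest, X) == y)
```

The sketch also shows where the interpretability claim comes from: each fitted stump is a readable rule (feature, threshold, direction), and inspecting which features the ensemble selects is a crude form of the variable-importance measures used in biology.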
Response functions for computing absorbed dose to skeletal tissues from neutron irradiation.
Bahadori, Amir A; Johnson, Perry; Jokisch, Derek W; Eckerman, Keith F; Bolch, Wesley E
2011-11-07
Spongiosa in the adult human skeleton consists of three tissues: active marrow (AM), inactive marrow (IM), and trabecularized mineral bone (TB). AM is considered to be the target tissue for assessment of both long-term leukemia risk and acute marrow toxicity following radiation exposure. The total shallow marrow (TM(50)), defined as all tissues lying within the first 50 µm of the bone surfaces, is considered to be the radiation target tissue of relevance for radiogenic bone cancer induction. For irradiation by sources external to the body, kerma to homogeneous spongiosa has been used as a surrogate for absorbed dose to both of these tissues, as direct dose calculations are not possible using computational phantoms with homogenized spongiosa. Recent micro-CT imaging of a 40 year old male cadaver has allowed for the accurate modeling of the fine microscopic structure of spongiosa in many regions of the adult skeleton (Hough et al 2011 Phys. Med. Biol. 56 2309-46). This microstructure, along with associated masses and tissue compositions, was used to compute specific absorbed fraction (SAF) values for protons originating in axial and appendicular bone sites (Jokisch et al 2011 Phys. Med. Biol. 56 6857-72). These proton SAFs, bone masses, tissue compositions and proton production cross sections were subsequently used to construct neutron dose-response functions (DRFs) for both AM and TM(50) targets in each bone of the reference adult male. Kerma conditions were assumed for other resultant charged particles. For comparison, AM, TM(50) and spongiosa kerma coefficients were also calculated. At low incident neutron energies, AM kerma coefficients for neutrons correlate well with values of the AM DRF, while total marrow (TM) kerma coefficients correlate well with values of the TM(50) DRF. At high incident neutron energies, all kerma coefficients and DRFs tend to converge as charged-particle equilibrium is established across the bone site. In the range of 10 eV to 100 Me
Evaluation of pulmonary function using single-breath-hold dual-energy computed tomography with xenon
Kyoyama, Hiroyuki; Hirata, Yusuke; Kikuchi, Satoshi; Sakai, Kosuke; Saito, Yuriko; Mikami, Shintaro; Moriyama, Gaku; Yanagita, Hisami; Watanabe, Wataru; Otani, Katharina; Honda, Norinari; Uematsu, Kazutsugu
2017-01-01
Xenon-enhanced dual-energy computed tomography (xenon-enhanced CT) can provide lung ventilation maps that may be useful for assessing structural and functional abnormalities of the lung. Xenon-enhanced CT has been performed using a multiple-breath-hold technique during xenon washout. We recently developed xenon-enhanced CT using a single-breath-hold technique to assess ventilation. We sought to evaluate whether xenon-enhanced CT using a single-breath-hold technique correlates with pulmonary function testing (PFT) results. Twenty-six patients, including 11 chronic obstructive pulmonary disease (COPD) patients, underwent xenon-enhanced CT and PFT. Three of the COPD patients underwent xenon-enhanced CT before and after bronchodilator treatment. Images from xenon-CT were obtained by dual-source CT during a breath-hold after a single vital-capacity inspiration of a xenon–oxygen gas mixture. Image postprocessing by 3-material decomposition generated conventional CT and xenon-enhanced images. Low-attenuation areas on xenon images matched low-attenuation areas on conventional CT in 21 cases but matched normal-attenuation areas in 5 cases. Volumes of Hounsfield unit (HU) histograms of xenon images correlated moderately and highly with vital capacity (VC) and total lung capacity (TLC), respectively (r = 0.68 and 0.85). Means and modes of histograms weakly correlated with VC (r = 0.39 and 0.38), moderately with forced expiratory volume in 1 second (FEV1) (r = 0.59 and 0.56), weakly with the ratio of FEV1 to FVC (r = 0.46 and 0.42), and moderately with the ratio of FEV1 to its predicted value (r = 0.64 and 0.60). Mode and volume of histograms increased in 2 COPD patients after the improvement of FEV1 with bronchodilators. Inhalation of xenon gas caused no adverse effects. Xenon-enhanced CT using a single-breath-hold technique depicted functional abnormalities not detectable on thin-slice CT. Mode, mean, and volume of HU histograms of xenon images
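The histogram metrics referenced, the mode, mean, and volume of the Hounsfield-unit histogram of the xenon images, can be sketched generically as follows (illustrative code only: the bin count, HU window, voxel volume, and synthetic data are assumptions, not the study's values):

```python
import numpy as np

def hu_histogram_stats(hu, voxel_volume_ml, lo=-1000.0, hi=0.0):
    """Mean, mode, and volume of the HU histogram of a (xenon) image volume."""
    vals = hu[(hu >= lo) & (hu <= hi)]           # voxels inside the HU window
    counts, edges = np.histogram(vals, bins=100)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return {
        "mean": float(vals.mean()),
        "mode": float(centers[np.argmax(counts)]),  # most frequent HU bin
        "volume_ml": vals.size * voxel_volume_ml,   # histogram "volume"
    }

# synthetic volume: two lung-like HU populations plus excluded soft tissue
hu = np.concatenate([np.full(500, -800.0), np.full(300, -500.0),
                     np.full(200, 100.0)])
stats = hu_histogram_stats(hu, voxel_volume_ml=0.0005)
```

The point of the sketch is only that each reported correlate (mode, mean, volume) is a one-line summary of the same histogram, so they can all be extracted in a single pass over the segmented voxels.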
2007-07-01
functions for the TRIPS compiler. The experiments were executed on the Rose-Hulman Institute of Technology Beowulf cluster. The primary metric...parameter for this benchmark • Implemented a parallel version of Finch on a Beowulf cluster using the Message Passing Interface (MPI) • Completed a 17... Beowulf Linux cluster (brain.rose-hulman.edu). However, the Beowulf cluster does not provide NFS and PBS, so it was also necessary to modify
Lee, Yun; Escamilla-Treviño, Luis; Dixon, Richard A.; Voit, Eberhard O.
2012-01-01
Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G) and syringyl (S) lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid. PMID:23144605
NASA Astrophysics Data System (ADS)
Troy, R. M.
2005-12-01
and functions may be integrated into a system efficiently, with minimal effort, and with an eye toward an eventual Computational Unification of the Earth Sciences. A fundamental to such systems is meta-data which describe not only the content of data but also how intricate relationships are represented and used to good advantage. Retrieval techniques will be discussed including trade-offs in using externally managed meta-data versus embedded meta-data, how the two may be integrated, and how "simplifying assumptions" may or may not actually be helpful. The perspectives presented in this talk or poster session are based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, which sought to unify NASA's Mission To Planet Earth's EOS-DIS, and on-going experience developed by Science Tools corporation, of which the author is a principal. NOTE: These ideas are most easily shared in the form of a talk, and we suspect that this session will generate a lot of interest. We would therefore prefer to have this session accepted as a talk as opposed to a poster session.
A Computational Model Quantifies the Effect of Anatomical Variability on Velopharyngeal Function
Inouye, Joshua M.; Perry, Jamie L.; Lin, Kant Y.
2015-01-01
Purpose This study predicted the effects of velopharyngeal (VP) anatomical parameters on VP function to provide a greater understanding of speech mechanics and aid in the treatment of speech disorders. Method We created a computational model of the VP mechanism using dimensions obtained from magnetic resonance imaging measurements of 10 healthy adults. The model components included the levator veli palatini (LVP), the velum, and the posterior pharyngeal wall, and the simulations were based on material parameters from the literature. The outcome metrics were the VP closure force and LVP muscle activation required to achieve VP closure. Results Our average model compared favorably with experimental data from the literature. Simulations of 1,000 random anatomies reflected the large variability in closure forces observed experimentally. VP distance had the greatest effect on both outcome metrics when considering the observed anatomic variability. Other anatomical parameters were ranked by their predicted influences on the outcome metrics. Conclusions Our results support the implication that interventions for VP dysfunction that decrease anterior to posterior VP portal distance, increase velar length, and/or increase LVP cross-sectional area may be very effective. Future modeling studies will help to further our understanding of speech mechanics and optimize treatment of speech disorders. PMID:26049120
Blanchet, Marc-Frédérick; St-Onge, Karine; Lisi, Véronique; Robitaille, Julie; Hamel, Sylvie; Major, François
2014-01-01
Anti-infection drugs target vital functions of infectious agents, including their ribosome and other essential non-coding RNAs. One of the reasons infectious agents become resistant to drugs is due to mutations that eliminate drug-binding affinity while maintaining vital elements. Identifying these elements is based on the determination of viable and lethal mutants and associated structures. However, determining the structure of enough mutants at high resolution is not always possible. Here, we introduce a new computational method, MC-3DQSAR, to determine the vital elements of target RNA structure from mutagenesis and available high-resolution data. We applied the method to further characterize the structural determinants of the bacterial 23S ribosomal RNA sarcin–ricin loop (SRL), as well as those of the lead-activated and hammerhead ribozymes. The method was accurate in confirming experimentally determined essential structural elements and predicting the viability of new SRL variants, which were either observed in bacteria or validated in bacterial growth assays. Our results indicate that MC-3DQSAR could be used systematically to evaluate the drug-target potentials of any RNA sites using current high-resolution structural data. PMID:25200082
Hetzroni, Orit E; Tannous, Juman
2004-04-01
This study investigated the use of computer-based intervention for enhancing communication functions of children with autism. The software program was developed based on daily life activities in the areas of play, food, and hygiene. The following variables were investigated: delayed echolalia, immediate echolalia, irrelevant speech, relevant speech, and communicative initiations. Multiple-baseline design across settings was used to examine the effects of the exposure of five children with autism to activities in a structured and controlled simulated environment on the communication manifested in their natural environment. Results indicated that after exposure to the simulations, all children produced fewer sentences with delayed and irrelevant speech. Most of the children engaged in fewer sentences involving immediate echolalia and increased the number of communication intentions and the amount of relevant speech they produced. Results indicated that after practicing in a controlled and structured setting that provided the children with opportunities to interact in play, food, and hygiene activities, the children were able to transfer their knowledge to the natural classroom environment. Implications and future research directions are discussed.
Hu, Haixiang; Zhang, Xin; Ford, Virginia; Luo, Xiao; Qi, Erhui; Zeng, Xuefeng; Zhang, Xuejun
2016-11-14
Edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to balance the consequences of the following two cases. Operating CCOS in a large overhang condition affects the accuracy of material removal, while in a small overhang condition it achieves a more accurate performance, but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept of the 'heterocercal' tool influence function (TIF). Generated from compound motion equipment, this type of TIF can 'transfer' the material removal from the inner area to the edge, meanwhile maintaining the high accuracy and efficiency of CCOS. We call it the 'heterocercal' TIF because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the explosive power. The heterocercal TIF was theoretically analyzed and physically realized in CCOS facilities. Experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of entire surface errors in large tool-to-mirror size-ratio conditions. This improvement will largely help manufacturing efficiency in some extremely large optical system projects, like the tertiary mirror of the Thirty Meter Telescope.
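The material-removal model underlying any CCOS process, removal depth equals the TIF convolved with the dwell-time map, and the edge roll-off it produces can be illustrated in one dimension (a generic sketch with a symmetric Gaussian TIF, not the paper's heterocercal TIF):

```python
import numpy as np

def simulated_removal(tif, dwell):
    """1-D CCOS model: removal depth = convolution of the tool influence
    function (removal-rate footprint) with the dwell time at each position."""
    return np.convolve(dwell, tif, mode="same")

x = np.arange(-10, 11)                    # footprint grid, arbitrary mm units
tif = np.exp(-0.5 * (x / 3.0) ** 2)       # depth removed per unit dwell time
dwell = np.ones(200)                      # uniform dwell over the whole part
removal = simulated_removal(tif, dwell)   # edge samples see a truncated tool
```

In the interior the removal is flat because the full footprint contributes, while near the part edge only part of the footprint overlaps the surface; that truncated-overlap regime is exactly where removal prediction degrades, and shaping the TIF's edge contribution (as the heterocercal TIF does) is one way to compensate.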
Heintzen, P H; Brennecke, R; Bürsch, J H; Hahne, H J; Lange, P E; Moldenhauer, K; Onnasch, D; Radtke, W
1982-07-01
A survey of the evolution of roentgen-video-computer techniques is given which was initiated by the development of videodensitometry by Wood and his associates. Following fundamental studies of the usefulness and limitations of x-ray equipment for quantitative measurements and the applicability of the Lambert-Beer law to x-ray absorption, videodensitometry has been used experimentally and clinically for various circulatory studies and has proved to be particularly valuable for the quantitation of aortic, pulmonic, and mitral valvular regurgitation. The second offspring of these techniques, so-called videometry, uses dimensional measurements from single and biplane angiocardiograms for the assessment of size, shape, and contraction pattern of the heart chambers. Volumes of the right and left ventricles can be determined clinically with a standard error of estimate below 10%. On the basis of these studies, normal values have been derived for all age groups, and they depict geometric changes of the growing heart. Cardiac index and ejection fractions proved to be age-independent biologic constants. Finally, methods for complete digital processing of video-image sequences in an off-line and real-time mode are described which allow digital image storage and documentation, dynamic background subtraction for contrast enhancement, and intravenous angiocardiography, in addition to functional imaging by parameter extraction from a matrix of pixel densitograms. Wall thickness and motion determinations, regional flow distribution measurements, and various image-composition techniques are also feasible.
Computed versus measured ion velocity distribution functions in a Hall effect thruster
Garrigues, L.; Mazouffre, S.; Bourgeois, G.
2012-06-01
We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser-induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady-state anomalous mobility profile allows the hybrid model to capture some properties, like the time-averaged ion mean velocity. Yet, the model fails at reproducing the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates accounting for the electron dynamics over a short time scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found to be negligible.
Babkirk, Sarah; Luehring-Jones, Peter; Dennis-Tiwary, Tracy A
2016-12-01
The use of computer-mediated communication (CMC) as a form of social interaction has become increasingly prevalent, yet few studies examine individual differences that may shed light on implications of CMC for adjustment. The current study examined neurocognitive individual differences associated with preferences to use technology in relation to social-emotional outcomes. In Study 1 (N = 91), a self-report measure, the Social Media Communication Questionnaire (SMCQ), was evaluated as an assessment of preferences for communicating positive and negative emotions on a scale ranging from purely via CMC to purely face-to-face. In Study 2, SMCQ preferences were examined in relation to event-related potentials (ERPs) associated with early emotional attention capture and reactivity (the frontal N1) and later sustained emotional processing and regulation (the late positive potential (LPP)). Electroencephalography (EEG) was recorded while 22 participants passively viewed emotional and neutral pictures and completed an emotion regulation task with instructions to increase, decrease, or maintain their emotional responses. A greater preference for CMC was associated with reduced size of and satisfaction with social support, greater early (N1) attention capture by emotional stimuli, and reduced LPP amplitudes to unpleasant stimuli in the increase emotion regulatory task. These findings are discussed in the context of possible emotion- and social-regulatory functions of CMC.
Utility functions and resource management in an oversubscribed heterogeneous computing environment
Khemka, Bhavesh; Friese, Ryan; Briceno, Luis Diego; Siegel, Howard Jay; Maciejewski, Anthony A.; Koenig, Gregory A.; Groer, Christopher S.; Hilton, Marcia M.; Poole, Stephen W.; Okonski, G.; Rambharos, R.
2014-09-26
We model an oversubscribed heterogeneous computing system where tasks arrive dynamically and a scheduler maps the tasks to machines for execution. The environment and workloads are based on those being investigated by the Extreme Scale Systems Center at Oak Ridge National Laboratory. Utility functions that are designed based on specifications from the system owner and users are used to create a metric for the performance of resource allocation heuristics. Each task has a time-varying utility (importance) that the enterprise will earn based on when the task successfully completes execution. We design multiple heuristics, which include a technique to drop low utility-earning tasks, to maximize the total utility that can be earned by completing tasks. The heuristics are evaluated using simulation experiments with two levels of oversubscription. The results show the benefit of having fast heuristics that account for the importance of a task and the heterogeneity of the environment when making allocation decisions in an oversubscribed environment. Furthermore, the ability to drop low utility-earning tasks allows the heuristics to tolerate the high oversubscription as well as earn significant utility.
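A minimal sketch of the drop-and-prioritize idea described above, with an invented exponentially decaying utility curve standing in for the paper's owner-specified utility functions (the decay form, field names, and threshold are this sketch's assumptions):

```python
def utility(u0, decay, t):
    """Hypothetical time-decaying utility: value earned if a task
    completes t time units after arrival (not the paper's exact form)."""
    return u0 * (decay ** t)

def schedule(tasks, now, drop_threshold=0.1):
    """Greedy sketch: drop tasks whose remaining utility has decayed
    below a threshold, then order the rest by current utility."""
    kept = [task for task in tasks
            if utility(task["u0"], task["decay"], now - task["arrival"]) >= drop_threshold]
    kept.sort(key=lambda task: -utility(task["u0"], task["decay"], now - task["arrival"]))
    return kept
```

Because utility is time-varying, the same pool can yield a different ranking, and a different drop set, at each scheduling event.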
Helie, Sebastien; Chakravarthy, Srinivasa; Moustafa, Ahmed A
2013-12-06
Many computational models of the basal ganglia (BG) have been proposed over the past twenty-five years. While computational neuroscience models have focused on closely matching the neurobiology of the BG, computational cognitive neuroscience (CCN) models have focused on how the BG can be used to implement cognitive and motor functions. This review article focuses on CCN models of the BG and how they use the neuroanatomy of the BG to account for cognitive and motor functions such as categorization, instrumental conditioning, probabilistic learning, working memory, sequence learning, automaticity, reaching, handwriting, and eye saccades. A total of 19 BG models accounting for one or more of these functions are reviewed and compared. The review concludes with a discussion of the limitations of existing CCN models of the BG and prescriptions for future modeling, including the need for computational models of the BG that can simultaneously account for cognitive and motor functions, and the need for a more complete specification of the role of the BG in behavioral functions.
Development of the Computer-Adaptive Version of the Late-Life Function and Disability Instrument
Tian, Feng; Kopits, Ilona M.; Moed, Richard; Pardasaney, Poonam K.; Jette, Alan M.
2012-01-01
Background. Having psychometrically strong disability measures that minimize response burden is important in assessing older adults. Methods. Using the original 48 items from the Late-Life Function and Disability Instrument and newly developed items, a 158-item Activity Limitation and a 62-item Participation Restriction item pool were developed. The item pools were administered to a convenience sample of 520 community-dwelling adults 60 years or older. Confirmatory factor analysis and item response theory were employed to identify content structure, calibrate items, and build the computer-adaptive tests (CATs). We evaluated real-data simulations of 10-item CAT subscales. We collected data from 102 older adults to validate the 10-item CATs against the Veteran’s Short Form-36 and assessed test–retest reliability in a subsample of 57 subjects. Results. Confirmatory factor analysis revealed a bifactor structure, and multi-dimensional item response theory was used to calibrate an overall Activity Limitation Scale (141 items) and an overall Participation Restriction Scale (55 items). Fit statistics were acceptable (Activity Limitation: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error approximation = 0.03; Participation Restriction: comparative fit index = 0.95, Tucker Lewis Index = 0.95, root mean square error approximation = 0.05). Correlations of the 10-item CATs with the full item banks were substantial (Activity Limitation: r = .90; Participation Restriction: r = .95). Test–retest reliability estimates were high (Activity Limitation: r = .85; Participation Restriction: r = .80). Strength and pattern of correlations with Veteran’s Short Form-36 subscales were as hypothesized. Each CAT, on average, took 3.56 minutes to administer. Conclusions. The Late-Life Function and Disability Instrument CATs demonstrated strong reliability, validity, accuracy, and precision. The Late-Life Function and Disability Instrument CAT can achieve
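The CAT principle behind these subscales, selecting the next item that is most informative at the current ability estimate, can be sketched with a unidimensional two-parameter logistic (2PL) model. This is a simplification for illustration only; the instrument itself was calibrated with multi-dimensional item response theory, and the parameter values below are invented:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    at ability theta: I = a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """CAT step: pick the unadministered item (index) that is most
    informative at the current ability estimate theta."""
    pool = [i for i in range(len(items)) if i not in administered]
    return max(pool, key=lambda i: item_information(theta, *items[i]))
```

Information peaks when an item's difficulty matches the examinee's ability, which is why a 10-item CAT can approach the precision of a full item bank.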
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
ERIC Educational Resources Information Center
Lagrange, Jean-Baptiste; Psycharis, Giorgos
2014-01-01
The general goal of this paper is to explore the potential of computer environments for the teaching and learning of functions. To address this, different theoretical frameworks and corresponding research traditions are available. In this study, we aim to network different frameworks by following a "double analysis" method to analyse two…
Technology Transfer Automated Retrieval System (TEKTRAN)
High resolution x-ray computed tomography (HRCT) is a non-destructive diagnostic imaging technique with sub-micron resolution capability that is now being used to evaluate the structure and function of plant xylem network in three dimensions (3D). HRCT imaging is based on the same principles as medi...
Hansen, Randy R.; Bass, Robert B.; Kouzes, Richard T.; Mileson, Nicholas D.
2003-01-20
This paper provides a brief overview of the implementation of the Advanced Encryption Standard (AES) as a hash function for confirming the identity of software resident on a computer system. The PNNL Software Authentication team chose to use a hash function to confirm software identity on a system for situations where: (1) there is limited time to perform the confirmation and (2) access to the system is restricted to keyboard or thumbwheel input and output can only be displayed on a monitor. PNNL reviewed three popular algorithms: the Secure Hash Algorithm 1 (SHA-1), the Message Digest 5 (MD-5), and the Advanced Encryption Standard (AES), and selected the AES to incorporate in the software confirmation tool we developed. This paper gives a brief overview of the SHA-1, MD-5, and the AES and cites references for further detail. It then explains the overall processing steps of the AES to reduce a large amount of generic data (the plain text, such as is present in memory and other data storage media in a computer system) to a small amount of data (the hash digest), which is a mathematically unique representation or signature of the former that could be displayed on a computer's monitor. This paper starts with a simple definition and example to illustrate the use of a hash function. It concludes with a description of how the software confirmation tool uses the hash function to confirm the identity of software on a computer system.
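The confirmation workflow can be sketched in a few lines. Here hashlib's SHA-1 stands in for the AES-based construction the team built, an assumption of this sketch, since Python's standard library offers no AES hash mode; the structure (reduce data to a short displayable digest, then compare against a trusted value) is the same:

```python
import hashlib

def software_digest(data: bytes) -> str:
    """Reduce an arbitrary amount of data to a short, displayable digest.
    SHA-1 is used here only as a stand-in for the paper's AES-based hash."""
    return hashlib.sha1(data).hexdigest()

def confirm_identity(data: bytes, expected_digest: str) -> bool:
    """An operator compares the digest shown on the monitor against a
    trusted reference value recorded for the authentic software."""
    return software_digest(data) == expected_digest
```

Any single-bit change in the software image changes the digest, so a matching digest confirms identity to the strength of the underlying hash.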
ERIC Educational Resources Information Center
Meunier, Lydie E.
1994-01-01
Computer adaptive language testing (CALT) offers a variety of advantages; however, since CALT cannot test the multidimensional nature of language, it does not assess communicative/functional language. This article proposes to replace multiple choice and cloze formats and to apply CALT to live-action simulations. (18 references) (LB)
1977-02-01
both academic and industrial environments. However, many problems still require efficient solutions. One of these problem areas that can have a...distribution of a data base system over several processors increases the complexity of the recovery problem. Just the interprocessor communications ...DBMS over a computer network enormously complicates the data base administration function. If a recovery scheme similar to that proposed in this
ERIC Educational Resources Information Center
Montpetit, Kathleen; Haley, Stephen; Bilodeau, Nathalie; Ni, Pengsheng; Tian, Feng; Gorton, George, III; Mulcahey, M. J.
2011-01-01
This article reports on the content range and measurement precision of an upper extremity (UE) computer adaptive testing (CAT) platform of physical function in children with cerebral palsy. Upper extremity items representing skills of all abilities were administered to 305 parents. These responses were compared with two traditional standardized…
Computational and Functional Analyses of a Small-Molecule Binding Site in ROMK
Swale, Daniel R.; Sheehan, Jonathan H.; Banerjee, Sreedatta; Husni, Afeef S.; Nguyen, Thuy T.; Meiler, Jens; Denton, Jerod S.
2015-01-01
The renal outer medullary potassium channel (ROMK, or Kir1.1, encoded by KCNJ1) critically regulates renal tubule electrolyte and water transport and hence blood volume and pressure. The discovery of loss-of-function mutations in KCNJ1 underlying renal salt and water wasting and lower blood pressure has sparked interest in developing new classes of antihypertensive diuretics targeting ROMK. The recent development of nanomolar-affinity small-molecule inhibitors of ROMK creates opportunities for exploring the chemical and physical basis of ligand-channel interactions required for selective ROMK inhibition. We previously reported that the bis-nitro-phenyl ROMK inhibitor VU591 exhibits voltage-dependent knock-off at hyperpolarizing potentials, suggesting that the binding site is located within the ion-conduction pore. In this study, comparative molecular modeling and in silico ligand docking were used to interrogate the full-length ROMK pore for energetically favorable VU591 binding sites. Cluster analysis of 2498 low-energy poses resulting from 9900 Monte Carlo docking trajectories on each of 10 conformationally distinct ROMK comparative homology models identified two putative binding sites in the transmembrane pore that were subsequently tested for a role in VU591-dependent inhibition using site-directed mutagenesis and patch-clamp electrophysiology. Introduction of mutations into the lower site had no effect on the sensitivity of the channel to VU591. In contrast, mutations of Val168 or Asn171 in the upper site, which are unique to ROMK within the Kir channel family, led to a dramatic reduction in VU591 sensitivity. This study highlights the utility of computational modeling for defining ligand-ROMK interactions and proposes a mechanism for inhibition of ROMK. PMID:25762321
Op’t Holt, Bryan T.; Vance, Michael A.; Mirica, Liviu M.; Stack, T. Daniel P.; Solomon, Edward I.
2009-01-01
The μ-η2:η2-peroxodicopper(II) complex synthesized by reacting the Cu(I) complex of the bis-diamine ligand N,N′-di-tert-butyl-ethylenediamine (DBED) with O2 is a functional and spectroscopic model of the coupled binuclear copper protein tyrosinase. This complex reacts with 2,4-di-tert-butylphenolate at low temperature to produce a mixture of the catechol and quinone products, which proceeds through three intermediates (A – C) that have been characterized. A, stabilized at 153K, is characterized as a phenolate-bonded bis-μ-oxo dicopper(III) species, which proceeds at 193K to B, presumably a catecholate-bridged coupled bis-copper(II) species via an electrophilic aromatic substitution mechanism wherein aromatic ring distortion is the rate-limiting step. Isotopic labeling shows that the oxygen inserted into the aromatic substrate during hydroxylation derives from dioxygen, and a late-stage ortho-H+ transfer to an exogenous base is associated with C-O bond formation. Addition of a proton to B produces C, determined from resonance Raman spectra to be a Cu(II)-semiquinone complex. The formation of C (the oxidation of catecholate and reduction to Cu(I)) is governed by the protonation state of the distal bridging oxygen ligand of B. Parallels and contrasts are drawn between the spectroscopically and computationally supported mechanism of the DBED system, presented here, and the experimentally-derived mechanism of the coupled binuclear copper protein tyrosinase. PMID:19368383
NASA Astrophysics Data System (ADS)
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular in the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
Sims, James S; George, William L; Griffin, Terence J; Hagedorn, John G; Hung, Howard K; Kelso, John T; Olano, Marc; Peskin, Adele P; Satterfield, Steven G; Terrill, Judith Devaney; Bryant, Garnett W; Diaz, Jose G
2008-01-01
This is the third in a series of articles that describe, through examples, how the Scientific Applications and Visualization Group (SAVG) at NIST has utilized high performance parallel computing, visualization, and machine learning to accelerate scientific discovery. In this article we focus on the use of high performance computing and visualization for simulations of nanotechnology.
NASA Astrophysics Data System (ADS)
Scheu, Norbert
1998-11-01
A non-perturbative computation of hadronic structure functions for deep inelastic lepton-hadron scattering has not yet been achieved. In this thesis we investigate the viability of the Hamiltonian approach for computing hadronic structure functions. In the literature, the so-called front form (FF) approach is favoured over the instant form (IF), the conventional Hamiltonian approach, due to claims (a) that structure functions are related to light-like correlation functions and (b) that the front form is much simpler for numerical computations. We dispel both claims using general arguments as well as practical computations (in the case of the scalar model and two-dimensional QED), demonstrating (a) that structure functions are related to space-like correlations and (b) that the IF is better suited for practical computations if appropriate approximations are introduced. Moreover, we show that the FF is unphysical in general for the following reasons: (1) the FF constitutes an incomplete quantisation of field theories; (2) the FF 'predicts' an infinite speed of light in one space dimension, a complete breakdown of microcausality, and the ubiquity of time travel. Additionally, we demonstrate that the FF cannot be approached by so-called ɛ co-ordinates; these co-ordinates are but the instant form in disguise. The FF cannot be legitimated as an effective theory. Finally, we demonstrate that the so-called infinite momentum frame is neither physical nor equivalent to the FF.
NASA Astrophysics Data System (ADS)
Lambin, Ph.; Vigneron, J. P.
1984-03-01
The analytical tetrahedron method (ATM) for evaluating perfect-crystal Green's functions is reviewed. It is shown that the ATM allows for computing matrix elements of the resolvent operator in the entire complex-energy plane. These elements are written as a scalar product involving weighting functions of the complex energy, which are computed on a mesh of k points in the Brillouin zone. When the usual approximations are made within each tetrahedron, namely linear interpolations for the dispersion relations as well as for the numerator matrix elements, the weighting functions only depend on the perfect-crystal dispersion relations. In addition, the analytical expression obtained for a tetrahedral contribution to the weighting functions is simpler than what is usually expected. Analytical properties of our expressions are discussed and all the limiting forms are worked out. Special attention is paid to the numerical stability of the algorithm producing the Green's-function imaginary part on the real energy axis. Expressions which have been published earlier are subject to computational problems, which are solved in the new formulas reported here.
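As a point of contrast with the ATM, the naive way to evaluate the same diagonal resolvent matrix element is a broadened sum over the k mesh. The sketch below uses a toy 1D tight-binding band (an invented example, not from the paper) to show the quantity whose imaginary part the tetrahedron weights deliver without the artificial broadening η:

```python
import numpy as np

def greens_function(E, eps_k, eta=1e-2):
    """Broadened k-sum estimate of the diagonal resolvent element
    G(E) = (1/N) * sum_k 1 / (E - eps_k + i*eta).
    The ATM replaces the artificial eta with exact linear interpolation
    of eps_k inside each tetrahedron."""
    return np.mean(1.0 / (E - eps_k + 1j * eta))

# Toy 1D tight-binding dispersion sampled on a fine k mesh
k = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
eps = -2.0 * np.cos(k)

G = greens_function(0.0, eps)
dos_at_zero = -G.imag / np.pi  # density-of-states estimate at E = 0
```

For this band the exact density of states at the band center is 1/(2π) ≈ 0.159, so the broadened sum is close but, unlike the ATM, its accuracy depends on choosing η compatible with the mesh spacing.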
NASA Astrophysics Data System (ADS)
Turner, David M.; Niezgoda, Stephen R.; Kalidindi, Surya R.
2016-10-01
Chord length distributions (CLDs) and lineal path functions (LPFs) have been successfully utilized in prior literature as measures of the size and shape distributions of the important microscale constituents in the material system. Typically, these functions are parameterized only by line lengths, and thus calculated and derived independent of the angular orientation of the chord or line segment. We describe in this paper computationally efficient methods for estimating chord length distributions and lineal path functions for 2D (two dimensional) and 3D microstructure images defined on any number of arbitrary chord orientations. These so-called fully angularly resolved distributions can be computed for over 1000 orientations on large microstructure images (500³ voxels) in minutes on modest hardware. We present these methods as new tools for characterizing microstructures in a statistically meaningful way.
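For a single orientation, the chord sampling underlying a CLD reduces to run-length extraction along scan lines. A minimal sketch for a binary 2D image, scanning along rows only (one fixed orientation, whereas the paper resolves over 1000):

```python
def chord_lengths_rows(img):
    """Chord lengths of the foreground ('1') phase measured along image
    rows. A chord is a maximal run of foreground pixels on a scan line;
    the histogram of these lengths estimates the CLD for this orientation."""
    lengths = []
    for row in img:
        run = 0
        for v in row:
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:  # close a run that touches the image border
            lengths.append(run)
    return lengths
```

Repeating this along rotated scan lines (or rays through each voxel in 3D) yields the angularly resolved distributions the paper computes efficiently.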
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Panda, Ananya; Bhalla, Ashu Seith; Sharma, Raju; Mohan, Anant; Sreenivas, Vishnu; Kalaimannan, Umasankar; Upadhyay, Ashish Dutt
2016-01-01
Aims: To study the correlation between dyspnea, radiological findings, and pulmonary function tests (PFTs) in patients with sequelae of pulmonary tuberculosis (TB). Materials and Methods: Clinical history, chest computed tomography (CT), and PFT of patients with post-TB sequelae were recorded. Dyspnea was graded according to the Modified Medical Research Council (mMRC) scale. CT scans were analyzed for fibrosis, cavitation, bronchiectasis, consolidation, nodules, and aspergilloma. Semi-quantitative analysis was done for these abnormalities. Scores were added to obtain a total morphological score (TMS). The lungs were also divided into three zones and scores added to obtain the total lung score (TLS). Spirometry was done for forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC. Results: Dyspnea was present in 58/101 patients. A total of 22/58 patients had mMRC Grade 1, and 17/58 patients had Grades 2 and 3 dyspnea each. There was a significant difference in median fibrosis, bronchiectasis, nodules (P < 0.01) scores, TMS, and TLS (P < 0.0001) between dyspnea and nondyspnea groups. Significant correlations were obtained between grades of dyspnea and fibrosis (r = 0.34, P = 0.006), bronchiectasis (r = 0.35, P = 0.004), nodule (r = 0.24, P = 0.016) scores, TMS (r = 0.398, P = 0.000), and TLS (r = 0.35, P = 0.0003). PFTs were impaired in 78/101 (77.2%) patients. Restrictive defect was most common in 39.6% followed by mixed in 34.7%. There was a negative but statistically insignificant trend between PFT and fibrosis, bronchiectasis, nodule scores, TMS, and TLS. However, there were significant differences in median fibrosis, cavitation, and bronchiectasis scores in patients with normal, mild to moderate, and severe respiratory defects. No difference was seen in TMS and TLS according to the severity of the respiratory defect. Conclusion: Both fibrosis and bronchiectasis correlated with dyspnea and with PFT. However, this correlation was not
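Correlations between ordinal dyspnea grades and semi-quantitative CT scores, such as the r values reported above, are typically Spearman rank correlations. A minimal tie-free implementation follows (an illustrative sketch: real mMRC grades contain ties, for which scipy.stats.spearmanr is the practical choice):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Note: this double-argsort ranking does not average ranks over ties,
    so it is only exact for tie-free data."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because it operates on ranks, the statistic is appropriate when one variable is an ordinal grade rather than an interval measurement.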
NASA Astrophysics Data System (ADS)
Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.
2005-11-01
The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smallest deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective, and among all but the derivative-based algorithm for a total squared error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods could be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.
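The combined Nash-Sutcliffe objective mentioned above is built from the standard Nash-Sutcliffe efficiency; for reference, a minimal implementation of that statistic (how the paper combines it across series is not reproduced here):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance of obs about its mean).
    1.0 is a perfect fit; 0.0 means the model predicts no better than the
    mean of the observations; negative values are worse than the mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Maximizing this efficiency is equivalent to minimizing the sum of squared errors, but the normalization makes values comparable across observation series of different scales.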
Yamamoto, Tokihiro; Kabus, Sven; Berg, Jens von; Lorenz, Cristian; Keall, Paul J.
2011-01-01
Purpose: To quantify the dosimetric impact of four-dimensional computed tomography (4D-CT) pulmonary ventilation imaging-based functional treatment planning that avoids high-functional lung regions. Methods and Materials: 4D-CT ventilation images were created from 15 non-small-cell lung cancer patients using deformable image registration and quantitative analysis of the resultant displacement vector field. For each patient, anatomic and functional plans were created for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT). Consistent beam angles and dose-volume constraints were used for all cases. The plans with Radiation Therapy Oncology Group (RTOG) 0617-defined major deviations were modified until clinically acceptable. Functional planning spared the high-functional lung, and anatomic planning treated the lungs as uniformly functional. We quantified the impact of functional planning compared with anatomic planning using the two- or one-tailed t test. Results: Functional planning led to significant reductions in the high-functional lung dose, without significantly increasing other critical organ doses, but at the expense of significantly degrading planning target volume (PTV) conformity and homogeneity. The average reduction in the high-functional lung mean dose was 1.8 Gy for IMRT (p < .001) and 2.0 Gy for VMAT (p < .001). Significantly larger changes occurred in the metrics for patients with a larger amount of high-functional lung adjacent to the PTV. Conclusion: The results of the present study have demonstrated the impact of 4D-CT ventilation imaging-based functional planning for IMRT and VMAT for the first time. Our findings indicate the potential of functional planning in lung functional avoidance for both IMRT and VMAT, particularly for patients who have high-functional lung adjacent to the PTV.
Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck
2010-01-01
Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make them difficult to handle computationally. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to perform calculations on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of simulation systems. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
NASA Astrophysics Data System (ADS)
Curceac, S.; Ternynck, C.; Ouarda, T.
2015-12-01
Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
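The functional non-parametric kernel regression step can be illustrated with a minimal Nadaraya-Watson-style sketch: daily curves are compared with a simple L2 semi-metric and weighted by a quadratic kernel supported on [0, 1). All names here are illustrative; the study's derivative- and FPCA-based semi-metrics and cross-validated bandwidth are not reproduced.

```python
import math

def l2_semimetric(x, y):
    """Discrete L2 distance between two daily curves (equal-length lists)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def quadratic_kernel(u):
    """Asymmetrical quadratic kernel supported on [0, 1)."""
    return 0.75 * (1.0 - u * u) if 0.0 <= u < 1.0 else 0.0

def functional_nw_forecast(curves, responses, new_curve, h):
    """Kernel-weighted average of responses, weights from curve distances."""
    weights = [quadratic_kernel(l2_semimetric(c, new_curve) / h) for c in curves]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("bandwidth h too small: no neighbours in support")
    return sum(w * r for w, r in zip(weights, responses)) / total
```

Shrinking the bandwidth `h` restricts the average to increasingly similar historical curves.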
Gillespie-Lynch, Kristen; Kapp, Steven K; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted
2014-12-01
An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to similar others, and the opportunity to express their true selves. They enjoyed using the Internet to meet others more, and to maintain connections with friends and family less, than did participants without ASD. People with ASD enjoyed aspects of computer-mediated communication that may be associated with special interests or advocacy, such as blogging, more than did participants without ASD. This study suggests that people with ASD may use the Internet in qualitatively different ways from those without ASD. Suggestions for interventions are discussed.
ERIC Educational Resources Information Center
Price, Kathleen J.
2011-01-01
The use of information technology is a vital part of everyday life, but for a person with functional impairments, technology interaction may be difficult at best. Information technology is commonly designed to meet the needs of a theoretical "normal" user. However, there is no such thing as a "normal" user. A user's capabilities will vary over…
ERIC Educational Resources Information Center
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng
2016-01-01
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
NASA Technical Reports Server (NTRS)
Vinokur, M.
1983-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent. Previously announced in STAR as N80-25055
NASA Technical Reports Server (NTRS)
Vinokur, M.
1979-01-01
The class of one-dimensional stretching functions used in finite-difference calculations is studied. For solutions containing a highly localized region of rapid variation, simple criteria for a stretching function are derived using a truncation error analysis. These criteria are used to investigate two types of stretching functions. One is an interior stretching function, for which the location and slope of an interior clustering region are specified. The simplest such function satisfying the criteria is found to be one based on the inverse hyperbolic sine. The other type of function is a two-sided stretching function, for which the arbitrary slopes at the two ends of the one-dimensional interval are specified. The simplest such general function is found to be one based on the inverse tangent.
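The inverse-hyperbolic-sine interior clustering described in these two entries can be illustrated with the standard sinh-based stretching of this family, as found in grid-generation texts; the exact functional form and parameterization used in the report may differ.

```python
import math

def interior_cluster(n, xc, beta):
    """Distribute n+1 grid points on [0, 1], clustered around xc.

    Standard sinh-based interior stretching (illustrative of the family
    discussed in the abstract, not necessarily the report's exact form):
    x(xi) = xc * (1 + sinh(beta*(xi - A)) / sinh(beta*A)), beta > 0
    controls the clustering strength and A places the cluster at xc."""
    A = (1.0 / (2.0 * beta)) * math.log(
        (1.0 + (math.exp(beta) - 1.0) * xc) /
        (1.0 + (math.exp(-beta) - 1.0) * xc))
    return [xc * (1.0 + math.sinh(beta * (i / n - A)) / math.sinh(beta * A))
            for i in range(n + 1)]
```

The mapping sends xi = 0 to x = 0 and xi = 1 to x = 1 exactly, with the smallest spacing near x = xc.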
Zhou, Jiajian; Zhang, Suyang; Wang, Huating; Sun, Hao
2017-04-04
Long noncoding RNAs (lncRNAs) are key regulators of diverse cellular processes. Recent advances in high-throughput sequencing have allowed for an unprecedented discovery of novel lncRNAs. To identify functional lncRNAs from thousands of candidates for further functional validation is still a challenging task. Here, we present a novel computational framework, lncFunNet (lncRNA Functional inference through integrated Network) that integrates ChIP-seq, CLIP-seq and RNA-seq data to predict, prioritize and annotate lncRNA functions. In mouse embryonic stem cells (mESCs), using lncFunNet we not only recovered most of the functional lncRNAs known to maintain mESC pluripotency but also predicted a plethora of novel functional lncRNAs. Similarly, in mouse myoblast C2C12 cells, applying lncFunNet led to prediction of reservoirs of functional lncRNAs in both proliferating myoblasts (MBs) and differentiating myotubes (MTs). Further analyses demonstrated that these lncRNAs are frequently bound by key transcription factors, interact with miRNAs and constitute key nodes in biological network motifs. Further experiments validated their dynamic expression profiles and functionality during myoblast differentiation. Collectively, our studies demonstrate the use of lncFunNet to annotate and identify functional lncRNAs in a given biological system.
NASA Technical Reports Server (NTRS)
Gu, Chong; Bates, Douglas M.; Chen, Zehua; Wahba, Grace
1989-01-01
An efficient algorithm for computing the generalized cross-validation function for the general cross-validated regularization/smoothing problem is provided. This algorithm is appropriate for problems where no natural structure is available, and the regularization/smoothing problem is solved (exactly) in a reproducing kernel Hilbert space. It is particularly appropriate for certain multivariate smoothing problems with irregularly spaced data, and certain remote sensing problems, such as those that occur in meteorology, where the sensors are arranged irregularly. The algorithm is applied to the fitting of interaction spline models with irregularly spaced data and two smoothing parameters; favorable timing results are presented. The algorithm may be extended to the computation of certain generalized maximum likelihood (GML) functions. Application of the GML algorithm to a problem in numerical weather forecasting, and to a broad class of hypothesis testing problems, is noted.
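For orientation, the generalized cross-validation criterion being computed can be sketched for a simple ridge-type influence matrix; the paper's contribution is an efficient algorithm for this quantity in a reproducing kernel Hilbert space setting, which this naive O(n^3) sketch does not reproduce.

```python
import numpy as np

def gcv(y, X, lam):
    """Generalized cross-validation score for a ridge-type problem.

    A(lam) = X (X'X + n*lam*I)^-1 X' is the influence ("hat") matrix, and
    the GCV score is n * ||(I - A)y||^2 / tr(I - A)^2.  The n*lam scaling
    of the penalty is one common convention (an assumption here)."""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
    resid = y - A @ y
    denom = np.trace(np.eye(n) - A)
    return n * float(resid @ resid) / denom ** 2
```

Minimizing this score over `lam` balances residual error against the effective degrees of freedom tr(A).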
NASA Astrophysics Data System (ADS)
Gil, Amparo; Segura, Javier; Temme, Nico M.
2003-04-01
The use of a uniform Airy-type asymptotic expansion for the computation of the modified Bessel functions of the third kind of imaginary orders (Kia(x)) near the transition point x=a is discussed. In A. Gil et al., Evaluation of the modified Bessel functions of the third kind of imaginary orders, J. Comput. Phys. 17 (2002) 398-411, an algorithm for the evaluation of Kia(x) was presented, which made use of series, a continued fraction method and nonoscillating integral representations. The range of validity of the algorithm was limited by the singularity of the steepest descent paths near the transition point. We show how uniform Airy-type asymptotic expansions fill the gap left by the steepest descent method.
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Van Leer, Bram; Roe, Philip L.
1991-01-01
A limiting method has been devised for a grid-independent flux function for use with the two-dimensional Euler and Navier-Stokes equations. This limiting is derived from a monotonicity analysis of the model and allows for solutions with reduced oscillatory behavior while still maintaining sharper resolution than a grid-aligned method. In addition to capturing oblique waves sharply, the grid-independent flux function also reduces the entropy generated over an airfoil in an Euler computation and reduces pressure distortions in the separated boundary layer of a viscous-flow airfoil computation. The model has also been extended to three dimensions, although no angle-limiting procedure for improving monotonicity characteristics has been incorporated.
The use of computer graphic techniques for the determination of ventricular function.
NASA Technical Reports Server (NTRS)
Sandler, H.; Rasmussen, D.
1972-01-01
Description of computer techniques employed to increase the speed, accuracy, reliability, and scope of angiocardiographic analyses determining human heart dimensions. Chamber margins are traced with a Calma 303 digitizer from projections of the angiographic films. The digitized margins of the ventricular images are filed in a computer for subsequent analysis. The margins can be displayed on the television screen of a graphics unit for individual study or they can be viewed in real time (or at any selected speed) to study dynamic changes in the chamber outline. The construction of three dimensional images of the ventricle is described.
Pierri, Ciro Leonardo; Parisi, Giovanni; Porcelli, Vito
2010-09-01
The functional characterization of proteins represents a daily challenge for the biochemical, medical and computational sciences. Although it must ultimately be proven at the bench, the function of a protein can often be successfully predicted by computational approaches that guide further experimental assays. Current methods for comparative modeling allow the construction of accurate 3D models for proteins of unknown structure, provided that a crystal structure of a homologous protein is available. Binding regions can be proposed by using binding site predictors, data inferred from homologous crystal structures, and data provided by a careful interpretation of the multiple sequence alignment of the investigated protein and its homologs. Once the location of a binding site has been proposed, chemical ligands that have a high likelihood of binding can be identified by using ligand docking and structure-based virtual screening of chemical libraries. Most docking algorithms allow building a list sorted by energy of the lowest energy docking configuration for each ligand of the library. In this review the state of the art of computational approaches to 3D protein comparative modeling and to the study of protein-ligand interactions is provided. Furthermore, a possible combined/concerted multistep strategy for protein function prediction, based on multiple sequence alignment, comparative modeling, binding region prediction, and structure-based virtual screening of chemical libraries, is described using suitable examples. As practical examples, Abl-kinase molecular modeling studies, HPV-E6 protein multiple sequence alignment analysis, and some other model docking-based characterization reports are briefly described to highlight the importance of computational approaches in protein function prediction.
Shao, Nan; Sun, Xiao-Guang; Dai, Sheng; Jiang, Deen
2012-01-01
New electrolytes with large electrochemical windows are needed to meet the challenge for high-voltage Li-ion batteries. Sulfone as an electrolyte solvent boasts high oxidation potentials. Here we examine the effect of multiple functionalization on sulfone's oxidation potential. We compute oxidation potentials for a series of sulfone-based molecules functionalized with fluorine, cyano, ester, and carbonate groups by using a quantum chemistry method within a continuum solvation model. We find that multifunctionalization is a key to achieving high oxidation potentials. This can be realized through either a fluorether group on a sulfone molecule or sulfonyl fluoride with a cyano or ester group.
Gonis, Antonios; Daene, Markus W; Nicholson, Don M; Stocks, George Malcolm
2012-01-01
We have developed and tested in terms of atomic calculations an exact, analytic and computationally simple procedure for determining the functional derivative of the exchange energy with respect to the density in the implementation of the Kohn-Sham formulation of density functional theory (KS-DFT), providing an analytic, closed-form solution of the self-interaction problem in KS-DFT. We demonstrate the efficacy of our method through ground-state calculations of the exchange potential and energy for atomic He and Be atoms, and comparisons with experiment and the results obtained within the optimized effective potential (OEP) method.
Mackie, W.A.; Hinrichs, C.H.; Cohen, I.M.; Alin, J.S.; Schnitzler, D.T.; Carleson, P.; Ginn, R.; Krueger, P.; Vetter, C.G.; Davis, P.R.
1990-05-01
We report on a unique experimental method to determine thermionic work functions of major crystal planes of single crystal zirconium carbide. Applications for transition metal carbides could include cathodes for advanced thermionic energy conversion, radiation immune microcircuitry, β-SiC substrates, or high current density field emission cathodes. The primary emphasis of this paper is the analytical method used, that of computer processing a digitized image. ZrC single crystal specimens were prepared by floating zone arc refinement from sintered stock, yielding an average bulk stoichiometry of C/Zr = 0.92. A 0.075 cm hemispherical cathode was prepared and mounted in a thermionic projection microscope (TPM) tube. The imaged patterns of thermally emitted electrons taken at various extraction voltages were digitized and computer analyzed to yield currents and corresponding emitting areas for major crystallographic planes. These data were taken at pyrometrically measured temperatures in the range 1700 < T < 2200 K. Schottky plots were then used to determine effective thermionic work functions as a function of crystallographic direction and temperature. Work function ordering for various crystal planes is reported through the TPM image processing method. Comparisons are made with effective thermionic and absolute (FERP) work function methods. To support the TPM image processing method, clean tungsten surfaces were examined and results are listed with accepted values.
Mahmud, Zabed; Malik, Syeda Umme Fahmida; Ahmed, Jahed
2016-01-01
Single-nucleotide polymorphisms (SNPs) associated with complex disorders can create, destroy, or modify protein coding sites. Single amino acid substitutions in the insulin receptor (INSR) are the most common forms of genetic variation that account for various diseases like Donohue syndrome or Leprechaunism, Rabson-Mendenhall syndrome, and type A insulin resistance. We analyzed the deleterious nonsynonymous SNPs (nsSNPs) in the INSR gene based on different computational methods. Analysis of INSR was initiated with PROVEAN, followed by the PolyPhen and I-Mutant servers, to investigate the effects of 57 nsSNPs retrieved from the SNP database (dbSNP). A total of 18 mutations that were found to exert damaging effects on the INSR protein structure and function were chosen for further analysis. Among these mutations, our computational analysis suggested that 13 nsSNPs decreased protein stability and might have resulted in loss of function. Therefore, the probability of their involvement in disease predisposition increases. In the absence of adequate prior reports on the possible deleterious effects of nsSNPs, we have systematically analyzed and characterized the functional variants in the coding region that can alter the expression and function of the INSR gene. In silico characterization of nsSNPs affecting INSR gene function can aid in better understanding of genetic differences in disease susceptibility. PMID:27840822
Arensman, F W; Radley-Smith, R; Grieve, L; Gibson, D G; Yacoub, M H
1986-01-01
Left ventricular function before and after anatomical correction of transposition of the great arteries was assessed by computer-assisted analysis of 78 echocardiograms from 27 patients obtained one year before to five years after operation. Sixteen patients had simple transposition, and 11 had complex transposition with additional large ventricular septal defect. Immediately after correction mean shortening fraction fell from 46(9)% to 33(8)%. There was a corresponding drop in normalised peak shortening rate from 5.4(3.7) to 3.3(1.1) s-1 and normal septal motion was usually absent. Systolic shortening fraction increased with time after correction and left ventricular end diastolic diameter increased appropriately for age. The preoperative rate of free wall thickening was significantly higher in simple (5.6(2.8) s-1) and complex transposition (4.5(1.8) s-1) than in controls (2.9(0.8) s-1). After operation these values remained high in both the short and long term. Thus, computer-assisted analysis of left ventricular dimensions and their rates of change before and after anatomical correction showed only slight postoperative changes which tended to become normal with time. Septal motion was commonly absent after operation. This was associated with an increase in the rate of posterior wall thickening that suggested normal ventricular function associated with an altered contraction pattern. Computer-assisted echocardiographic analysis may be helpful in the long term assessment of ventricular function after operation for various heart abnormalities. PMID:3942650
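The shortening fraction reported above follows the standard echocardiographic definition, sketched here for reference (the normalised peak rates additionally involve time derivatives of the ventricular dimension, which are not reproduced):

```python
def shortening_fraction(edd, esd):
    """Left-ventricular shortening fraction (%) from end-diastolic (edd)
    and end-systolic (esd) internal diameters; the standard
    echocardiographic definition 100 * (EDD - ESD) / EDD."""
    if edd <= 0 or esd > edd:
        raise ValueError("expect 0 < esd <= edd")
    return 100.0 * (edd - esd) / edd
```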
sTeam--Providing Primary Media Functions for Web-Based Computer-Supported Cooperative Learning.
ERIC Educational Resources Information Center
Hampel, Thorsten
The World Wide Web has developed as the de facto standard for computer based learning. However, as a server-centered approach, it confines readers and learners to passive nonsequential reading. Authoring and Web-publishing systems aim at supporting the authors' design process. Consequently, learners' activities are confined to selecting and…
ERIC Educational Resources Information Center
Rabab'ah, Ghaleb
2013-01-01
This study explores the discourse generated by English as a foreign language (EFL) learners using synchronous computer-mediated communication (CMC) as an approach to help English language learners to create social interaction in the classroom. It investigates the impact of synchronous CMC mode on the quantity of total words, lexical range and…
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Perry, Justin; Shamir, Haya
2010-01-01
This study examines the effects on early reading skills of three different methods of presenting material with computer-assisted instruction (CAI): (1) learner-controlled picture menu, which allows the student to choose activities, (2) linear sequencer, which progresses the students through lessons at a pre-specified pace, and (3) mastery-based…
González-Díaz, Humberto; Agüero-Chapin, Guillermín; Varona, Javier; Molina, Reinaldo; Delogu, Giovanna; Santana, Lourdes; Uriarte, Eugenio; Podda, Gianni
2007-04-30
Methods for predicting protein, DNA, or RNA function and mapping it onto sequence often rely on bioinformatics alignment approaches instead of chemical structure. Consequently, it is interesting to develop computational chemistry approaches based on molecular descriptors. In this sense, many researchers have used sequence-coupling numbers, and our group extended them to 2D protein representations. However, no coupling numbers have been reported for 2D-RNA topology graphs, which are highly branched and contain useful information. Here, we use a computational chemistry scheme: (a) transforming sequences into RNA secondary structures, (b) defining and calculating new 2D-RNA coupling numbers, (c) seeking a structure-function model, and (d) mapping biological function onto the folded RNA. As an example we studied 1-aminocyclopropane-1-carboxylic acid (ACC) oxidases, known as ACOs, which control fruit ripening and are of importance to the biotechnology industry. First, we calculated τ_k(2D-RNA) values for a set of 90 folded RNAs, including 28 transcripts of ACO and control sequences. Afterwards, we compared the classification performance of 10 different classifiers implemented in the software WEKA. In particular, the logistic equation ACO = 23.8·τ_1(2D-RNA) + 41.4 predicts ACOs with 98.9%, 98.0%, and 97.8% accuracy in training, leave-one-out, and 10-fold cross-validation, respectively. Afterwards, with this equation we predicted ACO function for a sequence isolated in this work from Coffea arabica (GenBank accession DQ218452). The τ_1(2D-RNA) descriptor also compares favorably with other descriptors. This equation allows us to map the codification of ACO activity onto different mRNA topology features. The present computational chemistry approach is general and could be extended to connect RNA secondary structure topology to other functions.
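The quoted classifier can be sketched directly from the coefficients given in the abstract. The computation of the first-order coupling number τ1 from the folded secondary structure is not reproduced here, and feeding the linear predictor through a standard logistic link is an assumption (the abstract calls it a "logistic equation" without stating the link).

```python
import math

# Coefficients quoted in the abstract; tau1 is the first-order 2D-RNA
# coupling number of a folded transcript (its computation from the RNA
# secondary-structure graph is not reproduced here).
W, B = 23.8, 41.4

def aco_probability(tau1):
    """Probability that a transcript is an ACO, assuming the quoted
    linear predictor feeds a standard logistic (sigmoid) link."""
    z = W * tau1 + B
    return 1.0 / (1.0 + math.exp(-z))

def classify(tau1, threshold=0.5):
    """Label a transcript as ACO when its probability exceeds threshold."""
    return aco_probability(tau1) > threshold
```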
Chen, Minxin; Li, Xiantao; Liu, Chun
2014-08-14
We present a numerical method to approximate the memory functions in the generalized Langevin models for the collective dynamics of macromolecules. We first derive the exact expressions of the memory functions, obtained from projection to subspaces that correspond to the selection of coarse-grain variables. In particular, the memory functions are expressed in the forms of matrix functions, which will then be approximated by Krylov-subspace methods. It will also be demonstrated that the random noise can be approximated under the same framework, and the second fluctuation-dissipation theorem is automatically satisfied. The accuracy of the method is examined through several numerical examples.
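The Krylov-subspace approximation of matrix functions that underlies the method can be sketched with a plain Lanczos iteration; this is a generic textbook sketch (symmetric A, no re-orthogonalization or breakdown handling), not the paper's coarse-grained construction of the memory kernels.

```python
import numpy as np

def lanczos_fA_b(A, b, f, m):
    """Approximate f(A) @ b for symmetric A via an m-step Lanczos
    (Krylov-subspace) process: f(A) b ~ ||b|| * V f(T) e1, where T is the
    m x m tridiagonal Lanczos matrix and V holds the Lanczos vectors."""
    n = len(b)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = b / np.linalg.norm(b)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v_next = w / beta[j - 1]
        V[:, j] = v_next
        w = A @ v_next - beta[j - 1] * v
        alpha[j] = v_next @ w
        w = w - alpha[j] * v_next
        v = v_next
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # evaluate f on the small tridiagonal T via its eigendecomposition
    evals, evecs = np.linalg.eigh(T)
    fT = evecs @ np.diag(f(evals)) @ evecs.T
    return np.linalg.norm(b) * (V @ fT[:, 0])
```

With m equal to the dimension of A (and no breakdown) the approximation is exact; in practice m is kept small relative to n.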
Henyey-Greenstein and Mie phase functions in Monte Carlo radiative transfer computations.
Toublanc, D
1996-06-20
Monte Carlo radiative transfer simulation of light scattering in planetary atmospheres is not a simple problem, especially the study of angular distribution of light intensity. Approximate phase functions such as Henyey-Greenstein, modified Henyey-Greenstein, or Legendre polynomial decomposition are often used to simulate the Mie phase function. An alternative solution using an exact calculation alleviates these approximations.
Henyey-Greenstein and Mie phase functions in Monte Carlo radiative transfer computations
NASA Astrophysics Data System (ADS)
Toublanc, Dominique
1996-06-01
Monte Carlo radiative transfer simulation of light scattering in planetary atmospheres is not a simple problem, especially the study of angular distribution of light intensity. Approximate phase functions such as Henyey-Greenstein, modified Henyey-Greenstein, or Legendre polynomial decomposition are often used to simulate the Mie phase function. An alternative solution using an exact calculation alleviates these approximations.
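For reference, the Henyey-Greenstein phase function and its exact inverse-CDF sampling rule, both standard results, can be written as:

```python
import math, random

def hg_phase(mu, g):
    """Henyey-Greenstein phase function p(mu), normalized so that its
    integral over the scattering cosine mu in [-1, 1] equals 1."""
    return 0.5 * (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

def sample_hg(g, rng=random.random):
    """Draw a scattering cosine from the HG distribution by inverting
    its CDF in closed form (exact for all |g| < 1)."""
    xi = rng()
    if abs(g) < 1e-8:
        return 2.0 * xi - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)
```

The exact Mie alternative advocated in the abstract would replace `hg_phase` with a tabulated Mie phase function and sample it numerically rather than with this closed-form inverse.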
A Computation of the Frequency Dependent Dielectric Function for Energetic Materials
NASA Astrophysics Data System (ADS)
Zwitter, D. E.; Kuklja, M. M.; Kunz, A. B.
1999-06-01
The imaginary part of the dielectric function as a function of frequency is calculated for the solids RDX, TATB, ADN, and PETN. Calculations have been performed including the effects of isotropic and uniaxial pressure. Simple lattice defects are included in some of the calculations.
Substrate tunnels in enzymes: structure-function relationships and computational methodology.
Kingsley, Laura J; Lill, Markus A
2015-04-01
In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein, and, in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years because of implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with buried active sites and to provide a brief summary of the computational tools used to identify and evaluate these tunnels.
Substrate Tunnels in Enzymes: Structure-Function Relationships and Computational Methodology
Kingsley, Laura J.; Lill, Markus A.
2015-01-01
In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein and in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years due to implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with tunnels and to provide a brief summary of the computational tools used to identify and evaluate these tunnels. PMID:25663659
Computational Methods for Structural and Functional Studies of Alzheimer's Amyloid Ion Channels.
Jang, Hyunbum; Arce, Fernando Teran; Lee, Joon; Gillman, Alan L; Ramachandran, Srinivasan; Kagan, Bruce L; Lal, Ratnesh; Nussinov, Ruth
2016-01-01
Aggregation can be studied by a range of methods, experimental and computational. Aggregates form in solution, across solid surfaces, and on and in the membrane, where they may assemble into unregulated leaking ion channels. Experimental probes of ion channel conformations and dynamics are challenging. Atomistic molecular dynamics (MD) simulations are capable of providing insight into structural details of amyloid ion channels in the membrane at a resolution not achievable experimentally. Since data suggest that late stage Alzheimer's disease involves formation of toxic ion channels, MD simulations have been used aiming to gain insight into the channel shapes, morphologies, pore dimensions, conformational heterogeneity, and activity. These can be exploited for drug discovery. Here we describe computational methods to model amyloid ion channels containing the β-sheet motif at atomic scale and to calculate toxic pore activity in the membrane.
Cosmic reionization on computers: The faint end of the galaxy luminosity function
Gnedin, Nickolay Y.
2016-07-01
Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at $z \gtrsim 6$. A commonly used Schechter function approximation with the magnitude cut at $M_{\rm cut} \sim -13$ provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut $M_{\rm cut}$ is found to vary between -12 and -14 with a mild redshift dependence. Here, an analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
Cosmic reionization on computers: The faint end of the galaxy luminosity function
Gnedin, Nickolay Y.
2016-07-01
Using numerical cosmological simulations completed under the “Cosmic Reionization On Computers” project, I explore theoretical predictions for the faint end of the galaxy UV luminosity functions at $z \gtrsim 6$. A commonly used Schechter function approximation with the magnitude cut at $M_{\rm cut} \sim -13$ provides a reasonable fit to the actual luminosity function of simulated galaxies. When the Schechter functional form is forced on the luminosity functions from the simulations, the magnitude cut $M_{\rm cut}$ is found to vary between -12 and -14 with a mild redshift dependence. Here, an analytical model of reionization from Madau et al., as used by Robertson et al., provides a good description of the simulated results, which can be improved even further by adding two physically motivated modifications to the original Madau et al. equation.
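The truncated Schechter approximation discussed above can be sketched as follows; all parameter values in the usage are illustrative inputs, not values from the paper.

```python
import math

def schechter_uv(M, phi_star, M_star, alpha, M_cut=-13.0):
    """Schechter UV luminosity function in absolute magnitudes,
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x) with x = 10^(0.4(M*-M)),
    truncated at the faint-end cut: galaxies fainter (larger M) than
    M_cut contribute nothing.  phi_star, M_star, alpha are inputs."""
    if M > M_cut:
        return 0.0
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)
```

For a faint-end slope alpha < -1, the number density rises toward the cut and then drops to zero beyond it, which is the behavior the simulations constrain.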