Sample records for functional integration method

  1. Boundary-integral methods in elasticity and plasticity. [solutions of boundary value problems]

    NASA Technical Reports Server (NTRS)

    Mendelson, A.

    1973-01-01

    Recently developed methods that use boundary-integral equations applied to elastic and elastoplastic boundary value problems are reviewed. Direct, indirect, and semidirect methods using potential functions, stress functions, and displacement functions are described. Examples of the use of these methods for torsion problems, plane problems, and three-dimensional problems are given. It is concluded that the boundary-integral methods represent a powerful tool for the solution of elastic and elastoplastic problems.

  2. The general 2-D moments via integral transform method for acoustic radiation and scattering

    NASA Astrophysics Data System (ADS)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]

  3. An annular superposition integral for axisymmetric radiators.

    PubMed

    Kelly, James F; McGough, Robert J

    2007-02-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.
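    For the baffled circular piston treated here, the on-axis continuous-wave field has a well-known closed form, which is useful as a reference when checking any Rayleigh-Sommerfeld or annular-superposition code. A minimal sketch (the normalization by rho*c*v0 and the function name are our own, not from the paper):

    ```python
    import math

    def onaxis_pressure_ratio(z, a, k):
        """Normalized on-axis pressure |p| / (rho * c * v0) of a baffled
        circular piston of radius a at axial distance z and wavenumber k,
        from the exact on-axis evaluation of the Rayleigh integral."""
        return 2.0 * abs(math.sin(0.5 * k * (math.sqrt(z * z + a * a) - z)))

    # In the nearfield the ratio oscillates between 0 and 2; far from the
    # source it decays toward zero.
    k = 2 * math.pi / 0.0015          # e.g. 1 MHz in water, lambda = 1.5 mm
    p_near = onaxis_pressure_ratio(z=0.01, a=0.01, k=k)
    p_far = onaxis_pressure_ratio(z=10.0, a=0.01, k=k)
    ```

    Off-axis points have no such closed form, which is where the annular superposition integral earns its speed-up.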

  4. Complex plane integration in the modelling of electromagnetic fields in layered media: part 1. Application to a very large loop

    NASA Astrophysics Data System (ADS)

    Silva, Valdelírio da Silva e.; Régis, Cícero; Howard, Allen Q., Jr.

    2014-02-01

    This paper analyses the details of a procedure for the numerical integration of Hankel transforms in the calculation of the electromagnetic fields generated by a large horizontal loop over a 1D earth. The method performs the integration by deforming the integration path into the complex plane and applying Cauchy's theorem on a modified version of the integrand. The modification is the replacement of the Bessel functions J0 and J1 by the Hankel functions H_0^{(1)} and H_1^{(1)} respectively. The integration in the complex plane takes advantage of the exponentially decaying behaviour of the Hankel functions, allowing calculation on very small segments, instead of the infinite line of the original improper integrals. A crucial point in this problem is the location of the poles. The companion paper shows two methods to estimate the pole locations. We have used this method to calculate the fields of very large loops. Our results show that this method allows the estimation of the integrals with fewer evaluations of the integrand functions than other methods.

  5. An annular superposition integral for axisymmetric radiators

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2007-01-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500

  6. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    The polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM shape functions, such as those of the Wachspress and Mean Value methods, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a large number of integration points must be used to obtain sufficiently accurate results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress and Mean Value methods. Moreover, VNM achieves better results than triangular 3-node elements in the accuracy test.

  7. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials.

    PubMed

    Mashayekhi, S; Razzaghi, M; Tripak, O

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.
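    The operational matrix of integration for block-pulse functions has a simple explicit form, which conveys the core idea behind such operational-matrix methods. A minimal sketch for m block-pulse functions on [0, 1) (our own construction of the standard matrix, not the paper's hybrid block-pulse/Bernoulli matrices):

    ```python
    def bpf_integration_matrix(m):
        """Operational matrix P such that integrating a function expanded
        in m block-pulse functions on [0, 1) amounts to multiplying its
        coefficient row vector by P: half-weight for the current block,
        full weight for each earlier block."""
        h = 1.0 / m
        P = [[0.0] * m for _ in range(m)]
        for i in range(m):
            P[i][i] = h / 2.0
            for j in range(i + 1, m):
                P[i][j] = h
        return P

    # Integrating f(t) = 1 (all coefficients 1) should reproduce t, whose
    # block-pulse coefficients are the block midpoints h/2, 3h/2, ...
    m = 4
    P = bpf_integration_matrix(m)
    coeffs = [1.0] * m
    integral = [sum(coeffs[i] * P[i][j] for i in range(m)) for j in range(m)]
    ```

    Replacing integration by this matrix product is what turns the integral equation into a system of algebraic equations in the unknown coefficients.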

  8. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials

    PubMed Central

    Mashayekhi, S.; Razzaghi, M.; Tripak, O.

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638

  9. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
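    The Euler-Maclaurin argument underlying these methods is easiest to see in the non-singular case: for a smooth periodic integrand the trapezoidal rule is already spectrally accurate, because all Euler-Maclaurin correction terms vanish. A small sketch (the test integrand and names are our own illustration):

    ```python
    import math

    def trapezoid_periodic(f, n, period=2 * math.pi):
        """Composite trapezoidal rule for a smooth periodic integrand:
        the two endpoint samples coincide, so the rule reduces to an
        equal-weight sum over n nodes."""
        h = period / n
        return h * sum(f(j * h) for j in range(n))

    # integral of exp(cos t) over [0, 2*pi] equals 2*pi*I0(1) ~ 7.9549265;
    # for a periodic analytic integrand the error decays faster than any
    # power of 1/n.
    approx = trapezoid_periodic(lambda t: math.exp(math.cos(t)), n=32)
    ```

    The paper's contribution is extending this accuracy to periodic integrands with singularities via appropriate expansions and extrapolation.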

  10. Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Bley, Gonzalo A.; Thomas, Lawrence E.

    2017-01-01

    We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with 1/|x|^2 potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.

  11. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    PubMed

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works have proposed integrating multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic study has focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim to provide an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the 708 medical subject headings (MeSH) diseases considered by applying classical guilt-by-association, random walk and random walk with restart algorithms, as well as the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at the 0.01 significance level, according to the Wilcoxon signed-rank test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further biomedical investigation. Network integration is necessary to boost the performance of gene prioritization methods. Moreover, methods based on kernelized score functions can further enhance disease gene ranking by adopting both local and global learning strategies able to exploit the overall topology of the network. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
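    The classical random walk with restart (RWR) baseline evaluated in this study is compact enough to sketch on a toy network (our own minimal implementation on an adjacency list, not the paper's kernelized score functions):

    ```python
    def random_walk_with_restart(adj, seeds, restart=0.5, iters=200):
        """Random walk with restart on an undirected graph given as an
        adjacency list. Iterates p <- (1-r) * W p + r * e with a
        column-normalized transition matrix W and restart vector e over
        the seed set; returns one relevance score per node."""
        n = len(adj)
        e = [1.0 / len(seeds) if i in seeds else 0.0 for i in range(n)]
        p = e[:]
        deg = [len(adj[i]) for i in range(n)]
        for _ in range(iters):
            nxt = [restart * e[i] for i in range(n)]
            for i in range(n):
                for j in adj[i]:
                    nxt[j] += (1 - restart) * p[i] / deg[i]
            p = nxt
        return p

    # Toy chain 0-1-2-3 with node 0 as the single disease seed: scores
    # decay with distance from the seed.
    scores = random_walk_with_restart([[1], [0, 2], [1, 3], [2]], seeds={0})
    ```

    Candidate disease genes are then the highest-scoring unannotated nodes; the kernelized score functions of the paper generalize this neighborhood-diffusion idea.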

  12. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    2018-03-20

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on a sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Second, we implement a rank reduction strategy to compress this tensor into a suitable low-rank tensor format using standard tensor compression tools. This allows representing the high dimensional integrand as a small sum of products of low dimensional functions. A low dimensional Gauss–Hermite quadrature rule is then used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Finally, numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
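    In each low-rank term, the remaining one-dimensional integrations reduce to standard Gauss-Hermite rules. For illustration, a hand-coded three-point rule (the hard-coded nodes and weights are our own illustration; practical codes generate them to the required order):

    ```python
    import math

    # Three-point Gauss-Hermite rule: integrates f(x) * exp(-x^2) over the
    # real line exactly for polynomial f of degree <= 5.
    NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
    WEIGHTS = [math.sqrt(math.pi) / 6,
               2 * math.sqrt(math.pi) / 3,
               math.sqrt(math.pi) / 6]

    def gauss_hermite_3pt(f):
        return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

    # integral of x^4 * exp(-x^2) dx = 3*sqrt(pi)/4, captured exactly.
    approx = gauss_hermite_3pt(lambda x: x ** 4)
    ```

    In d dimensions, applying such a rule factor-by-factor to a rank-R representation costs O(R * d * n) evaluations instead of the n^d of a full tensor-product grid.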

  13. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on a sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Second, we implement a rank reduction strategy to compress this tensor into a suitable low-rank tensor format using standard tensor compression tools. This allows representing the high dimensional integrand as a small sum of products of low dimensional functions. A low dimensional Gauss–Hermite quadrature rule is then used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Finally, numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.

  14. Evaluating Feynman integrals by the hypergeometry

    NASA Astrophysics Data System (ADS)

    Feng, Tai-Fu; Chang, Chao-Hsi; Chen, Jian-Bin; Gu, Zhi-Hua; Zhang, Hai-Bin

    2018-02-01

    The hypergeometric function method naturally provides analytic expressions for the scalar integrals of the Feynman diagrams concerned in certain connected regions of the independent kinematic variables, and also yields the systems of homogeneous linear partial differential equations satisfied by the corresponding scalar integrals. Taking as examples the one-loop B0 and massless C0 functions, as well as the scalar integrals of the two-loop vacuum and sunset diagrams, we verify that our expressions coincide with well-known results in the literature. Based on the multiple hypergeometric functions of the independent kinematic variables, the systems of homogeneous linear partial differential equations satisfied by these scalar integrals are established. Using the calculus of variations, one recognizes the system of linear partial differential equations as the stationary conditions of a functional under given restrictions, which is the cornerstone for numerically continuing the scalar integrals to the whole kinematic domain with finite element methods. In principle this method can be used to evaluate the scalar integrals of any Feynman diagram.

  15. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A moving least squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.

  16. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral, based on sampling methods. Three different sampling methods, Latin hypercube sampling (LHS), orthogonal array (OA), and orthogonal array-based Latin hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
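    For context, the benchmark quantity itself, the well function W(u) = E1(u), is easy to evaluate for small-to-moderate arguments via its classical convergent series. A sketch (this is the textbook series expansion, not the paper's sampling-based approximation):

    ```python
    import math

    EULER_GAMMA = 0.5772156649015329

    def well_function_series(u, terms=40):
        """Theis well function W(u) = E1(u) via the classical series
        E1(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
        which converges for all u > 0 and is practical for small u."""
        s = 0.0
        sign = 1.0
        fact = 1.0
        for n in range(1, terms + 1):
            fact *= n
            s += sign * u ** n / (n * fact)
            sign = -sign
        return -EULER_GAMMA - math.log(u) + s

    w1 = well_function_series(1.0)     # E1(1) ~ 0.21938
    w001 = well_function_series(0.01)  # E1(0.01) ~ 4.0379
    ```

    For large u the series suffers cancellation, which is one reason alternative approximations, such as the sampling approach above, are of interest.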

  17. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected in order to preserve the edges of the image in segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is obtained by discretizing this integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are presented for comparison with Otsu's method.
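    The Otsu baseline against which the paper compares fits in a few lines; the edge-preserving method replaces Otsu's between-class variance with the discretized edge-energy criterion. A minimal sketch of the baseline (function name and toy data are our own):

    ```python
    def otsu_threshold(values):
        """Otsu's method on a flat list of gray levels: pick the threshold
        t maximizing the between-class variance w0*w1*(mu0-mu1)^2, with
        class 0 = {x <= t} and class 1 = {x > t}."""
        n = len(values)
        best_t, best_var = None, -1.0
        for t in sorted(set(values))[:-1]:   # both classes must be non-empty
            c0 = [x for x in values if x <= t]
            c1 = [x for x in values if x > t]
            w0, w1 = len(c0) / n, len(c1) / n
            mu0 = sum(c0) / len(c0)
            mu1 = sum(c1) / len(c1)
            var_b = w0 * w1 * (mu0 - mu1) ** 2
            if var_b > best_var:
                best_t, best_var = t, var_b
        return best_t

    # A bimodal sample splits between the two modes.
    t = otsu_threshold([1, 1, 2, 2, 8, 9, 9])
    ```

    On real images the same maximization is done over the 256-bin histogram rather than raw pixel lists.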

  18. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
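    The path-sampling identity behind thermodynamic integration, log Z = integral over beta in [0, 1] of the expected log-likelihood under the power posterior, can be checked on a conjugate toy model where those expectations are available in closed form, so no MCMC is needed. A sketch under that assumption (in real applications each beta requires its own MCMC run):

    ```python
    import math

    # Toy model: prior theta ~ N(0, 1), one observation y ~ N(theta, 1).
    # The power posterior p_beta ∝ N(y; theta, 1)^beta * N(theta; 0, 1) is
    # Gaussian with precision 1 + beta and mean beta*y/(1 + beta).
    y = 1.3

    def expected_loglik(beta):
        var = 1.0 / (1.0 + beta)
        mean = beta * y / (1.0 + beta)
        # E[(y - theta)^2] = Var(theta) + (y - E[theta])^2
        return -0.5 * math.log(2 * math.pi) - 0.5 * (var + (y - mean) ** 2)

    def log_marginal_ti(n=2000):
        """Thermodynamic integration: trapezoidal rule along the beta path
        from prior (beta = 0) to posterior (beta = 1)."""
        h = 1.0 / n
        total = 0.5 * (expected_loglik(0.0) + expected_loglik(1.0))
        total += sum(expected_loglik(j * h) for j in range(1, n))
        return h * total

    log_z_ti = log_marginal_ti()
    # Exact marginal: y ~ N(0, 2), so log Z = -0.5*log(4*pi) - y^2/4.
    log_z_exact = -0.5 * math.log(4 * math.pi) - y * y / 4
    ```

    The agreement of the path integral with the exact marginal likelihood is what the analytical test functions in the study verify at larger scale.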

  19. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing situation. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
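    The "direct integration" definition of failure probability, Pf = P(g(X) < 0), can be made concrete in one dimension, where the integral is tractable by elementary quadrature. A sketch with a toy limit state (the limit state g(X) = beta - X and all names are our own one-dimensional stand-in, not the paper's multi-dimensional examples):

    ```python
    import math

    def normal_pdf(x):
        return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

    def failure_prob_direct(beta, steps=50000, span=10.0):
        """Direct integration of Pf = P(g(X) < 0) for g(X) = beta - X with
        X ~ N(0, 1): integrate the standard normal density over x > beta
        with the midpoint rule (truncating the tail at beta + span)."""
        h = span / steps
        return h * sum(normal_pdf(beta + (j + 0.5) * h) for j in range(steps))

    pf = failure_prob_direct(2.0)
    pf_exact = 0.5 * math.erfc(2.0 / math.sqrt(2.0))   # Phi(-2)
    ```

    In higher dimensions the same definition becomes a multiple integral over the failure domain, which is exactly the bottleneck the dual-network surrogate is meant to relieve.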

  20. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
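    The correction-function idea is closely related to the widely used two-sample polyBLEP residual, which is the simplest polynomial instance of subtracting a polynomial approximation of the bandlimited step from the naive step. A sketch of that simplest case (our illustration; the paper's superior variant is the integrated third-order B-spline correction):

    ```python
    def polyblep(t, dt):
        """Two-sample polynomial bandlimited-step residual. t is the
        normalized phase in [0, 1), dt the phase increment per sample.
        Nonzero only within one sample of the waveform discontinuity."""
        if t < dt:                  # just after the wrap point
            x = t / dt
            return 2 * x - x * x - 1
        if t > 1.0 - dt:            # just before the wrap point
            x = (t - 1.0) / dt
            return x * x + 2 * x + 1
        return 0.0

    def saw_polyblep(phase, dt):
        """Naive sawtooth 2*phase - 1 with the polyBLEP residual
        subtracted around its discontinuity to suppress aliasing."""
        return 2.0 * phase - 1.0 - polyblep(phase, dt)

    sample_at_wrap = saw_polyblep(0.0, 0.01)   # smoothed, not a hard jump
    ```

    Away from each discontinuity the corrected waveform equals the naive one, so the per-sample cost is low, which is the tradeoff the listening and benchmark comparisons in the paper quantify.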

  1. Ensemble gene function prediction database reveals genes important for complex I formation in Arabidopsis thaliana.

    PubMed

    Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek

    2018-03-01

    Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  2. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    NASA Astrophysics Data System (ADS)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated through the integral equation method. A Fredholm equation of the second kind is used for calculating the electric current density. In solving the integral equation by the method of moments, the authors properly treat the kernel singularity, choosing piecewise constant functions as basis functions. Within the Kirchhoff integral approach it is then possible to obtain the scattered electromagnetic field from the computed electric currents. The observation angle sector belongs to the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network, in which all neurons contain a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the resulting optimized dimensions of the diffractive body. The paper also presents the basic steps of a calculation technique for diffractive bodies based on the combination of the integral equation and neural network methods.

  3. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  4. Extensive complementarity between gene function prediction methods.

    PubMed

    Vidulin, Vedrana; Šmuc, Tomislav; Supek, Fran

    2016-12-01

    The number of sequenced genomes rises steadily, but we still lack knowledge about the biological roles of many genes. Automated function prediction (AFP) is thus a necessity. We hypothesized that AFP approaches that draw on distinct genome features may be useful for predicting different types of gene functions, motivating a systematic analysis of the benefits gained by obtaining and integrating such predictions. Our pipeline amalgamates 5,133,543 genes from 2071 genomes in a single massive analysis that evaluates five established genomic AFP methodologies. While 1227 Gene Ontology (GO) terms yielded reliable predictions, the majority of these functions were accessible to only one or two of the methods. Moreover, different methods tend to assign a GO term to non-overlapping sets of genes. Thus, inferences made by diverse genomic AFP methods display a striking complementarity, both gene-wise and function-wise. Because of this, a viable integration strategy is to rely on a single most-confident prediction per gene/function, rather than enforcing agreement across multiple AFP methods. Using an information-theoretic approach, we estimate that current databases contain 29.2 bits/gene of known Escherichia coli gene functions. This can be increased by up to 5.5 bits/gene using individual AFP methods, or by 11 additional bits/gene upon integration, thereby providing a highly ranking predictor on the Critical Assessment of Function Annotation 2 community benchmark. Availability of more sequenced genomes boosts the predictive accuracy of AFP approaches and also the benefit from integrating them. The individual and integrated GO predictions for the complete set of genes are available from http://gorbi.irb.hr/. Contact: fran.supek@irb.hr. Supplementary information: Supplementary materials are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Calculation of Moment Matrix Elements for Bilinear Quadrilaterals and Higher-Order Basis Functions

    DTIC Science & Technology

    2016-01-06

    methods are known as boundary integral equation (BIE) methods and the present study falls into this category. The numerical solution of the BIE is ... iterated integrals. The inner integral involves the product of the free-space Green's function for the Helmholtz equation multiplied by an appropriate ...

  6. A Galerkin discretisation-based identification for parameters in nonlinear mechanical systems

    NASA Astrophysics Data System (ADS)

    Liu, Zuolin; Xu, Jian

    2018-04-01

    In this paper, a new parameter identification method is proposed for mechanical systems. Based on the idea of the Galerkin finite-element method, the displacement time history is approximated by piecewise linear functions, and the second-order terms in the model equation are eliminated by integration by parts. In this way, a loss function in integral form is derived. Unlike existing methods, this loss function is a quadratic sum of integrals over the whole time history. The loss function can then be optimised with the traditional least-squares algorithm for linear systems, or with its iterative counterpart for nonlinear ones. The method can effectively identify parameters in linear and arbitrary nonlinear mechanical systems. Simulation results show that even with sparse data or a low sampling frequency, the method still guarantees high accuracy in identifying linear and nonlinear parameters.
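
    As a toy illustration of the weak-form idea above (our own construction, not the authors' code), the following sketch identifies the damping c and stiffness k of a linear oscillator x'' + c x' + k x = 0 from displacement samples alone: integrating by parts against piecewise-linear "hat" test functions removes the second derivative, leaving a least-squares problem in the samples themselves.

```python
import numpy as np

def identify_ck(t, x):
    """Estimate (c, k) in x'' + c x' + k x = 0 from uniform samples of x."""
    h = t[1] - t[0]
    xm, x0, xp = x[:-2], x[1:-1], x[2:]
    # Moments of the ODE terms against interior hat functions phi_j:
    M2 = (xp - 2.0 * x0 + xm) / h        # = int x'' phi_j dt (exact after parts)
    M1 = (xp - xm) / 2.0                 # ~ int x'  phi_j dt (trapezoid after parts)
    M0 = h * (xm + 4.0 * x0 + xp) / 6.0  # ~ int x   phi_j dt
    # Least squares on the residuals M2 + c*M1 + k*M0 = 0.
    A = np.column_stack([M1, M0])
    ck, *_ = np.linalg.lstsq(A, -M2, rcond=None)
    return ck

# Synthetic data: x = exp(-0.1 t) cos(2 t) solves x'' + 0.2 x' + 4.01 x = 0.
t = np.arange(0.0, 20.0, 0.01)
x = np.exp(-0.1 * t) * np.cos(2.0 * t)
c_est, k_est = identify_ck(t, x)
```

    No derivative data are needed: the second-derivative moment reduces exactly to a difference of samples because the hat function's derivative is piecewise constant.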

  7. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

    We integrated recent improvements within the floating catchment area (FCA) method family into an integrated ‘iFCA’ method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. We therefore developed a variable distance decay function for use within the FCA method. We were able to replace the impedance coefficient β with readily available distribution parameters (i.e. the median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, effective variable catchment sizes, defined by the distance decay function's asymptotic approach to zero, were shown to exist, satisfying the need for variable catchment sizes. The application of the iFCA method in an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduce, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, it yields effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
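
    A minimal sketch of the idea of a distribution-shaped logistic decay weight. The specific functional form and the SD-to-slope mapping below (logistic-distribution scale s = SD·√3/π) are our assumptions for illustration, not the paper's published formula.

```python
import math

def logistic_decay(d, median, sd):
    """Weight in [0, 1] for a provider at distance d, shaped by the local
    median and SD of population-to-provider distances."""
    s = sd * math.sqrt(3.0) / math.pi   # logistic scale with this SD (assumed)
    return 1.0 / (1.0 + math.exp((d - median) / s))

# Toy distances (km) from one population location to its providers.
distances = [1.0, 2.0, 4.0, 8.0, 16.0]
med, sd = 4.0, 3.0
weights = [logistic_decay(d, med, sd) for d in distances]
```

    The weight is 0.5 at the median distance and decays towards zero, which is what produces the "effective variable catchment size" mentioned above: beyond some distance the weight is numerically negligible.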

  8. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care

    PubMed Central

    Valentijn, Pim P.; Schepman, Sanneke M.; Opheij, Wilfrid; Bruijnzeels, Marc A.

    2013-01-01

    Introduction Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. Methods The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. Results The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) levels. Functional and normative integration ensure connectivity between the levels. Discussion The presented conceptual framework is a first step towards a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective. PMID:23687482

  9. LETTER TO THE EDITOR: Two-centre exchange integrals for complex exponent Slater orbitals

    NASA Astrophysics Data System (ADS)

    Kuang, Jiyun; Lin, C. D.

    1996-12-01

    The one-dimensional integral representation for the Fourier transform of a two-centre product of B functions (finite linear combinations of Slater orbitals) with real parameters is generalized to include B functions with complex parameters. This one-dimensional integral representation allows for an efficient method of calculating two-centre exchange integrals with plane-wave electronic translational factors (ETF) over Slater orbitals of real/complex exponents. This method is a significant improvement over the previous two-dimensional quadrature method for these integrals. A new basis set of the form 0953-4075/29/24/005/img1 is proposed to improve the description of pseudo-continuum states in the close-coupling treatment of ion - atom collisions.

  10. An Alternative Method to the Classical Partial Fraction Decomposition

    ERIC Educational Resources Information Center

    Cherif, Chokri

    2007-01-01

    PreCalculus students can use the Completing the Square Method to solve quadratic equations without the need to memorize the quadratic formula since this method naturally leads them to that formula. Calculus students, when studying integration, use various standard methods to compute integrals depending on the type of function to be integrated.…

  11. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    The path integral is a method for transforming a function from its initial condition to its final condition by multiplying the initial condition by a transition probability function known as the propagator. Early studies applied this method only to problems in quantum mechanics; with suitable modifications to the propagator, however, the path integral can be applied to other subjects as well. In this study, we investigate the application of the path integral method to financial derivatives, namely stock options. The Black-Scholes model (Nobel Prize, 1997) was the starting anchor of option pricing studies. Although the model does not predict option prices perfectly, in particular because of its sensitivity to major market changes, it remains a legitimate equation for pricing an option. The derivation of the Black-Scholes equation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a basic principle with the path integral: in Black-Scholes, the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we compare the analytical path-integral solution with a numerical Monte-Carlo solution to examine the agreement between the two methods.
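
    The analytic-versus-Monte-Carlo comparison mentioned above can be sketched in a few lines: the closed-form Black-Scholes European call price against a Monte-Carlo estimate under geometric Brownian motion. The parameter values are illustrative only.

```python
import math, random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=1):
    """Monte-Carlo price: average discounted payoff over GBM terminal prices."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return disc * total / n

analytic = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

    With 200 000 paths the Monte-Carlo standard error is a few cents, so the two prices agree closely, mirroring the correlation studied in the abstract.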

  12. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
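
    A hedged sketch of first-kind deconvolution with a two-parameter Tikhonov stabiliser α₀‖f‖² + α₁‖Df‖². The kernel, the stabiliser, and the parameter values below are our illustrative choices, not those of the paper.

```python
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

def gauss_kernel(s, w=0.01):
    return np.exp(-(s ** 2) / (2.0 * w ** 2))

K = h * gauss_kernel(t[:, None] - t[None, :])       # discretised convolution
f_true = np.exp(-((t - 0.3) ** 2) / 0.005) + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.002)
rng = np.random.default_rng(0)
g_noisy = K @ f_true + 1e-4 * rng.standard_normal(n)

D = (np.eye(n, k=1) - np.eye(n))[:-1] / h           # first-difference operator

def tikhonov(K, g, a0, a1):
    # Two-parameter stabiliser: a0*||f||^2 + a1*||D f||^2.
    A = K.T @ K + a0 * np.eye(K.shape[1]) + a1 * (D.T @ D)
    return np.linalg.solve(A, K.T @ g)

f_reg = tikhonov(K, g_noisy, 1e-6, 1e-8)
f_naive = np.linalg.solve(K, g_noisy)               # unregularised: noise amplified

err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
```

    The unregularised solve amplifies the noise through the kernel's tiny singular values, while the two-parameter stabiliser keeps the reconstruction close to the true signal.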

  13. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals.

    PubMed

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-21

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  14. Fast evaluation of solid harmonic Gaussian integrals for local resolution-of-the-identity methods and range-separated hybrid functionals

    NASA Astrophysics Data System (ADS)

    Golze, Dorothea; Benedikter, Niels; Iannuzzi, Marcella; Wilhelm, Jan; Hutter, Jürg

    2017-01-01

    An integral scheme for the efficient evaluation of two-center integrals over contracted solid harmonic Gaussian functions is presented. Integral expressions are derived for local operators that depend on the position vector of one of the two Gaussian centers. These expressions are then used to derive the formula for three-index overlap integrals where two of the three Gaussians are located at the same center. The efficient evaluation of the latter is essential for local resolution-of-the-identity techniques that employ an overlap metric. We compare the performance of our integral scheme to the widely used Cartesian Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials such as standard Coulomb, modified Coulomb, and Gaussian-type operators, which occur in range-separated hybrid functionals, are also included in the performance tests. The speed-up with respect to the OS scheme is up to three orders of magnitude for both integrals and their derivatives. In particular, our method is increasingly efficient for large angular momenta and highly contracted basis sets.

  15. Solving the Schroedinger Equation of Atoms and Molecules without Analytical Integration Based on the Free Iterative-Complement-Interaction Wave Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510

    2007-12-14

    A local Schroedinger equation (LSE) method is proposed for solving the Schroedinger equation (SE) of general atoms and molecules without doing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume a flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2- to 5-electron atoms and molecules, giving an accuracy of 10⁻⁵ Hartree in total energy. The potential energy curves of H₂ and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potentiality of the free ICI LSE method for developing accurate predictive quantum chemistry with the solutions of the SE.

  16. On computing closed forms for summations. [polynomials and rational functions

    NASA Technical Reports Server (NTRS)

    Moenck, R.

    1977-01-01

    The problem of finding closed forms for a summation involving polynomials and rational functions is considered. A method closely related to Hermite's method for the integration of rational functions is derived. The method expresses the sum of a rational function as a rational-function part and a transcendental part involving derivatives of the gamma function.

  17. Method for determining the weight of functional objectives on manufacturing system.

    PubMed

    Zhang, Qingshan; Xu, Wei; Zhang, Jiekun

    2014-01-01

    We propose a three-dimensional integrated weight determination for the functional objectives of a manufacturing system: the consumer weights are determined with triangular fuzzy numbers, the subjective weights by the expert scoring method, and the objective weights by the entropy method, which captures competitive advantage. Based on the integration of the three weights into a comprehensive weight, we provide some suggestions for the manufacturing system. A numerical example illustrates the feasibility of the method.
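
    The entropy method mentioned above has a standard textbook form (sketched here; the paper's exact variant may differ): criteria whose values vary more across alternatives carry more information and therefore receive larger objective weights.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from a decision matrix X
    (rows = alternatives, columns = criteria, all entries positive)."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                       # column-normalise to proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)     # entropy of each criterion
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()

# Toy decision matrix: criterion 2 is constant, criterion 3 is most dispersed.
X = np.array([[0.9, 5.0, 1.0],
              [0.8, 5.0, 3.0],
              [0.7, 5.0, 9.0]])
w = entropy_weights(X)
```

    A constant criterion has maximal entropy and gets (essentially) zero weight; the most dispersed criterion dominates.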

  18. The use of rational functions in numerical quadrature

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter

    2001-08-01

    Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.

  19. Paired Pulse Basis Functions for the Method of Moments EFIE Solution of Electromagnetic Problems Involving Arbitrarily-shaped, Three-dimensional Dielectric Scatterers

    NASA Technical Reports Server (NTRS)

    MacKenzie, Anne I.; Rao, Sadasiva M.; Baginski, Michael E.

    2007-01-01

    A pair of basis functions is presented for the surface-integral, method-of-moments solution of scattering by arbitrarily-shaped, three-dimensional dielectric bodies. Equivalent surface currents are represented by orthogonal unit pulse vectors in conjunction with triangular patch modeling. The electric field integral equation is employed with closed geometries for dielectric bodies; the method may also be applied to conductors. Radar cross section results are shown for dielectric bodies having canonical spherical, cylindrical, and cubic shapes. Pulse basis function results are compared to results by other methods.

  20. An efficient numerical method for the solution of the problem of elasticity for 3D-homogeneous elastic medium with cracks and inclusions

    NASA Astrophysics Data System (ADS)

    Kanaun, S.; Markov, A.

    2017-06-01

    An efficient numerical method is developed for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions). A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes are considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used; the method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz properties, and the fast Fourier transform technique can be used to calculate matrix-vector products with such matrices.
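
    The Toeplitz remark above is what enables the fast matrix-vector products: a Toeplitz matrix can be embedded in a circulant matrix of twice the size, whose action is diagonalised by the FFT, giving O(N log N) products. A generic self-contained sketch (synthetic Toeplitz data, not the elasticity kernel of the paper):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x in O(N log N) via circulant embedding and the FFT."""
    n = len(x)
    # First column of the size-2n circulant embedding of T.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:n].real

rng = np.random.default_rng(0)
n = 64
col = rng.standard_normal(n)
row = np.concatenate([[col[0]], rng.standard_normal(n - 1)])
x = rng.standard_normal(n)

# Dense reference: T[i, j] = col[i - j] for i >= j, row[j - i] otherwise.
T = np.empty((n, n))
for i in range(n):
    for j in range(n):
        T[i, j] = col[i - j] if i >= j else row[j - i]
fast = toeplitz_matvec(col, row, x)
```

    This is the standard building block behind FFT-accelerated solution of discretized volume integral equations on regular grids.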

  1. Functional integration of automated system databases by means of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of using databases in systems with a fuzzy implementation of functions are analyzed, and requirements for the normalization of such databases are defined. The question of data equivalence under uncertainty, and of collisions arising from the functional integration of databases, is considered, and a model to reveal their possible occurrence is devised. The paper also presents a method for evaluating the normalization of integrated databases.

  2. Restoring a smooth function from its noisy integrals

    NASA Astrophysics Data System (ADS)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
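
    A toy analogue of the problem setup above: given only the integrals of a function over finite bins, fit the smoothest simple model (here a low-degree polynomial) whose bin integrals reproduce the data. The real bin hierarchy method is considerably more sophisticated; this sketch only illustrates the inverse problem.

```python
import numpy as np

f = np.sin
edges = np.linspace(0.0, np.pi, 11)                       # 10 bins on [0, pi]
bin_integrals = np.cos(edges[:-1]) - np.cos(edges[1:])    # exact integrals of sin

deg = 6
# Integral of x^k over [a, b] is (b^{k+1} - a^{k+1}) / (k + 1).
A = np.array([[(b ** (k + 1) - a ** (k + 1)) / (k + 1)
               for k in range(deg + 1)]
              for a, b in zip(edges[:-1], edges[1:])])
coef, *_ = np.linalg.lstsq(A, bin_integrals, rcond=None)

xs = np.linspace(0.0, np.pi, 200)
recovered = sum(c * xs ** k for k, c in enumerate(coef))
```

    Ten bin integrals over-determine the seven polynomial coefficients, and the least-squares fit recovers the smooth function to well below a percent here; with noisy data one would trade fit quality against smoothness, which is where the hierarchy of bins comes in.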

  3. Apparatus, method and system to control accessibility of platform resources based on an integrity level

    DOEpatents

    Jenkins, Chris; Pierson, Lyndon G.

    2016-10-25

    Techniques and mechanism to selectively provide resource access to a functional domain of a platform. In an embodiment, the platform includes both a report domain to monitor the functional domain and a policy domain to identify, based on such monitoring, a transition of the functional domain from a first integrity level to a second integrity level. In response to a change in integrity level, the policy domain may configure the enforcement domain to enforce against the functional domain one or more resource accessibility rules corresponding to the second integrity level. In another embodiment, the policy domain automatically initiates operations in aid of transitioning the platform from the second integrity level to a higher integrity level.

  4. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is therefore the exponential functions themselves, or their low-order diagonal Padé (rational-function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE despite using a primitive stepsize control strategy.
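
    The core idea can be seen on the stiff scalar test equation y' = a·y with a ≪ 0 (a toy model of the approach, not the CREK1D algorithm itself): an exponential interpolant reproduces the exact decay for any step size, while the polynomial-based explicit Euler step is unstable once |a·h| > 2.

```python
import math

a, h, steps = -50.0, 0.1, 40      # stiff decay rate, large step: |a*h| = 5
y_exp, y_euler = 1.0, 1.0
for _ in range(steps):
    y_exp *= math.exp(a * h)      # exponential-fitted step (exact for y' = a*y)
    y_euler *= (1.0 + a * h)      # explicit Euler (polynomial interpolant)

exact = math.exp(a * h * steps)   # true solution at the final time
```

    The exponentially fitted step tracks the exact solution to rounding error, while explicit Euler oscillates with growing amplitude, which is precisely the stiffness pathology the abstract describes.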

  5. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    2014-08-01

    In this paper, a new computational method based on generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower-triangular systems of algebraic equations which can be solved directly by forward substitution. The rate of convergence of the proposed method is also considered and shown to be O(1/n²). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method on several examples. The obtained results reveal that the proposed method is more accurate and efficient than the block pulse functions method.

  6. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
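
    A simplified sketch of the time-decomposition idea, with a synthetic exponential autocorrelation function standing in for molecular dynamics data (the real method fits a double exponential weighted by the inter-trajectory standard deviation; we only average running integrals and read off a cutoff, so all numbers here are illustrative).

```python
import numpy as np

rng = np.random.default_rng(42)
C0, tau = 2.0, 1.0                       # synthetic ACF: C(t) = C0 * exp(-t/tau)
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
true_value = C0 * tau                    # exact infinite-time Green-Kubo integral

runs = []
for _ in range(50):                      # 50 "independent trajectories"
    acf = C0 * np.exp(-t / tau) + 0.05 * rng.standard_normal(t.size)
    # Running Green-Kubo integral (cumulative trapezoid rule).
    running = np.concatenate([[0.0],
                              np.cumsum((acf[1:] + acf[:-1]) / 2.0) * dt])
    runs.append(running)
runs = np.array(runs)

mean_running = runs.mean(axis=0)
spread = runs.std(axis=0)                # grows with t: tail noise random-walks
estimate = mean_running[t <= 5.0][-1]    # read off near a cutoff t_cut = 5*tau
```

    The growing spread of the running integrals is exactly why a cutoff time is needed: beyond a few correlation times the integrand is dominated by noise, and integrating further only accumulates a random walk.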

  7. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  8. Method to manage integration error in the Green-Kubo method.

    PubMed

    Oliveira, Laura de Sousa; Greaney, P Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.

  9. Method to manage integration error in the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Oliveira, Laura de Sousa; Greaney, P. Alex

    2017-02-01

    The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.

  10. Thermal form-factor approach to dynamical correlation functions of integrable lattice models

    NASA Astrophysics Data System (ADS)

    Göhmann, Frank; Karbach, Michael; Klümper, Andreas; Kozlowski, Karol K.; Suzuki, Junji

    2017-11-01

    We propose a method for calculating dynamical correlation functions at finite temperature in integrable lattice models of Yang-Baxter type. The method is based on an expansion of the correlation functions as a series over matrix elements of a time-dependent quantum transfer matrix rather than the Hamiltonian. In the infinite Trotter-number limit the matrix elements become time independent and turn into the thermal form factors studied previously in the context of static correlation functions. We make this explicit with the example of the XXZ model. We show how the form factors can be summed utilizing certain auxiliary functions solving finite sets of nonlinear integral equations. The case of the XX model is worked out in more detail leading to a novel form-factor series representation of the dynamical transverse two-point function.

  11. Method for Determining the Weight of Functional Objectives on Manufacturing System

    PubMed Central

    Zhang, Qingshan; Xu, Wei; Zhang, Jiekun

    2014-01-01

    We propose a three-dimensional integrated weight determination for the functional objectives of a manufacturing system: the consumer weights are determined with triangular fuzzy numbers, the subjective weights by the expert scoring method, and the objective weights by the entropy method, which captures competitive advantage. Based on the integration of the three weights into a comprehensive weight, we provide some suggestions for the manufacturing system. A numerical example illustrates the feasibility of the method. PMID:25243203

  12. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.

  13. A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2007-09-01

    Trigonometrically fitted symplectic partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods exactly integrate differential systems whose solutions can be expressed as linear combinations of the functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
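
    As background for why symplecticity matters here (a first-order sketch, not the fifth-order method of the paper): the simplest partitioned scheme, symplectic Euler, keeps the energy error of the harmonic oscillator q' = p, p' = -w²q bounded over very long runs, whereas non-symplectic explicit Euler drifts exponentially. Trigonometric fitting would additionally make the oscillation itself exact for the fitted frequency w.

```python
import math

w, dt, steps = 1.0, 0.05, 20000

def energy(q, p):
    return 0.5 * p * p + 0.5 * w * w * q * q

q_s, p_s = 1.0, 0.0   # symplectic Euler state
q_e, p_e = 1.0, 0.0   # explicit Euler state
for _ in range(steps):
    p_s -= dt * w * w * q_s   # kick first ...
    q_s += dt * p_s           # ... then drift: symplectic Euler
    q_e, p_e = q_e + dt * p_e, p_e - dt * w * w * q_e   # explicit Euler

e0 = energy(1.0, 0.0)
drift_sympl = abs(energy(q_s, p_s) - e0)
drift_euler = abs(energy(q_e, p_e) - e0)
```

    Over 20 000 steps the symplectic energy error stays at the O(dt) level, while explicit Euler's energy grows by the factor (1 + w²dt²) every step.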

  14. Mathematical Methods for Physics and Engineering Third Edition Paperback Set

    NASA Astrophysics Data System (ADS)

    Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.

    2006-06-01

    Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.

  15. Efficient evaluation of the material response of tissues reinforced by statistically oriented fibres

    NASA Astrophysics Data System (ADS)

    Hashlamoun, Kotaybah; Grillo, Alfio; Federico, Salvatore

    2016-10-01

    For several classes of soft biological tissues, modelling complexity is in part due to the arrangement of the collagen fibres. In general, the arrangement of the fibres can be described by defining, at each point in the tissue, the structure tensor (i.e. the tensor product of the unit vector of the local fibre arrangement by itself) and a probability distribution of orientation. In this approach, assuming that the fibres do not interact with each other, the overall contribution of the collagen fibres to a given mechanical property of the tissue can be estimated by means of an averaging integral, over the set of all possible directions in space, of the constitutive function describing the mechanical property under study. Except for the particular case of fibre constitutive functions that are polynomial in the transversely isotropic invariants of the deformation, the averaging integral cannot be evaluated directly in a single calculation, because, in general, the integrand depends both on deformation and on fibre orientation in a non-separable way. The problem is thus, in a sense, analogous to that of solving the integral of a function of two variables that cannot be split up into the product of two functions, each depending on only one of the variables. Although numerical schemes can be used to evaluate the integral at each deformation increment, this is computationally expensive. With the purpose of containing computational costs, this work proposes approximation methods that are based on the direct integrability of polynomial functions and that do not require the step-by-step evaluation of the averaging integrals.
Three different methods are proposed: (a) a Taylor expansion of the fibre constitutive function in the transversely isotropic invariants of the deformation; (b) a Taylor expansion of the fibre constitutive function in the structure tensor; (c) for the case of a fibre constitutive function having a polynomial argument, an approximation in which the directional average of the constitutive function is replaced by the constitutive function evaluated at the directional average of the argument. Each of the proposed methods approximates the averaged constitutive function in such a way that it is multiplicatively decomposed into the product of a function of the deformation only and a function of the structure tensors only. In order to assess the accuracy of these methods, we evaluate the constitutive functions of the elastic potential and the Cauchy stress, for a biaxial test, under different conditions, i.e. different fibre distributions and different ratios of the nominal strains in the two directions. The results are then compared against those obtained for an averaging method available in the literature, as well as against the integration made at each increment of deformation.
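
    The gap between the exact directional average and an approximation of type (c) can be illustrated numerically. The sketch below uses a toy quartic fibre response f(n) = (n·e3)^4 and a uniform orientation distribution (both illustrative assumptions, not the paper's constitutive functions): the Monte Carlo directional average tends to the exact value 1/5, while evaluating f at the averaged structure tensor gives (1/3)^2 = 1/9:

    ```python
    import math, random

    random.seed(42)

    def random_unit_vector():
        # uniform direction on the sphere: z = cos(theta) uniform on [-1, 1]
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    N = 200000
    avg_f = 0.0    # directional average of f(n) = (n . e3)^4
    avg_nn = 0.0   # 33-component of the averaged structure tensor <n (x) n>
    for _ in range(N):
        nz = random_unit_vector()[2]
        avg_f += nz ** 4
        avg_nn += nz ** 2
    avg_f /= N
    avg_nn /= N

    print(avg_f)        # ~ 1/5: exact directional average
    print(avg_nn ** 2)  # ~ 1/9: f evaluated at the averaged argument
    ```

    The discrepancy (1/5 versus 1/9) is exactly why such approximations must be assessed against the full averaging integral, as the paper does for the elastic potential and the Cauchy stress.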

  16. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  17. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  18. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem arising in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
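
    The key step, reducing the integral operator to a discrete convolution that Fourier techniques can invert, can be mimicked in one dimension. The sketch below (toy kernel and data, a plain O(N^2) DFT standing in for the FFT, all illustrative) recovers the unknown by elementwise division of spectra:

    ```python
    import cmath

    def dft(a):
        n = len(a)
        return [sum(a[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    def idft(A):
        n = len(A)
        return [sum(A[k] * cmath.exp(2j * cmath.pi * k * t / n)
                    for k in range(n)) / n for t in range(n)]

    def circular_convolve(ker, x):
        n = len(ker)
        return [sum(ker[j] * x[(t - j) % n] for j in range(n)) for t in range(n)]

    # known kernel (with nonvanishing spectrum) and an unknown to be recovered
    kern = [3.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
    x_true = [0.0, 1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0]
    b = circular_convolve(kern, x_true)

    # deconvolve: divide spectra elementwise, then transform back
    K, B = dft(kern), dft(b)
    x_rec = [v.real for v in idft([B[i] / K[i] for i in range(len(B))])]
    print(x_rec)  # ~ x_true
    ```

    With an FFT in place of the naive DFT, the whole solve costs O(N log N) per convolution, which is the source of the complexity reduction claimed in the abstract.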

  19. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function, using high angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument, to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  20. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care.

    PubMed

    Valentijn, Pim P; Schepman, Sanneke M; Opheij, Wilfrid; Bruijnzeels, Marc A

    2013-01-01

    Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective.

  1. A T Matrix Method Based upon Scalar Basis Functions

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Kahnert, F. M.; Mishchenko, Michael I.

    2013-01-01

    A surface integral formulation is developed for the T matrix of a homogeneous and isotropic particle of arbitrary shape, which employs scalar basis functions represented by the translation matrix elements of the vector spherical wave functions. The formulation begins with the volume integral equation for scattering by the particle, which is transformed so that the vector and dyadic components in the equation are replaced with associated dipole and multipole level scalar harmonic wave functions. The approach leads to a volume integral formulation for the T matrix, which can be extended, by use of Green's identities, to the surface integral formulation. The result is shown to be equivalent to the traditional surface integral formulas based on the VSWF basis.

  2. [Methodological aspects related to the determination of the relative renal function using 99mTC MAG3].

    PubMed

    Ladrón De Guevara Hernández, D; Ham, H; Franken, P; Piepsz, A; Lobo Sotomayor, G

    2002-01-01

    The aim of the study was to evaluate three different methods for calculating the split renal function in patients with only one functioning kidney, keeping in mind that the split function should be zero on the side of the non-functioning kidney. We retrospectively selected 28 99mTc MAG3 renograms performed in children, 12 with unilateral nephrectomy, 4 with unilateral agenesis and 12 with a non-functioning kidney. A renal and perirenal region of interest (ROI) were delineated around the functioning kidney. The ROIs around the empty kidney were drawn symmetrically to the contralateral side. The split renal function was calculated using three different methods, the integral method, the slope method and the Patlak-Rutland algorithm. For the whole group of 28 kidneys as well as for the three categories of patients, the three methods provided a split function on the side of the non-functioning kidney close to the zero value, regardless of whether the empty kidney was the left or the right one. We recommend the use of the integral method for the whole range of split renal function with 99mTc MAG3. No significant improvement was obtained by means of the more sophisticated Patlak-Rutland method.

  3. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
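
    The NAF construction can be caricatured in a few lines: build a toy three-center tensor, form its Gram matrix over the auxiliary index, and take the dominant eigenvector as the first natural auxiliary function. Power iteration here stands in for the full decomposition of the paper, and all sizes and values are illustrative:

    ```python
    import math, random

    random.seed(0)
    naux, nao = 8, 4   # auxiliary and orbital basis sizes (toy numbers)

    # toy three-center integrals B[P][i*nao+j], symmetric in (i, j)
    B = [[0.0] * (nao * nao) for _ in range(naux)]
    for P in range(naux):
        for i in range(nao):
            for j in range(i + 1):
                v = random.gauss(0.0, 1.0) / (1.0 + P + abs(i - j))
                B[P][i * nao + j] = B[P][j * nao + i] = v

    # Gram matrix over the auxiliary index, W = B B^T
    W = [[sum(B[P][m] * B[Q][m] for m in range(nao * nao))
          for Q in range(naux)] for P in range(naux)]

    # power iteration for the dominant eigenvector of W: the first NAF
    v = [1.0] * naux
    for _ in range(200):
        w = [sum(W[P][Q] * v[Q] for Q in range(naux)) for P in range(naux)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]

    # single-NAF (rank-1) approximation of B and its residual norm
    coeff = [sum(v[P] * B[P][m] for P in range(naux)) for m in range(nao * nao)]
    res = sum((B[P][m] - v[P] * coeff[m]) ** 2
              for P in range(naux) for m in range(nao * nao))
    tot = sum(B[P][m] ** 2 for P in range(naux) for m in range(nao * nao))
    print(res / tot)  # fraction of the tensor norm missed by one NAF
    ```

    Keeping more eigenvectors systematically shrinks the residual, which is the sense in which the fitting basis can be truncated in a controlled way.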

  4. Student Solution Manual for Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.

  5. Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.

  6. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of an inaccessible continuous variable from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and, a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.
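
    The paper's posterior evaluation is far richer than any toy, but the basic Metropolis step with a positivity constraint, the kind of nonanalytic constraint mentioned above, can be sketched as follows. The target density, proposal width, and sample counts are all illustrative assumptions:

    ```python
    import math, random

    random.seed(1)

    def log_posterior(x):
        # toy posterior: Gaussian likelihood centred at 2 with a positivity
        # constraint, mimicking a nonanalytic (positive-only) parameter
        return -0.5 * (x - 2.0) ** 2 if x > 0.0 else float("-inf")

    samples, x, logp = [], 2.0, 0.0
    for step in range(50000):
        prop = x + random.gauss(0.0, 0.5)
        lp = log_posterior(prop)
        if random.random() < math.exp(min(0.0, lp - logp)):  # Metropolis rule
            x, logp = prop, lp
        if step >= 5000:  # discard burn-in
            samples.append(x)

    mean = sum(samples) / len(samples)
    print(mean)  # near 2; the positivity constraint shifts it only slightly
    ```

    Proposals that violate the constraint get log-posterior minus infinity and are always rejected, so every retained sample automatically respects positivity, with no need for an analytic treatment of the constraint.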

  7. Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2006-03-01

    Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.

  8. Density-functional expansion methods: evaluation of LDA, GGA, and meta-GGA functionals and different integral approximations.

    PubMed

    Giese, Timothy J; York, Darrin M

    2010-12-28

    We extend the Kohn-Sham potential energy expansion (VE) to include variations of the kinetic energy density and use the VE formulation with a 6-31G* basis to perform a "Jacob's ladder" comparison of small molecule properties using density functionals classified as being either LDA, GGA, or meta-GGA. We show that the VE reproduces standard Kohn-Sham DFT results well if all integrals are performed without further approximation, and there is no substantial improvement in using meta-GGA functionals relative to GGA functionals. The advantages of using GGA versus LDA functionals become apparent when modeling hydrogen bonds. We furthermore examine the effect of using integral approximations to compute the zeroth-order energy and first-order matrix elements, and the results suggest that the origin of the short-range repulsive potential within self-consistent charge density-functional tight-binding methods mainly arises from the approximations made to the first-order matrix elements.

  9. Optical Spatial integration methods for ambiguity function generation

    NASA Technical Reports Server (NTRS)

    Tamura, P. N.; Rebholz, J. J.; Daehlin, O. T.; Lee, T. C.

    1981-01-01

    A coherent optical spatial integration approach to ambiguity function generation is described. It uses one-dimensional acousto-optic Bragg cells as input transducers in conjunction with a space-variant linear phase shifter, a passive optical element, to generate the two-dimensional ambiguity function in one exposure. Results of a real-time implementation of this system are shown.
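
    The discrete narrowband ambiguity function that the optical system produces in one exposure can be computed digitally for comparison. The sketch below (a toy chirp signal, direct summation rather than any optical or FFT shortcut) evaluates A(tau, nu) = sum_t s[t] s*[t-tau] exp(-2*pi*i*nu*t/N), whose magnitude peaks at the origin with value equal to the signal energy:

    ```python
    import cmath

    def ambiguity(s, tau, nu):
        """Discrete narrowband ambiguity function by direct summation."""
        n = len(s)
        total = 0.0 + 0.0j
        for t in range(n):
            if 0 <= t - tau < n:
                total += (s[t] * s[t - tau].conjugate()
                          * cmath.exp(-2j * cmath.pi * nu * t / n))
        return total

    # linear-FM (chirp) test signal, unit modulus
    n = 64
    s = [cmath.exp(1j * cmath.pi * 0.02 * t * t) for t in range(n)]

    energy = abs(ambiguity(s, 0, 0))
    off_peak = abs(ambiguity(s, 10, 5))
    print(energy)              # = sum of |s[t]|^2 = 64 for this signal
    print(off_peak < energy)   # True: the surface peaks at the origin
    ```

    By the Cauchy-Schwarz inequality, off-origin values can never exceed the energy at (0, 0), which is the basic structure any ambiguity-function generator, optical or digital, must reproduce.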

  10. The description of two-photon Rabi oscillations in the path integral approach

    NASA Astrophysics Data System (ADS)

    Biryukov, A. A.; Degtyareva, Ya. V.; Shleenkov, M. A.

    2018-04-01

    The probability of quantum transitions of a molecule between its states under the action of an electromagnetic field is represented as an integral over trajectories of a real, alternating functional. A method is proposed for computing the integral using recurrence relations. The method is applied to describe two-photon Rabi oscillations.

  11. Gene context analysis in the Integrated Microbial Genomes (IMG) data management system.

    PubMed

    Mavromatis, Konstantinos; Chu, Ken; Ivanova, Natalia; Hooper, Sean D; Markowitz, Victor M; Kyrpides, Nikos C

    2009-11-24

    Computational methods for determining the function of genes in newly sequenced genomes have been traditionally based on sequence similarity to genes whose function has been identified experimentally. Function prediction methods can be extended using gene context analysis approaches, such as examining the conservation of chromosomal gene clusters, gene fusion events and co-occurrence profiles across genomes. Context analysis is based on the observation that functionally related genes often have similar gene context, and relies on the identification of such events across a phylogenetically diverse collection of genomes. We have used the data management system of the Integrated Microbial Genomes (IMG) as the framework to implement and explore the power of gene context analysis methods, because it provides one of the largest available genome integrations. Visualization and search tools to facilitate gene context analysis have been developed and applied across all publicly available archaeal and bacterial genomes in IMG. These computations are now maintained as part of IMG's regular genome content update cycle. IMG is available at: http://img.jgi.doe.gov.

  12. Microfluidic structures and methods for integrating a functional component into a microfluidic device

    DOEpatents

    Simmons, Blake [San Francisco, CA; Domeier, Linda [Danville, CA; Woo, Noble [San Gabriel, CA; Shepodd, Timothy [Livermore, CA; Renzi, Ronald F [Tracy, CA

    2008-04-01

    Injection molding is used to form microfluidic devices with integrated functional components. One or more functional components are placed in a mold cavity which is then closed. Molten thermoplastic resin is injected into the mold and then cooled, thereby forming a solid substrate including the functional component(s). The solid substrate including the functional component(s) is then bonded to a second substrate which may include microchannels or other features.

  13. Methods for integrating a functional component into a microfluidic device

    DOEpatents

    Simmons, Blake; Domeier, Linda; Woo, Noble; Shepodd, Timothy; Renzi, Ronald F.

    2014-08-19

    Injection molding is used to form microfluidic devices with integrated functional components. One or more functional components are placed in a mold cavity, which is then closed. Molten thermoplastic resin is injected into the mold and then cooled, thereby forming a solid substrate including the functional component(s). The solid substrate including the functional component(s) is then bonded to a second substrate, which may include microchannels or other features.

  14. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.

  15. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
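
    The signal chain described in these two records, integrate the broadband accelerometer signal to a velocity signal and compare it against a selected trip point, can be sketched numerically. The sampling rate, vibration amplitude, and trip levels below are illustrative, not values from the patent:

    ```python
    import math

    def integrate_trapezoid(signal, dt):
        """Cumulative trapezoidal integration (acceleration -> velocity)."""
        out, acc = [0.0], 0.0
        for i in range(1, len(signal)):
            acc += 0.5 * (signal[i - 1] + signal[i]) * dt
            out.append(acc)
        return out

    def trip(velocity, trip_point):
        """Digitally compatible alert: True once |velocity| exceeds the trip point."""
        return any(abs(v) > trip_point for v in velocity)

    dt = 0.001   # 1 kHz sampling (assumed)
    # 10 Hz sinusoidal vibration, amplitude 5 (assumed units)
    accel = [5.0 * math.cos(2.0 * math.pi * 10.0 * i * dt) for i in range(1000)]
    vel = integrate_trapezoid(accel, dt)   # peak ~ 5/(2*pi*10) ~ 0.08
    print(trip(vel, 0.05))  # True: vibration exceeds this trip point
    print(trip(vel, 0.5))   # False: below this one
    ```

    In the hardware, the second integration and calibration stage would feed the bar graph display; here the trip comparison alone illustrates the selectable-threshold output.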

  16. The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Sjoeberg, L.; Rapp, R. H.

    1978-01-01

    A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.

  17. Integrated air revitalization system for Space Station

    NASA Technical Reports Server (NTRS)

    Boyda, R. B.; Miller, C. W.; Schwartz, M. R.

    1986-01-01

    Fifty-one distinct functions are encompassed by the Space Station's Environmental Control and Life Support System; these functions are largely independent of one another, one exception being the regenerative air revitalization system that removes and reduces CO2 and generates O2. The integration of these interdependent functions, and of humidity control, into a single system furnishes opportunities for process simplification as well as for power, weight and volume requirement reductions by comparison with discrete subsystems. Attention is presently given to a system which quantifies these integration-related savings and identifies additional advantages that accrue to this integrated design method.

  18. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
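
    The shield correction described, exponential attenuation times a buildup factor for scattered photons, can be sketched as follows. The linear buildup factor B = 1 + mu*t and all numerical values are illustrative stand-ins, not the paper's fitted coefficients:

    ```python
    import math

    def shielded_intensity(s0, mu, t):
        """Shield correction used with a line-beam response function:
        exponential attenuation times a buildup factor for scattered
        photons. A linear buildup factor B = 1 + mu*t is assumed here."""
        return s0 * (1.0 + mu * t) * math.exp(-mu * t)

    s0 = 1.0e6   # unshielded photon emission rate (assumed units)
    mu = 0.5     # linear attenuation coefficient, 1/cm (assumed)
    vals = [shielded_intensity(s0, mu, t) for t in (0.0, 2.0, 4.0)]
    print(vals)  # decreases with shield thickness; equals s0 at t = 0
    ```

    The buildup factor partially offsets the exponential attenuation by accounting for photons scattered, rather than absorbed, in the shield, but the net shielded source strength still decreases monotonically with thickness.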

  19. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy gain of the exponential integrator methods is smaller, but they still match or outperform the best of the conventional methods tested.
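
    The advantage of an integrating-factor (exponential) integrator is easiest to see on the stiff linear test equation u' = lam*u, where the exponential update is exact while explicit Euler is unstable whenever |1 + lam*dt| > 1. The values below are illustrative, not the time-dependent Kohn-Sham setting of the paper:

    ```python
    import math

    lam, dt, steps = -50.0, 0.1, 10   # stiff linear test problem u' = lam*u
    u_exact = math.exp(lam * dt * steps)

    # explicit Euler: u <- (1 + lam*dt) u; here |1 + lam*dt| = 4, so unstable
    u_ee = 1.0
    for _ in range(steps):
        u_ee *= (1.0 + lam * dt)

    # exponential (integrating factor) Euler: u <- exp(lam*dt) u,
    # exact for the linear part by construction
    u_exp = 1.0
    for _ in range(steps):
        u_exp *= math.exp(lam * dt)

    err_ee = abs(u_ee - u_exact)
    err_exp = abs(u_exp - u_exact)
    print(err_ee)   # blows up: (-4)^10 ~ 1e6
    print(err_exp)  # essentially zero
    ```

    In the Kohn-Sham setting the same idea is applied to the stiff linear (kinetic plus potential) part of the propagator, so that only the nonlinear remainder limits the step size.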

  20. Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (4).

    PubMed

    Murase, Kenya

    2016-01-01

    Partial differential equations are often used in the field of medical physics. In this (final) issue, the methods for solving partial differential equations were introduced, which include the separation of variables, integral transform (Fourier and Fourier-sine transforms), Green's function, and series expansion methods. Some examples were also introduced, in which the integral transform and Green's function methods were applied to solving Pennes' bioheat transfer equation and the Fourier series expansion method was applied to the Navier-Stokes equation for analyzing the wall shear stress in blood vessels. Finally, the author hopes that this series will be helpful for people who engage in medical physics.

  1. One-pot preparation of unsaturated polyester nanocomposites containing functionalized graphene sheets via a novel solvent-exchange method

    USDA-ARS?s Scientific Manuscript database

    This paper reports a convenient one-pot method integrating a novel solvent-exchange method into in situ melt polycondensation to fabricate unsaturated polyester nanocomposites containing functionalized graphene sheets (FGS). A novel solvent-exchange method was first developed to prepare graphene oxi...

  2. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate that the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
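A scalar caricature of the point-implicit idea (our own toy relaxation problem, not the flow solver described above): the variable of interest appears implicitly in its own update, everything else is frozen, so no iteration is needed and the update stays stable at time steps far beyond the explicit limit.

```python
# Toy relaxation problem u' = -k*(u - g), advanced with a point-implicit update.
# Solving u_new = u_old - dt*k*(u_new - g) for u_new needs no iteration:
k, g = 200.0, 1.0
dt, steps = 0.1, 50            # dt*k = 20: far beyond the explicit stability limit

u_pi, u_ex = 0.0, 0.0
for _ in range(steps):
    u_pi = (u_pi + dt*k*g) / (1.0 + dt*k)   # point implicit: stable for any dt
    u_ex = u_ex - dt*k*(u_ex - g)           # explicit Euler: diverges here

print(u_pi, u_ex)   # point-implicit relaxes to g; explicit Euler blows up
```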

  3. Predictive regulatory models in Drosophila melanogaster by integrative inference of transcriptional networks

    PubMed Central

    Marbach, Daniel; Roy, Sushmita; Ay, Ferhat; Meyer, Patrick E.; Candeias, Rogerio; Kahveci, Tamer; Bristow, Christopher A.; Kellis, Manolis

    2012-01-01

    Gaining insights on gene regulation from large-scale functional data sets is a grand challenge in systems biology. In this article, we develop and apply methods for transcriptional regulatory network inference from diverse functional genomics data sets and demonstrate their value for gene function and gene expression prediction. We formulate the network inference problem in a machine-learning framework and use both supervised and unsupervised methods to predict regulatory edges by integrating transcription factor (TF) binding, evolutionarily conserved sequence motifs, gene expression, and chromatin modification data sets as input features. Applying these methods to Drosophila melanogaster, we predict ∼300,000 regulatory edges in a network of ∼600 TFs and 12,000 target genes. We validate our predictions using known regulatory interactions, gene functional annotations, tissue-specific expression, protein–protein interactions, and three-dimensional maps of chromosome conformation. We use the inferred network to identify putative functions for hundreds of previously uncharacterized genes, including many in nervous system development, which are independently confirmed based on their tissue-specific expression patterns. Last, we use the regulatory network to predict target gene expression levels as a function of TF expression, and find significantly higher predictive power for integrative networks than for motif or ChIP-based networks. Our work reveals the complementarity between physical evidence of regulatory interactions (TF binding, motif conservation) and functional evidence (coordinated expression or chromatin patterns) and demonstrates the power of data integration for network inference and studies of gene regulation at the systems level. PMID:22456606

  4. Towards a taxonomy for integrated care: a mixed-methods study

    PubMed Central

    Valentijn, Pim P.; Boesveld, Inge C.; van der Klauw, Denise M.; Ruwaard, Dirk; Struijs, Jeroen N.; Molema, Johanna J.W.; Bruijnzeels, Marc A.; Vrijhoef, Hubertus JM.

    2015-01-01

    Introduction Building integrated services in a primary care setting is considered an essential strategy for establishing a high-quality and affordable health care system. The theoretical foundations of such integrated service models are described by the Rainbow Model of Integrated Care, which distinguishes six integration dimensions (clinical, professional, organisational, system, functional and normative integration). The aim of the present study is to refine the Rainbow Model of Integrated Care by developing a taxonomy that specifies the underlying key features of the six dimensions. Methods First, a literature review was conducted to identify features for achieving integrated service delivery. Second, a thematic analysis method was used to develop a taxonomy of key features organised into the dimensions of the Rainbow Model of Integrated Care. Finally, the appropriateness of the key features was tested in a Delphi study among Dutch experts. Results The taxonomy consists of 59 key features distributed across the six integration dimensions of the Rainbow Model of Integrated Care. Key features associated with the clinical, professional, organisational and normative dimensions were considered appropriate by the experts. Key features linked to the functional and system dimensions were considered less appropriate. Discussion This study contributes to the ongoing debate of defining the concept and typology of integrated care. This taxonomy provides a development agenda for establishing an accepted scientific framework of integrated care from an end-user, professional, managerial and policy perspective. PMID:25759607

  5. Time-dependent integral equations of neutron transport for calculating the kinetics of nuclear reactors by the Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.

    2016-12-15

    Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.

  6. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
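The substitution of discrete convolution for continuous integration can be sketched as follows (a hedged illustration: the kernel below is one common form of the Hayami diffusion-wave solution, and the celerity, diffusivity, and reach length are invented for the example):

```python
import numpy as np

# Discrete convolution routing: replace the continuous convolution of an
# inflow hydrograph with a response function by numpy's discrete convolution.
# Kernel: an inverse-Gaussian form of the Hayami diffusion-wave solution with
# celerity c, diffusivity D, and reach length Lr (illustrative values).
c, D, Lr = 1.0, 1.0, 10.0
dt = 0.05
t = np.arange(dt, 60.0, dt)
h = Lr / (2.0*np.sqrt(np.pi*D*t**3)) * np.exp(-(Lr - c*t)**2 / (4.0*D*t))

tin = np.arange(0.0, 60.0, dt)
inflow = np.exp(-0.5*((tin - 5.0)/1.5)**2)          # Gaussian pulse inflow
outflow = np.convolve(inflow, h)[:inflow.size] * dt  # discrete convolution

# The kernel integrates to ~1, so routing conserves the hydrograph volume.
print(np.sum(h)*dt, np.sum(outflow)/np.sum(inflow))
```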

  7. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms the problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of stochastic population growth models and the stochastic pendulum problem.
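The object being discretized here, the Itô integral, is defined by a left-endpoint sum; a minimal numerical check (this is only the defining discretization, not the hat-function operational-matrix scheme of the paper):

```python
import numpy as np

# Approximate the Ito integral I = ∫_0^T W dW by its left-endpoint sum.
# Ito calculus gives the exact value (W_T^2 - T)/2 (not W_T^2/2 as in
# ordinary calculus), which the left-endpoint sum converges to.
rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)       # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW))) # Brownian path

ito_sum = np.sum(W[:-1] * dW)              # integrand at the LEFT endpoint
exact = 0.5 * (W[-1]**2 - T)               # exact Ito value for this path
print(ito_sum, exact)
```

Evaluating the integrand at the left endpoint (rather than the midpoint, which would give the Stratonovich integral) is what produces the extra -T/2 term.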

  8. Atomic Calculations with a One-Parameter, Single Integral Method.

    ERIC Educational Resources Information Center

    Baretty, Reinaldo; Garcia, Carmelo

    1989-01-01

    Presents an energy function E(p) containing a single integral and one variational parameter, alpha. Represents all two-electron integrals within the local density approximation as a single integral. Identifies this as a simple treatment for use in an introductory quantum mechanics course. (MVL)

  9. A sampling-based method for ranking protein structural models by integrating multiple scores and features.

    PubMed

    Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong

    2011-09-01

    One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score, are synthesized by a sampling method. Finally, another integrated RBF model ranks the structural models according to the features of the sampling distribution. We tested the proposed method using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test result shows that our method outperforms any individual scoring function in both best-model selection and overall correlation between the predicted ranking and the actual ranking of structural quality.
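A toy version of the RBF-scoring idea (entirely hypothetical data and parameters; the paper's models are trained on real structure features and the scoring functions listed above):

```python
import numpy as np

# Gaussian radial basis function (RBF) model: map a feature vector of
# per-model scores to a single quality score by interpolating training data.
rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 5))     # 30 training models, 5 input scores each
y = rng.uniform(size=30)          # their (hypothetical) true quality

sigma = 0.5
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
Phi = np.exp(-d2 / (2.0 * sigma**2))   # Gaussian kernel matrix (positive definite)
w = np.linalg.solve(Phi, y)            # interpolation weights

def rbf_score(xnew):
    return np.exp(-np.sum((X - xnew)**2, axis=-1) / (2.0*sigma**2)) @ w

# The interpolant reproduces the training targets (up to roundoff).
err = max(abs(rbf_score(X[i]) - y[i]) for i in range(len(y)))
print(err)
```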

  10. Aeroelastic modeling for the FIT (Functional Integration Technology) team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    As part of Langley Research Center's commitment to developing multidisciplinary integration methods to improve aerospace systems, the Functional Integration Technology (FIT) team was established to perform dynamics integration research using an existing aircraft configuration, the F/A-18. An essential part of this effort has been the development of a comprehensive simulation modeling capability that includes structural, control, and propulsion dynamics as well as steady and unsteady aerodynamics. The structural and unsteady aerodynamics contributions come from an aeroelastic mode. Some details of the aeroelastic modeling done for the Functional Integration Technology (FIT) team research are presented. Particular attention is given to work done in the area of correction factors to unsteady aerodynamics data.

  11. Discovery of a general method of solving the Schrödinger and Dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. 
These basis functions are called complement functions because they are the elements of the complete functions for the system under consideration. We extended this idea to solve the relativistic DE and applied it to the hydrogen and helium atoms, without observing any problems such as variational collapse. Thereafter, we obtained very accurate solutions of the SE for the ground and excited states of the Born-Oppenheimer (BO) and non-BO states of very small systems like He, H2+, H2, and their analogues. For larger systems, however, the overlap and Hamiltonian integrals over the complement functions are not always known mathematically (integration difficulty); therefore we formulated the local SE (LSE) method as an integral-free method. Without any integration, the LSE method gave fairly accurate energies and wave functions for small atoms and molecules. We also calculated continuous potential curves of the ground and excited states of small diatomic molecules by introducing the transferable local sampling method. Although the FC-LSE method is simple, the achievement of chemical accuracy in the absolute energy of larger systems remains time-consuming. The development of more efficient methods for the calculations of ordinary molecules would allow researchers to make these calculations more easily.

  12. Further distinctive investigations of the Sumudu transform

    NASA Astrophysics Data System (ADS)

    Belgacem, Fethi Bin Muhammad; Silambarasan, Rathinavel

    2017-01-01

    The Sumudu transform of a time function f(t) is computed by making the Sumudu transform variable u a factor of the function f(t) and then integrating against exp(-t). Because u enters as a factor of the original function, f(ut) preserves units and dimensions; this preservation property distinguishes the Sumudu transform from other integral transforms. With this definition, the complete set of related properties was derived for the Sumudu transform. A fragment of a symbolic C++ program was given for computing the Sumudu transform as a series, and a Maple procedure was given for computing it in closed form. The method proposed herein depends on neither homotopy methods such as HPM and HAM nor decomposition methods such as ADM.
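The defining integral S[f](u) = ∫₀^∞ f(ut) e^(-t) dt is easy to check numerically; a sketch using Gauss-Laguerre quadrature (our own numerical check, not the symbolic C++/Maple programs described in the paper):

```python
import numpy as np

# Sumudu transform S[f](u) = ∫_0^∞ f(u*t) * exp(-t) dt, evaluated with
# Gauss-Laguerre quadrature (the nodes/weights absorb the exp(-t) factor).
nodes, weights = np.polynomial.laguerre.laggauss(60)

def sumudu(f, u):
    return np.sum(weights * f(u * nodes))

# Known pairs: f(t) = t  ->  u,  and  f(t) = sin(t)  ->  u / (1 + u^2).
print(sumudu(lambda t: t, 0.7))        # should be ~0.7
print(sumudu(np.sin, 0.5))             # should be ~0.5/1.25 = 0.4
```

Note how the result for f(t) = t is u itself: the transform of a linear ramp stays a linear ramp, which is the unit-preservation property the abstract emphasizes.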

  13. Exact nonstationary responses of rectangular thin plate on Pasternak foundation excited by stochastic moving loads

    NASA Astrophysics Data System (ADS)

    Chen, Guohai; Meng, Zeng; Yang, Dixiong

    2018-01-01

    This paper develops an efficient method, termed PE-PIM, to obtain the exact nonstationary responses of a pavement structure, modeled as a rectangular thin plate resting on a bi-parametric Pasternak elastic foundation and subjected to stochastic moving loads with constant acceleration. Firstly, analytical power spectral density (PSD) functions of the random responses of the thin plate are derived by integrating the pseudo excitation method (PEM) with Duhamel's integral. Based on the PEM, the new equivalent von Mises stress (NEVMS) is proposed, whose PSD function contains all cross-PSD functions between stress components. Then, the PE-PIM, which combines the PEM with the precise integration method (PIM), is presented to compute the stochastic responses of the plate efficiently by replacing Duhamel's integral with the PIM. Moreover, semi-analytical Monte Carlo simulation is employed to verify the computational results of the developed PE-PIM. Finally, numerical examples demonstrate the high accuracy and efficiency of the PE-PIM for nonstationary random vibration analysis. The effects of the velocity and acceleration of the moving load, the boundary conditions of the plate, and the foundation stiffness on the deflection and NEVMS responses are scrutinized.
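The core of the pseudo excitation method is easiest to see for a single-degree-of-freedom system (a minimal sketch with illustrative parameters, not the plate formulation above): replacing the random load with the deterministic pseudo excitation sqrt(S(w))·e^(iwt) turns PSD computation into ordinary harmonic analysis.

```python
import numpy as np

# Pseudo excitation method (PEM) for a SDOF oscillator m*y'' + c*y' + k*y = f(t)
# where f has power spectral density S(w).  The pseudo excitation sqrt(S)*e^{iwt}
# produces the harmonic response H(w)*sqrt(S)*e^{iwt}, whose squared modulus is
# the response PSD |H|^2 * S -- no stochastic averaging required.
m, cdamp, k = 1.0, 0.4, 25.0
S = lambda w: 1.0 / (1.0 + (w/10.0)**2)     # illustrative input PSD

w = np.linspace(0.0, 20.0, 401)
H = 1.0 / (k - m*w**2 + 1j*cdamp*w)         # frequency response function

y_pseudo = H * np.sqrt(S(w))                # pseudo response (e^{iwt} dropped)
S_y_pem = np.abs(y_pseudo)**2               # response PSD via PEM
S_y_direct = np.abs(H)**2 * S(w)            # classical random-vibration result

print(np.max(np.abs(S_y_pem - S_y_direct)))
```

The payoff in multi-degree-of-freedom problems is that cross-PSDs between any two responses come for free as products of pseudo responses, which is how the NEVMS cross-spectral terms above are assembled.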

  14. Organization of functional interaction of corporate information systems

    NASA Astrophysics Data System (ADS)

    Safronov, V. V.; Barabanov, V. F.; Podvalniy, S. L.; Nuzhnyy, A. M.

    2018-03-01

    In this article, methods of integrating specialized software systems are analyzed and a concept of seamless integration of production solutions is offered. Structural and functional schemes of the specialized software, developed in view of this concept, are shown. The proposed schemes and models are adapted for a machine-building enterprise.

  15. Self spectrum window method in wigner-ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

    The Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing, but the cross-term interference in the WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-terms and auto-WVD terms in the integral kernel function are orthogonal. In the SSW algorithm, a real auto-WVD function is used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal is used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation, with good analysis results and a satisfactory time-frequency distribution.

  16. Theoretical Investigation of Thermo-Mechanical Behavior of Carbon Nanotube-Based Composites Using the Integral Transform Method

    NASA Technical Reports Server (NTRS)

    Pawloski, Janice S.

    2001-01-01

    This project uses the integral transform technique to model the problem of nanotube behavior as an axially symmetric system of shells. Assuming that the nanotube behavior can be described by the equations of elasticity, we seek a stress function χ that satisfies the axisymmetric biharmonic equation: ∇⁴χ = (∂²/∂r² + (1/r)∂/∂r + ∂²/∂z²)²χ = 0. The method of integral transformations is used to transform the differential equation. The symmetry with respect to the z-axis indicates that we only need to consider the sine transform of the stress function: χ̄(r,ζ) = ∫₀^∞ χ(r,z) sin(ζz) dz.

  17. A network-based, integrative study to identify core biological pathways that drive breast cancer clinical subtypes

    PubMed Central

    Dutta, B; Pusztai, L; Qi, Y; André, F; Lazar, V; Bianchini, G; Ueno, N; Agarwal, R; Wang, B; Shiang, C Y; Hortobagyi, G N; Mills, G B; Symmans, W F; Balázsi, G

    2012-01-01

    Background: The rapid collection of diverse genome-scale data raises the urgent need to integrate and utilise these resources for biological discovery or biomedical applications. For example, diverse transcriptomic and gene copy number variation data are currently collected for various cancers, but relatively few current methods are capable of utilising the emerging information. Methods: We developed and tested a data-integration method to identify gene networks that drive the biology of breast cancer clinical subtypes. The method simultaneously overlays gene expression and gene copy number data on protein–protein interaction, transcriptional-regulatory and signalling networks by identifying coincident genomic and transcriptional disturbances in local network neighborhoods. Results: We identified distinct driver-networks for each of the three common clinical breast cancer subtypes: oestrogen receptor (ER)+, human epidermal growth factor receptor 2 (HER2)+, and triple receptor-negative breast cancers (TNBC) from patient and cell line data sets. Driver-networks inferred from independent datasets were significantly reproducible. We also confirmed the functional relevance of a subset of randomly selected driver-network members for TNBC in gene knockdown experiments in vitro. We found that TNBC driver-network member genes have increased functional specificity to TNBC cell lines and higher functional sensitivity compared with genes selected by differential expression alone. Conclusion: Clinical subtype-specific driver-networks identified through data integration are reproducible and functionally important. PMID:22343619

  18. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D complexity operations, and have evolved to an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice manifests computational work linear in L, i.e., O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.

  19. Cosmological perturbation theory using the FFTLog: formalism and connection to QFT loop integrals

    NASA Astrophysics Data System (ADS)

    Simonović, Marko; Baldauf, Tobias; Zaldarriaga, Matias; Carrasco, John Joseph; Kollmeier, Juna A.

    2018-04-01

    We present a new method for calculating loops in cosmological perturbation theory. This method is based on approximating a ΛCDM-like cosmology as a finite sum of complex power-law universes. The decomposition is naturally achieved using an FFTLog algorithm. For power-law cosmologies, all loop integrals are formally equivalent to loop integrals of massless quantum field theory. These integrals have analytic solutions in terms of generalized hypergeometric functions. We provide explicit formulae for the one-loop and the two-loop power spectrum and the one-loop bispectrum. A chief advantage of our approach is that the difficult part of the calculation is cosmology independent, need be done only once, and can be recycled for any relevant predictions. Evaluation of standard loop diagrams then boils down to a simple matrix multiplication. We demonstrate the promise of this method for applications to higher multiplicity/loop correlation functions.
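The power-law decomposition at the heart of the method can be sketched in a few lines (a toy function and an arbitrary grid, not the cosmological loop-integral pipeline):

```python
import numpy as np

# Decompose a function sampled on a log-spaced grid into complex power laws
# via an FFT (the core FFTLog idea): with k_n = k0 * exp(n*D),
#   P(k) ≈ sum_m c_m k^(nu + i*eta_m),  eta_m = 2*pi*m / (N*D).
N, k0, kmax, nu = 64, 1e-3, 1e1, -1.5
D = np.log(kmax / k0) / N
n = np.arange(N)
k = k0 * np.exp(n * D)

P = k / (1.0 + k**2)**2                      # toy "power spectrum"

f = P * k**(-nu)                             # bias exponent flattens the function
c = np.fft.fft(f) / N                        # power-law coefficients
eta = 2.0 * np.pi * np.fft.fftfreq(N, d=D)   # frequencies in ln(k)

def reconstruct(kk):
    return np.real(np.sum(c * (kk / k0)**(1j*eta))) * kk**nu

# At the sample points the expansion reproduces P exactly (inverse DFT).
err = max(abs(reconstruct(kk) - p) for kk, p in zip(k, P))
print(err)
```

Once P is written as a sum of power laws k^(nu + i*eta_m), each loop integral is evaluated once analytically in the exponent, which is why the expensive part of the calculation becomes cosmology independent.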

  20. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    PubMed

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals that involve the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the absorbing core orbitals excitation space and exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.

  1. Functional atlas of the awake rat brain: A neuroimaging study of rat brain specialization and integration.

    PubMed

    Ma, Zhiwei; Perez, Pablo; Ma, Zilu; Liu, Yikang; Hamilton, Christina; Liang, Zhifeng; Zhang, Nanyin

    2018-04-15

    Connectivity-based parcellation approaches present an innovative method to segregate the brain into functionally specialized regions. These approaches have significantly advanced our understanding of the human brain organization. However, parallel progress in animal research is sparse. Using resting-state fMRI data and a novel, data-driven parcellation method, we have obtained robust functional parcellations of the rat brain. These functional parcellations reveal the regional specialization of the rat brain, which exhibited high within-parcel homogeneity and high reproducibility across animals. Graph analysis of the whole-brain network constructed based on these functional parcels indicates that the rat brain has a topological organization similar to humans, characterized by both segregation and integration. Our study also provides compelling evidence that the cingulate cortex is a functional hub region conserved from rodents to humans. Together, this study has characterized the rat brain specialization and integration, and has significantly advanced our understanding of the rat brain organization. In addition, it is valuable for studies of comparative functional neuroanatomy in mammalian brains.

  2. Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions

    NASA Astrophysics Data System (ADS)

    El-Kalla, I. L.; Al-Bugami, A. M.

    2010-11-01

    In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω) × C(0,T). The Fredholm integral term (FIT) is considered in position, while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: the Carleman function, a logarithmic form, the Cauchy kernel, and the Hilbert kernel.
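A minimal Nyström sketch for a Fredholm equation of the second kind (our own separable-kernel example chosen so the exact solution is known; the Toeplitz and product-Nyström schemes of the paper are built to handle the singular kernels listed above):

```python
import numpy as np

# Nystrom discretization of a Fredholm equation of the second kind:
#   x(s) = f(s) + lam * ∫_0^1 K(s,t) x(t) dt
# A quadrature rule turns the integral into a matrix: (I - lam*K*W) x = f.
nq = 8
t, w = np.polynomial.legendre.leggauss(nq)
t = 0.5 * (t + 1.0)                  # map Gauss-Legendre nodes to [0, 1]
w = 0.5 * w

lam = 1.0
K = np.outer(t, t)                   # separable kernel K(s, u) = s*u
f = 2.0 * t / 3.0                    # chosen so the exact solution is x(s) = s

A = np.eye(nq) - lam * K * w         # broadcasting applies weight w_j to column j
x = np.linalg.solve(A, f)
print(np.max(np.abs(x - t)))         # matches the exact solution x(s) = s
```

With K(s,u) = s·u and x(s) = s the integral is ∫₀¹ u² du = 1/3, so f(s) = s − s/3 = 2s/3; Gauss-Legendre quadrature integrates this polynomial exactly, and the discrete solution reproduces x(s) = s to roundoff.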

  3. An integrative approach to inferring biologically meaningful gene modules.

    PubMed

    Cho, Ji-Hoon; Wang, Kai; Galas, David J

    2011-07-26

    The ability to construct biologically meaningful gene networks and modules is critical for contemporary systems biology. Though recent studies have demonstrated the power of using gene modules to shed light on the functioning of complex biological systems, most modules in these networks have shown little association with meaningful biological function. We have devised a method which directly incorporates gene ontology (GO) annotation in construction of gene modules in order to gain better functional association. We have devised a method, Semantic Similarity-Integrated approach for Modularization (SSIM) that integrates various gene-gene pairwise similarity values, including information obtained from gene expression, protein-protein interactions and GO annotations, in the construction of modules using affinity propagation clustering. We demonstrated the performance of the proposed method using data from two complex biological responses: 1. the osmotic shock response in Saccharomyces cerevisiae, and 2. the prion-induced pathogenic mouse model. In comparison with two previously reported algorithms, modules identified by SSIM showed significantly stronger association with biological functions. The incorporation of semantic similarity based on GO annotation with gene expression and protein-protein interaction data can greatly enhance the functional relevance of inferred gene modules. In addition, the SSIM approach can also reveal the hierarchical structure of gene modules to gain a broader functional view of the biological system. Hence, the proposed method can facilitate comprehensive and in-depth analysis of high throughput experimental data at the gene network level.

  4. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of first importance is considered. Integral methods have found wide utility in different fields: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, boundary-layer problems, simulation of fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, thermal characterization of nanoparticles and nanofluids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
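The heat-balance integral method mentioned above fits in a few lines for the classic semi-infinite problem (a standard textbook case with a quadratic profile, which is one common choice, not the only one studied in the review):

```python
import math

# Goodman's heat-balance integral method for the semi-infinite solid
# u_t = a*u_xx, u(0,t) = 1, u -> 0 as x -> inf, u(x,0) = 0, with the
# quadratic profile u = (1 - x/delta)^2.  The integral heat balance
#   d/dt ∫_0^delta u dx = -a*u_x(0,t)
# gives delta*delta' = 6a, hence delta(t) = sqrt(12*a*t).
a, t = 1.0, 1.0
delta = math.sqrt(12.0 * a * t)

def u_hbim(x):
    return (1.0 - x/delta)**2 if x < delta else 0.0

def u_exact(x):
    return math.erfc(x / (2.0*math.sqrt(a*t)))   # exact similarity solution

# The approximate profile tracks the exact erfc solution to a few percent.
err = max(abs(u_hbim(x) - u_exact(x)) for x in [0.1*i for i in range(60)])
print(err)
```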

  5. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for large data sets, a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWFs) is considered. First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a matrix eigenvalue problem much smaller than the direct numerical K-L matrix eigenvalue problem is obtained. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
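
    The Legendre basis underlying the PSWF expansion is easy to generate and check. The sketch below is only an illustration of that basis, not the PSWF solver itself: it builds Legendre polynomials by Bonnet's recurrence and verifies their orthogonality on [-1, 1] (⟨Pm, Pn⟩ = 0 for m ≠ n, 2/(2n+1) for m = n) with composite Simpson quadrature.

```python
# Legendre polynomials via Bonnet's recurrence
# (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),
# with an orthogonality check by composite Simpson quadrature.

def legendre(n, x):
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def simpson(f, a, b, m=400):          # m must be even
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

# inner product <Pm, Pn> on [-1, 1]
dot = lambda m, n: simpson(lambda x: legendre(m, x) * legendre(n, x), -1.0, 1.0)
```

    Substituting a short expansion in this basis into the PSWF differential equation is what turns the large K-L eigenproblem into the small one described in the record.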

  6. Green's function integral equation method for propagation of electromagnetic waves in an anisotropic dielectric-magnetic slab

    NASA Astrophysics Data System (ADS)

    Shu, Weixing; Lv, Xiaofang; Luo, Hailu; Wen, Shuangchun

    2010-08-01

    We extend the Green's function integral method to investigate the propagation of electromagnetic waves through an anisotropic dielectric-magnetic slab. From a microscopic perspective, we analyze the interaction of the wave with the slab and derive the propagation characteristics by self-consistent analysis. Applying the results, we find an alternative explanation of the general mechanism of photon tunneling. The results are confirmed by numerical simulations and disclose the underlying physics of wave propagation through the slab. The extended method is applicable to other problems of propagation in dielectric-magnetic materials, including metamaterials.

  7. Application of electrical stimulation for functional tissue engineering in vitro and in vivo

    NASA Technical Reports Server (NTRS)

    Park, Hyoungshin (Inventor); Freed, Lisa (Inventor); Vunjak-Novakovic, Gordana (Inventor); Langer, Robert (Inventor); Radisic, Milica (Inventor)

    2013-01-01

    The present invention provides new methods for the in vitro preparation of bioartificial tissue equivalents and their enhanced integration after implantation in vivo. These methods include submitting a tissue construct to a biomimetic electrical stimulation during cultivation in vitro to improve its structural and functional properties, and/or in vivo, after implantation of the construct, to enhance its integration with host tissue and increase cell survival and functionality. The inventive methods are particularly useful for the production of bioartificial equivalents and/or the repair and replacement of native tissues that contain electrically excitable cells and are subject to electrical stimulation in vivo, such as, for example, cardiac muscle tissue, striated skeletal muscle tissue, smooth muscle tissue, bone, vasculature, and nerve tissue.

  8. Analytical expressions for the correlation function of a hard sphere dimer fluid

    NASA Astrophysics Data System (ADS)

    Kim, Soonho; Chang, Jaeeon; Kim, Hwayong

    A closed form expression is given for the correlation function of a hard sphere dimer fluid. A set of integral equations is obtained from Wertheim's multidensity Ornstein-Zernike integral equation theory with Percus-Yevick approximation. Applying the Laplace transformation method to the integral equations and then solving the resulting equations algebraically, the Laplace transforms of the individual correlation functions are obtained. By the inverse Laplace transformation, the radial distribution function (RDF) is obtained in closed form out to 3D (D is the segment diameter). The analytical expression for the RDF of the hard dimer should be useful in developing the perturbation theory of dimer fluids.

  9. Analytical expression for the correlation function of a hard sphere chain fluid

    NASA Astrophysics Data System (ADS)

    Chang, Jaeeon; Kim, Hwayong

    A closed form expression is given for the correlation function of a flexible hard sphere chain fluid. A set of integral equations obtained from Wertheim's multidensity Ornstein-Zernike integral equation theory with the polymer Percus-Yevick ideal chain approximation is considered. Applying the Laplace transformation method to the integral equations and then solving the resulting equations algebraically, the Laplace transforms of the individual correlation functions are obtained. By inverse Laplace transformation, the inter- and intramolecular radial distribution functions (RDFs) are obtained in closed form out to 3D (D is the segment diameter). These analytical expressions for the RDFs would be useful in developing the perturbation theory of chain fluids.

  10. Extension of the KLI approximation toward the exact optimized effective potential.

    PubMed

    Iafrate, G J; Krieger, J B

    2013-03-07

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first-order perturbation theory wavefunction by use of Dalgarno functions, which are determined from well-known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user-friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states.
The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  11. Extension of the KLI approximation toward the exact optimized effective potential

    NASA Astrophysics Data System (ADS)

    Iafrate, G. J.; Krieger, J. B.

    2013-03-01

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first-order perturbation theory wavefunction by use of Dalgarno functions, which are determined from well-known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user-friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states.
The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  12. Numerical time-domain electromagnetics based on finite-difference and convolution

    NASA Astrophysics Data System (ADS)

    Lin, Yuanqu

    Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has lagged behind that of other techniques because of their inefficiency, inaccuracy, and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high-order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers.
Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
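
    For orientation, the single-region FDTD update that the MR-FDTD scheme builds on is only a few lines. The sketch below is the plain textbook 1-D Yee loop in normalized units at Courant number S = 1 (the "magic" step at which the scheme is dispersionless), not the thesis's multi-region/DGF method; the grid size, source position, and pulse width are arbitrary choices.

```python
import math

# Minimal 1-D FDTD loop, normalized units, S = 1: interleaved E/H
# updates plus a hard Gaussian source. Purely illustrative parameters.

NZ, STEPS = 200, 60
ez = [0.0] * NZ
hy = [0.0] * NZ

for t in range(STEPS):
    for i in range(NZ - 1):                 # update magnetic field
        hy[i] += ez[i + 1] - ez[i]
    for i in range(1, NZ):                  # update electric field
        ez[i] += hy[i] - hy[i - 1]
    ez[100] = math.exp(-((t - 30.0) / 8.0) ** 2)   # hard Gaussian source

# at S = 1 the pulse advects exactly one cell per step, so after 60
# steps the right-going copy of the pulse is centered near cell 130
```

    Truncating such a grid is exactly where the discrete Green's function of the MR-FDTD scheme enters: it replaces the crude boundary handling above with a reflection-free closure of each subregion.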

  13. Functionally-fitted energy-preserving integrators for Poisson systems

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Wu, Xinyuan

    2018-07-01

    In this paper, a new class of energy-preserving integrators is proposed and analysed for Poisson systems by using functionally-fitted technology. The integrators exactly preserve energy and have arbitrarily high order. It is shown that the proposed approach allows us to obtain the energy-preserving methods derived in [12] by Cohen and Hairer (2011) and in [1] by Brugnano et al. (2012) for Poisson systems. Furthermore, we study the sufficient conditions that ensure the existence of a unique solution and discuss the order of the new energy-preserving integrators.
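
    The functionally-fitted integrators of the paper are considerably more general, but the flavor of exact energy preservation can be shown with a much simpler cousin: the implicit midpoint rule conserves any quadratic invariant exactly. The sketch below applies it to the harmonic oscillator q' = p, p' = -q with H = (p² + q²)/2; solving the implicit stage in closed form is possible here only because the system is linear, an assumption of this toy.

```python
# Implicit midpoint rule for the harmonic oscillator. The update solves
# (I - h/2 A) x_new = (I + h/2 A) x_old with A = [[0, 1], [-1, 0]];
# this Cayley-transform map is a rotation, so H = (p^2 + q^2)/2 is
# preserved to round-off over arbitrarily many steps.

def midpoint_step(q, p, h):
    d = 1.0 + h * h / 4.0
    qn = ((1.0 - h * h / 4.0) * q + h * p) / d
    pn = ((1.0 - h * h / 4.0) * p - h * q) / d
    return qn, pn

q, p = 1.0, 0.0
h = 0.1
energy0 = 0.5 * (q * q + p * p)
for _ in range(10_000):
    q, p = midpoint_step(q, p, h)
drift = abs(0.5 * (q * q + p * p) - energy0)   # round-off only
```

    A non-conservative scheme such as explicit Euler would show linear energy growth on the same problem; the paper's integrators extend this exact-preservation property to general Poisson systems at arbitrary order.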

  14. A Kinematically Consistent Two-Point Correlation Function

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allows an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the devised expression, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.
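
    The two length scales mentioned in the record are straightforward to extract from any model correlation. The sketch below uses a stand-in Gaussian model f(r) = exp(-(r/λ)²), not the paper's expression: its curvature at the origin (f ≈ 1 - r²/λ²) gives a Taylor microscale equal to λ, and its integral gives an integral length scale λ√π/2; both are recovered numerically.

```python
import math

# Toy model correlation f(r) = exp(-(r/lam)^2); lam is an arbitrary
# choice. Taylor microscale from the curvature at r = 0, integral
# length scale from the area under f.

lam = 2.0
f = lambda r: math.exp(-(r / lam) ** 2)

# integral length scale L = int_0^inf f dr, composite trapezoid on [0, 10*lam]
n, b = 20_000, 10.0 * lam
h = b / n
L = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(b))

# Taylor microscale from f''(0) = -2/lam^2 (central difference)
eps = 1e-4
curv = (f(eps) - 2.0 * f(0.0) + f(-eps)) / eps ** 2
taylor = math.sqrt(-2.0 / curv)
```

    The paper's contribution is a single elementary-function expression that carries both scales consistently, with the microscale tied to epsilon from the closure model rather than chosen freely as here.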

  15. A new multi-domain method based on an analytical control surface for linear and second-order mean drift wave loads on floating bodies

    NASA Astrophysics Data System (ADS)

    Liang, Hui; Chen, Xiaobo

    2017-10-01

    A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of the Laguerre function in the vertical coordinate and Fourier series in the circumference. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (circular disc), on which the null normal derivative of the potential is imposed, and the dipole distribution is expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.

  16. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  17. Scalable, Lightweight, Integrated and Quick-to-Assemble (SLIQ) Hyperdrives for Functional Circuit Dissection.

    PubMed

    Liang, Li; Oline, Stefan N; Kirk, Justin C; Schmitt, Lukas Ian; Komorowski, Robert W; Remondes, Miguel; Halassa, Michael M

    2017-01-01

    Independently adjustable multielectrode arrays are routinely used to interrogate neuronal circuit function, enabling chronic in vivo monitoring of neuronal ensembles in freely behaving animals at single-cell, single-spike resolution. Despite the importance of this approach, its widespread use is limited by highly specialized design and fabrication methods. To address this, we have developed a Scalable, Lightweight, Integrated and Quick-to-assemble multielectrode array platform. This platform additionally integrates optical fibers with independently adjustable electrodes to allow simultaneous single-unit recordings and circuit-specific optogenetic targeting and/or manipulation. In current designs, the fully assembled platforms are scalable from 2 to 32 microdrives, yet weigh only 1-3 g, light enough for small animals. Here, we describe the design process, starting from intent in computer-aided design, through parameter testing by finite element analysis and experimental means, to implementation in various applications across mice and rats. Combined, our methods may expand the utility of multielectrode recordings and their continued integration with other tools enabling functional dissection of intact neural circuits.

  18. Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels.

    PubMed

    Zhang, Chenchu; Hu, Yanlei; Du, Wenqiang; Wu, Peichao; Rao, Shenglong; Cai, Ze; Lao, Zhaoxin; Xu, Bing; Ni, Jincheng; Li, Jiawen; Zhao, Gang; Wu, Dong; Chu, Jiaru; Sugioka, Koji

    2016-09-13

    Rapid integration of high-quality functional devices in microchannels is in high demand for miniature lab-on-a-chip applications. This paper demonstrates the embellishment of existing microfluidic devices with integrated micropatterns via femtosecond laser MRAF-based holographic patterning (MHP) microfabrication, which proves two-photon polymerization (TPP) based on a spatial light modulator (SLM) to be a rapid and powerful technology for chip functionalization. An optimized mixed region amplitude freedom (MRAF) algorithm has been used to generate a high-quality shaped focus field. Based on the optimized parameters, a single-exposure approach is developed to fabricate 200 × 200 μm microstructure arrays in less than 240 ms. Moreover, microtraps, QR codes and letters are integrated into a microdevice by the advanced method for particle capture and device identification. These results indicate that such holographic laser embellishment of microfluidic devices is simple, flexible and easy to access, and has great potential in lab-on-a-chip applications such as biological culture, chemical analyses and optofluidic devices.

  19. Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels

    NASA Astrophysics Data System (ADS)

    Zhang, Chenchu; Hu, Yanlei; Du, Wenqiang; Wu, Peichao; Rao, Shenglong; Cai, Ze; Lao, Zhaoxin; Xu, Bing; Ni, Jincheng; Li, Jiawen; Zhao, Gang; Wu, Dong; Chu, Jiaru; Sugioka, Koji

    2016-09-01

    Rapid integration of high-quality functional devices in microchannels is in high demand for miniature lab-on-a-chip applications. This paper demonstrates the embellishment of existing microfluidic devices with integrated micropatterns via femtosecond laser MRAF-based holographic patterning (MHP) microfabrication, which proves two-photon polymerization (TPP) based on a spatial light modulator (SLM) to be a rapid and powerful technology for chip functionalization. An optimized mixed region amplitude freedom (MRAF) algorithm has been used to generate a high-quality shaped focus field. Based on the optimized parameters, a single-exposure approach is developed to fabricate 200 × 200 μm microstructure arrays in less than 240 ms. Moreover, microtraps, QR codes and letters are integrated into a microdevice by the advanced method for particle capture and device identification. These results indicate that such holographic laser embellishment of microfluidic devices is simple, flexible and easy to access, and has great potential in lab-on-a-chip applications such as biological culture, chemical analyses and optofluidic devices.

  20. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
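
    Handling Bessel factors numerically is the crux of such expansions. As a small self-contained illustration (far simpler than the cylindrical-CGH kernel of the record), J₀ can be evaluated from its integral representation J₀(x) = (1/π)∫₀^π cos(x sin θ) dθ by composite Simpson quadrature; the quadrature order below is an arbitrary choice.

```python
import math

# Bessel J0 from its integral representation, evaluated by composite
# Simpson quadrature. Illustrative only; series or recurrence methods
# are used when many orders are needed, as in the CGH expansion.

def j0(x, m=400):                     # m must be even
    h = math.pi / m
    g = lambda th: math.cos(x * math.sin(th))
    s = g(0.0) + g(math.pi)
    s += 4 * sum(g((2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(g(2 * i * h) for i in range(1, m // 2))
    return s * h / (3 * math.pi)

# sanity anchors: J0(0) = 1, and the first zero of J0 is near x = 2.404826
```
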

  1. DYNAMIC PLANE-STRAIN SHEAR RUPTURE WITH A SLIP-WEAKENING FRICTION LAW CALCULATED BY A BOUNDARY INTEGRAL METHOD.

    USGS Publications Warehouse

    Andrews, D.J.

    1985-01-01

    A numerical boundary integral method, relating slip and traction on a plane in an elastic medium by convolution with a discretized Green function, can be linked to a slip-dependent friction law on the fault plane. Such a method is developed here in two-dimensional plane-strain geometry. Spontaneous plane-strain shear ruptures can make a transition from sub-Rayleigh to near-P propagation velocity. Results from the boundary integral method agree with earlier results from a finite difference method on the location of this transition in parameter space. The methods differ in their prediction of rupture velocity following the transition. The trailing edge of the cohesive zone propagates at the P-wave velocity after the transition in the boundary integral calculations.

  2. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
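
    The separation into a harmonic sampling distribution and a Monte Carlo estimator of the coupling contribution can be illustrated with a one-dimensional toy, which is an assumption-laden stand-in for the paper's nonadiabatic estimator: for V(x) = x²/2 + g·x⁴ at β = 1, the ratio Z/Z_harm equals E_{x~N(0,1)}[exp(-g·x⁴)], estimated by sampling the Gaussian reference and checked against a deterministic quadrature. The coupling g, sample count, and quadrature window are arbitrary.

```python
import math, random

# Toy partition-function ratio via sampling from a harmonic reference:
# Z/Z_harm = E_{x~N(0,1)}[exp(-g*x^4)]. Illustrative parameters only.

g = 0.1
random.seed(42)
n = 200_000
mc = sum(math.exp(-g * random.gauss(0.0, 1.0) ** 4) for _ in range(n)) / n

# reference value: trapezoid quadrature of the same expectation
a, m = 8.0, 4000
h = 2 * a / m
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)  # N(0,1) pdf
vals = [phi(-a + i * h) * math.exp(-g * (-a + i * h) ** 4) for i in range(m + 1)]
quad = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

    A poorly matched reference (say, a much wider Gaussian) would inflate the estimator's variance at fixed n, which is exactly the sampling-distribution sensitivity the paper exploits with its Gaussian mixture model.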

  3. The Harmonic Oscillator with a Gaussian Perturbation: Evaluation of the Integrals and Example Applications

    ERIC Educational Resources Information Center

    Earl, Boyd L.

    2008-01-01

    A general result for the integrals of the Gaussian function over the harmonic oscillator wavefunctions is derived using generating functions. Using this result, an example problem of a harmonic oscillator with various Gaussian perturbations is explored in order to compare the results of precise numerical solution, the variational method, and…
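
    The simplest such integral has a closed form that is easy to verify. For the dimensionless ground state ψ₀(x) = π^(-1/4) e^(-x²/2), the Gaussian-perturbation matrix element is ⟨0|e^(-cx²)|0⟩ = (1 + c)^(-1/2); the sketch below checks this against direct quadrature, with c and the integration window as arbitrary choices (the cited article's generating-function result covers general matrix elements, not just this one).

```python
import math

# <0|exp(-c x^2)|0> for the dimensionless harmonic-oscillator ground
# state: |psi0|^2 = pi^(-1/2) exp(-x^2), so the integrand is Gaussian
# and the closed form is (1 + c)^(-1/2). Checked by trapezoid quadrature.

c = 0.7
f = lambda x: math.exp(-x * x) * math.exp(-c * x * x) / math.sqrt(math.pi)

a, m = 10.0, 4000
h = 2 * a / m
vals = [f(-a + i * h) for i in range(m + 1)]
quad = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
closed = 1.0 / math.sqrt(1.0 + c)
```

    Such closed forms are what make the variational treatment of Gaussian-perturbed oscillators tractable by hand.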

  4. The Green's matrix and the boundary integral equations for analysis of time-harmonic dynamics of elastic helical springs.

    PubMed

    Sorokin, Sergey V

    2011-03-01

    Helical springs serve as vibration isolators in virtually any suspension system. Various exact and approximate methods may be employed to determine the eigenfrequencies of vibrations of these structural elements and their dynamic transfer functions. The method of boundary integral equations is a meaningful alternative to obtain exact solutions of problems of the time-harmonic dynamics of elastic springs in the framework of Bernoulli-Euler beam theory. In this paper, the derivations of the Green's matrix, of the Somigliana's identities, and of the boundary integral equations are presented. The vibrational power transmission in an infinitely long spring is analyzed by means of the Green's matrix. The eigenfrequencies and the dynamic transfer functions are found by solving the boundary integral equations. In the course of analysis, the essential features and advantages of the method of boundary integral equations are highlighted. The reported analytical results may be used to study the time-harmonic motion in any wave guide governed by a system of linear differential equations in a single spatial coordinate along its axis. © 2011 Acoustical Society of America

  5. Finite element area and line integral transforms for generalization of aperture function and geometry in Kirchhoff scalar diffraction theory

    NASA Astrophysics Data System (ADS)

    Kraus, Hal G.

    1993-02-01

    Two finite element-based methods for calculating Fresnel region and near-field region intensities resulting from diffraction of light by two-dimensional apertures are presented. The first is derived using the Kirchhoff area diffraction integral and the second is derived using a displaced vector potential to achieve a line integral transformation. The specific form of each of these formulations is presented for incident spherical waves and for Gaussian laser beams. The geometry of the two-dimensional diffracting aperture(s) is based on biquadratic isoparametric elements, which are used to define apertures of complex geometry. These elements are also used to build complex amplitude and phase functions across the aperture(s), which may be of continuous or discontinuous form. The finite element transform integrals are accurately and efficiently integrated numerically using Gaussian quadrature. The power of these methods is illustrated in several examples which include secondary obstructions, secondary spider supports, multiple mirror arrays, synthetic aperture arrays, apertures covered by screens, apodization, phase plates, and off-axis apertures. Typically, the finite element line integral transform results in significant gains in computational efficiency over the finite element Kirchhoff transform method, but is also subject to some loss in generality.

  6. An integrative approach to inferring biologically meaningful gene modules

    PubMed Central

    2011-01-01

    Background The ability to construct biologically meaningful gene networks and modules is critical for contemporary systems biology. Though recent studies have demonstrated the power of using gene modules to shed light on the functioning of complex biological systems, most modules in these networks have shown little association with meaningful biological function. We have devised a method which directly incorporates gene ontology (GO) annotation in construction of gene modules in order to gain better functional association. Results We have devised a method, Semantic Similarity-Integrated approach for Modularization (SSIM) that integrates various gene-gene pairwise similarity values, including information obtained from gene expression, protein-protein interactions and GO annotations, in the construction of modules using affinity propagation clustering. We demonstrated the performance of the proposed method using data from two complex biological responses: 1. the osmotic shock response in Saccharomyces cerevisiae, and 2. the prion-induced pathogenic mouse model. In comparison with two previously reported algorithms, modules identified by SSIM showed significantly stronger association with biological functions. Conclusions The incorporation of semantic similarity based on GO annotation with gene expression and protein-protein interaction data can greatly enhance the functional relevance of inferred gene modules. In addition, the SSIM approach can also reveal the hierarchical structure of gene modules to gain a broader functional view of the biological system. Hence, the proposed method can facilitate comprehensive and in-depth analysis of high throughput experimental data at the gene network level. PMID:21791051

  7. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    NASA Astrophysics Data System (ADS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
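The computational payoff of the canonical format can be sketched directly: once V(x) = Σ_r Π_i v_ri(x_i), a d-dimensional Gauss-Hermite sum factorizes into products of one-dimensional quadratures. A minimal numpy sketch with an invented rank-2 separable integrand (not the CT-XVH2 code):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Invented rank-2 "PES" already in canonical form:
# V(x) = sum_r prod_i v[r][i](x_i), with simple Gaussian factors.
d, rank = 4, 2
v = [[(lambda x, r=r, i=i: np.exp(-(0.3 * r + 0.1 * i) * x ** 2))
      for i in range(d)] for r in range(rank)]

x, w = hermgauss(10)      # 10-point Gauss-Hermite rule (weight e^{-x^2})

# Factored evaluation: only d one-dimensional quadratures per rank term.
factored = sum(np.prod([np.sum(w * v[r][i](x)) for i in range(d)])
               for r in range(rank))

# Brute force for comparison: the full tensor-product grid, 10**d points.
grids = np.meshgrid(*([x] * d), indexing="ij")
W = np.prod(np.meshgrid(*([w] * d), indexing="ij"), axis=0)
V = sum(np.prod([v[r][i](grids[i]) for i in range(d)], axis=0)
        for r in range(rank))
full = np.sum(W * V)
```

Both routes give the same number, but the factored sum costs rank × d one-dimensional quadratures instead of a grid that grows exponentially with d.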

  8. Evaluating the functional state of adult-born neurons in the adult dentate gyrus of the hippocampus: from birth to functional integration.

    PubMed

    Aguilar-Arredondo, Andrea; Arias, Clorinda; Zepeda, Angélica

    2015-01-01

    Hippocampal neurogenesis occurs in the adult brain in various species, including humans. A compelling question that arose when neurogenesis was accepted to occur in the adult dentate gyrus (DG) is whether new neurons become functionally relevant over time, which is key for interpreting their potential contributions to synaptic circuitry. The functional state of adult-born neurons has been evaluated using various methodological approaches, which have, in turn, yielded seemingly conflicting results regarding the timing of maturation and functional integration. Here, we review the contributions of different methodological approaches to addressing the maturation process of adult-born neurons and their functional state, discussing the contributions and limitations of each method. We aim to provide a framework for interpreting results based on the approaches currently used in neuroscience for evaluating functional integration. As shown by the experimental evidence, adult-born neurons are prone to respond from early stages, even when they are not yet fully integrated into circuits. The ongoing integration process for the newborn neurons is characterised by different features. However, they may contribute differently to the network depending on their maturation stage. When combined, the strategies used to date convey a comprehensive view of the functional development of newly born neurons while providing a framework for approaching the critical time at which new neurons become functionally integrated and influence brain function.

  9. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

    In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.

  10. Quadrature imposition of compatibility conditions in Chebyshev methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Streett, C. L.

    1990-01-01

    Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev (or any variant of it, like the Gauss-Lobatto) formula cannot be used here since the integrals under consideration do not include the weight function. A natural candidate to be used in approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice and it may lead to divergence if time dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.
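The idea of a quadrature that accounts for the polynomial degree can be sketched by imposing exactness on monomials at the Chebyshev-Gauss-Lobatto nodes (a hedged illustration of the principle, not the formula derived in the paper):

```python
import numpy as np

n = 8                                            # nodal polynomial degree
xs = np.cos(np.pi * np.arange(n + 1) / n)        # Chebyshev-Gauss-Lobatto nodes

# Moment conditions: sum_j w_j xs_j**k = int_{-1}^{1} x**k dx for k = 0..n
# (unweighted integral, as required by the compatibility condition).
V = np.vander(xs, n + 1, increasing=True).T      # row k holds xs**k
moments = np.array([(1 - (-1) ** (k + 1)) / (k + 1) for k in range(n + 1)])
w = np.linalg.solve(V, moments)

# The resulting rule is exact (to rounding) for polynomials of degree <= n:
p = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0, 0.0, 1.0])  # degree 5
quad = float(np.dot(w, p(xs)))
exact = float(p.integ()(1) - p.integ()(-1))
```

Note the contrast with Gauss-Chebyshev rules, whose weights target integrals that include the 1/√(1-x²) weight function and therefore cannot be reused here.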

  11. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
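A hedged sketch of the discretization step (with an invented kernel, and plain midpoint product integration rather than the authors' regularization procedure): a first-kind Volterra equation becomes a lower-triangular system solvable by forward substitution.

```python
import math

# Toy first-kind Volterra equation: int_0^t exp(-(t - s)) f(s) ds = g(t),
# manufactured so the exact solution is f(s) = s, giving
# g(t) = t - 1 + exp(-t).
K = lambda t, s: math.exp(-(t - s))
g = lambda t: t - 1.0 + math.exp(-t)

n, T = 200, 2.0
h = T / n
t = [(i + 1) * h for i in range(n)]              # collocation points
f = [0.0] * n                                    # f[j] ~ f((j + 0.5) h)
for i in range(n):                               # forward substitution
    s_mid = [(j + 0.5) * h for j in range(i + 1)]
    acc = sum(K(t[i], s_mid[j]) * f[j] * h for j in range(i))
    f[i] = (g(t[i]) - acc) / (K(t[i], s_mid[i]) * h)
```

The midpoint product rule is one of the few direct schemes that is stable for first-kind Volterra equations; with noisy data the triangular solve amplifies errors, which is where regularization of the kind developed in the paper comes in.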

  12. Kirkwood-Buff integrals of finite systems: shape effects

    NASA Astrophysics Data System (ADS)

    Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.

    2018-06-01

    The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume, and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. For all shapes considered, the KB integrals approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
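For a spherical subvolume the geometric function is known in closed form, w(x) = 1 − 3x/2 + x³/2 with x = r/2R (as given by Krüger et al.; treated here as an assumption). The sketch below computes the finite-volume KB integral for an invented model h(r) and shows it approaching the thermodynamic limit as R grows:

```python
import math

def kb_integral_sphere(h, R, n=20000):
    """Finite-volume KB integral for a spherical subvolume of radius R:
    G_V = int_0^{2R} 4 pi r^2 h(r) w(r / 2R) dr,
    with the sphere geometry function w(x) = 1 - 3x/2 + x^3/2."""
    w = lambda x: 1.0 - 1.5 * x + 0.5 * x ** 3
    dr = 2.0 * R / n
    return sum(4 * math.pi * r * r * h(r) * w(r / (2 * R)) * dr
               for r in ((i + 0.5) * dr for i in range(n)))

h = lambda r: math.exp(-r)        # invented model total correlation function
G_inf = 8 * math.pi              # analytic limit: int_0^inf 4 pi r^2 e^{-r} dr
G_small = kb_integral_sphere(h, 5.0)
G_big = kb_integral_sphere(h, 50.0)
```

The deviation from G_inf shrinks roughly in proportion to the surface-to-volume ratio, which is the linear scaling in area over volume described in the abstract.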

  13. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first-order, second-moment FPI method; the second-order, second-moment FPI method; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
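The simple Monte Carlo variant is easy to sketch for a hypothetical two-variable limit state g = R − S with normal strength and stress, where the failure probability has the closed form Φ(−β) to compare against:

```python
import math
import random

# Hypothetical limit state g = R - S with independent normal strength R
# and stress S; failure is the event g < 0. Exact Pf = Phi(-beta).
muR, sigR, muS, sigS = 10.0, 1.5, 6.0, 1.0       # invented distribution params
beta = (muR - muS) / math.hypot(sigR, sigS)      # reliability (safety) index
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
pf_exact = Phi(-beta)

random.seed(1)
n = 200_000
fails = sum(random.gauss(muR, sigR) - random.gauss(muS, sigS) < 0.0
            for _ in range(n))
pf_mc = fails / n                                # simple Monte Carlo estimate
```

For small failure probabilities the simple estimator needs very many samples, which is precisely what importance sampling and the FPI methods compared in the report are designed to avoid.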

  14. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. The BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The proposed computational scheme simplifies the target function so that integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
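A much-simplified toy of the grid-based localization step (known origin time, a single fixed celerity, Gaussian travel-time residuals; not the BISL code, which also marginalizes over celerity and origin time):

```python
import math

# Hypothetical station coordinates (km) and a true source at (30, 40) km.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_src = (30.0, 40.0)
celerity = 0.30          # km/s, assumed known here
sigma = 2.0              # s, picking-time standard deviation

dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
# Noise-free arrival times (origin time assumed known and equal to zero).
t_obs = [dist(true_src, s) / celerity for s in stations]

best, best_ll = None, -float("inf")
for ix in range(101):                 # coarse 1-km grid search over sources
    for iy in range(101):
        src = (float(ix), float(iy))
        ll = sum(-((dist(src, s) / celerity - t) ** 2) / (2 * sigma ** 2)
                 for s, t in zip(stations, t_obs))
        if ll > best_ll:
            best, best_ll = src, ll
```

BISL's contribution is doing the celerity/origin-time integration at every such grid node; the paper's scheme replaces that numerical integration with exact closed-form expressions.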

  15. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

    The Green's functions integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw tooth heat generating sources. In all cases the boundary temperature is specified (Dirichlet's condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw tooth heat generation source has been modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows the temperature distribution to be obtained in the form of an integral that avoids the convergence problems of infinite series. For the infinite solid and the sphere, the temperature distribution is three-dimensional, and in the cases of the semi-infinite solid, infinite quadrant and circular plate the distribution is two-dimensional. The method used in this work is superior to other methods because it obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.
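The structure of the technique can be sketched on the simplest case, a unit slab with homogeneous Dirichlet boundaries, where the Green's function is x<(1 − x>) and the uniform-generation solution is the classical parabola (a minimal illustration, not this work's 3-D saw-tooth solutions):

```python
# Steady 1-D conduction in a unit slab with T(0) = T(1) = 0 and uniform
# volumetric generation q: T(x) = (1/k) * int_0^1 G(x, x') q dx', with the
# Dirichlet Green's function G(x, x') = x_< (1 - x_>).
k, q, n = 2.0, 5.0, 10_000

def G(x, xp):
    lo, hi = min(x, xp), max(x, xp)
    return lo * (1.0 - hi)

def T(x):
    dxp = 1.0 / n
    return sum(G(x, (j + 0.5) * dxp) * q * dxp for j in range(n)) / k

x = 0.3
t_num = T(x)
exact = q * x * (1.0 - x) / (2.0 * k)   # classical parabolic profile
```

The same superposition integral carries over to the plates and spheres of the thesis, with the Green's function built by the method of images and the saw-tooth source expressed through delta and step functions.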

  16. A non-planar two-loop three-point function beyond multiple polylogarithms

    NASA Astrophysics Data System (ADS)

    von Manteuffel, Andreas; Tancredi, Lorenzo

    2017-06-01

    We consider the analytic calculation of a two-loop non-planar three-point function which contributes to the two-loop amplitudes for tt̄ production and γγ production in gluon fusion through a massive top-quark loop. All subtopology integrals can be written in terms of multiple polylogarithms over an irrational alphabet and we employ a new method for the integration of the differential equations which does not rely on the rationalization of the latter. The top topology integrals, instead, in spite of the absence of a massive three-particle cut, cannot be evaluated in terms of multiple polylogarithms and require the introduction of integrals over complete elliptic integrals and polylogarithms. We provide one-fold integral representations for the solutions and continue them analytically to all relevant regions of the phase space in terms of real functions, extracting all imaginary parts explicitly. The numerical evaluation of our expressions becomes straightforward in this way.

  17. Developing rapid methods for analyzing upland riparian functions and values.

    PubMed

    Hruby, Thomas

    2009-06-01

    Regulators protecting riparian areas need to understand the integrity, health, beneficial uses, functions, and values of this resource. Up to now most methods providing information about riparian areas are based on analyzing condition or integrity. These methods, however, provide little information about functions and values. Different methods are needed that specifically address this aspect of riparian areas. In addition to information on functions and values, regulators have very specific needs that include: an analysis at the site scale, low cost, usability, and inclusion of policy interpretations. To meet these needs a rapid method has been developed that uses a multi-criteria decision matrix to categorize riparian areas in Washington State, USA. Indicators are used to identify the potential of the site to provide a function, the potential of the landscape to support the function, and the value the function provides to society. To meet legal needs fixed boundaries for assessment units are established based on geomorphology, the distance from "Ordinary High Water Mark" and different categories of land uses. Assessment units are first classified based on ecoregions, geomorphic characteristics, and land uses. This simplifies the data that need to be collected at a site, but it requires developing and calibrating a separate model for each "class." The approach to developing methods is adaptable to other locations as its basic structure is not dependent on local conditions.

  18. Prediction of enzymatic pathways by integrative pathway mapping

    PubMed Central

    Wichelecki, Daniel J; San Francisco, Brian; Zhao, Suwen; Rodionov, Dmitry A; Vetting, Matthew W; Al-Obaidi, Nawar F; Lin, Henry; O'Meara, Matthew J; Scott, David A; Morris, John H; Russel, Daniel; Almo, Steven C; Osterman, Andrei L

    2018-01-01

    The functions of most proteins are yet to be determined. The function of an enzyme is often defined by its interacting partners, including its substrate and product, and its role in larger metabolic networks. Here, we describe a computational method that predicts the functions of orphan enzymes by organizing them into a linear metabolic pathway. Given candidate enzyme and metabolite pathway members, this aim is achieved by finding those pathways that satisfy structural and network restraints implied by varied input information, including that from virtual screening, chemoinformatics, genomic context analysis, and ligand-binding experiments. We demonstrate this integrative pathway mapping method by predicting the L-gulonate catabolic pathway in Haemophilus influenzae Rd KW20. The prediction was subsequently validated experimentally by enzymology, crystallography, and metabolomics. Integrative pathway mapping by satisfaction of structural and network restraints is extensible to molecular networks in general and thus formally bridges the gap between structural biology and systems biology. PMID:29377793

  19. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right-hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
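The kind of fast structured-grid solver the KFBI method leans on can be sketched with an FFT-based periodic Poisson solve (a minimal stand-in; the paper's solvers handle embedded irregular domains and boundary corrections):

```python
import numpy as np

# Periodic Poisson problem on the unit square: -(u_xx + u_yy) = f,
# with a manufactured bandlimited solution so the spectral solve is exact.
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
f = (4 * np.pi ** 2 + 16 * np.pi ** 2) * u_exact   # f = -laplacian(u_exact)

kx = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)      # angular wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
denom = KX ** 2 + KY ** 2
denom[0, 0] = 1.0                                   # avoid divide-by-zero
u_hat = np.fft.fft2(f) / denom
u_hat[0, 0] = 0.0                                   # pin the zero mean mode
u = np.real(np.fft.ifft2(u_hat))
```

Each GMRES iteration of KFBI performs one such O(n log n) structured-grid solve in place of evaluating Green's-function integrals directly.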

  20. Integrative Analysis of “-Omics” Data Using Penalty Functions

    PubMed Central

    Zhao, Qing; Shi, Xingjie; Huang, Jian; Liu, Jin; Li, Yang; Ma, Shuangge

    2014-01-01

    In the analysis of omics data, integrative analysis provides an effective way of pooling information across multiple datasets or multiple correlated responses, and can be more effective than single-dataset (response) analysis. Multiple families of integrative analysis methods have been proposed in the literature. The current review focuses on the penalization methods. Special attention is paid to sparse meta-analysis methods that pool summary statistics across datasets, and integrative analysis methods that pool raw data across datasets. We discuss their formulation and rationale. Beyond “standard” penalized selection, we also review contrasted penalization and Laplacian penalization which accommodate finer data structures. The computational aspects, including computational algorithms and tuning parameter selection, are examined. This review concludes with possible limitations and extensions. PMID:25691921
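A minimal sketch of one penalization workhorse, the lasso solved by coordinate descent with soft-thresholding (illustrative only; the integrative and contrasted penalties reviewed build on this building block). With an orthonormal design the solution has a closed form to check against:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5 * ||y - X b||^2 + lam * ||b||_1 by coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual for b_j
            b[j] = soft(X[:, j] @ r, lam) / col_sq[j]
    return b

# With an orthonormal design, the lasso solution is soft(X^T y, lam).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(20, 5)))    # orthonormal columns
y = rng.normal(size=20)
lam = 0.3
b = lasso_cd(Q, y, lam)
closed_form = soft(Q.T @ y, lam)
```

Integrative analysis extends this by sharing or contrasting penalties across datasets; group and Laplacian penalties swap in different proximal maps.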

  1. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
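The DPF itself is easy to sketch: estimate bin probabilities (integrals of the PDF over discrete intervals) from hypothetical intensity samples; the probabilities sum to one and moments can be taken directly from the DPF:

```python
import random

# DPF: the integral of the PDF over each discrete interval, estimated here
# as bin probabilities from invented "intensity" samples.
random.seed(7)
samples = [random.gauss(5.0, 1.0) for _ in range(50_000)]

lo, hi, nbins = 0.0, 10.0, 40
width = (hi - lo) / nbins
dpf = [0.0] * nbins
for s in samples:
    b = min(max(int((s - lo) / width), 0), nbins - 1)
    dpf[b] += 1.0 / len(samples)

# Moments follow directly from the DPF, using bin midpoints.
mean_dpf = sum(dpf[b] * (lo + (b + 0.5) * width) for b in range(nbins))
mean_samples = sum(samples) / len(samples)
```

In the paper the DPFs are propagated along the radiation path by the equation of transfer itself, so all moments converge without the stochastic sampling shown here.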

  2. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, the Lucy estimate lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.

  3. A new method for calculating differential distributions directly in Mellin space

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander

    2006-12-01

    We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.

  4. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  5. Neutron skyshine calculations with the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-10-01

    Recently developed line- and conical-beam response functions are used to calculate neutron skyshine doses for four idealized source geometries. These calculations, which can serve as benchmarks, are compared with MCNP calculations, and the excellent agreement indicates that the integral conical- and line-beam method is an effective alternative to more computationally expensive transport calculations.

  6. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integral formulas for the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.

  7. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Çelebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.

  8. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like-transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  9. TRANSVERSE MERCATOR MAP PROJECTION OF THE SPHEROID USING TRANSFORMATION OF THE ELLIPTIC INTEGRAL

    NASA Technical Reports Server (NTRS)

    Wallis, D. E.

    1994-01-01

    This program produces the Gauss-Kruger (constant meridional scale) Transverse Mercator Projection which is used to construct the U.S. Army's Universal Transverse Mercator (UTM) Grid System. The method is capable of mapping the entire northern hemisphere of the earth (and, by symmetry of the projection, the entire earth) accurately with respect to a single principal meridian, and is therefore mathematically insensitive to proximity either to the pole or the equator, or to the departure of the meridian from the central meridian. This program could be useful to any map-making agency. The program overcomes the limitations of the "series" method (Thomas, 1952) presently used to compute the UTM Grid, specifically its complicated derivation, non-convergence near the pole, lack of rigorous error analysis, and difficulty of obtaining increased accuracy. The method is based on the principle that the parametric colatitude of a point is the amplitude of the Elliptic Integral of the 2nd Kind, and this (irreducible) integral is the desired projection. Thus, a specification of the colatitude leads, most directly (and with strongest motivation) to a formulation in terms of amplitude. The most difficult problem to be solved was setting up the method so that the Elliptic Integral of the 2nd Kind could be used elsewhere than on the principal meridian. The point to be mapped is specified in conventional geographic coordinates (geodetic latitude and longitudinal departure from the principal meridian). Using the colatitude (complement of latitude) and the longitude (departure), the initial step is to map the point to the North Polar Stereographic Projection. The closed-form, analytic function that coincides with the North Polar Stereographic Projection of the spheroid along the principal meridian is put into a Newton-Raphson iteration that solves for the tangent of one half the parametric colatitude, generalized to the complex plane. 
Because the parametric colatitude is the amplitude of the (irreducible) Incomplete Elliptic Integral of the 2nd Kind, the value for the tangent of one half the amplitude of the Elliptic Integral of the 2nd Kind is now known. The elliptic integral may now be computed by any desired method, and the result will be the Gauss-Kruger Transverse Mercator Projection. This result is a consequence of the fact that these steps produce a computation of real distance along the image (in the plane) of the principal meridian, and an analytic continuation of the distance at points that don't lie on the principal meridian. The elliptic-integral method used by this program is one of the "transformations of the elliptic integral" (similar to Landen's Transformation), appearing in standard handbooks of mathematical functions. Only elementary transcendental functions are utilized. The program output is the conventional (as used by the mapping agencies) cartesian coordinates, in meters, of the Transverse Mercator projection. The origin is at the intersection of the principal meridian and the equator. This FORTRAN77 program was developed on an IBM PC series computer equipped with an Intel Math Coprocessor. Double precision complex arithmetic and transcendental functions are needed to support a projection accuracy of 1 mm. Because such functions are not usually part of the FORTRAN library, the needed functions have been explicitly programmed and included in the source code. The program was developed in 1989. TRANSVERSE MERCATOR MAP PROJECTION OF THE SPHEROID USING TRANSFORMATIONS OF THE ELLIPTIC INTEGRAL is a copyrighted work with all copyright vested in NASA.
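The central quantity, meridian distance as an incomplete elliptic integral, is easy to evaluate numerically as a check (a sketch with WGS84-style constants assumed for illustration; the program itself uses transformations of the elliptic integral rather than brute-force quadrature):

```python
import math

# WGS84-like ellipsoid constants (assumed here for illustration).
a = 6378137.0
f = 1.0 / 298.257223563
e2 = f * (2.0 - f)                       # first eccentricity squared

def meridian_arc(phi, n=10_000):
    """Meridian distance from the equator to geodetic latitude phi by
    Simpson quadrature of  a (1 - e^2) * int_0^phi (1 - e^2 sin^2 t)^(-3/2) dt,
    an elliptic integral evaluated numerically (n must be even)."""
    g = lambda t: (1.0 - e2 * math.sin(t) ** 2) ** -1.5
    h = phi / n
    s = g(0.0) + g(phi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return a * (1.0 - e2) * s * h / 3.0

quarter = meridian_arc(math.pi / 2)      # pole-to-equator meridian distance
```

The quarter meridian comes out near 10,001,966 m, the length the metre was originally defined against; the program's elliptic-integral transformations reach millimetre accuracy without iterative quadrature at every point.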

  10. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
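    The pattern described (typed variables carrying their own access methods, grouped into a catalogued table so generic lookups can be bypassed) can be sketched in a few lines. The Python below is an illustrative analogue, not the authors' C++ classes; all names are hypothetical.

```python
class Variable:
    # One table entry: carries its own value, type check, and access methods,
    # so generic table lookups can be bypassed once a handle is held.
    def __init__(self, name, value, vtype, units=""):
        self.name, self._type, self.units = name, vtype, units
        self._value = None
        self.set(value)

    def set(self, value):
        if not isinstance(value, self._type):       # data-integrity check
            raise TypeError(f"{self.name} expects {self._type.__name__}")
        self._value = value

    def get(self):
        return self._value

class VariableTable:
    # Groups Variables and catalogues them by name.
    def __init__(self):
        self._vars = {}

    def define(self, name, value, vtype, units=""):
        var = Variable(name, value, vtype, units)
        self._vars[name] = var
        return var          # caller keeps the handle: no repeated lookups

    def __getitem__(self, name):
        return self._vars[name]

table = VariableTable()
mach = table.define("mach", 0.8, float)
mach.set(0.85)              # direct access method, bypassing table lookup
try:
    mach.set("fast")        # wrong type is rejected, preserving integrity
    type_checked = False
except TypeError:
    type_checked = True
```

    Holding the returned handle (`mach`) gives direct access without repeated table lookups, mirroring the access-time argument in the abstract.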

  11. Towards a taxonomy for integrated care: a mixed-methods study.

    PubMed

    Valentijn, Pim P; Boesveld, Inge C; van der Klauw, Denise M; Ruwaard, Dirk; Struijs, Jeroen N; Molema, Johanna J W; Bruijnzeels, Marc A; Vrijhoef, Hubertus Jm

    2015-01-01

    Building integrated services in a primary care setting is considered an essential strategy for establishing a high-quality and affordable health care system. The theoretical foundations of such integrated service models are described by the Rainbow Model of Integrated Care, which distinguishes six integration dimensions (clinical, professional, organisational, system, functional and normative integration). The aim of the present study is to refine the Rainbow Model of Integrated Care by developing a taxonomy that specifies the underlying key features of the six dimensions. First, a literature review was conducted to identify features for achieving integrated service delivery. Second, a thematic analysis method was used to develop a taxonomy of key features organised into the dimensions of the Rainbow Model of Integrated Care. Finally, the appropriateness of the key features was tested in a Delphi study among Dutch experts. The taxonomy consists of 59 key features distributed across the six integration dimensions of the Rainbow Model of Integrated Care. Key features associated with the clinical, professional, organisational and normative dimensions were considered appropriate by the experts. Key features linked to the functional and system dimensions were considered less appropriate. This study contributes to the ongoing debate on defining the concept and typology of integrated care. This taxonomy provides a development agenda for establishing an accepted scientific framework of integrated care from an end-user, professional, managerial and policy perspective.

  12. Complex quantum Hamilton-Jacobi equation with Bohmian trajectories: Application to the photodissociation dynamics of NOCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    2014-03-14

    The complex quantum Hamilton-Jacobi equation-Bohmian trajectories (CQHJE-BT) method is introduced as a synthetic trajectory method for integrating the complex quantum Hamilton-Jacobi equation for the complex action function by propagating an ensemble of real-valued correlated Bohmian trajectories. Substituting the wave function expressed in exponential form in terms of the complex action into the time-dependent Schrödinger equation yields the complex quantum Hamilton-Jacobi equation. We transform this equation into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation describing the rate of change in the complex action transported along Bohmian trajectories is simultaneously integrated with the guidance equation for Bohmian trajectories, and the time-dependent wave function is readily synthesized. The spatial derivatives of the complex action required for the integration scheme are obtained by solving one moving least squares matrix equation. In addition, the method is applied to the photodissociation of NOCl. The photodissociation dynamics of NOCl can be accurately described by propagating a small ensemble of trajectories. This study demonstrates that the CQHJE-BT method combines the considerable advantages of both the real and the complex quantum trajectory methods previously developed for wave packet dynamics.

  13. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  14. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  15. Cambered Jet-Flapped Airfoil Theory with Tables and Computer Programs for Application.

    DTIC Science & Technology

    1977-09-01

    influence function which is a parametric function of the jet-momentum coefficient. In general, the integrals involved must be evaluated by numerical methods. Tables of the necessary influence functions are given in the report.

  16. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, and it was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
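    To make the integral-equation idea concrete, here is a hedged sketch for a noise-free bi-exponential pulse p(t) = exp(-bt) - exp(-at): since p satisfies p'' + (a+b)p' + ab·p = 0, integrating twice (with p(0) = 0) gives p(t) = c1·t - (a+b)·I1(t) - ab·I2(t), where I1 and I2 are the running first and second integrals, so a least-squares fit of p against [t, I1, I2] recovers a+b and ab. This toy uses a full least-squares fit rather than the paper's three optimized sampling points, so treat it as an illustration of the principle only; all parameter values are assumptions.

```python
import math

def simulate(fast, slow, dt, n):
    # bi-exponential detector pulse p(t) = exp(-slow*t) - exp(-fast*t), p(0) = 0
    return [math.exp(-slow * i * dt) - math.exp(-fast * i * dt) for i in range(n)]

def cumtrapz(y, dt):
    # running trapezoid integral, same length as y, starting at 0
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * dt * (y[i - 1] + y[i]))
    return out

def solve3(A, rhs):
    # 3x3 Gaussian elimination with partial pivoting
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            fct = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= fct * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def estimate_rates(p, dt):
    # Fit p(t) = c1*t + c2*I1(t) + c3*I2(t); then c2 = -(a+b), c3 = -a*b.
    I1 = cumtrapz(p, dt)
    I2 = cumtrapz(I1, dt)
    t = [i * dt for i in range(len(p))]
    cols = [t, I1, I2]
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    rhs = [sum(u * v for u, v in zip(cols[i], p)) for i in range(3)]
    c1, c2, c3 = solve3(A, rhs)
    s, q = -c2, -c3                          # a+b and a*b
    d = math.sqrt(max(s * s - 4.0 * q, 0.0))
    return (s + d) / 2.0, (s - d) / 2.0      # (fast, slow) rate constants

dt, n = 0.001, 3001                          # 0..3 time window (assumed units)
pulse = simulate(2.0, 0.5, dt, n)
fast, slow = estimate_rates(pulse, dt)
```

    Because only integrals of the recorded pulse enter the fit, no numerical differentiation of noisy data is required, which is the practical appeal of integral-equation estimators.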

  17. Windowed Green function method for the Helmholtz equation in the presence of multiply layered media

    NASA Astrophysics Data System (ADS)

    Bruno, O. P.; Pérez-Arancibia, C.

    2017-06-01

    This paper presents a new methodology for the solution of problems of two- and three-dimensional acoustic scattering (and, in particular, two-dimensional electromagnetic scattering) by obstacles and defects in the presence of an arbitrary number of penetrable layers. Relying on the use of certain slow-rise windowing functions, the proposed windowed Green function approach efficiently evaluates oscillatory integrals over unbounded domains, with high accuracy, without recourse to the highly expensive Sommerfeld integrals that have typically been used to account for the effect of underlying planar multilayer structures. The proposed methodology, whose theoretical basis was presented in the recent contribution (Bruno et al. 2016 SIAM J. Appl. Math. 76, 1871-1898. (doi:10.1137/15M1033782)), is fast, accurate, flexible and easy to implement. Our numerical experiments demonstrate that the numerical errors resulting from the proposed approach decrease faster than any negative power of the window size. In a number of examples considered in this paper, the proposed method is up to thousands of times faster, for a given accuracy, than corresponding methods based on the use of Sommerfeld integrals.
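    A sketch of the kind of slow-rise window involved: a C-infinity cutoff equal to 1 on the inner region and decaying to 0 with all derivatives vanishing at both matching points, which is the property behind error decay faster than any negative power of the window size. The functional form below is the standard smooth cutoff used in this literature, but the parameter choices are illustrative assumptions.

```python
import math

def window(x, x0, x1):
    # Slow-rise window: identically 1 on [-x0, x0], identically 0 beyond x1,
    # and infinitely smooth in between with ALL derivatives vanishing at both
    # matching points.
    s = abs(x)
    if s <= x0:
        return 1.0
    if s >= x1:
        return 0.0
    u = (s - x0) / (x1 - x0)
    return math.exp(2.0 * math.exp(-1.0 / u) / (u - 1.0))

# sampled profile across the rise region [x0, x1] = [1, 2]
vals = [window(1.0 + i / 10.0, 1.0, 2.0) for i in range(11)]
```

    Multiplying the integrand of an unbounded oscillatory integral by such a window truncates the domain without introducing the endpoint discontinuities that would otherwise limit convergence to an algebraic rate.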

  18. Windowed Green function method for the Helmholtz equation in the presence of multiply layered media.

    PubMed

    Bruno, O P; Pérez-Arancibia, C

    2017-06-01

    This paper presents a new methodology for the solution of problems of two- and three-dimensional acoustic scattering (and, in particular, two-dimensional electromagnetic scattering) by obstacles and defects in the presence of an arbitrary number of penetrable layers. Relying on the use of certain slow-rise windowing functions, the proposed windowed Green function approach efficiently evaluates oscillatory integrals over unbounded domains, with high accuracy, without recourse to the highly expensive Sommerfeld integrals that have typically been used to account for the effect of underlying planar multilayer structures. The proposed methodology, whose theoretical basis was presented in the recent contribution (Bruno et al. 2016 SIAM J. Appl. Math. 76, 1871-1898. (doi:10.1137/15M1033782)), is fast, accurate, flexible and easy to implement. Our numerical experiments demonstrate that the numerical errors resulting from the proposed approach decrease faster than any negative power of the window size. In a number of examples considered in this paper, the proposed method is up to thousands of times faster, for a given accuracy, than corresponding methods based on the use of Sommerfeld integrals.

  19. Positive semidefinite tensor factorizations of the two-electron integral matrix for low-scaling ab initio electronic structure.

    PubMed

    Hoy, Erik P; Mazziotti, David A

    2015-08-14

    Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
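    The simplest member of the family, the pivoted Cholesky factorization that the paper generalizes, can be sketched directly. This toy version factors any symmetric positive semidefinite matrix M ≈ Σ_k ℓ_k ℓ_kᵀ and stops when the residual diagonal becomes negligible, which is how low-rank structure in a matrix such as the two-electron integral matrix is exposed. It is a generic sketch, not the authors' generalized factorization.

```python
import math

def pivoted_cholesky(M, tol=1e-10):
    # Diagonally pivoted Cholesky factorization of a symmetric positive
    # semidefinite matrix: returns columns l_k with M ~= sum_k outer(l_k, l_k),
    # stopping once the residual diagonal drops below tol (rank revealed).
    n = len(M)
    R = [row[:] for row in M]              # residual matrix
    cols, used = [], set()
    for _ in range(n):
        p = max((i for i in range(n) if i not in used),
                key=lambda i: R[i][i], default=None)
        if p is None or R[p][p] < tol:
            break
        used.add(p)
        piv = math.sqrt(R[p][p])
        col = [R[i][p] / piv for i in range(n)]
        cols.append(col)
        for i in range(n):
            for j in range(n):
                R[i][j] -= col[i] * col[j]
    return cols

M = [[1.0, 2.0, 3.0],
     [2.0, 5.0, 7.0],
     [3.0, 7.0, 10.0]]                     # positive semidefinite, rank 2
cols = pivoted_cholesky(M)
recon = [[sum(c[i] * c[j] for c in cols) for j in range(3)] for i in range(3)]
```

    The early termination is the point: a rank-r matrix yields only r factor columns, so downstream contractions scale with r rather than with the full dimension.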

  20. A radial basis function Galerkin method for inhomogeneous nonlocal diffusion

    DOE PAGES

    Lehoucq, Richard B.; Rowe, Stephen T.

    2016-02-01

    We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
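    As a toy illustration of the radial-basis-function machinery (not the paper's nonlocal Galerkin discretization or its special quadrature routine), the sketch below interpolates a 1-D function from Gaussian RBFs by solving the usual symmetric collocation system; the shape parameter `eps` is an illustrative assumption.

```python
import math

def solve(A, b):
    # dense Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolate(xs, ys, eps=3.0):
    # Symmetric Gaussian-RBF collocation: solve A c = y with
    # A_ij = exp(-(eps*(x_i - x_j))^2), then s(x) = sum_j c_j * phi(|x - x_j|).
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    c = solve(A, ys)
    def s(x):
        return sum(cj * math.exp(-(eps * (x - xj)) ** 2) for cj, xj in zip(c, xs))
    return s

xs = [i / 8.0 for i in range(9)]
ys = [math.sin(math.pi * x) for x in xs]
s = rbf_interpolate(xs, ys)
```

    The same basis, paired with a suitable quadrature for the nonlocal operator, underlies the Galerkin stiffness-matrix assembly described in the abstract.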

  1. Optimized holographic femtosecond laser patterning method towards rapid integration of high-quality functional devices in microchannels

    PubMed Central

    Zhang, Chenchu; Hu, Yanlei; Du, Wenqiang; Wu, Peichao; Rao, Shenglong; Cai, Ze; Lao, Zhaoxin; Xu, Bing; Ni, Jincheng; Li, Jiawen; Zhao, Gang; Wu, Dong; Chu, Jiaru; Sugioka, Koji

    2016-01-01

    Rapid integration of high-quality functional devices in microchannels is in high demand for miniature lab-on-a-chip applications. This paper demonstrates the embellishment of existing microfluidic devices with integrated micropatterns via femtosecond laser MRAF-based holographic patterning (MHP) microfabrication, which proves two-photon polymerization (TPP) based on a spatial light modulator (SLM) to be a rapid and powerful technology for chip functionalization. An optimized mixed region amplitude freedom (MRAF) algorithm was used to generate a high-quality shaped focal field. Based on the optimized parameters, a single-exposure approach is developed to fabricate 200 × 200 μm microstructure arrays in less than 240 ms. Moreover, microtraps, QR codes and letters are integrated into a microdevice by the advanced method for particle capture and device identification. These results indicate that such holographic laser embellishment of microfluidic devices is simple, flexible and easy to access, which has great potential in lab-on-a-chip applications of biological culture, chemical analyses and optofluidic devices. PMID:27619690

  2. Low-temperature bonded glass-membrane microfluidic device for in vitro organ-on-a-chip cell culture models

    NASA Astrophysics Data System (ADS)

    Pocock, Kyall J.; Gao, Xiaofang; Wang, Chenxi; Priest, Craig; Prestidge, Clive A.; Mawatari, Kazuma; Kitamori, Takehiko; Thierry, Benjamin

    2015-12-01

    The integration of microfluidics with living biological systems has paved the way to the exciting concept of "organs-on-a-chip", which aims at the development of advanced in vitro models that replicate the key features of human organs. Glass-based devices have long been utilised in the field of microfluidics but the integration of alternative functional elements within multi-layered glass microdevices, such as polymeric membranes, remains a challenge. To this end, we have extended a previously reported approach for the low-temperature bonding of glass devices that enables the integration of a functional polycarbonate porous membrane. The process was initially developed and optimised on specialty low-temperature bonding equipment (μTAS2001, Bondtech, Japan) and subsequently adapted to more widely accessible hot embosser units (EVG520HE Hot Embosser, EVG, Austria). The key aspect of this method is the use of low temperatures compatible with polymeric membranes. Compared to borosilicate glass bonding (650 °C) and quartz/fused silica bonding (1050 °C) processes, this method maintains the integrity and functionality of the membrane (Tg 150 °C for polycarbonate). Leak tests performed showed no damage or loss of integrity of the membrane for up to 150 hours, indicating sufficient bond strength for long term cell culture. A feasibility study confirmed the growth of dense and functional monolayers of Caco-2 cells within 5 days.

  3. Low-temperature bonding process for the fabrication of hybrid glass-membrane organ-on-a-chip devices

    NASA Astrophysics Data System (ADS)

    Pocock, Kyall J.; Gao, Xiaofang; Wang, Chenxi; Priest, Craig; Prestidge, Clive A.; Mawatari, Kazuma; Kitamori, Takehiko; Thierry, Benjamin

    2016-10-01

    The integration of microfluidics with living biological systems has paved the way to the exciting concept of "organs-on-a-chip," which aims at the development of advanced in vitro models that replicate the key features of human organs. Glass-based devices have long been utilized in the field of microfluidics but the integration of alternative functional elements within multilayered glass microdevices, such as polymeric membranes, remains a challenge. To this end, we have extended a previously reported approach for the low-temperature bonding of glass devices that enables the integration of a functional polycarbonate porous membrane. The process was initially developed and optimized on specialty low-temperature bonding equipment (μTAS2001, Bondtech, Japan) and subsequently adapted to more widely accessible hot embosser units (EVG520HE Hot Embosser, EVG, Austria). The key aspect of this method is the use of low temperatures compatible with polymeric membranes. Compared to borosilicate glass bonding (650°C) and quartz/fused silica bonding (1050°C) processes, this method maintains the integrity and functionality of the membrane (Tg 150°C for polycarbonate). Leak tests performed showed no damage or loss of integrity of the membrane for up to 150 h, indicating sufficient bond strength for long-term cell culture. A feasibility study confirmed the growth of dense and functional monolayers of Caco-2 cells within 5 days.

  4. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; ...

    2017-03-07

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
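    The core reduction is easy to demonstrate: once a function is in canonical (CP) low-rank form f = Σ_r a_r(x) b_r(y) c_r(z), its multidimensional integral collapses to a short sum of products of 1-D integrals, turning O(n³) grid evaluations into O(R·n). The rank-2 example and trapezoid rule below are illustrative stand-ins for the paper's ALS-decomposed PES and Gauss-Hermite quadrature.

```python
import math

# Rank-2 canonical (CP) representation: f(x, y, z) = sum_r a_r(x) b_r(y) c_r(z).
factors = [
    (math.sin, math.cos, math.exp),
    (lambda x: x, lambda y: y, lambda z: z),
]

def f(x, y, z):
    return sum(a(x) * b(y) * c(z) for a, b, c in factors)

n = 40
h = 1.0 / n
pts = [i * h for i in range(n + 1)]
wts = [h * (0.5 if i in (0, n) else 1.0) for i in range(n + 1)]  # trapezoid

def quad1d(g):
    # 1-D quadrature on [0, 1] with the shared nodes and weights
    return sum(w * g(x) for x, w in zip(pts, wts))

# Low-rank route: O(R * n) evaluations, a short sum of products of 1-D integrals.
low_rank = sum(quad1d(a) * quad1d(b) * quad1d(c) for a, b, c in factors)

# Brute-force tensor-grid route over the same rule: O(n^3) evaluations.
full = sum(wi * wj * wk * f(x, y, z)
           for x, wi in zip(pts, wts)
           for y, wj in zip(pts, wts)
           for z, wk in zip(pts, wts))
```

    For a separable quadrature rule the two routes agree to machine precision, since the product of sums and the sum over the product grid are algebraically identical; the savings grow rapidly with dimension.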

  5. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm⁻¹ or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  6. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  7. Volterra integral equation-factorisation method and nucleus-nucleus elastic scattering

    NASA Astrophysics Data System (ADS)

    Laha, U.; Majumder, M.; Bhoi, J.

    2018-04-01

    An approximate solution for the nuclear Hulthén plus atomic Hulthén potentials is constructed by solving the associated Volterra integral equation by the series substitution method. Within the framework of the supersymmetry-inspired factorisation method, this solution is exploited to construct higher partial wave interactions. The merit of our approach is examined by computing elastic scattering phases of the α-α system by the judicious use of the phase function method. Reasonable agreement in phase shifts is obtained with standard data.
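    A sketch of the series-substitution idea on a textbook Volterra equation rather than the nuclear-plus-atomic Hulthén problem treated in the paper: successive substitution builds the Neumann series term by term, and for u(x) = 1 + ∫₀ˣ u(t) dt the iterates converge to e^x. Grid size and iteration count are illustrative choices.

```python
import math

def volterra_series(f, K, x_max=1.0, n=200, iters=25):
    # Successive substitution (Neumann series) for the Volterra equation
    #   u(x) = f(x) + integral_0^x K(x, t) u(t) dt,
    # with the running integral evaluated by the trapezoid rule on a grid.
    h = x_max / n
    xs = [i * h for i in range(n + 1)]
    u = [f(x) for x in xs]
    for _ in range(iters):
        new = []
        for i, x in enumerate(xs):
            integ = 0.0
            for j in range(1, i + 1):
                integ += 0.5 * h * (K(x, xs[j - 1]) * u[j - 1] + K(x, xs[j]) * u[j])
            new.append(f(x) + integ)
        u = new
    return xs, u

# Textbook check: u(x) = 1 + integral_0^x u dt has the solution u = exp(x).
xs, u = volterra_series(lambda x: 1.0, lambda x, t: 1.0)
```

    Each sweep adds the next term of the series, so the iteration converges for any Volterra kernel that is bounded on the integration range.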

  8. INTEGRATING INDUSTRIAL ARTS AND THE ELEMENTARY SCHOOL CURRICULUM, THE REASON AND METHOD OF E.I.A.

    ERIC Educational Resources Information Center

    BOILY, ROBERT R.

    DEVELOPED FOR THE ELEMENTARY SCHOOL TEACHER, THIS MANUAL DESCRIBES THE RATIONALE AND SOME OF THE METHODS OF AN INDUSTRIAL ARTS PROGRAM WHICH FUNCTIONS AS AN INTEGRAL PART OF THE REGULAR ELEMENTARY SCHOOL CURRICULUM. GUIDELINES FOR CLASSROOM USE OF SUCH STANDARD TOOLS AS THE HAMMER AND THE HANDDRILL ARE PRESENTED, AND SUGGESTIONS ARE OFFERED FOR…

  9. Identification of alterations associated with age in the clustering structure of functional brain networks.

    PubMed

    Guzman, Grover E C; Sato, Joao R; Vidal, Maciel C; Fujita, Andre

    2018-01-01

    Initial studies using resting-state functional magnetic resonance imaging on the trajectories of the brain network from childhood to adulthood found evidence of functional integration and segregation over time. The comprehension of how healthy individuals' functional integration and segregation occur is crucial to enhance our understanding of possible deviations that may lead to brain disorders. Recent approaches have focused on the framework wherein the functional brain network is organized into spatially distributed modules that have been associated with specific cognitive functions. Here, we tested the hypothesis that the clustering structure of brain networks evolves during development. To address this hypothesis, we defined a measure of how well a brain region is clustered (network fitness index), and developed a method to evaluate its association with age. Then, we applied this method to a functional magnetic resonance imaging data set composed of 397 males under 31 years of age collected as part of the Autism Brain Imaging Data Exchange Consortium. As a result, we identified two brain regions for which the clustering structure changes over time, namely, the left middle temporal gyrus and the left putamen. Since the network fitness index is associated with both integration and segregation, our findings suggest that these brain regions play a role in the development of brain systems.

  10. Ferroelectric Zinc Oxide Nanowire Embedded Flexible Sensor for Motion and Temperature Sensing.

    PubMed

    Shin, Sung-Ho; Park, Dae Hoon; Jung, Joo-Yun; Lee, Min Hyung; Nah, Junghyo

    2017-03-22

    We report a simple method to realize a multifunctional flexible motion sensor using ferroelectric lithium-doped ZnO-PDMS. The ferroelectric layer enables piezoelectric dynamic sensing and provides additional motion information to more precisely discriminate different motions. The PEDOT:PSS-functionalized AgNWs, working as electrode layers for the piezoelectric sensing layer, resistively detect changes in both movement and temperature. Thus, through the optimal integration of both elements, the sensing limit, accuracy, and functionality can be further expanded. The method introduced here is a simple and effective route to realize a high-performance flexible motion sensor with integrated multifunctionalities.

  11. Oscillatory supersonic kernel function method for interfering surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.

  12. The uniform asymptotic swallowtail approximation - Practical methods for oscillating integrals with four coalescing saddle points

    NASA Technical Reports Server (NTRS)

    Connor, J. N. L.; Curtis, P. R.; Farrelly, D.

    1984-01-01

    Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for that approximation is presented to the lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytic function of its argument is considered, and two methods of solving this problem are described. The asymptotic evaluation of the butterfly canonical integral is addressed.

  13. On a class of integrals of Legendre polynomials with complicated arguments--with applications in electrostatics and biomolecular modeling.

    PubMed

    Yu, Yi-Kuo

    2003-08-15

    The exact analytical result for a class of integrals involving (associated) Legendre polynomials of complicated argument is presented. The method employed can in principle be generalized to integrals involving other special functions. This class of integrals also proves useful in electrostatic problems in which dielectric spheres are involved, which is of importance in modeling the dynamics of biological macromolecules. In fact, with this solution, a more robust foundation is laid for the Generalized Born method in modeling the dynamics of biomolecules. © 2003 Elsevier B.V. All rights reserved.
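    For orientation, the elementary orthogonality integrals that the paper's class generalizes (its integrands have complicated, composed arguments rather than plain x) can be checked numerically from the Bonnet recurrence. The trapezoid quadrature below is purely illustrative and is not the paper's analytical method.

```python
import math

def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def inner(n, m, steps=4000):
    # integral_{-1}^{1} P_n(x) P_m(x) dx by the composite trapezoid rule
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * legendre(n, x) * legendre(m, x)
    return total * h
```

    The expected values are 0 for n ≠ m and 2/(2n+1) for n = m, which gives a quick correctness check on any Legendre implementation before tackling harder argument structures.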

  14. Wave functions of symmetry-protected topological phases from conformal field theories

    NASA Astrophysics Data System (ADS)

    Scaffidi, Thomas; Ringel, Zohar

    2016-03-01

    We propose a method for analyzing two-dimensional symmetry-protected topological (SPT) wave functions using a correspondence with conformal field theories (CFTs) and integrable lattice models. This method generalizes the CFT approach for the fractional quantum Hall effect wherein the wave-function amplitude is written as a many-operator correlator in the CFT. Adopting a bottom-up approach, we start from various known microscopic wave functions of SPTs with discrete symmetries and show how the CFT description emerges at large scale, thereby revealing a deep connection between group cocycles and critical, sometimes integrable, models. We show that the CFT describing the bulk wave function is often also the one describing the entanglement spectrum, but not always. Using a plasma analogy, we also prove the existence of hidden quasi-long-range order for a large class of SPTs. Finally, we show how response to symmetry fluxes is easily described in terms of the CFT.

  15. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we solve a general form of Fredholm-Volterra integro-differential equations (IDEs) of order 2 with boundary conditions approximately and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind by using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, with the collocation points chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and the Gaussian elimination method is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates the others in many cases. A general theory of the existence of the solution is also discussed.
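    As a minimal concrete instance of the Gauss-Legendre quadrature step alone (the reduction to an algebraic system is not reproduced here), the 3-point rule has closed-form nodes (the roots of P₃) and weights, and integrates polynomials exactly through degree 5.

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]: the nodes are the roots of the
# Legendre polynomial P_3, and the rule is exact through degree 2n - 1 = 5.
NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss_legendre_3(f, a=-1.0, b=1.0):
    # affine map of the reference rule to [a, b]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(NODES, WEIGHTS))
```

    Choosing collocation points at Legendre roots, as the abstract describes, pairs naturally with this rule: both inherit their accuracy from the orthogonality of the Legendre polynomials.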

  16. The simultaneous integration of many trajectories using nilpotent normal forms

    NASA Technical Reports Server (NTRS)

    Grayson, Matthew A.; Grossman, Robert

    1990-01-01

    Taylor's formula shows how to approximate a certain class of functions by polynomials. The approximations are arbitrarily good in some neighborhood whenever the function is analytic and they are easy to compute. The main goal is to give an efficient algorithm to approximate a neighborhood of the configuration space of a dynamical system by a nilpotent, explicitly integrable dynamical system. The major areas covered include: an approximating map; the generalized Baker-Campbell-Hausdorff formula; the Picard-Taylor method; the main theorem; simultaneous integration of trajectories; and examples.
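    As a toy illustration of the Picard-Taylor idea (on the scalar equation y' = y rather than the paper's nilpotent systems), each Picard sweep integrates a polynomial iterate termwise and rebuilds the Taylor series of the solution:

```python
def picard_exp(y0=1.0, iterations=8):
    """Picard iteration for y' = y, y(0) = y0, keeping each iterate as a
    polynomial coefficient list in ascending powers of t.  Each sweep applies
    y_{n+1}(t) = y0 + integral_0^t y_n(s) ds termwise, reproducing the Taylor
    series of y0 * exp(t) up to the iteration order."""
    y = [y0]                                    # initial guess y(t) = y0
    for _ in range(iterations):
        # termwise integration shifts every power up by one
        integral = [c / (k + 1) for k, c in enumerate(y)]
        y = [y0] + integral
    return y

coeffs = picard_exp()   # coeffs[k] approaches y0 / k!
```

    The paper's contribution is to make this kind of explicit integration possible for a whole neighborhood of trajectories by passing to a nilpotent approximating system; the scalar example above only shows the Picard mechanism itself.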

  17. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  18. Using the Screened Coulomb Potential to Illustrate the Variational Method

    ERIC Educational Resources Information Center

    Zuniga, Jose; Bastida, Adolfo; Requena, Alberto

    2012-01-01

    The screened Coulomb potential, or Yukawa potential, is used to illustrate the application of the single and linear variational methods. The trial variational functions are expressed in terms of Slater-type functions, for which the integrals needed to carry out the variational calculations are easily evaluated in closed form. The variational…

  19. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, which frequently appear in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretical model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one as the errors of the input data tend to zero. Simulated and astrophysical examples are presented.
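    A minimal sketch of the Tikhonov step, assuming a small dense system and a hand-picked regularization parameter (real applications choose lambda from the noise level of the data, which the paper's convergence statement relies on):

```python
def tikhonov_2(A, b, lam):
    """Tikhonov-regularized least squares for two unknowns: solve
    (A^T A + lam * I) x = A^T b via the closed-form 2x2 inverse."""
    m = len(A)
    g00 = sum(A[k][0] * A[k][0] for k in range(m)) + lam
    g01 = sum(A[k][0] * A[k][1] for k in range(m))
    g11 = sum(A[k][1] * A[k][1] for k in range(m)) + lam
    r0 = sum(A[k][0] * b[k] for k in range(m))
    r1 = sum(A[k][1] * b[k] for k in range(m))
    det = g00 * g11 - g01 * g01
    return [(g11 * r0 - g01 * r1) / det, (g00 * r1 - g01 * r0) / det]

# Overdetermined toy system whose exact solution is x = (1, 2); a tiny lam
# barely perturbs it.  For a genuinely ill-posed discretized kernel, lam
# trades a small bias for stability against noise in b.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = tikhonov_2(A, b, 1e-8)
```

    Discretizing a first-kind Fredholm equation produces exactly such a linear system, but with a badly conditioned matrix A, which is where the regularization term becomes essential rather than cosmetic.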

  20. Integral Equations in Computational Electromagnetics: Formulations, Properties and Isogeometric Analysis

    NASA Astrophysics Data System (ADS)

    Lovell, Amy Elizabeth

    Computational electromagnetics (CEM) provides numerical methods to simulate electromagnetic waves interacting with their environment. Boundary integral equation (BIE) based methods, which solve Maxwell's equations in homogeneous or piecewise homogeneous media, are both efficient and accurate, especially for scattering and radiation problems. The development and analysis of electromagnetic BIEs have been a very active topic in CEM research. Indeed, many open problems remain to be addressed or studied further. A short but important list includes (1) closed-form or quasi-analytical solutions to time-domain integral equations, (2) catastrophic cancellations at low frequencies, (3) ill-conditioning due to high mesh density, multi-scale discretization, and growing electrical size, and (4) lack of flexibility due to re-meshing when an increasing number of forward numerical simulations are involved in the electromagnetic design process. This dissertation addresses several of these aspects of boundary integral equations in computational electromagnetics. The first contribution of the dissertation is to construct quasi-analytical solutions to time-dependent boundary integral equations using a direct approach. Direct inverse Fourier transform of the time-harmonic solutions is not stable, owing to the non-existence of the inverse Fourier transform of spherical Hankel functions. Using new addition theorems for the time-domain Green's function and dyadic Green's functions, time-domain integral equations governing transient scattering problems of spherical objects are solved directly and stably for the first time. Additionally, the direct time-dependent solutions, together with the newly proposed time-domain dyadic Green's functions, can enrich the time-domain spherical multipole theory. The second contribution is to create a novel method of moments (MoM) framework to solve electromagnetic boundary integral equations on subdivision surfaces.
    The aim is to avoid the meshing and re-meshing stages and thereby accelerate the design process when the geometry needs to be updated. Two schemes for constructing basis functions on the subdivision surface have been explored. One uses div-conforming basis functions, and the other creates a rigorous isogeometric approach based on subdivision basis functions with better smoothness properties. This new framework provides better accuracy, more stability, and higher flexibility. The third contribution is a new stable integral equation formulation that avoids catastrophic cancellations due to low-frequency breakdown or dense-mesh breakdown. Many conventional integral equations and their associated post-processing operations suffer from numerical catastrophic cancellations, which can lead to ill-conditioning of the linear systems or serious accuracy problems; examples include low-frequency breakdown and dense-mesh breakdown. Another instability may come from nontrivial null spaces of the integral operators involved, which may be related to spurious resonance or topology breakdown. This dissertation presents several sets of new boundary integral equations and studies their analytical properties. The first proposed formulation leads to scalar boundary integral equations in which only scalar unknowns are involved. Besides the requirements of gaining more stability and better conditioning in the resulting linear systems, multi-physics simulation is another driving force for new formulations. Formulations based on scalar and vector potentials (rather than the electromagnetic fields) have been studied for this purpose. These contributions address different stages of boundary integral equations in an almost independent manner; e.g., the isogeometric analysis framework can be used to solve different boundary integral equations, and the time-dependent solutions to integral equations from different formulations can be obtained through the same proposed methodology.

  1. Two-photon excitation cross-section in light and intermediate atoms

    NASA Technical Reports Server (NTRS)

    Omidvar, K.

    1980-01-01

    The method of explicit summation over the intermediate states is used along with LS coupling to derive an expression for two-photon absorption cross section in light and intermediate atoms in terms of integrals over radial wave functions. Two selection rules, one exact and one approximate, are also derived. In evaluating the radial integrals, for low-lying levels, the Hartree-Fock wave functions, and for high-lying levels, hydrogenic wave functions obtained by the quantum defect method are used. A relationship between the cross section and the oscillator strengths is derived. Cross sections due to selected transitions in nitrogen, oxygen, and chlorine are given. The expression for the cross section is useful in calculating the two-photon absorption in light and intermediate atoms.

  2. Two-photon excitation cross section in light and intermediate atoms in frozen-core LS-coupling approximation

    NASA Technical Reports Server (NTRS)

    Omidvar, K.

    1980-01-01

    Using the method of explicit summation over the intermediate states two-photon absorption cross sections in light and intermediate atoms based on the simplistic frozen-core approximation and LS coupling have been formulated. Formulas for the cross section in terms of integrals over radial wave functions are given. Two selection rules, one exact and one approximate, valid within the stated approximations are derived. The formulas are applied to two-photon absorptions in nitrogen, oxygen, and chlorine. In evaluating the radial integrals, for low-lying levels, the Hartree-Fock wave functions, and for high-lying levels, hydrogenic wave functions obtained by the quantum-defect method have been used. A relationship between the cross section and the oscillator strengths is derived.

  3. Influence of different cusp coverage methods for the extension of ceramic inlays on marginal integrity and enamel crack formation in vitro.

    PubMed

    Krifka, Stephanie; Stangl, Martin; Wiesbauer, Sarah; Hiller, Karl-Anton; Schmalz, Gottfried; Federlin, Marianne

    2009-09-01

    No information is available to date about cusp design of thin (1.0 mm) non-functional cusps and its influence upon (1) marginal integrity of ceramic inlays (CI) and partial ceramic crowns (PCC) and (2) crack formation of dental tissues. The aim of this in vitro study was to investigate the effect of cusp coverage of thin non-functional cusps on marginal integrity and enamel crack formation. CI and PCC preparations were performed on extracted human molars. Non-functional cusps were adjusted to 1.0-mm wall thickness and 1.0-mm wall thickness with horizontal reduction of about 2.0 mm. Ceramic restorations (Vita Mark II, Cerec3 System) were adhesively luted with Excite/Variolink II. The specimens were exposed to thermocycling and central mechanical loading. Marginal integrity was assessed by evaluating dye penetration after thermal cycling and mechanical loading. Enamel cracks were documented under a reflective-light microscope. The data were statistically analysed with the Mann-Whitney U test, Fisher's exact test (alpha = 0.05) and the error rates method. PCC with horizontal reduction of non-functional cusps showed statistically significantly less microleakage than PCC without such cusp coverage. Preparation designs with horizontal reduction of non-functional cusps showed a tendency toward less enamel crack formation than preparation designs without cusp coverage. Thin non-functional cusp walls of adhesively bonded restorations should be completely covered or reduced to avoid enamel cracks and marginal deficiency.

  4. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303
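    The operational-matrix mechanism can be illustrated with the plain monomial basis instead of the paper's fractional-order Legendre functions; the matrix below is a pedagogical stand-in, not the FLF matrix of the paper:

```python
def integration_matrix(n):
    """Operational matrix of integration for the truncated monomial basis
    {1, x, ..., x**(n-1)}: if c holds the coefficients of f, then P applied to
    c holds the coefficients of the integral of f from 0 to x (the x**n term
    is truncated away)."""
    P = [[0.0] * n for _ in range(n)]
    for k in range(n - 1):
        P[k + 1][k] = 1.0 / (k + 1)   # x**k integrates to x**(k+1) / (k+1)
    return P

def apply_matrix(P, c):
    return [sum(P[i][j] * c[j] for j in range(len(c))) for i in range(len(P))]

# f(x) = 1 + 2x  ->  integral is x + x**2, i.e. coefficients [0, 1, 1]
c_int = apply_matrix(integration_matrix(3), [1.0, 2.0, 0.0])
```

    The point of such matrices, in the paper as here, is that integration becomes a matrix-vector product on coefficient vectors, so the variational iteration formula can be carried out entirely in coefficient space.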

  5. Gauge Momenta as Casimir Functions of Nonholonomic Systems

    NASA Astrophysics Data System (ADS)

    García-Naranjo, Luis C.; Montaldi, James

    2018-05-01

    We consider nonholonomic systems with symmetry possessing a certain type of first integral which is linear in the velocities. We develop a systematic method for modifying the standard nonholonomic almost Poisson structure that describes the dynamics so that these integrals become Casimir functions after reduction. This explains a number of recent results on Hamiltonization of nonholonomic systems, and has consequences for the study of relative equilibria in such systems.

  6. Analysis and evaluation of functional status of lower extremity amputee-appliance systems: an integrated approach.

    PubMed

    Ganguli, S

    1976-11-01

    This paper introduces an integrated, objective and biomechanically sound approach for the analysis and evaluation of the functional status of lower extremity amputee-appliance systems. The method is demonstrated here in its application to the unilateral lower extremity amputee-axillary crutches system and the unilateral below-knee amputee-PTB prosthesis system, both of which are commonly encountered in day-to-day rehabilitation practice.

  7. Fiber-reinforced dielectric elastomer laminates with integrated function of actuating and sensing

    NASA Astrophysics Data System (ADS)

    Li, Tiefeng; Xie, Yuhan; Li, Chi; Yang, Xuxu; Jin, Yongbin; Liu, Junjie; Huang, Xiaoqiang

    2015-04-01

    The natural limbs of animals and insects integrate muscles, skin and neurons, providing both actuating and sensing functions simultaneously. Inspired by this natural structure, we present a novel structure with integrated actuating and sensing functions based on dielectric elastomer (DE) laminates. The structure deforms when subjected to high-voltage loading and generates a corresponding output signal in return. We investigate the basic physical phenomenon of the dielectric elastomer experimentally. It is noted that when high voltage is applied, the actuating dielectric elastomer membrane deforms and the sensing dielectric elastomer membrane changes its capacitance in return. Based on this concept, finite element method (FEM) simulations have been conducted to further investigate the electromechanical behavior of the structure.

  8. A spectral boundary integral equation method for the 2-D Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.
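    The spectral accuracy the formulation relies on comes from a classical fact: the equispaced (trapezoidal) rule is spectrally accurate for smooth periodic integrands, which is why treating the boundary solution as a periodic function pays off once the kernel's nonsmooth part is removed. A quick stand-alone demonstration of that fact:

```python
import math

def trapezoid_periodic(f, n):
    """Equispaced (trapezoidal) rule over one period [0, 2*pi).  For smooth
    periodic integrands the error decays faster than any power of 1/n --
    the spectral accuracy that Fourier collocation exploits."""
    h = 2.0 * math.pi / n
    return h * sum(f(k * h) for k in range(n))

# The integral of exp(cos(t)) over one period is 2*pi*I0(1) ~ 7.9549265;
# only 16 points already give near machine-precision accuracy.
val = trapezoid_periodic(lambda t: math.exp(math.cos(t)), 16)
```

    In the paper this property only holds after the logarithmic singularity of the Helmholtz kernel is subtracted analytically; with the singularity left in place, the same rule degrades to low-order convergence.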

  9. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares so the method is completely automated. Exponential approximates generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
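    A stripped-down sketch of the exponential-approximation idea: match a target at as many points as unknowns (the paper instead fits many samples by least squares, with exponents on a geometric sequence), then integrate termwise. The two-term fit and sample points here are hypothetical choices for illustration:

```python
import math

def fit_two_exponentials(f):
    """Match f(x) ~ a0*exp(-x) + a1*exp(-2x) at x = 0 and x = ln 2, where
    exp(-x) = 1/2 and exp(-2x) = 1/4, so the 2x2 system has a closed form.
    (Exponents 1, 2 form a geometric sequence with ratio 2, echoing the
    paper's geometric exponent spacing.)"""
    s0, s1 = f(0.0), f(math.log(2.0))
    a0 = 4.0 * s1 - s0
    a1 = 2.0 * s0 - 4.0 * s1
    return a0, a1

# Toy target that is exactly representable, so the fit recovers (2, 3).
target = lambda x: 2.0 * math.exp(-x) + 3.0 * math.exp(-2.0 * x)
a0, a1 = fit_two_exponentials(target)

# Termwise integration is then elementary:
# the integral of a*exp(-b*x) over [0, inf) is a/b.
total = a0 / 1.0 + a1 / 2.0
```

    The payoff, as in the kernel evaluation of the abstract, is that once the algebraic part is written as a sum of exponentials, the remaining integrals become elementary and can be evaluated termwise to any desired accuracy/cost trade-off.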

  10. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers, such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest.
    Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state-of-practice in terms of computational cost and accuracy.
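    The "coefficients from discrete inner products, no matrix inversion" step can be illustrated with plain Chebyshev polynomials on [-1, 1]. This is a sketch of that one property, not the MCPI library:

```python
import math

def chebyshev_coeffs(f, n):
    """Chebyshev coefficients of f on [-1, 1] from discrete inner products at
    the Chebyshev-Gauss nodes: discrete orthogonality reduces each coefficient
    to a weighted sum, so no matrix inversion is required."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    coeffs = []
    for j in range(n):
        # T_j(x) = cos(j * acos(x)) on [-1, 1]
        cj = (2.0 / n) * sum(f(x) * math.cos(j * math.acos(x)) for x in nodes)
        coeffs.append(cj)
    coeffs[0] *= 0.5   # the constant mode carries half weight
    return coeffs

# x**2 = 0.5*T0(x) + 0.5*T2(x), so the coefficients are (0.5, 0, 0.5, 0, ...).
c = chebyshev_coeffs(lambda x: x * x, 8)
```

    In MCPI the same construction is applied at every Picard sweep to the sampled integrand, which is why the coefficient update parallelizes so cleanly: each coefficient is an independent weighted sum.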

  11. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better way to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
    In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it produces computationally more efficient results equivalent to those of the more costly radial distribution function method.

  12. A Guided Tour of Mathematical Methods

    NASA Astrophysics Data System (ADS)

    Snieder, Roel

    2009-04-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical co-ordinates; 5. The gradient; 6. The divergence of a vector field; 7. The curl of a vector field; 8. The theorem of Gauss; 9. The theorem of Stokes; 10. The Laplacian; 11. Conservation laws; 12. Scale analysis; 13. Linear algebra; 14. The Dirac delta function; 15. Fourier analysis; 16. Analytic functions; 17. Complex integration; 18. Green's functions: principles; 19. Green's functions: examples; 20. Normal modes; 21. Potential theory; 22. Cartesian tensors; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Variational calculus; 26. Epilogue, on power and knowledge; References.

  13. Modeling of Graphene Planar Grating in the THz Range by the Method of Singular Integral Equations

    NASA Astrophysics Data System (ADS)

    Kaliberda, Mstislav E.; Lytvynenko, Leonid M.; Pogarsky, Sergey A.

    2018-04-01

    Diffraction of the H-polarized electromagnetic wave by a planar graphene grating in the THz range is considered. The scattering and absorption characteristics are studied. The scattered field is represented in the spectral domain via an unknown spectral function. The mathematical model is based on the graphene surface impedance and the method of singular integral equations. The numerical solution is obtained by a Nystrom-type method of discrete singularities.
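    For contrast with the singular kernels treated in the paper, the plain Nystrom idea on a smooth second-kind integral equation fits in a few lines. This is illustrative only; the kernel, right-hand side, and Simpson rule below are chosen so that the exact solution u(x) = x is recovered:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def nystrom(kernel, f, nodes, weights):
    """Nystrom discretization of u(x) - int_0^1 K(x,t) u(t) dt = f(x):
    replace the integral by a quadrature rule and collocate at its nodes."""
    n = len(nodes)
    A = [[(1.0 if i == j else 0.0) - weights[j] * kernel(nodes[i], nodes[j])
          for j in range(n)] for i in range(n)]
    return solve(A, [f(x) for x in nodes])

# Simpson's rule on [0, 1]; with K(x, t) = x*t and f(x) = (2/3)*x the exact
# solution is u(x) = x, and Simpson integrates t*u(t) = t**2 exactly.
nodes, weights = [0.0, 0.5, 1.0], [1.0 / 6.0, 4.0 / 6.0, 1.0 / 6.0]
u = nystrom(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, nodes, weights)
```

    The "discrete singularities" variant used in the paper replaces the ordinary quadrature rule with one tailored to the Cauchy-type kernel; the collocation structure of the resulting linear system is the same.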

  14. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space-charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space-charge effect in accelerators.
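    The zero-padding step the abstract refers to can be made concrete: padding both sequences to length len(a) + len(b) - 1 makes the transform-domain product equal the linear convolution rather than the circular one. A naive O(n^2) DFT stands in for the FFT to keep this sketch dependency-free (the paper's contribution is precisely to avoid the explicit padding, which this sketch does not reproduce):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (an FFT drop-in for tiny n)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def linear_convolution(a, b):
    """Linear convolution via the transform domain: zero-pad both inputs to
    length len(a) + len(b) - 1 so the implied circular convolution coincides
    with the linear one."""
    n = len(a) + len(b) - 1
    fa = dft(list(a) + [0.0] * (n - len(a)))
    fb = dft(list(b) + [0.0] * (n - len(b)))
    prod = [p * q for p, q in zip(fa, fb)]
    return [round(v.real, 10) for v in dft(prod, inverse=True)]

conv = linear_convolution([1.0, 2.0, 3.0], [0.0, 1.0])   # [0.0, 1.0, 2.0, 3.0]
```

    In a space-charge solver the sequences are 3-D charge-density and Green's-function grids, so removing the explicit padding roughly halves the transform sizes in each dimension.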

  15. Closed-form integrator for the quaternion (Euler angle) kinematics equations

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor)

    2000-01-01

    The invention is embodied in a method of integrating kinematics equations for updating a set of vehicle attitude angles of a vehicle using 3-dimensional angular velocities of the vehicle, which includes computing an integrating factor matrix from quantities corresponding to the 3-dimensional angular velocities, computing a total integrated angular rate from the quantities corresponding to the 3-dimensional angular velocities, computing a state transition matrix as a sum of (a) a first complementary function of the total integrated angular rate and (b) the integrating factor matrix multiplied by a second complementary function of the total integrated angular rate, and updating the set of vehicle attitude angles using the state transition matrix. Preferably, the method further includes computing a quaternion vector from the quantities corresponding to the 3-dimensional angular velocities, in which case the updating of the set of vehicle attitude angles using the state transition matrix is carried out by (a) updating the quaternion vector by multiplying the quaternion vector by the state transition matrix to produce an updated quaternion vector and (b) computing an updated set of vehicle attitude angles from the updated quaternion vector. The first and second complementary functions are trigonometric, such as a sine and a cosine. The quantities corresponding to the 3-dimensional angular velocities include respective averages of the 3-dimensional angular velocities over plural time frames. The updating of the quaternion vector preserves the norm of the vector, whereby the updated set of vehicle attitude angles is virtually error-free.
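    A minimal scalar-first quaternion update in the spirit of the closed-form state transition matrix described above. This is the generic textbook form for a constant body rate over one frame, not the patented implementation (which averages rates over plural frames):

```python
import math

def quat_update(q, w, dt):
    """Closed-form quaternion update for a constant body rate w over dt:
    q_next = (cos(|w| dt / 2) I + sin(|w| dt / 2) / |w| * Omega(w)) q,
    where Omega(w) is the skew-symmetric rate matrix.  Because the state
    transition matrix is a rotation, the unit norm is preserved exactly.
    q = (q0, q1, q2, q3) scalar-first; w = (wx, wy, wz) in rad/s."""
    wx, wy, wz = w
    wn = math.sqrt(wx * wx + wy * wy + wz * wz)
    if wn == 0.0:
        return q
    half = 0.5 * wn * dt
    c, s = math.cos(half), math.sin(half) / wn
    q0, q1, q2, q3 = q
    return (c * q0 + s * (-wx * q1 - wy * q2 - wz * q3),
            c * q1 + s * ( wx * q0 + wz * q2 - wy * q3),
            c * q2 + s * ( wy * q0 - wz * q1 + wx * q3),
            c * q3 + s * ( wz * q0 + wy * q1 - wx * q2))

# Half a turn about z (w_z = pi rad/s for 1 s) from the identity attitude
# gives approximately (0, 0, 0, 1), with unit norm preserved.
q = quat_update((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, math.pi), 1.0)
```

    The cosine and sine factors here play the roles of the patent's first and second complementary functions, and the norm preservation is what keeps the recovered attitude angles drift-free.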

  16. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements, and standard quadrature rules therefore no longer perform well. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the positions and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules with fewer integration points and high accuracy.
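    The moment-fitting idea in miniature: keep the quadrature points fixed in the reference element and solve for weights that reproduce the moments of the cut physical domain. For three symmetric points the small linear system has a closed form (a sketch only; the paper couples moment fitting with an optimization of point positions and weights):

```python
def moment_fit_symmetric(p, moments):
    """Moment fitting on the three fixed points {-p, 0, p}: choose weights so
    the rule reproduces the prescribed monomial moments m_j of the *cut*
    physical domain.  General moment fitting solves a small linear system;
    three symmetric points admit this closed form."""
    m0, m1, m2 = moments
    w0 = 0.5 * (m2 / p ** 2 - m1 / p)
    w2 = 0.5 * (m2 / p ** 2 + m1 / p)
    return [w0, m0 - m2 / p ** 2, w2]

# Reference points live on [-1, 1]; the element is cut so that the physical
# domain is [0, 1], whose monomial moments are m_j = 1/(j+1).
p = 0.5
w = moment_fit_symmetric(p, [1.0, 0.5, 1.0 / 3.0])
pts = [-p, 0.0, p]

# The fitted rule integrates any quadratic over [0, 1] exactly, e.g. x**2:
val = sum(wi * x ** 2 for wi, x in zip(w, pts))
```

    Note that one of the fitted weights comes out negative, which is a known pitfall of plain moment fitting and one motivation for the optimization strategy the abstract mentions.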

  17. Boundary-Layer Receptivity and Integrated Transition Prediction

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Choudhari, Meelan

    2005-01-01

    The adjoint parabolized stability equations (PSE) formulation is used to calculate boundary-layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods, including linearized Navier-Stokes equations, for both Tollmien-Schlichting waves and crossflow instability in swept-wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite-size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and the nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high-fidelity prediction methods could simulate the transition process from disturbance generation through nonlinear breakdown in a holistic manner.

  18. Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2007-08-01

    Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of functional, technological and monitoring demands, which has recently been imposed on them, forced a common usage of large field-programmable gate array (FPGA), digital signal processing-enhanced matrices and fast optical transmission for their realization. This paper discusses modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network. A general functional structure of a single network node is presented. A suggested, novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of functional and diagnostic layers. A general method for pipeline system design was derived. This method is based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based, pipeline measurement systems were presented. The described systems were applied in ZEUS and CMS.

  19. An integrated native mass spectrometry and top-down proteomics method that connects sequence to structure and function of macromolecular complexes

    NASA Astrophysics Data System (ADS)

    Li, Huilin; Nguyen, Hong Hanh; Ogorzalek Loo, Rachel R.; Campuzano, Iain D. G.; Loo, Joseph A.

    2018-02-01

    Mass spectrometry (MS) has become a crucial technique for the analysis of protein complexes. Native MS has traditionally examined protein subunit arrangements, while proteomics MS has focused on sequence identification. These two techniques are usually performed separately without taking advantage of the synergies between them. Here we describe the development of an integrated native MS and top-down proteomics method using Fourier-transform ion cyclotron resonance (FTICR) to analyse macromolecular protein complexes in a single experiment. We address previous concerns about employing FTICR MS to measure large macromolecular complexes by demonstrating the detection of complexes up to 1.8 MDa, and we demonstrate the efficacy of this technique for directly acquiring sequence and higher-order structural information for several large complexes. We then summarize the unique functionalities of different activation/dissociation techniques. The platform expands the ability of MS to integrate proteomics and structural biology to provide insights into protein structure, function and regulation.

  20. Wave field synthesis of a virtual source located in proximity to a loudspeaker array.

    PubMed

    Lee, Jung-Min; Choi, Jung-Woo; Kim, Yang-Hann

    2013-09-01

    For the derivation of the 2.5-dimensional operator in wave field synthesis, a virtual source is assumed to be positioned far from the loudspeaker array. However, such a far-field approximation inevitably results in a reproduction error when the virtual source is placed adjacent to the array. In this paper, a method is proposed to generate a virtual source close to and behind a continuous line array of loudspeakers. A driving function is derived by reducing a surface integral (Rayleigh integral) to a line integral based on a near-field assumption. The solution is then combined with the far-field formula of wave field synthesis by introducing a weighting function that can adjust the near- and far-field contributions of each driving function. This enables production of a virtual source anywhere in relation to the array. Simulations show the proposed method can reduce the reproduction error to below -18 dB, regardless of the virtual source position.

  1. Efficient Approaches for Evaluating the Planar Microstrip Green's Function and its Applications to the Analysis of Microstrip Antennas.

    NASA Astrophysics Data System (ADS)

    Barkeshli, Sina

    A relatively simple and efficient closed form asymptotic representation of the microstrip dyadic surface Green's function is developed. The large parameter in this asymptotic development is proportional to the lateral separation between the source and field points along the planar microstrip configuration. Surprisingly, this asymptotic solution remains accurate even for very small (almost two tenths of a wavelength) lateral separation of the source and field points. The present asymptotic Green's function will thus allow a very efficient calculation of the currents excited on microstrip antenna patches/feed lines and monolithic millimeter and microwave integrated circuit (MIMIC) elements based on a moment method (MM) solution of an integral equation for these currents. The kernel of the latter integral equation is the present asymptotic form of the microstrip Green's function. It is noted that the conventional Sommerfeld integral representation of the microstrip surface Green's function is very poorly convergent when used in this MM formulation. In addition, an efficient exact steepest descent path integral form employing a radially propagating representation of the microstrip dyadic Green's function is also derived which exhibits a relatively faster convergence when compared to the conventional Sommerfeld integral representation. The same steepest descent form could also be obtained by deforming the integration contour of the conventional Sommerfeld representation; however, the radially propagating integral representation exhibits better convergence properties for laterally separated source and field points even before the steepest descent path of integration is used. Numerical results based on the efficient closed form asymptotic solution for the microstrip surface Green's function developed in this work are presented for the mutual coupling between a pair of dipoles on a single layer grounded dielectric slab.
The accuracy of the latter calculations is confirmed by comparison with results based on an exact integral representation for that Green's function.

  2. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
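
    The category-based importance sampling described in this abstract can be sketched as follows. This is a minimal illustration, not the SILHS implementation: the category fractions, samplers, and rate function are hypothetical placeholders, and only two categories are used instead of eight.

```python
import random

# Minimal sketch of category-based importance sampling (not the actual SILHS
# code). Each category j occupies a known fraction p_j of the grid box; we
# draw samples with prescribed fractions q_j instead and reweight by p_j/q_j,
# which keeps the grid-box mean unbiased while oversampling important regions.
def importance_sample_mean(rate_fn, categories, n, rng):
    """categories: list of (p_j, q_j, sampler_j), where sampler_j(rng) draws a
    subgrid state from category j. Returns the estimated grid-box mean rate."""
    total = 0.0
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for p, q, sampler in categories:  # pick a category with probability q_j
            acc += q
            if u <= acc:
                x = sampler(rng)
                total += (p / q) * rate_fn(x)  # importance weight p_j / q_j
                break
    return total / n

# Example: rain covers 10% of the box but receives half the sample points,
# so the rainy region is sampled five times more densely than its area share.
categories = [(0.1, 0.5, lambda r: 2.0),   # rainy portion, process input 2.0
              (0.9, 0.5, lambda r: 0.0)]   # dry portion, process input 0.0
estimate = importance_sample_mean(lambda x: x, categories, 10000,
                                  random.Random(0))
```

    With these placeholder numbers the true grid-box mean is 0.1 × 2.0 = 0.2, and the reweighted estimate converges to it while concentrating samples where the process rate is nonzero.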

  3. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiments. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  4. Improving the Accuracy of Quadrature Method Solutions of Fredholm Integral Equations That Arise from Nonlinear Two-Point Boundary Value Problems

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Pennline, James A.

    1999-01-01

    In this paper we are concerned with high-accuracy quadrature method solutions of nonlinear Fredholm integral equations of the form y(x) = r(x) + integral(0 to 1) g(x,t) F(t,y(t)) dt, 0 ≤ x ≤ 1, where the kernel function g(x,t) is continuous, but its partial derivatives have finite jump discontinuities across x = t. Such integral equations arise, e.g., when one applies Green's function techniques to nonlinear two-point boundary value problems of the form y''(x) = f(x,y(x)), 0 ≤ x ≤ 1, with y(0) = y_0 and y(1) = y_1, or other linear boundary conditions. A quadrature method that is especially suitable, and that has been employed for such equations, is one based on the trapezoidal rule, which has low accuracy. By analyzing the corresponding Euler-Maclaurin expansion, we derive suitable correction terms that we add to the trapezoidal rule, thus obtaining new numerical quadrature formulas of arbitrarily high accuracy that we also use in defining quadrature methods for the integral equations above. We prove an existence and uniqueness theorem for the quadrature method solutions, and show that their accuracy is the same as that of the underlying quadrature formula. The solution of the nonlinear systems resulting from the quadrature methods is achieved through successive approximations whose convergence is also proved. The results are demonstrated with numerical examples.
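
    As a concrete baseline, the uncorrected trapezoidal-rule quadrature method with successive approximations can be sketched as below. The Euler-Maclaurin correction terms that give the paper its high accuracy are omitted, and the example equation and its exact solution are chosen for illustration only.

```python
# Sketch of the baseline quadrature method for
#   y(x) = r(x) + integral(0 to 1) g(x,t) F(t,y(t)) dt:
# discretize the integral with the plain trapezoidal rule on a uniform mesh
# and iterate by successive approximations. The paper's Euler-Maclaurin
# correction terms, which raise the accuracy, are not included here.
def solve_fredholm(r, g, F, n=64, iters=200, tol=1e-12):
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    w = [h] * (n + 1)
    w[0] = w[-1] = h / 2          # trapezoidal weights
    y = [r(x) for x in xs]        # initial guess y_0(x) = r(x)
    for _ in range(iters):
        y_new = [r(x) + sum(w[j] * g(x, xs[j]) * F(xs[j], y[j])
                            for j in range(n + 1)) for x in xs]
        if max(abs(a - b) for a, b in zip(y_new, y)) < tol:
            break
        y = y_new
    return xs, y_new

# Illustrative test problem (assumed, not from the paper): g(x,t) = x*t and
# F(t,y) = y with r(x) = 2x/3, whose exact solution is y(x) = x.
xs, y = solve_fredholm(lambda x: 2 * x / 3, lambda x, t: x * t,
                       lambda t, yv: yv)
```

    Because the kernel here is a contraction, the successive approximations converge; the residual error (about 1e-4 for n = 64) is the trapezoidal rule's O(h²) discretization error, which is exactly what the paper's correction terms are designed to reduce.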

  6. Methods to Register Models and Input/Output Parameters for Integrated Modeling

    EPA Science Inventory

    Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s kno...

  7. Text Mining Improves Prediction of Protein Functional Sites

    PubMed Central

    Cohn, Judith D.; Ravikumar, Komandur E.

    2012-01-01

    We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions. PMID:22393388

  8. An Integrated Approach for Gear Health Prognostics

    NASA Technical Reports Server (NTRS)

    He, David; Bechhoefer, Eric; Dempsey, Paula; Ma, Jinghua

    2012-01-01

    In this paper, an integrated approach for gear health prognostics using particle filters is presented. The presented method effectively addresses the issues in applying particle filters to gear health prognostics by integrating several new components into a particle filter: (1) data mining based techniques to effectively define the degradation state transition and measurement functions using a one-dimensional health index obtained by whitening transform; (2) an unbiased l-step ahead RUL estimator updated with measurement errors. The feasibility of the presented prognostics method is validated using data from a spiral bevel gear case study.
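
    A generic particle filter step of the kind the abstract builds on can be sketched as follows. This is a hedged illustration: the state-transition function f and measurement function h below are placeholder linear models, whereas the paper defines them by data mining from a one-dimensional health index.

```python
import math
import random

# Generic sequential-importance-resampling particle filter step (a sketch,
# not the paper's implementation). Propagate, weight by the measurement
# likelihood, then resample so the particle set tracks likely states.
def particle_filter_step(particles, z, f, h, q_std, r_std, rng):
    # propagate each particle through the assumed state-transition model
    pred = [f(x) + rng.gauss(0.0, q_std) for x in particles]
    # weight particles by the Gaussian likelihood of the measurement z
    weights = [math.exp(-0.5 * ((z - h(x)) / r_std) ** 2) for x in pred]
    # multinomial resampling focuses particles on high-likelihood states
    return rng.choices(pred, weights=weights, k=len(pred))

# Toy usage: a health index that degrades by one unit per step, observed
# with noise; the filtered mean tracks the true degradation state.
rng = random.Random(1)
particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
for k in range(1, 21):
    particles = particle_filter_step(particles, float(k),
                                     lambda x: x + 1.0, lambda x: x,
                                     0.5, 0.5, rng)
mean_state = sum(particles) / len(particles)
```

    In a prognostics setting, the filtered state would then be propagated forward without measurements until it crosses a failure threshold, giving the remaining-useful-life estimate.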

  9. Finite-time output feedback stabilization of high-order uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Jiang, Meng-Meng; Xie, Xue-Jun; Zhang, Kemei

    2018-06-01

    This paper studies the problem of finite-time output feedback stabilization for a class of high-order nonlinear systems with unknown output function and control coefficients. Under the weaker assumption that the output function is only continuous, by combining the homogeneous domination method with the adding-a-power-integrator technique and introducing a new analysis method, the maximal open sector Ω of the output function is obtained. As long as the output function belongs to any closed sector contained in Ω, an output feedback controller can be designed to guarantee global finite-time stability of the closed-loop system.

  10. Skyshine line-beam response functions for 20- to 100-MeV photons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
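
    The fitting step mentioned above can be illustrated in miniature. The actual three-parameter formula of the integral line-beam method is not given in the abstract, so the functional form below is a hypothetical stand-in chosen only because it log-linearizes; the data are synthetic.

```python
import math

# Illustration of fitting a three-parameter empirical response function to
# Monte Carlo data. As a hypothetical stand-in for the paper's formula we use
#   R(d) = a * d**(-c) * exp(-b * d),
# so ln R = ln a - b*d - c*ln d is linear in (ln a, b, c) and can be fit by
# ordinary linear least squares via the 3x3 normal equations.
def fit_three_param(ds, Rs):
    A = [[1.0, -d, -math.log(d)] for d in ds]        # design matrix rows
    y = [math.log(R) for R in Rs]
    # normal equations (A^T A) p = A^T y
    M = [[sum(row[i] * row[j] for row in A) for j in range(3)]
         for i in range(3)]
    v = [sum(A[k][i] * y[k] for k in range(len(A))) for i in range(3)]
    for i in range(3):                 # Gaussian elimination (pivoting not
        for r2 in range(i + 1, 3):     # needed for this small system)
            fct = M[r2][i] / M[i][i]
            for c2 in range(3):
                M[r2][c2] -= fct * M[i][c2]
            v[r2] -= fct * v[i]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                # back substitution
        p[i] = (v[i] - sum(M[i][j] * p[j] for j in range(i + 1, 3))) / M[i][i]
    return math.exp(p[0]), p[1], p[2]  # (a, b, c)

# Synthetic check: data generated from known parameters are recovered.
ds = [100.0 * k for k in range(1, 11)]
Rs = [5.0 * d ** (-1.2) * math.exp(-0.004 * d) for d in ds]
a_fit, b_fit, c_fit = fit_three_param(ds, Rs)
```

    With noisy Monte Carlo data, as in the paper, the same least-squares machinery applies; only the residuals are no longer zero.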

  11. Equidistant map projections of a triaxial ellipsoid with the use of reduced coordinates

    NASA Astrophysics Data System (ADS)

    Pędzich, Paweł

    2017-12-01

    The paper presents a new method of constructing equidistant map projections of a triaxial ellipsoid as a function of reduced coordinates. Equations for the x and y coordinates are expressed with the use of the normal elliptic integral of the second kind and Jacobian elliptic functions. This solution makes it possible to use well-known methods, widely described in the literature, for evaluating such integrals and functions. The main advantage of this method is that the calculation of the x and y coordinates is practically based on a single algorithm, the one required to evaluate the elliptic integral of the second kind. Equations are provided for three types of map projections: cylindrical, azimuthal and pseudocylindrical. These types of projections are often used in planetary cartography for the presentation of entire extraterrestrial objects and of their polar regions. The paper also contains equations for calculating the length of a meridian and of a parallel of a triaxial ellipsoid in reduced coordinates. Moreover, graticules of three coordinate systems (planetographic, planetocentric and reduced) in the developed map projections are presented. The basic properties of the developed map projections are also described. The obtained map projections may be applied in planetary cartography to create maps of extraterrestrial objects.
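
    The central computation, a meridian arc length via the incomplete elliptic integral of the second kind, can be sketched for the two-axis (meridian ellipse) special case; the full triaxial formulas combine all three semi-axes, but the structure is the same. The Simpson-rule integrator below is a stand-in for a library elliptic-integral routine.

```python
import math

# Length of a meridian arc in reduced coordinates (two-axis sketch). For the
# meridian ellipse x = a*sin(u), z = c*cos(u) with a > c and u the reduced
# colatitude, ds = a*sqrt(1 - m*sin(u)**2) du with m = 1 - (c/a)**2, so the
# arc length from the pole is a times the incomplete elliptic integral of
# the second kind, E(u | m).
def ellipe_inc(phi, m, n=2000):
    """E(phi | m) by composite Simpson's rule (n even), standing in for a
    library elliptic-integral call."""
    h = phi / n
    f = lambda t: math.sqrt(1.0 - m * math.sin(t) ** 2)
    s = f(0.0) + f(phi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def meridian_arc(a, c, u):
    """Arc length along the meridian from the pole (u = 0) to reduced
    coordinate u."""
    m = 1.0 - (c / a) ** 2
    return a * ellipe_inc(u, m)
```

    For a sphere (a = c) the quarter-meridian reduces to a quarter circle of length πa/2, which gives a quick sanity check of the formula.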

  12. Functional Module Search in Protein Networks based on Semantic Similarity Improves the Analysis of Proteomics Data*

    PubMed Central

    Boyanova, Desislava; Nilla, Santosh; Klau, Gunnar W.; Dandekar, Thomas; Müller, Tobias; Dittrich, Marcus

    2014-01-01

    The continuously evolving field of proteomics produces increasing amounts of data while improving the quality of protein identifications. Although quantitative measurements are becoming more popular, many proteomic studies are still based on non-quantitative methods for protein identification. These studies result in potentially large sets of identified proteins, whose biological interpretation can be challenging. Systems biology develops innovative network-based methods, which allow an integrated analysis of these data. Here we present a novel approach, which combines prior knowledge of protein-protein interactions (PPI) with proteomics data using functional similarity measurements of interacting proteins. This integrated network analysis exactly identifies network modules with a maximal consistent functional similarity, reflecting biological processes of the investigated cells. We validated our approach on small (H9N2 virus-infected gastric cells) and large (blood constituents) proteomic data sets. Using this novel algorithm, we identified characteristic functional modules in virus-infected cells, comprising key signaling proteins (e.g. the stress-related kinase RAF1), and demonstrate that this method allows a module-based functional characterization of cell types. Analysis of a large proteome data set of blood constituents resulted in a clear separation of blood cells according to their developmental origin. A detailed investigation of the T-cell proteome further illustrates how the algorithm partitions large networks into functional subnetworks, each representing specific cellular functions. These results demonstrate that the integrated network approach not only allows a detailed analysis of proteome networks but also yields a functional decomposition of complex proteomic data sets, and thereby provides deeper insights into the underlying cellular processes of the investigated system. PMID:24807868

  13. General Path-Integral Successive-Collision Solution of the Bounded Dynamic Multi-Swarm Problem.

    DTIC Science & Technology

    1983-09-23

    coefficients (i.e., moments of the distribution functions), and/or (ii) finding the distribution functions themselves. The present work is concerned with the...collisions since their first appearance in the system. By definition, a swarm particle suffers a "generalized collision" either when it collides with a...studies and the present work have contributed towards making the path-integral successive-collision method a practicable tool of transport theory

  14. Computer integrated manufacturing/processing in the HPI. [Hydrocarbon Processing Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshimura, J.S.

    1993-05-01

    Hydrocarbon Processing and Systemhouse Inc. developed a comprehensive survey on the status of computer integrated manufacturing/processing (CIM/CIP), targeted specifically to the unique requirements of the hydrocarbon processing industry. These types of surveys and other benchmarking techniques can be invaluable in assisting companies to maximize business benefits from technology investments. The survey was organized into 5 major areas: CIM/CIP planning, management perspective, functional applications, integration, and technology infrastructure and trends. The CIM/CIP planning area dealt with the use and type of planning methods employed to plan, justify and implement information technology projects. The management perspective section addressed management priorities, expenditure levels and implementation barriers. The functional application area covered virtually all functional areas of the organization and focused on the specific solutions and benefits in each of the functional areas. The integration section addressed the needs and integration status of the organization's functional areas. Finally, the technology infrastructure and trends section dealt with specific technologies in use as well as trends over the next three years. In February 1993, summary areas from preliminary results were presented at the 2nd International Conference on Productivity and Quality in the Hydrocarbon Processing Industry.

  15. Using EIGER for Antenna Design and Analysis

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Khayat, Michael; Kennedy, Timothy F.; Fink, Patrick W.

    2007-01-01

    EIGER (Electromagnetic Interactions GenERalized) is a frequency-domain electromagnetics software package that is built upon a flexible framework, designed using object-oriented techniques. The analysis methods used include moment method solutions of integral equations, finite element solutions of partial differential equations, and combinations thereof. The framework design permits new analysis techniques (boundary conditions, Green's functions, etc.) to be added to the software suite with a sensible effort. The code has been designed to execute (in serial or parallel) on a wide variety of platforms, from Intel-based PCs to Unix-based workstations. Recently, new potential integration schemes that avoid singularity extraction techniques have been added for integral equation analysis. These new integration schemes are required for facilitating the use of higher-order elements and basis functions. Higher-order elements are better able to model geometrical curvature using fewer elements than when using linear elements. Higher-order basis functions are beneficial for simulating structures with rapidly varying fields or currents. Results presented here will demonstrate current and future capabilities of EIGER with respect to analysis of installed antenna system performance in support of NASA's mission of exploration. Examples include antenna coupling within an enclosed environment and antenna analysis on electrically large manned space vehicles.

  16. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction), are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
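
    For context, an Abel-type operator can be discretized by product integration, handling the inverse-square-root singularity exactly within each mesh cell. The sketch below uses the classical Abel equation g(y) = ∫₀ʸ f(x)/√(y−x) dx, not the paper's specific cone-beam formulation, and shows only the forward transform; the stable inversion methods compared in the paper would then recover f from noisy values of g.

```python
import math

# Product-integration discretization of the classical forward Abel transform
#   g(y) = integral(0 to y) f(x) / sqrt(y - x) dx.
# The singular kernel is integrated exactly over each cell and f is sampled
# at cell midpoints, so the quadrature stays accurate up to the singular
# endpoint x = y.
def abel_forward(f, y, n=200):
    h = y / n
    total = 0.0
    for j in range(n):
        x0, x1 = j * h, (j + 1) * h
        # exact integral of 1/sqrt(y - x) over the cell [x0, x1]
        w = 2.0 * (math.sqrt(y - x0) - math.sqrt(y - x1))
        total += f(0.5 * (x0 + x1)) * w
    return total
```

    For f ≡ 1 the transform is g(y) = 2√y, which this scheme reproduces exactly because the cell weights telescope; for f(x) = x it converges to (4/3)·y^(3/2).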

  17. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into the expressions for the stress components. Elimination of the dependent coefficients then leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components so derived identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the overall performance of the Integrated Force Method is better.

  18. The supersymmetric method in random matrix theory and applications to QCD

    NASA Astrophysics Data System (ADS)

    Verbaarschot, Jacobus

    2004-12-01

    The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second last lecture is the recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. 
However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.

  19. The statistical theory of the fracture of fragile bodies. Part 2: The integral equation method

    NASA Technical Reports Server (NTRS)

    Kittl, P.

    1984-01-01

    It is demonstrated how, with the aid of a bending test, the Weibull fracture risk function can be determined - without postulating its analytical form - by solving an integral equation. The respective solutions for rectangular and circular section beams are given. In the first case the function is expressed as an algorithm and in the second in the form of a series. Taking into account that the cumulative fracture probability appearing in the solution of the integral equation must be continuous and monotonically increasing, any case of fabrication or selection of samples can be treated.

  20. A computational method for the Helmholtz equation in unbounded domains based on the minimization of an integral functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraolo, Giulio, E-mail: g.ciraolo@math.unipa.it; Gargano, Francesco, E-mail: gargano@math.unipa.it; Sciacca, Vincenzo, E-mail: sciacca@math.unipa.it

    2013-08-01

    We study a new approach to the problem of transparent boundary conditions for the Helmholtz equation in unbounded domains. Our approach is based on the minimization of an integral functional arising from a volume integral formulation of the radiation condition. The index of refraction does not need to be constant at infinity and may have some angular dependency as well as perturbations. We prove analytical results on the convergence of the approximate solution. Numerical examples for different shapes of the artificial boundary and for non-constant indexes of refraction will be presented.

  1. Bidirectional reflection functions from surface bump maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabral, B.; Max, N.; Springmeyer, R.

    1987-04-29

    The Torrance-Sparrow model for calculating bidirectional reflection functions contains a geometrical attenuation factor to account for shadowing and occlusions in a hypothetical distribution of grooves on a rough surface. Using an efficient table-based method for determining the shadows and occlusions, we calculate the geometric attenuation factor for surfaces defined by a specific table of bump heights. Diffuse and glossy specular reflection of the environment can be handled in a unified manner by using an integral of the bidirectional reflection function times the environmental illumination, over the hemisphere of solid angle above a surface. We present a method of estimating the integral by expanding the bidirectional reflection coefficient in spherical harmonics, and show how the coefficients in this expansion can be determined efficiently by reorganizing our geometric attenuation calculation.

  2. Harmonic-phase path-integral approximation of thermal quantum correlation functions

    NASA Astrophysics Data System (ADS)

    Robertson, Christopher; Habershon, Scott

    2018-03-01

    We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.

  3. Translation and integration of numerical atomic orbitals in linear molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinäsmäki, Sami, E-mail: sami.heinasmaki@gmail.com

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.

  4. Single Cell Spectroscopy: Noninvasive Measures of Small-Scale Structure and Function

    PubMed Central

    Mousoulis, Charilaos; Xu, Xin; Reiter, David A.; Neu, Corey P.

    2013-01-01

    The advancement of spectroscopy methods attained through increases in sensitivity, and often with the coupling of complementary techniques, has enabled real-time structure and function measurements of single cells. The purpose of this review is to illustrate, in light of advances, the strengths and the weaknesses of these methods. Included also is an assessment of the impact of the experimental setup and conditions of each method on cellular function and integrity. A particular emphasis is placed on noninvasive and nondestructive techniques for achieving single cell detection, including nuclear magnetic resonance, in addition to physical, optical, and vibrational methods. PMID:23886910

  5. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model of a deteriorating item has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production and time-varying holding cost, allowing shortages in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items with a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromised solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained by using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function by using a credibility measure of a fuzzy event and taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and the results are compared.

  6. Integration of design and inspection

    NASA Astrophysics Data System (ADS)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  7. Development of and Clinical Experience with a Simple Device for Performing Intraoperative Fluorescein Fluorescence Cerebral Angiography: Technical Notes.

    PubMed

    Ichikawa, Tsuyoshi; Suzuki, Kyouichi; Watanabe, Yoichi; Sato, Taku; Sakuma, Jun; Saito, Kiyoshi

    2016-01-01

    To perform intraoperative fluorescence angiography (FAG) under a microscope without an integrated FAG function, at reasonable cost and with sufficient quality for evaluation, we made a small and easy-to-use device for fluorescein FAG (the FAG filter). We investigated the practical use of this FAG filter during aneurysm surgery, revascularization surgery, and brain tumor surgery. The FAG filter consists of two types of filters: an excitatory filter and a barrier filter. The excitatory filter excludes all wavelengths except blue light, and the barrier filter passes longer wavelengths while blocking blue light. When this FAG filter is added to a microscope without an integrated FAG function, the light from the microscope illuminating the surgical field becomes blue, which is then blocked by the barrier filter. We placed the FAG filter on the objective lens of the operating microscope, and fluorescein sodium was injected intravenously or intra-arterially. Fluorescence (green light) from vessels in the surgical field and from the dyed tumor was clearly observed through the microscope and recorded by a memory device. This method was easy and could be performed in a short time (about 10 seconds). Blood flow in small vessels deep in the surgical field could be observed, and blood flow stagnation could be evaluated. However, images from this method were inferior to those obtained by currently commercially available microscopes with an integrated FAG function. In brain tumor surgery, a stained tumor on the brain surface could be observed using this method. FAG could easily be performed with a microscope without an integrated FAG function using only this FAG filter.

  8. Development of and Clinical Experience with a Simple Device for Performing Intraoperative Fluorescein Fluorescence Cerebral Angiography: Technical Notes

    PubMed Central

    ICHIKAWA, Tsuyoshi; SUZUKI, Kyouichi; WATANABE, Yoichi; SATO, Taku; SAKUMA, Jun; SAITO, Kiyoshi

    2016-01-01

    To perform intraoperative fluorescence angiography (FAG) under a microscope without an integrated FAG function, at reasonable cost and with sufficient quality for evaluation, we made a small and easy-to-use device for fluorescein FAG (the FAG filter). We investigated the practical use of this FAG filter during aneurysm surgery, revascularization surgery, and brain tumor surgery. The FAG filter consists of two types of filters: an excitatory filter and a barrier filter. The excitatory filter excludes all wavelengths except blue light, and the barrier filter passes longer wavelengths while blocking blue light. When this FAG filter is added to a microscope without an integrated FAG function, the light from the microscope illuminating the surgical field becomes blue, which is then blocked by the barrier filter. We placed the FAG filter on the objective lens of the operating microscope, and fluorescein sodium was injected intravenously or intra-arterially. Fluorescence (green light) from vessels in the surgical field and from the dyed tumor was clearly observed through the microscope and recorded by a memory device. This method was easy and could be performed in a short time (about 10 seconds). Blood flow in small vessels deep in the surgical field could be observed, and blood flow stagnation could be evaluated. However, images from this method were inferior to those obtained by currently commercially available microscopes with an integrated FAG function. In brain tumor surgery, a stained tumor on the brain surface could be observed using this method. FAG could easily be performed with a microscope without an integrated FAG function using only this FAG filter. PMID:26597335

  9. Analytic Method for Computing Instrument Pointing Jitter

    NASA Technical Reports Server (NTRS)

    Bayard, David

    2003-01-01

    A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced, based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity that impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
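
    The abstract gives no equations, but the general state-space route to an rms value can be sketched: for a linear model driven by white noise, the steady-state covariance solves a continuous Lyapunov equation, and the rms of any output follows with no frequency-domain integration. The matrices below are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def rms_pointing_error(A, B, C, Q):
    """Steady-state rms of y = C x for dx = A x dt + B dw, where dw is
    white noise with intensity Q.  The steady-state state covariance P
    solves the Lyapunov equation  A P + P A^T + B Q B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ Q @ B.T)
    return float(np.sqrt(C @ P @ C.T))

# Toy second-order pointing axis: a damped oscillator driven by noise.
A = np.array([[0.0, 1.0], [-4.0, -0.8]])   # stiffness 4, damping 0.8
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # observe angular position only
Q = np.array([[0.1]])                      # noise intensity
print(rms_pointing_error(A, B, C, Q))
```

For a scalar system dx = -x dt + dw with Q = 2, the Lyapunov equation gives P = 1, so the rms is exactly 1; this makes the closed-form character of the state-space route easy to check.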

  10. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

    Smartphones are widely used at present. Most smartphones have cameras and a variety of sensors, such as a gyroscope, accelerometer, and magnetometer. Indoor navigation based on a smartphone is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera, and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity update, and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer, and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the camera class of Android to record images and timestamps from the smartphone camera. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data-receiving, and saving functions in C, and calls the sensor functions from Java through the JNI interface. Data acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools), and NDK (Native Development Kit). The software can record camera data, sensor data, and time simultaneously. Data acquisition experiments were performed with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but occasionally loses sensor data, while the second method offers much better real-time performance and far less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method. The sensor data of the gyroscope, accelerometer, and magnetometer were recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors were analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data acquisition method can be applied to outdoor navigation.
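
    The Android specifics (SensorManager, JNI) aside, the heart of synchronous acquisition is pairing each camera frame with the sensor sample nearest to it in time. A minimal, platform-independent sketch of that timestamp alignment (the timestamps below are invented for illustration):

```python
import bisect

def align_frames_to_samples(frame_times, sample_times):
    """For each camera frame timestamp, return the index of the sensor
    sample whose timestamp is closest (nearest-neighbor matching).
    Both lists must be sorted in ascending order."""
    matches = []
    for t in frame_times:
        i = bisect.bisect_left(sample_times, t)
        # Compare the two samples straddling t and keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sample_times)]
        matches.append(min(candidates, key=lambda j: abs(sample_times[j] - t)))
    return matches

frames = [0.030, 0.063, 0.097]            # ~30 fps camera timestamps (s)
samples = [i * 0.005 for i in range(25)]  # 200 Hz IMU timestamps (s)
print(align_frames_to_samples(frames, samples))
```

A real implementation would use the monotonic nanosecond clock shared by Android's camera and sensor events rather than these made-up second values.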

  11. Combination of complex momentum representation and Green's function methods in relativistic mean-field theory

    NASA Astrophysics Data System (ADS)

    Shi, Min; Niu, Zhong-Ming; Liang, Haozhao

    2018-06-01

    We have combined the complex momentum representation (CMR) method with the Green's function (GF) method in the relativistic mean-field (RMF) framework to establish the RMF-CMR-GF approach. This new approach is applied to study the halo structure of 74Ca. The continuum level densities of the resonant states of concern are calculated accurately without introducing any unphysical parameters, and they are independent of the choice of integration contour. The single-particle wave functions and densities important for the halo phenomenon in 74Ca are discussed in detail.

  12. Robotically facilitated virtual rehabilitation of arm transport integrated with finger movement in persons with hemiparesis

    PubMed Central

    2011-01-01

    Background Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Methods Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. Results The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Conclusions Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training. PMID:21575185

  13. A Guided Tour of Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Snieder, Roel; van Wijk, Kasper

    2015-05-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical coordinates; 5. Gradient; 6. Divergence of a vector field; 7. Curl of a vector field; 8. Theorem of Gauss; 9. Theorem of Stokes; 10. The Laplacian; 11. Scale analysis; 12. Linear algebra; 13. Dirac delta function; 14. Fourier analysis; 15. Analytic functions; 16. Complex integration; 17. Green's functions: principles; 18. Green's functions: examples; 19. Normal modes; 20. Potential-field theory; 21. Probability and statistics; 22. Inverse problems; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Conservation laws; 26. Cartesian tensors; 27. Variational calculus; 28. Epilogue on power and knowledge.

  14. An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2003-01-01

    Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data is provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with imbedded steel disks and with an imbedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.
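
    The paper's dissipation formula is not reproduced in the abstract, but the ingredients it names, a Prony-series relaxation law and a viscous stress separable from the elastic one, can be sketched in one dimension using the standard recursive exponential update for each Maxwell branch; the dissipation rate of each branch is then its dashpot power. The material parameters below are illustrative only:

```python
import math

def maxwell_dissipation(strain, dt, G, tau):
    """One-dimensional generalized Maxwell (Prony-series) model.
    Each branch i has modulus G[i] and relaxation time tau[i].
    Uses the standard recursive update (strain rate assumed constant
    over each step) and accumulates the dashpot power
    sigma_i^2 / (G_i * tau_i) as the dissipated energy."""
    sig = [0.0] * len(G)          # per-branch viscous stresses
    dissipated = 0.0
    history = []                  # total viscous stress per step
    for n in range(1, len(strain)):
        d_eps = strain[n] - strain[n - 1]
        for i, (Gi, ti) in enumerate(zip(G, tau)):
            a = math.exp(-dt / ti)
            sig[i] = a * sig[i] + Gi * (ti / dt) * (1.0 - a) * d_eps
            dissipated += dt * sig[i] ** 2 / (Gi * ti)   # dashpot power
        history.append(sum(sig))
    return history, dissipated

# Step strain held constant: the viscous stress relaxes and all of the
# stored branch energy is eventually dissipated.
h, d = maxwell_dissipation([0.0] + [0.01] * 100, dt=0.01,
                           G=[100.0], tau=[0.1])
print(h[0], h[-1], d)
```

The point of the recursive update is exactly what the abstract highlights: no double convolution integral is ever formed, only one scalar per branch is carried forward in time.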

  15. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
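
    The ITAE part of the objective function above can be illustrated on an assumed first-order plant (not the servo model of the paper) with a simple Euler simulation; the sensitivity-integral terms, the nonlinearities, and the anti-windup scheme are omitted from this sketch:

```python
def itae_pi(kp, ki, T=1.0, K=1.0, dt=1e-3, t_end=10.0):
    """Integral of Time multiplied by Absolute Error (ITAE) for a PI
    controller driving an assumed first-order plant K / (T s + 1)
    toward a unit step reference, via forward-Euler simulation."""
    y, xi = 0.0, 0.0              # plant output, controller integrator
    itae = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y               # unit step reference
        u = kp * e + ki * xi      # PI control law
        xi += e * dt              # integrator state update
        y += (K * u - y) / T * dt # plant update
        itae += t * abs(e) * dt   # accumulate t * |e(t)|
    return itae

print(itae_pi(kp=2.0, ki=1.0))
```

Sluggish gains leave a large error for a long time, which the time weighting punishes heavily; an optimizer such as the paper's adaptive GSA would search the gain space for the minimum of this (augmented) objective.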

  16. Using historical perspective in designing discovery learning on Integral for undergraduate students

    NASA Astrophysics Data System (ADS)

    Abadi; Fiangga, S.

    2018-01-01

    In the Integral Calculus course, the ability to calculate the integral of a given function, alongside the ability to apply integration, tends to be the main focus of teaching, while students are often unable to understand the conceptual idea of what integration actually is. One promising perspective that can be used to invite students to discover the idea of the integral is History and Pedagogy of Mathematics (HPM). The methods of exhaustion and indivisibles appear in discussions of the early history of area measurement. This paper discusses learning activities designed around the methods of exhaustion and indivisibles to provide undergraduate students with discovery materials for the integral, using design research. The designed learning activities were carried out in a design experiment consisting of three phases: preliminary, design experiment, and teaching experiment. The teaching experiment phase was conducted in two cycles for refinement purposes. The findings suggest that implementing the methods of exhaustion and indivisibles enables students to reinvent the idea of the integral by using the concept of the derivative.
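
    The method of exhaustion mentioned above has a classic worked instance: Archimedes' quadrature of the parabola, where each stage of inscribed triangles adds 1/4 of the previous stage's area, so the total approaches 4/3 of the first triangle, in agreement with the modern integral of 1 - x^2 over [-1, 1]:

```python
def exhaustion_parabola(n_stages):
    """Archimedes' method of exhaustion for a parabolic segment.
    Stage 0 is the inscribed triangle (area normalized to 1); each
    later stage adds triangles totalling 1/4 of the previous stage,
    giving the geometric series 1 + 1/4 + 1/16 + ... -> 4/3."""
    total, stage = 0.0, 1.0
    for _ in range(n_stages):
        total += stage
        stage /= 4.0
    return total

# For the segment of y = x^2 cut by the chord y = 1, the inscribed
# triangle with vertices (-1,1), (0,0), (1,1) has area 1, so the
# exhaustion limit 4/3 matches the integral of (1 - x^2) on [-1, 1].
print(exhaustion_parabola(20))
```

This is the kind of computation the designed discovery activities could let students carry out by hand for the first few stages before passing to the limit.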

  17. Application of the critical pathway and integrated case teaching method to nursing orientation.

    PubMed

    Goodman, D

    1997-01-01

    Nursing staff development programs must be responsive to current changes in healthcare. New nursing staff must be prepared to manage continuous change and to function competently in clinical practice. The orientation pathway, based on a case management model, is used as a structure for the orientation phase of staff development. The integrated case is incorporated as a teaching strategy in orientation. The integrated case method is based on discussion and analysis of patient situations with emphasis on role modeling and integration of theory and skill. The orientation pathway and integrated case teaching method provide a useful framework for orientation of new staff. Educators, preceptors and orientees find the structure provided by the orientation pathway very useful. Orientation that is developed, implemented and evaluated based on a case management model with the use of an orientation pathway and incorporation of an integrated case teaching method provides a standardized structure for orientation of new staff. This approach is designed for the adult learner, promotes conceptual reasoning, and encourages the social and contextual basis for continued learning.

  18. Energize It! An Ecologically Integrated Approach to the Study of the Digestive System and Energy Acquisition.

    ERIC Educational Resources Information Center

    Derting, Terry L.

    1992-01-01

    Develops a research-oriented method of studying the digestive system that integrates species' ecology with the form and function of this system. Uses problem-posing, problem-probing, and peer persuasion. Presents information for mammalian systems. (27 references) (MKR)

  19. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.
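
    The paper's orbital-free functional is its own careful design, but the simplest orbital-free ingredient, the Thomas-Fermi kinetic energy T_TF = C_F ∫ n^{5/3} d³r with C_F = (3/10)(3π²)^{2/3} in atomic units, illustrates the defining idea: an energy computed from the density alone, with no Kohn-Sham orbitals:

```python
import numpy as np

# Thomas-Fermi constant (3/10)(3*pi^2)^(2/3) in atomic units.
C_F = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)

def thomas_fermi_energy(density, dv):
    """Orbital-free kinetic energy T_TF = C_F * sum(n^(5/3)) * dV,
    evaluated on a grid of density values with cell volume dv."""
    return C_F * np.sum(density ** (5.0 / 3.0)) * dv

# Uniform electron gas in a unit box: the grid sum reproduces the
# closed form C_F * n^(5/3) * V exactly.
n = np.full((16, 16, 16), 2.0)
print(thomas_fermi_energy(n, dv=1.0 / n.size))
```

Because such functionals never diagonalize a Hamiltonian, their cost is independent of temperature, which is the scaling advantage the abstract emphasizes.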

  20. An integrative approach for measuring semantic similarities using gene ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2014-01-01

    Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measurements have limited power because each measure considers only a subset of the information in GO. An appropriate integration of the existing measures, taking more of the information in GO into account, is therefore needed. We propose a novel integrative measure called InteGO2 that automatically selects appropriate seed measures and then integrates them using a metaheuristic search method. The experimental results show that InteGO2 significantly improves the performance of gene similarity in human, Arabidopsis and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.

  1. Integrated Detection and Prediction of Influenza Activity for Real-Time Surveillance: Algorithm Design

    PubMed Central

    2017-01-01

    Background Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic “big data” from diagnostic and prediagnostic sources in health care and public health settings permits advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. Objective The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. Methods An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. Results The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). 
For detection modeling, exponential regression was used, based on the assumption that the beginning of a winter influenza season shows exponential growth in the number of infected individuals. For prediction modeling, linear regression was applied on 7-day periods at a time in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Conclusions Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. PMID:28619700
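
    The detection module's exponential-regression idea can be sketched as a log-linear least-squares fit to a window of daily case counts, alerting when the fitted growth rate exceeds a threshold. The threshold and the synthetic counts below are invented for illustration, not taken from the study:

```python
import math

def growth_rate(counts):
    """Least-squares slope of log(counts) against day index: under the
    exponential-growth assumption counts ~ c * exp(r * day), the slope
    of the log-linear fit estimates the daily growth rate r."""
    days = range(len(counts))
    logs = [math.log(c) for c in counts]
    n = len(counts)
    mx = sum(days) / n
    my = sum(logs) / n
    sxx = sum((d - mx) ** 2 for d in days)
    sxy = sum((d - mx) * (y - my) for d, y in zip(days, logs))
    return sxy / sxx

def season_started(counts, r_alert=0.1):
    """Alert when the fitted growth rate exceeds r_alert (hypothetical)."""
    return growth_rate(counts) > r_alert

# Synthetic early-season counts growing roughly 20% per day.
week = [round(5 * math.exp(0.2 * d)) for d in range(7)]
print(growth_rate(week), season_started(week))
```

A production detector would also need the robustness machinery the paper evaluates, such as baselines for off-season noise, which this sketch deliberately leaves out.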

  2. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-03-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure turns out to be gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  3. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-07-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure turns out to be gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  4. Challenges in microbial ecology: building predictive understanding of community function and dynamics

    PubMed Central

    Widder, Stefanie; Allen, Rosalind J; Pfeiffer, Thomas; Curtis, Thomas P; Wiuf, Carsten; Sloan, William T; Cordero, Otto X; Brown, Sam P; Momeni, Babak; Shou, Wenying; Kettle, Helen; Flint, Harry J; Haas, Andreas F; Laroche, Béatrice; Kreft, Jan-Ulrich; Rainey, Paul B; Freilich, Shiri; Schuster, Stefan; Milferstedt, Kim; van der Meer, Jan R; Groβkopf, Tobias; Huisman, Jef; Free, Andrew; Picioreanu, Cristian; Quince, Christopher; Klapper, Isaac; Labarthe, Simon; Smets, Barth F; Wang, Harris; Soyer, Orkun S

    2016-01-01

    The importance of microbial communities (MCs) cannot be overstated. MCs underpin the biogeochemical cycles of the earth's soil, oceans and the atmosphere, and perform ecosystem functions that impact plants, animals and humans. Yet our ability to predict and manage the function of these highly complex, dynamically changing communities is limited. Building predictive models that link MC composition to function is a key emerging challenge in microbial ecology. Here, we argue that addressing this challenge requires close coordination of experimental data collection and method development with mathematical model building. We discuss specific examples where model–experiment integration has already resulted in important insights into MC function and structure. We also highlight key research questions that still demand better integration of experiments and models. We argue that such integration is needed to achieve significant progress in our understanding of MC dynamics and function, and we make specific practical suggestions as to how this could be achieved. PMID:27022995

  5. Noninvasive assessment of arterial function in children: clinical applications

    PubMed Central

    Aggoun, Y; Beghetti, M

    2002-01-01

    Noninvasive methods to assess arterial function are widely used in adults. The development and progression of arterial vascular disease is a multifactorial process that can start early in life, thus even in a pediatric population. Risk factors for cardiovascular disease mediate their effects by altering the structure, properties, and function of the wall and endothelial components of the arterial blood vessels. The ability to detect and monitor subclinical damage, representing the cumulative and integrated influence of risk factors in impairing arterial wall integrity, holds potential to further refine cardiovascular risk stratification and enable early intervention to prevent or attenuate disease progression. Measurements that provide more direct information on changes in arterial wall integrity clearly hold predictive and therapeutic potential. The aim of this review is to describe the noninvasive procedures used in children to investigate the mechanical properties of a large elastic artery, the common carotid, and the endothelial function of the brachial artery. The accuracy of noninvasively recording the blood pressure wave contour along the arterial tree has been improved by the technique of applanation tonometry. The results obtained with these methods in previous studies are described. PMID:22368620

  6. Silicon photonic integrated circuit swept-source optical coherence tomography receiver with dual polarization, dual balanced, in-phase and quadrature detection.

    PubMed

    Wang, Zhao; Lee, Hsiang-Chieh; Vermeulen, Diedrik; Chen, Long; Nielsen, Torben; Park, Seo Yeon; Ghaemi, Allan; Swanson, Eric; Doerr, Chris; Fujimoto, James

    2015-07-01

    Optical coherence tomography (OCT) is a widely used three-dimensional (3D) optical imaging method with many biomedical and non-medical applications. Miniaturization, cost reduction, and increased functionality of OCT systems will be critical for future emerging clinical applications. We present a silicon photonic integrated circuit swept-source OCT (SS-OCT) coherent receiver with dual polarization, dual balanced, in-phase and quadrature (IQ) detection. We demonstrate multiple functional capabilities of IQ polarization resolved detection including: complex-conjugate suppressed full-range OCT, polarization diversity detection, and polarization-sensitive OCT. To our knowledge, this is the first demonstration of a silicon photonic integrated receiver for OCT. The integrated coherent receiver provides a miniaturized, low-cost solution for SS-OCT, and is also a key step towards a fully integrated high speed SS-OCT system with good performance and multi-functional capabilities. With further performance improvement and cost reduction, photonic integrated technology promises to greatly increase penetration of OCT systems in existing applications and enable new applications.

  7. Silicon photonic integrated circuit swept-source optical coherence tomography receiver with dual polarization, dual balanced, in-phase and quadrature detection

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Vermeulen, Diedrik; Chen, Long; Nielsen, Torben; Park, Seo Yeon; Ghaemi, Allan; Swanson, Eric; Doerr, Chris; Fujimoto, James

    2015-01-01

    Optical coherence tomography (OCT) is a widely used three-dimensional (3D) optical imaging method with many biomedical and non-medical applications. Miniaturization, cost reduction, and increased functionality of OCT systems will be critical for future emerging clinical applications. We present a silicon photonic integrated circuit swept-source OCT (SS-OCT) coherent receiver with dual polarization, dual balanced, in-phase and quadrature (IQ) detection. We demonstrate multiple functional capabilities of IQ polarization resolved detection including: complex-conjugate suppressed full-range OCT, polarization diversity detection, and polarization-sensitive OCT. To our knowledge, this is the first demonstration of a silicon photonic integrated receiver for OCT. The integrated coherent receiver provides a miniaturized, low-cost solution for SS-OCT, and is also a key step towards a fully integrated high speed SS-OCT system with good performance and multi-functional capabilities. With further performance improvement and cost reduction, photonic integrated technology promises to greatly increase penetration of OCT systems in existing applications and enable new applications. PMID:26203382

  8. Lefschetz thimbles in fermionic effective models with repulsive vector-field

    NASA Astrophysics Data System (ADS)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2018-06-01

    We discuss two problems in complexified auxiliary fields in fermionic effective models: the auxiliary sign problem associated with the repulsive vector field, and the choice of the cut for the scalar field arising from the logarithmic function. In fermionic effective models with attractive scalar and repulsive vector-type interactions, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by a Wick rotation of the vector field, an oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, in which the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which keeps gradient-flow trajectories on the same Riemann sheet during the flow evolution by performing the momentum integration in the complex domain.
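
    The defining property of thimble trajectories can be illustrated with a toy gradient flow for an arbitrary quadratic "action" (a stand-in for illustration, not the fermionic model of the paper): along the upward flow dz/dtau = conj(dS/dz), Re S grows monotonically while Im S stays constant, which is what removes the oscillating phase from the Boltzmann weight.

```python
# Toy flow for S(z) = z**2/2 + i*z (an arbitrary stand-in action).
def S(z):
    return z * z / 2 + 1j * z

def dS(z):
    return z + 1j

z = 1.0 + 0.5j            # arbitrary starting point in the complex plane
im0, re0 = S(z).imag, S(z).real
dt = 1e-3
for _ in range(2000):     # forward-Euler integration of dz/dtau = conj(dS/dz)
    z += dt * dS(z).conjugate()

print(abs(S(z).imag - im0) < 1e-2)  # True: Im S is (numerically) conserved
print(S(z).real > re0)              # True: Re S increases along the flow
```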

  9. Neural Signatures of Autism Spectrum Disorders: Insights into Brain Network Dynamics

    PubMed Central

    Hernandez, Leanna M; Rudie, Jeffrey D; Green, Shulamite A; Bookheimer, Susan; Dapretto, Mirella

    2015-01-01

    Neuroimaging investigations of autism spectrum disorders (ASDs) have advanced our understanding of atypical brain function and structure, and have recently converged on a model of altered network-level connectivity. Traditional task-based functional magnetic resonance imaging (MRI) and volume-based structural MRI studies have identified widespread atypicalities in brain regions involved in social behavior and other core ASD-related behavioral deficits. More recent advances in MR-neuroimaging methods allow for quantification of brain connectivity using diffusion tensor imaging, functional connectivity, and graph theoretic methods. These newer techniques have moved the field toward a systems-level understanding of ASD etiology, integrating functional and structural measures across distal brain regions. Neuroimaging findings in ASD as a whole have been mixed and at times contradictory, likely due to the vast genetic and phenotypic heterogeneity characteristic of the disorder. Future longitudinal studies of brain development will be crucial to yield insights into mechanisms of disease etiology in ASD sub-populations. Advances in neuroimaging methods and large-scale collaborations will also allow for an integrated approach linking neuroimaging, genetics, and phenotypic data. PMID:25011468

  10. Quadrature, Interpolation and Observability

    NASA Technical Reports Server (NTRS)

    Hodges, Lucille McDaniel

    1997-01-01

    Methods of interpolation and quadrature have been used for over 300 years. Improvements in the techniques have been made by many, most notably by Gauss, whose technique applied to polynomials is referred to as Gaussian Quadrature. Stieltjes extended Gauss's method to certain non-polynomial functions as early as 1884. Conditions that guarantee the existence of quadrature formulas for certain collections of functions were studied by Tchebycheff, and his work was extended by others. Today, a class of functions which satisfies these conditions is called a Tchebycheff System. This thesis contains the definition of a Tchebycheff System, along with the theorems, proofs, and definitions necessary to guarantee the existence of quadrature formulas for such systems. Solutions of discretely observable linear control systems are of particular interest, and observability with respect to a given output function is defined. The output function is written as a linear combination of a collection of orthonormal functions. Orthonormal functions are defined, and their properties are discussed. The technique for evaluating the coefficients in the output function involves evaluating the definite integral of functions which can be shown to form a Tchebycheff system. Therefore, quadrature formulas for these integrals exist, and in many cases are known. The technique given is useful in cases where the method of direct calculation is unstable. The condition number of a matrix is defined and shown to be an indication of the degree to which perturbations in data affect the accuracy of the solution. In special cases, the number of data points required for direct calculation is the same as the number required by the method presented in this thesis, but the method is shown to require more data points in other cases. A lower bound for the number of data points required is given.
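
    As a concrete instance of the quadrature formulas discussed, here is a minimal sketch of the classical 3-point Gauss-Legendre rule, which is exact for polynomials up to degree 5 (polynomials being the archetypal Tchebycheff system):

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]: nodes and weights are chosen so
# the rule integrates every polynomial of degree <= 5 exactly.
nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
weights = [5 / 9, 8 / 9, 5 / 9]

def gauss3(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

exact = 2 / 5                       # integral of x**4 over [-1, 1]
approx = gauss3(lambda x: x ** 4)
print(abs(approx - exact) < 1e-12)  # True: degree-4 integrand is exact
```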

  11. Integrated Detection and Prediction of Influenza Activity for Real-Time Surveillance: Algorithm Design.

    PubMed

    Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas

    2017-06-15

    Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits the advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application to authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied to data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as a syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). For detection modeling, exponential regression was used, based on the assumption that the beginning of a winter influenza season shows exponential growth in infected individuals. For prediction modeling, linear regression was applied to 7-day periods at a time in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Our detection and prediction method is one of the first integrated methods specifically designed for local application to influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. ©Armin Spreco, Olle Eriksson, Örjan Dahlström, Benjamin John Cowling, Toomas Timpka. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.06.2017.
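
    The detection idea (exponential growth at season onset) can be sketched as a log-linear regression whose slope triggers an alert. The counts and the 0.1 threshold below are invented for illustration; they are not the study's calibrated values.

```python
import math

def exp_growth_rate(counts):
    """Least-squares slope of log(counts) against time, i.e. the fitted
    growth rate b in the exponential model y ~ a * exp(b * t)."""
    t = list(range(len(counts)))
    y = [math.log(c) for c in counts]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    return sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
           sum((ti - tbar) ** 2 for ti in t)

flat = [9, 10, 11, 10, 9, 10, 11]       # pre-season noise
rising = [10, 13, 18, 24, 33, 44, 60]   # roughly 30-35%/day growth

THRESHOLD = 0.1  # hypothetical alert threshold on the daily growth rate
print(exp_growth_rate(flat) < THRESHOLD)    # True: no alert
print(exp_growth_rate(rising) > THRESHOLD)  # True: alert
```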

  12. Development of a modularized two-step (M2S) chromosome integration technique for integration of multiple transcription units in Saccharomyces cerevisiae.

    PubMed

    Li, Siwei; Ding, Wentao; Zhang, Xueli; Jiang, Huifeng; Bi, Changhao

    2016-01-01

    Saccharomyces cerevisiae has already been used for heterologous production of fuel chemicals and valuable natural products. The establishment of complicated heterologous biosynthetic pathways in S. cerevisiae has become a research focus of synthetic biology and metabolic engineering. Thus, simple and efficient techniques for genomic integration of large numbers of transcription units are urgently needed. An efficient DNA assembly and chromosomal integration method was created by combining homologous recombination (HR) in S. cerevisiae with the Golden Gate DNA assembly method, designated the modularized two-step (M2S) technique. Two major assembly steps are performed consecutively to integrate multiple transcription units simultaneously. In Step 1, a modularized scaffold containing a head-to-head promoter module and a pair of terminators was assembled with two genes; thus, two transcription units were assembled into one scaffold in a single Golden Gate reaction. In Step 2, the two transcription units were mixed with modules of selective markers and integration sites and transformed into S. cerevisiae for assembly and integration. In both steps, universal primers were designed for identification of correct clones. Establishment of a functional β-carotene biosynthetic pathway in S. cerevisiae within 5 days demonstrated the high efficiency of this method, and integration of a 10-transcription-unit pathway illustrated its capacity. Modular design of transcription units and integration elements simplified the assembly and integration procedure, and eliminated the frequent design and synthesis of DNA fragments required by previous methods. Also, by assembling most parts in vitro in Step 1, the number of DNA cassettes for homologous integration in Step 2 was significantly reduced. Thus, high assembly efficiency, high integration capacity, and low error rate were achieved.

  13. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
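
    A minimal sketch of radial-basis interpolation with the Kronecker delta property mentioned above. A Gaussian kernel is substituted for the paper's smoothed thin plate spline to keep the example short, so this illustrates the interpolation mechanics, not the STPS method itself.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            fac = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= fac * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def rbf(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)  # Gaussian radial basis function

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [math.sin(x) for x in nodes]           # field sampled at the nodes
A = [[rbf(abs(xi - xj)) for xj in nodes] for xi in nodes]
coeffs = solve(A, values)

def interp(x):
    return sum(c * rbf(abs(x - xj)) for c, xj in zip(coeffs, nodes))

# The interpolant reproduces nodal values exactly (Kronecker delta property)
# and tracks sin(x) between the nodes.
print(abs(interp(1.0) - math.sin(1.0)) < 1e-9)
print(abs(interp(0.75) - math.sin(0.75)) < 0.05)
```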

  14. The fine art of integral membrane protein crystallisation.

    PubMed

    Birch, James; Axford, Danny; Foadi, James; Meyer, Arne; Eckhardt, Annette; Thielmann, Yvonne; Moraes, Isabel

    2018-05-18

    Integral membrane proteins are among the most fascinating and important biomolecules as they play a vital role in many biological functions. Knowledge of their atomic structures is fundamental to the understanding of their biochemical function and key in many drug discovery programs. However, over the years, structure determination of integral membrane proteins has proven to be far from trivial, hence they are underrepresented in the Protein Data Bank. Low expression levels, insolubility and instability are just a few of the many hurdles one faces when studying these proteins. X-ray crystallography has been the most used method to determine atomic structures of membrane proteins. However, the production of high quality membrane protein crystals is always very challenging, often seen more as an art than a rational experiment. Here we review valuable approaches, methods and techniques for successful membrane protein crystallisation. Copyright © 2018 Diamond Light Source LTD. Published by Elsevier Inc. All rights reserved.

  15. Blending Two Major Techniques in Order to Compute [Pi]

    ERIC Educational Resources Information Center

    Guasti, M. Fernandez

    2005-01-01

    Three major techniques are employed to calculate [pi]. Namely, (i) the perimeter of polygons inscribed or circumscribed in a circle, (ii) calculus-based methods using integral representations of inverse trigonometric functions, and (iii) modular identities derived from the transformation theory of elliptic integrals. This note presents a…
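
    Technique (ii) can be illustrated with Machin's identity, which combines arctangent values computed from the power series of the integral representation arctan(x) = integral of 1/(1+t^2) dt from 0 to x:

```python
# Machin's identity: pi/4 = 4*arctan(1/5) - arctan(1/239).
def arctan_series(x, terms=30):
    # Maclaurin series of arctan; converges quickly for |x| < 1.
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

pi = 4 * (4 * arctan_series(1 / 5) - arctan_series(1 / 239))
print(abs(pi - 3.141592653589793) < 1e-12)  # True
```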

  16. Design and Implementation of an Interdisciplinary Marketing/Management Course on Technology and Innovation Management

    ERIC Educational Resources Information Center

    Athaide, Gerard A.; Desai, Harsha B.

    2005-01-01

    Given increasing industry demand for integrative learning, marketing curricula need to emphasize interdisciplinary approaches to teaching. Although team teaching is a useful method for achieving cross-functional integration, there are very few frameworks for effectively implementing team teaching. Consequently, marketing educators seeking to offer…

  17. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
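
    As a hedged sketch of the precision-interval idea only, the following computes a 95% confidence band for the mean response of a plain 1-D least-squares fit; the neural-network response surface of the paper is omitted, the data are invented, and 2.0 stands in for the exact t quantile.

```python
import math

xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [1.1, 2.9, 5.2, 7.1, 8.8, 11.2, 12.9, 15.1, 17.0, 19.2]  # noisy y = 2x + 1

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std. error

def confidence_interval(x0, t=2.0):
    """95% interval for the mean response at x0 (t = 2.0 approximates the
    t quantile with n - 2 degrees of freedom)."""
    yhat = intercept + slope * x0
    half = t * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
    return yhat - half, yhat + half

lo, hi = confidence_interval(5.0)
print(lo < 2 * 5.0 + 1 < hi)  # the true mean response lies inside the band
```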

  18. High efficiency integration of three-dimensional functional microdevices inside a microfluidic chip by using femtosecond laser multifoci parallel microfabrication

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Du, Wen-Qiang; Li, Jia-Wen; Hu, Yan-Lei; Yang, Liang; Zhang, Chen-Chu; Li, Guo-Qiang; Lao, Zhao-Xin; Ni, Jin-Cheng; Chu, Jia-Ru; Wu, Dong; Liu, Su-Ling; Sugioka, Koji

    2016-01-01

    High efficiency fabrication and integration of three-dimensional (3D) functional devices in Lab-on-a-chip systems are crucial for microfluidic applications. Here, a spatial light modulator (SLM)-based multifoci parallel femtosecond laser scanning technology was proposed to integrate microstructures inside a given ‘Y’-shaped microchannel. The key novelty of our approach lies in rapidly integrating 3D microdevices inside a microchip for the first time, which significantly reduces the fabrication time. The high quality integration of various 2D-3D microstructures was ensured by quantitatively optimizing the experimental conditions, including prebaking time, laser power and developing time. To verify the design flexibility and versatility of this method for integrating functional 3D microdevices in a microchannel, a series of microfilters with adjustable pore sizes from 12.2 μm to 6.7 μm were fabricated to demonstrate selective filtering of polystyrene (PS) particles and cancer cells of different sizes. The filter can be cleaned by reversing the flow and reused many times. This technology will advance the fabrication technique of 3D integrated microfluidic and optofluidic chips.

  19. Fractional spectral and pseudo-spectral methods in unbounded domains: Theory and applications

    NASA Astrophysics Data System (ADS)

    Khosravian-Arab, Hassan; Dehghan, Mehdi; Eslahchi, M. R.

    2017-06-01

    This paper is intended to provide exponentially accurate Galerkin, Petrov-Galerkin and pseudo-spectral methods for fractional differential equations on a semi-infinite interval. We start our discussion by introducing two new non-classical Lagrange basis functions: NLBFs-1 and NLBFs-2, which are based on the two new families of the associated Laguerre polynomials, GALFs-1 and GALFs-2, obtained recently by the authors in [28]. With respect to the NLBFs-1 and NLBFs-2, two new non-classical interpolants based on the associated-Laguerre-Gauss and Laguerre-Gauss-Radau points are introduced and then fractional (pseudo-spectral) differentiation (and integration) matrices are derived. Convergence and stability of the new interpolants are proved in detail. Several numerical examples are considered to demonstrate the validity and applicability of the basis functions to approximate fractional derivatives (and integrals) of some functions. Moreover, the pseudo-spectral, Galerkin and Petrov-Galerkin methods are successfully applied to solve some physical ordinary differential equations of either fractional or integer order. Some useful comments from the numerical point of view on the Galerkin and Petrov-Galerkin methods are listed at the end.
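
    A small numerical check of the kind of closed-form fractional derivative such methods must reproduce: the Caputo derivative of t^2 of order 1/2 equals Gamma(3)/Gamma(5/2) * t^(3/2). The midpoint quadrature below is only an illustration, not the paper's spectral scheme.

```python
import math

alpha = 0.5
t = 1.0

def caputo_t2(t, alpha, steps=100_000):
    # Caputo: D^a f(t) = 1/Gamma(1-a) * int_0^t f'(s) * (t-s)**(-a) ds,
    # here with f(s) = s**2, so f'(s) = 2*s. The midpoint rule tames the
    # integrable endpoint singularity at s = t.
    h = t / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        total += 2 * s * (t - s) ** (-alpha) * h
    return total / math.gamma(1 - alpha)

exact = math.gamma(3) / math.gamma(3 - alpha) * t ** (2 - alpha)
print(abs(caputo_t2(t, alpha) - exact) < 1e-2)  # True
```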

  20. A Generalized Technique in Numerical Integration

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan

    2018-02-01

    Integration by parts is one of the most popular techniques in the analysis of integrals and is one of the simplest methods to generate asymptotic expansions of integral representations. The product of the technique is usually a divergent series formed from evaluating boundary terms; however, sometimes the remaining integral is also evaluated. Due to the successive differentiation and anti-differentiation required to form the series or the remaining integral, the technique is difficult to apply to problems more complicated than the simplest. In this contribution, we explore a generalized and formalized integration by parts to create equivalent representations to some challenging integrals. As a demonstrative archetype, we examine Bessel integrals, Fresnel integrals and Airy functions.
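
    For a concrete archetype, repeated integration by parts on the exponential integral E1(x) = integral of exp(-t)/t dt from x to infinity produces the divergent-but-useful asymptotic series E1(x) ~ (exp(-x)/x) * (1 - 1!/x + 2!/x^2 - ...). The sketch below compares a few terms against brute-force quadrature; the choice x = 10 and the truncation settings are arbitrary.

```python
import math

x = 10.0

def e1_numeric(x, upper=60.0, steps=200_000):
    # Composite midpoint rule on the truncated integral; the exp(-t) tail
    # beyond `upper` is negligible at this accuracy.
    h = (upper - x) / steps
    return sum(math.exp(-t) / t * h
               for t in ((x + (k + 0.5) * h) for k in range(steps)))

def e1_asymptotic(x, terms=4):
    # Each term comes from one more round of integration by parts.
    series = sum((-1) ** k * math.factorial(k) / x ** k for k in range(terms))
    return math.exp(-x) / x * series

rel_err = abs(e1_asymptotic(x) - e1_numeric(x)) / e1_numeric(x)
print(rel_err < 5e-3)  # True: four terms already give ~0.2% accuracy at x = 10
```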

  1. Green's function solution to heat transfer of a transparent gas through a tube

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.

    1989-01-01

    A heat transfer analysis of a transparent gas flowing through a circular tube of finite thickness is presented. This study includes the effects of wall conduction, internal radiative exchange, and convective heat transfer. The natural mathematical formulation produces a nonlinear, integrodifferential equation governing the wall temperature and an ordinary differential equation describing the gas temperature. This investigation proposes to convert the original system of equations into an equivalent system of integral equations. The Green's function method permits the conversion of an integrodifferential equation into a pure integral equation. The proposed integral formulation and subsequent computational procedure are shown to be stable and accurate.

  2. Nano-array integrated monolithic devices: toward rational materials design and multi-functional performance by scalable nanostructures assembly

    DOE PAGES

    Wang, Sibo; Ren, Zheng; Guo, Yanbing; ...

    2016-03-21

    We report that the scalable three-dimensional (3-D) integration of functional nanostructures into applicable platforms represents a promising technology for meeting the ever-increasing demands of fabricating high performance devices featuring cost-effectiveness, structural sophistication and multi-functional capability. Such an integration process generally involves a diverse array of nanostructural entities (nano-entities) consisting of dissimilar nanoscale building blocks such as nanoparticles, nanowires, and nanofilms made of metals, ceramics, or polymers. Various synthetic strategies and integration methods have enabled the successful assembly of both structurally and functionally tailored nano-arrays into a unique class of monolithic devices. The performance of nano-array based monolithic devices is dictated by a few important factors such as materials substrate selection, nanostructure composition and nano-architecture geometry. Therefore, the rational material selection and nano-entity manipulation during the nano-array integration process, aiming to exploit the advantageous characteristics of nanostructures and their ensembles, are critical steps towards bridging the design of nanostructure integrated monolithic devices with various practical applications. In this article, we highlight the latest research progress in two-dimensional (2-D) and 3-D metal and metal oxide based nanostructural integration into prototype devices with ultrahigh efficiency, good robustness and improved functionality. Lastly, selected examples of nano-array integration, scalable nanomanufacturing and representative monolithic devices such as catalytic converters, sensors and batteries are used as connecting dots to display a roadmap from hierarchical nanostructural assembly to practical nanotechnology implications ranging from energy and environmental to chemical and biotechnology areas.

  3. Nano-array integrated monolithic devices: toward rational materials design and multi-functional performance by scalable nanostructures assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Sibo; Ren, Zheng; Guo, Yanbing

    We report that the scalable three-dimensional (3-D) integration of functional nanostructures into applicable platforms represents a promising technology for meeting the ever-increasing demands of fabricating high performance devices featuring cost-effectiveness, structural sophistication and multi-functional capability. Such an integration process generally involves a diverse array of nanostructural entities (nano-entities) consisting of dissimilar nanoscale building blocks such as nanoparticles, nanowires, and nanofilms made of metals, ceramics, or polymers. Various synthetic strategies and integration methods have enabled the successful assembly of both structurally and functionally tailored nano-arrays into a unique class of monolithic devices. The performance of nano-array based monolithic devices is dictated by a few important factors such as materials substrate selection, nanostructure composition and nano-architecture geometry. Therefore, the rational material selection and nano-entity manipulation during the nano-array integration process, aiming to exploit the advantageous characteristics of nanostructures and their ensembles, are critical steps towards bridging the design of nanostructure integrated monolithic devices with various practical applications. In this article, we highlight the latest research progress in two-dimensional (2-D) and 3-D metal and metal oxide based nanostructural integration into prototype devices with ultrahigh efficiency, good robustness and improved functionality. Lastly, selected examples of nano-array integration, scalable nanomanufacturing and representative monolithic devices such as catalytic converters, sensors and batteries are used as connecting dots to display a roadmap from hierarchical nanostructural assembly to practical nanotechnology implications ranging from energy and environmental to chemical and biotechnology areas.

  4. A Preliminary Investigation on Improving Functional Communication Training by Mitigating Resurgence of Destructive Behavior

    ERIC Educational Resources Information Center

    Fuhrman, Ashley M.; Fisher, Wayne W.; Greer, Brian D.

    2016-01-01

    Despite the effectiveness and widespread use of functional communication training (FCT), resurgence of destructive behavior can occur if the functional communication response (FCR) contacts a challenge, such as lapses in treatment integrity. We evaluated a method to mitigate resurgence by conducting FCT using a multiple schedule of reinforcement…

  5. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    Systematic research to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate a structural sizing and associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.

  6. A Unified Method of Finding Laplace Transforms, Fourier Transforms, and Fourier Series. [and] An Inversion Method for Laplace Transforms, Fourier Transforms, and Fourier Series. Integral Transforms and Series Expansions. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 324 and 325.

    ERIC Educational Resources Information Center

    Grimm, C. A.

    This document contains two units that examine integral transforms and series expansions. In the first module, the user is expected to learn how to use the unified method presented to obtain Laplace transforms, Fourier transforms, complex Fourier series, real Fourier series, and half-range sine series for given piecewise continuous functions. In…

  7. Harnessing glycomics technologies: integrating structure with function for glycan characterization

    PubMed Central

    Robinson, Luke N.; Artpradit, Charlermchai; Raman, Rahul; Shriver, Zachary H.; Ruchirawat, Mathuros; Sasisekharan, Ram

    2013-01-01

    Glycans, or complex carbohydrates, are a ubiquitous class of biological molecules which impinge on a variety of physiological processes ranging from signal transduction to tissue development and microbial pathogenesis. In comparison to DNA and proteins, glycans present unique challenges to the study of their structure and function owing to their complex and heterogeneous structures and the dominant role played by multivalency in their sequence-specific biological interactions. Arising from these challenges, there is a need to integrate information from multiple complementary methods to decode structure-function relationships. Focusing on acidic glycans, we describe here key glycomics technologies for characterizing their structural attributes, including linkage, modifications, and topology, as well as for elucidating their role in biological processes. Two case studies, one involving sialylated branched glycans and the other sulfated glycosaminoglycans, are used to highlight how integration of orthogonal information from diverse datasets enables rapid convergence of glycan characterization for development of robust structure-function relationships. PMID:22522536

  8. Scientific and Pragmatic Challenges for Bridging Education and Neuroscience

    ERIC Educational Resources Information Center

    Varma, Sashank; McCandliss, Bruce D.; Schwartz, Daniel L.

    2008-01-01

    Educational neuroscience is an emerging effort to integrate neuroscience methods, particularly functional neuroimaging, with behavioral methods to address issues of learning and instruction. This article consolidates common concerns about connecting education and neuroscience. One set of concerns is scientific: in-principle differences in methods,…

  9. Computational Prediction of the Global Functional Genomic Landscape: Applications, Methods and Challenges

    PubMed Central

    Zhou, Weiqiang; Sherwood, Ben; Ji, Hongkai

    2017-01-01

    Technological advances have led to an explosive growth of high-throughput functional genomic data. Exploiting the correlation among different data types, it is possible to predict one functional genomic data type from other data types. Prediction tools are valuable in understanding the relationship among different functional genomic signals. They also provide a cost-efficient solution to inferring the unknown functional genomic profiles when experimental data are unavailable due to resource or technological constraints. The predicted data may be used for generating hypotheses, prioritizing targets, interpreting disease variants, facilitating data integration, quality control, and many other purposes. This article reviews various applications of prediction methods in functional genomics, discusses analytical challenges, and highlights some common and effective strategies used to develop prediction methods for functional genomic data. PMID:28076869

  10. A robust functional-data-analysis method for data recovery in multichannel sensor systems.

    PubMed

    Sun, Jian; Liao, Haitao; Upadhyaya, Belle R

    2014-08-01

    Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
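
    The robustness argument for median-based smoothing can be sketched with a running median versus a running mean on a signal containing one glitch; the window size and spike value below are arbitrary illustration choices, not the paper's FPCA machinery.

```python
def smooth(signal, window, stat):
    # Running median or running mean over a centered window.
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = sorted(signal[max(0, i - half):i + half + 1])
        if stat == "median":
            out.append(chunk[len(chunk) // 2])
        else:  # mean
            out.append(sum(chunk) / len(chunk))
    return out

signal = [1.0] * 20
signal[10] = 100.0                 # a single outlier, e.g. a sensor glitch

med = smooth(signal, 5, "median")
avg = smooth(signal, 5, "mean")
print(med[10])                      # 1.0  -- the median rejects the outlier
print(avg[10] > 10)                 # True -- the mean is dragged far off
```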

  11. A study of structural properties of gene network graphs for mathematical modeling of integrated mosaic gene networks.

    PubMed

    Petrovskaya, Olga V; Petrovskiy, Evgeny D; Lavrik, Inna N; Ivanisenko, Vladimir A

    2017-04-01

    Gene network modeling is one of the widely used approaches in systems biology. It allows for the study of the function of complex genetic systems, including so-called mosaic gene networks, which consist of functionally interacting subnetworks. We conducted a study of a method for modeling mosaic gene networks based on integrating models of gene subnetworks by linear control functionals. Automatic modeling of 10,000 synthetic mosaic gene regulatory networks was carried out using computer experiments on gene knockdowns/knockouts. Structural analysis of the graphs of the generated mosaic gene regulatory networks revealed that, among the factors analyzed in the study, the most important one for building accurate integrated mathematical models is data on the expression of genes corresponding to vertices with high centrality.

  12. Integration of pyrotechnics into aerospace systems

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Schimmel, Morry L.

    1993-01-01

    The application of pyrotechnics to aerospace systems has been resisted because normal engineering methods cannot be used in design and evaluation. Commonly used approaches for energy sources, such as electrical, hydraulic and pneumatic, do not apply to explosive and pyrotechnic devices. This paper introduces the unique characteristics of pyrotechnic devices, describes how functional evaluations can be conducted, and demonstrates an engineering approach for pyrotechnic integration. Logic is presented that allows evaluation of two basic types of pyrotechnic systems to demonstrate functional margin.

  13. An Approach for Integrating the Prioritization of Functional and Nonfunctional Requirements

    PubMed Central

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of requirements which need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption, while its results remain in a high level of agreement with those produced by the other two approaches. PMID:24982987

  15. The resolvent of singular integral equations. [of kernel functions in mixed boundary value problems

    NASA Technical Reports Server (NTRS)

    Williams, M. H.

    1977-01-01

    The investigation reported is concerned with the construction of the resolvent for any given kernel function. In problems with ill-behaved inhomogeneous terms, as, for instance, in the aerodynamic problem of flow over a flapped airfoil, direct numerical methods become very difficult. A resolvent-based solution method that can be employed in such problems is described.

  16. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

    It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which three-center nuclear attraction and Coulomb integrals are the most frequently encountered. As the molecular system becomes larger, computing these integrals becomes one of the most laborious and time-consuming steps in molecular calculations. Improved computational methods for molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations that improve the convergence of highly oscillatory integrals. These transformations form the basis of new methods for solving problems that were otherwise intractable, and they have many further applications. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in turn should satisfy certain limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied: the differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
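
    The central idea above, accelerating the convergence of a highly oscillatory integral with a sequence transformation, can be sketched in miniature. The example below integrates the classical tail integral of sin(x)/x (whose value over [0, ∞) is π/2) piece by piece between the zeros of the sine, then applies a simple linear (Euler-type) iterated averaging to the resulting alternating partial sums. The paper's actual tools are nonlinear (Levin-type) transformations applied to B-function integrals, so everything here is a simplified stand-in.

```python
import math

def simpson(f, a, b, n=64):
    """Composite Simpson rule on [a, b] with an even number of panels n."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: math.sin(x) / x if x else 1.0

# partial integrals between consecutive zeros of sin(x) form an alternating series
terms = [simpson(f, k * math.pi, (k + 1) * math.pi) for k in range(12)]
partial = [sum(terms[:k + 1]) for k in range(len(terms))]

# iterated averaging (a linear, Euler-type transformation) of the partial sums
rows = [partial]
while len(rows[-1]) > 1:
    prev = rows[-1]
    rows.append([(prev[i] + prev[i + 1]) / 2 for i in range(len(prev) - 1)])
accelerated = rows[-1][0]            # estimate of the full integral, pi/2
```

    With only twelve oscillations of the integrand, the raw partial sum is accurate to a couple of digits while the transformed value is accurate to several more, which is the effect the abstract exploits.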

  17. Path integral approach to the Wigner representation of canonical density operators for discrete systems coupled to harmonic baths.

    PubMed

    Montoya-Castillo, Andrés; Reichman, David R

    2017-01-14

    We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_zz(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model, with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.

  18. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranken, D.; George, J.

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  19. START: an advanced radiation therapy information system.

    PubMed

    Cocco, A; Valentini, V; Balducci, M; Mantello, G

    1996-01-01

    START is an advanced radiation therapy information system (RTIS) which connects the direct information technology present in treatment devices with the indirect information technology for clinical, administrative, and management information, integrated with the hospital information system (HIS). The following objectives are pursued: to support decision making in treatment planning and functional and information integration with the rest of the hospital; to enhance the organizational efficiency of a Radiation Therapy Department; to facilitate the statistical evaluation of clinical data and managerial performance assessment; and to ensure the safety and confidentiality of the data used. It was developed using a working method based on the involvement of all operators of the Radiation Therapy Department, and it was introduced into daily work gradually, reusing and integrating the existing information applications. The START information flow identifies four major phases: admission, admission visit, planning, and therapy. The system's main functionalities available to the radiotherapist are: clinical history/medical report linking; folder; planning; tracking; electronic mail and banner; statistical; and management functions. Available to the radiotherapy technician are the room daily list and management functions; available to the nurse are the patient-directing and management functions. START is a departmental client (PC-Windows)/server (Unix) system developed on an integrated database of all information of interest (clinical, organizational, and administrative), coherent with standards and with a modular architecture that can evolve with additional functionalities over time. For a more thorough evaluation of its impact on the daily activity of a radiation therapy facility, a prolonged clinical validation is in progress.

  20. Thermally-isolated silicon-based integrated circuits and related methods

    DOEpatents

    Wojciechowski, Kenneth; Olsson, Roy H.; Clews, Peggy J.; Bauer, Todd

    2017-05-09

    Thermally isolated devices may be formed by performing a series of etches on a silicon-based substrate. As a result of the series of etches, silicon material may be removed from underneath a region of an integrated circuit (IC). The removal of the silicon material from underneath the IC forms a gap between remaining substrate and the integrated circuit, though the integrated circuit remains connected to the substrate via a support bar arrangement that suspends the integrated circuit over the substrate. The creation of this gap functions to release the device from the substrate and create a thermally-isolated integrated circuit.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathasivam, Saratha

    A new activation function is examined for its ability to accelerate logic programming in a Hopfield network. The method has a higher capacity and improves neuro-symbolic integration. Computer simulations are carried out to validate the effectiveness of the new activation function, and the empirical results obtained support our theory.

  2. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
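
    A scalar analogue of the removable-singularity issue described above is the function φ1(x) = (e^x − 1)/x, ubiquitous in exponential integrators: the naive formula loses digits for small |x|, while a series approximation used inside a prescribed neighbourhood of the singularity stays accurate. The sketch below (with a hypothetical switching threshold, not Nadukandi's actual X-IVAS formulas) illustrates the piecewise definition.

```python
import math

def phi1_naive(x):
    """Direct formula for phi1(x) = (exp(x) - 1)/x; loses digits near x = 0."""
    return (math.exp(x) - 1.0) / x if x != 0.0 else 1.0

def phi1_stable(x, threshold=1e-2):
    """Piecewise definition: a Taylor series is used inside a neighbourhood
    of the removable singularity at 0 (threshold chosen for illustration)."""
    if abs(x) < threshold:
        term, total, k = 1.0, 1.0, 1
        while True:
            term *= x / (k + 1)      # term = x**k / (k+1)!
            if total + term == total:
                return total
            total += term
            k += 1
    return math.expm1(x) / x
```

    Here `math.expm1` serves as an accurate scalar reference; in the matrix-function setting of the abstract, the same switch between a closed formula and a series expansion is applied to divided differences of the exponential.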

  3. A study of the displacement of a Wankel rotary engine

    NASA Astrophysics Data System (ADS)

    Beard, J. E.; Pennock, G. R.

    1993-03-01

    The volumetric displacement of a Wankel rotary engine is a function of the trochoid ratio and the pin size ratio, assuming the engine has a unit depth and the number of lobes is specified. The mathematical expression which defines the displacement contains a function which can be evaluated directly and a normal elliptic integral of the second kind which does not have an explicit solution. This paper focuses on the contribution of the elliptic integral to the total displacement of the engine. The influence of the elliptic integral is shown to account for as much as 20 percent of the total displacement, depending on the trochoid ratio and the pin size ratio. Two numerical integration techniques are compared in the paper, namely, the trapezoidal rule and Simpson's 1/3 rule. The bounds on the error associated with each numerical method are analyzed. The results indicate that the numerical method has a minimal effect on the accuracy of the calculated displacement for a practical number of integration steps. The paper also evaluates the influence of manufacturing tolerances on the calculated displacement and the actual displacement. Finally, a numerical example of the common three-lobed Wankel rotary engine is included for illustrative purposes.
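
    The quadrature comparison in the abstract can be reproduced in miniature on an incomplete elliptic integral of the second kind, E(φ, m) = ∫₀^φ √(1 − m sin²θ) dθ. The parameter values below are hypothetical and unrelated to any particular trochoid or pin-size ratio.

```python
import math

def integrand(theta, m=0.64):
    # elliptic-integral integrand sqrt(1 - m sin^2(theta)); m is illustrative
    return math.sqrt(1.0 - m * math.sin(theta) ** 2)

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):             # n must be even
    h = (b - a) / n
    return h / 3 * (f(a) + f(b)
                    + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                    + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

a, b = 0.0, 1.0                      # incomplete integral E(phi = 1, m = 0.64)
ref = simpson(integrand, a, b, 4096)           # fine-grid reference value
err_trap = abs(trapezoid(integrand, a, b, 16) - ref)
err_simp = abs(simpson(integrand, a, b, 16) - ref)
```

    With the same 16 panels, both rules are already accurate to several digits on this smooth integrand, consistent with the abstract's finding that the choice of rule matters little for a practical number of steps; Simpson's 1/3 rule is nonetheless the closer of the two.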

  4. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. We illustrate the approach by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators.
In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
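
    The Picard fixed-point structure and the benefit of a warm start can be sketched with a toy scalar problem. The code below iterates y_{k+1}(t) = y0 + ∫₀ᵗ f(s, y_k(s)) ds on a uniform grid with trapezoidal quadrature for y' = y, y(0) = 1. The real MCPI instead uses a Chebyshev polynomial basis and its nodes, so this is only a structural illustration with invented grid and sweep counts.

```python
import math

def picard(f, t, y0, y_start, sweeps):
    """Picard sweeps y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds on a
    uniform grid, with the integral evaluated by the trapezoidal rule."""
    y = list(y_start)
    for _ in range(sweeps):
        g = [f(ti, yi) for ti, yi in zip(t, y)]
        new = [y0]
        for i in range(1, len(t)):
            new.append(new[-1] + 0.5 * (t[i] - t[i - 1]) * (g[i] + g[i - 1]))
        y = new
    return y

n = 64
t = [i / n for i in range(n + 1)]         # y' = y, y(0) = 1 on [0, 1]; y(1) = e
f = lambda s, y: y

cold = picard(f, t, 1.0, [1.0] * (n + 1), sweeps=25)
warm_guess = [1.0 + ti for ti in t]       # crude first-order warm start
warm = picard(f, t, 1.0, warm_guess, sweeps=25)

# with only a few sweeps, the warm start is visibly closer to the solution
cold_few = picard(f, t, 1.0, [1.0] * (n + 1), sweeps=6)
warm_few = picard(f, t, 1.0, warm_guess, sweeps=6)
```

    Both starts converge to the same discrete fixed point when given enough sweeps; the warm start simply gets there with fewer, which mirrors the role of the continuation-series warm start for the MCPI.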

  5. Innovation in Evaluating the Impact of Integrated Service-Delivery: The Integra Indexes of HIV and Reproductive Health Integration

    PubMed Central

    Mayhew, Susannah H.; Ploubidis, George B.; Sloggett, Andy; Church, Kathryn; Obure, Carol D.; Birdthistle, Isolde; Sweeney, Sedona; Warren, Charlotte E.; Watts, Charlotte; Vassall, Anna

    2016-01-01

    Background The body of knowledge on evaluating complex interventions for integrated healthcare lacks both common definitions of ‘integrated service delivery’ and standard measures of impact. Using multiple data sources in combination with statistical modelling, this study aims to develop a measure of HIV-reproductive health (HIV-RH) service integration that can be used to assess the degree of service integration, and the degree to which integration may have health benefits to clients or reduce service costs. Methods and Findings Data were drawn from the Integra Initiative’s client flow (8,263 clients in Swaziland and 25,539 in Kenya) and costing tools implemented between 2008 and 2012 in 40 clinics providing RH services in Kenya and Swaziland. We used latent variable measurement models to derive dimensions of HIV-RH integration using these data, which quantified the extent and type of integration between HIV and RH services in Kenya and Swaziland. The modelling produced two clear and uncorrelated dimensions of integration at facility level, leading to the development of two sub-indexes: a Structural Integration Index (integrated physical and human resource infrastructure) and a Functional Integration Index (integrated delivery of services to clients). The findings highlight the importance of multi-dimensional assessments of integration, suggesting that structural integration is not sufficient to achieve the integrated delivery of care to clients—i.e. “functional integration”. Conclusions These Indexes are an important methodological contribution for evaluating complex multi-service interventions. They help address the need to broaden traditional evaluations of integrated HIV-RH care through the incorporation of a functional integration measure, to avoid misleading conclusions on its ‘impact’ on health outcomes. This is particularly important for decision-makers seeking to promote integration in resource-constrained environments. PMID:26800517

  6. Effects of an Arts Integration Curriculum versus a Non-Arts Integration Curriculum on the School Experiences of Kindergarten through Middle School Students with Autism

    ERIC Educational Resources Information Center

    Batson, Robyne Diane Miles

    2010-01-01

    This study addressed one possible method of instruction for students with high-functioning autism. A treatment and control group were examined to discover if a difference was present in the areas of social skills, communication skills, and classroom behaviors using an arts integration curriculum. The review of literature pointed to a promising…

  7. An integrand reconstruction method for three-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2012-08-01

    We consider the maximal cut of a three-loop four-point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method which extracts all ten propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three-loop triple-box contribution to gluon-gluon scattering in Yang-Mills theory with adjoint fermions and scalars in terms of three master integrals.

  8. On a method for generating inequalities for the zeros of certain functions

    NASA Astrophysics Data System (ADS)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with a bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3) (1952), Indag. Math. 14 (1952) 224-229] and again more recently [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform. Special Functions 10 (2000) 41-56] to the zeros of the Bessel functions of the first kind. Here, we present the results of applying the method to obtain inequalities satisfied by the zeros of the derivative of the function considered. This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.
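
    The flavour of the method, a two-term approximation whose error can be bounded, can be shown on the classical Bessel case the abstract cites. Below, the two-term McMahon approximation β + 1/(8β) with β = (s − 1/4)π for the s-th positive zero of J0 is checked against zeros located by bisection, with J0 evaluated from its integral representation; the tolerances are illustrative, not the paper's actual bounds.

```python
import math

def j0(x, n=256):
    """Bessel J0 from its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, by Simpson's rule."""
    h = math.pi / n
    acc = math.cos(0.0) + math.cos(x * math.sin(math.pi))
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * math.cos(x * math.sin(i * h))
    return acc * h / (3 * math.pi)

def mcmahon2(s):
    """Two-term McMahon approximation for the s-th positive zero of J0."""
    beta = (s - 0.25) * math.pi
    return beta + 1.0 / (8.0 * beta)

def bisect_zero(lo, hi, tol=1e-13):
    neg_lo = j0(lo) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (j0(mid) < 0) == neg_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

errors = []
for s in (1, 2, 3, 4):
    approx = mcmahon2(s)
    true_zero = bisect_zero(approx - 0.05, approx + 0.05)
    errors.append(abs(approx - true_zero))
```

    The error of the two-term formula shrinks as s grows, which is exactly the regime in which bounds of the McMahon type become useful.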

  9. How to conduct a qualitative meta-analysis: Tailoring methods to enhance methodological integrity.

    PubMed

    Levitt, Heidi M

    2018-05-01

    Although qualitative research has long been of interest in the field of psychology, meta-analyses of qualitative literatures (sometimes called meta-syntheses) are still quite rare. Like quantitative meta-analyses, these methods function to aggregate findings and identify patterns across primary studies, but their aims, procedures, and methodological considerations may vary. This paper explains the function of qualitative meta-analyses and their methodological development. Recommendations have broad relevance but are framed with an eye toward their use in psychotherapy research. Rather than arguing for the adoption of any single meta-method, this paper advocates for considering how procedures can best be selected and adapted to enhance a meta-study's methodological integrity. Throughout the paper, recommendations are provided to help researchers identify procedures that can best serve their studies' specific goals. Meta-analysts are encouraged to consider the methodological integrity of their studies in relation to central research processes, including identifying a set of primary research studies, transforming primary findings into initial units of data for a meta-analysis, developing categories or themes, and communicating findings. The paper provides guidance for researchers who desire to tailor meta-analytic methods to meet their particular goals while enhancing the rigor of their research.

  10. A two-step, fourth-order method with energy preserving properties

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato

    2012-09-01

    We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a distinctive feature of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization by means of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to always be the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results.
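
    The line-integral-plus-quadrature mechanism can be demonstrated with the closely related (second-order) average-vector-field method, which conserves the energy of a polynomial Hamiltonian whenever the quadrature is exact for the gradient along the step. This is a simpler relative of the paper's fourth-order two-step formulae, with the Hamiltonian, step size, and iteration counts invented for illustration.

```python
import math

def grad_H(q, p):
    """Gradient of the polynomial Hamiltonian H(q, p) = p**2/2 + q**4/4."""
    return q ** 3, p

def avf_step(q, p, h, iters=50):
    """One average-vector-field step: the line integral of grad H along the
    segment from (q, p) to the new point is approximated by Simpson's rule,
    which is exact here because grad H is cubic along the segment."""
    qn, pn = q, p                    # initial guess for the implicit equations
    for _ in range(iters):           # simple fixed-point iteration
        gq = gp = 0.0
        for xi, w in ((0.0, 1 / 6), (0.5, 4 / 6), (1.0, 1 / 6)):
            dq, dp = grad_H((1 - xi) * q + xi * qn, (1 - xi) * p + xi * pn)
            gq += w * dq
            gp += w * dp
        qn = q + h * gp              # q' = dH/dp averaged along the segment
        pn = p - h * gq              # p' = -dH/dq averaged along the segment
    return qn, pn

H = lambda q, p: 0.5 * p * p + 0.25 * q ** 4
q, p = 1.0, 0.0
e0 = H(q, p)
for _ in range(200):
    q, p = avf_step(q, p, 0.05)
drift = abs(H(q, p) - e0)            # machine-precision energy conservation
```

    Because the Hamiltonian is quartic, the integrand of the discrete line integral is cubic and Simpson's rule integrates it exactly, so the energy drift over the whole trajectory stays at round-off level.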

  11. Interorganisational Integration: Healthcare Professionals’ Perspectives on Barriers and Facilitators within the Danish Healthcare System

    PubMed Central

    Godtfredsen, Nina Skavlan; Frølich, Anne

    2016-01-01

    Introduction: Despite many initiatives to improve coordination of patient pathways and intersectoral cooperation, Danish health care is still fragmented, lacking intra- and interorganisational integration. This study explores barriers to and facilitators of interorganisational integration as perceived by healthcare professionals caring for patients with chronic obstructive pulmonary disease within the Danish healthcare system. Methods: Seven focus groups were conducted in January through July 2014 with 21 informants from general practice, local healthcare centres and a pulmonary department at a university hospital in the Capital Region of Denmark. Results and discussion: Our results can be grouped into five influencing areas for interorganisational integration: communication/information transfer, committed leadership, patient engagement, the role and competencies of the general practitioner and organisational culture. Proposed solutions to barriers in each area hold the potential to improve care integration as experienced by individuals responsible for supporting and facilitating it. Barriers and facilitators to integrating care relate to clinical, professional, functional and normative integration. Especially, clinical, functional and normative integration seems fundamental to developing integrated care in practice from the perspective of healthcare professionals. PMID:27616948

  12. An integrative method for testing form–function linkages and reconstructed evolutionary pathways of masticatory specialization

    PubMed Central

    Tseng, Z. Jack; Flynn, John J.

    2015-01-01

    Morphology serves as a ubiquitous proxy in macroevolutionary studies to identify potential adaptive processes and patterns. Inferences of functional significance of phenotypes or their evolution are overwhelmingly based on data from living taxa. Yet, correspondence between form and function has been tested in only a few model species, and those linkages are highly complex. The lack of explicit methodologies to integrate form and function analyses within a deep-time and phylogenetic context weakens inferences of adaptive morphological evolution, by invoking but not testing form–function linkages. Here, we provide a novel approach to test mechanical properties at reconstructed ancestral nodes/taxa and the strength and direction of evolutionary pathways in feeding biomechanics, in a case study of carnivorous mammals. Using biomechanical profile comparisons that provide functional signals for the separation of feeding morphologies, we demonstrate, using experimental optimization criteria on estimation of strength and direction of functional changes on a phylogeny, that convergence in mechanical properties and degree of evolutionary optimization can be decoupled. This integrative approach is broadly applicable to other clades, by using quantitative data and model-based tests to evaluate interpretations of function from morphology and functional explanations for observed macroevolutionary pathways. PMID:25994295

  13. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

    We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
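
    The database-plus-regression idea above can be sketched with a deliberately tiny stand-in: a discretized smoothing (first-kind Fredholm) forward operator, a database of forward solutions for a two-parameter family of inputs, and nearest-neighbour regression in place of the paper's learned regression function and constraint projection. The kernel, basis functions, and noise level are all invented for illustration.

```python
import math
import random

random.seed(1)
G = 32
t = [(j + 0.5) / G for j in range(G)]
phi1 = [math.sin(math.pi * tj) for tj in t]
phi2 = [math.sin(2 * math.pi * tj) for tj in t]

def forward(a, b, noise=0.0):
    """Smoothing (Fredholm first-kind) forward operator y(s) = ∫ K(s,t) x(t) dt
    for x = a*phi1 + b*phi2, discretized on the grid, plus optional noise."""
    x = [a * p1 + b * p2 for p1, p2 in zip(phi1, phi2)]
    y = []
    for s in t:
        k = [math.exp(-((s - tj) ** 2) / 0.02) for tj in t]
        y.append(sum(ki * xi for ki, xi in zip(k, x)) / G
                 + random.gauss(0.0, noise))
    return y

# database of forward solutions for physically meaningful inputs
database = []
for _ in range(400):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    database.append((forward(a, b), (a, b)))

def knn_invert(y, k=10):
    # nearest-neighbour regression: average the inputs of the k closest outputs
    ranked = sorted(database,
                    key=lambda rec: sum((u - v) ** 2 for u, v in zip(rec[0], y)))
    a = sum(rec[1][0] for rec in ranked[:k]) / k
    b = sum(rec[1][1] for rec in ranked[:k]) / k
    return a, b

a_hat, b_hat = knn_invert(forward(0.3, -0.5, noise=0.01))
```

    The stability of the forward problem is what makes the database cheap to build; the regression step then avoids ever forming the ill-conditioned inverse of the kernel, which is the core observation of the abstract.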

  14. Fundamentals of functional imaging II: emerging MR techniques and new methods of analysis.

    PubMed

    Luna, A; Martín Noguerol, T; Mata, L Alcalá

    2018-05-01

    Current multiparameter MRI protocols integrate structural, physiological, and metabolic information about cancer. Emerging techniques such as arterial spin-labeling (ASL), blood oxygen level dependent (BOLD), MR elastography, chemical exchange saturation transfer (CEST), and hyperpolarization provide new information and will likely be integrated into daily clinical practice in the near future. Furthermore, there is great interest in the study of tumor heterogeneity as a prognostic factor and in relation to resistance to treatment, and this interest is leading to the application of new methods of analysis of multiparametric protocols. In parallel, new oncologic biomarkers that integrate the information from MR with clinical, laboratory, genetic, and histologic findings are being developed, thanks to the application of big data and artificial intelligence. This review analyzes different emerging MR techniques that are able to evaluate the physiological, metabolic, and mechanical characteristics of cancer, as well as the main clinical applications of these techniques. In addition, it summarizes the most novel methods of analysis of functional radiologic information in oncology. Copyright © 2018 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.

  15. Semiclassical Path Integral Calculation of Nonlinear Optical Spectroscopy.

    PubMed

    Provazza, Justin; Segatta, Francesco; Garavelli, Marco; Coker, David F

    2018-02-13

Computation of nonlinear optical response functions allows for an in-depth connection between theory and experiment. Experimentally recorded spectra provide a high density of information, but to objectively disentangle overlapping signals and to reach a detailed and reliable understanding of the system dynamics, measurements must be integrated with theoretical approaches. Here, we present a new, highly accurate and efficient trajectory-based semiclassical path integral method for computing higher order nonlinear optical response functions for non-Markovian open quantum systems. The approach is, in principle, applicable to general Hamiltonians and does not require any restrictions on the form of the intrasystem or system-bath couplings. This method is systematically improvable and is shown to be valid in parameter regimes where perturbation theory-based methods qualitatively break down. As a test of the methodology presented here, we study a system-bath model for a coupled dimer, for which we compare against numerically exact results and standard approximate perturbation theory-based calculations. Additionally, we study a monomer with discrete vibronic states that serves as the starting point for future investigation of vibronic signatures in nonlinear electronic spectroscopy.

  16. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly-singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.

  17. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
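For context, the object being accelerated here is the trapezoidal-rule approximation of the Bromwich integral, which the FFT/FHT machinery evaluates in bulk. Below is a minimal direct-summation sketch for a single time point (an Abate-Whitt-style Fourier-series form, not the authors' FFT/FHT formulation); the damping parameter A and the truncation level K are assumed choices.

```python
import numpy as np

def invert_laplace(F, t, A=18.4, K=5000):
    """Trapezoidal-rule (Fourier-series) approximation of the Bromwich
    integral at a single time t; A controls the discretization error ~exp(-A)."""
    s0 = A / (2.0 * t)                      # real abscissa of the contour
    k = np.arange(1, K + 1)
    terms = ((-1.0) ** k) * np.real(F(s0 + 1j * k * np.pi / t))
    return (np.exp(A / 2.0) / t) * (0.5 * np.real(F(s0)) + terms.sum())

# Example: F(s) = 1/(s+1) has the known inverse f(t) = exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The improved method in the record replaces this per-point summation with batched FFT/FHT evaluations over all N sample times at once.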

  18. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm, in the learning of parametrized policies for multidimensional movement. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally, we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and final cost, which leads to an increased interest in its application to more complex robotic problems.
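The PIBB-style update (sample parameter perturbations, weight rollouts by exponentiated cost, average) is easy to sketch on a toy quadratic cost. This is not the paper's DMP setup; the dimensionality, the target, the weighting temperature, and the covariance decay are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

target = np.array([2.0, -1.0, 0.5, 1.5, -0.5])    # hypothetical optimal parameters
cost = lambda theta: np.sum((theta - target) ** 2)

mu, sigma = np.zeros(5), 1.0                       # search distribution
for _ in range(100):
    eps = rng.standard_normal((20, 5))             # 20 perturbations per update
    thetas = mu + sigma * eps
    J = np.array([cost(th) for th in thetas])
    # PIBB weighting: exponentiate normalized costs; low cost -> high weight.
    w = np.exp(-10.0 * (J - J.min()) / (J.max() - J.min() + 1e-12))
    w /= w.sum()
    mu = w @ thetas                                # reward-weighted averaging
    sigma *= 0.98                                  # simple exploration decay

final = cost(mu)
```

The PI2-CMA and PIBB-CMA variants discussed in the record additionally adapt the full sampling covariance rather than decaying a scalar sigma.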

  19. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu

    2011-12-01

We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge averaged quantities which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al in [17]. However, [17] is problematic when it is applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
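A minimal explicit SDC step for a scalar ODE illustrates the time-integration half of the scheme: a low-order forward-Euler prediction on quadrature nodes, then correction sweeps against a spectral quadrature, each sweep raising the formal order by one. The three-node Lobatto (Simpson-type) quadrature and the sweep count are assumptions for this sketch; the PPM spatial machinery is omitted entirely.

```python
import numpy as np

def sdc_step(f, t, y0, h, sweeps=4):
    """One explicit SDC step on the nodes {t, t+h/2, t+h}."""
    c = np.array([0.0, 0.5, 1.0])
    # Integrals of the quadratic interpolant over each sub-interval (times h).
    S = np.array([[ 5/24, 8/24, -1/24],
                  [-1/24, 8/24,  5/24]])
    y = np.empty(3)
    y[0] = y0
    for m in range(2):                       # provisional forward-Euler sweep
        y[m+1] = y[m] + 0.5 * h * f(t + c[m]*h, y[m])
    for _ in range(sweeps):
        fo = np.array([f(t + cm*h, ym) for cm, ym in zip(c, y)])
        ynew = np.empty(3)
        ynew[0] = y0
        for m in range(2):                   # correction sweep
            quad = h * (S[m] @ fo)           # spectral integral of old solution
            ynew[m+1] = ynew[m] + 0.5*h*(f(t + c[m]*h, ynew[m]) - fo[m]) + quad
        y = ynew
    return y[2]

# Integrate y' = -y, y(0) = 1 up to t = 1 with 4 SDC steps.
y, t, h = 1.0, 0.0, 0.25
for _ in range(4):
    y = sdc_step(lambda t, u: -u, t, y, h)
    t += h
```

With enough sweeps the iteration converges to the underlying collocation solution, which is what gives the scheme its high formal order.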

  20. Thermal Remote Sensing and the Thermodynamics of Ecosystem Development

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.; Rickman, Doug; Fraser, Roydon F.

    2011-01-01

Ecosystems develop structure and function that degrade the quality of the incoming energy more effectively. The ecosystem T and Rn/K* and TRN are excellent candidates for indicators of ecological integrity. The potential for these methods to be used for remotely sensed ecosystem classification and ecosystem health/integrity evaluation is apparent.

  1. System Concept in Education. Professional Paper No. 20-74.

    ERIC Educational Resources Information Center

    Smith, Robert G., Jr.

    In its most general sense, a system is a group of components integrated to accomplish a purpose. The heart of an educational system is the instructional system. An instructional system is an integrated set of media, equipment, methods, and personnel performing efficiently those functions required to accomplish one or more learning objectives. An…

  2. Severity of Mental Health Impairment and Trajectories of Improvement in an Integrated Primary Care Clinic

    ERIC Educational Resources Information Center

    Bryan, Craig J.; Corso, Meghan L.; Corso, Kent A.; Morrow, Chad E.; Kanzler, Kathryn E.; Ray-Sannerud, Bobbie

    2012-01-01

    Objective: To model typical trajectories for improvement among patients treated in an integrated primary care behavioral health service, multilevel models were used to explore the relationship between baseline mental health impairment level and eventual mental health functioning across follow-up appointments. Method: Data from 495 primary care…

  3. Integration of Environmental Education and Environmental Law Enforcement for Police Officers

    ERIC Educational Resources Information Center

    Bovornkijprasert, Sravoot; Rawang, Wee

    2016-01-01

    The purpose of this research was to establish an integrated model of environmental education (EE) and environmental law enforcement (ELE) to improve the efficiency of functional competency for police officers in Bangkok Metropolitan Police Division 9 (MBP Div. 9). The research design was mixed methods of quantitative and qualitative approaches…

  4. In Defense of Silos: An Argument against the Integrative Undergraduate Business Curriculum

    ERIC Educational Resources Information Center

    Campbell, Noel D.; Heriot, Kirk C.; Finney, R. Zachary

    2006-01-01

    The literature urges business schools to change their undergraduate curricula in response to changes in the models and methods currently used by corporate America. Critics contend that business schools should place more emphasis on teamwork and integrative models. Business schools are urged to "break down the silos" between functional subjects by…

  5. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  6. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    NASA Astrophysics Data System (ADS)

    Song, Jong-Won; Hirao, Kimihiko

    2015-07-01

We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral and that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme helps to save time in the HF exchange integration by efficiently decreasing the number of integrals in, specifically, the near-field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme has 1.56 times the time cost of a pure functional, while the previous Gau-PBE had 1.84 times and HSE06 3.34 times.

  7. Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    2015-07-14

We previously developed an efficient screened hybrid functional called Gaussian-Perdew–Burke–Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral and that utilizes multipole expansion for the Gaussian two-electron integrals. We found that the new multipole screening scheme helps to save time in the HF exchange integration by efficiently decreasing the number of integrals in, specifically, the near-field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with the new screening scheme has 1.56 times the time cost of a pure functional, while the previous Gau-PBE had 1.84 times and HSE06 3.34 times.

  8. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to the depth images acquired during the long integration time.
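The depth-transfer-function estimation step can be sketched with synthetic data: build the joint histogram of the two exposures and read off, for each short-exposure depth bin, the modal long-exposure depth. The affine bias between exposures and all constants below are hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic depth samples: the long exposure reads a slightly scaled/offset
# depth (a hypothetical integration-time-dependent bias) plus sensor noise.
d_short = rng.uniform(0.5, 3.0, size=20000)
d_long = 1.02 * d_short + 0.05 + rng.normal(0.0, 0.01, size=d_short.shape)

# Joint histogram; per short-exposure bin, take the modal long-exposure depth.
bins = np.linspace(0.5, 3.2, 55)
H, _, _ = np.histogram2d(d_short, d_long, bins=[bins, bins])
centers = 0.5 * (bins[:-1] + bins[1:])
valid = H.sum(axis=1) > 0                      # ignore unpopulated bins
transfer = centers[np.argmax(H, axis=1)]       # estimated transfer function

# Compare against the true mapping on populated bins.
err = np.abs(transfer[valid] - (1.02 * centers[valid] + 0.05))
```

In the paper's pipeline this estimated curve aligns the two depth maps before the wavelet-domain fusion; here the estimate is accurate to roughly the histogram bin width.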

  9. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
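The core costate (adjoint) idea generalizes beyond the Euler equations: one extra linear solve yields the sensitivity of a functional, independent of the number of design parameters. A minimal discrete sketch on a made-up 2x2 state equation, checked against finite differences:

```python
import numpy as np

# Hypothetical discrete state equation A(p) u = b and objective J = c . u
def A(p):
    return np.array([[4.0 + p, 1.0],
                     [1.0,     3.0]])

dA_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])              # derivative of A w.r.t. the design p
b = np.array([1.0, 2.0])
c = np.array([2.0, -1.0])

p = 0.7
u = np.linalg.solve(A(p), b)                # state solve
lam = np.linalg.solve(A(p).T, c)            # costate (adjoint) solve
dJ_dp_adj = -lam @ dA_dp @ u                # adjoint sensitivity dJ/dp

# Finite-difference check of the same derivative.
eps = 1e-6
J = lambda q: c @ np.linalg.solve(A(q), b)
dJ_dp_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
```

This is the discrete analogue of the efficiency claim in the abstract: the adjoint route costs one extra solve, whereas finite differencing costs one solve per design parameter.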

  10. Design and ergonomics. Methods for integrating ergonomics at hand tool design stage.

    PubMed

    Marsot, Jacques; Claudon, Laurent

    2004-01-01

As a marked increase in the number of musculoskeletal disorders was noted in many industrialized countries, and more specifically in companies that require the use of hand tools, the French National Research and Safety Institute (INRS) launched in 1999 a research project on the topic of integrating ergonomics into hand tool design, and more particularly into the design of a boning knife. After a brief recall of the difficulties of integrating ergonomics at the design stage, the present paper shows how 3 design methodological tools--Functional Analysis, Quality Function Deployment and TRIZ--have been applied to the design of a boning knife. Implementation of these tools enabled us to demonstrate the extent to which they are capable of responding to the difficulties of integrating ergonomics into product design.

  11. A new method to calculate the beam charge for an integrating current transformer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Yuchi; Han Dan; Zhu Bin

    2012-09-15

The integrating current transformer (ICT) is a magnetic sensor widely used to precisely measure the charge of an ultra-short-pulse charged particle beam generated by traditional accelerators and new laser-plasma particle accelerators. In this paper, we present a new method to calculate the beam charge in an ICT based on circuit analysis. The output transfer function shows an invariable signal profile for an ultra-short electron bunch, so the function can be used to evaluate the signal quality and calculate the beam charge through signal fitting. We obtain a set of parameters in the output function from a standard signal generated by an ultra-short electron bunch (about 1 ps in duration) at a radio frequency linear electron accelerator at Tsinghua University. These parameters can be used to obtain the beam charge by signal fitting with excellent accuracy.
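Because the charge enters only as the amplitude multiplying a fixed output-signal shape, the final fitting step reduces to a linear least-squares projection. The damped-sine shape and all numbers below are hypothetical stand-ins for the calibrated transfer function, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fixed ICT output shape for an ultra-short bunch: a damped sine.
t = np.linspace(0.0, 200e-9, 400)                        # 200 ns window
shape = np.exp(-t / 40e-9) * np.sin(2*np.pi*30e6 * t)    # unit-charge response

# Measured signal: the true charge Q scales the shape; add sensor noise.
Q_true = 250e-12                                         # 250 pC
signal = Q_true * shape + 5e-12 * rng.standard_normal(t.size)

# Least-squares fit of the amplitude, i.e. the beam charge.
Q_fit = (signal @ shape) / (shape @ shape)
```

Fitting the whole waveform against the known profile averages down the noise, which is the source of the accuracy claimed in the record.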

  12. Functional integral for non-Lagrangian systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    2010-02-01

A functional integral formulation of quantum mechanics for non-Lagrangian systems is presented. The approach, which we call “stringy quantization,” is based solely on classical equations of motion and is free of any ambiguity arising from Lagrangian and/or Hamiltonian formulation of the theory. The functionality of the proposed method is demonstrated on several examples. Special attention is paid to the stringy quantization of systems with a general A-power friction force −κq̇^A. Results for A=1 are compared with those obtained in the approaches by Caldirola-Kanai, Bateman, and Kostin. Relations to the Caldeira-Leggett model and to the Feynman-Vernon approach are discussed as well.

  13. Calculation of the Full Scattering Amplitude without Partial Wave Decomposition. 2; Inclusion of Exchange

    NASA Technical Reports Server (NTRS)

    Shertzer, Janine; Temkin, Aaron

    2004-01-01

    The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition is continued. The method is developed in the context of electron-hydrogen scattering, and here exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude can most accurately be calculated from an integral expression for the amplitude; that integral can be formally simplified, and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.

  14. Calculation of Scattering Amplitude Without Partial Analysis. II; Inclusion of Exchange

    NASA Technical Reports Server (NTRS)

    Temkin, Aaron; Shertzer, J.; Fisher, Richard R. (Technical Monitor)

    2002-01-01

A method has been developed for calculating the whole scattering amplitude, f(Ω_k), directly. The idea is to calculate the complete wave function Psi numerically, and use it in an integral expression for f, which can be reduced to a 2-dimensional quadrature. The original application was to e-H scattering without exchange. There the Schrodinger equation reduces to a 2-d partial differential equation (PDE), which was solved using the finite element method (FEM). Here we extend the method to the exchange approximation. The Schrodinger equation can be reduced to a pair of coupled PDEs, which are again solved by the FEM. The formal expression for f(Ω_k) consists of two integrals, f+/- = f_d +/- f_e; f_d is formally the same integral as the no-exchange f. We have also succeeded in reducing f_e to a 2-d integral. Results will be presented at the meeting.

  15. Valuing the visual impact of wind farms: A calculus method for synthesizing choice experiments studies.

    PubMed

    Wen, Cheng; Dallimer, Martin; Carver, Steve; Ziv, Guy

    2018-05-06

Despite their great potential for mitigating carbon emissions, wind farm developments are often opposed by local communities due to the visual impact on the landscape. A growing number of studies have applied nonmarket valuation methods like Choice Experiments (CE) to value the visual impact by eliciting respondents' willingness to pay (WTP) or willingness to accept (WTA) for hypothetical wind farms through survey questions. Several meta-analyses in the literature synthesize results from different valuation studies, but they have various limitations related to the use of the prevailing multivariate meta-regression analysis. In this paper, we propose a new meta-analysis method to establish general functions for the relationships between the estimated WTP or WTA and three wind farm attributes, namely the distance to residential/coastal areas, the number of turbines and turbine height. This method involves establishing WTA or WTP functions for individual studies, fitting the average derivative functions and deriving the general integral functions of WTP or WTA against wind farm attributes. Results indicate that respondents in different studies consistently showed increasing WTP for moving wind farms to greater distances, which can be fitted by non-linear (natural logarithm) functions. However, divergent preferences for the number of turbines and turbine height were found in different studies. We argue that the new analysis method proposed in this paper is an alternative to the mainstream multivariate meta-regression analysis for synthesizing CE studies, and that the general integral functions of WTP or WTA against wind farm attributes are useful for future spatial modelling and benefit transfer studies. We also suggest that future multivariate meta-analyses should include non-linear components in the regression functions. Copyright © 2018. Published by Elsevier B.V.
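The distance part of the proposed synthesis (a natural-logarithm WTP-distance relation) can be illustrated by fitting a ln-linear model to synthetic study data; the coefficients and the noise below are invented for illustration, not estimates from the paper.

```python
import numpy as np

# Synthetic study data: WTP rises logarithmically with setback distance (km).
distance = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
wtp_true = 12.0 * np.log(distance) + 30.0                     # hypothetical relation
wtp_obs = wtp_true + np.array([0.4, -0.3, 0.2, -0.5, 0.1, 0.3])  # survey noise

# Fit WTP = a*ln(d) + b, i.e. a model linear in ln(d).
a, b = np.polyfit(np.log(distance), wtp_obs, 1)
```

The derivative a/d of the fitted curve is the marginal WTP for an extra unit of distance, which is the quantity the paper averages across studies before integrating back to a general WTP function.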

  16. Control and Diagnosis in Integrated Product Development - Observations during the Development of an AGV

    NASA Astrophysics Data System (ADS)

    Stetter, R.; Simundsson, A.

    2015-11-01

    This paper is concerned with the integration of control and diagnosis functionalities into the development of complete systems which include mechanical, electrical and electronic subsystems. For the development of such systems the strategies, methods and tools of integrated product development have attracted significant attention during the last decades. Today, it is generally observed that product development processes of complex systems can only be successful if the activities in the different domains are well connected and synchronised and if an ongoing communication is present - an ongoing communication spanning the technical domains and also including functions such as production planning, marketing/distribution, quality assurance, service and project planning. Obviously, numerous approaches to tackle this challenge are present in scientific literature and in industrial practice, as well. Today, the functionality and safety of most products is to a large degree dependent on control and diagnosis functionalities. Still, there is comparatively little research concentrating on the integration of the development of these functionalities into the overall product development processes. The main source of insight of the presented research is the product development process of an Automated Guided Vehicle (AGV) which is intended to be used on rough terrain. The paper starts with a background describing Integrated Product Development. The second section deals with the product development of the sample product. The third part summarizes some insights and formulates first hypotheses concerning control and diagnosis in Integrated Product Development.

  17. Generalized emission functions for photon emission from quark-gluon plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, S. V.

The Landau-Pomeranchuk-Migdal effects on photon emission from the quark-gluon plasma have been studied as a function of photon mass, at a fixed temperature of the plasma. The integral equations for the transverse vector function f̃(p̃⊥) and the longitudinal function g̃(p̃⊥), consisting of multiple scattering effects, are solved by the self-consistent iterations method and also by the variational method for the variable set {p0, q0, Q²}. We considered the bremsstrahlung and the off-shell annihilation (aws) processes. We define two new dynamical scaling variables, x_T and x_L, for the bremsstrahlung and aws processes, which are functions of the variables p0, q0, Q². We define four new emission functions for massive photon emission, represented by g_T^b, g_T^a, g_L^b, g_L^a, and we construct these using the exact numerical solutions of the integral equations. These four emission functions have been parametrized by suitable simple empirical fits. Using the empirical emission functions, we calculated the imaginary part of the photon polarization tensor as a function of photon mass and energy.

  18. The integration of quality function deployment and Kansei Engineering: An overview of application

    NASA Astrophysics Data System (ADS)

    Lokman, Anitawati Mohd; Awang, Ahmad Azran; Omar, Abdul Rahman; Abdullah, Nur Atiqah Sia

    2016-02-01

As a result of today's globalized world and robust development of emerging markets, consumers are able to select from an endless number of products that are mostly similar in terms of design and properties, as well as equivalent in function and performance. The survival of businesses in a competitive environment requires innovation, consumer loyalty, and products that are easily identifiable by consumers. Today's manufacturers have started to employ customer research instruments to survive in the highly industrialized world, for example, Conjoint Analysis, Design of Experiments and Semantic Design of Environment. However, this work concentrates on Kansei Engineering and Quality Function Deployment. Kansei Engineering (KE) is deemed the most appropriate method to link consumers' feelings, emotions or senses to the properties of a product because it translates people's impressions, interests, and feelings into the solutions of product design. Likewise, Quality Function Deployment (QFD) enables clearer interpretation of the needs of consumers, better concepts or products, and enhanced communication to internal operations that must then manufacture and deliver the product or services. The integration of both KE and QFD is believed possible, as many product manufacturers and businesses have started to utilize systematized methods to translate consumers' needs and wants into processes and products. Therefore, this work addresses areas of various integrations of KE and QFD processes in the industry, in an effort to assist an integration of KE and QFD. This work aims to provide evidence on the integration mechanism to enable successful incorporation of consumers' implicit feelings and demands into product quality improvement, while simultaneously providing an overview of both KE and QFD from the perspective of a novice.

  19. Simulation of random road microprofile based on specified correlation function

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.

    2018-03-01

    The paper aims to develop a numerical simulation method and an algorithm for a random microprofile of special roads based on the specified correlation function. The paper used methods of correlation, spectrum and numerical analysis. It proves that the transfer function of the generating filter for known expressions of spectrum input and output filter characteristics can be calculated using a theorem on nonnegative and fractional rational factorization and integral transformation. The model of the random function equivalent of the real road surface microprofile enables us to assess springing system parameters and identify ranges of variations.
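For the common exponential correlation function R(τ) = σ²·exp(−α|τ|), the generating (shaping) filter is first order, and its discrete counterpart is an AR(1) recursion driven by white noise. A sketch with assumed parameters (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

sigma, alpha, dx = 0.01, 0.2, 0.1      # rms height (m), decay rate (1/m), step (m)
rho = np.exp(-alpha * dx)              # discrete pole of the shaping filter

# AR(1) recursion: white noise through a first-order filter reproduces
# the target correlation R(tau) = sigma^2 * exp(-alpha*|tau|).
n = 200000
z = np.empty(n)
z[0] = sigma * rng.standard_normal()
w = sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
for k in range(1, n):
    z[k] = rho * z[k-1] + w[k]

# Empirical lag-1 correlation should match the filter pole rho.
r1 = np.mean(z[:-1] * z[1:]) / np.mean(z * z)
```

The paper's factorization step generalizes this: for a given rational spectral density, the transfer function of the generating filter is obtained by spectral factorization, and the simulated profile inherits the specified correlation function.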

  20. VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)

    NASA Astrophysics Data System (ADS)

    Yang, X.-L.; Wang, J.-C.

    2013-11-01

Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code the four Boyer-Lindquist coordinates and the proper time are expressed as functions of a parameter p semi-analytically, i.e., r(p), μ(p), φ(p), t(p), and σ(p), by using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains four modules: constants, rootfind, ellfunction, and blcoordinates. (3 data files).
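Carlson's method evaluates elliptic integrals through symmetric standard forms such as R_F, using the duplication theorem. A simplified pure-Python sketch (production versions, as in Numerical Recipes, add a truncated Taylor series for faster convergence; the plain iteration below is an assumption that trades speed for brevity):

```python
import math

def carlson_rf(x, y, z, tol=1e-14):
    """Carlson's symmetric elliptic integral R_F via the duplication theorem:
    R_F(x,y,z) = R_F((x+l)/4, (y+l)/4, (z+l)/4), l = sqrt(xy)+sqrt(yz)+sqrt(zx).
    Iterating drives the arguments together, and R_F(m,m,m) = 1/sqrt(m)."""
    for _ in range(200):
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx*sy + sy*sz + sz*sx
        x, y, z = (x + lam)/4.0, (y + lam)/4.0, (z + lam)/4.0
        mu = (x + y + z)/3.0
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < tol * mu:
            break
    return 1.0 / math.sqrt(mu)

# Complete elliptic integral of the first kind: K(m) = R_F(0, 1-m, 1).
k0 = carlson_rf(0.0, 1.0, 1.0)     # K(0) = pi/2
```

The argument spread shrinks by roughly a factor of four per duplication, which is why Carlson-based evaluation is fast enough to drive the coordinate functions r(p), μ(p), φ(p), t(p), σ(p) in the code.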

  1. New faces of porous Prussian blue: interfacial assembly of integrated hetero-structures for sensing applications.

    PubMed

    Kong, Biao; Selomulya, Cordelia; Zheng, Gengfeng; Zhao, Dongyuan

    2015-11-21

    Prussian blue (PB), the oldest synthetic coordination compound, is a classic and fascinating transition metal coordination material. Prussian blue is based on a three-dimensional (3-D) cubic polymeric porous network consisting of alternating ferric and ferrous ions, which provides facile assembly as well as precise interaction with active sites at functional interfaces. A fundamental understanding of the assembly mechanism of PB hetero-interfaces is essential to enable the full potential applications of PB crystals, including chemical sensing, catalysis, gas storage, drug delivery and electronic displays. Developing controlled assembly methods towards functionally integrated hetero-interfaces with adjustable sizes and morphology of PB crystals is necessary. A key point in the functional interface and device integration of PB nanocrystals is the fabrication of hetero-interfaces in a well-defined and oriented fashion on given substrates. This review will bring together these key aspects of the hetero-interfaces of PB nanocrystals, ranging from structure and properties, interfacial assembly strategies, to integrated hetero-structures for diverse sensing.

  2. An integrated miRNA functional screening and target validation method for organ morphogenesis.

    PubMed

    Rebustini, Ivan T; Vlahos, Maryann; Packer, Trevor; Kukuruzinska, Maria A; Maas, Richard L

    2016-03-16

    The relative ease of identifying microRNAs and their increasing recognition as important regulators of organogenesis motivate the development of methods to efficiently assess microRNA function during organ morphogenesis. In this context, embryonic organ explants provide a reliable and reproducible system that recapitulates some of the important early morphogenetic processes during organ development. Here we present a method to target microRNA function in explanted mouse embryonic organs. Our method combines the use of peptide-based nanoparticles to transfect specific microRNA inhibitors or activators into embryonic organ explants, with a microRNA pulldown assay that allows direct identification of microRNA targets. This method provides effective assessment of microRNA function during organ morphogenesis, allows prioritization of multiple microRNAs in parallel for subsequent genetic approaches, and can be applied to a variety of embryonic organs.

  3. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method along with bisection sampling is summarized, which provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
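    The perturbation step at the heart of this approach is easiest to see in its classical Zwanzig form. The sketch below is illustrative only and is not the paper's implementation; in PI-FEP the "perturbation" is the isotope mass change along the discretized path, and the kinetic isotope effect follows from the free-energy difference between isotopologs.

```python
import math

def fep_delta_A(delta_u, kT=1.0):
    """Zwanzig free-energy perturbation estimator,
    Delta A = -kT * ln < exp(-(U1 - U0)/kT) >_0,
    from energy differences dU = U1 - U0 sampled in reference state 0."""
    avg = sum(math.exp(-du / kT) for du in delta_u) / len(delta_u)
    return -kT * math.log(avg)

# Sanity check: if every sampled difference is the same constant c,
# the estimator returns c (up to rounding).
dA = fep_delta_A([0.5] * 100)
```

    In an actual PI-FEP calculation the exponential average is taken over free-particle path-integral samples about the centroid trajectory, so both isotopologs are handled within this single simulation.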

  4. Solution of Volterra and Fredholm Classes of Equations via Triangular Orthogonal Function (A Combination of Right Hand Triangular Function and Left Hand Triangular Function) and Hybrid Orthogonal Function (A Combination of Sample Hold Function and Right Hand Triangular Function)

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep

    2018-04-01

    In this paper the authors deal with seven kinds of non-linear Volterra and Fredholm classes of equations. They formulate an algorithm for solving the aforementioned equation types via a Hybrid Function (HF) and Triangular Function (TF) piecewise-linear orthogonal approach. In this approach, the integral equation or integro-differential equation is reduced to an equivalent system of simultaneous non-linear equations, which is then solved by either Newton's method or Broyden's method. The authors calculate the L2-norm error and the max-norm error for both the HF and TF methods for each kind of equation. Through the illustrated examples, they show that the HF-based algorithm produces stable results, whereas the TF-based computational method yields stable, anomalous, or unstable results depending on the problem.
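    The reduce-then-Newton strategy can be sketched for a single nonlinear Fredholm equation. The kernel, forcing term, and trapezoid collocation below are invented for illustration and are not the authors' HF/TF basis:

```python
import numpy as np

def solve_fredholm(f, K, lam, n=41, tol=1e-12):
    """Collocate u(x) = f(x) + lam * int_0^1 K(x,t) u(t)^2 dt on n trapezoid
    nodes, reducing it to a nonlinear algebraic system F(u) = 0, then solve
    by Newton's method with the analytic Jacobian."""
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                              # trapezoid-rule weights
    Kmat = K(x[:, None], x[None, :])          # K(x_i, t_j)
    u = f(x).copy()                           # initial guess u = f
    for _ in range(50):
        F = u - f(x) - lam * Kmat @ (w * u**2)
        J = np.eye(n) - 2.0 * lam * Kmat * (w * u)[None, :]
        du = np.linalg.solve(J, -F)
        u += du
        if np.max(np.abs(du)) < tol:
            break
    return x, u

# Toy problem (chosen only for illustration): K(x,t) = x*t, f(x) = x, lam = 0.2
x, u = solve_fredholm(lambda x: x, lambda x, t: x * t, 0.2)
```

    Broyden's method would replace the analytic Jacobian with a rank-one-updated approximation, trading quadratic for superlinear convergence.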

  5. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    PubMed

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  6. Hierarchical Ensemble Methods for Protein Function Prediction

    PubMed Central

    2014-01-01

    Protein function prediction is a complex multiclass multilabel classification problem, characterized by multiple issues such as the incompleteness of the available annotations, the integration of multiple sources of high dimensional biomolecular data, the imbalance of several functional classes, and the difficulty of univocally determining negative examples. Moreover, the hierarchical relationships between functional classes that characterize both the Gene Ontology and FunCat taxonomies motivate the development of hierarchy-aware prediction methods, which have shown significantly better performance than hierarchy-unaware “flat” prediction methods. In this paper, we provide a comprehensive review of hierarchical methods for protein function prediction based on ensembles of learning machines. According to this general approach, a separate learning machine is trained to learn a specific functional term and then the resulting predictions are assembled in a “consensus” ensemble decision, taking into account the hierarchical relationships between classes. The main hierarchical ensemble methods proposed in the literature are discussed in the context of existing computational methods for protein function prediction, highlighting their characteristics, advantages, and limitations. Open problems of this exciting research area of computational biology are finally considered, outlining novel perspectives for future research. PMID:25937954

  7. Integrability: mathematical methods for studying solitary waves theory

    NASA Astrophysics Data System (ADS)

    Wazwaz, Abdul-Majid

    2014-03-01

    In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and from physicists, who seek solutions that represent physical phenomena and a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface, with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability: the Lax pair formulates the integrability of a nonlinear equation as the compatibility condition of two linear equations. However, many researchers have shown that a necessary condition for integrability is the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation, but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm integrability for any equation. 
In the context of completely integrable equations, studies are flourishing because these equations are able to describe the real features in a variety of vital areas in science, technology and engineering. In recognition of the importance of solitary waves theory and the underlying concept of integrable equations, a variety of powerful methods have been developed to carry out the required analysis. Examples of such methods are the inverse scattering method, the Hirota bilinear method, the simplified Hirota method, the Bäcklund transformation method, the Darboux transformation, the Pfaffian technique, the Painlevé analysis, the generalized symmetry method, the subsidiary ordinary differential equation method, the coupled amplitude-phase formulation, the sine-cosine method, the sech-tanh method, the mapping and deformation approach and many other new methods. The inverse scattering method, viewed as a nonlinear analogue of the Fourier transform method, is a powerful approach that demonstrates the existence of soliton solutions through intensive computations. At the center of the theory of integrable equations lie the bilinear forms and Hirota's direct method, which can be used to obtain soliton solutions in terms of exponentials. The Bäcklund transformation method is a useful invariant transformation that transforms one solution of a differential equation into another. The Darboux transformation method is a well known tool in the theory of integrable systems. It is believed that there is a connection between the Bäcklund transformation and the Darboux transformation, but this connection is not yet fully understood. Archetypes of integrable equations are the Korteweg-de Vries (KdV) equation, the modified KdV equation, the sine-Gordon equation, the Schrödinger equation, the Vakhnenko equation, the KdV6 equation, the Burgers equation, the fifth-order Lax equation and many others. 
These equations yield soliton solutions, multiple soliton solutions, breather solutions, quasi-periodic solutions, kink solutions, homoclinic solutions and other solutions as well. The couplings of linear and nonlinear equations were recently discovered and have subsequently received considerable attention. The concept of couplings forms a new direction for developing innovative construction methods. The recently obtained results in solitary waves theory highlight new approaches for additional creative ideas, promising further achievements and increased progress in this field. We are grateful to all of the authors who accepted our invitation to contribute to this comment section.
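    As a concrete illustration of the Lax-pair criterion discussed above (a standard textbook example, not drawn from this comment itself), the KdV equation admits the pair

```latex
% KdV in the form  u_t = 6 u u_x - u_{xxx}
L = -\partial_x^2 + u, \qquad
M = -4\,\partial_x^3 + 6u\,\partial_x + 3u_x,
\qquad
\frac{\mathrm{d}L}{\mathrm{d}t} = [M, L] = ML - LM
\;\Longleftrightarrow\;
u_t = 6uu_x - u_{xxx},
```

    so the nonlinear flow is recast as the compatibility of the two linear problems $L\psi = \lambda\psi$ and $\psi_t = M\psi$, and the spectrum of $L$ is preserved in time (isospectrality).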

  8. Dual number algebra method for Green's function derivatives in 3D magneto-electro-elasticity

    NASA Astrophysics Data System (ADS)

    Dziatkiewicz, Grzegorz

    2018-01-01

    Green's functions are the basic elements of the boundary element method: to obtain the boundary integral formulation, the Green's function and its derivative must be known for the considered differential operator. An interesting group of materials today is electronic composites, a special case of which is the magnetoelectroelastic continuum, a model of piezoelectric-piezomagnetic composites. The anisotropy of their physical properties makes the problem of Green's function determination very difficult. For that reason, Green's functions for the magnetoelectroelastic continuum are not known in closed form, and numerical methods must be applied to determine them. This means that the problem of the accurate and simple determination of Green's function derivatives is even harder. Therefore, in the present work the dual number algebra method is applied to numerically calculate the derivatives of 3D Green's functions for magnetoelectroelastic materials. The introduced method is independent of the step size and can be treated as a special case of automatic differentiation. The dual number algebra method can therefore also serve as a tool for checking the accuracy of well-known finite difference schemes.
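    The idea behind the dual number algebra method can be sketched in a few lines: propagating a number a + b·ε with ε² = 0 through a function yields the exact derivative in the ε coefficient, with no finite-difference step size. A minimal sketch (illustrative only, not the author's magnetoelectroelastic code):

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0. Seeding eps = 1 at the input
    makes the eps coefficient of the output the exact derivative."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule is built into the eps part: (a+b eps)(c+d eps) = ac + (ad+bc) eps
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    __rmul__ = __mul__

def dsin(d):
    # chain rule for sin on dual numbers
    return Dual(math.sin(d.re), math.cos(d.re) * d.eps)

# d/dx [ x * sin(x) ] at x = 2.0, exact to machine precision:
x = Dual(2.0, 1.0)
y = x * dsin(x)          # y.eps holds sin(2) + 2*cos(2)
```

    Unlike a finite-difference quotient, there is no truncation/round-off trade-off in choosing a step, which is exactly the property the paper exploits when differentiating numerically evaluated Green's functions.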

  9. Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.

    PubMed

    Gao, Duyang; Yuan, Zhen

    2017-01-01

    Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities and thereby combine the complementary merits of each single modality. Meanwhile, interest in laser-induced photoacoustic imaging is rapidly growing due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging, focusing in particular on the methods used to construct them. We divide the synthetic methods into two types: first, the "one for all" concept, which exploits the intrinsic properties of the elements in a single particle; second, the "all in one" concept, which means integrating different functional blocks in one particle. Then, we briefly introduce the applications of the multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods for constructing multimodal nanoprobes and share our viewpoints on this area.

  10. A practical method to avoid zero-point leak in molecular dynamics calculations: application to the water dimer.

    PubMed

    Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M

    2010-04-28

    We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.
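    The per-mode energy bookkeeping described above rests on a standard normal-mode analysis. A minimal sketch (not the authors' code; the coupled-oscillator Hessian below is invented for illustration):

```python
import numpy as np

def normal_mode_frequencies(hessian, masses):
    """Harmonic frequencies from a Cartesian Hessian (one mass per degree
    of freedom): mass-weight, diagonalize, omega_k = sqrt(lambda_k).
    Negative eigenvalues (imaginary modes) are returned as negative values."""
    m = np.asarray(masses, dtype=float)
    Hw = hessian / np.sqrt(np.outer(m, m))      # mass-weighted Hessian
    evals = np.linalg.eigvalsh(Hw)
    return np.sign(evals) * np.sqrt(np.abs(evals))

def mode_energies(mode_velocities, mode_displacements, freqs):
    """Harmonic energy per mode, E_k = (p_k^2 + omega_k^2 q_k^2) / 2, in
    mass-weighted normal coordinates; comparing E_k against the zero-point
    value omega_k/2 (hbar = 1) is the kind of check used to detect ZPE leak."""
    v, q, w = map(np.asarray, (mode_velocities, mode_displacements, freqs))
    return 0.5 * (v**2 + (w * q)**2)

# Two equal unit masses coupled by three unit springs (wall-mass-mass-wall):
H = np.array([[2.0, -1.0], [-1.0, 2.0]])
freqs = np.sort(normal_mode_frequencies(H, [1.0, 1.0]))   # 1 and sqrt(3)
```

    In the constrained trajectories of the paper, this analysis is repeated along the trajectory and the constraint acts whenever some E_k would drop below its zero-point value.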

  11. Accelerating functional verification of an integrated circuit

    DOEpatents

    Deindl, Michael; Ruedinger, Jeffrey Joseph; Zoellin, Christian G.

    2015-10-27

    Illustrative embodiments include a method, system, and computer program product for accelerating functional verification in simulation testing of an integrated circuit (IC). Using a processor and a memory, a serial operation is replaced with a direct register access operation, wherein the serial operation is configured to perform bit shifting operation using a register in a simulation of the IC. The serial operation is blocked from manipulating the register in the simulation of the IC. Using the register in the simulation of the IC, the direct register access operation is performed in place of the serial operation.
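    A toy software model of the substitution (hypothetical; the patent concerns register access in IC simulation testing, not Python): replacing a bit-serial shift load, which costs one simulated cycle per bit, with a single direct write yields the same register contents in one step.

```python
def serial_load(bits):
    """Bit-serial load: shift in one bit per simulated cycle (slow in
    simulation because each bit is a separate step). MSB first."""
    reg = 0
    for b in bits:                     # one simulation step per bit
        reg = ((reg << 1) | b) & 0xFFFFFFFF
    return reg

def direct_load(value):
    """Direct register access: write the whole value in a single step,
    as in the accelerated verification flow."""
    return value & 0xFFFFFFFF

# Both paths leave the same value in the (32-bit) register:
bits = [1, 0, 1, 1]                    # MSB first
assert serial_load(bits) == direct_load(0b1011)
```

    The patent's scheme additionally blocks the serial operation from touching the register during simulation, so only the direct write manipulates it.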

  12. Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    NASA Technical Reports Server (NTRS)

    Fink, P. W.; Khayat, M. A.; Wilton, D. R.

    2005-01-01

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced by purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. 
Recently, the authors have introduced the transformation u(x′) = sinh⁻¹( x′ / √(y′² + z²) ) for integrating functions of the form I = ∫ Λ(r′) e^(−jkR)/(4πR) dD, where Λ(r′) is a vector or scalar basis function and R = √(x′² + y′² + z²) is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with 1/R² type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
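    A toy numerical demonstration of the singularity-cancellation idea (illustrative only, not the authors' code): for the nearly singular 1/R kernel on a flat element, with R = √(x² + h²) and small observation height h, the substitution x = h·sinh(u) has Jacobian h·cosh(u) = R, which cancels the 1/R exactly and leaves a smooth integrand.

```python
import numpy as np

def quad(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(w * f(xm + xr * x))

h = 1e-3                                    # observation point very close to the element
f = lambda x: 1.0 / np.sqrt(x**2 + h**2)    # nearly singular 1/R integrand
exact = 2.0 * np.arcsinh(1.0 / h)           # analytic value of the integral on [-1, 1]

# Naive quadrature misses the sharp peak at x = 0:
naive = quad(f, -1.0, 1.0, 16)

# sinh^-1 cancellation: x = h*sinh(u), dx = h*cosh(u) du, R = h*cosh(u),
# so the transformed integrand is identically 1 and trivially integrable.
U = np.arcsinh(1.0 / h)
transformed = quad(lambda u: f(h * np.sinh(u)) * h * np.cosh(u), -U, U, 16)
```

    The same mechanism underlies the Duffy and sinh-type transformations for surface elements: choose variables whose Jacobian absorbs the kernel singularity, then apply ordinary Gaussian quadrature.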

  13. Unmanned aircraft system sense and avoid integrity and continuity

    NASA Astrophysics Data System (ADS)

    Jamoom, Michael B.

    This thesis describes new methods to guarantee the safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is the derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation in which the estimation model is refined to address realistic intruder relative linear accelerations, going beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement with the wrong intruder, causing large errors in the estimated intruder trajectories. 
The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
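    The flavor of a continuity (false alert) budget can be illustrated with a deliberately simplified model, not the thesis's actual derivation: assume the detection test statistic is zero-mean Gaussian under fault-free conditions, so the false-alert probability is the two-sided tail beyond the alert threshold.

```python
import math

def false_alert_probability(threshold, sigma):
    """Two-sided false-alert (continuity) risk for a zero-mean Gaussian
    test statistic with standard deviation sigma under fault-free
    conditions: P(|s| > threshold) = erfc(threshold / (sigma*sqrt(2)))."""
    return math.erfc(threshold / (sigma * math.sqrt(2.0)))

# A 1.96-sigma threshold gives the familiar ~5% false-alert rate;
# tightening to ~5.33 sigma drives the rate down to the order of 1e-7.
p_loose = false_alert_probability(1.96, 1.0)
p_tight = false_alert_probability(5.33, 1.0)
```

    Inverting this relation is how a continuity requirement translates into a quantifiable threshold (and hence a sensor accuracy requirement), which is the spirit of the certification argument above.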

  14. Profiles and Cognitive Predictors of Motor Functions among Early School-Age Children with Mild Intellectual Disabilities

    ERIC Educational Resources Information Center

    Wuang, Y.-P.; Wang, C.-C.; Huang, M.-H.; Su, C.-Y.

    2008-01-01

    Background: The purpose of the study was to describe sensorimotor profile in children with mild intellectual disability (ID), and to examine the association between cognitive and motor function. Methods: A total of 233 children with mild ID aged 7 to 8 years were evaluated with measures of cognitive, motor and sensory integrative functioning.…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko

    Long-range corrected density functional theory (LC-DFT) has attracted much attention among chemists as a quantum chemical method applicable to large molecular systems and their property calculations. However, the high computational cost of evaluating the long-range HF exchange remains a major obstacle to applying it to large molecular systems and solid-state materials. To address this problem, we propose a linear-scaling method for the HF exchange integration, in particular for the LC-DFT hybrid functional.

  16. Finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems

    NASA Astrophysics Data System (ADS)

    Xie, Xue-Jun; Zhang, Xing-Hui; Zhang, Kemei

    2016-07-01

    This paper studies the finite-time state feedback stabilisation of stochastic high-order nonlinear feedforward systems. Based on the stochastic Lyapunov theorem on finite-time stability, by using the homogeneous domination method together with the adding-one-power-integrator and sign function methods, constructing a ? Lyapunov function, and verifying the existence and uniqueness of the solution, a continuous state feedback controller is designed to guarantee that the closed-loop system is finite-time stable in probability.

  17. Formal Verification of Complex Systems based on SysML Functional Requirements

    DTIC Science & Technology

    2014-12-23

    Formal Verification of Complex Systems based on SysML Functional Requirements. Hoda Mehrpouyan, Irem Y. Tumer, Chris Hoyle, Dimitra Giannakopoulou...requirements for design of complex engineered systems. The proposed approach combines a SysML modeling approach to document and structure safety requirements...methods and tools to support the integration of safety into the design solution. 2.1. SysML for Complex Engineered Systems. Traditional methods and tools

  18. Cauchy-Jost function and hierarchy of integrable equations

    NASA Astrophysics Data System (ADS)

    Boiti, M.; Pempinelli, F.; Pogrebkov, A. K.

    2015-11-01

    We describe the properties of the Cauchy-Jost (also known as Cauchy-Baker-Akhiezer) function of the Kadomtsev-Petviashvili-II equation. Using the ∂̄ (d-bar) method, we show that for this function, all equations of the Kadomtsev-Petviashvili-II hierarchy are given in a compact and explicit form, including equations for the Cauchy-Jost function itself, time evolutions of the Jost solutions, and evolutions of the potential of the heat equation.

  19. A MATLAB-Aided Method for Teaching Calculus-Based Business Mathematics

    ERIC Educational Resources Information Center

    Liang, Jiajuan; Pan, William S. Y.

    2009-01-01

    MATLAB is a powerful package for numerical computation. MATLAB contains a rich pool of mathematical functions and provides flexible plotting functions for illustrating mathematical solutions. The course of calculus-based business mathematics consists of two major topics: 1) derivative and its applications in business; and 2) integration and its…

  20. Analysis of methods. [information systems evolution environment

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.

    1991-01-01

    Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. 
Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity Relationship, Data Flow Diagrams, and Structure Charts, for inclusion in an integrated development support environment.

  1. A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for high Reynolds number laminar flows

    NASA Technical Reports Server (NTRS)

    Kim, Sang-Wook

    1988-01-01

    A velocity-pressure integrated, mixed interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions and the pressure is interpolated using linear shape functions. For the two-dimensional case, the pressure is defined on a triangular element which is contained inside the complete biquadratic element for the velocity variables; for the three-dimensional case, the pressure is defined on a tetrahedral element which is again contained inside the complete tri-quadratic element. Thus the pressure is discontinuous across the element boundaries. Example problems considered include: a cavity flow for Reynolds numbers of 400 through 10,000; a laminar backward facing step flow; and a laminar flow in a square duct of strong curvature. The computational results compared favorably with those of finite difference methods as well as with available experimental data. A finite element computer program for incompressible, laminar flows is presented.

  2. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating only as many backward adjoint equations as there are available measurement stations. This results in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
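    The selection step can be sketched in its simplest form: among candidate sources, pick the one whose simulated receptor concentrations correlate best with the observations. This is an illustrative stand-in for the paper's correlation-function maximization (the adjoint machinery and the actual source parameterization are omitted):

```python
import numpy as np

def pearson(a, b):
    """Sample Pearson correlation coefficient between two concentration series."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def best_source(observed, simulations):
    """Rank candidate sources by correlation between observed and simulated
    receptor concentrations; return the best candidate index and all scores."""
    scores = [pearson(observed, sim) for sim in simulations]
    return int(np.argmax(scores)), scores

# Synthetic check: the observations are an affine transform of candidate 1's
# simulated series, so its correlation is exactly 1 and it is selected.
rng = np.random.default_rng(0)
sims = rng.random((3, 50))          # 3 candidate sources x 50 receptor samples
obs = 2.0 * sims[1] + 0.5
idx, scores = best_source(obs, sims)
```

    Using correlation rather than a least-squares misfit makes the selection insensitive to an unknown emission rate, since scaling the source scales the simulated concentrations but leaves the correlation unchanged.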

  3. Robust Functionalization of Large Microelectrode Arrays by Using Pulsed Potentiostatic Deposition

    PubMed Central

    Rothe, Joerg; Frey, Olivier; Madangopal, Rajtarun; Rickus, Jenna; Hierlemann, Andreas

    2016-01-01

    Surface modification of microelectrodes is a central step in the development of microsensors and microsensor arrays. Here, we present an electrodeposition scheme based on voltage pulses. Key features of this method are uniformity in the deposited electrode coatings, flexibility in the overall deposition area, i.e., the sizes and number of the electrodes to be coated, and precise control of the surface texture. Deposition and characterization of four different materials are demonstrated, including layers of high-surface-area platinum, gold, conducting polymer poly(ethylenedioxythiophene), also known as PEDOT, and the non-conducting polymer poly(phenylenediamine), also known as PPD. The depositions were conducted using a fully integrated complementary metal-oxide-semiconductor (CMOS) chip with an array of 1024 microelectrodes. The pulsed potentiostatic deposition scheme is particularly suitable for functionalization of individual electrodes or electrode subsets of large integrated microelectrode arrays: the required deposition waveforms are readily available in an integrated system, the same deposition parameters can be used to functionalize the surface of either single electrodes or large arrays of thousands of electrodes, and the deposition method proved to be robust and reproducible for all materials tested. PMID:28025569

  4. A FRAMEWORK FOR ATTRIBUTE-BASED COMMUNITY DETECTION WITH APPLICATIONS TO INTEGRATED FUNCTIONAL GENOMICS.

    PubMed

    Yu, Han; Hageman Blair, Rachael

    2016-01-01

    Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding and potentially intervening with the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, an overwhelming majority of traditional approaches to community detection cannot readily incorporate information on node attributes, and integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic function) that naturally dampens attribute influence that is contradictory to the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application that integrates a breast cancer gene expression data set with a functional network defined via KEGG pathways to reveal significant modules for survival.
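    The expand/inflate alternation is easiest to see in plain Markov Clustering; the sketch below omits irMCL's sigmoid attribute dampening and uses an invented toy graph (two triangles joined by a single bridge edge):

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50):
    """Basic Markov Clustering: alternate 'expand' (squaring the
    column-stochastic transition matrix, i.e., two-step random-walk flow)
    and 'inflate' (elementwise power, then column renormalization), which
    strengthens intra-cluster flow and starves inter-cluster flow."""
    M = adj / adj.sum(axis=0, keepdims=True)      # column-stochastic flow
    for _ in range(iters):
        M = M @ M                                 # expand
        M = M ** inflation                        # inflate
        M = M / M.sum(axis=0, keepdims=True)
    return M

# Two triangles {0,1,2} and {3,4,5} linked by edge (2,3); self-loops included,
# as is customary for MCL.
A = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1]], dtype=float)
M = mcl(A)
# After convergence, the support of each column indicates its cluster's
# attractor nodes; the bridge carries no remaining flow.
clusters = [set(np.nonzero(M[:, j] > 1e-6)[0]) for j in range(6)]
```

    irMCL's modification would rescale the columns of M at each iteration by a logistic function of node-attribute agreement before renormalizing, damping flow that contradicts the attributes.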

  5. Nodal Green’s Function Method Singular Source Term and Burnable Poison Treatment in Hexagonal Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A.A. Bingham; R.M. Ferrer; A.M. Ougouag

    2009-09-01

    An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameters computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green’s function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage due to the application of the transverse integration procedure to hexagonal geometry, and cannot account for the effects of burnable poisons across nodal boundaries. The test code being developed for this document accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green’s function solution to the resulting equation to derive a semi-analytical solution.

  6. Metamethod study of qualitative psychotherapy research on clients' experiences: Review and recommendations.

    PubMed

    Levitt, Heidi M; Pomerville, Andrew; Surace, Francisco I; Grabowski, Lauren M

    2017-11-01

    A metamethod study is a qualitative meta-analysis focused upon the methods and procedures used in a given research domain. These studies are rare in psychological research. They permit both the documentation of the informal standards within a field of research and recommendations for future work in that area. This paper presents a metamethod analysis of a substantial body of qualitative research that focused on clients' experiences in psychotherapy (109 studies). This review examined the ways that methodological integrity has been established across qualitative research methods. It identified the numbers of participants recruited and the form of data collection used (e.g., semistructured interviews, diaries). As well, it examined the types of checks employed to increase methodological integrity, such as participant counts, saturation, reflexivity techniques, participant feedback, or consensus and auditing processes. Central findings indicated that the researchers quite flexibly integrated procedures associated with one method into studies using other methods in order to strengthen their rigor. It appeared normative to adjust procedures to advance methodological integrity. These findings encourage manuscript reviewers to assess the function of procedures within a study rather than to require researchers to adhere to the set of procedures associated with a method. In addition, when epistemological approaches were mentioned they were overwhelmingly constructivist in nature, despite the increasing use of procedures traditionally associated with objectivist perspectives. It is recommended that future researchers do more to explicitly describe the functions of their procedures so that they are coherently situated within the epistemological approaches in use. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10⁶ Monte Carlo steps.
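    The core estimator — draw samples from a weight function w with the Metropolis algorithm, average f/w, and multiply by the known normalization of w — can be sketched in two dimensions rather than the twenty of MP3. The Gaussian weight, step size, and test integrand below are illustrative assumptions, not the paper's choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_integral(f, w, log_w, z_w, dim, n_steps=100_000, step=1.0):
        """Estimate the integral of f(x) over R^dim by importance sampling:
        draw x ~ w/z_w via Metropolis, average f(x)/w(x), multiply by z_w."""
        x = np.zeros(dim)
        lw = log_w(x)
        acc = 0.0
        for _ in range(n_steps):
            prop = x + rng.uniform(-step, step, size=dim)
            lw_prop = log_w(prop)
            if np.log(rng.uniform()) < lw_prop - lw:   # Metropolis accept/reject
                x, lw = prop, lw_prop
            acc += f(x) / w(x)
        return z_w * acc / n_steps

    # Sanity check: integral of exp(-|x|^2) over the plane equals pi,
    # using the weight w = exp(-|x|^2 / 2) with normalization z_w = 2*pi.
    est = metropolis_integral(lambda x: np.exp(-x @ x),
                              lambda x: np.exp(-x @ x / 2.0),
                              lambda x: -x @ x / 2.0,
                              2.0 * np.pi, dim=2)
    ```

    With enough steps the estimate converges to π; the same scaffolding generalizes to higher-dimensional integrands.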

  8. Integrating Kano’s Model into Quality Function Deployment for Product Design: A Comprehensive Review

    NASA Astrophysics Data System (ADS)

    Ginting, Rosnani; Hidayati, Juliza; Siregar, Ikhsan

    2018-03-01

    Many methods and techniques are adopted by companies to improve competitiveness through the fulfillment of customer satisfaction by enhancing and improving product design quality. Over the past few years, several researchers have studied extensively the combination of Quality Function Deployment and Kano’s model as design techniques, focusing on translating consumer desires into a product design. This paper presents a review and analysis of several literatures associated with the integration of Kano’s model into the QFD process. Various international journal articles were selected, collected and analyzed from a number of relevant scientific publications. An in-depth analysis was performed, focused on the results, advantages and drawbacks of each methodology. In addition, this paper also provides an analysis of the development of the methodology. It is hoped that this paper can be a reference for other researchers and manufacturing companies implementing the integrated QFD-Kano method for product design.

  9. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
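    The double linear interpolation that makes tabulated response functions continuous in source energy and emission angle can be sketched as follows. The grid and table names are assumptions, and a real implementation would interpolate the three fitted formula parameters rather than raw response values.

    ```python
    import numpy as np

    def interp_response(E, phi, E_grid, phi_grid, R_table):
        """Bilinear (double linear) interpolation of a response tabulated on a
        rectangular grid of source energies E_grid and emission angles phi_grid."""
        # Locate the enclosing cell (clipped so edge queries stay in range).
        i = np.clip(np.searchsorted(E_grid, E) - 1, 0, len(E_grid) - 2)
        j = np.clip(np.searchsorted(phi_grid, phi) - 1, 0, len(phi_grid) - 2)
        # Fractional position inside the cell along each axis.
        tE = (E - E_grid[i]) / (E_grid[i + 1] - E_grid[i])
        tp = (phi - phi_grid[j]) / (phi_grid[j + 1] - phi_grid[j])
        # Weighted average of the four surrounding table entries.
        return ((1 - tE) * (1 - tp) * R_table[i, j]
                + tE * (1 - tp) * R_table[i + 1, j]
                + (1 - tE) * tp * R_table[i, j + 1]
                + tE * tp * R_table[i + 1, j + 1])
    ```

    The scheme reproduces any response that is linear in both variables exactly, and degrades gracefully to the nearest cell at the table edges.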

  10. Bio-hybrid cell-based actuators for microsystems.

    PubMed

    Carlsen, Rika Wright; Sitti, Metin

    2014-10-15

    As we move towards the miniaturization of devices to perform tasks at the nano and microscale, it has become increasingly important to develop new methods for actuation, sensing, and control. Over the past decade, bio-hybrid methods have been investigated as a promising new approach to overcome the challenges of scaling down robotic and other functional devices. These methods integrate biological cells with artificial components and therefore, can take advantage of the intrinsic actuation and sensing functionalities of biological cells. Here, the recent advancements in bio-hybrid actuation are reviewed, and the challenges associated with the design, fabrication, and control of bio-hybrid microsystems are discussed. As a case study, focus is put on the development of bacteria-driven microswimmers, which has been investigated as a targeted drug delivery carrier. Finally, a future outlook for the development of these systems is provided. The continued integration of biological and artificial components is envisioned to enable the performance of tasks at a smaller and smaller scale in the future, leading to the parallel and distributed operation of functional systems at the microscale. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Integrated monitoring and assessment of soil restoration treatments in the Lake Tahoe Basin.

    PubMed

    Grismer, M E; Schnurrenberger, C; Arst, R; Hogan, M P

    2009-03-01

    Revegetation and soil restoration efforts, often associated with erosion control measures on disturbed soils, are rarely monitored or otherwise evaluated in terms of improved hydrologic, much less, ecologic function and longer term sustainability. As in many watersheds, sediment is a key parameter of concern in the Tahoe Basin, particularly fine sediments less than about ten microns. Numerous erosion control measures deployed in the Basin during the past several decades have under-performed, or simply failed after a few years and new soil restoration methods of erosion control are under investigation. We outline a comprehensive, integrated field-based evaluation and assessment of the hydrologic function associated with these soil restoration methods with the hypothesis that restoration of sustainable function will result in longer term erosion control benefits than that currently achieved with more commonly used surface treatment methods (e.g. straw/mulch covers and hydroseeding). The monitoring includes cover-point and ocular assessments of plant cover, species type and diversity; soil sampling for nutrient status; rainfall simulation measurement of infiltration and runoff rates; cone penetrometer measurements of soil compaction and thickness of mulch layer depths. Through multi-year hydrologic and vegetation monitoring at ten sites and 120 plots, we illustrate the results obtained from the integrated monitoring program and describe how it might guide future restoration efforts and monitoring assessments.

  12. Bionic Nanosystems

    NASA Astrophysics Data System (ADS)

    Sebastian Mannoor, Manu

    Direct multidimensional integration of functional electronics and mechanical elements with viable biological systems could allow for the creation of bionic systems and devices possessing unique and advanced capabilities. For example, the ability to three dimensionally integrate functional electronic and mechanical components with biological cells and tissue could enable the creation of bionic systems that can have tremendous impact in regenerative medicine, prosthetics, and human-machine interfaces. However, as a consequence of the inherent dichotomy in material properties and limitations of conventional fabrication methods, the attainment of truly seamless integration of electronic and/or mechanical components with biological systems has been challenging. Nanomaterials engineering offers a general route for overcoming these dichotomies, primarily due to the existence of a dimensional compatibility between fundamental biological functional units and abiotic nanomaterial building blocks. One area of compelling interest for bionic systems is in the field of biomedical sensing, where the direct interfacing of nanosensors onto biological tissue or the human body could stimulate exciting opportunities such as on-body health quality monitoring and adaptive threat detection. Further, interfacing of antimicrobial peptide based bioselective probes onto the bionic nanosensors could offer abilities to detect pathogenic bacteria with bio-inspired selectivity. Most compellingly, when paired with additive manufacturing techniques such as 3D printing, these characteristics enable three dimensional integration and merging of a variety of functional materials including electronic, structural and biomaterials with viable biological cells, in the precise anatomic geometries of human organs, to form three dimensionally integrated, multi-functional bionic hybrids and cyborg devices with unique capabilities. 
In this thesis, we illustrate these approaches using three representative bionic systems: 1) Bionic Nanosensors: featuring bio-integrated graphene nanosensors for ubiquitous sensing, 2) Bionic Organs: featuring 3D printed bionic ears with three dimensionally integrated electronics and 3) Bionic Leaves: describing ongoing work in the direction of the creation of a bionic leaf enabled by the integration of plant derived photosynthetic functional units with electronic materials and components into a leaf-shaped hierarchical structure for harvesting photosynthetic bioelectricity.

  13. Numerical solution of the quantum Lenard-Balescu equation for a non-degenerate one-component plasma

    DOE PAGES

    Scullard, Christian R.; Belt, Andrew P.; Fennell, Susan C.; ...

    2016-09-01

    We present a numerical solution of the quantum Lenard-Balescu equation using a spectral method, namely an expansion in Laguerre polynomials. This method exactly conserves both particles and kinetic energy and facilitates the integration over the dielectric function. To demonstrate the method, we solve the equilibration problem for a spatially homogeneous one-component plasma with various initial conditions. Unlike the more usual Landau/Fokker-Planck system, this method requires no input Coulomb logarithm; the logarithmic terms in the collision integral arise naturally from the equation along with the non-logarithmic order-unity terms. The spectral method can also be used to solve the Landau equation and a quantum version of the Landau equation in which the integration over the wavenumber requires only a lower cutoff. We solve these problems as well and compare them with the full Lenard-Balescu solution in the weak-coupling limit. Finally, we discuss the possible generalization of this method to include spatial inhomogeneity and velocity anisotropy.
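    The basic building block of such a spectral method — projecting a function onto the Laguerre basis, whose orthogonality under the weight e^(−x) is what the conservation properties rest on — can be sketched with Gauss-Laguerre quadrature. The function names and test function are assumptions for illustration, not the authors' collision-operator code.

    ```python
    import numpy as np
    from numpy.polynomial import laguerre

    def laguerre_coeffs(g, n_modes, n_quad=40):
        """Coefficients of g in the Laguerre basis with respect to the weight
        e^(-x) on [0, inf): c_n = integral of e^(-x) g(x) L_n(x) dx."""
        x, w = laguerre.laggauss(n_quad)          # Gauss-Laguerre nodes/weights
        coeffs = []
        for n in range(n_modes):
            Ln = laguerre.Laguerre.basis(n)(x)    # L_n evaluated at the nodes
            coeffs.append(np.sum(w * g(x) * Ln))  # quadrature for the projection
        return np.array(coeffs)
    ```

    Since x = L₀(x) − L₁(x), projecting g(x) = x recovers the coefficients (1, −1, 0, 0, …) up to quadrature error, a convenient correctness check.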

  14. Dielectric function of a model insulator

    NASA Astrophysics Data System (ADS)

    Rezvani, G. A.; Friauf, Robert J.

    1993-04-01

    We have calculated a dielectric response function ɛ(q,ω) using the random-phase approximation for a model insulator originally proposed by Fry [Phys. Rev. 179, 892 (1969)]. We treat narrow and wide band-gap insulators for the purpose of using results in the simulation of secondary-electron emission from insulators. Therefore, it is important to take into account the contribution of the first and second conduction bands. For the real part of the dielectric function we perform a numerical principal value integration over the first and second Brillouin zones. For the imaginary part we perform a numerical integration involving the δ function that results from the conservation of energy. In order to check the validity of our numerical integration methods we perform a Kramers-Kronig transform of the real part and compare it with the directly calculated imaginary part and vice versa. We discuss fitting the model to the static dielectric constant and the f-sum rule. Then we display the wave number and frequency dependence for solid argon, KCl, and model Si.
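    Such a Kramers-Kronig consistency check can be reproduced numerically with Maclaurin's alternating-grid rule for the principal-value integral — a standard recipe, not necessarily the one used in the paper. The Lorentzian test function in the check below is an assumption for the demo.

    ```python
    import numpy as np

    def kk_real_part(omega, im_chi, i):
        """Re chi(omega_i) = (2/pi) PV integral of w' Im chi(w') / (w'^2 - w_i^2) dw',
        evaluated with Maclaurin's rule: sum only grid points of opposite parity to i,
        with spacing 2*d, so the sum steps symmetrically over the pole."""
        d = omega[1] - omega[0]
        j = np.arange(len(omega))
        mask = ((j - i) % 2) != 0
        return (2.0 / np.pi) * 2.0 * d * np.sum(
            omega[mask] * im_chi[mask] / (omega[mask] ** 2 - omega[i] ** 2))
    ```

    For a damped-oscillator response χ(ω) = 1/(ω₀² − ω² − iγω), the rule recovers the analytic real part closely on a sufficiently fine and long frequency grid.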

  15. Accurate evaluation and analysis of functional genomics data and methods

    PubMed Central

    Greene, Casey S.; Troyanskaya, Olga G.

    2016-01-01

    The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703

  16. Integrating protein structural dynamics and evolutionary analysis with Bio3D.

    PubMed

    Skjærven, Lars; Yao, Xin-Qiu; Scarabelli, Guido; Grant, Barry J

    2014-12-10

    Popular bioinformatics approaches for studying protein functional dynamics include comparisons of crystallographic structures, molecular dynamics simulations and normal mode analysis. However, determining how observed displacements and predicted motions from these traditionally separate analyses relate to each other, as well as to the evolution of sequence, structure and function within large protein families, remains a considerable challenge. This is in part due to the general lack of tools that integrate information of molecular structure, dynamics and evolution. Here, we describe the integration of new methodologies for evolutionary sequence, structure and simulation analysis into the Bio3D package. This major update includes unique high-throughput normal mode analysis for examining and contrasting the dynamics of related proteins with non-identical sequences and structures, as well as new methods for quantifying dynamical couplings and their residue-wise dissection from correlation network analysis. These new methodologies are integrated with major biomolecular databases as well as established methods for evolutionary sequence and comparative structural analysis. New functionality for directly comparing results derived from normal modes, molecular dynamics and principal component analysis of heterogeneous experimental structure distributions is also included. We demonstrate these integrated capabilities with example applications to dihydrofolate reductase and heterotrimeric G-protein families along with a discussion of the mechanistic insight provided in each case. The integration of structural dynamics and evolutionary analysis in Bio3D enables researchers to go beyond a prediction of single protein dynamics to investigate dynamical features across large protein families. The Bio3D package is distributed with full source code and extensive documentation as a platform independent R package under a GPL2 license from http://thegrantlab.org/bio3d/ .

  17. Algebraic-geometry approach to integrability of birational plane mappings. Integrable birational quadratic reversible mappings. I

    NASA Astrophysics Data System (ADS)

    Rerikh, K. V.

    1998-02-01

    Using classic results of algebraic geometry for birational mappings of the plane CP², we present a general approach to algebraic integrability of autonomous dynamical systems in C² with discrete time, and of systems of two autonomous functional equations for meromorphic functions in one complex variable defined by birational maps in C². General theorems defining the invariant curves and the dynamics of a birational mapping, and a general theorem on necessary and sufficient conditions for integrability of birational plane mappings, are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of the direct map relative to the action of the inverse mapping. A general method of generating integrable mappings and their rational integrals (invariants) I is proposed. Numerical characteristics N_k are introduced for the intersections of the orbits Φ_n^(-k) O_i of fundamental (indeterminacy) points O_i ∈ O ∩ S of the mapping Φ_n, where O = {O_i} is the set of indeterminacy points of Φ_n and S is the analogous set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ_n^(-1). Using the method proposed, we obtain all nine integrable multiparameter quadratic birational reversible mappings with zero fixed point and linear projective symmetry S = CΛC^(-1), Λ = diag(±1), with rational invariants generated by invariant straight lines and conics. For the integrable mappings obtained, the relations of the numbers N_k to such numerical characteristics of discrete dynamical systems as the Arnold complexity, and to their integrability, are established, and the Arnold complexities of the integrable mappings are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.

  18. Integrated cellular network of transcription regulations and protein-protein interactions

    PubMed Central

    2010-01-01

    Background With the accumulation of increasing omics data, a key goal of systems biology is to construct networks at different cellular levels to investigate cellular machinery of the cell. However, there is currently no satisfactory method to construct an integrated cellular network that combines the gene regulatory network and the signaling regulatory pathway. Results In this study, we integrated different kinds of omics data and developed a systematic method to construct the integrated cellular network based on coupling dynamic models and statistical assessments. The proposed method was applied to S. cerevisiae stress responses, elucidating the stress response mechanism of the yeast. From the resulting integrated cellular network under hyperosmotic stress, the highly connected hubs which are functionally relevant to the stress response were identified. Beyond hyperosmotic stress, the integrated networks under heat shock and oxidative stress were also constructed and the crosstalks of these networks were analyzed, specifying the significance of some transcription factors to serve as the decision-making devices at the center of the bow-tie structure and the crucial role of a rapid adaptation scheme in responding to stress. In addition, the predictive power of the proposed method was also demonstrated. Conclusions We successfully constructed the integrated cellular network, which is validated by literature evidence. The integration of transcription regulations and protein-protein interactions gives more insight into the actual biological network and is more predictive than those without integration. The method is shown to be powerful and flexible and can be used under different conditions and for different species. The coupling dynamic models of the whole integrated cellular network are very useful for theoretical analyses and for further experiments in the fields of network biology and synthetic biology. PMID:20211003

  19. Integrated cellular network of transcription regulations and protein-protein interactions.

    PubMed

    Wang, Yu-Chao; Chen, Bor-Sen

    2010-03-08

    With the accumulation of increasing omics data, a key goal of systems biology is to construct networks at different cellular levels to investigate cellular machinery of the cell. However, there is currently no satisfactory method to construct an integrated cellular network that combines the gene regulatory network and the signaling regulatory pathway. In this study, we integrated different kinds of omics data and developed a systematic method to construct the integrated cellular network based on coupling dynamic models and statistical assessments. The proposed method was applied to S. cerevisiae stress responses, elucidating the stress response mechanism of the yeast. From the resulting integrated cellular network under hyperosmotic stress, the highly connected hubs which are functionally relevant to the stress response were identified. Beyond hyperosmotic stress, the integrated networks under heat shock and oxidative stress were also constructed and the crosstalks of these networks were analyzed, specifying the significance of some transcription factors to serve as the decision-making devices at the center of the bow-tie structure and the crucial role of a rapid adaptation scheme in responding to stress. In addition, the predictive power of the proposed method was also demonstrated. We successfully constructed the integrated cellular network, which is validated by literature evidence. The integration of transcription regulations and protein-protein interactions gives more insight into the actual biological network and is more predictive than those without integration. The method is shown to be powerful and flexible and can be used under different conditions and for different species. The coupling dynamic models of the whole integrated cellular network are very useful for theoretical analyses and for further experiments in the fields of network biology and synthetic biology.

  20. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave-equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in strong regime. The gravitational wave-forms, the radiated energy and angular momentum at infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).

  1. A high-order boundary integral method for surface diffusions on elastically stressed axisymmetric rods.

    PubMed

    Li, Xiaofan; Nie, Qing

    2009-07-01

    Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on explicit representation of the mean curvature, is used to reduce the stability constraint on the time-step. To apply this method to a periodic (in the axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
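    The temporal integration-factor idea — propagate the stiff linear part exactly with an exponential and treat only the remainder explicitly — can be shown on a scalar model problem. This first-order sketch is far simpler than the paper's high-order curvature-based scheme; the decay rate λ, step size, and nonlinearity are illustrative assumptions.

    ```python
    import numpy as np

    def if_euler(u0, lam, nonlin, h, n_steps):
        """Integrating-factor Euler for u' = -lam*u + N(u): the exp(-lam*h)
        factor propagates the stiff linear term exactly, so the explicit step
        only has to resolve the (milder) nonlinearity N."""
        u = u0
        E = np.exp(-lam * h)
        for _ in range(n_steps):
            u = E * (u + h * nonlin(u))
        return u
    ```

    With N ≡ 0 the scheme reproduces the exact decay e^(−λt) regardless of step size, which is precisely the stability benefit the exponential factor buys.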

  2. The Intelligent Method of Learning

    ERIC Educational Resources Information Center

    Moula, Alireza; Mohseni, Simin; Starrin, Bengt; Scherp, Hans Ake; Puddephatt, Antony J.

    2010-01-01

    Early psychologist William James [1842-1910] and philosopher John Dewey [1859-1952] described intelligence as a method which can be learned. That view of education is integrated with knowledge about the brain's executive functions to empower pupils to intelligently organize their learning. This article links the pragmatist philosophy of…

  3. ISYMOD: a knowledge warehouse for the identification, assembly and analysis of bacterial integrated systems.

    PubMed

    Chabalier, Julie; Capponi, Cécile; Quentin, Yves; Fichant, Gwennaele

    2005-04-01

    Complex biological functions emerge from interactions between proteins in stable supra-molecular assemblies and/or through transitory contacts. Most of the time, the protein partners of the assemblies are composed of one or several domains which exhibit different biochemical functions. Thus the study of cellular processes requires the identification of the different functional units and their integration into an interaction network; such complexes are referred to as integrated systems. To exploit the increasing release of data with optimum efficiency, automated bioinformatics strategies are needed to identify, reconstruct and model such systems. For that purpose, we have developed a knowledge warehouse dedicated to the representation and acquisition of bacterial integrated systems involved in the exchange of the bacterial cell with its environment. ISYMOD is a knowledge warehouse that consistently integrates in the same environment the data and the methods used for their acquisition. This is achieved through the construction of (1) a domain knowledge base (DKB) devoted to the storage of the knowledge about the systems, their functional specificities, their partners and how they are related and (2) a methodological knowledge base (MKB) which depicts the task layout used to identify and reconstruct functional integrated systems. Instantiation of the DKB is obtained by solving the tasks of the MKB, whereas some tasks need instances of the DKB to be solved. AROM, an object-based knowledge representation system, has been used to design the DKB, and its task manager, AROMTasks, for developing the MKB. In this study, two integrated systems, ABC transporters and two-component systems, both involved in adaptation processes of a bacterial cell to its biotope, have been used to evaluate the feasibility of the approach.

  4. Squared eigenfunctions for the Sasa-Satsuma equation

    NASA Astrophysics Data System (ADS)

    Yang, Jianke; Kaup, D. J.

    2009-02-01

    Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The procedure of this derivation consists of two steps: First is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method. The second one is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation, the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies this procedure and provides a unified explanation for previous results of squared eigenfunctions on individual integrable equations. This procedure uses primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations whose spectral operator is closely related to that of the Sasa-Satsuma equation.

  5. Two Legendre-Dual-Petrov-Galerkin Algorithms for Solving the Integrated Forms of High Odd-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, Waleed M.; Doha, Eid H.; Bassuony, Mahmoud A.

    2014-01-01

    Two numerical algorithms based on dual-Petrov-Galerkin method are developed for solving the integrated forms of high odd-order boundary value problems (BVPs) governed by homogeneous and nonhomogeneous boundary conditions. Two different choices of trial functions and test functions which satisfy the underlying boundary conditions of the differential equations and the dual boundary conditions are used for this purpose. These choices lead to linear systems with specially structured matrices that can be efficiently inverted, hence greatly reducing the cost. The various matrix systems resulting from these discretizations are carefully investigated, especially their complexities and their condition numbers. Numerical results are given to illustrate the efficiency of the proposed algorithms, and some comparisons with some other methods are made. PMID:24616620

  6. Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.

    2017-10-01

    The development of 3D boundary-element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step is applied to the calculation of the quadrature coefficients, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic console was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that computational efficiency is better when the combined formulas are used.

  7. Connection between the regular approximation and the normalized elimination of the small component in relativistic quantum theory

    NASA Astrophysics Data System (ADS)

    Filatov, Michael; Cremer, Dieter

    2005-02-01

    The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10⁻⁹ hartree. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.

  8. PDF transport equations for chemically reacting flows

    NASA Technical Reports Server (NTRS)

    Kollmann, W.

    1989-01-01

    The closure problem for the transport equations for the pdf and the characteristic functions of turbulent, chemically reacting flows is addressed. The properties of the linear and closed equations for the characteristic functional for Eulerian and Lagrangian variables are established, and the closure problem for the finite-dimensional case is discussed for the pdf and characteristic functions. It is shown that the closure for the scalar dissipation term in the pdf equation developed by Dopazo (1979) and Kollmann et al. (1982) results in a single integral, in contrast to the pdf, where double integration is required. Some recent results obtained with pdf methods for turbulent flows with combustion, including effects of chemical nonequilibrium, are discussed.

  9. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  10. Functional region prediction with a set of appropriate homologous sequences-an index for sequence selection by integrating structure and sequence information with spatial statistics

    PubMed Central

    2012-01-01

    Background The detection of conserved residue clusters on a protein structure is one of the effective strategies for the prediction of functional protein regions. Various methods, such as Evolutionary Trace, have been developed based on this strategy. In such approaches, the conserved residues are identified through comparisons of homologous amino acid sequences. Therefore, the selection of homologous sequences is a critical step. It is empirically known that a certain degree of sequence divergence in the set of homologous sequences is required for the identification of conserved residues. However, the development of a method to select homologous sequences appropriate for the identification of conserved residues has not been sufficiently addressed. An objective and general method to select appropriate homologous sequences is desired for the efficient prediction of functional regions. Results We have developed a novel index to select the sequences appropriate for the identification of conserved residues, and implemented the index within our method to predict the functional regions of a protein. The implementation of the index improved the performance of the functional region prediction. The index represents the degree of conserved residue clustering on the tertiary structure of the protein. For this purpose, the structure and sequence information were integrated within the index by the application of spatial statistics. Spatial statistics is a field of statistics in which not only the attributes but also the geometrical coordinates of the data are considered simultaneously. Higher degrees of clustering generate larger index scores. We adopted the set of homologous sequences with the highest index score, under the assumption that the best prediction accuracy is obtained when the degree of clustering is the maximum. 
The set of sequences selected by the index led to higher functional region prediction performance than the sets of sequences selected by other sequence-based methods. Conclusions Appropriate homologous sequences are selected automatically and objectively by the index. Such sequence selection improved the performance of functional region prediction. As far as we know, this is the first approach in which spatial statistics have been applied to protein analyses. Such integration of structure and sequence information would be useful for other bioinformatics problems. PMID:22643026
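    The article defines its own selection index; as a rough illustration of the underlying idea, spatial autocorrelation of per-residue conservation scores on a structure, a Moran's-I-style toy index can be sketched. The function name, the binary contact weights, and the 8 Å cutoff below are illustrative assumptions, not the paper's definition:

```python
def clustering_index(coords, scores, cutoff=8.0):
    """Moran's-I-like spatial autocorrelation of per-residue scores:
    positive when similar scores cluster in space (toy version of the
    idea; the article's actual index is defined differently)."""
    n = len(scores)
    mean = sum(scores) / n
    dev = [s - mean for s in scores]
    denom = sum(d * d for d in dev)
    num = w_sum = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff * cutoff:  # binary contact weight
                num += dev[i] * dev[j]
                w_sum += 1.0
    return (n / w_sum) * (num / denom)
```

    With residues placed 5 Å apart on a line, a contiguous block of conserved residues scores higher than the same scores scattered along the chain, which is the behavior the selection criterion exploits.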

  11. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced, and the grid current is decomposed into a sum of simple functions. By calculating the harmonics of these simple functions piecewise, the harmonics of the PF converter under different operation modes are obtained. To examine the validity of the method, a simulation model was established in Matlab/Simulink and a corresponding experiment was performed on the ITER PF integration test platform. The calculated results are consistent with both simulation and experiment, demonstrating that the piecewise method is correct and valid for calculating the system harmonics.
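    The segment-wise idea can be sketched for a piecewise-constant waveform, where each Fourier coefficient reduces to a closed-form sum over segments instead of a numerical quadrature. This is a generic illustration, not the ITER converter waveform:

```python
import math

def piecewise_harmonics(breaks, values, period, n):
    """Fourier coefficients (a_n, b_n) of a piecewise-constant waveform,
    integrated segment by segment in closed form (no quadrature).
    breaks: boundaries t_0 < t_1 < ... < t_K with t_0 = 0, t_K = period
    values: constant value c_k on [t_k, t_{k+1})"""
    w = 2.0 * math.pi / period
    a = b = 0.0
    for c, t0, t1 in zip(values, breaks[:-1], breaks[1:]):
        # closed-form integrals of cos(n w t) and sin(n w t) over the segment
        a += c * (math.sin(n * w * t1) - math.sin(n * w * t0)) / (n * w)
        b += c * (math.cos(n * w * t0) - math.cos(n * w * t1)) / (n * w)
    return 2.0 * a / period, 2.0 * b / period
```

    For an ideal square wave the method reproduces the textbook odd-harmonic amplitudes 4/(nπ) exactly, with no quadrature error.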

  12. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology.

    PubMed

    Price, S A; Schmitz, L

    2016-04-05

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. © 2016 The Author(s).

  13. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology

    PubMed Central

    Schmitz, L.

    2016-01-01

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. PMID:26977068

  14. The Role of Semantics in Open-World, Integrative, Collaborative Science Data Platforms

    NASA Astrophysics Data System (ADS)

    Fox, Peter; Chen, Yanning; Wang, Han; West, Patrick; Erickson, John; Ma, Marshall

    2014-05-01

    As collaborative science spreads into more and more Earth and space science fields, both participants and funders are expressing stronger needs for highly functional data and information capabilities. Characteristics include: (a) easy to use, (b) highly integrated, (c) leverage investments, (d) accommodate rapid technical change, and (e) do not incur undue expense or time to build or maintain; this is not a small set of requirements. Based on our accumulated experience over roughly the last decade and several key technical approaches, we adapt, extend, and integrate several open source applications and frameworks to handle major portions of the functionality for these platforms. This includes an object-type repository, collaboration tools, and identity management, all within a portal managing diverse content and applications. In this contribution, we present our methods and results for the information models, adaptation, integration, and evolution of a networked data science architecture based on several open source technologies (Drupal, VIVO, the Comprehensive Knowledge Archive Network (CKAN), and the Global Handle System (GHS)). In particular we present the Deep Carbon Observatory, a platform for international science collaboration. We present and discuss key functional and non-functional attributes, and discuss the general applicability of the platform.

  15. Functional illness in primary care: dysfunction versus disease

    PubMed Central

    Williams, Nefyn; Wilkinson, Clare; Stott, Nigel; Menkes, David B

    2008-01-01

    Background The Biopsychosocial Model aims to integrate the biological, psychological and social components of illness, but integration is difficult in practice, particularly when patients consult with medically unexplained physical symptoms or functional illness. Discussion This Biopsychosocial Model was developed from General Systems Theory, which describes nature as a dynamic order of interacting parts and processes, from molecular to societal. Despite such conceptual progress, the biological, psychological, social and spiritual components of illness are seldom managed as an integrated whole in conventional medical practice. This is because the biomedical model can be easier to use, clinicians often have difficulty relinquishing a disease-centred approach to diagnosis, and either dismiss illness when pathology has been excluded, or explain all undifferentiated illness in terms of psychosocial factors. By contrast, traditional and complementary treatment systems describe reversible functional disturbances, and appear better at integrating the different components of illness. Conventional medicine retains the advantage of scientific method and an expanding evidence base, but needs to more effectively integrate psychosocial factors into assessment and management, notably of 'functional' illness. As an aid to integration, pathology characterised by structural change in tissues and organs is contrasted with dysfunction arising from disordered physiology or psychology that may occur independent of pathological change. Summary We propose a classification of illness that includes orthogonal dimensions of pathology and dysfunction to support a broadly based clinical approach to patients; adoption of which may lead to fewer inappropriate investigations and secondary care referrals and greater use of cognitive behavioural techniques, particularly when managing functional illness. PMID:18482442

  16. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. 
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
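    As a rough, self-contained illustration of the method the thesis reviews, a minimal path integral Monte Carlo run for a single 1D harmonic oscillator (one distinguishable particle, so no sign problem arises) can be sketched. The parameters, single-bead Metropolis moves, and primitive energy estimator are textbook choices, not the thesis code:

```python
import math
import random

def pimc_energy(beta=2.0, n_beads=16, sweeps=20000, burn=4000, step=0.5, seed=1):
    """Minimal PIMC for a 1D harmonic oscillator (m = hbar = omega = 1):
    single-bead Metropolis moves on a closed imaginary-time path,
    primitive (thermodynamic) energy estimator."""
    rng = random.Random(seed)
    tau = beta / n_beads
    x = [0.0] * n_beads

    def local_action(j, xj):
        # part of the discretized action touched by bead j
        xl, xr = x[j - 1], x[(j + 1) % n_beads]
        kinetic = ((xj - xl) ** 2 + (xr - xj) ** 2) / (2.0 * tau)
        return kinetic + tau * 0.5 * xj * xj  # V(x) = x^2 / 2

    e_sum, n_meas = 0.0, 0
    for sweep in range(sweeps + burn):
        for j in range(n_beads):
            new = x[j] + step * (2.0 * rng.random() - 1.0)
            ds = local_action(j, new) - local_action(j, x[j])
            if ds <= 0.0 or rng.random() < math.exp(-ds):
                x[j] = new
        if sweep >= burn:
            spring = sum((x[(j + 1) % n_beads] - x[j]) ** 2 for j in range(n_beads))
            pot = sum(0.5 * xj * xj for xj in x)
            # primitive estimator: E = M/(2*beta) - <spring>/(2*tau^2*M) + <V>/M
            e_sum += n_beads / (2.0 * beta) - spring / (2.0 * tau ** 2 * n_beads) + pot / n_beads
            n_meas += 1
    return e_sum / n_meas
```

    At beta = 2 the exact energy is (1/2)coth(beta/2) ≈ 0.657, and the sampled estimate should land nearby, up to Trotter and statistical error.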

  17. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid)
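    The 1D building block of such interpolants can be illustrated with ordinary Lagrange cardinal functions on Chebyshev points. This is a generic sketch; the paper's Lagrange-type functions and their Smolyak combination are more elaborate:

```python
import math

def cheb_points(n, lo=-1.0, hi=1.0):
    # Chebyshev-Gauss-Lobatto nodes mapped to [lo, hi]
    return [lo + (hi - lo) * 0.5 * (1.0 - math.cos(math.pi * k / (n - 1)))
            for k in range(n)]

def lagrange_basis(nodes, j, x):
    # 1D Lagrange cardinal function l_j(x): 1 at node j, 0 at the others
    p = 1.0
    for k, xk in enumerate(nodes):
        if k != j:
            p *= (x - xk) / (nodes[j] - xk)
    return p

def interpolate(nodes, fvals, x):
    # interpolant f(x) ~ sum_j f(x_j) * l_j(x)
    return sum(f * lagrange_basis(nodes, j, x) for j, f in enumerate(fvals))
```

    Tensor products of such 1D cardinal functions, combined on a Smolyak sparse grid, give the multidimensional interpolant; re-expanding each cardinal function in the underlying polynomial basis is what yields the compact sum-of-products form described in the abstract.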

  18. Achievements and Challenges in Computational Protein Design.

    PubMed

    Samish, Ilan

    2017-01-01

    Computational protein design (CPD), a yet evolving field, includes computer-aided engineering for partial or full de novo designs of proteins of interest. Designs are defined by a requested structure, function, or working environment. This chapter describes the birth and maturation of the field by presenting 101 CPD examples in a chronological order emphasizing achievements and pending challenges. Integrating these aspects presents the plethora of CPD approaches with the hope of providing a "CPD 101". These reflect on the broader structural bioinformatics and computational biophysics field and include: (1) integration of knowledge-based and energy-based methods, (2) hierarchical designated approach towards local, regional, and global motifs and the integration of high- and low-resolution design schemes that fit each such region, (3) systematic differential approaches towards different protein regions, (4) identification of key hot-spot residues and the relative effect of remote regions, (5) assessment of shape-complementarity, electrostatics and solvation effects, (6) integration of thermal plasticity and functional dynamics, (7) negative design, (8) systematic integration of experimental approaches, (9) objective cross-assessment of methods, and (10) successful ranking of potential designs. Future challenges also include dissemination of CPD software to the general use of life-sciences researchers and the emphasis of success within an in vivo milieu. CPD increases our understanding of protein structure and function and the relationships between the two along with the application of such know-how for the benefit of mankind. Applied aspects range from biological drugs, via healthier and tastier food products to nanotechnology and environmentally friendly enzymes replacing toxic chemicals utilized in the industry.

  19. The Toda lattice as a forced integrable system

    NASA Technical Reports Server (NTRS)

    Hansen, P. J.; Kaup, D. J.

    1985-01-01

    The analytic properties of the Jost functions for the inverse scattering transform associated with the forced Toda lattice are shown to determine the time evolution of this particular boundary value problem. It is suggested that inverse scattering methods may be used generally to analyze forced integrable systems. Thus an extension of the applicability of the inverse scattering transform is indicated.

  20. A boundary integral approach to the scattering of nonplanar acoustic waves by rigid bodies

    NASA Technical Reports Server (NTRS)

    Gallman, Judith M.; Myers, M. K.; Farassat, F.

    1990-01-01

    The acoustic scattering of an incident wave by a rigid body can be described by a singular Fredholm integral equation of the second kind. This equation is derived by solving the wave equation using generalized function theory, Green's function for the wave equation in unbounded space, and the acoustic boundary condition for a perfectly rigid body. This paper will discuss the derivation of the wave equation, its reformulation as a boundary integral equation, and the solution of the integral equation by the Galerkin method. The accuracy of the Galerkin method can be assessed by applying the technique outlined in the paper to reproduce the known pressure fields that are due to various point sources. From the analysis of these simpler cases, the accuracy of the Galerkin solution can be inferred for the scattered pressure field caused by the incidence of a dipole field on a rigid sphere. The solution by the Galerkin technique can then be applied to such problems as a dipole model of a propeller whose pressure field is incident on a rigid cylinder. This is the groundwork for modeling the scattering of rotating blade noise by airplane fuselages.

  1. Friendly Extensible Transfer Tool Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William P.; Gutierrez, Kenneth M.; McRee, Susan R.

    2016-04-15

    Often, data transfer software is designed to meet specific requirements or apply to specific environments. Frequently, adding functionality requires source code integration. An extensible data transfer framework is needed to incorporate new capabilities more easily, in modular fashion. Using the FrETT framework, functionality may be incorporated (in many cases without need of source code) to handle new platform capabilities: I/O methods (e.g., platform-specific data access), network transport methods, and data processing (e.g., data compression).

  2. Ab initio simulation of diffractometer instrumental function for high-resolution X-ray diffraction

    PubMed Central

    Mikhalychev, Alexander; Benediktovitch, Andrei; Ulyanenkova, Tatjana; Ulyanenkov, Alex

    2015-01-01

    Modeling of the X-ray diffractometer instrumental function for a given optics configuration is important both for planning experiments and for the analysis of measured data. A fast and universal method for instrumental function simulation, suitable for fully automated computer realization and describing both coplanar and noncoplanar measurement geometries for any combination of X-ray optical elements, is proposed. The method can be identified as semi-analytical backward ray tracing and is based on the calculation of a detected signal as an integral of X-ray intensities for all the rays reaching the detector. The high speed of calculation is provided by the expressions for analytical integration over the spatial coordinates that describe the detection point. Consideration of the three-dimensional propagation of rays without restriction to the diffraction plane provides the applicability of the method for noncoplanar geometry and the accuracy for characterization of the signal from a two-dimensional detector. The correctness of the simulation algorithm is checked in the following two ways: by verifying the consistency of the calculated data with the patterns expected for certain simple limiting cases and by comparing measured reciprocal-space maps with the corresponding maps simulated by the proposed method for the same diffractometer configurations. Both kinds of tests demonstrate the agreement of the simulated instrumental function shape with the measured data. PMID:26089760

  3. Bio-inspired Cryo-ink Preserves Red Blood Cell Phenotype and Function during Nanoliter Vitrification

    PubMed Central

    Assal, Rami El; Guven, Sinan; Gurkan, Umut Atakan; Gozen, Irep; Shafiee, Hadi; Dalbeyber, Sedef; Abdalla, Noor; Thomas, Gawain; Fuld, Wendy; Illigens, Ben M.W.; Estanislau, Jessica; Khoory, Joseph; Kaufman, Richard; Zylberberg, Claudia; Lindeman, Neal; Wen, Qi; Ghiran, Ionita; Demirci, Utkan

    2014-01-01

    Current red blood cell cryopreservation methods utilize bulk volumes, causing cryo-injury of cells, which results in irreversible disruption of cell morphology, mechanics, and function. An innovative approach to preserve human red blood cell morphology, mechanics, and function following vitrification in nanoliter volumes is developed using a novel cryo-ink integrated with a bio-printing approach. PMID:25047246

  4. A fuzzy inventory model with acceptable shortage using graded mean integration value method

    NASA Astrophysics Data System (ADS)

    Saranya, R.; Varadarajan, R.

    2018-04-01

    In many inventory models uncertainty is due to fuzziness, and fuzziness is the closest possible approach to reality. In this paper, we propose a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzify the carrying cost, backorder cost and ordering cost using triangular and trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total profit function by the Graded Mean Integration Value Method. Further, a numerical example is given to demonstrate the developed crisp and fuzzy models.
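    The defuzzification step has a simple closed form: for a triangular fuzzy number (a, b, c) the graded mean integration value is (a + 4b + c)/6, and for a trapezoidal number (a, b, c, d) it is (a + 2b + 2c + d)/6. A minimal sketch of these standard formulas (not the paper's full inventory model):

```python
def gmiv_triangular(a, b, c):
    # Graded mean integration value of a triangular fuzzy number (a, b, c):
    # P(A) = (a + 4b + c) / 6
    return (a + 4.0 * b + c) / 6.0

def gmiv_trapezoidal(a, b, c, d):
    # Graded mean integration value of a trapezoidal fuzzy number (a, b, c, d):
    # P(A) = (a + 2b + 2c + d) / 6
    return (a + 2.0 * b + 2.0 * c + d) / 6.0
```

    Both formulas follow from the defining integral P(A) = ∫ h·(L⁻¹(h) + R⁻¹(h))/2 dh / ∫ h dh over h in [0, 1], evaluated for linear membership sides.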

  5. A method for computing the kernel of the downwash integral equation for arbitrary complex frequencies

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.; Rowe, W. S.

    1984-01-01

    For the design of active controls to stabilize flight vehicles, which requires the use of unsteady aerodynamics that are valid for arbitrary complex frequencies, algorithms are derived for evaluating the nonelementary part of the kernel of the integral equation that relates unsteady pressure to downwash. This part of the kernel is separated into an infinite limit integral that is evaluated using Bessel and Struve functions and into a finite limit integral that is expanded in series and integrated termwise in closed form. The developed series expansions gave reliable answers for all complex reduced frequencies and executed faster than exponential approximations for many pressure stations.

  6. On one solution of Volterra integral equations of second kind

    NASA Astrophysics Data System (ADS)

    Myrhorod, V.; Hvozdeva, I.

    2016-10-01

    A solution of Volterra integral equations of the second kind with separable and difference kernels based on solutions of corresponding equations linking the kernel and resolvent is suggested. On the basis of a discrete functions class, the equations linking the kernel and resolvent are obtained and the methods of their analytical solutions are proposed. A mathematical model of the gas-turbine engine state modification processes in the form of Volterra integral equation of the second kind with separable kernel is offered.
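    For comparison with the analytical kernel-resolvent construction described above, a generic numerical baseline for a second-kind Volterra equation x(t) = f(t) + ∫₀ᵗ K(t,s) x(s) ds is the trapezoidal marching scheme (a textbook method, not the authors'):

```python
import math

def solve_volterra2(f, K, t_end, n):
    """March x(t) = f(t) + int_0^t K(t, s) x(s) ds on a uniform grid
    with the composite trapezoidal rule; each new x_i appears linearly
    and is solved for directly."""
    h = t_end / n
    t = [i * h for i in range(n + 1)]
    x = [f(t[0])]
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * x[0]
        for j in range(1, i):
            s += K(t[i], t[j]) * x[j]
        x.append((f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, x
```

    With K = 1 and f = 1 the exact solution is x(t) = eᵗ, which the scheme reproduces with O(h²) error.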

  7. The ATOMFT integrator - Using Taylor series to solve ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Berryman, Kenneth W.; Stanford, Richard H.; Breckheimer, Peter J.

    1988-01-01

    This paper discusses the application of ATOMFT, an integration package based on Taylor series solution with a sophisticated user interface. ATOMFT can accommodate user-defined functions and solve stiff and algebraic equations. Detailed examples, including the solutions to several astrodynamics problems, are presented. Comparisons with its predecessor ATOMCC and other modern integrators indicate that ATOMFT is a fast, accurate, and easy-to-use method for solving many differential equation problems.
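    The Taylor-series idea can be sketched for the single ODE y' = y, where substituting the series y = Σ cₖ(t − t₀)ᵏ gives the recurrence (k + 1)c_{k+1} = cₖ. ATOMFT generates such recurrences automatically for user-defined systems; this hand-coded sketch covers only this one equation:

```python
import math

def taylor_step(y0, h, order=15):
    # Taylor coefficients of the solution of y' = y about the current point:
    # substituting the series into y' = y gives (k + 1) c_{k+1} = c_k.
    c = [y0]
    for k in range(order):
        c.append(c[k] / (k + 1))
    # evaluate the truncated series at t0 + h (Horner form)
    y = 0.0
    for ck in reversed(c):
        y = y * h + ck
    return y

def taylor_integrate(y0, t_end, steps=10):
    h = t_end / steps
    y = y0
    for _ in range(steps):
        y = taylor_step(y, h)
    return y
```

    With order 15 and ten steps the solution of y' = y from y(0) = 1 matches e at t = 1 essentially to machine precision, illustrating why high-order Taylor integrators are both fast and accurate for smooth problems.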

  8. Integrating Substrateless Electrospinning with Textile Technology for Creating Biodegradable Three-Dimensional Structures.

    PubMed

    Joseph, John; Nair, Shantikumar V; Menon, Deepthy

    2015-08-12

    The present study describes a unique way of integrating substrateless electrospinning process with textile technology. We developed a new collector design that provided a pressure-driven, localized cotton-wool structure in free space from which continuous high strength yarns were drawn. An advantage of this integration was that the textile could be drug/dye loaded and be developed into a core-sheath architecture with greater functionality. This method could produce potential nanotextiles for various biomedical applications.

  9. Operational Solution to the Nonlinear Klein-Gordon Equation

    NASA Astrophysics Data System (ADS)

    Bengochea, G.; Verde-Star, L.; Ortigueira, M.

    2018-05-01

    We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method uses no integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. Supported by Grant SEP-CONACYT 220603. The first author was supported by SEP-PRODEP through project UAM-PTC-630; the third author was supported by Portuguese National Funds through the FCT Foundation for Science and Technology under project PEst-UID/EEA/00066/2013.
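
    The paper's operational details are not given in the abstract; as a small illustration of the Adomian ingredient, for the quadratic nonlinearity N(u) = u^2 the Adomian polynomials reduce to a Cauchy product of the series coefficients:

```python
# Adomian polynomials A_n decompose a nonlinearity N(u) acting on a
# series u = sum_k u_k.  For N(u) = u**2 (a common Klein-Gordon-type
# term) they reduce to a Cauchy product; this sketch covers only that
# special case, not general nonlinearities.

def adomian_quadratic(u_coeffs):
    n = len(u_coeffs)
    return [sum(u_coeffs[j] * u_coeffs[k - j] for j in range(k + 1))
            for k in range(n)]

# A_0 = u0^2, A_1 = 2 u0 u1, A_2 = u1^2 + 2 u0 u2, ...
A = adomian_quadratic([2.0, 3.0, 5.0])
```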

  10. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for extracting the parameters of a single exponential diode model with series resistance from measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on integration of the data, that make it possible to isolate the effects of each model parameter. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determination of the parameters.
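
    The paper's specific integration-based auxiliary functions are not reproduced in the abstract. As a generic illustration of parameter isolation for the same model, the classic differential linearization dV/dI ~ Rs + n*Vt/I turns a synthetic forward characteristic into a straight-line fit; all parameter values below are made up:

```python
import math

# Hypothetical "true" parameters for the synthetic diode (assumptions).
I0, n, Rs, Vt = 1e-12, 1.8, 10.0, 0.02585

# Forward characteristic of the single-exponential model with series
# resistance, written current-explicitly: V = Rs*I + n*Vt*ln(1 + I/I0).
currents = [1e-6 * 10 ** (k / 10) for k in range(31)]   # 1 uA .. 1 mA
voltages = [Rs * i + n * Vt * math.log(1 + i / I0) for i in currents]

# dV/dI = Rs + n*Vt/(I + I0) ~ Rs + n*Vt*(1/I) for I >> I0, so a
# straight-line fit of dV/dI against 1/I yields both parameters.
xs, ys = [], []
for k in range(1, len(currents) - 1):
    dVdI = (voltages[k + 1] - voltages[k - 1]) / (currents[k + 1] - currents[k - 1])
    xs.append(1.0 / currents[k])
    ys.append(dVdI)

# Ordinary least squares for ys = slope*xs + intercept.
m = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
intercept = (sy - slope * sx) / m

n_est = slope / Vt      # ideality factor estimate
Rs_est = intercept      # series resistance estimate
```

    The integration-based variant advocated in the paper serves the same isolation purpose while avoiding the noise amplification inherent in numerical differentiation.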

  11. Long-range corrected density functional theory with accelerated Hartree-Fock exchange integration using a two-Gaussian operator [LC-ωPBE(2Gau)].

    PubMed

    Song, Jong-Won; Hirao, Kimihiko

    2015-10-14

    Since the advent of the hybrid functional in 1993, it has become a principal quantum chemical tool for calculating the energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified by the resulting improved performance on orbital energies, excitation energies, nonlinear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for computing the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about a 14-fold acceleration in a C (diamond) calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
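
    The fitted exponents and weights of the actual two-Gaussian operator are not given in the abstract. With illustrative guesses for them, one can at least see the qualitative behavior the paper relies on: both the erf-attenuated kernel and a two-Gaussian-attenuated kernel are bounded at r = 0 and recover the full 1/r interaction at long range:

```python
import math

# Shape comparison between the error-function long-range Coulomb kernel
# erf(mu*r)/r and a two-Gaussian attenuated kernel
# (1 - c1*exp(-a1*r^2) - c2*exp(-a2*r^2))/r.  The weights and exponents
# below are illustrative guesses, NOT the fitted LC-wPBE(2Gau) values.

mu = 0.33                                   # typical range-separation parameter
c1, c2 = 0.5, 0.5                           # hypothetical weights (sum to 1)
a1, a2 = mu * mu, 3.0 * mu * mu             # hypothetical exponents

def erf_kernel(r):
    return math.erf(mu * r) / r

def two_gauss_kernel(r):
    return (1.0 - c1 * math.exp(-a1 * r * r) - c2 * math.exp(-a2 * r * r)) / r

short = (erf_kernel(1e-6), two_gauss_kernel(1e-6))       # both bounded, unlike 1/r
long_range = (erf_kernel(20.0), two_gauss_kernel(20.0))  # both ~ 1/r = 0.05
```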

  12. Dissociable Patterns of Neural Activity during Response Inhibition in Depressed Adolescents with and without Suicidal Behavior

    ERIC Educational Resources Information Center

    Pan, Lisa A.; Batezati-Alves, Silvia C.; Almeida, Jorge R. C.; Segreti, AnnaMaria; Akkal, Dalila; Hassel, Stefanie; Lakdawala, Sara; Brent, David A.; Phillips, Mary L.

    2011-01-01

    Objectives: Impaired attentional control and behavioral control are implicated in adult suicidal behavior. Little is known about the functional integrity of neural circuitry supporting these processes in suicidal behavior in adolescence. Method: Functional magnetic resonance imaging was used in 15 adolescent suicide attempters with a history of…

  13. A Comparison of Web-Based and Face-to-Face Functional Measurement Experiments

    ERIC Educational Resources Information Center

    Van Acker, Frederik; Theuns, Peter

    2010-01-01

    Information Integration Theory (IIT) is concerned with how people combine information into an overall judgment. A method is hereby presented to perform Functional Measurement (FM) experiments, the methodological counterpart of IIT, on the Web. In a comparison of Web-based FM experiments, face-to-face experiments, and computer-based experiments in…

  14. A novel slice preparation to study medullary oromotor and autonomic circuits in vitro

    PubMed Central

    Nasse, Jason S.

    2014-01-01

    Background: The medulla is capable of controlling and modulating ingestive behavior and gastrointestinal function. These two functions, which are critical to maintaining homeostasis, are governed by an interconnected group of nuclei dispersed throughout the medulla. As such, in vitro experiments to study the neurophysiologic details of these connections have been limited by the spatial constraints of conventional slice preparations. New method: This study demonstrates a novel method of sectioning the medulla so that the sensory, integrative, and motor nuclei that innervate the gastrointestinal tract and the oral cavity remain intact. Results: Immunohistochemical staining against choline acetyltransferase and dopamine-β-hydroxylase demonstrated that within a 450 μm block of tissue we are able to capture sensory, integrative, and motor nuclei that are critical to oromotor and gastrointestinal function. Within-slice tracing shows that axonal projections from the NST to the reticular formation and from the reticular formation to the hypoglossal motor nucleus (mXII) persist. Live-cell calcium imaging of the slice demonstrates that stimulation of either the rostral or caudal NST activates neurons throughout the NST, as well as the reticular formation and mXII. Comparison with existing methods: This new method of sectioning captures a majority of the nuclei that are active when ingesting a meal. Traditional planes of section, i.e., coronal, horizontal, or sagittal, contain only a limited portion of the substrate. Conclusions: Our results demonstrate that both the anatomical and physiologic connections of oral and visceral sensory nuclei that project to integrative and motor nuclei remain intact with this new plane of section. PMID:25196216

  15. The assessment of cognitive function in older adult patients with chronic kidney disease: an integrative review.

    PubMed

    Hannan, Mary; Steffen, Alana; Quinn, Lauretta; Collins, Eileen G; Phillips, Shane A; Bronas, Ulf G

    2018-05-25

    Chronic kidney disease (CKD) is a common chronic condition in older adults that is associated with cognitive decline. However, the exact prevalence of cognitive impairment in older adults with CKD is unclear, likely due to the variety of methods utilized to assess cognitive function. The purpose of this integrative review is to determine how cognitive function is most frequently assessed in older adult patients with CKD. Five electronic databases were searched to explore relevant literature related to cognitive function assessment in older adult patients with CKD. Inclusion and exclusion criteria were created to focus the search on the assessment of cognitive function with standardized cognitive tests in older adults with CKD not on renal replacement therapy. Through the search methods, 36 articles were found that fulfilled the purpose of the review. There were 36 different types of cognitive tests utilized in the included articles, with each study utilizing between one and 12 tests. The most commonly utilized cognitive test was the Mini Mental State Exam (MMSE), followed by tests of digit symbol substitution and verbal fluency. The most commonly assessed aspect of cognitive function was global cognition. The assessment of cognitive function in older adults with CKD with standardized tests is carried out in various ways. Unfortunately, the common methods of assessment may not fully examine the domains of impairment commonly found in older adults with CKD. Further research is needed to identify the ideal cognitive test to best assess older adults with CKD for cognitive impairment.

  16. Accelerometer method and apparatus for integral display and control functions

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1992-06-01

    Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. 
The benefits of a vibration recording and monitoring system with controls and displays readily mountable on the machinery being monitored and having capabilities described will be appreciated by those working in the art.
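
    The signal chain described (accelerometer output, integration to velocity, comparison against a selected trip point) can be sketched numerically. A minimal illustration with a synthetic 100 Hz sinusoidal acceleration and an arbitrary trip level follows; all numbers are hypothetical:

```python
import math

# Synthetic accelerometer signal: a(t) = A*sin(2*pi*f*t).  Integrating
# gives a velocity of amplitude A/(2*pi*f); the trip level is arbitrary.
A, f, fs = 9.81, 100.0, 100000.0            # amplitude (m/s^2), Hz, sample rate
dt = 1.0 / fs
accel = [A * math.sin(2 * math.pi * f * k * dt) for k in range(int(fs * 0.05))]

# Trapezoidal running integral: velocity relative to v(0) = 0.
vel, v = [0.0], 0.0
for k in range(1, len(accel)):
    v += 0.5 * (accel[k] + accel[k - 1]) * dt
    vel.append(v)

# Since v(t) = (A/w)*(1 - cos(w*t)), the peak is 2*A/(2*pi*f) ~ 0.0312 m/s.
v_peak = max(vel)
TRIP = 0.02                                 # m/s, hypothetical alert threshold
alert = v_peak > TRIP
```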

  17. Accelerometer method and apparatus for integral display and control functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1992-01-01

    Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. 
The benefits of a vibration recording and monitoring system with controls and displays readily mountable on the machinery being monitored and having capabilities described will be appreciated by those working in the art.

  18. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
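
    As a toy illustration of the polynomial-expansion idea behind NTPoly (which targets large sparse matrices; here a dense 2x2 example), a truncated Taylor series reproduces the matrix exponential of a symmetric matrix:

```python
# exp(A) for A = [[0,1],[1,0]] is [[cosh 1, sinh 1],[sinh 1, cosh 1]],
# so the truncated Taylor polynomial sum_n A**n / n! can be checked
# against the closed form.  NTPoly uses more sophisticated (e.g.
# Chebyshev) expansions with sparsity filtering; this is the bare idea.

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp_taylor(A, terms=20):
    result = [[1.0, 0.0], [0.0, 1.0]]       # identity = zeroth term
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = matmul2(term, A)             # term now holds A**n / (n-1)!
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

E = mat_exp_taylor([[0.0, 1.0], [1.0, 0.0]])
```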

  19. Measuring use of electronic health record functionality using system audit information.

    PubMed

    Bowes, Watson A

    2010-01-01

    Meaningful and efficient methods for measuring Electronic Health Record (EHR) adoption and functional usage patterns have recently become important for hospitals, clinics, and health care networks in the United States due to government initiatives to increase EHR use. To date, surveys have been the method of choice for measuring EHR adoption. This paper describes another method for measuring EHR adoption which capitalizes on audit logs, common components of modern EHRs. An Audit Data Mart is described which identified EHR functionality within 836 departments, 22 hospitals, and 170 clinics at Intermountain Healthcare, a large integrated delivery system. The Audit Data Mart successfully identified important and differing EHR functional usage patterns. These patterns were useful in strategic planning and in tracking EHR implementations, and will likely be utilized to assist in documentation of "Meaningful Use" of EHR functionality.

  20. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
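
    The report's von Karman maximum-likelihood machinery is not reproduced in the abstract. As a generic sketch of estimating an integral scale from a velocity record, here is a synthetic AR(1) series (a discrete Ornstein-Uhlenbeck stand-in for turbulence, not the von Karman model) and its lag-1 autocorrelation estimate:

```python
import random

# Synthetic AR(1) "velocity record": x_{t+1} = phi*x_t + white noise.
random.seed(42)
phi, n = 0.9, 20000
x, xs = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)
    xs.append(x)

mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / n

def autocorr(lag):
    c = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag)) / (n - lag)
    return c / var

# For AR(1), rho(k) = phi**k, so the lag-1 autocorrelation recovers phi,
# and the (dimensionless) integral scale is sum_k rho(k) = 1/(1 - phi).
phi_hat = autocorr(1)
scale_hat = 1.0 / (1.0 - phi_hat)
```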

  1. Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.

    PubMed

    Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric

    2010-07-20

    Accurate protein loop structure models are important to understand the functions of many proteins. Identifying the native or near-native models by distinguishing them from misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, a consensus model-ranking approach that integrates multiple knowledge- or physics-based scoring functions. The procedure for identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on their fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length, using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD < 0.5 Å from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms other popularly used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. 
By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
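
    The first stage of the POC procedure, identifying the Pareto-optimal front of a decoy set under several scoring functions, can be sketched directly (lower scores taken as better; the fuzzy-dominance ranking stage is omitted, and the decoy scores below are made up):

```python
# Pareto-front identification for decoys scored by multiple functions.
# A decoy is Pareto-optimal if no other decoy is at least as good in
# every score and strictly better in at least one.

def dominates(a, b):
    """a dominates b: no worse in every score, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Toy decoys scored by two hypothetical scoring functions.
decoys = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
front = pareto_front(decoys)   # (3,3) and (6,6) are dominated by (2,2)
```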

  2. Improving predicted protein loop structure ranking using a Pareto-optimality consensus method

    PubMed Central

    2010-01-01

    Background: Accurate protein loop structure models are important to understand the functions of many proteins. Identifying the native or near-native models by distinguishing them from misfolded ones is a critical step in protein loop structure prediction. Results: We have developed a Pareto Optimal Consensus (POC) method, a consensus model-ranking approach that integrates multiple knowledge- or physics-based scoring functions. The procedure for identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on their fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length, using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of ~20% or less of the overall decoys in a set, have good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD < 0.5 Å from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms other popularly used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. 
    Conclusions: By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the others within a loop model set. PMID:20642859

  3. Some simple solutions of Schrödinger's equation for a free particle or for an oscillator

    NASA Astrophysics Data System (ADS)

    Andrews, Mark

    2018-05-01

    For a non-relativistic free particle, we show that the evolution of some simple initial wave functions made up of linear segments can be expressed in terms of Fresnel integrals. Examples include the square wave function and the triangular wave function. The method is then extended to wave functions made from quadratic elements. The evolution of all these initial wave functions can also be found for the harmonic oscillator by a transformation of the free evolutions.
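
    The Fresnel integrals C(x) = ∫_0^x cos(πu²/2) du and S(x) = ∫_0^x sin(πu²/2) du that appear in these free-particle evolutions are easy to evaluate by direct quadrature; a minimal sketch checking their known limit of 1/2 as x grows follows:

```python
import math

# Midpoint-rule evaluation of the Fresnel integrals.  Both C(x) and S(x)
# tend to 1/2 as x -> infinity, with an oscillatory tail of size ~1/(pi*x),
# which sets the loose tolerance used below.

def fresnel(x, n_per_unit=2000):
    n = int(x * n_per_unit)
    h = x / n
    c = s = 0.0
    for i in range(n):
        u = (i + 0.5) * h                   # midpoint of each subinterval
        c += math.cos(math.pi * u * u / 2.0)
        s += math.sin(math.pi * u * u / 2.0)
    return c * h, s * h

C, S = fresnel(20.0)
```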

  4. Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions

    NASA Technical Reports Server (NTRS)

    Rauch, Kevin P.; Holman, Matthew

    1999-01-01

    We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ~ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method for integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method, incorporating time regularization, force-center switching, and an improved kernel function, to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
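
    As a generic illustration of why symplectic maps are attractive for orbit integration (this is a harmonic-oscillator toy, not the WH mapping itself), a leapfrog integrator keeps the energy error bounded over many periods, whereas explicit Euler drifts without bound:

```python
# Harmonic oscillator H = (p**2 + q**2)/2, exact energy 0.5 for
# (q, p) = (1, 0).  Leapfrog (kick-drift-kick) is symplectic; explicit
# Euler is not, and its energy grows by a factor (1 + dt**2) per step.

def leapfrog_max_energy_error(q, p, dt, steps):
    emax = 0.0
    for _ in range(steps):
        p -= 0.5 * dt * q           # half kick (force = -q)
        q += dt * p                 # drift
        p -= 0.5 * dt * q           # half kick
        emax = max(emax, abs(0.5 * (p * p + q * q) - 0.5))
    return emax

def euler_energy_error(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return abs(0.5 * (p * p + q * q) - 0.5)

dt, steps = 0.01, 200000            # roughly 318 oscillation periods
bounded_err = leapfrog_max_energy_error(1.0, 0.0, dt, steps)
drift_err = euler_energy_error(1.0, 0.0, dt, steps)
```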

  5. Stepwise Connectivity of the Modal Cortex Reveals the Multimodal Organization of the Human Brain

    PubMed Central

    Sepulcre, Jorge; Sabuncu, Mert R.; Yeo, Thomas B.; Liu, Hesheng; Johnson, Keith A.

    2012-01-01

    How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological, and neuroimaging research on multimodal integration has stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipito-temporal, prefrontal, and posterior parietal cortices have been associated with sensory multimodal processing. Although this rather patchy organization of brain regions gives us a glimpse of perceptual convergence, the articulation of the flow of information from modality-related to the more parallel cognitive processing systems remains elusive. Using a method called Stepwise Functional Connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain. PMID:22855814
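
    The core arithmetic behind stepwise connectivity, powers of a connectivity matrix counting routes of increasing length from a seed region, can be sketched on a toy unweighted graph (the study itself applies a weighted, normalized analogue to fMRI connectomes):

```python
# For an adjacency matrix A, (A**n)[i][j] counts walks of length n from
# node i to node j.  Toy graph: a 4-node path 0-1-2-3, seeded at node 0.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

A2 = matmul(A, A)    # two-step connectivity
A3 = matmul(A2, A)   # three-step connectivity
# From the seed node 0: one two-step walk reaches node 2, one three-step
# walk reaches node 3, and two three-step walks return to node 1.
```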

  6. Differential Galois theory and non-integrability of planar polynomial vector fields

    NASA Astrophysics Data System (ADS)

    Acosta-Humánez, Primitivo B.; Lázaro, J. Tomás; Morales-Ruiz, Juan J.; Pantazi, Chara

    2018-06-01

    We study a necessary condition for the integrability of planar polynomial vector fields by means of differential Galois theory. More concretely, a necessary condition for the existence of a rational first integral is obtained by means of the variational equations around a particular solution. The method is systematic, starting with the first-order variational equation. We illustrate this result with several families of examples. A key point is to check whether a suitable primitive is elementary or not. Using a theorem by Liouville, the problem is equivalent to the existence of a rational solution of a certain first-order linear equation, the Risch equation. This is a classical problem studied by Risch in 1969, and the solution is given by the "Risch algorithm". In this way we point out the connection of non-integrability with some higher transcendental functions, like the error function.

  7. On the classical and quantum integrability of systems of resonant oscillators

    NASA Astrophysics Data System (ADS)

    Marino, Massimo

    2017-01-01

    We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized action-angle variables, in accordance with the general description of degenerate integrable systems presented by Nekhoroshev in a seminal paper in 1972. We then apply to these classical integrable systems the procedure of quantization that was proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For 3 oscillators with resonance 1:1:2, by using a computer program we have discovered an exceptional integrable system, which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.
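
    The simplest resonant constants of motion can be checked directly. With frequencies (1, 1, 2) and complex mode amplitudes a_j(t) = a_j(0)·exp(-i·w_j·t), the combination a_1·a_2·conj(a_3) is time-independent because its phases cancel; this sketch verifies that identity on the exact solution (it does not reproduce the paper's symmetrization-based quantization or its exceptional integrable system):

```python
import cmath

# Exact evolution of three resonant harmonic-oscillator mode amplitudes
# with frequencies 1:1:2; initial amplitudes are arbitrary test values.
w = (1.0, 1.0, 2.0)
a0 = (0.7 + 0.4j, -0.3 + 1.1j, 0.2 - 0.9j)

def resonance_invariant(t):
    a = [a0[j] * cmath.exp(-1j * w[j] * t) for j in range(3)]
    # Phase factor: exp(-i*t) * exp(-i*t) * exp(+2i*t) = 1 for all t.
    return a[0] * a[1] * a[2].conjugate()

values = [resonance_invariant(t) for t in (0.0, 0.37, 1.0, 2.5, 10.0)]
spread = max(abs(v - values[0]) for v in values)
```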

  8. The Hurwitz Enumeration Problem of Branched Covers and Hodge Integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yun S.

    We use algebraic methods to compute the simple Hurwitz numbers for arbitrary source and target Riemann surfaces. For an elliptic curve target, we reproduce the results previously obtained by string theorists. Motivated by the Gromov-Witten potentials, we find a general generating function for the simple Hurwitz numbers in terms of the representation theory of the symmetric group S_n. We also find a generating function for Hodge integrals on the moduli space M̄_{g,2} of Riemann surfaces with two marked points, similar to that found by Faber and Pandharipande for the case of one marked point.
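
    A tiny brute-force check of the combinatorics behind simple Hurwitz numbers: an n-cycle in S_n factors into n-1 transpositions in exactly n^(n-2) ways (Dénes' formula; the paper's character-theoretic generating functions go far beyond this special case):

```python
from itertools import combinations, product

def compose(p, q):
    """(p o q)(i) = p[q[i]], permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def count_factorizations(n):
    # All transpositions of S_n.
    transpositions = []
    for i, j in combinations(range(n), 2):
        t = list(range(n))
        t[i], t[j] = t[j], t[i]
        transpositions.append(tuple(t))
    target = tuple(list(range(1, n)) + [0])      # the n-cycle (0 1 ... n-1)
    count = 0
    for factors in product(transpositions, repeat=n - 1):
        perm = tuple(range(n))
        for t in factors:
            perm = compose(t, perm)
        if perm == target:
            count += 1
    return count

# Denes' formula predicts n**(n-2): 1, 3, 16 for n = 2, 3, 4.
counts = [count_factorizations(n) for n in (2, 3, 4)]
```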

  9. Contact problem on indentation of an elastic half-plane with an inhomogeneous coating by a flat punch in the presence of tangential stresses on a surface

    NASA Astrophysics Data System (ADS)

    Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.

    2018-05-01

    Indentation of an elastic half-plane with a functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses, represented in the form of a Fourier series. The problem is reduced to the solution of two dual integral equations, over even and odd functions, describing the distribution of the unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximate analytical expressions for the normal contact stresses are provided.

  10. A new operational approach for solving fractional variational problems depending on indefinite integrals

    NASA Astrophysics Data System (ADS)

    Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.

    2018-04-01

    In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a fixed type of Riemann-Liouville fractional integral. The proposed technique is based on shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency, and applicability of the proposed algorithm.
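
    The Riemann-Liouville operator that the operational matrix discretizes acts on monomials in closed form, I^a t^k = Γ(k+1)/Γ(k+1+a)·t^(k+a). A quick numerical check of that identity for a = 1/2 (a sanity check, not the paper's Chebyshev machinery) follows:

```python
import math

# I^{1/2} f(t) = (1/Gamma(1/2)) * Integral_0^t (t-s)^(-1/2) f(s) ds.
# The substitution s = t - u**2 removes the endpoint singularity:
#   I^{1/2} f(t) = (2/sqrt(pi)) * Integral_0^sqrt(t) f(t - u**2) du.

def rl_half_integral(f, t, n=20000):
    h = math.sqrt(t) / n
    total = sum(f(t - ((i + 0.5) * h) ** 2) for i in range(n)) * h  # midpoint rule
    return 2.0 / math.sqrt(math.pi) * total

t, k = 2.0, 2
numeric = rl_half_integral(lambda s: s ** k, t)
exact = math.gamma(k + 1) / math.gamma(k + 1.5) * t ** (k + 0.5)
```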

  11. Density-functional theory simulation of large quantum dots

    NASA Astrophysics Data System (ADS)

    Jiang, Hong; Baranger, Harold U.; Yang, Weitao

    2003-10-01

    Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.

  12. Master equations and the theory of stochastic path integrals.

    PubMed

    Weber, Markus F; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. 
Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. We also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
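Master equations of the kind reviewed above can be sampled exactly with Gillespie's stochastic simulation algorithm. The sketch below (not code from the review; the rates, initial state, and seed are illustrative) simulates a linear birth-death process with constant production and linear degradation, whose stationary distribution is Poisson with mean birth_rate/death_rate:

```python
import random

def gillespie_birth_death(birth_rate, death_rate, n0, t_max, rng):
    """Exact stochastic simulation of the birth-death master equation:
    0 -> A with propensity `birth_rate`, A -> 0 with propensity `death_rate * n`.
    Returns the lists of jump times and copy numbers."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max:
        a_birth = birth_rate
        a_death = death_rate * n
        a_total = a_birth + a_death
        if a_total == 0.0:                    # absorbing state (not reached when birth_rate > 0)
            break
        t += rng.expovariate(a_total)         # exponential waiting time to the next reaction
        if rng.random() < a_birth / a_total:  # choose which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return times, counts

rng = random.Random(42)
times, counts = gillespie_birth_death(10.0, 1.0, 0, 50.0, rng)
# the stationary distribution is Poisson(birth_rate / death_rate) = Poisson(10)
```

The trajectory fluctuates around the stationary mean of 10; averaging many such runs approximates the solution of the forward master equation without any low-noise approximation.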

  13. A finite element-boundary integral method for cavities in a circular cylinder

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.

    1992-01-01

    Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. However, due to a lack of rigorous mathematical models for conformal antenna arrays, antenna designers resort to measurement and planar antenna concepts for designing non-planar conformal antennas. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We extend this formulation to conformal arrays on large metallic cylinders. In this report, we develop the mathematical formulation. In particular, we discuss the shape functions, the resulting finite elements and the boundary integral equations, and the solution of the conformal finite element-boundary integral system. Some validation results are presented and we further show how this formulation can be applied with minimal computational and memory resources.

  14. Two-loop renormalization of the quark propagator in the light-cone gauge

    NASA Astrophysics Data System (ADS)

    Williams, James Daniel

    The divergent parts of the five two-loop quark self-energy diagrams of quantum chromodynamics are evaluated in the noncovariant light-cone gauge. Most of the Feynman integrals are computed by means of the powerful matrix integration method, originally developed for the author's Master's thesis. From the results of the integrations, it is shown how to renormalize the quark mass and wave function in such a way that the effective quark propagator is rendered finite at two-loop order. The required counterterms turn out to be local functions of the quark momentum, due to cancellation of the nonlocal divergent parts of the two-loop integrals with equal and opposite contributions from one-loop counterterm subtraction diagrams. The final form of the counterterms is seen to be consistent with the renormalization framework proposed by Bassetto, Dalbosco, and Soldati, in which all noncovariant divergences are absorbed into the wave function normalizations. It also turns out that the mass renormalization δm is the same in the light-cone gauge as it is in a general covariant gauge, at least up to two-loop order.

  15. Sensory integration functions of children with cochlear implants.

    PubMed

    Koester, AnjaLi Carrasco; Mailloux, Zoe; Coleman, Gina Geppert; Mori, Annie Baltazar; Paul, Steven M; Blanche, Erna; Muhs, Jill A; Lim, Deborah; Cermak, Sharon A

    2014-01-01

    OBJECTIVE. We investigated sensory integration (SI) function in children with cochlear implants (CIs). METHOD. We analyzed deidentified records from 49 children ages 7 mo to 83 mo with CIs. Records included Sensory Integration and Praxis Tests (SIPT), Sensory Processing Measure (SPM), Sensory Profile (SP), Developmental Profile 3 (DP-3), and Peabody Developmental Motor Scales (PDMS), with scores depending on participants' ages. We compared scores with normative population mean scores and with previously identified patterns of SI dysfunction. RESULTS. One-sample t tests revealed significant differences between children with CIs and the normative population on the majority of the SIPT items associated with the vestibular and proprioceptive bilateral integration and sequencing (VPBIS) pattern. Available scores for children with CIs on the SPM, SP, DP-3, and PDMS indicated generally typical ratings. CONCLUSION. SIPT scores in a sample of children with CIs reflected the VPBIS pattern of SI dysfunction, demonstrating the need for further examination of SI functions in children with CIs during occupational therapy assessment and intervention planning. Copyright © 2014 by the American Occupational Therapy Association, Inc.
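The comparisons against normative population means rest on the one-sample t statistic, t = (x̄ − μ₀)/(s/√n). A minimal pure-Python computation (the standard scores below are hypothetical, not the study's data):

```python
import math

def one_sample_t(sample, mu0):
    """One-sample t statistic: t = (xbar - mu0) / (s / sqrt(n)),
    where s is the sample standard deviation (n - 1 denominator)."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    return (xbar - mu0) / math.sqrt(s2 / n)

# e.g. SIPT-style standard scores compared with a normative mean of 0.0
t = one_sample_t([-1.2, -0.8, -1.5, -0.3, -1.0], 0.0)
```

A strongly negative t, as here, would indicate that the sample scores sit significantly below the normative mean once compared against the t distribution with n − 1 degrees of freedom.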

  16. FERMI: a digital Front End and Readout MIcrosystem for high resolution calorimetry

    NASA Astrophysics Data System (ADS)

    Alexanian, H.; Appelquist, G.; Bailly, P.; Benetta, R.; Berglund, S.; Bezamat, J.; Blouzon, F.; Bohm, C.; Breveglieri, L.; Brigati, S.; Cattaneo, P. W.; Dadda, L.; David, J.; Engström, M.; Genat, J. F.; Givoletti, M.; Goggi, V. G.; Gong, S.; Grieco, G. M.; Hansen, M.; Hentzell, H.; Holmberg, T.; Höglund, I.; Inkinen, S. J.; Kerek, A.; Landi, C.; Ledortz, O.; Lippi, M.; Lofstedt, B.; Lund-Jensen, B.; Maloberti, F.; Mutz, S.; Nayman, P.; Piuri, V.; Polesello, G.; Sami, M.; Savoy-Navarro, A.; Schwemling, P.; Stefanelli, R.; Sundblad, R.; Svensson, C.; Torelli, G.; Vanuxem, J. P.; Yamdagni, N.; Yuan, J.; Ödmark, A.; Fermi Collaboration

    1995-02-01

    We present a digital solution for the front-end electronics of high resolution calorimeters at future colliders. It is based on analogue signal compression, high speed A/D converters, a fully programmable pipeline and a digital signal processing (DSP) chain with local intelligence and system supervision. This digital solution is aimed at providing maximal front-end processing power by performing waveform analysis using DSP methods. For the system integration of the multichannel device, a silicon-on-silicon multi-chip module (MCM) has been adopted. This solution allows a high level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. This type of multichip integration provides a high degree of reliability and programmability at both the function and the system level, with the additional possibility of customising the microsystem to detector-specific requirements. For enhanced reliability in high radiation environments, fault tolerance strategies, i.e. redundancy, reconfigurability, majority voting and coding for error detection and correction, are integrated into the design.

  17. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
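The implicit midpoint (IM) scheme discussed above solves x' = x + h f((x + x')/2) at each step. A generic sketch (not the paper's code) using fixed-point iteration follows; the rigid-rotation field is a stand-in for the paper's magnetic fields, and for such a linear solenoidal field IM preserves the quadratic invariant x² + y² up to the iteration tolerance:

```python
def implicit_midpoint_step(f, x, h, iters=50, tol=1e-12):
    """One step of the implicit midpoint rule x' = x + h * f((x + x') / 2),
    solved by fixed-point iteration starting from the explicit Euler guess."""
    x_new = [xi + h * fi for xi, fi in zip(x, f(x))]
    for _ in range(iters):
        mid = [(a + b) / 2 for a, b in zip(x, x_new)]
        nxt = [xi + h * fi for xi, fi in zip(x, f(mid))]
        if max(abs(a - b) for a, b in zip(nxt, x_new)) < tol:
            return nxt
        x_new = nxt
    return x_new

# Divergence-free test field: rigid rotation f(x, y) = (-y, x).
rotate = lambda x: [-x[1], x[0]]

x = [1.0, 0.0]
for _ in range(1000):
    x = implicit_midpoint_step(rotate, x, 0.01)
radius2 = x[0] ** 2 + x[1] ** 2   # IM conserves x^2 + y^2 for this linear field
```

The fixed-point iteration converges for small enough h; for stiff fields a Newton solve would replace it, but the conservation behavior of the scheme is unchanged.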

  18. Fusing literature and full network data improves disease similarity computation.

    PubMed

    Li, Ping; Nie, Yaling; Yu, Jingkai

    2016-08-30

    Identifying relatedness among diseases could help deepen understanding of the underlying pathogenic mechanisms of diseases and facilitate drug repositioning projects. A number of methods for computing disease similarity had been developed; however, none of them were designed to utilize information of the entire protein interaction network, using instead only those interactions involving disease causing genes. Most previously published methods required gene-disease association data; unfortunately, many diseases still have very few or no associated genes, which has impeded broad adoption of those methods. In this study, we propose a new method (MedNetSim) for computing disease similarity by integrating medical literature and protein interaction network. MedNetSim consists of a network-based method (NetSim), which employs the entire protein interaction network, and a MEDLINE-based method (MedSim), which computes disease similarity by mining the biomedical literature. Among function-based methods, NetSim achieved the best performance. Its average AUC (area under the receiver operating characteristic curve) reached 95.2 %. MedSim, whose performance was even comparable to some function-based methods, acquired the highest average AUC among all semantic-based methods. Integration of MedSim and NetSim (MedNetSim) further improved the average AUC to 96.4 %. We further studied the effectiveness of different data sources. It was found that the quality of protein interaction data was more important than its volume. On the contrary, a higher volume of gene-disease association data was more beneficial, even with a lower reliability. Utilizing a higher volume of disease-related gene data further improved the average AUC of MedNetSim and NetSim to 97.5 % and 96.7 %, respectively. Integrating biomedical literature and protein interaction network can be an effective way to compute disease similarity.
Lacking sufficient disease-related gene data, literature-based methods such as MedSim can be a great addition to function-based algorithms. It may be beneficial to steer more resources toward studying gene-disease associations and improving the quality of protein interaction data. Disease similarities can be computed using the proposed methods at http://www.digintelli.com:8000/.
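The AUC figures quoted above can be computed without tracing an explicit ROC curve, using the Mann-Whitney interpretation: the AUC equals the probability that a randomly chosen positive pair outscores a randomly chosen negative pair, with ties counting half. A minimal sketch (the similarity scores below are hypothetical):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative
    (ties count as one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# disease pairs judged similar (positives) should score above dissimilar ones
value = auc([0.9, 0.8, 0.75], [0.4, 0.3, 0.8])
```

The quadratic pairwise loop is fine for small evaluations; for large benchmarks the same quantity is computed in O(n log n) from rank sums.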

  19. MDAS: an integrated system for metabonomic data analysis.

    PubMed

    Liu, Juan; Li, Bo; Xiong, Jiang-Hui

    2009-03-01

    Metabonomics, the latest 'omics' research field, shows great promise as a tool in biomarker discovery, drug efficacy and toxicity analysis, and disease diagnosis and prognosis. One of the major challenges now facing researchers is how to process these data to yield useful information about a biological system, e.g., the mechanism of diseases. Traditional methods employed in metabonomic data analysis use multivariate analysis methods developed independently in chemometrics research. Additionally, with the development of machine learning approaches, some methods such as SVMs also show promise for use in metabonomic data analysis. Aside from the application of general multivariate analysis and machine learning methods to this problem, there is also a need for an integrated tool customized for metabonomic data analysis which can be easily used by biologists to reveal interesting patterns in metabonomic data. In this paper, we present a novel software tool, MDAS (Metabonomic Data Analysis System), for metabonomic data analysis which integrates traditional chemometrics methods and newly introduced machine learning approaches. MDAS contains a suite of functional models for metabonomic data analysis and optimizes the flow of data analysis. Several file formats can be accepted as input. The input data can be optionally preprocessed and can then be processed with operations such as feature analysis and dimensionality reduction. The data with reduced dimensionalities can be used for training or testing through machine learning models. The system supplies proper visualization for data preprocessing, feature analysis, and classification, which can be a powerful function for users to extract knowledge from the data. MDAS is an integrated platform for metabonomic data analysis, which transforms a complex analysis procedure into a more formalized and simplified one. The software package can be obtained from the authors.

  20. Predicting Human Protein Subcellular Locations by the Ensemble of Multiple Predictors via Protein-Protein Interaction Network with Edge Clustering Coefficients

    PubMed Central

    Du, Pufeng; Wang, Lusheng

    2014-01-01

    One of the fundamental tasks in biology is to identify the functions of all proteins to reveal the primary machinery of a cell. Knowledge of the subcellular locations of proteins will provide key hints to reveal their functions and to understand the intricate pathways that regulate biological processes at the cellular level. Protein subcellular location prediction has been extensively studied in the past two decades, and many methods have been developed based on protein primary sequences as well as protein-protein interaction networks. In this paper, we propose to use the protein-protein interaction network as an infrastructure to integrate existing sequence-based predictors. When predicting the subcellular locations of a given protein, not only the protein itself but also all its interacting partners are considered. Unlike existing methods, our method requires neither comprehensive knowledge of the protein-protein interaction network nor experimentally annotated subcellular locations for most proteins in the network. Moreover, our method can be used as a framework to integrate multiple predictors. Our method achieved a 56% absolute-true rate on the human proteome, which is higher than the state-of-the-art methods. PMID:24466278
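The neighborhood idea can be sketched as a weighted vote over a protein's interaction partners. This is a simplification: the paper's ensemble additionally weights edges by clustering coefficients and combines multiple predictors, which is omitted here, and all protein names, labels, and weights below are hypothetical:

```python
from collections import Counter

def predict_location(protein, interactions, base_prediction, self_weight=2.0):
    """Vote over the subcellular locations predicted for a protein and its
    interaction partners; the protein's own prediction gets a larger weight.
    `interactions` maps each protein to a list of partners;
    `base_prediction` maps each protein to a location label."""
    votes = Counter()
    votes[base_prediction[protein]] += self_weight
    for partner in interactions.get(protein, []):
        if partner in base_prediction:
            votes[base_prediction[partner]] += 1.0
    return votes.most_common(1)[0][0]

ppi = {"P1": ["P2", "P3", "P4"]}
base = {"P1": "cytoplasm", "P2": "nucleus", "P3": "nucleus", "P4": "nucleus"}
loc = predict_location("P1", ppi, base)   # 3 partner votes outweigh the self-weight of 2
```

Raising `self_weight` shifts the balance back toward the sequence-based prediction, which is the knob an ensemble of this shape would tune.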

  1. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
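An ADF of the kind described can be constructed explicitly on [-1, 1] from Legendre polynomials as delta_N(x) = sum_k ((2k+1)/2) P_k(0) P_k(x), which reproduces p(0) exactly for every polynomial p of degree at most N. The sketch below (an illustration of the ADF property with exact rational arithmetic, not the paper's PFV implementation) builds the polynomial and verifies the integral identity:

```python
from fractions import Fraction

def legendre_coeffs(k):
    """Coefficients (low to high degree) of the Legendre polynomial P_k,
    via Bonnet's recursion (m+1) P_{m+1} = (2m+1) x P_m - m P_{m-1}."""
    p_prev, p = [Fraction(1)], [Fraction(0), Fraction(1)]
    if k == 0:
        return p_prev
    for m in range(1, k):
        nxt = [Fraction(0)] * (m + 2)
        for i, c in enumerate(p):            # (2m+1)/(m+1) * x * P_m term
            nxt[i + 1] += Fraction(2 * m + 1, m + 1) * c
        for i, c in enumerate(p_prev):       # -m/(m+1) * P_{m-1} term
            nxt[i] -= Fraction(m, m + 1) * c
        p_prev, p = p, nxt
    return p

def adf_coeffs(N):
    """Approximate delta function delta_N(x) = sum_k (2k+1)/2 * P_k(0) * P_k(x)."""
    out = [Fraction(0)] * (N + 1)
    for k in range(N + 1):
        pk = legendre_coeffs(k)
        pk0 = pk[0]                          # P_k(0) is the constant coefficient
        for i, c in enumerate(pk):
            out[i] += Fraction(2 * k + 1, 2) * pk0 * c
    return out

def integrate_product(a, b):
    """Exact integral over [-1, 1] of the product of two coefficient polynomials."""
    prod = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    total = Fraction(0)
    for d, c in enumerate(prod):
        if d % 2 == 0:                       # odd powers integrate to zero on [-1, 1]
            total += c * Fraction(2, d + 1)
    return total

delta2 = adf_coeffs(2)                       # coefficients of 9/8 - (15/8) x^2
```

Integrating `delta2` against any polynomial of degree <= 2 returns that polynomial's value at 0, which is the defining property the PFV construction exploits at higher order and at shifted nodal points.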

  2. MicroScope-an integrated resource for community expertise of gene functions and comparative analysis of microbial genomic and metabolic data.

    PubMed

    Médigue, Claudine; Calteau, Alexandra; Cruveiller, Stéphane; Gachet, Mathieu; Gautreau, Guillaume; Josso, Adrien; Lajus, Aurélie; Langlois, Jordan; Pereira, Hugo; Planel, Rémi; Roche, David; Rollin, Johan; Rouy, Zoe; Vallenet, David

    2017-09-12

    The overwhelming list of new bacterial genomes becoming available on a daily basis makes accurate genome annotation an essential step that ultimately determines the relevance of thousands of genomes stored in public databanks. The MicroScope platform (http://www.genoscope.cns.fr/agc/microscope) is an integrative resource that supports systematic and efficient revision of microbial genome annotation, data management and comparative analysis. Starting from the results of our syntactic, functional and relational annotation pipelines, MicroScope provides an integrated environment for the expert annotation and comparative analysis of prokaryotic genomes. It combines tools and graphical interfaces to analyze genomes and to perform the manual curation of gene function in a comparative genomics and metabolic context. In this article, we describe the free-of-charge MicroScope services for the annotation and analysis of microbial (meta)genomes, transcriptomic and re-sequencing data. The functionalities of the platform are then presented in a way that provides practical guidance and help to nonspecialists in bioinformatics. Newly integrated analysis tools (i.e. prediction of virulence and resistance genes in bacterial genomes) and an original method recently developed (the pan-genome graph representation) are also described. Integrated environments such as MicroScope clearly contribute, through the user community, to help maintain accurate resources. © The Author 2017. Published by Oxford University Press.

  3. Almost analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2017-11-01

    We present a new, almost analytical approach to solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of solving this matrix eigenvalue problem purely numerically, which may suffer from computational inaccuracy for big data, we first consider a pair of integral and differential equations related to the so-called prolate spheroidal wave functions (PSWF). For the PSWF differential equation, the pairs of eigenvectors (PSWF) and eigenvalues can be obtained from a relatively small number of analytical Legendre functions. Then, the eigenvalues in the PSWF integral equation are expressed in terms of functional values of the PSWF and the eigenvalues of the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues in the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data; ordinary irregular waves and rogue waves. We found that the present almost analytical method is better than the conventional data-independent Fourier representation and also the conventional direct numerical K-L representation in terms of both accuracy and computational cost. This work was supported by the National Research Foundation of Korea (NRF). (NRF-2017R1D1A1B03028299).
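The purely numerical route that this paper improves upon is an eigendecomposition of the sample covariance (kernel) matrix; the leading K-L mode can be sketched with power iteration. This is a generic illustration, not the PSWF-based method, and the data and matrices below are toy examples:

```python
import math

def covariance_matrix(samples):
    """Sample covariance matrix of a list of equal-length records."""
    n, d = len(samples), len(samples[0])
    mean = [sum(rec[j] for rec in samples) / n for j in range(d)]
    return [[sum((rec[i] - mean[i]) * (rec[j] - mean[j]) for rec in samples) / (n - 1)
             for j in range(d)] for i in range(d)]

def leading_mode(C, iters=200):
    """Power iteration: dominant eigenpair of a symmetric matrix C,
    i.e. the first Karhunen-Loeve mode and its variance."""
    v = [1.0] * len(C)
    lam = 0.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        lam = norm
    return lam, v

C = covariance_matrix([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
lam, mode = leading_mode([[2.0, 1.0], [1.0, 2.0]])   # dominant eigenvalue 3, mode along (1, 1)
```

For large records this matrix route becomes expensive and ill-conditioned, which is precisely the motivation for the analytical PSWF construction in the abstract.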

  4. Development and applications of 3-dimensional integration nanotechnologies.

    PubMed

    Kim, Areum; Choi, Eunmi; Son, Hyungbin; Pyo, Sung Gyu

    2014-02-01

    Unlike conventional two-dimensional (2D) planar structures, in three-dimensional (3D) integration technology signal or power is supplied through through-silicon vias (TSVs), replacing wires for binding the chip/wafer. TSVs have become an essential technology for sustaining Moore's law. This 3D integration technology enables system and sensor functions at the nanoscale via the implementation of a highly integrated nano-semiconductor as well as the fabrication of a single chip with multiple functions. Thus, this technology is considered to be a new area of development for the systemization of the nano-bio area. In this review paper, the basic technology required for such 3D integration is described, and methods to measure the bonding strength in order to detect the voids occurring during bonding are introduced. Currently, CMOS image sensors and memory chips associated with nanotechnology are being realized on the basis of 3D integration technology. In this paper, we intend to describe the applications of high-performance nano-biosensor technology currently under development and the direction of development of a high-performance lab-on-a-chip (LOC).

  5. Graphene/Si CMOS Hybrid Hall Integrated Circuits

    PubMed Central

    Huang, Le; Xu, Huilong; Zhang, Zhiyong; Chen, Chengying; Jiang, Jianhua; Ma, Xiaomeng; Chen, Bingyan; Li, Zishen; Zhong, Hua; Peng, Lian-Mao

    2014-01-01

    Graphene/silicon CMOS hybrid integrated circuits (ICs) should provide powerful functions that combine the ultra-high carrier mobility of graphene with the sophisticated functions of silicon CMOS ICs, but it is difficult to integrate these two kinds of heterogeneous devices on a single chip. In this work, a low-temperature process is developed for integrating graphene devices onto silicon CMOS ICs for the first time, and a high performance graphene/CMOS hybrid Hall IC is demonstrated. Signal amplifying/processing ICs are manufactured via commercial 0.18 μm silicon CMOS technology, and graphene Hall elements (GHEs) are fabricated on top of the passivation layer of the CMOS chip via a low-temperature micro-fabrication process. The sensitivity of the GHE on the CMOS chip is further improved by integrating the GHE with the CMOS amplifier on the Si chip. This work not only paves the way to fabricate graphene/Si CMOS Hall ICs with much higher performance than that of conventional Hall ICs, but also provides a general method for scalable integration of graphene devices with silicon CMOS ICs via a low-temperature process. PMID:24998222

  6. Graphene/Si CMOS hybrid hall integrated circuits.

    PubMed

    Huang, Le; Xu, Huilong; Zhang, Zhiyong; Chen, Chengying; Jiang, Jianhua; Ma, Xiaomeng; Chen, Bingyan; Li, Zishen; Zhong, Hua; Peng, Lian-Mao

    2014-07-07

    Graphene/silicon CMOS hybrid integrated circuits (ICs) should provide powerful functions that combine the ultra-high carrier mobility of graphene with the sophisticated functions of silicon CMOS ICs, but it is difficult to integrate these two kinds of heterogeneous devices on a single chip. In this work, a low-temperature process is developed for integrating graphene devices onto silicon CMOS ICs for the first time, and a high performance graphene/CMOS hybrid Hall IC is demonstrated. Signal amplifying/processing ICs are manufactured via commercial 0.18 μm silicon CMOS technology, and graphene Hall elements (GHEs) are fabricated on top of the passivation layer of the CMOS chip via a low-temperature micro-fabrication process. The sensitivity of the GHE on the CMOS chip is further improved by integrating the GHE with the CMOS amplifier on the Si chip. This work not only paves the way to fabricate graphene/Si CMOS Hall ICs with much higher performance than that of conventional Hall ICs, but also provides a general method for scalable integration of graphene devices with silicon CMOS ICs via a low-temperature process.

  7. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
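The pair-count machinery underlying such measurements can be sketched in one dimension with the natural (Peebles-Hauser) estimator, xi = (DD/RR) * (N_R(N_R-1))/(N_D(N_D-1)) - 1; the paper's improvements (mean-density handling, minimum-variance pair weighting, the Landy-Szalay DR term) are deliberately not reproduced, and the mock catalogs below are illustrative:

```python
import random

def pair_counts(points, bins):
    """Histogram of pairwise separations into distance bins [b_k, b_{k+1})."""
    counts = [0] * (len(bins) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = abs(points[i] - points[j])
            for k in range(len(bins) - 1):
                if bins[k] <= r < bins[k + 1]:
                    counts[k] += 1
                    break
    return counts

def xi_natural(data, randoms, bins):
    """Peebles-Hauser estimator xi = (DD/RR) * (N_R(N_R-1))/(N_D(N_D-1)) - 1."""
    dd = pair_counts(data, bins)
    rr = pair_counts(randoms, bins)
    norm = (len(randoms) * (len(randoms) - 1)) / (len(data) * (len(data) - 1))
    return [d / r * norm - 1.0 if r else 0.0 for d, r in zip(dd, rr)]

rng = random.Random(0)
data = [rng.random() for _ in range(200)]          # unclustered mock "galaxies"
randoms = [rng.random() for _ in range(400)]
bins = [0.0, 0.1, 0.2, 0.3, 0.4]
xi = xi_natural(data, randoms, bins)               # should hover near zero
```

For an unclustered catalog the estimate fluctuates around zero; real clustering shows up as a positive excess at small separations, and the estimator variance at large separations is where the paper's weighting schemes pay off.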

  8. Exact soliton of (2 + 1)-dimensional fractional Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Rizvi, S. T. R.; Ali, K.; Bashir, S.; Younis, M.; Ashraf, R.; Ahmad, M. O.

    2017-07-01

    The nonlinear fractional Schrödinger equation is the basic equation of fractional quantum mechanics, introduced by Nick Laskin in 2002. We apply three tools to solve this mathematical-physical model. First, we find the solitary wave solutions, including the trigonometric traveling wave solutions and bell- and kink-shaped solitons, using the F-expansion and improved F-expansion methods. We also obtain the soliton solution, singular soliton solutions, rational function solution and elliptic integral function solutions with the help of the extended trial equation method.

  9. Multi-scale integration and predictability in resting state brain activity

    PubMed Central

    Kolchinsky, Artemy; van den Heuvel, Martijn P.; Griffa, Alessandra; Hagmann, Patric; Rocha, Luis M.; Sporns, Olaf; Goñi, Joaquín

    2014-01-01

    The human brain displays heterogeneous organization in both structure and function. Here we develop a method to characterize brain regions and networks in terms of information-theoretic measures. We look at how these measures scale when larger spatial regions as well as larger connectome sub-networks are considered. This framework is applied to human brain fMRI recordings of resting-state activity and DSI-inferred structural connectivity. We find that strong functional coupling across large spatial distances distinguishes functional hubs from unimodal low-level areas, and that this long-range functional coupling correlates with structural long-range efficiency on the connectome. We also find a set of connectome regions that are both internally integrated and coupled to the rest of the brain, and which resemble previously reported resting-state networks. Finally, we argue that information-theoretic measures are useful for characterizing the functional organization of the brain at multiple scales. PMID:25104933
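A basic information-theoretic measure of the kind applied to regional signals is the mutual information between two discretized time series. A minimal sketch (the binning and the toy signals are illustrative, not the study's fMRI pipeline):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits from paired discrete observations,
    using plug-in (empirical) probability estimates."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) written with counts: (c/n) * n * n / (c_x * c_y)
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# perfectly coupled binary signals share one bit; independent ones share none
coupled = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])
independent = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```

Summing such pairwise couplings over region pairs at increasing spatial distance is one simple way to probe the long-range functional coupling the abstract describes, though plug-in estimates need bias correction for short recordings.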

  10. Role of binding entropy in the refinement of protein-ligand docking predictions: analysis based on the use of 11 scoring functions.

    PubMed

    Ruvinsky, Anatoly M

    2007-06-01

    We present results of testing the ability of eleven popular scoring functions to predict native docked positions using a recently developed method (Ruvinsky and Kozintsev, J Comput Chem 2005, 26, 1089) for estimating the entropy contributions of relative motions to protein-ligand binding affinity. The method is based on the integration of the configurational integral over clusters obtained from multiple docked positions. We use a test set of 100 PDB protein-ligand complexes and ensembles of 101 docked positions generated by (Wang et al. J Med Chem 2003, 46, 2287) for each ligand in the test set. To test the suggested method we compared the averaged root-mean-square deviations (RMSD) of the top-scored ligand docked positions, accounting and not accounting for entropy contributions, relative to the experimentally determined positions. We demonstrate that the method increases docking accuracy by 10-21% when used in conjunction with the AutoDock scoring function, by 2-25% with G-Score, by 7-41% with D-Score, by 0-8% with LigScore, by 1-6% with PLP, by 0-12% with LUDI, by 2-8% with F-Score, by 7-29% with ChemScore, by 0-9% with X-Score, by 2-19% with PMF, and by 1-7% with DrugScore. We also compared the performance of the suggested method with the method based on ranking by cluster occupancy only. We analyze how the choice of the clustering-RMSD and the lower bound on dense-cluster occupancy affect the docking accuracy of the scoring methods. We derive optimal intervals of the clustering-RMSD for the 11 scoring functions.
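Clustering docked positions by a clustering-RMSD threshold and ranking clusters by occupancy can be sketched with a simple leader-style algorithm. This is a toy illustration only: real poses are aligned atom-coordinate sets, and the entropy term from the configurational integral over each cluster is omitted; the "poses" below are short coordinate lists:

```python
def rmsd(a, b):
    """Root-mean-square deviation between two equal-length coordinate lists."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def leader_cluster(poses, threshold):
    """Assign each pose to the first cluster whose leader lies within
    `threshold` RMSD; otherwise start a new cluster. Returns clusters
    as lists of pose indices, largest (highest occupancy) first."""
    leaders, clusters = [], []
    for idx, pose in enumerate(poses):
        for leader, members in zip(leaders, clusters):
            if rmsd(pose, leader) <= threshold:
                members.append(idx)
                break
        else:                                  # no leader close enough
            leaders.append(pose)
            clusters.append([idx])
    return sorted(clusters, key=len, reverse=True)

poses = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.05, 0.05], [5.1, 4.9]]
clusters = leader_cluster(poses, threshold=1.0)   # two clusters, sizes 3 and 2
```

Occupancy-ranked clusters are the input to the entropy correction: broad, well-populated clusters gain a favorable entropy term relative to isolated poses, which is what shifts the top-ranked position in the abstract's tests.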

  11. Preliminary candidate advanced avionics system for general aviation

    NASA Technical Reports Server (NTRS)

    Mccalla, T. M.; Grismore, F. L.; Greatline, S. E.; Birkhead, L. M.

    1977-01-01

    An integrated avionics system design was carried out to a level that indicates subsystem function and the methods of overall system integration. Sufficient detail was included to allow identification of possible system component technologies and to perform reliability, modularity, maintainability, cost, and risk analyses of the system design. Retrofit to older aircraft and the availability of this system for single-engine, two-place aircraft were considered.

  12. Computer-generated formulas for three-center nuclear-attraction integrals (electrostatic potential) for Slater-type orbitals

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1984-01-01

    The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.

  13. Integral formulae of the canonical correlation functions for the one dimensional transverse Ising model

    NASA Astrophysics Data System (ADS)

    Inoue, Makoto

    2017-12-01

    Some new formulae for the canonical correlation functions of the one-dimensional quantum transverse Ising model are found by the ST-transformation method, using Morita's sum rule and its extensions for the two-dimensional classical Ising model. As a consequence, we obtain a time-independent term of the dynamical correlation functions. Differences between the quantum and classical versions of these formulae are also discussed.

  14. Calculation of the Full Scattering Amplitude without Partial Wave Decomposition II

    NASA Technical Reports Server (NTRS)

    Shertzer, J.; Temkin, A.

    2003-01-01

    As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE) can be reduced to a 2d partial differential equation (pde), and was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation. The resultant equation can be reduced to a pair of coupled pde's, to which the finite element method can still be applied. The resultant scattering amplitudes, both singlet and triplet, as a function of angle can be calculated for various energies. The results are in excellent agreement with converged partial wave results.

  15. Design and characterization of microcapsules-integrated collagen matrixes as multifunctional three-dimensional scaffolds for soft tissue engineering.

    PubMed

    Del Mercato, Loretta L; Passione, Laura Gioia; Izzo, Daniela; Rinaldi, Rosaria; Sannino, Alessandro; Gervaso, Francesca

    2016-09-01

    Three-dimensional (3D) porous scaffolds based on collagen are promising candidates for soft tissue engineering applications. The addition of stimuli-responsive carriers (nano- and microparticles) to current approaches to tissue reconstruction and repair brings novel challenges in the design of carrier-integrated polymer scaffolds. In this study, a facile method was developed to functionalize 3D collagen porous scaffolds with biodegradable multilayer microcapsules. The effects of the capsule charge as well as the influence of the functionalization methods on the binding efficiency to the scaffolds were studied. It was found that the binding of cationic microcapsules was higher than that of anionic ones, and that applying vacuum during scaffold functionalization significantly hindered the attachment of the microcapsules to the collagen matrix. The physical properties of microcapsule-integrated scaffolds were compared to pristine scaffolds. The modified scaffolds showed swelling ratios, weight losses and mechanical properties similar to those of unmodified scaffolds. Finally, in vitro diffusion tests proved that the collagen scaffolds could stably retain the microcapsules over long incubation times in Tris-HCl buffer at 37°C without undergoing morphological changes, thus confirming their suitability for tissue engineering applications. The results indicate that by tuning the charge of the microcapsules and varying the fabrication conditions, collagen scaffolds patterned with a high or low number of microcapsules can be obtained, and that the microcapsule-integrated scaffolds fully retain their original physical properties. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Multifunctionality assessment of urban agriculture in Beijing City, China.

    PubMed

    Peng, Jian; Liu, Zhicong; Liu, Yanxu; Hu, Xiaoxu; Wang, An

    2015-12-15

    As an important approach to the realization of agricultural sustainable development, multifunctionality has become a hot spot in the field of urban agriculture. Taking 13 agricultural counties of Beijing City as the assessment units, this study selects 10 assessment indices covering ecological, economic and social aspects, determines the index weights using the Analytic Hierarchy Process (AHP) method, and establishes an index system for the integrated agricultural function. Based on standardized data from an agricultural census and remote sensing, the integrated function and multifunctionality of urban agriculture in Beijing City are assessed through index grade mapping. The results show that the agricultural counties with the highest scores in ecological, economic, and social function are Yanqing, Changping, and Miyun, respectively; the greatest disparity among the counties is in economic function, followed by social and ecological function. Topography and human disturbance may be the factors that affect the integrated agricultural function, which first rises and then drops with increasing mean slope, average altitude, and distance from the city. At the macro level the city as a whole is balanced among ecological, economic, and social functions, with 8 of the 13 counties belonging to ecology-society-economy balanced areas, while no county is dominant in only one of the three functions. On the micro scale, however, different counties have their own functional inclinations: Miyun, Yanqing, Mentougou, and Fengtai are ecology-society dominant, and Tongzhou is ecology-economy dominant. The agricultural multifunctionality of Beijing City declines from north to south, with Pinggu showing the most significant agricultural multifunctionality. The results match up well with the objective conditions of Beijing's urban agriculture planning, which proves the methodological rationality of the assessment to a certain extent. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Applied Mathematical Methods in Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Masujima, Michio

    2005-04-01

    All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a brief review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the material discussed in the main text and allowing problems to be solved by making direct use of the methods illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.

  18. On singular and highly oscillatory properties of the Green function for ship motions

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-Bo; Xiong Wu, Guo

    2001-10-01

    The Green function used for analysing ship motions in waves is the velocity potential due to a point source pulsating and advancing at a uniform forward speed. The behaviour of this function is investigated, in particular for the case when the source is located at or close to the free surface. In the far field, the Green function is represented by a single integral along one closed dispersion curve and two open dispersion curves. The single integral along the open dispersion curves is analysed based on the asymptotic expansion of a complex error function. The singular and highly oscillatory behaviour of the Green function is captured, which shows that the Green function oscillates with indefinitely increasing amplitude and indefinitely decreasing wavelength, when a field point approaches the track of the source point at the free surface. This sheds some light on the nature of the difficulties in the numerical methods used for predicting the motion of a ship advancing in waves.

  19. A statistical framework to predict functional non-coding regions in the human genome through integrated analysis of annotation data.

    PubMed

    Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu

    2015-05-27

    Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability of predicting functional regions as well as its generalizable statistical framework makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.
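
GenoCanyon's actual statistical model is not reproduced here, but the core idea of unsupervised two-class learning over annotation features can be sketched with a Gaussian mixture on synthetic scores. Everything below (the data, the number of features, and the use of scikit-learn's GaussianMixture) is an illustrative assumption, not the published method:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for per-position annotation scores (e.g. conservation,
# ChIP-seq signal): "functional" positions drawn from a shifted distribution.
background = rng.normal(0.0, 1.0, size=(900, 4))
functional = rng.normal(1.5, 1.0, size=(100, 4))
X = np.vstack([background, functional])

# Unsupervised two-component mixture; the posterior probability of the
# "high" component serves as a functional-potential score per position.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
hi = np.argmax(gmm.means_.mean(axis=1))       # component with larger mean
potential = gmm.predict_proba(X)[:, hi]       # score in [0, 1] per position
```

With no labels at all, the mixture separates the shifted "functional" rows from the background, which is the spirit of scoring each genomic position by a posterior probability.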

  20. Nonalgebraic integrability of one reversible dynamical system of the Cremona type

    NASA Astrophysics Data System (ADS)

    Rerikh, K. V.

    1998-05-01

    A reversible dynamical system (RDS) and a system of nonlinear functional equations, defined by a certain rational quadratic Cremona mapping and arising from the static model of the dispersion approach in the theory of strong interactions [the Chew-Low-type equations with crossing-symmetry matrix A(l,1)], are considered. This RDS is split into one- and two-dimensional ones. An explicit Cremona transformation that completely determines the exact solution of the two-dimensional system is found. This solution depends on an odd function satisfying a nonlinear autonomous three-point functional equation. The nonalgebraic integrability of the RDS under consideration is proved using the method of Poincaré normal forms and the Siegel theorem on biholomorphic linearization of a mapping at a nonresonant fixed point.

  1. Kinetic-energy matrix elements for atomic Hylleraas-CI wave functions.

    PubMed

    Harris, Frank E

    2016-05-28

    Hylleraas-CI is a superposition-of-configurations method in which each configuration is constructed from a Slater-type orbital (STO) product to which is appended (linearly) at most one interelectron distance rij. Computations of the kinetic energy for atoms by this method have been difficult due to the lack of formulas expressing these matrix elements for general angular momentum in terms of overlap and potential-energy integrals. It is shown here that a strategic application of angular-momentum theory, including the use of vector spherical harmonics, enables the reduction of all atomic kinetic-energy integrals to overlap and potential-energy matrix elements. The new formulas are validated by showing that they yield correct results for a large number of integrals published by other investigators.

  2. Revisions to the JDL data fusion model

    NASA Astrophysics Data System (ADS)

    Steinberg, Alan N.; Bowman, Christopher L.; White, Franklin E.

    1999-03-01

    The Data Fusion Model maintained by the Joint Directors of Laboratories (JDL) Data Fusion Group is the most widely used method for categorizing data fusion-related functions. This paper discusses the current effort to revise and expand this model to facilitate the cost-effective development, acquisition, integration and operation of multi-sensor/multi-source systems. Data fusion involves combining information - in the broadest sense - to estimate or predict the state of some aspect of the universe. These may be represented in terms of attributive and relational states. If the job is to estimate the state of a person, it can be useful to include consideration of informational and perceptual states in addition to the physical state. Developing cost-effective multi-source information systems requires a method for specifying data fusion processing and control functions, interfaces, and associated databases. The lack of common engineering standards for data fusion systems has been a major impediment to integration and re-use of available technology: current developments do not lend themselves to objective evaluation, comparison or re-use. This paper reports on proposed revisions and expansions of the JDL Data Fusion model to remedy some of these deficiencies. These involve broadening the functional model and related taxonomy beyond the original military focus, and integrating the Data Fusion Tree Architecture model for system description, design and development.

  3. Tracking the development of brain connectivity in adolescence through a fast Bayesian integrative method

    NASA Astrophysics Data System (ADS)

    Zhang, Aiying; Jia, Bochao; Wang, Yu-Ping

    2018-03-01

    Adolescence is a transitional period between childhood and adulthood marked by physical changes as well as increasing emotional activity. Studies have shown that this emotional sensitivity is related to a second period of dramatic brain growth. However, there has been little focus on the trend of brain development during this period. In this paper, we aim to track functional brain connectivity development in adolescence using resting-state fMRI (rs-fMRI), which amounts to a time-series analysis problem. Most existing methods either require the time series to be fairly long or are only applicable to small graphs. To this end, we adapted a fast Bayesian integrative analysis (FBIA) to address the short time-series difficulty, combined with the adaptive sum of powered score (aSPU) test for group differences. The data we used are resting-state fMRI (rs-fMRI) scans obtained from the publicly available Philadelphia Neurodevelopmental Cohort (PNC). They include 861 individuals aged 8-22 years who were divided into five different adolescent stages. We summarized the networks with the global measurements of segregation and integration, and provided full-brain functional connectivity patterns at various stages of adolescence. Moreover, our research revealed several development trends of brain functional modules. Our results are shown to be both statistically and biologically significant.
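
The global measurements mentioned above can be illustrated with standard graph metrics. The sketch below uses NetworkX's average clustering coefficient as a segregation proxy and global efficiency as an integration proxy on a stand-in graph; the graph choice and the specific metric pairing are assumptions for illustration, not the paper's pipeline:

```python
import networkx as nx

# Toy connectivity graph, standing in for a thresholded rs-fMRI
# correlation network; any small undirected graph would do here.
G = nx.karate_club_graph()

# Segregation: how strongly nodes cluster into local modules.
segregation = nx.average_clustering(G)
# Integration: average inverse shortest-path length across the network.
integration = nx.global_efficiency(G)
```

Computing such summary metrics per subject and per age group is one common way to compare network organization across developmental stages.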

  4. An Alternate Set of Basis Functions for the Electromagnetic Solution of Arbitrarily-Shaped, Three-Dimensional, Closed, Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation to develop the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.

  5. PROPOSED SIAM PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAILEY, DAVID H.; BORWEIN, JONATHAN M.

    A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.

  6. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    PubMed

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  7. Particle connectedness and cluster formation in sequential depositions of particles: Integral-equation theory

    NASA Astrophysics Data System (ADS)

    Danwanichakul, Panu; Glandt, Eduardo D.

    2004-11-01

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  8. A Pilot Study of Social Competence Group Training for Adolescents with Borderline Intellectual Functioning and Emotional and Behavioural Problems (SCT-ABI)

    ERIC Educational Resources Information Center

    Nestler, J.; Goldbeck, L.

    2011-01-01

    Background: Emotional and behavioural problems as well as a lack of social competence are common in adolescents with borderline intellectual functioning and impair their social and vocational integration. Group interventions specifically developed for this target group are scarce and controlled evaluation studies are absent. Methods: A…

  9. A Discussion on the Substitution Method for Trigonometric Rational Functions

    ERIC Educational Resources Information Center

    Ponce-Campuzano, Juan Carlos; Rivera-Figueroa, Antonio

    2011-01-01

    It is common to see, in the books on calculus, primitives of functions (some authors use the word "antiderivative" instead of primitive). However, the majority of authors pay scant attention to the domains over which the primitives are valid, which could lead to errors in the evaluation of definite integrals. In the teaching of calculus, in…

  10. An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Collis, Peter

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…

  11. Stress estimation in reservoirs using an integrated inverse method

    NASA Astrophysics Data System (ADS)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate this initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. These discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm so as to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
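
Full CMA-ES maintains and adapts a covariance matrix; a bare-bones evolution strategy is enough to convey the fitting idea, however: perturb candidate boundary-condition parameters, keep improvements, shrink the step. The depth-linear stress model, the "wellbore" data, and the step schedule below are invented stand-ins for the paper's setup, not its actual workflow:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "wellbore" stress observations at several depths; the true
# boundary condition is linear in depth, sigma(z) = a + b*z (values invented).
depths = np.array([500.0, 1000.0, 1500.0, 2000.0])
obs = 2.0 + 0.012 * depths            # pretend leak-off / breakout estimates

def misfit(p):
    a, b = p
    return np.sum((a + b * depths - obs) ** 2)

# A bare-bones (1+lambda) evolution strategy as a stand-in for CMA-ES:
# sample candidates around the incumbent, keep the best, decay the step size.
best = np.array([0.0, 0.0])
step = np.array([1.0, 0.01])          # separate scales for a and b
for gen in range(300):
    cands = best + step * rng.standard_normal((20, 2))
    scores = [misfit(c) for c in cands]
    if min(scores) < misfit(best):
        best = cands[int(np.argmin(scores))]
    step *= 0.98                      # simple step-size decay
```

In the real method each misfit evaluation would require a finite element solve; here the "forward model" is just the linear stress profile itself.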

  12. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces. These forces indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach which minimizes the sensitivity of the residual to disturbances and uncertainties. It consists of the design of a Proportional Integral Observer (PIO) that minimizes the well-known H∞ norm for worst-case uncertainty and disturbance attenuation, combined with a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim™ is presented to illustrate the developed method.
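
The H∞/LMI synthesis itself is beyond a short sketch, but the mechanism of a proportional-integral observer, in which integral action reconstructs an unmeasured force-like disturbance, can be shown on an invented scalar plant with hand-picked stable gains (nothing below is the paper's vehicle model or gain design):

```python
# Minimal discrete-time proportional-integral observer (PIO) sketch.
# Plant: x[k+1] = a*x[k] + d, with d an unknown constant "tire force".
# The observer estimates both the state and the unknown input.
a, d_true = 0.9, 1.5          # stable toy plant, invented disturbance
lp, li = 0.5, 0.2             # proportional and integral observer gains

x = 0.0                       # true state
xhat, dhat = 0.0, 0.0         # observer state and disturbance estimate
for _ in range(200):
    y = x                     # measured output (full state here)
    e = y - xhat              # output estimation error (residual)
    xhat = a * xhat + dhat + lp * e   # proportional correction
    dhat = dhat + li * e              # integral action tracks the force
    x = a * x + d_true                # plant update
```

With these gains the error dynamics have spectral radius below one, so both the state estimate and the disturbance estimate converge; the paper's contribution is choosing such gains systematically, via H∞-optimal LMI synthesis, under uncertainty and switching.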

  13. Advanced indium phosphide based monolithic integration using quantum well intermixing and MOCVD regrowth

    NASA Astrophysics Data System (ADS)

    Raring, James W.

    The proliferation of the internet has fueled the explosive growth of telecommunications over the past three decades. As a result, the demand for communication systems providing increased bandwidth and flexibility at lower cost continues to rise. Lightwave communication systems meet these demands. The integration of multiple optoelectronic components onto a single chip could revolutionize the photonics industry. Photonic integrated circuits (PIC) provide the potential for cost reduction, decreased loss, decreased power consumption, and drastic space savings over conventional fiber optic communication systems comprised of discrete components. For optimal performance, each component within the PIC may require a unique epitaxial layer structure, band-gap energy, and/or waveguide architecture. Conventional integration methods facilitating such flexibility are increasingly complex and often result in decreased device yield, driving fabrication costs upward. It is this trade-off between performance and device yield that has hindered the scaling of photonic circuits. This dissertation presents high-functionality PICs operating at 10 and 40 Gb/s fabricated using novel integration technologies based on a robust quantum-well-intermixing (QWI) method and metal organic chemical vapor deposition (MOCVD) regrowth. We optimize the QWI process for the integration of high-performance quantum well electroabsorption modulators (QW-EAM) with sampled-grating (SG) DBR lasers to demonstrate the first widely-tunable negative chirp 10 and 40 Gb/s EAM based transmitters. Alone, QWI does not afford the integration of high-performance semiconductor optical amplifiers (SOA) and photodetectors with the transmitters. To overcome this limitation, we have developed a novel high-flexibility integration scheme combining MOCVD regrowth with QWI to merge low optical confinement factor SOAs and 40 Gb/s uni-traveling carrier (UTC) photodiodes on the same chip as the QW-EAM based transmitters. 
These high-saturation power receiver structures represent the state-of-the-art technologies for even discrete components. Using the novel integration technology, we present the first widely-tunable single-chip device capable of transmit and receive functionality at 40 Gb/s. This device monolithically integrates tunable lasers, EAMs, SOAs, and photodetectors with performance that rivals optimized discrete components. The high-flexibility integration scheme requires only simple blanket regrowth steps and thus breaks the performance versus yield trade-off plaguing conventional fabrication techniques employed for high-functionality PICs.

  14. Traveling wave and exact solutions for the perturbed nonlinear Schrödinger equation with Kerr law nonlinearity

    NASA Astrophysics Data System (ADS)

    Akram, Ghazala; Mahak, Nadia

    2018-06-01

    The nonlinear Schrödinger equation (NLSE) with third-order dispersion terms is investigated to find exact solutions via the extended (G'/G2)-expansion method and the first integral method. Many exact traveling wave solutions, such as trigonometric, hyperbolic, rational, soliton and complex function solutions, are characterized by some free parameters of the problem studied. It is corroborated that the proposed techniques are manageable, straightforward and powerful tools for finding the exact solutions of nonlinear partial differential equations (PDEs). Some figures are plotted to describe the propagation of traveling wave solutions expressed by the hyperbolic, trigonometric and rational functions.

  15. Novel approach for tomographic reconstruction of gas concentration distributions in air: Use of smooth basis functions and simulated annealing

    NASA Astrophysics Data System (ADS)

    Drescher, A. C.; Gadgil, A. J.; Price, P. N.; Nazaroff, W. W.

    Optical remote sensing and iterative computed tomography (CT) can be applied to measure the spatial distribution of gaseous pollutant concentrations. We conducted chamber experiments to test this combination of techniques using an open path Fourier transform infrared spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). Although ART converged to solutions that showed excellent agreement with the measured ray-integral concentrations, the solutions were inconsistent with simultaneously gathered point-sample concentration measurements. A new CT method was developed that combines (1) the superposition of bivariate Gaussians to represent the concentration distribution and (2) a simulated annealing minimization routine to find the parameters of the Gaussian basis functions that result in the best fit to the ray-integral concentration data. This method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present an analysis of two sets of experimental data that compares the performance of ART and SBFM. We conclude that SBFM is a superior CT reconstruction method for practical indoor and outdoor air monitoring applications.
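
A toy version of the SBFM idea can be sketched by fitting a single isotropic Gaussian to synthetic ray-integral data with SciPy's dual_annealing (a generalized simulated annealing). The geometry, blob parameters, and optimizer settings are assumptions; the paper itself superposes several bivariate Gaussians and uses actual OP-FTIR beam paths:

```python
import numpy as np
from scipy.optimize import dual_annealing

SIGMA = 0.15                            # fixed blob width (assumed known)
xs = np.linspace(0.0, 1.0, 50)          # sample points along each ray
dx = xs[1] - xs[0]
lines = np.linspace(0.1, 0.9, 5)        # 5 horizontal + 5 vertical beams

def ray_integrals(p):
    """Line integrals of one Gaussian blob along the horizontal and
    vertical beams crossing the unit square (simple quadrature)."""
    x0, y0, amp = p
    g = lambda X, Y: amp * np.exp(-((X - x0)**2 + (Y - y0)**2)
                                  / (2 * SIGMA**2))
    horiz = [g(xs, y).sum() * dx for y in lines]   # integrate along x
    vert = [g(x, xs).sum() * dx for x in lines]    # integrate along y
    return np.array(horiz + vert)

truth = (0.3, 0.6, 2.0)                 # invented source location/strength
data = ray_integrals(truth)             # synthetic beam measurements

# Annealing search for blob parameters that best fit the ray integrals.
res = dual_annealing(lambda p: np.sum((ray_integrals(p) - data)**2),
                     bounds=[(0, 1), (0, 1), (0, 5)], maxiter=100)
```

The recovered `res.x` reproduces the blob position and amplitude; the published method does the same with a sum of bivariate Gaussians, so the parameter vector is longer but the misfit-minimization structure is identical.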

  16. Assessment of the integration capability of system architectures from a complex and distributed software systems perspective

    NASA Astrophysics Data System (ADS)

    Leuchter, S.; Reinert, F.; Müller, W.

    2014-06-01

    Procurement and design of system architectures capable of network-centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighed against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. This means the method is applicable to system-of-systems specifications based on enterprise architecture frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs with regard to their future integration potential. It is a contribution to the system-of-systems engineering methodology.

  17. Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.

    PubMed

    Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung

    2010-11-01

    Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: they can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transform the continuous surface reconstruction problem into a high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion and are typical of the Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.

  18. Community Landscapes: An Integrative Approach to Determine Overlapping Network Module Hierarchy, Identify Key Nodes and Predict Network Dynamics

    PubMed Central

    Kovács, István A.; Palotai, Robin; Szalay, Máté S.; Csermely, Peter

    2010-01-01

    Background: Network communities help the functional organization and evolution of complex networks. However, developing a method that is both fast and accurate and that provides modular overlaps and partitions of a heterogeneous network has proven to be rather difficult. Methodology/Principal Findings: Here we introduce the novel concept of ModuLand, an integrative method family determining overlapping network modules as hills of an influence-function-based, centrality-type community landscape, and including several widely used modularization methods as special cases. As various adaptations of the method family, we developed several algorithms which provide an efficient analysis of weighted and directed networks, and (1) determine pervasively overlapping modules with high resolution; (2) uncover a detailed hierarchical network structure allowing an efficient, zoom-in analysis of large networks; (3) allow the determination of key network nodes; and (4) help to predict network dynamics. Conclusions/Significance: The concept opens a wide range of possibilities to develop new approaches and applications including network routing, classification, comparison and prediction. PMID:20824084

  19. A new method for constructing analytic elements for groundwater flow.

    NASA Astrophysics Data System (ADS)

    Strack, O. D.

    2007-12-01

    The analytic element method is based upon the superposition of analytic functions that are defined throughout the infinite domain and can be used to meet a variety of boundary conditions. Analytic elements have been used successfully for a number of problems, mainly dealing with the Poisson equation (see, e.g., Theory and Applications of the Analytic Element Method, Reviews of Geophysics, 41,2/1005 2003 by O.D.L. Strack). The majority of these analytic elements consist of functions that exhibit jumps along lines or curves. Such linear analytic elements have also been developed for other partial differential equations, e.g., the modified Helmholtz equation and the heat equation, and were constructed by integrating elementary solutions, the point sink and the point doublet, along a line. This approach is limiting for two reasons: first, it requires the existence of the elementary solutions, and, second, the integration tends to limit the range of solutions that can be obtained. We present a procedure for generating analytic elements that requires merely the existence of a harmonic function with the desired properties; such functions exist in abundance. The procedure to be presented is used to generalize this harmonic function in such a way that the resulting expression satisfies the applicable differential equation. The approach will be applied, along with numerical examples, to the modified Helmholtz equation and the heat equation, while it is noted that the method is in no way restricted to these equations. The procedure is carried out entirely in terms of complex variables, using Wirtinger calculus.
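
The superposition principle underlying the analytic element method can be illustrated with a minimal sketch: harmonic point sinks (wells) superposed in the complex plane, with a finite-difference check that the result satisfies the Laplace equation away from the singularities. The well locations and strengths are arbitrary illustrative values.

```python
import numpy as np

def potential(z, wells):
    """Discharge potential from superposed point sinks (analytic elements):
    Phi(z) = sum_w Q_w / (2 pi) * ln|z - z_w|, harmonic away from the wells."""
    return sum(Q / (2 * np.pi) * np.log(abs(z - zw)) for zw, Q in wells)

wells = [(0 + 0j, 1.0), (2 + 1j, -0.5)]   # (location, strength) pairs

# Numerical check of the Laplace equation at a point away from the singularities
h = 1e-3
z0 = 1 + 3j
lap = (potential(z0 + h, wells) + potential(z0 - h, wells)
       + potential(z0 + 1j * h, wells) + potential(z0 - 1j * h, wells)
       - 4 * potential(z0, wells)) / h ** 2
```

The five-point Laplacian comes out numerically zero, as expected for a superposition of harmonic functions; boundary conditions are then met by choosing the strengths.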

  20. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved in theory to outperform each of their individual component methods. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods from different approaches (a correlation method, a causal inference method, and a regression method), is the best-performing ensemble method in this study. Further analysis shows that this ensemble method can identify targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
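
The ensemble idea can be sketched with a simple average-rank (Borda-style) aggregation; the method names and scores below are hypothetical placeholders, while the paper's actual ensembles combine the specific outputs of component methods such as Pearson, IDA, and Lasso.

```python
import numpy as np

def ensemble_rank(score_lists):
    """Combine target rankings from several prediction methods by average
    rank (a simple Borda-style ensemble)."""
    genes = sorted(score_lists[0])
    avg = {}
    for g in genes:
        ranks = []
        for scores in score_lists:
            order = sorted(scores, key=scores.get, reverse=True)
            ranks.append(order.index(g) + 1)   # rank of gene g in this method
        avg[g] = np.mean(ranks)
    return sorted(genes, key=lambda g: avg[g])  # best (lowest avg rank) first

# Three hypothetical methods scoring four candidate targets
pearson = {"A": 0.9, "B": 0.2, "C": 0.5, "D": 0.1}
ida     = {"A": 0.7, "B": 0.8, "C": 0.6, "D": 0.1}
lasso   = {"A": 0.4, "B": 0.3, "C": 0.9, "D": 0.0}
ranking = ensemble_rank([pearson, ida, lasso])
```

Targets scored inconsistently by single methods (here C) can still rank highly once the evidence is pooled, which is the behavior the validation in the paper reports.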

  1. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved in theory to outperform each of their individual component methods. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods from different approaches (a correlation method, a causal inference method, and a regression method), is the best-performing ensemble method in this study. Further analysis shows that this ensemble method can identify targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  2. Integration of somatic mutation, expression and functional data reveals potential driver genes predictive of breast cancer survival.

    PubMed

    Suo, Chen; Hrydziuszko, Olga; Lee, Donghwan; Pramana, Setia; Saputra, Dhany; Joshi, Himanshu; Calza, Stefano; Pawitan, Yudi

    2015-08-15

    Genome and transcriptome analyses can be used to explore cancers comprehensively, and it is increasingly common to have multiple omics data measured from each individual. Furthermore, there are rich functional data such as the predicted impact of mutations on protein coding and gene/protein networks. However, integration of the complex information across the different omics and functional data is still challenging. Clinical validation, particularly based on patient outcomes such as survival, is important for assessing the relevance of the integrated information and for comparing different procedures. An analysis pipeline is built for integrating genomic and transcriptomic alterations from whole-exome and RNA sequence data with functional data from protein function prediction and gene interaction networks. The method accumulates evidence for the functional implications of mutated potential driver genes found within and across patients. A driver-gene score (DGscore) is developed to capture the cumulative effect of such genes. To contribute to the score, a gene has to be frequently mutated, with high or moderate mutational impact at the protein level, exhibit an extreme expression, and be functionally linked to many differentially expressed neighbors in the functional gene network. The pipeline is applied to 60 matched tumor and normal samples from The Cancer Genome Atlas breast-cancer project. In clinical validation, patients with high DGscores have worse survival than those with low scores (P = 0.001). Furthermore, the DGscore outperforms the established expression-based signatures MammaPrint and PAM50 in predicting patient survival. In conclusion, integration of mutation, expression and functional data allows identification of clinically relevant potential driver genes in cancer. The documented pipeline, including annotated sample scripts, can be found at http://fafner.meb.ki.se/biostatwiki/driver-genes/.
Supplementary data are available at Bioinformatics online.

  3. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. 
The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
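
A minimal sketch of the approximation described above, with the phase integral replaced by a sum over L equally spaced carrier phases; the signal model, noise level, and parameter values are illustrative assumptions rather than the reported implementation.

```python
import numpy as np

def log_likelihood(r, M, sigma2=0.1, L=64):
    """Approximate M-PSK log-likelihood: the integral over carrier phase is
    replaced by a sum over L equally spaced phase samples."""
    ll = np.zeros(L)
    for k, phi in enumerate(2 * np.pi * np.arange(L) / L):
        const = np.exp(1j * (2 * np.pi * np.arange(M) / M + phi))
        # per-sample Gaussian likelihood, averaged over the M symbols
        d2 = np.abs(r[:, None] - const[None, :]) ** 2
        ll[k] = np.sum(np.log(np.mean(np.exp(-d2 / sigma2), axis=1)))
    m = ll.max()                                   # log-sum-exp over phases
    return m + np.log(np.mean(np.exp(ll - m)))

def classify(r, M1=2, M2=4):
    """Pick whichever hypothesis, M1 or M2, is more likely."""
    return M1 if log_likelihood(r, M1) > log_likelihood(r, M2) else M2

# 128 QPSK samples with an unknown carrier phase of 0.3 rad plus noise
rng = np.random.default_rng(1)
r = np.exp(1j * (2 * np.pi * rng.integers(0, 4, 128) / 4 + 0.3)) \
    + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
```

Increasing L tightens the approximation to the phase integral at proportionally higher cost, which is the complexity/accuracy trade-off the abstract describes.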

  4. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations

    PubMed Central

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J. Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types. PMID:27472383
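
A generic EOF (SVD-based) sketch of the idea, assuming a synthetic multi-site data set rather than the actual MOD16 and PT-JPL products: truncating the anomaly matrix to its leading mode reduces noise relative to the raw estimates. The paper's modified EOF analysis differs in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
# Synthetic LE series at 22 "sites": a shared seasonal mode with site-specific
# amplitude, plus observation noise
amps = rng.uniform(0.5, 1.5, 22)
truth = np.outer(amps, np.sin(t))
obs = truth + 0.4 * rng.standard_normal(truth.shape)

# EOF analysis: SVD of the anomaly matrix, keep the leading mode
mean = obs.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(obs - mean, full_matrices=False)
k = 1
merged = mean + U[:, :k] * s[:k] @ Vt[:k]

rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
rmse_eof = np.sqrt(np.mean((merged - truth) ** 2))
```

The leading EOF mode captures the coherent signal shared across sites, so the reconstruction has a lower RMSE than the raw noisy estimates.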

  5. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at the zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
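
Term-by-term integration of a trigonometric series, the core operation mentioned above, can be sketched as follows; this is a minimal stand-in for a full Poisson-series manipulator, with each term stored as a (coefficient, frequency, kind) triple.

```python
from fractions import Fraction

def integrate(series):
    """Integrate a series of c*cos(w t) / c*sin(w t) terms (w != 0) term by
    term: int c*cos(w t) dt = (c/w) sin(w t);
          int c*sin(w t) dt = -(c/w) cos(w t)."""
    out = []
    for c, w, kind in series:
        if kind == "cos":
            out.append((Fraction(c) / w, w, "sin"))
        else:
            out.append((-Fraction(c) / w, w, "cos"))
    return out

# 3 cos(2t) + sin(5t)  ->  (3/2) sin(2t) - (1/5) cos(5t)
series = [(3, 2, "cos"), (1, 5, "sin")]
antiderivative = integrate(series)
```

Exact rational coefficients keep repeated integrations free of round-off, in the spirit of automatic series manipulation.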

  6. Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.

    2006-05-01

    In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.

  7. An Empirical Orthogonal Function-Based Algorithm for Estimating Terrestrial Latent Heat Flux from Eddy Covariance, Meteorological and Satellite Observations.

    PubMed

    Feng, Fei; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Chen, Jiquan; Zhao, Xiang; Jia, Kun; Pintér, Krisztina; McCaughey, J Harry

    2016-01-01

    Accurate estimation of latent heat flux (LE) based on remote sensing data is critical in characterizing terrestrial ecosystems and modeling land surface processes. Many LE products were released during the past few decades, but their quality might not meet the requirements in terms of data consistency and estimation accuracy. Merging multiple algorithms could be an effective way to improve the quality of existing LE products. In this paper, we present a data integration method based on modified empirical orthogonal function (EOF) analysis to integrate the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product (MOD16) and the Priestley-Taylor LE algorithm of Jet Propulsion Laboratory (PT-JPL) estimate. Twenty-two eddy covariance (EC) sites with LE observation were chosen to evaluate our algorithm, showing that the proposed EOF fusion method was capable of integrating the two satellite data sets with improved consistency and reduced uncertainties. Further efforts were needed to evaluate and improve the proposed algorithm at larger spatial scales and time periods, and over different land cover types.

  8. Effectiveness of a Psychoeducational Parenting Group on Child, Parent and Family Behavior: A Pilot Study in a Family Practice Clinic with an Underserved Population

    PubMed Central

    Berge, Jerica M.; Law, David D.; Johnson, Jennifer; Wells, M. Gawain

    2013-01-01

    Background Although integrated care for adults in primary care has steadily increased over the last several decades, there remains a paucity of research regarding integrated care for children in primary care. Purpose To report results of a pilot study testing initial feasibility of a parenting psychoeducational group targeting child behavioral problems within a primary care clinic. Method The participants (n = 35) were parents representing an underserved population from an inner-city primary care clinic. Participants attended a 12-week psychoeducational parenting group and reported pre- and post-measures of family functioning, child misbehavior and dyadic functioning. Paired t-tests and effect sizes are reported. Results Participants reported statistically significant improvement in family functioning, child misbehavior and couple functioning after participating in the parenting psychoeducational group. Conclusions Results suggest initial feasibility of a parenting psychoeducational group within a primary care clinic with an underserved population. This intervention may be useful for other primary care clinics seeking to offer more integrative care options for children and their families. PMID:20939627

  9. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of the adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables.
They are exact in case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction, they have tridiagonal structure. They are solved by the sweep method. An important advantage of the discrete-analytical schemes is that the values of derivatives at the boundaries of finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable for the problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies// Computers and Mathematics with Applications, (2014) V.67, Issue 12, P. 2240-2256. 2. V.V.Penenko, E.A.Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical analysis and applications, 2013, V. 6, Issue 3, pp 210-220.
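
The tridiagonal "sweep" method mentioned above is commonly known as the Thomas algorithm; a standard sketch (not the authors' code):

```python
def sweep_solve(a, b, c, d):
    """Tridiagonal 'sweep' (Thomas) algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D diffusion-like system [[2,-1,0], [-1,2,-1], [0,-1,2]] x = [1, 0, 1]
x = sweep_solve([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1])
```

The forward sweep and backward substitution each cost O(n), which is why discrete-analytical schemes with tridiagonal structure remain cheap per direction and time step.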

  10. The Ablowitz–Ladik system on a finite set of integers

    NASA Astrophysics Data System (ADS)

    Xia, Baoqiang

    2018-07-01

    We show how to solve initial-boundary value problems for integrable nonlinear differential–difference equations on a finite set of integers. The method we employ is the discrete analogue of the unified transform (Fokas method). The implementation of this method to the Ablowitz–Ladik system yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem, which has a jump matrix with explicit -dependence involving certain functions referred to as spectral functions. Some of these functions are defined in terms of the initial value, while the remaining spectral functions are defined in terms of two sets of boundary values. These spectral functions are not independent but satisfy an algebraic relation called the global relation. We analyze the global relation to characterize the unknown boundary values in terms of the given initial and boundary values. We also discuss the linearizable boundary conditions.

  11. Adaptive Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Fasnacht, Marc

    We develop adaptive Monte Carlo methods for the calculation of the free energy as a function of a parameter of interest. The methods presented are particularly well-suited for systems with complex energy landscapes, where standard sampling techniques have difficulties. The Adaptive Histogram Method uses a biasing potential derived from histograms recorded during the simulation to achieve uniform sampling in the parameter of interest. The Adaptive Integration Method directly calculates an estimate of the free energy from the average derivative of the Hamiltonian with respect to the parameter of interest and uses it as a biasing potential. We compare both methods to a state-of-the-art method, and demonstrate that they compare favorably for the calculation of potentials of mean force of dense Lennard-Jones fluids. We use the Adaptive Integration Method to calculate accurate potentials of mean force for different types of simple particles in a Lennard-Jones fluid. Our approach allows us to separate the contributions of the solvent to the potential of mean force from the effect of the direct interaction between the particles. With contributions of the solvent determined, we can find the potential of mean force directly for any other direct interaction without additional simulations. We also test the accuracy of the Adaptive Integration Method on a thermodynamic cycle, which allows us to perform a consistency check between potentials of mean force and chemical potentials calculated using the Adaptive Integration Method. The results demonstrate a high degree of consistency of the method.
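
The adaptive-biasing idea behind the Adaptive Histogram Method can be sketched on a 1-D double-well potential; the bias increment, step size, and potential are illustrative assumptions, not the thesis's implementation. Each visit raises the bias in the current bin, flattening the sampling, so the converged bias approximates a constant minus the free-energy profile.

```python
import math, random

def adaptive_bias(U, xmin=-1.5, xmax=1.5, bins=30, steps=200_000, f=0.01, T=1.0):
    """Adaptive-histogram biasing on a 1-D landscape: every visit raises the
    bias in the current bin, so sampling flattens and the converged bias
    approaches (constant - U)."""
    random.seed(2)
    width = (xmax - xmin) / bins
    idx = lambda x: min(bins - 1, int((x - xmin) / width))
    bias = [0.0] * bins
    x = 1.0                                        # start in a well
    for _ in range(steps):
        xn = x + random.uniform(-0.3, 0.3)
        if xmin < xn < xmax:
            dE = U(xn) + bias[idx(xn)] - U(x) - bias[idx(x)]
            if dE <= 0 or random.random() < math.exp(-dE / T):
                x = xn                             # Metropolis step on U + bias
        bias[idx(x)] += f
    return bias

U = lambda x: (x * x - 1) ** 2                     # double well, barrier at x = 0
bias = adaptive_bias(U)
# bias is higher in the wells (x near +/-1) than at the barrier (x near 0),
# mirroring the free-energy difference of about 1
```

Production methods additionally shrink the increment f over time (Wang-Landau style) to converge the bias, which this sketch omits.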

  12. Complete fourier direct magnetic resonance imaging (CFD-MRI) for diffusion MRI

    PubMed Central

    Özcan, Alpay

    2013-01-01

    The foundation for an accurate and unifying Fourier-based theory of diffusion weighted magnetic resonance imaging (DW–MRI) is constructed by carefully re-examining the first principles of DW–MRI signal formation and deriving its mathematical model from scratch. The derivations are specifically obtained for DW–MRI signal by including all of its elements (e.g., imaging gradients) using complex values. Particle methods are utilized in contrast to conventional partial differential equations approach. The signal is shown to be the Fourier transform of the joint distribution of number of the magnetic moments (at a given location at the initial time) and magnetic moment displacement integrals. In effect, the k-space is augmented by three more dimensions, corresponding to the frequency variables dual to displacement integral vectors. The joint distribution function is recovered by applying the Fourier transform to the complete high-dimensional data set. In the process, to obtain a physically meaningful real valued distribution function, phase corrections are applied for the re-establishment of Hermitian symmetry in the signal. Consequently, the method is fully unconstrained and directly presents the distribution of displacement integrals without any assumptions such as symmetry or Markovian property. The joint distribution function is visualized with isosurfaces, which describe the displacement integrals, overlaid on the distribution map of the number of magnetic moments with low mobility. The model provides an accurate description of the molecular motion measurements via DW–MRI. The improvement of the characterization of tissue microstructure leads to a better localization, detection and assessment of biological properties such as white matter integrity. The results are demonstrated on the experimental data obtained from an ex vivo baboon brain. PMID:23596401

  13. What is the impact of shift work on the psychological functioning and resilience of nurses? An integrative review.

    PubMed

    Tahghighi, Mozhdeh; Rees, Clare S; Brown, Janie A; Breen, Lauren J; Hegney, Desley

    2017-09-01

    To synthesize existing research to determine if nurses who work shifts have poorer psychological functioning and resilience than nurses who do not work shifts. Research exploring the impact of shift work on the psychological functioning and resilience of nurses is limited compared with research investigating the impact of shifts on physical outcomes. Integrative literature review. Relevant databases were searched from January 1995-August 2016 using the combination of keywords: nurse, shift work; rotating roster; night shift; resilient; hardiness; coping; well-being; burnout; mental health; occupational stress; compassion fatigue; compassion satisfaction; stress; anxiety; depression. Two authors independently performed the integrative review processes proposed by Whittemore and Knafl and a quality assessment using the mixed-methods appraisal tool by Pluye et al. A total of 37 articles were included in the review (32 quantitative, 4 qualitative and 1 mixed-methods). Approximately half of the studies directly compared nurse shift workers with non-shift workers. Findings were grouped according to the following main outcomes: (1) general psychological well-being/quality of life; (2) Job satisfaction/burnout; (3) Depression, anxiety and stress; and (4) Resilience/coping. We did not find definitive evidence that shift work is associated with poorer psychological functioning in nurses. Overall, the findings suggest that the impact of shift work on nurse psychological functioning is dependent on several contextual and individual factors. More studies are required which directly compare the psychological outcomes and resilience of nurse shift workers with non-shift workers. © 2017 John Wiley & Sons Ltd.

  14. New secure communication-layer standard for medical image management (ISCL)

    NASA Astrophysics Data System (ADS)

    Kita, Kouichi; Nohara, Takashi; Hosoba, Minoru; Yachida, Masuyoshi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    1999-07-01

    This paper introduces a summary of the standard draft of ISCL 1.00, which will be published officially by MEDIS-DC. ISCL is an abbreviation of Integrated Secure Communication Layer Protocols for Secure Medical Image Management Systems. ISCL is a security layer which manages security functions between the presentation layer and the TCP/IP layer. The ISCL mechanism depends on the basic functions of a smart IC card and a symmetric secret-key mechanism. A symmetric key for each session is made by the internal authentication function of a smart IC card with a random number. ISCL has three functions which assure authentication, confidentiality and integrity. The entity authentication process is done through a 3-path, 4-way method using the internal and external authentication functions of a smart IC card. The confidentiality algorithm and the MAC algorithm for integrity are selectable. ISCL protocols communicate through Message Blocks, which consist of a Message Header and Message Data. ISCL protocols are being evaluated by applying them to a regional collaboration system for image diagnosis and an on-line secure electronic storage system for medical images. These projects are supported by the Medical Information System Development Center and show that ISCL is useful for maintaining security.
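
The session-key mechanism described above can only be sketched loosely here, since the actual ISCL message formats and algorithms are defined in the standard; the following is a generic challenge-response sketch with a card-held symmetric secret, using HMAC in place of ISCL's selectable MAC algorithm.

```python
import hmac, hashlib, os

# Illustrative sketch only: a session key derived from a card-held master
# secret and fresh random challenges, in the spirit of ISCL's internal/external
# authentication with a smart IC card.
master = os.urandom(16)                       # shared secret inside the IC card

def derive_session_key(master, rand_card, rand_host):
    """Both ends combine the shared secret with both random challenges."""
    return hmac.new(master, rand_card + rand_host, hashlib.sha256).digest()

rand_card, rand_host = os.urandom(8), os.urandom(8)
k_card = derive_session_key(master, rand_card, rand_host)
k_host = derive_session_key(master, rand_card, rand_host)

# Integrity: a MAC computed over each Message Block with the session key
msg = b"message header|message data"
mac = hmac.new(k_card, msg, hashlib.sha256).digest()
```

Fresh random challenges give a distinct key per session, so a recorded session cannot be replayed under a new key; the real protocol additionally authenticates both entities before any image data flows.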

  15. Nacelle Integration to Reduce the Sonic Boom of Aircraft Designed to Cruise at Supersonic Speeds

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    1999-01-01

    An empirical method for integrating the engine nacelles on a wing-fuselage-fin(s) configuration has been described. This method is based on Whitham theory and the Seebass and George sonic-boom minimization theory. With it, both reduced-sonic-boom and high-aerodynamic-efficiency methods can be applied to the conceptual design of a supersonic-cruise aircraft. Two high-speed civil transport concepts were used as examples to illustrate the application of this engine-nacelle integration methodology: (1) a concept with engine nacelles mounted on the aft fuselage, the HSCT-1OB; and (2) a concept with engine nacelles mounted under an extended wing center section, the HSCT-11E. In both cases, the key to a significant reduction in the sonic-boom contribution from the engine nacelles was to use the F-function shape of the concept as a guide in moving the nacelles further aft on the configuration.

  16. Isolation of rat adrenocortical mitochondria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solinas, Paola; Department of Medicine, Center for Mitochondrial Disease, School of Medicine, Case Western Reserve University, Cleveland, OH 44106; Fujioka, Hisashi

    2012-10-12

    Highlights: • A method for isolation of adrenocortical mitochondria from the adrenal gland of rats is described. • The purified isolated mitochondria show excellent morphological integrity. • The properties of oxidative phosphorylation are excellent. • The method increases the opportunity for direct analysis of adrenal mitochondria from small animals. -- Abstract: This report describes a relatively simple and reliable method for isolating adrenocortical mitochondria from rats in good, reasonably pure yield. These organelles, which heretofore have been unobtainable in isolated form from small laboratory animals, are now readily accessible. A high degree of mitochondrial purity, as well as the structural integrity of each mitochondrion, is shown by the electron micrographs. That these organelles have retained their functional integrity is shown by their high respiratory control ratios. In general, the biochemical performance of these adrenal cortical mitochondria closely mirrors that of typical hepatic or cardiac mitochondria.

  17. An integrative multi-criteria decision-making technique for the supplier evaluation problem with its application

    NASA Astrophysics Data System (ADS)

    Fatrias, D.; Kamil, I.; Meilani, D.

    2018-03-01

    Coordinating business operations with suppliers has become increasingly important for surviving and prospering in a dynamic business environment. A good partnership with suppliers not only increases efficiency but also strengthens corporate competitiveness. With this concern in mind, this study develops a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM), and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR). A new framework integrating these methods is our main contribution to the supplier-evaluation literature. In this integrated approach, a compromise supplier ranking list based on the loss scores of suppliers is obtained using the efficient steps of a pairwise-comparison-based decision-making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, suitable managerial implications are presented.
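    The Taguchi-loss step of such an approach can be sketched in a few lines. The nominal-the-best form L(y) = k(y - m)^2 is the standard quadratic loss; the supplier scores, target, and loss coefficient k below are made-up illustrative values, not data from the study:

```python
def taguchi_loss(y, target, k):
    """Nominal-the-best quadratic quality loss: L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

# Hypothetical delivery-time scores (days); target is 5 days, k = 100.
suppliers = {"A": 5.2, "B": 6.5, "C": 4.9}
losses = {s: taguchi_loss(y, 5.0, 100.0) for s, y in suppliers.items()}
best = min(losses, key=losses.get)   # smallest quality loss wins
```

    In the paper's full framework the per-criterion losses would be weighted via BWM and then compromised-ranked with VIKOR; this sketch shows only how deviation from a target translates into a loss score.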

  18. Recent developments of functional magnetic resonance imaging research for drug development in Alzheimer's disease.

    PubMed

    Hampel, Harald; Prvulovic, David; Teipel, Stefan J; Bokde, Arun L W

    2011-12-01

    The objective of this review is to evaluate recent advances in functional magnetic resonance imaging (fMRI) research in Alzheimer's disease for the development of therapeutic agents. The basic building block underpinning cognition is a brain network. Measured brain activity serves as an integrator of the various components, from genes to structural integrity, that impact the function of the networks underpinning cognition. Specific networks can be interrogated using cognitive paradigms such as a learning task or a working memory task. In addition, recent advances in our understanding of neural networks allow one to investigate the function of a brain network through the inherent coherency of brain networks that can be measured during the resting state. The coherent resting-state networks allow testing in cognitively impaired patients that may not be possible with the use of cognitive paradigms. In particular, the default mode network (DMN) includes the medial temporal lobe and posterior cingulate, two key regions that support episodic memory function and are impaired in the earliest stages of Alzheimer's disease (AD). Investigating the effects of a prospective drug compound on this network could illuminate the specificity of the compound with respect to a network supporting memory function. This could provide valuable information on the mechanisms of action at physiological and behaviourally relevant levels. Utilizing fMRI opens up new areas of research and a new approach for drug development, as it is an integrative tool to investigate entire networks within the brain. The network-based approach provides a new method, independent of previous ones, to translate preclinical knowledge into the clinical domain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Mathematical Methods for Optical Physics and Engineering

    NASA Astrophysics Data System (ADS)

    Gbur, Gregory J.

    2011-01-01

    1. Vector algebra; 2. Vector calculus; 3. Vector calculus in curvilinear coordinate systems; 4. Matrices and linear algebra; 5. Advanced matrix techniques and tensors; 6. Distributions; 7. Infinite series; 8. Fourier series; 9. Complex analysis; 10. Advanced complex analysis; 11. Fourier transforms; 12. Other integral transforms; 13. Discrete transforms; 14. Ordinary differential equations; 15. Partial differential equations; 16. Bessel functions; 17. Legendre functions and spherical harmonics; 18. Orthogonal functions; 19. Green's functions; 20. The calculus of variations; 21. Asymptotic techniques; Appendices; References; Index.

  20. Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations]

    NASA Technical Reports Server (NTRS)

    Brown, R. L.

    1978-01-01

    Two FORTRAN programs using an interactive graphic terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed-stepsize linear case with complex-variable solutions, and generates plots showing the accuracy and the error response of a numerical solution to a step driving function, as well as the linear stability region. The second generates an analog of the stability region for classes of nonlinear ODEs, as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
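    A linear stability region of the kind these programs plot can be traced with the classical boundary locus method: the points h*lambda = rho(z)/sigma(z), evaluated on the unit circle z = exp(i*theta), outline the region's boundary. A minimal sketch for the 2-step Adams-Bashforth method (this is a textbook illustration, not the paper's FORTRAN code):

```python
import cmath
import math

def ab2_boundary_locus(n=256):
    """Trace the linear stability boundary of the 2-step Adams-Bashforth
    method via the boundary locus h*lambda = rho(z)/sigma(z), z on the
    unit circle, with rho(z) = z^2 - z and sigma(z) = (3z - 1)/2."""
    pts = []
    for k in range(n):
        z = cmath.exp(1j * 2 * math.pi * k / n)
        rho = z * z - z           # first characteristic polynomial
        sigma = (3 * z - 1) / 2   # second characteristic polynomial
        pts.append(rho / sigma)
    return pts

# theta = 0 maps to the origin; theta = pi gives the real-axis extreme -1,
# i.e. AB2 is stable on the real interval (-1, 0).
boundary = ab2_boundary_locus()
```

    Plotting the real and imaginary parts of these points reproduces the familiar kidney-shaped AB2 stability region; the same locus formula works for any multistep method once its characteristic polynomials are specified.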

  1. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend that method by integrating the Phong illumination model, so that the properties of object surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm produces realistic-looking objects without any noteworthy increase in computational cost.
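    The Phong model the authors integrate combines ambient, diffuse, and specular terms. A minimal scalar sketch follows; the coefficients are illustrative, and the actual CGH pipeline would evaluate this per point and per colour channel:

```python
import math

def _unit(v):
    """Normalize a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_intensity(normal, light_dir, view_dir,
                    ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Phong illumination: ambient + diffuse + specular for a single light.
    All direction vectors point away from the surface point."""
    n, l, v = _unit(normal), _unit(light_dir), _unit(view_dir)
    diff = max(0.0, sum(a * b for a, b in zip(n, l)))
    spec = 0.0
    if diff > 0.0:
        # reflect l about n: r = 2*(n . l)*n - l
        r = tuple(2 * diff * nc - lc for nc, lc in zip(n, l))
        spec = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return ka + kd * diff + ks * spec
```

    With the light, view, and normal aligned, the intensity is ka + kd + ks = 1.0; with the light behind the surface only the ambient term remains, which is what produces the shadowed appearance the abstract mentions.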

  2. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the data used. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions that are best supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented in MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. 
Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
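    The core idea, fitting exponentials and integrating them analytically, can be sketched for the simplest mono-exponential case. NUKFIT itself selects among sums of exponentials with the corrected Akaike criterion and propagates standard errors; the log-linear fit below is only a minimal stand-in for that machinery:

```python
import math

def fit_monoexp(times, activities):
    """Log-linear least-squares fit of A(t) = A0 * exp(-lam * t).
    Returns (A0, lam, A0/lam); the last value is the analytic integral
    of the fitted curve from 0 to infinity (the time-integrated
    activity coefficient for this simple model)."""
    n = len(times)
    ys = [math.log(a) for a in activities]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    lam = -slope                           # decay constant
    a0 = math.exp(ybar - slope * tbar)     # amplitude at t = 0
    return a0, lam, a0 / lam
```

    For noiseless data from 100·exp(-0.5t), the fit recovers A0 = 100, lambda = 0.5, and a time-integrated coefficient of 200; real time-activity data would additionally need the error model and model-selection steps the abstract describes.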

  3. MANTECH project book

    NASA Astrophysics Data System (ADS)

    The effective integration of processes, systems, and procedures used in the production of aerospace systems using computer technology is managed by the Integration Technology Division (MTI). Under its auspices are the Information Management Branch, which is actively involved with information management, information sciences and integration, and the Implementation Branch, whose technology areas include computer integrated manufacturing, engineering design, operations research, and material handling and assembly. The Integration Technology Division combines design, manufacturing, and supportability functions within the same organization. The Processing and Fabrication Division manages programs to improve structural and nonstructural materials processing and fabrication. Within this division, the Metals Branch directs the manufacturing methods program for metals and metal matrix composites processing and fabrication. The Nonmetals Branch directs the manufacturing methods programs, which include all manufacturing processes for producing and utilizing propellants, plastics, resins, fibers, composites, fluid elastomers, ceramics, glasses, and coatings. The objective of the Industrial Base Analysis Division is to act as focal point for the USAF industrial base program for productivity, responsiveness, and preparedness planning.

  4. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    NASA Astrophysics Data System (ADS)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally and serves as a guide for the integration path. The presented method can be employed in wavefront estimation from slopes over a general-shaped surface, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration but can also be implemented easily, with no iteration, compared to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
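    The quality-guided idea can be sketched as a flood fill that always grows the integrated region from the highest-quality frontier pixel, so low-quality (noisy) pixels are reached last. This is a simplified reconstruction of the approach, not the authors' algorithm; the trapezoidal mean-slope step between neighbours is an assumption:

```python
import heapq

def quality_guided_integrate(gx, gy, quality):
    """Integrate slopes into heights by visiting pixels in decreasing
    quality order from the best-quality seed. gx is dh/dx (columns),
    gy is dh/dy (rows); all inputs are 2-D lists of equal shape."""
    rows, cols = len(quality), len(quality[0])
    h = [[None] * cols for _ in range(rows)]
    # seed at the highest-quality pixel, height fixed to 0 there
    _, r0, c0 = max((quality[r][c], r, c)
                    for r in range(rows) for c in range(cols))
    h[r0][c0] = 0.0
    heap = [(-quality[r0][c0], r0, c0)]
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and h[nr][nc] is None:
                if dr:  # vertical step: trapezoidal mean of y-slopes
                    h[nr][nc] = h[r][c] + dr * (gy[r][c] + gy[nr][nc]) / 2
                else:   # horizontal step: trapezoidal mean of x-slopes
                    h[nr][nc] = h[r][c] + dc * (gx[r][c] + gx[nr][nc]) / 2
                heapq.heappush(heap, (-quality[nr][nc], nr, nc))
    return h
```

    For a consistent gradient field the result is path-independent; the quality ordering matters precisely when the slopes are noisy, which is the situation QMPI targets.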

  5. Compact electrochemical sensor system and method for field testing for metals in saliva or other fluids

    DOEpatents

    Lin, Yuehe; Bennett, Wendy D.; Timchalk, Charles; Thrall, Karla D.

    2004-03-02

    Microanalytical systems based on a microfluidics/electrochemical detection scheme are described. Individual modules, such as microfabricated piezoelectrically actuated pumps and a microelectrochemical cell, were integrated onto portable platforms. This allowed rapid change-out and repair of individual components by incorporating the "plug and play" concepts now standard in PCs. Different integration schemes were used for construction of the microanalytical systems based on microfluidics/electrochemical detection. In one scheme, all individual modules were integrated on the surface of the standard microfluidic platform based on a plug-and-play design. A microelectrochemical flow cell, which integrated three electrodes in a wall-jet design, was fabricated on a polymer substrate and then plugged directly into the microfluidic platform. Another integration scheme was based on a multilayer lamination method utilizing stacked modules with different functionality to achieve a compact microanalytical device. Application of the microanalytical system to detection of lead in, for example, river water and saliva samples using stripping voltammetry is described.

  6. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new, enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result, we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamics equations is solved. 
For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of chemical substance concentrations and possess the properties of energy and mass balance postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).

  7. Diagrammatic expansion for positive spectral functions beyond GW: Application to vertex corrections in the electron gas

    NASA Astrophysics Data System (ADS)

    Stefanucci, G.; Pavlyukh, Y.; Uimonen, A.-M.; van Leeuwen, R.

    2014-09-01

    We present a diagrammatic approach to construct self-energy approximations within many-body perturbation theory with positive spectral properties. The method cures the problem of negative spectral functions which arises from a straightforward inclusion of vertex diagrams beyond the GW approximation. Our approach consists of a two-step procedure: We first express the approximate many-body self-energy as a product of half-diagrams and then identify the minimal number of half-diagrams to add in order to form a perfect square. The resulting self-energy is an unconventional sum of self-energy diagrams in which the internal lines of half a diagram are time-ordered Green's functions, whereas those of the other half are anti-time-ordered Green's functions, and the lines joining the two halves are either lesser or greater Green's functions. The theory is developed using noninteracting Green's functions and subsequently extended to self-consistent Green's functions. Issues related to the conserving properties of diagrammatic approximations with positive spectral functions are also addressed. As a major application of the formalism we derive the minimal set of additional diagrams to make positive the spectral function of the GW approximation with lowest-order vertex corrections and screened interactions. The method is then applied to vertex corrections in the three-dimensional homogeneous electron gas by using a combination of analytical frequency integrations and numerical Monte Carlo momentum integrations to evaluate the diagrams.

  8. Study of magnetic nanoparticles and overcoatings for biological applications including a sensor device

    NASA Astrophysics Data System (ADS)

    Grancharov, Stephanie G.

    I. A general introduction to the field of nanomaterials is presented, highlighting their special attributes and characteristics. Nanoparticles in general are discussed with respect to their structure, form and properties. Magnetic particles in particular are highlighted, especially the iron oxides. The importance and interest of integrating these materials with biological media is discussed, with emphasis on transferring particles from one medium to another, and subsequent modification of surfaces with different types of materials. II. A general route to making magnetic iron oxide nanoparticles is explained, both as maghemite and magnetite, including properties of the particles and characterization. A novel method of producing magnetite particles without a ligand is then presented, with subsequent characterization and properties described. III. Attempts to coat iron oxide nanoparticles with a view to creating biofunctional magnetic nanoparticles are presented, using a gold overcoating method. Methods of synthesis and characterization are examined, with problems unique to core-shell structures analyzed. IV. Solubility of nanoparticles in both aqueous and organic media is discussed and examined. The subsequent functionalization of the surface of maghemite and magnetite nanoparticles with a variety of biomaterials including block copolypeptides, phospholipids and carboxydextran is then presented. These methods are integral to the use of magnetic nanoparticles in biological applications, and therefore their properties are examined once tailored with these molecules. V. A new type of magnetic nanoparticle sensor-type device is described. This device integrates bio- and DNA-functionalized nanoparticles with conjugate functionalized silicon dioxide surfaces. These techniques to pattern particles to a surface are then incorporated into a device with a magnetic tunnel junction, which measures magnetoresistance in the presence of an external magnetic field. 
This configuration thereby introduces a new way to detect magnetic nanoparticles via their magnetic properties after conjugation via biological entities.

  9. From expert-derived user needs to user-perceived ease of use and usefulness: a two-phase mixed-methods evaluation framework.

    PubMed

    Boland, Mary Regina; Rusanov, Alexander; So, Yat; Lopez-Jimenez, Carlos; Busacca, Linda; Steinman, Richard C; Bakken, Suzanne; Bigger, J Thomas; Weng, Chunhua

    2014-12-01

    Underspecified user needs and frequent lack of a gold standard reference are typical barriers to technology evaluation. To address this problem, this paper presents a two-phase evaluation framework involving usability experts (phase 1) and end-users (phase 2). In phase 1, a cross-system functionality alignment between expert-derived user needs and system functions was performed to inform the choice of "the best available" comparison system, to enable a cognitive walkthrough in phase 1 and a comparative effectiveness evaluation in phase 2. During phase 2, five quantitative and qualitative evaluation methods are mixed to assess usability: time-motion analysis, software log, questionnaires (the System Usability Scale and the Unified Theory of Acceptance and Use of Technology), think-aloud protocols, and unstructured interviews. Each method contributes data for a unique measure (e.g., time-motion analysis contributes task completion time; the software log contributes action transition frequency). The measures are triangulated to yield complementary insights regarding user-perceived ease of use, functionality integration, anxiety during use, and workflow impact. To illustrate its use, we applied this framework in a formative evaluation of a software called Integrated Model for Patient Care and Clinical Trials (IMPACT). We conclude that this mixed-methods evaluation framework enables an integrated assessment of user needs satisfaction and user-perceived usefulness and usability of a novel design. This evaluation framework effectively bridges the gap between co-evolving user needs and technology designs during iterative prototyping and is particularly useful when it is difficult for users to articulate their needs for technology support due to the lack of a baseline. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons, which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate-and-fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009), allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
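    The history-dependent model being estimated, a leaky integrate-and-fire neuron with a spike-triggered variable threshold in the spirit of Mihalas and Niebur (2009), can be sketched with a simple Euler simulation. The parameter values are illustrative, and the maximum-likelihood fitting itself is omitted:

```python
def simulate_lif_adaptive(current, dt=1e-3, tau_m=0.02, tau_th=0.05,
                          v_rest=0.0, th_inf=1.0, th_jump=0.5):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron with
    a variable threshold: every spike raises the threshold, which then
    relaxes back toward th_inf, producing history-dependent firing."""
    v, th = v_rest, th_inf
    spikes = []
    for i, inp in enumerate(current):
        v += dt * (-(v - v_rest) / tau_m + inp)    # membrane dynamics
        th += dt * (-(th - th_inf) / tau_th)       # threshold relaxation
        if v >= th:                                # threshold crossing
            spikes.append(i * dt)
            v = v_rest                             # reset potential
            th += th_jump                          # raise threshold
    return spikes
```

    Under a constant suprathreshold input the inter-spike intervals lengthen until the threshold settles into a steady cycle, which is exactly the adaptation-like history dependence that breaks the independence assumption behind the convexity proof.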

  11. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
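    The probability-of-instability computation can be illustrated with a crude Monte Carlo sketch: sample the uncertain system parameters, apply the Routh-Hurwitz condition for a second-order system, and count unstable draws. The paper's fast probability integration and adaptive importance sampling are far more efficient than this; the Gaussian damping model below is a made-up example, not data from the study:

```python
import random

def is_unstable(m, c, k):
    """Routh-Hurwitz test for m*s^2 + c*s + k with m > 0: the system is
    asymptotically stable iff all coefficients are positive."""
    return c <= 0.0 or k <= 0.0

def probability_of_instability(n=100_000, seed=1):
    """Crude Monte Carlo estimate of P(instability) for a single-DOF
    rotor mode whose net damping c is uncertain (hypothetical Gaussian
    model; destabilizing cross-coupling forces can drive c negative)."""
    rng = random.Random(seed)
    unstable = sum(is_unstable(1.0, rng.gauss(0.05, 0.03), 2.0)
                   for _ in range(n))
    return unstable / n
```

    For this model the answer is available in closed form (the probability that the Gaussian damping falls below zero), which makes plain why importance sampling pays off: plain Monte Carlo wastes most samples on the stable region when instability is rare.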

  12. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2016-12-21

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; at the boundaries in particular, the proposed method performs better. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
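    The least-squares step common to this family of methods can be sketched without the spline fitting: build finite-difference equations linking neighbouring heights to the measured slopes and solve them in the least-squares sense. This is a Southwell-style baseline for comparison, not the authors' spline-based method:

```python
import numpy as np

def lsq_integrate(sx, sy, dx=1.0, dy=1.0):
    """Least-squares height reconstruction from x/y slope maps on a grid.
    Each equation ties a pair of neighbouring heights to the trapezoidal
    mean of the measured slopes between them; numpy's lstsq solves the
    stacked system, with the mean height pinned to remove the piston term."""
    rows, cols = sx.shape
    n = rows * cols
    idx = lambda i, j: i * cols + j
    A, b = [], []
    for i in range(rows):
        for j in range(cols - 1):       # horizontal difference equations
            row = np.zeros(n); row[idx(i, j + 1)] = 1; row[idx(i, j)] = -1
            A.append(row); b.append(dx * (sx[i, j] + sx[i, j + 1]) / 2)
    for i in range(rows - 1):
        for j in range(cols):           # vertical difference equations
            row = np.zeros(n); row[idx(i + 1, j)] = 1; row[idx(i, j)] = -1
            A.append(row); b.append(dy * (sy[i, j] + sy[i + 1, j]) / 2)
    A.append(np.ones(n)); b.append(0.0)  # pin mean height to zero
    h = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return h.reshape(rows, cols)
```

    The spline-based method in the paper improves on this baseline by representing the height change between samples with fitted piecewise polynomials instead of the trapezoidal mean, which is where its better boundary behaviour comes from.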

  13. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; at the boundaries in particular, the proposed method performs better. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  14. Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions

    NASA Technical Reports Server (NTRS)

    Pilon, Anthony R.; Lyrintzis, Anastasios S.

    1997-01-01

    The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. 
Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.

  15. Bio-inspired cryo-ink preserves red blood cell phenotype and function during nanoliter vitrification.

    PubMed

    El Assal, Rami; Guven, Sinan; Gurkan, Umut Atakan; Gozen, Irep; Shafiee, Hadi; Dalbeyler, Sedef; Abdalla, Noor; Thomas, Gawain; Fuld, Wendy; Illigens, Ben M W; Estanislau, Jessica; Khoory, Joseph; Kaufman, Richard; Zylberberg, Claudia; Lindeman, Neal; Wen, Qi; Ghiran, Ionita; Demirci, Utkan

    2014-09-03

    Current red-blood-cell cryopreservation methods utilize bulk volumes, causing cryo-injury of cells, which results in irreversible disruption of cell morphology, mechanics, and function. An innovative approach to preserve human red-blood-cell morphology, mechanics, and function following vitrification in nanoliter volumes is developed using a novel cryo-ink integrated with a bioprinting approach. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Accurate computation of gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface-integral representation of the tesseroid's gravitational potential by conditionally splitting its line-integration intervals and applying the double exponential quadrature rule. It then evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the integrated potential. The numerical differentiation switches appropriately between the central and single-sided second-order difference formulas, with a suitable choice of the test argument displacement. If necessary, the method extends to a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or latitude. The new method computes the gravitational field of the tesseroid independently of the location of the evaluation point, whether outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision in double precision arithmetic is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor; the number of correct digits roughly doubles in quadruple precision. The new method provides a reliable procedure for computing the topographic gravitational field, especially near, on, and below the surface. It could also serve as a trustworthy reference to complement and refine existing approaches based on the Gauss-Legendre quadrature or other standard methods of numerical integration.
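    As a concrete illustration of the double exponential (tanh-sinh) rule mentioned above, the following sketch (not Fukushima's implementation; step size, truncation, and the test integral are assumptions) integrates a function with an integrable endpoint singularity, the situation that arises near the tesseroid surface:

```python
import math

def tanh_sinh(f, a, b, h=0.1, tmax=3.0):
    """Double exponential (tanh-sinh) quadrature of f over (a, b).
    Nodes cluster double-exponentially toward the endpoints, so
    integrable endpoint singularities are handled gracefully.
    tmax is kept small enough that no node rounds onto an endpoint."""
    half, mid = 0.5 * (b - a), 0.5 * (b + a)
    n = int(round(tmax / h))
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)                                      # node in (-1, 1)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # DE weight
        total += w * f(mid + half * x)
    return half * h * total

# endpoint singularity: the integral of ln(x) over (0, 1] is exactly -1
val = tanh_sinh(math.log, 0.0, 1.0)
```

    With only 61 evaluations the trapezoidal sum in the transformed variable is accurate to many digits, which is why the DE rule suits near-singular kernels.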

  17. The structural and functional brain networks that support human social networks.

    PubMed

    Noonan, M P; Mars, R B; Sallet, J; Dunbar, R I M; Fellows, L K

    2018-02-20

    Social skills rely on a specific set of cognitive processes, raising the possibility that individual differences in social networks are related to differences in specific structural and functional brain networks. Here, we tested this hypothesis with multimodal neuroimaging. With diffusion MRI (dMRI), we showed that differences in the structural integrity of particular white matter (WM) tracts, including the cingulum bundle, extreme capsule, and arcuate fasciculus, were associated with an individual's social network size (SNS). A voxel-based morphometry analysis demonstrated correlations between gray matter (GM) volume and SNS in limbic and temporal lobe regions. These structural differences co-occurred with functional network differences. As a function of SNS, dorsomedial and dorsolateral prefrontal cortex showed altered resting-state functional connectivity with the default mode network (DMN). Finally, we integrated these three complementary methods, interrogating the relationship between the social GM clusters and specific WM and resting-state networks (RSNs). Probabilistic tractography seeded in these GM nodes utilized the SNS-related WM pathways. Further, the spatial and functional overlap between the social GM clusters and the DMN was significantly closer than for other control RSNs. These integrative analyses provide convergent evidence of the role of specific circuits in SNS, likely supporting the adaptive behavior necessary for success in extensive social environments.

  18. M-Finder: Uncovering functionally associated proteins from interactome data integrated with GO annotations

    PubMed Central

    2013-01-01

    Background Protein-protein interactions (PPIs) play a key role in understanding the mechanisms of cellular processes. The availability of interactome data has catalyzed the development of computational approaches that elucidate the functional behavior of proteins at the system level. Gene Ontology (GO) and its annotations are a significant resource for the functional characterization of proteins. Because of its wide coverage, GO data have often been adopted as a benchmark for protein function prediction on the genomic scale. Results We propose a computational approach, called M-Finder, for functional association pattern mining. The method employs semantic analytics to integrate genome-wide PPIs with GO data. We also introduce an interactive web application that visualizes the functional association network linked to a user-specified protein. The proposed approach comprises two major components. First, PPIs generated by high-throughput methods are weighted in terms of their functional consistency using GO and its annotations. We assess two advanced semantic similarity metrics that quantify the functional association level of each interacting protein pair, and demonstrate that these measures outperform existing methods by evaluating their agreement with other biological features, such as sequence similarity, the presence of common Pfam domains, and core PPIs. Second, an information flow-based algorithm efficiently discovers the set of proteins functionally associated with a query protein, together with their links, reconstructing the functional association network of the query protein. The size of the output network can be controlled through user-specified parameters. Conclusions M-Finder provides a useful framework for investigating the functional association patterns of any protein, and allows users to perform further systematic analysis of a set of proteins for a specific function. It is available online at http://bionet.ecs.baylor.edu/mfinder PMID:24565382
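    The edge-weighting step can be made concrete with a small sketch (not the M-Finder code; the protein names and GO IDs are hypothetical, and the Jaccard measure is a simple stand-in for the paper's semantic similarity metrics) that scores each PPI edge by the overlap of the two proteins' GO annotations:

```python
# Hypothetical GO annotations: protein id -> set of GO term ids.
go = {
    "P1": {"GO:0006915", "GO:0008285", "GO:0042981"},
    "P2": {"GO:0006915", "GO:0042981", "GO:0016049"},
    "P3": {"GO:0007049"},
}

def jaccard_weight(a, b, annotations):
    """Weight a PPI edge by the overlap of the two proteins' GO term sets
    (a stand-in for the semantic similarity metrics assessed in the paper)."""
    ta = annotations.get(a, set())
    tb = annotations.get(b, set())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def weight_network(edges, annotations):
    """Return {edge: functional-consistency weight} for a PPI edge list."""
    return {(a, b): jaccard_weight(a, b, annotations) for a, b in edges}

ppi = [("P1", "P2"), ("P1", "P3")]
weights = weight_network(ppi, go)
# P1-P2 share 2 of 4 distinct terms -> 0.5; P1-P3 share none -> 0.0
```

    The weighted edges would then feed the information-flow step, which favors paths through functionally consistent interactions.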

  19. Accelerating large scale Kohn-Sham density functional theory calculations with semi-local functionals and hybrid functionals

    NASA Astrophysics Data System (ADS)

    Lin, Lin

    The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including the electron density, energy, atomic forces, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors, and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. It has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA, and meta-GGA functionals; the mathematical structure of hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptive compressed exchange method for accelerating hybrid functional calculations. Supported by the DOE SciDAC Program, the DOE CAMERA Program, LBNL LDRD, and a Sloan Fellowship.
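    The idea behind the pole expansion can be sketched in scalar form: the Fermi-Dirac function is written as a sum of simple poles, so that f(H) becomes a sum of resolvents, each of which PEXSI evaluates by selected inversion. The naive Matsubara sum below is only an illustration, not PEXSI's expansion (PEXSI uses far fewer, optimally placed complex poles), and the parameter values are arbitrary:

```python
import math

def fermi(x, beta):
    """Fermi-Dirac occupation f(x) = 1/(1 + exp(beta*x)), overflow-safe."""
    if x > 0:
        e = math.exp(-beta * x)
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(beta * x))

def fermi_pole_sum(x, beta, npoles):
    """Matsubara expansion f(x) = 1/2 - (2/beta) * sum_n x/(x^2 + w_n^2),
    with w_n = (2n+1)*pi/beta.  Each term equals Re[1/(x - i*w_n)]: for a
    matrix argument it becomes a resolvent (H - z*S)^(-1), which PEXSI
    evaluates by selected inversion.  This naive sum converges only like
    1/npoles, which is exactly why PEXSI optimizes the pole placement."""
    s = 0.0
    for n in range(npoles):
        w = (2 * n + 1) * math.pi / beta
        s += x / (x * x + w * w)
    return 0.5 - (2.0 / beta) * s

beta = 10.0
errs = [abs(fermi_pole_sum(x, beta, 20000) - fermi(x, beta))
        for x in (-2.0, -0.5, 0.0, 0.5, 2.0)]
```

    Since each pole contributes one (parallelizable) selected inversion, the number of poles directly controls the prefactor of the method's cost.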

  20. Course 4: Density Functional Theory, Methods, Techniques, and Applications

    NASA Astrophysics Data System (ADS)

    Chrétien, S.; Salahub, D. R.

    Contents

    1  Introduction
    2  Density functional theory
       2.1  Hohenberg and Kohn theorems
       2.2  Levy's constrained search
       2.3  Kohn-Sham method
    3  Density matrices and pair correlation functions
    4  Adiabatic connection or coupling strength integration
    5  Comparing and contrasting KS-DFT and HF-CI
    6  Preparing new functionals
    7  Approximate exchange and correlation functionals
       7.1  The Local Spin Density Approximation (LSDA)
       7.2  Gradient Expansion Approximation (GEA)
       7.3  Generalized Gradient Approximation (GGA)
       7.4  meta-Generalized Gradient Approximation (meta-GGA)
       7.5  Hybrid functionals
       7.6  The Optimized Effective Potential method (OEP)
       7.7  Comparison between various approximate functionals
    8  LAP correlation functional
    9  Solving the Kohn-Sham equations
       9.1  The Kohn-Sham orbitals
       9.2  Coulomb potential
       9.3  Exchange-correlation potential
       9.4  Core potential
       9.5  Other choices and sources of error
       9.6  Functionality
    10 Applications
       10.1 Ab initio molecular dynamics for an alanine dipeptide model
       10.2 Transition metal clusters: The ecstasy, and the agony...
       10.3 The conversion of acetylene to benzene on Fe clusters
    11 Conclusions
